Ph.D. Thesis


Ph.D. Student: Simon Dror
Subject: Generative Models: Affecting Current Practice with Traditional Methods
Department: Department of Computer Science
Supervisor: Prof. Michael Elad
Full Thesis Text: English Version


Abstract

Inverse problems in the field of signal processing refer to the estimation of a signal from corrupted or partial measurements of it. In this thesis, we focus on solving such problems using both the traditional sparse representation model and the more recent deep neural network (NN) approach. Importantly, we show how one can utilize the mathematically well-understood results of the former to improve the common practice of the latter, leading to novel architectures and optimization techniques.


We start with the sparse representation model, which builds on the assumption that a signal can be decomposed into a linear combination of a small number of basic atoms, taken from a typically overcomplete dictionary. Here, solving an inverse problem amounts to pursuing the sparsest representation of the signal under the given dictionary. From a Bayesian standpoint, this procedure provides a maximum a posteriori (MAP) estimate under the sparse prior. We improve the performance of these techniques by approximating the (computationally infeasible) minimum mean squared error (MMSE) estimator.
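To make the distinction between the two estimators concrete, here is a minimal sketch (an illustration, not the thesis's algorithm): Orthogonal Matching Pursuit (OMP) acts as a MAP-style pursuit, while averaging the reconstructions of a randomized pursuit, in which atoms are drawn stochastically according to their correlation with the residual, serves as a crude approximation of the MMSE estimator. The dictionary D, signal y, sparsity level k, and the sampling temperature are assumed inputs.

```python
# Minimal sketch: deterministic OMP (MAP-style) vs. an averaged randomized
# pursuit (MMSE-style). Illustrative only; not the algorithm from the thesis.
import numpy as np

def omp(D, y, k):
    """Greedy MAP-style pursuit: repeatedly pick the atom most correlated
    with the current residual, then refit the coefficients."""
    support, residual = [], y.copy()
    for _ in range(k):
        correlations = np.abs(D.T @ residual)
        correlations[support] = 0.0                    # skip chosen atoms
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

def randomized_pursuit_mmse(D, y, k, n_runs=50, temperature=1.0, seed=None):
    """Approximate the MMSE estimate by averaging many stochastic pursuits,
    where atoms are sampled with probability ~ exp(|correlation| / T)."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_runs):
        support, residual = [], y.copy()
        for _ in range(k):
            scores = np.abs(D.T @ residual)
            scores[support] = -np.inf                  # exclude chosen atoms
            probs = np.exp((scores - scores.max()) / temperature)
            probs /= probs.sum()
            support.append(int(rng.choice(D.shape[1], p=probs)))
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        estimates.append(D[:, support] @ coeffs)       # one plausible signal
    return np.mean(estimates, axis=0)                  # average them all
```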


We then move on to high-dimensional signals. Here, global dictionaries are unattainable due to the curse of dimensionality. This obstacle has been overcome through the use of the Convolutional Sparse Coding (CSC) model, among other methods. This model assumes that every local neighborhood of the signal is composed of a linear combination of a small number of shift-invariant filters. In our work, we establish a connection between this model and traditional patch-based techniques. Specifically, we show that the pursuit algorithms of the former lead to a MAP estimator, while the latter provide an approximation of the MMSE estimator of the global model. We then utilize these findings in the design of a novel neural network architecture, which achieves state-of-the-art performance in image denoising while using only a fraction of the parameters of other deep-learning-based models.
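The patch-averaging mechanism behind this MMSE approximation can be illustrated with a small sketch: every overlapping patch is denoised independently (here by hard thresholding in an orthonormal DCT basis, a stand-in for a local sparse pursuit), and the overlapping local estimates are averaged into a global one. The patch size and threshold below are assumed hyperparameters; this is not the thesis's architecture.

```python
# Minimal sketch of patch-based denoising with averaging of overlaps.
# Each local (MAP-like) estimate is crude; their average approximates a
# global MMSE-like estimate. Illustrative only.
import numpy as np

def dct_matrix(p):
    """Orthonormal DCT-II basis for patches of length p (columns = atoms)."""
    n = np.arange(p)
    C = np.cos(np.pi / p * (n[:, None] + 0.5) * n[None, :])
    C[:, 0] /= np.sqrt(2.0)
    return C * np.sqrt(2.0 / p)

def denoise_patch_averaging(y, patch_size=8, threshold=0.5):
    """Denoise a 1-D signal (len(y) >= patch_size): slide a window, sparsify
    each patch in the DCT domain, and average the overlapping results."""
    y = np.asarray(y, dtype=float)
    C = dct_matrix(patch_size)
    estimate = np.zeros_like(y)
    counts = np.zeros_like(y)
    for i in range(len(y) - patch_size + 1):
        patch = y[i:i + patch_size]
        coeffs = C.T @ patch                           # analysis step
        coeffs[np.abs(coeffs) < threshold] = 0.0       # local sparse estimate
        estimate[i:i + patch_size] += C @ coeffs       # synthesis step
        counts[i:i + patch_size] += 1.0
    return estimate / counts                           # average the overlaps
```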


We then move on to deep generative models. These models employ a NN that maps low-dimensional latent vectors to high-dimensional signals. In our work, we study the inverse process: given an input signal, we seek its latent representation vector under a given model. First, we show that this technique is of interest when solving various inverse problems, as well as in other tasks, such as synthesizing a morphing process between two (or more) input images. Then, we study the generative model inversion task and, relying on sparse representation theory, we provide theoretical conditions as well as provable algorithms for recovering the latent vector. Importantly, we show that these methods outperform the common gradient descent approach for inverting deep generative models.
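For reference, the gradient descent baseline mentioned above can be sketched as follows: starting from a random latent vector z, one directly minimizes the reconstruction error ||G(z) - x||^2. The generator, its latent dimension, and the optimization hyperparameters below are assumptions; the thesis's provable pursuit-based algorithms are not reproduced here.

```python
# Minimal sketch of the common gradient-descent baseline for inverting a
# generative model: search latent space for z such that generator(z) ~ x.
# Illustrative only; `generator` is an assumed pretrained network whose
# output shape matches the target signal x.
import torch

def invert_generator(generator, x, latent_dim, n_steps=1000, lr=0.05):
    """Recover a latent vector z whose generated signal approximates x."""
    z = torch.randn(1, latent_dim, requires_grad=True)   # random init
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        loss = torch.mean((generator(z) - x) ** 2)       # reconstruction error
        loss.backward()                                  # gradient w.r.t. z only
        optimizer.step()
    return z.detach()
```

Because this objective is highly non-convex in z, gradient descent may stall in poor local minima, which is the kind of failure the pursuit-based algorithms above are designed to avoid.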