Ph.D. Thesis

Ph.D. Student: Turek Javier S.
Subject: Topics in Sparse Representation Modeling and
Department: Department of Computer Science
Supervisors: Prof. Irad Yavneh, Prof. Michael Elad
Full thesis text: English Version


Sparse modeling is a powerful method for representing the information contained in signals. The notion of sparsity has proven very useful in describing natural signals of inherently low dimension, leading to efficient sparsity-based models. One example is the synthesis model, which describes a signal as a linear combination of a few basic elements (atoms) from a dictionary of elementary components. The analysis model, in contrast, characterizes a signal through its inner products with the dictionary atoms. When moving from signals to matrices and operators, sparsity arises in other settings as well, for example in modeling the correlations among data variables. When possible, exploiting this sparsity brings advantages such as reduced memory and improved runtime, since sparse matrices are cheaper to store and matrix-vector multiplications are cheaper to apply. This thesis studies sparse modeling of signals and matrices, their deployment in solving estimation problems, and the extension of their applicability to real-life solutions.
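The two viewpoints above can be sketched in a few lines of NumPy. The small orthonormal dictionary and the choice of active atoms below are illustrative, not taken from the thesis:

```python
import numpy as np

# Contrast of the synthesis and analysis models on a toy unitary dictionary.
rng = np.random.default_rng(0)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal dictionary (atoms as columns)

# Synthesis model: the signal is a linear combination of a few atoms.
alpha = np.zeros(n)
alpha[[1, 5]] = [2.0, -1.5]   # only two active atoms
x = Q @ alpha                 # synthesized signal

# Analysis model: examine the signal's inner products with the atoms.
beta = Q.T @ x                # analysis coefficients

# For a unitary dictionary the two viewpoints coincide:
assert np.allclose(beta, alpha)
print(np.count_nonzero(np.round(beta, 12)))  # -> 2
```

For general (overcomplete, non-unitary) dictionaries the two models diverge, which is part of what motivates studying them separately.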

We begin with a study of Bayesian estimators for the synthesis and analysis models and of the relations between their probabilistic and deterministic formulations. Assuming a particular family of unitary dictionaries, we show that signals can be reconstructed using estimators with closed-form solutions for both models. Our work also evaluates the worst-case gain factors of these estimators. We further explore Bayesian estimators for the analysis model in the general case and provide new methods for reconstructing signals within that model.
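As a hedged illustration of such a closed-form estimator: for a unitary dictionary and an i.i.d. Laplacian prior on the coefficients, the synthesis MAP estimate reduces to soft-thresholding the analysis coefficients. The dictionary, noise level, and threshold below are toy assumptions, not values from the thesis:

```python
import numpy as np

def soft_threshold(v, T):
    """Element-wise soft shrinkage: sign(v) * max(|v| - T, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - T, 0.0)

rng = np.random.default_rng(1)
n = 16
D, _ = np.linalg.qr(rng.standard_normal((n, n)))    # unitary dictionary

alpha = np.zeros(n)
alpha[:3] = [3.0, -2.5, 2.0]                        # sparse ground-truth coefficients
y = D @ alpha + 0.1 * rng.standard_normal(n)        # noisy observation

# Closed-form MAP estimate under the unitary/Laplacian assumptions:
# shrink in the transform domain, then synthesize.
alpha_hat = soft_threshold(D.T @ y, T=0.5)
x_hat = D @ alpha_hat
```

The estimator recovers the support of the three active atoms; the shrinkage bias on the surviving coefficients is exactly the kind of gain-factor behavior the worst-case analysis quantifies.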

In the second part of this thesis, we apply the synthesis model to a source separation problem: the mitigation of clutter artifacts in cardiac ultrasound imaging. The goal is to decompose the recorded signal into tissue and clutter components, enabling the removal of the clutter part from the original signal. Since the tissue component varies from patient to patient, the dictionary should be specific and adapted to the data. We develop three strategies for training the separation dictionary: from the same patient, off-line from other patients, and a merger of both techniques for best performance. We then extend these methods to a novel application of simultaneous clutter removal and speckle noise reduction, in which a joint sparsity scheme fuses the first and second harmonics.
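The separation idea can be sketched, in the spirit of morphological component analysis rather than with the trained dictionaries of the thesis, using two toy dictionaries: low-frequency cosines for a smooth "tissue-like" part and the identity for sparse "clutter-like" spikes. All sizes and thresholds are illustrative:

```python
import numpy as np

n = 64
k = np.arange(n)
# Orthonormal DCT-II basis (rows are atoms).
C = np.cos(np.pi * (k[None, :] + 0.5) * k[:, None] / n)
C[0] *= np.sqrt(1.0 / n)
C[1:] *= np.sqrt(2.0 / n)

smooth = 2.0 * C[2]                       # smooth component: one low-frequency atom
spikes = np.zeros(n)
spikes[[10, 40]] = [1.5, -1.0]            # sparse "clutter" spikes
y = smooth + spikes                       # observed mixture

def hard_threshold(v, T):
    return np.where(np.abs(v) > T, v, 0.0)

# Alternate: explain y with each dictionary while the other part is held fixed.
s_hat = np.zeros(n)                       # spike estimate
for _ in range(10):
    a = hard_threshold(C @ (y - s_hat), 0.5)   # sparse cosine coefficients
    smooth_hat = C.T @ a                       # smooth reconstruction
    s_hat = hard_threshold(y - smooth_hat, 0.5)
```

Each component is well explained by only one of the two dictionaries, so alternating sparse coding pulls the mixture apart; the clutter estimate `s_hat` can then be subtracted from the original signal.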

In the last part of this work, we address the problem of estimating a sparse inverse covariance matrix, as formulated by Banerjee et al. Most existing methods for this problem have severe memory limitations that restrict their applicability to problems of low and medium dimension. We employ a block-coordinate descent approach that breaks the problem into small blocks and updates the solution one block at a time. For each block we compute a Newton direction and apply a line-search procedure. The proposed method can compute the sparse inverse covariance matrix for millions of variables and converges in a few iterations, achieving state-of-the-art runtime and memory performance. In addition, we develop a multilevel framework that accelerates existing solvers for this problem, further reducing the runtime of our method.
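The block-coordinate Newton solver itself is beyond a short sketch, but the objective it minimizes, -log det(X) + trace(SX) + lam * ||X||_1, can be illustrated with a plain proximal-gradient (ISTA-style) iteration on a tiny instance. This is not the thesis's algorithm, and the data, step size, and lam below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
p, m = 5, 2000
A = rng.standard_normal((m, p))
S = (A.T @ A) / m                  # empirical covariance of near-independent variables
lam, step = 0.1, 0.1

X = np.eye(p)                      # start from a positive definite iterate
for _ in range(200):
    G = S - np.linalg.inv(X)       # gradient of the smooth part of the objective
    V = X - step * G               # gradient step
    # Proximal step: element-wise soft threshold enforces sparsity in X.
    X = np.sign(V) * np.maximum(np.abs(V) - step * lam, 0.0)
```

Even this naive iteration exposes the key cost drivers: each step needs the inverse of the current iterate, and the iterate must stay positive definite, which is what the block decomposition, Newton directions, and line search of the proposed method handle at scale.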