|Ph.D. Student||Sulam Jeremias|
|Subject||From Local to Global Sparse Modeling|
|Department||Department of Computer Science||Supervisor||Prof. Michael Elad|
|Full Thesis text|
Signal models have always been central to the development of new and better algorithms. This thesis is concerned with sparse representation modeling, which builds on the observation that natural signals can be well approximated by a few elements from a vast collection of atoms, commonly termed a dictionary. Over the last decade, numerous works have studied the problem of retrieving the sparse set of atoms that best represents a given measured signal, and have proposed ways of adapting and training this model from real-world data. The latter task, known as the dictionary learning problem, has enabled sparsity-enforcing methods to achieve remarkable results in many fields, from signal and image processing and restoration to higher-level tasks such as detection, classification, and several other machine learning applications.
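The sparse coding task described above, i.e., finding the few atoms that best represent a signal, can be illustrated with a greedy pursuit. The following is a minimal sketch of Orthogonal Matching Pursuit on a synthetic dictionary; the dimensions, the random dictionary, and the seed are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate y with k atoms of D.
    D: (n, m) dictionary with unit-norm columns; y: (n,) signal; k: sparsity level."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # re-fit the coefficients on the chosen support by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# illustrative setup: a random Gaussian dictionary and a 3-sparse signal
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit norm
x_true = np.zeros(128)
x_true[[5, 40, 100]] = [1.5, -2.0, 0.8]
y = D @ x_true
x_hat = omp(D, y, k=3)
```

For dictionaries incoherent enough relative to the sparsity level, such greedy pursuits provably recover the true support; this is the kind of guarantee the thesis later extends to the convolutional setting.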
This sparse model, while greatly successful, has typically been applied to small, local signal patches due to computational constraints. More precisely, efficient algorithms were suggested for solving global problems by addressing a collection of relatively independent local subproblems. This paradigm, however, results in a series of inconsistencies, which we loosely refer to as the local-global gap, with both practical and theoretical implications. In this thesis, we will first propose different and complementary strategies that significantly alleviate several of these issues by deploying multi-scale analysis tools and global regularization techniques, such as the expected patch log-likelihood and Laplacian regularization. We will then circumvent these inconsistencies altogether by tackling the problem of learning a sparse representation model for high-dimensional signals. Building on the double-sparsity model and a cropped wavelets dictionary, this will lead us to propose a new dictionary learning algorithm that yields large trainable atoms, dubbed Trainlets.
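The patch-based paradigm behind the local-global gap can be sketched as follows: a global image is broken into overlapping patches, each patch is processed independently (here left untouched for simplicity), and the results are averaged back into a global estimate. The image size and patch size below are illustrative assumptions.

```python
import numpy as np

def extract_patches(img, p):
    """Extract all overlapping p x p patches of img as flattened rows."""
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def average_patches(patches, shape, p):
    """Place (possibly modified) patches back and average the overlaps."""
    H, W = shape
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += patches[k].reshape(p, p)
            cnt[i:i + p, j:j + p] += 1
            k += 1
    return acc / cnt

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
patches = extract_patches(img, 3)                 # each patch would be sparse-coded independently
rec = average_patches(patches, img.shape, 3)      # averaging unmodified patches reproduces img
```

The inconsistency arises when each patch is sparse-coded independently: overlapping patches are forced to disagree on their shared pixels, and the simple averaging above is what the global regularization strategies in this thesis aim to improve upon.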
In the second part of this thesis, we will consider the Convolutional Sparse Coding (CSC) model, which will be shown to be a surprising answer to the local-global gap. We will extend much of the classical sparse representation theory to the convolutional case, providing uniqueness, stability and recovery guarantees based on a new local measure of sparsity. This analysis provides a theoretical justification for abundant recent work, and will guide the development of new pursuit and dictionary learning algorithms. In the last part, we will analyze the Multi-Layer Convolutional Sparse Coding model, which proposes a global construction composed of a cascade of convolutional layers. Adopting a projection approach, we will propose a sound pursuit algorithm for signals satisfying these model assumptions, providing new and improved bounds on the stability of its solution and analyzing different algorithmic alternatives. A dictionary learning algorithm will be naturally derived from our study, enabling the nested convolutional filters to be trained from real data and applied in several applications.
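The CSC synthesis model describes a global signal as a sum of short filters convolved with sparse feature maps, so that sparsity is measured locally rather than over the whole representation. Below is a minimal sketch of synthesizing such a signal with circular convolutions via the FFT; the signal length, number of filters, filter length, and sparsity levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 3, 5                 # signal length, number of filters, nonzeros per map
flen = 7                            # length of each local filter
filters = rng.standard_normal((m, flen))

# sparse feature maps: each map has only k active locations
maps = np.zeros((m, n))
for f in range(m):
    maps[f, rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# global signal = sum over filters of (filter circularly convolved with its map)
y = sum(np.real(np.fft.ifft(np.fft.fft(filters[f], n) * np.fft.fft(maps[f], n)))
        for f in range(m))
```

Although the equivalent global dictionary here has m * n = 300 atoms, any stretch of the signal of length flen is affected by only a handful of nonzeros; this locally bounded activity is the notion the thesis formalizes as a local measure of sparsity. A multi-layer (ML-CSC) construction would, in the same spirit, require the feature maps themselves to admit a convolutional sparse representation over a second set of filters.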
This thesis thus offers a tour of different and complementary alternatives that seek to better serve global problems while leveraging the powerful locally sparse modeling framework. The outcomes of this work are several new algorithms, practical solutions, novel models and theoretical results that, I hope and believe, will empower the next generation of signal modeling.