|Ph.D. Student||Peleg Tomer|
|Subject||Extending Sparsity-Based Models for Signal and Image Processing|
|Department||Department of Electrical Engineering|
|Supervisor||Professor Michael Elad|
|Full Thesis text|
Signal modeling based on sparse representations has become very popular in the past decade for handling numerous signal and image processing applications. However, despite the great success in understanding and utilizing this model, there are still fundamental questions that have not been fully answered to date: (i) Aside from sparsity over a pre-specified dictionary, what other useful assumptions can be incorporated into this model? (ii) How can we extend this model to capture the inherent relations between sets of related signals? (iii) Are there other ways to practice sparsity? These questions have attracted considerable attention in recent years, and addressing them is the main focus of this PhD thesis. Our goal is to extend and improve the core sparsity-based model, either through a richer statistical modeling of sparse representations or by adopting the analysis viewpoint to sparse representations.
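To make the baseline concrete: in the classical synthesis sparsity model, a signal y is approximated as D x with a sparse coefficient vector x over a dictionary D, typically recovered by a greedy pursuit. The following is a minimal illustrative sketch (not taken from the thesis) of Orthogonal Matching Pursuit on a toy random dictionary; all sizes and names here are assumptions for demonstration only.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit sketch: approximate y ~ D @ x with at most k nonzeros."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares refit of the coefficients on the current support
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# toy setup (illustrative only): a 2x-overcomplete random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.5, -2.0, 0.7]    # a 3-sparse ground-truth code
y = D @ x_true
x_hat = omp(D, y, k=3)
```

The greedy atom selection plus least-squares refit is the standard OMP recipe; richer models in the thesis replace the plain "at most k nonzeros" prior with structured alternatives.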
In the first part of the thesis we introduce an extension to the sparsity-based model that takes into account statistical dependencies between various components of the model. These include intra-dependencies within a single sparsity pattern (through a Boltzmann Machine) and interdependencies between a pair of related sparse representations (through a Restricted Boltzmann Machine). This extension offers improved flexibility compared to previous structured sparsity models, and can be better adapted to the data at hand. We show that the suggested approach serves natural image patches better than previous approaches. We also utilize this statistical modeling to develop an efficient scheme for the task of single-image super-resolution, where the goal is to increase the number of pixels in a given image while minimizing visual artifacts.
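The Boltzmann Machine prior mentioned above places a distribution over binary support patterns s in {-1,+1}^m of the form Pr(s) proportional to exp(b.s + s'Ws/2), where b controls overall sparsity and W encodes pairwise dependencies between atoms. A minimal sketch of this prior, with Gibbs sampling of patterns (all parameter values are illustrative assumptions, not the thesis's learned model):

```python
import numpy as np

def bm_log_prob_unnorm(s, b, W):
    """Unnormalized log-probability of a support pattern s in {-1,+1}^m."""
    return b @ s + 0.5 * s @ W @ s

def gibbs_sweep(s, b, W, rng):
    """One Gibbs sweep: resample each entry of s given all the others."""
    for i in range(len(s)):
        # local field on s_i (W assumed symmetric with zero diagonal)
        field = b[i] + W[i] @ s - W[i, i] * s[i]
        # P(s_i = +1 | rest) is a sigmoid of twice the local field
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        s[i] = 1 if rng.random() < p_plus else -1
    return s

# illustrative parameters: negative bias favors mostly-off (sparse) patterns,
# mild positive coupling makes active atoms tend to co-occur
rng = np.random.default_rng(1)
m = 6
b = -0.5 * np.ones(m)
W = 0.3 * (np.ones((m, m)) - np.eye(m))
s = rng.choice([-1.0, 1.0], size=m)
for _ in range(10):
    s = gibbs_sweep(s, b, W, rng)
```

Setting W to zero recovers an independent-entries prior, so the coupling matrix is exactly what lets this model capture intra-pattern dependencies.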
In the second part of the thesis we explore the co-sparse analysis model, an alternative and less-explored viewpoint on sparse representations. The analysis model holds the promise of improved representation power compared to the classical synthesis-based sparsity model. To bring the co-sparse analysis model to its full potential, the dictionary associated with it should be best suited to the data it serves. Therefore, the main focus of our study is the analysis dictionary: its properties and ways to learn it. We reveal two central properties of this dictionary that govern the corresponding pursuit recovery performance. Furthermore, we propose an extension of the classical K-SVD algorithm to the analysis case, learning the analysis dictionary from a set of signal examples.
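To illustrate the analysis viewpoint: instead of building x from a few active atoms, the co-sparse model characterizes x by the rows of an analysis operator Omega that it annihilates (its co-support). A short sketch, using a 1-D finite-difference operator as the analysis dictionary purely as an example (piecewise-constant signals are highly co-sparse under it):

```python
import numpy as np

def cosupport(Omega, x, tol=1e-8):
    """Indices of the rows of Omega orthogonal to x, i.e. the zeros of Omega @ x."""
    return np.flatnonzero(np.abs(Omega @ x) < tol)

# illustrative analysis operator: row i computes the difference x[i+1] - x[i]
n = 8
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

# a piecewise-constant signal: only its 2 jumps produce nonzero differences
x = np.array([2.0, 2.0, 2.0, 5.0, 5.0, 5.0, 5.0, 1.0])
cosup = cosupport(Omega, x)   # 5 of the 7 rows annihilate x
```

The co-sparsity (the size of this zero set) plays the role that the nonzero count plays in the synthesis model, and the learning task is to find an Omega whose rows annihilate the training signals as much as possible.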