|M.Sc. Student||Mulayoff Rotem|
|Subject||Universal Sparse Representation|
|Department||Department of Electrical and Computer Engineering||Supervisor||Associate Prof. Tomer Michaeli|
|Full Thesis text|
Sparse representation over redundant dictionaries constitutes a good model for many classes of signals (e.g., patches of natural images, segments of speech signals). However, despite its popularity, very little is known about the representation capacity of this model. In this thesis, we study how redundant a dictionary must be in order for any vector to admit a sparse approximation with a prescribed sparsity level and a prescribed accuracy. We address this problem both in a worst-case setting and in a typical-case setting. For each scenario we derive lower and upper bounds on the minimal required overcompleteness. Our bounds have simple closed-form expressions, from which the asymptotic behavior in high dimensions is easily deduced. In particular, we find that the required overcompleteness grows exponentially with the sparsity level and polynomially with the allowed representation error. This implies that universal sparse representation is practical only at moderate sparsity levels, but can be achieved at relatively high accuracy. Additionally, by considering a specific type of dictionary, we show that universal sparse representation can be attained with low computational complexity. Finally, we suggest an algorithm for constructing universal sparse representation dictionaries, based on minimizing the dictionary’s coherence. We demonstrate that our approach has significant advantages over existing methods.
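The coherence mentioned above is the standard mutual coherence of a dictionary: the largest absolute inner product between any two distinct, unit-normalized columns. The thesis's actual construction algorithm is not reproduced here, but the quantity it minimizes can be computed as in the following minimal NumPy sketch (the function name and the random example dictionary are illustrative, not from the thesis):

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence of dictionary D: max |<d_i, d_j>| over
    distinct unit-normalized columns d_i, d_j."""
    Dn = D / np.linalg.norm(D, axis=0)  # normalize each column (atom)
    G = np.abs(Dn.T @ Dn)               # absolute Gram matrix
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return G.max()

# Example: a 2x-overcomplete random Gaussian dictionary (16 x 32)
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
mu = mutual_coherence(D)
```

A lower coherence means the atoms are more spread out on the unit sphere, which is what makes a dictionary behave well for sparse approximation of arbitrary vectors.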