M.Sc. Student: Livny Gal
Subject: Finite Length Analysis of Lossy Compression via Sparse Regression Codes
Department: Department of Electrical Engineering
Supervisor: Professor Igal Sason
One of the main goals of information theory has been to design practical codes for reliable transmission and lossy compression that approach the fundamental information-theoretic limits with feasible complexity. Recently, a new class of codes has been suggested for reliable communication over memoryless channels, and for lossy compression of memoryless and stationary sources with continuous alphabets. These codes, named sparse regression codes (SPARCs) or sparse superposition codes, rely on a coding technique in which each codeword is a linear combination of columns of the design matrix of the code. SPARCs were originally developed for communication over the additive white Gaussian noise (AWGN) channel, and they were proved to asymptotically achieve the channel capacity. Subsequently, SPARCs were adapted for lossy compression, and it was shown that they asymptotically attain the rate-distortion function of a Gaussian memoryless source with a computational complexity that grows polynomially in the blocklength of the source output.
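To make the codeword structure concrete, the following is a minimal illustrative sketch of how a SPARC codeword is formed as a linear combination of design-matrix columns. The parameters n, M, L and the unit nonzero coefficients are hypothetical example choices for illustration, not the construction analyzed in the thesis.

```python
import numpy as np

# Hypothetical example parameters (not from the thesis):
# n = blocklength, M = columns per section, L = number of sections.
n, M, L = 64, 8, 4
rng = np.random.default_rng(0)

# Design matrix with i.i.d. Gaussian entries, variance 1/n.
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, M * L))

# A message selects one column index per section; beta then has exactly
# one nonzero entry in each of the L sections of length M.
indices = [3, 0, 7, 5]
beta = np.zeros(M * L)
for l, j in enumerate(indices):
    beta[l * M + j] = 1.0

# The codeword is the linear combination of the L selected columns.
codeword = A @ beta

# Each section conveys log2(M) bits over n symbols.
rate = L * np.log2(M) / n
```

Since there are M choices in each of the L sections, the codebook contains M^L codewords, which is why the rate equals L log2(M) / n.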
The main focus of this thesis is the examination of the performance of SPARCs for lossy compression of memoryless sources. This is accomplished by tightening the existing asymptotic bounds on the probability of excess distortion, and by adapting these bounds to finite blocklengths. Furthermore, an improvement of the encoding scheme for lossy compression with SPARCs is proposed, analyzed, and examined via computer simulations. We also discuss the tradeoff between the performance and the complexity of SPARCs in the context of lossy compression.