Ph.D. Student: Yael Yankelevsky
Subject: Sparsity-Based Processing of Graph Structured Data
Department: Computer Science
Supervisor: Professor Michael Elad
The era of big data introduces new challenges to classic signal processing. In numerous problems, the signals to be handled have a complex underlying geometry which is no longer Euclidean and could instead be represented by a graph. Examples of such signals can be found in applications of transportation, energy, social networks, sensor networks, and more.
A popular and highly effective approach taken for solving common signal processing problems is sparse representation of the signals over a trained dictionary. Equipped with a vast collection of numerical algorithms as well as an established theoretical foundation, this approach has proven successful in capturing the prominent characteristics of the given data. However, its application has been mainly restricted to images or uniformly sampled signals. For data residing on non-Euclidean topologies, represented by weighted graphs, an additional challenge is inferring the underlying geometric structure of the data and incorporating it into the learning process.
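To make the sparse representation idea concrete, the following minimal sketch implements Orthogonal Matching Pursuit (OMP), a standard greedy sparse coding algorithm; the function name, toy dictionary, and sparsity level are illustrative choices, not details taken from the thesis:

```python
import numpy as np

def omp(D, y, k):
    """Greedy Orthogonal Matching Pursuit: approximate y ~ D @ x with at most k nonzeros."""
    n_atoms = D.shape[1]
    support = []
    residual = y.astype(float).copy()
    x = np.zeros(n_atoms)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit of y on the atoms selected so far.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Illustrative use with a random normalized dictionary:
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
y = 2 * D[:, 3] - D[:, 17]
x_hat = omp(D, y, k=2)
```

Dictionary learning alternates such a sparse coding step with an update of the dictionary atoms themselves.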
In this thesis, we study the extension of sparsity-based modeling to graph signals, covering both practical and theoretical aspects.
Aiming to infer and preserve the local intrinsic geometry of the data, we capture this geometry using two graphs: the topological graph, accounting for the internal structure common to the individual measurements, and the manifold graph, accounting for the relations between different measurements in the given set.
Inspired by ideas from spectral graph theory, manifold learning and sparse representations, we introduce a new dictionary learning approach that preserves the data geometry, as reflected by the aforementioned graphs.
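The geometry-preserving idea can be illustrated with the standard graph-smoothness regularizer tr(X L X^T), which penalizes representations that differ sharply across strongly connected graph nodes. The helper names below are hypothetical, and this term is only one ingredient of a full graph-regularized dictionary learning objective:

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = Deg - W from a symmetric adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def manifold_smoothness(X, W):
    """Smoothness of the columns of X over the graph W:
    tr(X L X^T) = 0.5 * sum_ij W_ij * ||x_i - x_j||^2,
    where x_i is the column of X attached to graph node i.
    Small values mean strongly connected nodes get similar representations."""
    return float(np.trace(X @ laplacian(W) @ X.T))
```

Adding such a penalty to the sparse coding objective couples the representations of neighboring nodes, which is one standard way to encode graph structure in the learning process.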
Furthermore, we show that in cases where the underlying graph is unknown, we can infer it from the data as an integral part of the dictionary learning process.
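When the graph is not given, a common baseline is to build it from pairwise data affinities. The Gaussian-kernel construction below is such a heuristic; the thesis learns the graph jointly with the dictionary, so this sketch is only a rough stand-in with a hypothetical function name:

```python
import numpy as np

def gaussian_kernel_graph(Y, sigma=1.0, threshold=1e-3):
    """Weighted adjacency matrix from the data columns of Y:
    W_ij = exp(-||y_i - y_j||^2 / (2 sigma^2)), with the diagonal zeroed
    and very weak edges pruned for sparsity."""
    sq = np.sum(Y**2, axis=0)
    # Squared pairwise distances between columns, clipped at 0 for numerical safety.
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * Y.T @ Y, 0)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0)
    W[W < threshold] = 0
    return W
```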
The effectiveness of this approach is demonstrated on a variety of applications, including sensor network data completion and enhancement, image denoising, and the inference of correlation patterns in image patches.
Building on similar concepts, we then devise a supervised dictionary learning algorithm for graph signals that facilitates classification tasks. Specifically, we target the challenging scenario of multi-label classification and propose accounting for the correlations between different classes as part of the model geometry, enabling a significant improvement in classification accuracy compared with other dictionary-based methods.
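One simple way to quantify class correlations from training labels is the cosine similarity between class indicator vectors. The sketch below, with a hypothetical helper name, shows such a construction as an illustration only, not the thesis's specific model:

```python
import numpy as np

def label_correlation_graph(Y):
    """Class-correlation adjacency from a binary label matrix Y
    (rows = classes, columns = training samples): cosine similarity
    between the class indicator rows, with the diagonal zeroed."""
    norms = np.linalg.norm(Y, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero for empty classes
    Yn = Y / norms
    W = Yn @ Yn.T
    np.fill_diagonal(W, 0)
    return W
```

Classes that frequently co-occur on the same samples get strong edges, so a smoothness penalty over this graph encourages correlated classes to share discriminative structure.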
In the third part of the thesis, we present a recent extension of the above algorithms, relying on a novel graph-wavelet construction tailored to the given data structure. By integrating a structure-aware multi-scale transform that additionally adapts to the data itself, the resulting algorithm accommodates a wider family of graph signals and can cope with data of much higher dimension.
Finally, we revisit the manifold-regularized sparse coding problem and derive the complementary theory. We formulate worst-case guarantees for obtaining a stable solution, and analyze the conditions under which this new model is indeed advantageous over the classic sparse representation model.
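For context, a classical worst-case guarantee of this kind from sparse representation theory is stated in terms of the mutual coherence of the dictionary; the thesis derives analogous conditions for the manifold-regularized model:

```latex
% Mutual coherence of a dictionary D with unit-norm columns d_i:
\mu(D) = \max_{i \neq j} \left| d_i^{\top} d_j \right|

% Classical uniqueness guarantee: if a representation x of y = Dx satisfies
\|x\|_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)
% then x is the unique sparsest solution, and standard pursuit
% algorithms provably recover it.
```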
Altogether, we contribute a theoretically established sparse representation model for graph structured data and a set of sparse coding and dictionary learning algorithms derived from it, all demonstrating improved performance compared with traditional, structure-agnostic dictionaries.