M.Sc Student: Abassi Ahmad Zaid
Subject: Lower-Dimensional Representation: Compression, Faithfulness
Department: Department of Electrical and Computer Engineering
Supervisor: Prof. Ron Meir
In this day and age, when massive amounts of data are available to computers and humans alike, it is of prime importance to allow tractable manipulation of, and inference from, acquired data. The most fundamental and ubiquitous concept in data science is that of dimensionality reduction (DR). DR refers to a controlled transformation of data that reduces the number of variables one needs to consider, in order to simplify representation and avoid the undesirable phenomena that accompany unboundedly high dimension.
From the combinatorial explosion in possible values, through the loss of efficient optimization caused by the complexity of the ambient space and its geometry, to statistical ill-posedness and overfitting, high dimensionality is truly a curse.
While Principal Component Analysis (PCA) and Reduced-Rank Regression (RRR), with their many variants, constitute the most basic dimensionality reduction techniques, many inherently different methods serve distinct purposes across fields and disciplines. From rate-distortion (RD) theory and the Information Bottleneck (IB) to the Generalized Karhunen-Loève transform (GKL) and manifold learning, each technique requires its own theory and study.
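To ground the two baseline techniques, here is a minimal NumPy sketch of both: PCA as the best rank-k approximation of a data matrix (via truncated SVD), and RRR via the classical construction that projects the ordinary least-squares fit onto the leading singular directions of the fitted values. All variable names and the synthetic data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))                       # n samples, p features
Y = X @ rng.standard_normal((5, 3)) + 0.1 * rng.standard_normal((100, 3))
k = 2                                                   # target rank

# PCA: best rank-k approximation of the centered data (Eckart-Young theorem)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = (U[:, :k] * s[:k]) @ Vt[:k]                     # rank-k reconstruction

# RRR: rank-k coefficient matrix minimizing ||Y - X B||_F, obtained by
# projecting the OLS solution onto the top-k right singular vectors of X @ B_ols
B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
_, _, Vyt = np.linalg.svd(X @ B_ols, full_matrices=False)
B_rrr = B_ols @ Vyt[:k].T @ Vyt[:k]                     # rank <= k by construction
```

Both solutions are closed-form, which is what makes these two methods the natural analytical baselines for comparison later in the thesis.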
Inspired by transfer and unsupervised learning, we introduce Encumbered Principal Component Analysis (EPCA), in which we seek a low-rank linear approximation of data whose objective is not only faithfulness to the original data, but also relevance for predicting other data correlated with it. We study this new dimensionality reduction technique and the representation it affords analytically, comparing it qualitatively with standard PCA and RRR, and use numerical analysis and simulations to demonstrate the theory. Our main tools are drawn from matrix analysis, perturbation theory, and linear algebra.
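The precise EPCA objective is developed in the thesis itself; as a purely illustrative sketch, one plausible way to express the faithfulness/relevance trade-off (an assumption of this sketch, not necessarily the thesis's formulation) is a weighted sum: reconstruct X from a k-dimensional projection while also predicting a correlated target Y from that same projection.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q, k, lam = 200, 6, 2, 2, 0.5
X = rng.standard_normal((n, p))
Y = X[:, :2] @ rng.standard_normal((2, q))        # Y correlated with part of X

def epca_objective(U):
    """Illustrative trade-off (assumed form):
    (1-lam)*||X - X U U^T||^2  +  lam*||Y - X U W||^2,  U orthonormal p x k."""
    Z = X @ U                                     # k-dim representation
    recon = np.linalg.norm(X - Z @ U.T) ** 2      # faithfulness to X
    W = np.linalg.lstsq(Z, Y, rcond=None)[0]      # best linear predictor of Y
    pred = np.linalg.norm(Y - Z @ W) ** 2         # relevance to Y
    return (1 - lam) * recon + lam * pred

# Candidate subspaces to compare: top PCA directions vs. a random subspace
U_pca = np.linalg.svd(X, full_matrices=False)[2][:k].T
U_rand = np.linalg.qr(rng.standard_normal((p, k)))[0]
```

With `lam = 0` this assumed objective reduces to ordinary PCA; with `lam = 1` it becomes a pure prediction problem akin to RRR, so the sketch at least interpolates between the two baselines the thesis compares against.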
Finally, we introduce ReLU Reduced-Rank Regression (R4), in which we seek an optimal nonlinear low-rank approximation of data transformed by a standard ReLU nonlinearity. We study this problem and its solution qualitatively, as deeply as its computational hardness and analytical intractability allow. This part is of prime importance to the theory of deep learning, given its clear and intuitive relation to optimal representations in deep neural networks, an important unresolved problem.
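Since the problem admits no closed form, one can only probe it numerically. The following sketch uses one possible reading of the R4 problem (an assumption; the thesis gives the precise formulation): approximate a nonnegative data matrix Y by ReLU(A @ B), where the factorization A @ B is low-rank before the nonlinearity, fitted by plain gradient descent on the squared error.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 30, 20, 3
Y = np.maximum(rng.standard_normal((n, m)), 0.0)  # nonnegative targets

A = 0.1 * rng.standard_normal((n, k))             # low-rank factors,
B = 0.1 * rng.standard_normal((k, m))             # small random init
lr = 1e-2
for _ in range(1000):
    L = A @ B
    R = np.maximum(L, 0.0)                        # ReLU(A B)
    G = (R - Y) * (L > 0)                         # grad of 0.5||Y - ReLU(L)||^2 w.r.t. L
    gA, gB = G @ B.T, A.T @ G                     # chain rule through the factors
    A -= lr * gA
    B -= lr * gB
```

The nonconvexity and the piecewise-linear kink of the ReLU are exactly what make this problem resistant to the closed-form SVD machinery that resolves PCA and RRR, which is why only a qualitative study is feasible.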