Ph.D. Student: Papyan Vardan
Subject: Global Versus Local Modeling of Signals
Department: Department of Computer Science
Supervisor: Professor Michael Elad
Many image restoration algorithms in recent years are based on patch-based models. The core idea is to decompose the input image into fully overlapping patches, restore each of them independently using a local model, and then merge the results by plain patch averaging. Tackling the global restoration problem with such local independent processing naturally creates a local-global gap. We begin this thesis by analyzing this gap in a simple toy example and explaining means for its amendment. We then propose an alternative to the patch-averaging paradigm. In particular, we consider Convolutional Sparse Coding (CSC), a global sparsity-inspired model defined in terms of a shift-invariant local sparse prior. On the theoretical side, we prove that the analysis of such a global model relies on local properties, while on the practical side, we show how the inference and the training of the model can be carried out using simple local operations, similar to those used in the patch-averaging algorithm. We then propose a multi-layered extension of the CSC, termed multi-layer CSC (ML-CSC), which is shown to be tightly connected to Convolutional Neural Networks (CNNs). Leveraging this connection, and extending the theoretical study of the CSC to the ML-CSC, we provide theoretical guarantees for CNNs from the perspective of sparsity.
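The patch-averaging paradigm described above can be sketched in a few lines of NumPy. The sketch below uses an identity mapping as a stand-in for the local restoration model (an actual algorithm would, e.g., denoise each patch); the function names and the patch size are illustrative, not taken from the thesis.

```python
import numpy as np

def extract_patches(image, p):
    """Decompose a 2D image into all fully overlapping p-by-p patches."""
    H, W = image.shape
    return np.stack([image[i:i + p, j:j + p]
                     for i in range(H - p + 1)
                     for j in range(W - p + 1)])

def merge_patches(patches, shape, p):
    """Merge (restored) patches back into an image by plain averaging:
    each pixel is the mean of all patch values that cover it."""
    H, W = shape
    acc = np.zeros(shape)   # accumulated patch contributions
    cnt = np.zeros(shape)   # number of patches covering each pixel
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += patches[k]
            cnt[i:i + p, j:j + p] += 1
            k += 1
    return acc / cnt

# Sanity check: with an identity local model, decompose-then-average
# reproduces the input image exactly.
img = np.arange(25, dtype=float).reshape(5, 5)
patches = extract_patches(img, 3)            # 9 overlapping 3x3 patches
restored = merge_patches(patches, img.shape, 3)
assert np.allclose(restored, img)
```

In a real restoration algorithm the patches would be modified independently between the two calls, and it is precisely this independent local processing that opens the local-global gap analyzed in the thesis.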