Ph.D. Thesis
Hong Tao
Numerical Optimization and Multigrid Computational Methods with Applications
Department of Computer Science
Supervised by Prof. Irad Yavneh and Dr. Michael Zibulevsky

Abstract

Optimization problems arise naturally in many different fields. In many engineering problems, we seek to arrange things in the best possible way, which defines an optimization problem of a certain formulation. Numerical optimization focuses on developing fast numerical algorithms for solving such optimization problems. Multigrid computational methods study how to solve differential problems efficiently by using a hierarchy of discretizations. In this thesis, we study numerical optimization and multigrid computational methods as well as their applications.

Regularization by denoising (RED) is a recently developed framework for constructing advanced priors from state-of-the-art image denoising algorithms. We begin by applying vector extrapolation to accelerate the existing solvers in RED. We then propose a general framework named weighted proximal methods (WPMs) for RED. We show that two previous solvers in RED, namely the fixed-point and accelerated proximal gradient methods, are special cases of WPMs. By choosing a simple weighting, we show that WPMs converge faster than the previous solvers.
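As an illustration of the fixed-point idea referred to above, the sketch below solves the RED first-order condition H^T(Hx - y) + λ(x - f(x)) = 0 by iterating x ← (H^T H + λI)^{-1}(H^T y + λ f(x)). The box-filter denoiser and the toy identity operator H are illustrative stand-ins, not the denoisers or degradation models used in the thesis.

```python
import numpy as np

def red_fixed_point(y, H, denoise, lam=0.2, iters=50):
    """RED fixed-point iteration (a sketch; `denoise` is any image denoiser).

    Iterates x <- (H^T H + lam*I)^{-1} (H^T y + lam * denoise(x)),
    whose fixed point satisfies H^T(Hx - y) + lam*(x - denoise(x)) = 0.
    """
    n = H.shape[1]
    A = H.T @ H + lam * np.eye(n)
    HTy = H.T @ y
    x = HTy.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, HTy + lam * denoise(x))
    return x

# toy example: H = identity (pure denoising), denoiser = moving average
rng = np.random.default_rng(0)
H = np.eye(8)
clean = np.ones(8)
y = clean + 0.3 * rng.standard_normal(8)
box = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")  # toy "denoiser"
x_hat = red_fixed_point(y, H, box)
```

Because the toy denoiser is linear with operator norm at most one, the iteration is a contraction here and the RED gradient vanishes at the limit.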

In the second part of this thesis, we first adapt Nesterov's scheme to accelerate iterative methods for linear problems. In this work, we derive an analytic solution for the optimal parameter inside Nesterov's scheme and discuss the robustness of the scheme. Our second work in this part merges multigrid optimization with sequential subspace optimization to formulate a new accelerated scheme, dubbed SESOP-MG, which inherits the merits of both methods. Additionally, we study the asymptotic convergence factor of the two-grid version of SESOP-MG for linear problems and propose a fixed-stepsize version that further reduces the computational cost of SESOP-MG.
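To illustrate the kind of acceleration meant above, the following sketch applies Nesterov's momentum to a Richardson iteration for a symmetric positive definite system Ax = b, using the classical parameter choice derived from the extreme eigenvalues (the thesis's analytic optimal parameter is developed in its own setting; this is only the textbook variant):

```python
import numpy as np

def nesterov_linear(A, b, iters=200):
    """Nesterov-accelerated Richardson iteration for SPD Ax = b (a sketch).

    With extreme eigenvalues mu and L, uses step 1/L and momentum
    (sqrt(kappa) - 1) / (sqrt(kappa) + 1), where kappa = L / mu.
    """
    eigs = np.linalg.eigvalsh(A)      # ascending eigenvalues
    mu, L = eigs[0], eigs[-1]
    alpha = 1.0 / L
    beta = (np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)
    x = np.zeros_like(b)
    y = x.copy()
    for _ in range(iters):
        x_new = y - alpha * (A @ y - b)   # gradient step on 0.5*x^T A x - b^T x
        y = x_new + beta * (x_new - x)    # momentum (extrapolation) step
        x = x_new
    return x

# toy SPD system
rng = np.random.default_rng(1)
M = rng.standard_normal((10, 10))
A = M @ M.T + 10 * np.eye(10)
b = rng.standard_normal(10)
x = nesterov_linear(A, b)
```

The momentum step costs almost nothing per iteration yet improves the convergence rate from O(kappa) to O(sqrt(kappa)) iterations, which is the motivation for tuning the scheme's parameter analytically.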

Compressive sensing (CS) is a signal processing technique for acquiring and reconstructing signals of interest efficiently. Sparsity and incoherence are two conditions under which recovery in CS is possible. Sparsity means that the signal can be represented sparsely in some domain, which can be realized by learning a dictionary or choosing a prescribed one for particular signals. Incoherence is satisfied by designing a sensing matrix, which also serves to acquire the signals. Our first work in this direction presents a model for learning the dictionary and the sensing matrix simultaneously. The resulting CS system yields higher signal reconstruction accuracy than previous CS systems. Our second work addresses the efficiency of acquiring signals with a structured sensing matrix composed of a sparse matrix and a dense matrix, which allows a fast implementation.
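The efficiency gain from such a structured sensing matrix can be sketched as follows: with Phi = B S, where S is sparse and B is a small dense matrix (the factorization and dimensions here are illustrative, not the thesis's construction), acquiring y = Phi x costs O(nnz(S)) plus a small dense product instead of a full dense matrix-vector multiply.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(2)
n, p, m = 256, 64, 32  # signal length, intermediate size, number of measurements

# hypothetical structured sensing matrix Phi = B @ S
S = sparse.random(p, n, density=0.05, random_state=2, format="csr")  # sparse part
B = rng.standard_normal((m, p))                                      # dense part

x = rng.standard_normal(n)

# fast acquisition: apply the sparse S first (O(nnz) ops), then the small dense B
y_fast = B @ (S @ x)

# reference: form the full dense sensing matrix explicitly (what the structure avoids)
Phi = B @ S.toarray()
y_ref = Phi @ x
```

Both paths produce identical measurements; the structured path never materializes the m-by-n dense matrix, which is what makes the acquisition fast.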