|Ph.D. Student||Yakov Vaisbourd|
|Subject||Decomposition and First Order Methods for Large Scale|
|Department||Department of Industrial Engineering and Management||Supervisor||Full Professor Amir Beck|
In recent decades we have witnessed exponential growth in the capability to acquire and store information. This trend has led to an increasing demand for optimization methods designed to solve large scale models. In this thesis we consider several aspects of the two most common approaches to large scale optimization: decomposition and first order methods. The first approach breaks the problem down into a set of smaller, easier-to-solve subproblems, which are solved repeatedly until the optimal solution of the original problem is obtained. The second approach enjoys low computational and storage requirements, since its main computational step is usually a matrix-vector multiplication. Both approaches are therefore well suited for large scale applications, and commonly, when a huge scale optimization problem is to be solved, the method of choice blends the two ideas.
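To make the computational claim about first order methods concrete, the following is a minimal sketch (not taken from the thesis) of a gradient step for the least squares problem min ||Ax - b||^2: the dominant cost per iteration is two matrix-vector products, with no matrix factorization or storage of A^T A required. The function names and step-size choice are illustrative assumptions.

```python
import numpy as np

def gradient_step(A, b, x, step_size):
    # Gradient of 0.5*||Ax - b||^2 is A^T (A x - b):
    # the work per iteration is two matrix-vector products.
    residual = A @ x - b          # one matrix-vector product
    grad = A.T @ residual         # a second matrix-vector product
    return x - step_size * grad

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
x = np.zeros(20)
L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
for _ in range(200):
    x = gradient_step(A, b, x, 1.0 / L)
```

Because only products with A and A^T are needed, the same scheme applies unchanged when A is sparse or available only through a black-box operator, which is precisely what makes first order methods attractive at large scale.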
We consider three models designed to solve problems in various fields, such as image and natural language processing, bioinformatics, and more. In practical situations, a common property of the models under consideration is that they involve a large number of decision variables; however, they differ in their level of complexity.
The models that we consider are the convex Regularized Consensus model, the non-convex yet polynomially solvable Trust Region Subproblem and, finally, the NP-hard Sparse Principal Component Analysis model. Accordingly, in this thesis we present various schemes and analysis approaches, aiming to successfully apply a decomposition or a first order approach in view of the conflicting goals of scalability and solution quality.
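As one hedged illustration of the scalability/quality trade-off on the hardest of these models, the sketch below implements a truncated power method for Sparse PCA. This is a well-known first order heuristic, not necessarily the scheme developed in the thesis: each iteration costs one matrix-vector product plus a hard-thresholding step that keeps the k largest entries, so it scales well but offers no global optimality guarantee for this NP-hard problem. The covariance matrix, sparsity level, and iteration count below are assumptions for the example.

```python
import numpy as np

def truncated_power_method(Sigma, k, iters=100):
    # Sigma: symmetric covariance matrix (n x n); k: target sparsity level.
    # Returns a unit vector with at most k nonzero entries that
    # approximately maximizes x^T Sigma x.
    n = Sigma.shape[0]
    x = np.ones(n) / np.sqrt(n)            # dense starting point
    for _ in range(iters):
        y = Sigma @ x                      # matrix-vector product
        idx = np.argsort(np.abs(y))[-k:]   # indices of the k largest entries
        z = np.zeros(n)
        z[idx] = y[idx]                    # hard-threshold to k nonzeros
        x = z / np.linalg.norm(z)          # renormalize to the unit sphere
    return x

# Toy example: a diagonal covariance whose leading variance sits on
# the first two coordinates, so the sparse principal component with
# k = 2 should be supported on indices {0, 1}.
Sigma = np.diag([5.0, 4.0, 1.0, 1.0, 1.0])
v = truncated_power_method(Sigma, k=2)
```

The iteration alternates a cheap first order step with a combinatorial projection, a pattern that recurs whenever a scalable method is sought for a model whose exact solution is intractable.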