Technion - Israel Institute of Technology
Graduate School
Ph.D. Thesis
Ph.D. Student: Elena Olvovsky
Subject: Novel Gradient-Type Optimization Algorithms for Extremely Large-Scale Nonsmooth Convex Optimization
Department: Department of Mathematics
Supervisors: Professor Emeritus Alexander Yoffe
             Professor Arkadi Nemirovski


Abstract

In this work we develop new gradient-type methods for solving extremely large-scale convex optimization problems. Problems of this type arise in many applications, e.g., medical imaging and the design of mechanical structures.


Up to now, the common belief has been that the best tools for solving large-scale "well-structured" convex problems are Polynomial Time Interior Point methods (PTIP). The most attractive feature of these algorithms is their computational efficiency: the computational effort sufficient to find an ε-solution is proportional to the "number of accuracy digits", with the proportionality coefficient growing polynomially with the design dimension of the problem. This property means rapid convergence in terms of the number of iterations, and it makes high-accuracy solutions attainable. However, all known polynomial time algorithms share a common drawback: the computational effort per iteration grows nonlinearly with the design dimension. Modern computational facilities therefore rule out applying PTIP methods to nonlinear convex problems whose design dimension is of order 10^5 and above.
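Schematically (a stylized, textbook-level illustration, not a bound established in this thesis), the PTIP complexity reads

    \mathrm{Effort}(\varepsilon) \;\sim\; p(n)\,\ln(1/\varepsilon),

where n is the design dimension and p(n) is a polynomial. The factor \ln(1/\varepsilon) is the "number of accuracy digits", while p(n) absorbs the per-iteration work (a Newton-type step typically requires O(n^3) arithmetic operations), and it is this polynomial that becomes prohibitive for n of order 10^5.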


In our research we address very large-scale optimization problems, and the above-mentioned limitation of PTIP requires the exploration of alternatives. A less common approach is to use so-called "black-box"-oriented techniques. These techniques exhibit a dimension-independent (or nearly dimension-independent) rate of convergence which is nearly optimal in the sense of Information-Based Complexity Theory. Recently, the Non-Euclidean Restricted Memory Level method (NERML) was introduced by A. Ben-Tal and A. Nemirovski. This novel subgradient-type technique is adjustable to the "geometry" of the problem to be solved and is also capable of utilizing information gathered about the function in previous iterations.
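To fix ideas, the following minimal sketch (ours, not from the thesis) shows the kind of geometry-adjusted subgradient step such techniques build on: entropy-prox mirror descent over the unit simplex. The toy objective and all names are illustrative; NERML itself adds, among other things, a restricted memory of information from earlier iterations, which is not shown here.

    import numpy as np

    def mirror_descent_simplex(subgrad, x0, step, n_iter=200):
        # Entropy-prox mirror descent over the unit simplex: a subgradient
        # method matched to the geometry of the feasible set.
        x = x0.copy()
        for t in range(n_iter):
            g = subgrad(x)
            # Multiplicative ("entropic") update in place of a Euclidean
            # projection; the cost per iteration is O(n).
            x = x * np.exp(-step(t) * g)
            x /= x.sum()
        return x

    # Toy usage: minimize f(x) = max_i (A x)_i over the simplex.
    rng = np.random.default_rng(0)
    n = 1000
    A = rng.standard_normal((50, n))

    def subgrad(x):
        i = int(np.argmax(A @ x))  # the row attaining the max is a subgradient
        return A[i]

    x_approx = mirror_descent_simplex(subgrad, np.full(n, 1.0 / n),
                                      step=lambda t: 0.5 / np.sqrt(t + 1.0))

The point of the entropy prox is that over the simplex the iteration count depends on the dimension only through a ln(n) factor, in contrast to the Euclidean setup.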


We address two problems that arise in the application of NERML.

First, we provide an incremental implementation of NERML. Compared with incremental implementations of other algorithms, it has the added benefit of addressing a wider range of practical problems: in real-life applications the objective function often possesses a specific structure. We develop an incremental implementation of NERML aimed at minimizing a nondecreasing convex function composed of several convex functions (existing incremental methods address only the "sum of functions" case, specific to image reconstruction).
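In symbols (our notation, chosen for illustration), the problem class is

    \min_{x \in X} f(x), \qquad f(x) \;=\; F\big(g_1(x), \dots, g_m(x)\big),

with F convex and componentwise nondecreasing and every g_i convex, so that f is convex. The choice F(u_1, \dots, u_m) = \sum_i u_i recovers the "sum of functions" case treated by existing incremental methods, and the convex chain rule \sum_i \lambda_i g_i'(x) \in \partial f(x), with \lambda \in \partial F(g(x)) and \lambda \ge 0, is what lets an incremental method accumulate a subgradient one component g_i at a time.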


Second, we broaden the Non-Euclidean Restricted Memory Level method to solve problems with functional constraints, in such a manner that it can be used in conjunction with the incremental implementation.
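Concretely, the constrained setting is

    \min_{x \in X} f(x) \quad \text{subject to} \quad g(x) \le 0,

with f and g convex (g may aggregate several functional constraints as g(x) = \max_i g_i(x)). A standard device in constrained level methods, recalled here only to fix ideas and not necessarily the construction used in the thesis, is to work at each step with the gap function \max\{ f(x) - \ell,\; g(x) \} for the current level \ell.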

For both the incremental and the constrained algorithms, we prove convergence and derive efficiency estimates.