Technion - Israel Institute of Technology - Graduate School
Ph.D. Thesis
Ph.D. Student: Shoham Sabach
Subject: Iterative Methods for Solving Optimization Problems
Department: Department of Mathematics
Supervisor: Professor Emeritus Simeon Reich
Full Thesis Text: English Version


Abstract

This dissertation concerns iterative methods for solving diverse optimization problems in infinite-dimensional and Euclidean spaces. It contains six chapters. My contributions to the fields of Optimization Theory, Nonlinear Analysis and Numerical Methods range over a broad spectrum, from practical methods for solving real-world problems to iterative algorithms for approximating solutions of optimization problems in infinite-dimensional spaces.


    The first five chapters of this dissertation focus on my research in the infinite-dimensional case. The iterative methods proposed in the third chapter are based on several results in Fixed Point Theory and Convex Analysis obtained in the first two chapters. We first studied new properties of Bregman distances with respect to two classes of convex functions: Legendre functions and totally convex functions. These developments led to many results regarding fixed points of nonexpansive operators defined with respect to Bregman distances instead of the norm. We deal with a wide variety of optimization problems, such as fixed point problems, equilibrium problems, minimization problems, variational inequalities and the convex feasibility problem. The fourth chapter is devoted to a long and detailed study of the problem of finding zeroes of monotone mappings. In this area we wish to develop iterative methods that generate approximation sequences converging strongly to a zero.
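    For context, the central notions can be recalled as follows (a standard formulation; the precise hypotheses on the function f and the space X are those stated in the thesis itself). The Bregman distance associated with a Gâteaux differentiable convex function f is
\[
D_f(x, y) = f(x) - f(y) - \langle \nabla f(y), \, x - y \rangle,
\]
and an operator T is called nonexpansive with respect to D_f when D_f(Tx, Ty) \le D_f(x, y) for all x, y in its domain; the fixed point notions above replace the norm with D_f in this way. The problem of the fourth chapter is then to find a point x \in X such that
\[
0 \in A x,
\]
where A \colon X \to 2^{X^*} is a monotone mapping.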


    My research in finite-dimensional spaces appears in the last (but not least) chapter and deals with developing an algorithm for solving optimization problems which arise from real-world applications. We developed a new numerical method for finding minimum norm solutions of convex optimization problems. This algorithm is the first attempt to solve such problems directly, rather than by solving a "sequence" of subproblems.
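    In formulas (a standard way to state the problem; the notation here is illustrative, not the thesis's own), the minimum norm solution problem reads
\[
\min \; \|x\| \quad \text{subject to} \quad x \in \operatorname*{argmin}_{y \in C} f(y),
\]
where f is a convex function and C a closed convex set. A typical indirect scheme is Tikhonov regularization, which solves the sequence of subproblems \min_{y \in C} \{ f(y) + \varepsilon_k \|y\|^2 \} and lets \varepsilon_k \to 0; the method developed in this chapter avoids such a sequence and approaches the minimum norm solution directly.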