Technion - Israel Institute of Technology, Graduate School
Ph.D. Thesis
Ph.D. Student: Pan Dror
Subject: Optimization Methods for Solving Structured Nonconvex Minimization Problems
Department: Industrial Engineering and Management
Supervisors: Assistant Professor Shoham Sabach, Professor Amir Beck
Full Thesis Text: English Version


Abstract

We consider two approaches to solving hard optimization problems (nonconvex, and in some cases nonsmooth) that possess special structure.

In the first approach, we seek to compute a globally optimal solution of such a problem in reasonable running time. We develop a branch-and-bound algorithm that globally solves special continuous optimization problems in which a potentially nonconvex objective function is minimized under nonconvex inequality constraints satisfying specific solvability assumptions. These assumptions hold for certain quadratically constrained quadratic programming problems that contain only ball, out-of-ball, and linear constraints, together with a nonconvex quadratic objective to be minimized. Such problems can be viewed as extensions of the standard trust region subproblem, and the proposed algorithm exploits the ability to efficiently compute all the local and global minimizers of that subproblem. Numerical experiments show that, for moderate dimensions, the algorithm terminates with verified optimality in reasonable time. An application of the algorithm to sparse source localization problems is also presented.
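For concreteness, the following is a minimal, generic spatial branch-and-bound sketch in Python. It is not the algorithm developed in the thesis: it globally minimizes a one-dimensional function over an interval using a simple Lipschitz-based lower bound (the constant L is an assumed input), whereas the thesis algorithm derives its bounds from exactly solved trust-region-type subproblems.

import heapq

def branch_and_bound(f, lo, hi, L, tol=1e-6, max_iter=10000):
    """Globally minimize f on [lo, hi], assuming |f'| <= L (a hypothetical bound)."""
    def lower_bound(a, b):
        # Valid under the Lipschitz assumption: f(x) >= f(mid) - L * (b - a) / 2 on [a, b].
        mid = 0.5 * (a + b)
        return f(mid) - 0.5 * L * (b - a)

    # Incumbent (best feasible point found so far).
    best_x, best_val = lo, f(lo)
    for x in (hi, 0.5 * (lo + hi)):
        if f(x) < best_val:
            best_x, best_val = x, f(x)

    heap = [(lower_bound(lo, hi), lo, hi)]        # nodes ordered by lower bound
    for _ in range(max_iter):
        if not heap:
            break
        lb, a, b = heapq.heappop(heap)
        if lb > best_val - tol:                   # prune: node cannot improve the incumbent
            continue
        mid = 0.5 * (a + b)
        if f(mid) < best_val:                     # update incumbent
            best_x, best_val = mid, f(mid)
        for c, d in ((a, mid), (mid, b)):         # branch into two child intervals
            child_lb = lower_bound(c, d)
            if child_lb < best_val - tol:
                heapq.heappush(heap, (child_lb, c, d))
    return best_x, best_val

The prune-and-branch structure is the same in the thesis setting; what changes are the subdivision rule and, crucially, the quality of the bounds obtained from the trust-region-type subproblems.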

In the second approach, we apply an iterative scheme called majorization-minimization to special nonconvex and nonsmooth optimization problems. The method is based on minimizing, at each iteration, a properly constructed consistent majorizer of the objective function. We describe a variety of classes of functions for which such a construction is possible. We also introduce an inexact variant of the method, in which only an approximate minimization of the consistent majorizer is performed at each iteration. Both the exact and the inexact algorithms are shown to be descent methods whose accumulation points satisfy a property stronger than standard stationarity. We give examples of settings in which the exact method can be applied. Finally, we show that the inexact method can be applied to the so-called sparse source localization problem, which is a special case of a more general model in which a support function is composed with a smooth map. Under additional assumptions on the majorizers used, the inexact algorithm becomes applicable by employing a fast optimization method on a smooth convex dual of its subproblems.
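As a simple illustration of the majorization-minimization idea (a toy instance, not the composite model or the consistent majorizers treated in the thesis), the following Python sketch minimizes the nonsmooth objective sum_i |a_i^T x - b_i| by repeatedly minimizing the quadratic majorizer obtained from the inequality |t| <= t^2 / (2|t_k|) + |t_k| / 2, which is tight at t = t_k; this yields the classical iteratively reweighted least squares scheme.

import numpy as np

def mm_l1_regression(A, b, x0, n_iter=200, eps=1e-8):
    """MM iterations: each step minimizes a weighted least-squares majorizer of sum_i |a_i^T x - b_i|."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = A @ x - b                          # current residuals t_k
        w = 1.0 / np.maximum(np.abs(r), eps)   # majorizer weights 1/|t_k|; eps guards against t_k = 0
        # The majorizer at x is sum_i w_i (a_i^T x - b_i)^2 / 2 + const; its minimizer
        # solves the weighted normal equations (A^T W A) x = A^T W b
        # (A is assumed to have full column rank).
        AW = A * w[:, None]
        x = np.linalg.solve(A.T @ AW, AW.T @ b)
    return x

Because each majorizer touches the objective at the current iterate, no step can increase the objective value; this is the descent property referred to above, and the exact and inexact methods in the thesis preserve it for much broader classes of majorizers.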