Technion - Israel Institute of Technology - Graduate School
Ph.D. Thesis
Ph.D. Student: Dan Garber
Subject: Projection-Free Algorithms for Convex Optimization and Online Learning
Department: Industrial Engineering and Management
Supervisor: Professor Elad Hazan


Abstract

The problem of minimizing a linear function over a convex set is often algorithmically simpler and more efficient than its non-linear convex counterpart. Examples include polytopes that arise in combinatorial optimization problems, certain sets of matrices with bounded singular values, balls induced by p-norms, and generalizations thereof.


This phenomenon motivates the computational model of convex optimization and online learning using a linear optimization oracle.
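
To make the oracle model concrete, the following minimal Python sketch (ours, for illustration only; the name lmo_l1_ball is not from the thesis) implements a linear optimization oracle for the ℓ1 ball. Minimizing a linear function over this ball amounts to a single scan for the largest-magnitude coordinate, which is typically much cheaper than computing a Euclidean projection onto a general convex set.

    import numpy as np

    def lmo_l1_ball(c, radius=1.0):
        """Linear optimization oracle for the l1 ball:
        returns argmin over {||x||_1 <= radius} of <c, x>.
        The minimizer is the signed vertex -radius * sign(c_i) * e_i
        at the coordinate i with largest |c_i| -- a single O(n) scan,
        with no projection or matrix factorization needed."""
        i = np.argmax(np.abs(c))
        x = np.zeros_like(c)
        x[i] = -radius * np.sign(c[i])
        return x

    # Usage: the oracle always returns a vertex of the ball.
    c = np.array([0.5, -2.0, 1.0])
    print(lmo_l1_ball(c))  # [0. 1. 0.]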

In this computational model we give several new results that improve upon the previous state of the art:


1.      We present a variant of the conditional gradient method, a first-order method for constrained smooth minimization using a linear optimization oracle, that converges exponentially fast when the convex set is a polytope and the objective function is strongly convex. This is an exponential improvement in convergence rate over previous results (a sketch of the vanilla method appears at the end of this abstract).


2.      Building on the machinery developed for the above result, we derive a linear optimization-based algorithm with optimal regret for online convex optimization in the case that the feasible set is a polytope. This resolves, for this important case, open questions posed by Kalai and Vempala and by Hazan and Kale.

The new online algorithm also gives rise to linear optimization-based algorithms for non-smooth and stochastic convex optimization over polytopes, with the same rates as projection-based first-order methods in terms of the accuracy parameter.


3.      We show that the vanilla conditional gradient algorithm converges at an accelerated rate (a quadratic improvement over the standard, previously known rate) when applied to smooth and strongly convex optimization over strongly convex sets, which include various balls induced by p-norms and generalizations thereof.


4.      We present a linear optimization-based online algorithm for a natural online extension of the fundamental leading eigenvector problem from numerical linear algebra. In contrast to previous linear oracle-based algorithms, the new algorithm i) enjoys a regret bound with a much more favorable dependence on the dimension, and ii) requires computations that depend only on the sparsity of the data and not explicitly on the dimension (see the power-method baseline sketched below). We also present a simpler and more efficient algorithm for the easier stochastic version of the problem.
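
For context, the sketch below gives the vanilla conditional gradient (Frank-Wolfe) method underlying results 1 and 3 above. This is the standard textbook scheme, not any of the thesis's variants; the helper lmo_l1 repeats the ℓ1-ball oracle from the earlier sketch so that the example is self-contained, and gamma = 2/(t+2) is the standard step-size schedule.

    import numpy as np

    def lmo_l1(c, radius=1.0):
        # l1-ball linear optimization oracle (same as the earlier sketch).
        i = np.argmax(np.abs(c))
        x = np.zeros_like(c)
        x[i] = -radius * np.sign(c[i])
        return x

    def conditional_gradient(grad, lmo, x0, num_iters=100):
        """Vanilla conditional gradient (Frank-Wolfe): minimizes a smooth
        convex function over a convex set accessed only through a linear
        optimization oracle lmo(c) = argmin_{v in set} <c, v>.
        Converges at the standard O(1/t) rate for smooth objectives."""
        x = np.asarray(x0, dtype=float)
        for t in range(num_iters):
            v = lmo(grad(x))                   # linearize at x and call the oracle
            gamma = 2.0 / (t + 2.0)            # standard diminishing step size
            x = (1.0 - gamma) * x + gamma * v  # convex combination stays feasible
        return x

    # Usage: minimize ||x - b||^2 over the unit l1 ball.
    b = np.array([0.8, -0.3, 0.1])
    x = conditional_gradient(lambda x: 2.0 * (x - b), lmo_l1, np.zeros(3))

Note that every iterate is a convex combination of oracle outputs, so feasibility is maintained without a single projection step.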
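
Result 4 concerns the leading eigenvector problem. As a point of reference only (this is the classical power method from numerical linear algebra, not the thesis's online algorithm), the sketch below illustrates the kind of sparsity-dependent computation referred to above: each iteration is one sparse matrix-vector product, costing time proportional to the number of nonzero entries of the data rather than to the dimension squared.

    import numpy as np
    from scipy.sparse import random as sparse_random

    def power_method(A, num_iters=200, seed=0):
        """Classical power method: approximates the leading eigenvector of a
        symmetric matrix A. Each iteration is a single matrix-vector product,
        so for sparse A the cost is O(nnz(A)) per iteration."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(num_iters):
            x = A @ x                  # the only operation touching the data
            x /= np.linalg.norm(x)     # renormalize to keep iterates bounded
        return x

    # Usage: a random sparse symmetric matrix with ~1% nonzero entries.
    M = sparse_random(1000, 1000, density=0.01, random_state=0)
    v = power_method((M + M.T) / 2)    # symmetrize before extracting the eigenvector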