|Ph.D. Student||Levy Yehuda|
|Subject||Stochastic Optimization with Non-Convex Pitfalls: Structure and Algorithms|
|Department||Department of Industrial Engineering and Management||Supervisor||Professor Elad Hazan|
|Full Thesis text|
The power of the Stochastic Optimization (SO) paradigm in machine learning lies in its ability to generalize many problems from the realm of statistical learning and to supply universal tools for solving them. Extensive investigation throughout the last decades has yielded efficient algorithms with worst-case guarantees; this has led many practitioners to embrace the SO paradigm in modeling and solving real-world problems. While the performance limits of a learner are well understood in settings with convex loss mechanisms, substantial gaps remain concerning the performance limits of such a learner in settings with a non-convex loss mechanism.
In this thesis we study structural/algorithmic aspects of some intriguing learning scenarios where (stochastic) non-convex optimization arises, focusing on settings that capture a wide class of learning problems.
Concretely, we tackle non-convex pitfalls that are prevalent in learning frameworks such as neural network optimization and semi-supervised SVMs. We concentrate on phenomena that interfere with the convergence of standard optimization methods, and devise alternative algorithms that are better adapted to the underlying structure of these problems. Our emphasis is on devising efficient algorithms with provable guarantees, which usually requires a careful characterization of the underlying non-convex phenomena.
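As a minimal illustration (not taken from the thesis) of the kind of non-convex phenomenon that can interfere with standard methods, the sketch below runs plain gradient descent on the saddle function f(x, y) = x² − y². Initialized on the line y = 0, the iterates converge to the saddle point at the origin, which is not a local minimum; the function, step size, and iteration count are all illustrative choices.

```python
# Illustrative sketch: gradient descent stalling at a saddle point,
# an example of a non-convex pitfall for standard optimization methods.
# f(x, y) = x^2 - y^2 has a saddle at the origin; when the iterate starts
# on the line y = 0, plain gradient descent never leaves that line and
# converges to the saddle instead of escaping along the descent direction.

def grad_descent(grad, x0, lr=0.1, steps=200):
    """Run fixed-step gradient descent from x0 and return the final iterate."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Gradient of f(x, y) = x^2 - y^2.
def grad_f(p):
    return [2 * p[0], -2 * p[1]]

final = grad_descent(grad_f, [1.0, 0.0])
# The x-coordinate contracts geometrically toward 0, while y stays exactly 0,
# so the method converges to the saddle point (0, 0).
```

A tiny perturbation off the line y = 0 would instead cause the iterates to diverge along the y-axis, which is why algorithms with provable guarantees in such settings must explicitly account for saddle-point structure.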