Technion - Israel Institute of Technology, Graduate School
Ph.D. Thesis
Ph.D. Student: Tomer Koren
Subject: Uncertainty in Machine Learning: Algorithms and Limitations
Department: Department of Industrial Engineering and Management
Supervisor: Professor Elad Hazan
Full Thesis Text: English Version


Abstract

This thesis studies mathematical models of learning and optimization under various forms of uncertainty, with the twin goals of understanding their inherent complexities and limitations, and of designing efficient algorithms that perform optimally in the face of uncertainty.

The first part of the thesis focuses on partial-information models in online learning, a modern and well-established framework in machine learning for prediction and decision-making under uncertainty. In online learning, a player has to repeatedly choose actions and incur the respective losses assigned by the environment, with the goal of minimizing the regret: the difference between the player's total loss and the total loss of the best fixed action in hindsight. First, we consider a general graph-structured feedback model, which includes the most common feedback models as special cases, and give a complete characterization of the regret rates in the induced online problem in terms of natural combinatorial properties of the feedback graph. We then focus on the well-studied bandit feedback model and study two natural variants of the multi-armed bandit (MAB) problem: (i) MAB with switching costs, and (ii) MAB where regret is measured with respect to a set of stateful policies rather than fixed actions. For both problems, we provide a tight regret analysis and exhibit the first known discrepancies between the asymptotic regret rates of online learning with bandit feedback and with full feedback, resolving a long-standing and fundamental gap in our understanding of learning with bandit feedback. Finally, we study the computational price of uncertainty in online learning and show that the adversarial nature of online learning can result in an exponential increase in the computational complexity of learning, as compared to the statistical case.
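
For concreteness, if the player selects actions x_1, ..., x_T from an action set and the environment assigns loss functions ℓ_1, ..., ℓ_T, the regret after T rounds is the standard quantity

\mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(x_t) - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} \ell_t(x),

where the minimum over a fixed x is the best fixed action in hindsight mentioned above.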
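
To make the bandit feedback model concrete, the following is a minimal Python sketch of the classical EXP3 algorithm, in which the player observes only the loss of the arm actually played. This is a standard textbook baseline given here for orientation, not one of the algorithms developed in the thesis; loss_fn, n_arms, and the tuning of eta are illustrative assumptions.

    import numpy as np

    def exp3(loss_fn, n_arms, T, seed=0):
        # Classical EXP3 for the adversarial multi-armed bandit, losses in [0, 1].
        # Bandit feedback: only loss_fn(t, arm) for the arm actually played is seen.
        rng = np.random.default_rng(seed)
        eta = np.sqrt(2.0 * np.log(n_arms) / (T * n_arms))  # standard step size
        weights = np.ones(n_arms)
        total_loss = 0.0
        for t in range(T):
            p = weights / weights.sum()
            arm = rng.choice(n_arms, p=p)
            loss = loss_fn(t, arm)  # the only feedback the player receives
            total_loss += loss
            # Importance-weighted estimate keeps the update unbiased for all arms.
            estimate = np.zeros(n_arms)
            estimate[arm] = loss / p[arm]
            weights *= np.exp(-eta * estimate)
        return total_loss

EXP3 attains expected regret of order sqrt(T n log n) over n arms, as opposed to sqrt(T log n) with full feedback; characterizing when such gaps between bandit and full feedback are unavoidable is precisely the subject of the tight analyses in this part.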

In the second part of the thesis, we develop efficient algorithms for learning and optimization under uncertainty, relying on techniques from online learning and online convex optimization. In particular, we give simple and efficient algorithms for linear Ridge and Lasso regression in a setting where the observation at training time is limited to a small fixed number of attributes of each example, and we prove that they need virtually the same total number of attributes as full-information algorithms for the problem, significantly improving upon previous work. Additionally, we design the first efficient methods for the Robust Optimization framework that find an approximate robust solution using a number of calls to a solver for the original non-robust problem that is independent of the dimension of the problem.
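
To give a feel for the limited-attribute setting, the sketch below shows a generic way to build an unbiased estimate of the squared-loss gradient while observing only a few attributes of each example: sample attributes with inverse-probability weighting, and use two independent samples so that the product term stays unbiased. This is an illustrative device under those assumptions, not the thesis's exact algorithm; sample_attributes, sampled_gradient, and the budget parameter k are hypothetical names.

    import numpy as np

    def sample_attributes(x, k, rng):
        # Observe only k uniformly sampled attributes of x and return an
        # inverse-probability-weighted estimate of the full vector, which
        # is unbiased: each draw reveals x[i] with probability 1/d.
        d = len(x)
        idx = rng.integers(0, d, size=k)
        x_hat = np.zeros(d)
        for i in idx:
            x_hat[i] += x[i] * d / k
        return x_hat

    def sampled_gradient(w, x, y, k, rng):
        # Unbiased estimate of the squared-loss gradient 2*(w.x - y)*x from
        # 2*k observed attributes: two INDEPENDENT estimates of x, so the
        # product of the two factors remains unbiased in expectation.
        x1 = sample_attributes(x, k, rng)
        x2 = sample_attributes(x, k, rng)
        return 2.0 * (w @ x1 - y) * x2

Feeding such estimates to a stochastic gradient or online convex optimization method is the natural way attribute-efficient regression connects to the techniques named above.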
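
Similarly, the oracle-based character of the robust optimization result can be illustrated by pairing a multiplicative-weights adversary over a finite uncertainty set with one call per round to a solver for the non-robust problem, so that the number of solver calls depends on the number of rounds rather than on the dimension. A minimal sketch under those assumptions (finite scenario set, linear objective, bounded losses); robust_via_oracle and simplex_oracle are illustrative names, not the thesis's interface.

    import numpy as np

    def robust_via_oracle(oracle, scenarios, T=200, eta=0.1):
        # Approximately solve min_x max_{c in scenarios} c.x using one call
        # per round to oracle(c), a solver for the non-robust min_x c.x.
        # A multiplicative-weights adversary mixes the scenarios; averaging
        # the oracle's best responses gives an approximately robust solution.
        m = len(scenarios)
        weights = np.ones(m)
        solutions = []
        for _ in range(T):
            p = weights / weights.sum()
            mixed = sum(pi * c for pi, c in zip(p, scenarios))
            x = oracle(mixed)                # single non-robust solve
            solutions.append(x)
            losses = np.array([c @ x for c in scenarios])  # assumed bounded
            weights *= np.exp(eta * losses)  # upweight bad scenarios
        return np.mean(solutions, axis=0)

    def simplex_oracle(c):
        # Example non-robust solver: decisions on the probability simplex,
        # where minimizing c.x just puts all mass on the cheapest coordinate.
        x = np.zeros(len(c))
        x[np.argmin(c)] = 1.0
        return x

The key point this mirrors from the result above is the accounting: the robustness is bought with repeated calls to the original solver, not with a blow-up in the dimension of the problem being solved.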