M.Sc. Student: Edward Moroshko
Subject: New Min-Max Algorithms and Analysis for Online Regression
Department: Department of Electrical Engineering
Supervisor: Professor Yacov Crammer
In online learning, the performance of an algorithm is typically compared to that of a fixed function from some class, using a quantity called regret. The goal of a learning algorithm is to keep its regret as small as possible. About a decade ago, the last-step min-max approach was proposed for online regression: at each round, the algorithm minimizes the maximal (worst-case) regret with respect to the best-performing linear function. To make this min-max optimization problem well defined, it was assumed that the choices of the adversary are bounded, which artificially restricts attention to extreme cases. In this work we introduce a new algorithm that weighs the examples in such a way that the min-max problem is well defined. We provide an analysis of the algorithm and develop new regret bounds that are better in some situations than other state-of-the-art bounds.
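To make the setting concrete, here is a minimal sketch of a classical last-step min-max style forecaster for online linear regression (in the spirit of Forster's last-step minimax / the Vovk-Azoury-Warmuth forecaster, not the weighted variant introduced in this thesis). The function name and the regularization parameter `reg` are illustrative choices, not notation from the thesis.

```python
import numpy as np

def last_step_minmax_predictions(X, y, reg=1.0):
    """Sketch of a last-step min-max style online regression forecaster.

    At round t the learner observes x_t, predicts, and only then sees y_t.
    The prediction is x_t^T A_t^{-1} b_{t-1}, where
      A_t     = reg * I + sum_{s <= t} x_s x_s^T   (includes the current x_t),
      b_{t-1} = sum_{s < t} y_s x_s.
    Including the current instance in A_t is the "last-step" ingredient.
    """
    n, d = X.shape
    A = reg * np.eye(d)          # regularized second-moment matrix
    b = np.zeros(d)              # running sum of y_s * x_s
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        A += np.outer(x, x)      # fold in the current instance before predicting
        preds[t] = x @ np.linalg.solve(A, b)
        b += y[t] * x            # label is revealed only after the prediction
    return preds
```

Since `b` starts at zero, the first prediction is always 0; afterwards the forecaster behaves like ridge regression that also regularizes with the current instance.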
In many real-world problems there is no single best target function over the entire runtime of the algorithm; instead, the best (local) target function drifts over time. In this work we extend the last-step min-max approach to the drifting setting and introduce a new algorithm that is last-step min-max optimal in the presence of drift. We analyze the algorithm in the worst-case regret framework and show that it maintains an average loss close to that of the best slowly changing sequence of linear functions, as long as the total amount of drift is sublinear. We show formally that in some situations our bound is better than existing bounds, and synthetic simulations demonstrate the advantages of our algorithm over previous approaches, especially in a worst-case constant-drift setting.
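One generic way to adapt such a forecaster to drift is to let old observations fade, so the learner tracks a slowly changing target rather than a fixed one. The sketch below illustrates this idea with a simple geometric discounting of the accumulated statistics; the discounting scheme and the `discount` parameter are hypothetical illustrations, not the thesis's algorithm or its analysis.

```python
import numpy as np

def drifting_minmax_predictions(X, y, reg=1.0, discount=0.05):
    """Illustrative sketch: a last-step min-max style forecaster with
    discounting of past statistics, so it can track a drifting target.

    Each round, the accumulated second-moment matrix A is shrunk back
    toward its regularized initial value and the label statistics b are
    decayed, before the usual last-step update and prediction.
    """
    n, d = X.shape
    A = reg * np.eye(d)
    b = np.zeros(d)
    preds = np.empty(n)
    for t in range(n):
        # Decay old information so distant rounds have less influence
        # (a simple stand-in for handling drift).
        A = (1.0 - discount) * A + discount * reg * np.eye(d)
        b = (1.0 - discount) * b
        x = X[t]
        A += np.outer(x, x)      # last-step: include the current instance
        preds[t] = x @ np.linalg.solve(A, b)
        b += y[t] * x            # label revealed after predicting
    return preds
```

With `discount=0.0` this reduces to the fixed-target forecaster; larger values make the learner more responsive to drift at the cost of higher variance on stationary data.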