M.Sc Student: Vaits Nina
Subject: Re-Adapting the Regularization of Weights for Non-Stationary Regression
Department: Department of Electrical Engineering
Supervisor: Professor Yacov Crammer
In online learning, the learner's performance is compared with that of the best-performing prediction function from some fixed class using the cumulative loss. The learner is considered good when its cumulative loss is not much larger than the cumulative loss of the best function chosen in hindsight; ideally, the average gap per round vanishes as the sequence grows. In the real world, however, many applications operate in a non-stationary environment, where the best prediction function is not fixed and may drift over time. This work introduces a new online learning algorithm for regression that works in the non-stationary setting using a per-feature learning rate. Our analysis shows that as long as the cumulative drift of the best-performing sequence of functions is sub-linear in the sequence length, the algorithm suffers sub-linear regret. We also introduce an extended form of this algorithm that achieves sub-linear regret in the non-stationary setting, yet in the special case where no drift occurs (i.e., the setting is stationary) it achieves logarithmic regret, matching many algorithms designed mainly for the stationary setting. Simulations demonstrate the usefulness of this algorithm compared with other state-of-the-art approaches.
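To make the setting concrete, the following is a minimal sketch (not the thesis algorithm) of online regression with a per-feature learning rate against a slowly drifting comparator. The update is AdaGrad-style, the drift model and all constants are illustrative assumptions, and regret is measured exactly as described above: the learner's cumulative squared loss minus that of the drifting comparator sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 2000, 5                      # rounds and feature dimension (illustrative)
w_star = rng.normal(size=d)         # comparator weights; will drift slowly
X = rng.normal(size=(T, d))

w = np.zeros(d)                     # learner's weights
G = np.zeros(d)                     # per-feature sum of squared gradients
eta, eps = 0.5, 1e-8

learner_loss = 0.0
comparator_loss = 0.0
for t in range(T):
    x = X[t]
    # Slow drift: cumulative drift stays sub-linear in T for this step size.
    w_star = w_star + 0.001 * rng.normal(size=d)
    y = w_star @ x + 0.01 * rng.normal()

    pred = w @ x
    learner_loss += (pred - y) ** 2
    comparator_loss += (w_star @ x - y) ** 2

    # Squared-loss gradient, then a per-feature (diagonal) learning rate:
    # each coordinate is scaled by its own accumulated gradient history.
    grad = 2.0 * (pred - y) * x
    G += grad ** 2
    w = w - eta * grad / np.sqrt(G + eps)

regret = learner_loss - comparator_loss
print(f"cumulative regret over {T} rounds: {regret:.2f}")
```

With slow drift as above, the average regret per round stays small; making the drift grow linearly with `T` would break this, which is why the sub-linear-drift condition appears in the analysis.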