M.Sc. Student: Luz Kobi
Subject: Online Choice of Active Learning Algorithms
Department: Department of Computer Science
Supervisors: Professor Ran El-Yaniv, Professor Emeritus Yoram Baram
This work is concerned with the question of how to combine an ensemble of active learners online so as to expedite learning progress in pool-based active learning. Seeking top-performing active learning algorithms among the many proposed in the literature, we found no consistent winner across problems; moreover, different types of problems clearly favor particular algorithms. This situation motivates an online learning approach in which one attempts to utilize an ensemble of algorithms online so as to achieve performance close to that of the best algorithm in hindsight. We develop an active learning master algorithm based on a known competitive algorithm for the multi-armed bandit problem.

A major challenge in successfully choosing top-performing active learners online is to reliably estimate their progress during the learning session. Standard classifier evaluation techniques, such as cross-validation or leave-one-out, usually fail when used to estimate the performance of an active learner, because the set of labeled instances selected by a good active learner tends to be acutely biased towards 'hard' instances that do not reflect the true underlying distribution. To address this problem we propose a simple maximum entropy criterion that provides effective estimates in realistic settings.

We study the performance of the proposed master algorithm using an ensemble containing two of the best-known active learning algorithms as well as a new algorithm. The resulting active learning master algorithm is empirically shown to consistently perform almost as well as, and sometimes outperform, the best algorithm in the ensemble on a range of classification problems.
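The master algorithm described above is based on a known competitive multi-armed bandit algorithm. As a minimal sketch only, the following assumes an EXP3-style importance-weighted update and an entropy-of-predictions reward; both the specific bandit update and the reward definition here are illustrative stand-ins, not the thesis's exact master algorithm or maximum entropy criterion.

```python
import math
import random

def entropy_reward(predicted_labels):
    """Illustrative reward: normalized entropy of the label distribution
    the current classifier induces on the unlabeled pool (a stand-in for
    the maximum entropy criterion mentioned in the abstract)."""
    n = len(predicted_labels)
    counts = {}
    for y in predicted_labels:
        counts[y] = counts.get(y, 0) + 1
    if len(counts) < 2:
        return 0.0  # degenerate prediction, lowest reward
    ent = -sum((c / n) * math.log(c / n) for c in counts.values())
    return ent / math.log(len(counts))  # scale to [0, 1]

class Exp3Master:
    """EXP3-style master: keeps one weight per active learner in the
    ensemble and samples which learner selects the next query."""

    def __init__(self, n_arms, gamma=0.1):
        self.gamma = gamma          # exploration rate
        self.weights = [1.0] * n_arms

    def probabilities(self):
        # Mix the exponentially weighted distribution with uniform
        # exploration, as in the standard EXP3 scheme.
        total = sum(self.weights)
        k = len(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def choose(self, rng=random):
        """Sample an active learner (arm) to issue the next query."""
        r, acc = rng.random(), 0.0
        probs = self.probabilities()
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                return i
        return len(probs) - 1

    def update(self, arm, reward):
        """Importance-weighted reward estimate for the chosen arm only."""
        probs = self.probabilities()
        est = reward / probs[arm]
        self.weights[arm] *= math.exp(self.gamma * est / len(self.weights))
```

In a pool-based session, each round would call `choose()` to pick which ensemble member queries a label, retrain the classifier, and feed `entropy_reward(...)` of its pool predictions back through `update(...)`, so that learners whose queries keep the induced label distribution informative accumulate weight.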