Ph.D. Student: Esmeir Saher
Subject: Anytime Algorithms for Learning Anytime Classifiers
Department: Department of Computer Science
Supervisor: Professor Shaul Markovitch
Machine learning techniques are gaining prevalence in the production of a wide range of classifiers for complex real-world applications with nonuniform testing and misclassification costs. The increasing complexity of these applications poses a real challenge to resource management during learning and classification. In this work we introduce a novel framework for operating in such complex environments. The core of our framework is an anytime decision tree learner that allows learning time to be increased in return for lower classification costs. It builds a tree top-down and exploits additional time resources to obtain better estimates of the utility of the different candidate splits. Using sampling techniques, we approximate the cost of the subtree under each candidate split and favor the split with the minimal estimated cost. Because the learner is stochastic, it can escape the local minima that trap greedy methods. Our approach can be applied in two anytime setups: the contract setup, where the allocation of resources is known in advance, and the interruptible setup, where the algorithm may be queried for a solution at any moment.

In addition to constraints on learning time, many real-life scenarios impose limits on classification resources. To deal with such situations, we developed a set of learning algorithms that produce any-cost classifiers, i.e., classifiers that can make accurate decisions within a strict bound on testing costs. The classification budget can be known to the learner, known to the classifier but not to the learner, or not predetermined.

Experiments with a variety of datasets, and under different scenarios, were conducted to compare the performance of our proposed methods to that of state-of-the-art tree learners. The results show that for the majority of domains we produce significantly better trees.
In the cost-insensitive setup, where test costs are ignored, our method produced smaller and more accurate trees. When test costs are involved, our learned trees were significantly more efficient for classification. Our learning algorithms also exhibit good anytime behavior with diminishing returns.
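The notion of classifying within a strict bound on testing costs can be sketched as follows. This is a hypothetical node representation, not the thesis's data structure: the traversal pays each test's cost from the remaining budget and, once the next test would overrun it, falls back to the current node's majority label.

```python
class Node:
    """Decision-tree node (illustrative). attr=None marks a leaf."""
    def __init__(self, attr=None, cost=0.0, children=None, majority=None):
        self.attr = attr          # attribute tested at this node
        self.cost = cost          # cost of performing that test
        self.children = children or {}  # attribute value -> child node
        self.majority = majority  # fallback/leaf prediction

def classify_within_budget(node, example, budget):
    """Descend the tree while the remaining budget covers each test;
    when a test would exceed the budget, answer with the majority label."""
    while node.attr is not None and node.cost <= budget:
        budget -= node.cost
        node = node.children[example[node.attr]]
    return node.majority
```

With a generous budget the classifier reaches a leaf; with a tight one it answers early from the interior node, which is the sense in which the classifier can produce a decision at any cost level.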