|M.Sc Student||Segev Noam|
|Subject||Transfer Learning using Decision Forests|
|Department||Department of Computer Science||Supervisor||Professor Ran El-Yaniv|
|Full Thesis text|
The goal of transfer learning is to create high-performance predictive models for a target task by augmenting its sparsely labeled training examples with the training sets, or previously built models, of related learning tasks. Transfer learning can be motivated by a common scenario in which we obtain a large annotated training set for the problem at hand (the "source") and use it to build a classifier, only to learn that the examples came from a related but different problem. Only a small training set is available for the actual problem variant (the "target"). While the two problem variants are related, a single model may not work well for both, and learning on the source alone may not suffice.
In this work we propose three inductive transfer algorithms based on random forests. Two of our algorithms refine a classifier learned on the source set using the available target set, while the third uses both sets directly during tree induction. We also combine the proposed algorithms in ensembles, building a committee of experts, and use them to detect fraud in online banking transactions. The proposed methods achieve strong experimental results over a range of problems, matching and sometimes outperforming well-known strong models.
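To make the third strategy concrete, the sketch below shows a simple pooled-induction baseline: source and target examples are combined into one training set, with the small target set up-weighted so that it still influences the splits chosen when the trees are grown. This is an illustrative assumption of how "using both sets directly during tree induction" might look with a standard random forest, not the thesis algorithms themselves; the datasets, sizes, and weight value are hypothetical.

```python
# Illustrative sketch (not the thesis algorithms): pooled tree induction
# over source + target data, with target examples up-weighted.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: a large labeled "source" set and a small "target"
# set drawn from a related, shifted distribution.
X_src, y_src = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tgt, y_tgt = make_classification(n_samples=60, n_features=10, shift=0.5,
                                   random_state=1)

# Pool both sets; weight each target example 10x (an arbitrary choice)
# so the sparse target data still shapes the induced trees.
X = np.vstack([X_src, X_tgt])
y = np.concatenate([y_src, y_tgt])
w = np.concatenate([np.ones(len(y_src)), 10.0 * np.ones(len(y_tgt))])

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y, sample_weight=w)

# The fitted forest can now be evaluated on held-out target examples.
preds = forest.predict(X_tgt)
```

The per-example `sample_weight` argument is what lets a single induction pass treat the two sets asymmetrically; the thesis's refinement-based algorithms would instead start from a forest trained on the source alone and adapt it with the target set.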