Technion - Israel Institute of Technology, Graduate School
Ph.D. Thesis
Ph.D. Student: Gal-on Maayan
Subject: On Detection and Adaptation to Changes in a Learning Problem
Department: Department of Electrical Engineering
Supervisor: Professor Shie Mannor
Full Thesis Text: English Version


Abstract


In Transfer Learning (TL), a predictor is trained using additional information from a different, yet related, problem. The main obstacle in exploiting this information is that the differences between the distributions governing the two problems are not known in advance. Most studies of TL attempt to adapt either the data or the predictor from the related problem, without explicitly exploring the differences between the problems. Understanding and quantifying these differences, as well as the similarities, can aid in attending to them. This research addressed both aspects of the TL problem: detecting the differences between the two problems, and adapting between them.


In our study of adaptation solutions, we approached an ambitious setting in which, in addition to changes in the labeling function, there may also be changes in the feature representation of the problems. To bridge the gap between the problems, we suggest an algorithm that finds a mapping between the distributions of the problems. This mapping serves as an initial adaptation phase, after which the predictor for the problem of interest can be trained on the unified sample set of the two problems.
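The abstract does not specify the mapping; as a minimal stand-in for such an initial adaptation phase, one can align the first two moments of the source sample with those of the target (a CORAL-style linear transform — the function name and the choice of statistics below are illustrative assumptions, not the algorithm proposed in the thesis):

```python
import numpy as np

def moment_matching_map(X_src, X_tgt):
    """Illustrative stand-in for an initial adaptation phase: a linear map
    that aligns the source sample's mean and covariance with the target's.
    (Not the thesis algorithm; assumes both problems share dimensionality.)"""
    mu_s, mu_t = X_src.mean(axis=0), X_tgt.mean(axis=0)
    d = X_src.shape[1]
    # Small ridge term keeps the Cholesky factorizations well-defined.
    L_s = np.linalg.cholesky(np.cov(X_src.T) + 1e-6 * np.eye(d))
    L_t = np.linalg.cholesky(np.cov(X_tgt.T) + 1e-6 * np.eye(d))
    A = L_t @ np.linalg.inv(L_s)
    return lambda X: (X - mu_s) @ A.T + mu_t
```

After applying such a map to the source sample, the two samples can be pooled and a single predictor trained on the unified set.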


Our study of the differences between learning problems included the formulation of a new score that measures the degree of similarity between distributions. This score is particularly suited to testing similarity, rather than the equality tests more commonly studied in statistics. The proposed score formalizes an intuitive notion of permitted variations between the distributions and demonstrates attractive statistical and computational properties.
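The score itself is defined in the thesis; as a toy illustration of the idea of "permitted variations", consider two discrete distributions where per-outcome deviations up to a tolerance are allowed and only the excess mass is counted (the function and tolerance scheme are illustrative, not the proposed score):

```python
import numpy as np

def excess_discrepancy(p, q, eps=0.05):
    """Toy similarity score over two discrete distributions: deviations of
    up to eps per outcome are 'permitted'; only excess mass is counted.
    A score of 0 means 'similar enough', unlike a strict equality test."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.maximum(np.abs(p - q) - eps, 0.0).sum())
```

Under such a score, a small perturbation of a distribution is judged similar (score 0), while a substantial shift accumulates a positive score.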


Additionally, we examined the change in a learning problem in a dynamic setting. In this setting, reliable detection of changes is imperative to maintain the high performance of a predictor on a data stream. To this end, we propose a procedure for online detection of abrupt and gradual changes in a data stream. We focus on changes that affect the prediction problem, as these are the only ones of substance in our setting. The method is based on estimating the risk and its variability by obtaining statistics of the loss distribution of a learning algorithm. Our analyses of the procedure showed that, if the underlying learning algorithm is stable, drifts in the learning problem will be detected with high probability.
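A minimal sketch of this idea (the class name, running-statistics updates, and threshold rule are illustrative assumptions, not the thesis procedure): monitor the per-example losses of the deployed predictor, and flag a drift when the mean loss over a recent window rises above the reference mean by a margin scaled by the estimated loss variability:

```python
import math
from collections import deque

class LossDriftDetector:
    """Minimal sketch of risk-based drift detection (illustrative, not the
    thesis procedure): keep running statistics of the loss stream and flag
    a drift when a recent window's mean loss exceeds the reference mean by
    k standard errors of the window mean."""

    def __init__(self, window=50, k=4.0):
        self.window = window
        self.k = k
        self.recent = deque(maxlen=window)  # most recent losses
        self.n = 0
        self.mean = 0.0   # running mean of all observed losses
        self.m2 = 0.0     # running sum of squared deviations (Welford)

    def update(self, loss):
        """Feed one per-example loss; return True if a drift is flagged."""
        self.recent.append(loss)
        self.n += 1
        delta = loss - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (loss - self.mean)
        if self.n < 2 * self.window:      # wait for a stable reference
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        window_mean = sum(self.recent) / len(self.recent)
        return window_mean > self.mean + self.k * std / math.sqrt(self.window)
```

Stability of the underlying learner matters here: if retraining on similar data can swing the losses wildly, the variability estimate inflates and genuine drifts become harder to separate from noise.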


Finally, in this study we addressed how detecting the discrepancy between the problems can facilitate transfer learning. We formulate an algorithm that detects the areas of discrepancy and, to aid transfer between the problems, actively samples from these areas. This is desirable, as these are the areas where we need to learn the new problem or, alternatively, apply adaptation. We provide learning bounds showing that this detection, together with the examples acquired from areas where the problems differ, can aid the learning process.
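A toy version of this querying step (the disagreement heuristic and all names are illustrative, not the thesis algorithm): rank an unlabeled pool by how much a source-trained predictor and a rough target predictor disagree, then spend the labeling budget on the most disputed points:

```python
import numpy as np

def select_discrepancy_queries(pool_X, source_pred, target_pred, budget):
    """Toy active-sampling step (illustrative heuristic, not the thesis
    algorithm): spend the labeling budget on the unlabeled pool points
    where a source-trained predictor and a rough target predictor
    disagree most."""
    disagreement = np.abs(source_pred(pool_X) - target_pred(pool_X))
    # Indices of the 'budget' points with the largest disagreement.
    return np.argsort(-disagreement, kind="stable")[:budget]
```

Labels acquired at these points are exactly those that resolve where the two problems differ, which is what the learning bounds in this part of the thesis quantify.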