|Ph.D. Student||Pechyony Dmitry|
|Subject||Theory and Practice of Transductive Learning|
|Department||Department of Computer Science||Supervisor||Professor Ran El-Yaniv|
|Full Thesis text|
In transductive learning we are given a training set of labeled examples and a test set of unlabeled examples. The goal is to guess an accurate labeling of the test points. In this thesis we present several results for learning within the transductive setting, focusing on both theoretical and practical aspects. The guiding line of the thesis is to connect theoretical results with practical and efficient algorithms. The results of the thesis include performance guarantees for existing algorithms; we also develop a new learning algorithm based on existing generic performance guarantees.
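To make the setting concrete, here is a minimal sketch of a transductive task: the learner receives the labeled training points together with the (unlabeled) test points themselves, and must output labels for exactly those test points. The 1-nearest-neighbour rule used here is only an illustrative placeholder predictor, not one of the algorithms developed in the thesis.

```python
def transduce_1nn(train_x, train_y, test_x):
    """Label each given test point with the label of its nearest training point.

    In the transductive setting the test points test_x are known in advance,
    and the output is a labeling of those specific points (no general rule
    for unseen data is required).
    """
    labels = []
    for x in test_x:
        # Find the index of the closest training point (1-D toy example).
        nearest = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
        labels.append(train_y[nearest])
    return labels


# Toy data: two clusters on the real line.
train_x = [0.0, 1.0, 4.0, 5.0]
train_y = [0, 0, 1, 1]
test_x = [0.5, 4.5]  # the test points are given, but their labels are not
print(transduce_1nn(train_x, train_y, test_x))  # -> [0, 1]
```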
We present two novel techniques for deriving explicit data-dependent error bounds for transductive algorithms. The first technique is based on the transductive notion of uniform and weak stability of a learning algorithm, and the second technique is based on transductive Rademacher complexity. These bounding techniques are applicable to many transductive algorithms, and we demonstrate their effectiveness by deriving bounds for several known ones.
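For concreteness, one common form of the transductive Rademacher complexity (for $m$ labeled and $u$ unlabeled points) can be sketched as follows; the exact definition and the probability parameter used in the thesis may differ in detail:

$$
\mathfrak{R}_{m+u}(\mathcal{H}) \;=\; \left(\frac{1}{m}+\frac{1}{u}\right)\,
\mathbb{E}_{\boldsymbol{\sigma}}\!\left[\,\sup_{h\in\mathcal{H}}\sum_{i=1}^{m+u}\sigma_i\, h(x_i)\right],
$$

where the $\sigma_i$ are i.i.d. random variables taking the values $+1$ and $-1$ each with probability $p$, and $0$ with probability $1-2p$. Unlike the inductive Rademacher complexity, the supremum ranges over the full (labeled and unlabeled) point set, reflecting that the test points are known in advance.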
We also consider a large volume learning principle for transduction that prioritizes transductive equivalence classes according to the "volume" they occupy in the hypothesis space. We approximate the volume computation using a geometric interpretation of the hypothesis space. The resulting transductive learning algorithm leads to a non-convex optimization problem that can be solved efficiently. A comparison of our algorithm with large-margin methods (SVM and TSVM) over a large number of datasets demonstrates an overwhelming advantage in several interesting domains but a clear disadvantage in others.