|M.Sc. Student||Matari Yakir|
|Subject||Improving Semantic Classification in Deep Learning|
|Department||Department of Electrical and Computer Engineering||Supervisor||Associate Prof. Yacov Crammer|
We consider the image classification problem using deep models. Most recent works evaluate classifiers only by the flat precision (FP) measure. We follow a recent line of work (DeViSE), which defines a new measure, Hierarchical Precision (HP), to quantify the semantic accuracy of a classification model with respect to an underlying class hierarchy. While DeViSE and subsequent works require extra side information (e.g., a textual corpus) to build the semantic model, we propose two approaches, which we call Hierarchical Regularization and Hierarchical Softmax, that use only the underlying hierarchy for semantic classification.

Hierarchical Regularization uses a regularized embedding scheme; it achieved good results on CIFAR-100 but failed to generalize to the ImageNet ILSVRC 2012 1K task, probably because of the coarse regularization used (hierarchical distance as the regularization penalty). Hierarchical Softmax uses a hierarchical softmax scheme inspired by the YOLO network, and achieves competitive FP and HP results on the ImageNet ILSVRC 2012 1K task against the main prior approaches (mainly DeViSE).

We also ran zero-shot experiments, which gave mixed results: when tested on classes in the close neighborhood (2-hop) of the training classes, our method is comparable to, and on some measures better than, DeViSE; but on a farther neighborhood (3-hop), its results degrade drastically. This phenomenon is explained by the nature of our method, which uses a sub-tree built above the training classes; when applied to classes far (in hierarchy-distance terms) from this sub-tree, the method's effectiveness is significantly reduced.
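To illustrate the Hierarchical Regularization idea described above, here is a minimal sketch of a penalty that ties class-embedding distances to hierarchical (tree) distances. The function name, the toy embeddings, and the quadratic form of the penalty are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def hierarchical_reg(E, tree_dist, lam=0.1):
    """Hypothetical regularizer: penalize class embeddings (rows of E)
    whose pairwise Euclidean distances deviate from the hierarchical
    distances (edge counts in the label tree) given in tree_dist."""
    n = E.shape[0]
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_emb = np.linalg.norm(E[i] - E[j])
            penalty += (d_emb - tree_dist[i][j]) ** 2
    return lam * penalty

# Toy example: two class embeddings whose Euclidean distance exactly
# matches their tree distance incur zero penalty.
E = np.array([[0.0, 0.0], [2.0, 0.0]])
tree_dist = [[0, 2], [2, 0]]
```

Such a penalty would be added to the usual classification loss; using the raw hierarchical distance as the target is exactly the kind of coarse regularization the abstract suggests may have limited generalization to ImageNet.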
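The Hierarchical Softmax scheme can likewise be sketched in miniature. In a hierarchical softmax (as popularized by YOLO9000), each internal tree node normalizes logits only over its own children, and a leaf's probability is the product of conditional probabilities along the root-to-leaf path. The tiny hierarchy and logit values below are invented for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical two-level hierarchy: root -> {animal, vehicle},
# animal -> {cat, dog}, vehicle -> {car, truck}.
children = {
    "root": ["animal", "vehicle"],
    "animal": ["cat", "dog"],
    "vehicle": ["car", "truck"],
}
parents = {c: p for p, cs in children.items() for c in cs}

def leaf_probability(leaf, logits):
    """P(leaf) = product of conditional softmaxes along the root->leaf path."""
    p = 1.0
    node = leaf
    while node in parents:
        siblings = children[parents[node]]
        cond = softmax(np.array([logits[s] for s in siblings]))
        p *= cond[siblings.index(node)]
        node = parents[node]
    return p

# One logit per non-root node (values are arbitrary for the example).
logits = {"animal": 2.0, "vehicle": 0.5, "cat": 1.0, "dog": -1.0, "car": 0.0, "truck": 0.0}
probs = {leaf: leaf_probability(leaf, logits) for leaf in ["cat", "dog", "car", "truck"]}
```

Because each node's children form a proper conditional distribution, the leaf probabilities sum to one, and an error at test time tends to land on a semantically nearby leaf (a sibling or cousin in the tree), which is what the HP measure rewards.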