|M.Sc Student||Rabinovitz Carmel|
|Subject||Learning Unsupervised Domain-Invariant Image Representations for Manipulation with Contrastive Domain Randomization|
|Department||Department of Electrical and Computer Engineering||Supervisor||Dr. Aviv Tamar|
This work explores methods for learning domain-invariant image representations for various robotic manipulation tasks in an unsupervised manner. Robotic manipulation with visual inputs requires image features that capture the physical properties of the scene, e.g., the position, orientation, and configuration of objects. Recently, it has been suggested to learn such features in an unsupervised manner from simulated, self-supervised robot interaction; the idea is that high-level physical properties are well captured by modern physical simulators, and their representation from visual inputs may transfer well to the real world. In particular, learning methods based on noise contrastive estimation have shown promising results for learning such features.
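As a rough illustration of the noise contrastive estimation objective mentioned above, the following is a minimal NumPy sketch of an InfoNCE-style loss. The function name `info_nce_loss`, the temperature value, and the toy embedding dimensions are assumptions for illustration, not the implementation used in the thesis:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: each row of `anchors` is pulled toward the
    matching row of `positives`; the other rows in the batch act as
    negatives. Inputs are (N, D) embedding batches."""
    # L2-normalize so the dot product is cosine similarity
    anchors = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    positives = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = anchors @ positives.T / temperature  # (N, N) similarity matrix
    # Row-wise log-softmax; diagonal entries correspond to positive pairs
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Sanity check: matched pairs should incur a lower loss than shuffled ones
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_matched = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
loss_shuffled = info_nce_loss(z, rng.permutation(z))
```

Minimizing this loss maximizes a lower bound on the mutual information between the two views of each pair, which is exactly why, as discussed below, the choice of what the two views share determines what the features encode.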
To robustify simulation-to-real transfer, domain randomization (DR) has been suggested for learning features that are invariant to irrelevant visual properties such as colors, textures, or lighting. In this work, however, we show that a naive application of DR to unsupervised learning based on noise contrastive estimation does not promote invariance, as the loss function maximizes mutual information between the features and both the relevant and the irrelevant visual properties. We propose a simple modification of the contrastive loss that fixes this, exploiting the fact that we can control the simulated randomization of visual properties in modern physical simulators. Our approach learns physical features that are significantly more robust to irrelevant visual domain variation, as we demonstrate with both rigid and non-rigid objects, in simulation and in simulation-to-real transfer settings.
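The core idea of the proposed modification can be sketched as follows: render each simulated state twice under independently sampled visual randomizations and treat the two renderings as a positive pair, so that only the shared physical state (and not the randomized nuisance) helps minimize the contrastive loss. The toy linear `render` function, `contrastive_dr_batch`, and `nce_loss` below are illustrative stand-ins, not the thesis's actual simulator or model:

```python
import numpy as np

rng = np.random.default_rng(1)

def render(state, nuisance):
    """Toy 'renderer': the observation mixes the physical state with a
    visual nuisance vector (a stand-in for randomized colors, textures,
    or lighting in a real simulator)."""
    return state + 0.5 * nuisance

def contrastive_dr_batch(states):
    """Render each simulated state twice under *independent* visual
    randomizations and return the two views as a positive-pair batch.
    Negatives are the other states in the batch, so encoding the
    nuisance cannot lower the contrastive loss."""
    n, d = states.shape
    view_a = np.stack([render(s, rng.normal(size=d)) for s in states])
    view_b = np.stack([render(s, rng.normal(size=d)) for s in states])
    return view_a, view_b

def nce_loss(a, b, temperature=0.1):
    """Standard batch contrastive loss over the paired views."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

states = rng.normal(size=(8, 16))
va, vb = contrastive_dr_batch(states)
loss = nce_loss(va, vb)
```

The key design choice is that the nuisance is resampled independently for the two views of the same state, which is only possible because the simulator exposes control over the visual randomization.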