M.Sc. Student: Endrawis Shadi
Subject: Efficient Self-Supervised Data Collection for Offline Robot Learning
Department: Department of Computer Science
Supervisor: Dr. Aviv Tamar

Full Thesis Text
A practical approach to robot reinforcement learning is to first collect a large batch of real or simulated robot interaction data, using some data collection policy, and then learn from this data to perform various tasks, using offline learning algorithms. Previous work focused on manually designing the data collection policy, and on tasks where suitable policies can be easily designed, such as random picking policies for collecting data about object grasping. For more complex tasks, however, it may be difficult to find a data collection policy that explores the environment effectively, and produces data that is diverse enough for the downstream tasks.
In this work, we propose that data collection policies should actively explore the environment to collect diverse data. In particular, we develop a simple yet effective goal-conditioned reinforcement-learning method that actively focuses data collection on novel observations, thereby collecting a diverse dataset. The method extends and improves upon popular intrinsic-motivation-based methods for diverse exploration. We evaluate our method on simulated robot manipulation tasks with visual inputs and show that it leads to more diverse and evenly distributed data, and, more importantly, that data collection which actively tries to reach novel states leads to significant improvements in the downstream learning tasks.
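To make the idea of focusing data collection on novel observations concrete, the following is a minimal sketch of novelty-driven goal selection. It uses a simple count-based novelty proxy over discretized observations; the class name, the binning scheme, and the inverse-sqrt-count bonus are illustrative assumptions, not the thesis method, which instead relies on learned intrinsic-motivation signals over visual inputs.

```python
import numpy as np

class NoveltyGoalSelector:
    """Pick goals for a goal-conditioned data-collection policy by
    preferring rarely visited states.

    Uses a count-based novelty proxy over discretized observations;
    this is a simplified stand-in for the learned novelty estimates
    used with high-dimensional (visual) observations.
    """

    def __init__(self, bin_size=0.5):
        self.bin_size = bin_size
        self.visit_counts = {}  # discretized observation -> visit count

    def _key(self, obs):
        # Discretize the observation so counts generalize locally.
        return tuple(np.floor(np.asarray(obs) / self.bin_size).astype(int))

    def record(self, obs):
        # Update visit statistics as the policy collects data.
        k = self._key(obs)
        self.visit_counts[k] = self.visit_counts.get(k, 0) + 1

    def novelty(self, obs):
        # Inverse-sqrt-count bonus, as in count-based exploration.
        return 1.0 / np.sqrt(1 + self.visit_counts.get(self._key(obs), 0))

    def select_goal(self, candidate_obs):
        # Choose the candidate with the highest novelty score as the
        # next goal, steering the policy toward under-visited regions.
        scores = [self.novelty(o) for o in candidate_obs]
        return candidate_obs[int(np.argmax(scores))]
```

In use, previously reached states (e.g. drawn from a replay buffer) serve as goal candidates; a state visited many times scores low, so the selector steers the goal-conditioned policy toward the frontier of the visited region.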