M.Sc Student: Zeno Chen
Subject: Task Agnostic Continual Learning using Online
Department: Department of Electrical Engineering
Supervisor: Dr. Daniel Soudry
Deep neural networks have been successfully used to solve a variety of tasks such as image classification, speech recognition, natural language processing, and more. These networks are typically trained on a single task under the assumption that the training samples are drawn i.i.d. from a fixed distribution. However, this assumption is a major limitation when a system needs to continuously adapt to a changing environment.
In the continual learning setting, a learning algorithm faces sequentially arriving tasks, with no access to samples from previous or future tasks. Continual learning is a major challenge for existing artificial intelligence algorithms based on artificial neural networks. During the learning process, crucial information about previous tasks is lost, which leads to catastrophic forgetting. Catastrophic forgetting occurs when the parameters of the neural network are altered to minimize the cost function of the current task without taking into account the cost functions of previous tasks.
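A minimal illustration of this failure mode (not the method proposed in this work): a logistic-regression "network" is trained with plain gradient descent on a first task, then on a second task whose labels conflict with the first. The data, learning rate, and epoch counts below are arbitrary choices for the sketch; the point is only that optimizing the second task's cost alone destroys performance on the first.

```python
import numpy as np

def train(w, b, X, y, lr=0.5, epochs=200):
    """Gradient descent on the logistic loss of the *current* task only."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X * w + b)))  # sigmoid predictions
        g = p - y                               # dLoss/dlogit
        w -= lr * np.mean(g * X)
        b -= lr * np.mean(g)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X * w + b) > 0) == (y == 1))

X = np.array([-2.0, -1.0, 1.0, 2.0])
y_task_a = (X > 0).astype(float)   # task A: positive inputs are class 1
y_task_b = 1.0 - y_task_a          # task B: the labels are reversed

w, b = 0.0, 0.0
w, b = train(w, b, X, y_task_a)
acc_a_before = accuracy(w, b, X, y_task_a)   # perfect on task A

w, b = train(w, b, X, y_task_b)              # no access to task A samples
acc_a_after = accuracy(w, b, X, y_task_a)    # task A is forgotten

print(acc_a_before, acc_a_after)
```

Because the update rule never sees task A's cost while fitting task B, the weights drift to whatever minimizes task B alone; here the tasks conflict directly, so accuracy on task A collapses from 1.0 to 0.0.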
Catastrophic forgetting has been extensively investigated over the past decades, and various methods to overcome this problem have been suggested. A large body of continual learning research assumes that task boundaries are known during training. However, research on scenarios in which task boundaries are unknown during training has been lacking. In this work we present, for the first time, a method for preventing catastrophic forgetting in scenarios where task boundaries are unknown during training, i.e., task-agnostic continual learning.