Technion - Israel Institute of Technology
The Graduate School
Ph.D. Thesis

Ph.D. Student: Susman Lee
Subject: Properties of Task Representation in Recurrent Neural Networks
Department: Department of Applied Mathematics
Supervisors: Associate Prof. Omri Barak, Prof. Naama Brenner


Abstract

A fundamental feature of biological systems is their ability to interact with their environment, sensing external stimuli and producing appropriate actions. To support these interactions, organisms adapt and learn, forming and modifying internal representations of objects in their environment.

In the brain, these representations manifest as spatio-temporal patterns of electrical activity across populations of neurons and are believed to be sustained in the long term by synaptic connections.

Despite their central importance, a mechanistic understanding of neural representations is still lacking. How they form, what constrains them, how they determine the success of desired behavior, and how robustly they are stored over time are questions that have been, and continue to be, at the center of interest. In this work, we aim to elucidate mechanisms underlying fundamental properties of representations through computational studies of artificial recurrent neural networks.

The performance of trained networks is a topic of great importance, with many theoretical and practical consequences. We study random feedback neural networks and ask how the representation of external signals relates to the performance of networks trained to autonomously generate those signals. We characterize the geometrical and statistical attributes of the internal representations, which can be expressed analytically in simplified settings. We find a nontrivial interplay between internal and external properties of the task at hand, yielding "preferred" task parameters in these seemingly unstructured networks, at which the expressiveness of the network is predicted to be maximal and performance optimal.
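
To make the setup concrete, the following is a minimal sketch in the spirit of standard reservoir computing: a network with fixed random recurrent and feedback connectivity, whose linear readout is fit by ridge regression under teacher forcing and then used to drive the network autonomously. The architecture, parameter values, and target signal here are illustrative assumptions, not the exact model of the thesis.

    # Minimal sketch (illustrative, not the thesis's exact model): a random
    # recurrent network with random feedback weights, trained by ridge
    # regression to autonomously generate a periodic target signal.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, dt, washout = 300, 2000, 0.1, 200
    g = 1.5                                            # recurrent gain (assumption)
    W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent weights
    w_fb = rng.uniform(-1, 1, N)                       # random feedback weights

    t = np.arange(T) * dt
    f = np.sin(2 * np.pi * t / 10)                     # target: period-10 sine

    # Teacher forcing: drive the feedback loop with the target signal.
    x = 0.1 * rng.standard_normal(N)
    R = np.zeros((T, N))
    for k in range(T):
        r = np.tanh(x)
        R[k] = r
        x = x + dt * (-x + W @ r + w_fb * f[k])

    # Ridge-regression readout, fit after discarding the initial transient.
    lam = 1e-3
    Rw, fw = R[washout:], f[washout:]
    w_out = np.linalg.solve(Rw.T @ Rw + lam * np.eye(N), Rw.T @ fw)

    # Autonomous phase: feedback now carries the network's own output.
    z = np.zeros(T)
    for k in range(T):
        r = np.tanh(x)
        z[k] = r @ w_out
        x = x + dt * (-x + W @ r + w_fb * z[k])

    print("autonomous RMSE vs. target:", np.sqrt(np.mean((z - f) ** 2)))

In this kind of setup, sweeping task parameters (e.g., the target's period relative to the network timescale) while measuring the autonomous error is one way to probe for "preferred" task parameters.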

The presumed mechanism supporting the long-term robustness of acquired information has recently been challenged by experimental evidence of significant synaptic volatility.

In a second study, we explore the possibility that memories are stored in a global component of network connectivity while individual connections fluctuate.

We find that homeostatic stabilization of fluctuations affects distinct aspects of network connectivity differently. Specifically, memories stored as time-varying attractors of neural dynamics are resilient against erosion.
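
As a toy illustration of this idea (a hypothetical construction for exposition, not the thesis's protocol), one can plant a symmetric memory component (a real eigenvalue, i.e., a fixed-point-like mode) and an antisymmetric component (an imaginary eigenvalue pair, i.e., an oscillatory, time-varying mode) in the connectivity, let individual weights fluctuate, apply a homeostatic rule that conserves the magnitude of each neuron's incoming weights, and track how well each component survives.

    # Hypothetical toy experiment: synaptic volatility plus homeostatic
    # rescaling of each neuron's incoming weights, tracking the survival
    # of symmetric vs. antisymmetric planted memory components.
    import numpy as np

    rng = np.random.default_rng(1)
    N, steps, sigma = 200, 5000, 0.02          # sizes and noise scale (assumptions)

    # Two orthonormal patterns used to plant the memory components.
    xi = rng.standard_normal(N); xi /= np.linalg.norm(xi)
    eta = rng.standard_normal(N); eta -= (eta @ xi) * xi; eta /= np.linalg.norm(eta)

    sym_tpl = np.outer(xi, xi)                          # real eigenvalue (static mode)
    asym_tpl = np.outer(eta, xi) - np.outer(xi, eta)    # imaginary pair (oscillatory mode)
    W = sym_tpl + asym_tpl
    row_norms0 = np.linalg.norm(W, axis=1, keepdims=True)

    for k in range(steps):
        W += sigma * rng.standard_normal((N, N)) / np.sqrt(N)   # synaptic volatility
        # Homeostasis: multiplicatively rescale incoming weights per neuron.
        W *= row_norms0 / np.linalg.norm(W, axis=1, keepdims=True)

    overlap_sym = np.sum(W * sym_tpl) / np.sum(sym_tpl * sym_tpl)
    overlap_asym = np.sum(W * asym_tpl) / np.sum(asym_tpl * asym_tpl)
    print(f"symmetric overlap: {overlap_sym:.3f}, antisymmetric overlap: {overlap_asym:.3f}")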

Such dynamic attractors can be learned by biologically plausible learning rules and support associative retrieval. Our results suggest a link between the properties of learning rules and those of network-level memory representations, and point to experimentally measurable signatures.
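
As an example of the kind of mechanism meant here, the sketch below uses the classic asymmetric Hebbian rule to store a cyclic sequence of binary patterns as a time-varying (limit-cycle) attractor, then retrieves the sequence from a corrupted cue. The specific rule, binary units, and synchronous dynamics are textbook choices assumed for illustration, not necessarily those of the thesis.

    # Sketch: asymmetric Hebbian storage of a cyclic pattern sequence and
    # associative retrieval from a noisy cue (illustrative construction).
    import numpy as np

    rng = np.random.default_rng(2)
    N, P = 500, 5
    patterns = rng.choice([-1, 1], size=(P, N))      # random binary patterns

    # Each pattern points to its successor (cyclically): W ~ sum xi^{mu+1} (xi^mu)^T / N.
    W = sum(np.outer(patterns[(mu + 1) % P], patterns[mu]) for mu in range(P)) / N

    # Cue: pattern 0 with 20% of its units flipped.
    s = patterns[0].astype(float).copy()
    flip = rng.choice(N, size=N // 5, replace=False)
    s[flip] *= -1

    for t in range(12):                              # synchronous retrieval dynamics
        s = np.sign(W @ s)
        s[s == 0] = 1.0                              # break rare ties
        mu = (t + 1) % P                             # pattern expected at this step
        print(f"t={t+1}: overlap with pattern {mu} = {patterns[mu] @ s / N:+.2f}")

Starting from the corrupted cue, the overlaps quickly approach 1 and the network cycles through the stored sequence, illustrating associative retrieval of a dynamic attractor rather than a single fixed point.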