|Ph.D. Student||Xu Tie|
|Subject||Integrating Temporal and Spatial Information in Single Neuron and Network Dynamics|
|Department||Department of Medicine||Supervisor||Associate Prof. Omri Barak|
|Full Thesis Text|
Understanding the remarkable adaptation and learning capacity of neural systems requires insight at both the cellular and the network level. My research comprises two parts that address these levels separately. In the first part, we develop a simple model that captures single-neuron excitability dynamics over extended timescales. Experiments show that single-neuron excitability exhibits scale-free fluctuations in response to periodic inputs, yet responds more reliably to varying stimuli, which seems paradoxical. We propose a model in which excitability is reduced by action potentials and recovers on a timescale that is itself dynamic. The dynamics of this timescale, and its interaction with excitability, reconcile these seemingly paradoxical observations. Dynamically, this is manifested by a marginally stable regime, in which the neural dynamics are dominated by slow intrinsic fluctuations while retaining high susceptibility to variations in the external stimulus. Besides accurately predicting experimental data, the model's compact form allows further exploration at the network level.
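To make the mechanism concrete, the following is a minimal illustrative sketch, not the thesis model: an excitability variable that is knocked down by each spike and relaxes back with a recovery timescale that is itself slowed by spiking. All parameter values and update rules here are hypothetical placeholders chosen only to show the coupled excitability–timescale structure.

```python
import numpy as np

def simulate_excitability(T=5000, seed=0):
    """Toy sketch (hypothetical parameters): spikes reduce excitability E,
    and also slow the timescale tau on which E recovers."""
    rng = np.random.default_rng(seed)
    E, tau = 1.0, 50.0            # excitability and its recovery timescale
    E_rest, tau_rest = 1.0, 50.0  # resting values (illustrative)
    E_trace = np.empty(T)
    for t in range(T):
        spike = rng.random() < 0.1 * max(E, 0.0)  # spike prob. grows with E
        if spike:
            E -= 0.2        # each action potential reduces excitability
            tau *= 1.05     # ...and makes its recovery slower
        E += (E_rest - E) / tau           # relaxation toward rest
        tau += (tau_rest - tau) / 500.0   # the timescale slowly recovers too
        E_trace[t] = E
    return E_trace

trace = simulate_excitability()
```

Because the recovery timescale stretches under sustained activity, the excitability trace develops slow, history-dependent drifts, the kind of intrinsic fluctuation the marginally stable regime refers to, while a change in drive still moves the spiking probability immediately.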
In the second part, we explore the neural representations underlying a navigation task with different environmental uncertainties. We study a reservoir of recurrent networks with different dynamics that serve as different internal models of the environment; these are obtained by a pre-training protocol inspired by the evolutionary process. Although the networks perform similarly in a basic setting, their neural representations and learned strategies differ, which leads to very different outcomes when they are challenged with various environmental uncertainties. By identifying two principal categories of spatial regularity, metric and topological, we show that the optimal representation for each is distinct, with an apparent tradeoff between them that reflects the underlying dynamics. We nonetheless identify a class of networks that performs well on both fronts. Its dynamics are characterized by long transients and quasi-fixed points that serve as a memory of past stimuli. The different dynamics are supported by a low-rank structure that develops during pre-training. Our work highlights the richness of solutions for a single cognitive task and the importance of dynamics as priors for different tasks. We link connectivity, dynamics, representation, and behavior within a single framework, offering a starting point for future research.
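The low-rank structure mentioned above can be sketched with a standard rate network whose connectivity is a random bulk plus a rank-one term. This is an illustrative construction under common modeling conventions, not the thesis architecture; the vectors `u`, `v` and all sizes are assumptions made for the example.

```python
import numpy as np

def low_rank_rnn(x0, inputs, u, v, W_rand, dt=0.1, tau=1.0):
    """Rate network with connectivity W = random bulk + u v^T (rank one).
    Hypothetical sketch of the kind of low-rank structure pre-training
    might add to a recurrent network."""
    W = W_rand + np.outer(u, v)   # rank-one perturbation of the random bulk
    x = x0.copy()
    states = []
    for inp in inputs:
        # Standard continuous-time rate dynamics, Euler-discretized
        x = x + dt / tau * (-x + np.tanh(W @ x + inp))
        states.append(x.copy())
    return np.array(states)

N, T = 100, 200
rng = np.random.default_rng(1)
W_rand = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random bulk connectivity
u = rng.normal(size=N)
v = rng.normal(size=N) / N                          # scaled rank-one direction
states = low_rank_rnn(np.zeros(N), rng.normal(0.0, 0.1, (T, N)),
                      u, v, W_rand)
```

In such models the rank-one term carves out a preferred direction in state space along which activity can linger, one simple way slow transients and quasi-fixed points can arise and act as a memory of past inputs.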