[Seminar] Linking Complex Behaviour to High-dimensional Neural Representations


Thursday, August 25, 2022 - 15:00 to 16:30


Seminar Room C209, Center Bldg.



Speaker: Prof. N. Alex Cayco Gajic
                LNC2, École Normale Supérieure

Title: Linking Complex Behaviour to High-dimensional Neural Representations


Recently, systems neuroscience has experienced a surge of interest in the neural control of complex behaviours. This shift has occurred in part due to technological advances in automated behavioural annotation, which enable precise quantification of movement, and in multi-region recording techniques, which have shown that motor, task, and reward information is widespread across brain regions. However, our current understanding of neural circuit computation is based on decades of reduced experimental paradigms that have aimed to limit behavioural variability. For example, the cerebellar cortex has a famously "crystalline" circuitry that has been argued to optimally implement associative learning in the context of conditioning experiments. However, it is unclear how these theories extend to more complex behaviours that implicate the cerebellum. In the first part of this talk, I will present our lab's latest efforts towards this end: quantifying the dimensionality of granule cell representations in freely behaving mice, extending classic theories of the cerebellar cortex towards reinforcement learning, and building hierarchical models of mouse locomotor coordination to disentangle different covariates of motor learning.
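As background for "quantifying the dimensionality" of a neural population, one standard measure in this literature is the participation ratio of the covariance eigenvalue spectrum. The sketch below is illustrative only (the talk does not specify which measure is used), and the variable names and synthetic data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(activity):
    """Participation ratio PR = (sum lam)^2 / sum(lam^2) of the covariance
    eigenvalues of an activity matrix (samples x neurons). PR is at most the
    number of neurons, and small when few patterns dominate the variance."""
    cov = np.cov(activity, rowvar=False)
    lam = np.linalg.eigvalsh(cov)
    lam = np.clip(lam, 0.0, None)            # drop tiny negative round-off
    return lam.sum() ** 2 / (lam ** 2).sum()

# Illustrative low-dimensional data: 50 neurons driven by only 3 latent
# variables, so the covariance has rank 3 and PR cannot exceed 3.
latents = rng.standard_normal((1000, 3))
weights = rng.standard_normal((3, 50))
low_d = latents @ weights

pr = participation_ratio(low_d)
print(f"participation ratio: {pr:.2f}")
```

Because the synthetic activity is an exact linear mixture of three latents, the covariance has only three nonzero eigenvalues and the measure is bounded between 1 and 3 here.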

Second, to better understand how neural representations give rise to behaviour, better tools are needed to identify behaviourally relevant structure in large-scale data. Common methods such as PCA identify zero-lag covariance patterns across neurons that evolve over the course of the experiment. However, this view may miss structure that is shared across trials or time, including task-relevant neural sequences and representations that evolve over learning. Towards this end, we have developed a new unsupervised dimensionality reduction method, sliceTCA, which decomposes the data tensor into components that capture different classes of shared variability (across neurons, time, or trials). In the second part of the talk, I will demonstrate how sliceTCA demixes these different sources of shared variability in three example large-scale datasets, including a multi-region dataset from the IBL. Finally, I will provide geometric intuition for how sliceTCA can capture latent representations embedded in both low- and high-dimensional subspaces, thereby recovering more behaviourally relevant structure in neural data than classic methods.
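To make the decomposition concrete, here is a minimal sketch of one of the three slice types in the model described above: a (neurons × time × trials) tensor is approximated as a sum of components, each pairing a trial-loading vector with a neuron × time slice. The alternating-least-squares fit, the function name, and the synthetic data are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_trial_slices(X, R=2, n_iter=20):
    """Fit X[n, t, k] ~= sum_r A[r, k] * S[r, n, t]: R components, each a
    trial-loading vector A[r] paired with a neuron-x-time slice S[r].
    Illustrative sketch only, fit by alternating least squares."""
    N, T, K = X.shape
    X_mat = X.reshape(N * T, K)              # unfold to (neuron*time) x trials
    S = rng.standard_normal((R, N * T))      # flattened neuron-x-time slices
    for _ in range(n_iter):
        # Given the slices, solve for the trial loadings; then vice versa.
        A, *_ = np.linalg.lstsq(S.T, X_mat, rcond=None)    # (R, K)
        S, *_ = np.linalg.lstsq(A.T, X_mat.T, rcond=None)  # (R, N*T)
    return A, S.reshape(R, N, T)

# Synthetic tensor built exactly from two ground-truth trial-slicing components.
N, T, K = 10, 20, 15
A_true = rng.standard_normal((2, K))
S_true = rng.standard_normal((2, N, T))
X = np.einsum('rk,rnt->ntk', A_true, S_true)

A, S = fit_trial_slices(X, R=2)
err = np.linalg.norm(X - np.einsum('rk,rnt->ntk', A, S)) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.2e}")
```

The full method also includes neuron-slicing and time-slicing component types fitted jointly, which is what lets it separate variability shared across neurons, time, or trials.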

Host: Neural Computation Unit
Contact: ncus@oist.jp
