OCNC2023 Lecturers and Abstracts
Week 1
Tuesday, June 20
Tomoki Fukai
Title: Neural network modeling of cognitive functions
The brain's ability to learn and memorize is crucial for the cognitive behavior of animals. Though our understanding of the underlying mechanisms of learning is still limited, researchers have gained many insights into these mechanisms over the last few decades. In my lecture, I will explain the basic properties of several (both classic and recent) models of neural information processing. These models range from feedforward network models with error-correction learning and backpropagation to reservoir computing in recurrent neural networks. Then, I will show how these models can account for cognitive behaviors of animals such as pattern recognition, spatial navigation, and decision making. I want to emphasize the essential role of low-dimensional features of neural dynamics in learning.
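As a concrete illustration of the reservoir-computing idea, here is a minimal echo state network sketch in Python (NumPy only); the toy task, parameter values, and variable names are illustrative, not taken from the lecture:

```python
import numpy as np

# Minimal echo state network (reservoir computing) sketch.
# Only the linear readout W_out is trained; the recurrent reservoir is fixed.
rng = np.random.default_rng(0)
N, T = 200, 1000                      # reservoir size, number of time steps

# Fixed random recurrent weights, rescaled to spectral radius 0.9
W = rng.normal(0, 1, (N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, N)          # fixed input weights

u = np.sin(np.arange(T) * 0.05)       # toy input signal
y_target = np.roll(u, -10)            # task: predict the input 10 steps ahead

# Drive the reservoir and collect its states
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Train the readout by ridge regression (the only learned parameters)
lam = 1e-4
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y_target)
y_pred = X @ W_out
print("training MSE:", np.mean((y_pred - y_target) ** 2))
```

Only the readout weights are trained; the random recurrent network simply provides a rich, fixed dynamical expansion of the input stream.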
Related readings:
- Anthony Joseph Decostanzo, Chi Chung Fung and Tomoki Fukai (2019) Hippocampal neurogenesis reduces the dimensionality of sparsely coded representations to enhance memory encoding. Front Comput Neurosci 12: 1-21.
- Tomoki Kurikawa, Tatsuya Haga, Takashi Handa, Rie Harukuni and Tomoki Fukai (2018) Neuronal stability in medial frontal cortex sets individual variability in decision-making. Nat Neurosci, 21:1764-1773.
- Toshitake Asabuki, Naoki Hiratani and Tomoki Fukai (2018) Interactive reservoir computing for chunking information streams. PLoS Comput Biol, 14(10):e1006400.
- Tatsuya Haga and Tomoki Fukai (2018) Recurrent network model for learning goal-directed sequences through reverse replay. Elife 7: e34171.
- Mastrogiuseppe F, Ostojic S (2018) Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron 99: 609-623.
- Song HF, Yang GR, Wang XJ. Reward-based training of recurrent neural networks for cognitive and value-based tasks. Elife. 2017 Jan 13;6. pii: e21492.
- Sussillo D, Abbott LF (2009) Generating coherent patterns of activity from chaotic neural networks. Neuron. 63: 544-557.
Wednesday, June 21
Bernd Kuhn
Title: 1. Ion channel physiology and the Hodgkin-Huxley model of neuronal activity
In my lecture I will talk about electrical activity in neurons. I will start with the basics of ion channels, focusing specifically on voltage-gated channels and their dynamics in response to membrane voltage. Neurons use a combination of different voltage-gated channels to generate fast (about 1 ms), depolarizing action potentials. I will explain the first action potential model by Hodgkin and Huxley. Finally, I will discuss more recent additions to, and fine-tuning of, the time-honored Hodgkin-Huxley model.
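For reference, the Hodgkin-Huxley model takes the standard textbook form
\[
C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h\,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 (V - E_{\text{K}}) - g_L (V - E_L),
\]
\[
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\},
\]
where $m$ and $h$ gate the fast sodium conductance, $n$ gates the delayed-rectifier potassium conductance, and $\alpha_x$, $\beta_x$ are voltage-dependent rate functions fitted to experimental data.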
Title: 2. Functional optical imaging
Functional optical imaging has become one of the key techniques in neuroscience. In my second lecture I will introduce fluorescence and the most important imaging methods. I will explain what we can learn from them, but also discuss their limitations.
Suggested Readings:
- Johnston and Wu: Foundations of Cellular Neurophysiology, MIT Press
- Helmchen, Konnerth: Imaging in Neuroscience, 2011
- Yuste, Lanni, Konnerth: Imaging Neurons, 2000
Thursday, June 22
Kenji Doya
Title: Introduction to reinforcement learning and Bayesian inference
The aim of this tutorial is to present the theoretical core of modeling animal/human action and perception. In the first half of the tutorial, we will focus on "reinforcement learning", a theoretical framework for an adaptive agent to learn behaviors from exploratory actions and the resulting reward or punishment. Reinforcement learning has played an essential role in understanding the neural circuits and neurochemical systems behind adaptive action learning, most notably the basal ganglia and the dopamine system. In the second half, we will familiarize ourselves with the framework of Bayesian inference, which is critical for understanding the process of perception from noisy, incomplete observations.
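To make both halves concrete, here is a minimal Python sketch of the temporal-difference value update at the core of reinforcement learning, followed by a one-line Bayesian update; the toy task, learning rates, and probabilities are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Reinforcement learning: TD(0) value learning on a 5-state chain ---
# States 0..4; entering terminal state 4 yields reward 1.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)
for episode in range(500):
    s = 0
    while s < n_states - 1:
        s_next = s + rng.choice([0, 1])          # noisy rightward walk
        r = 1.0 if s_next == n_states - 1 else 0.0
        delta = r + gamma * V[s_next] - V[s]     # TD error (dopamine-like signal)
        V[s] += alpha * delta
        s = s_next
print("learned state values:", V.round(2))

# --- Bayesian inference: posterior over two hypotheses from a noisy cue ---
prior = np.array([0.5, 0.5])                     # P(world state)
likelihood = np.array([0.8, 0.3])                # P(observed cue | world state)
posterior = prior * likelihood
posterior /= posterior.sum()                     # Bayes' rule, normalized
print("posterior:", posterior.round(2))
```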
Suggested Readings:
- Doya K (2021). Canonical cortical circuits and the duality of Bayesian inference and optimal control. Current Opinion in Behavioral Sciences, 41, 160-167. https://doi.org/10.1016/j.cobeha.2021.07.003
- Pouget A, Beck JM, Ma WJ, Latham PE (2013). Probabilistic brains: knowns and unknowns. Nature Neuroscience, 16, 1170-8. https://doi.org/10.1038/nn.3495
- Doya K: Reinforcement learning: Computational theory and biological mechanisms. HFSP Journal, 1(1), 30-40 (2007). Free on-line access: http://dx.doi.org/10.2976/1.2732246
- Doya K, Ishii S: A probability primer. In Doya K, Ishii S, Pouget A, Rao RPN eds. Bayesian Brain: Probabilistic Approaches to Neural Coding, pp. 3-13. MIT Press (2007). Free on-line access: http://mitpress.mit.edu/catalo/item/default.asp?ttype=2&tid=11106
Thursday, June 22
Erik De Schutter
Title: Modeling biochemical reactions, diffusion and reaction-diffusion systems
In this talk I will use calcium dynamics modeling as a way to introduce deterministic solution methods for reaction-diffusion systems. The talk covers exponentially decaying calcium pools, diffusion, calcium buffers and buffered diffusion, and calcium pumps and exchangers. I will describe properties of buffered diffusion systems and ways to characterize them experimentally. Finally I will compare the different modeling approaches.
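The simplest of these mechanisms, the exponentially decaying calcium pool, can be written in the standard form
\[
\frac{d[\mathrm{Ca}^{2+}]}{dt} = -\frac{[\mathrm{Ca}^{2+}] - [\mathrm{Ca}^{2+}]_{\infty}}{\tau} - \frac{I_{\mathrm{Ca}}}{2 F v},
\]
where $[\mathrm{Ca}^{2+}]_\infty$ is the resting concentration, $\tau$ the decay time constant, $I_{\mathrm{Ca}}$ the calcium current (inward current is negative by convention, so the last term represents influx), $F$ Faraday's constant, and $v$ the volume of the pool.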
In the second talk I will turn to stochastic reaction-diffusion modeling. Two methods will be described: Gillespie's Stochastic Simulation Algorithm (SSA), extended to simulate diffusion, and particle-based methods. I will briefly describe the STEPS software and give some examples from our research.
I will finish by describing how the STEPS framework can be used to go beyond the compartmental model and simulate neurons in 3D.
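To give a flavor of the stochastic approach, here is a minimal Python sketch of Gillespie's direct method for a toy reversible calcium-buffer binding reaction (this is not the STEPS implementation; molecule counts and rate constants are invented):

```python
import numpy as np

# Gillespie's direct method for the toy system:  Ca + B <-> CaB
rng = np.random.default_rng(0)
k_on, k_off = 0.01, 0.5              # per-event rate constants (arbitrary units)
ca, b, cab = 100, 50, 0              # initial molecule counts
t, t_end = 0.0, 10.0

while t < t_end:
    a1 = k_on * ca * b               # propensity of binding
    a2 = k_off * cab                 # propensity of unbinding
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)   # time to the next reaction event
    if rng.uniform() * a0 < a1:      # choose which reaction fires
        ca, b, cab = ca - 1, b - 1, cab + 1
    else:
        ca, b, cab = ca + 1, b + 1, cab - 1

print(f"t={t:.2f}: Ca={ca}, B={b}, CaB={cab}")
```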
Suggested Readings:
- U.S. Bhalla and S. Wils: Reaction-diffusion modeling. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
- E. De Schutter: Modeling intracellular calcium dynamics. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
- F. Santamaria, S. Wils, E. De Schutter and G.J. Augustine: Anomalous diffusion in Purkinje cell dendrites caused by dendritic spines. Neuron 52: 635–648 (2006).
- A.R. Gallimore, et al.: Switching on depression and potentiation in the cerebellum. Cell Reports 22: 722-733 (2018).
Friday, June 23
Erik De Schutter
Title: Introduction to modeling neurons
I will discuss methods to model single neurons, going from very simple to morphologically detailed. I will briefly introduce cable theory, the mathematical description of current flow in dendrites. By discretizing the cable equation we arrive at compartmental modeling, the standard method to simulate morphologically detailed models of neurons. I will also give an overview of dendritic properties predicted by cable theory and the experimental data confirming these predictions. I will discuss the challenges in fitting compartmental models to experimental data, with an emphasis on active properties.
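For orientation, the cable equation and its discretized, compartmental form can be written (in standard notation) as
\[
\lambda^2 \frac{\partial^2 V}{\partial x^2} = \tau_m \frac{\partial V}{\partial t} + V, \qquad \lambda = \sqrt{r_m / r_a}, \quad \tau_m = r_m c_m,
\]
and, after discretizing the dendrite into compartments $j$ coupled by axial conductances $g_{j,k}$,
\[
C_j \frac{dV_j}{dt} = \sum_{k \in \mathcal{N}(j)} g_{j,k}\,(V_k - V_j) - I_{\mathrm{ion},j}(V_j) + I_{\mathrm{inj},j},
\]
where $\mathcal{N}(j)$ denotes the neighbors of compartment $j$.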
Suggested Readings:
- Several chapters in Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston (2009).
- Y. Zang, S. Dieudonné and E. De Schutter: Voltage- and Branch-specific Climbing Fiber Responses in Purkinje Cells. Cell Reports 24: 1536–1549 (2018).
Week 2
Monday, June 26
Gerald Pao
Title: Manifolds of brain activity dynamics and dimensionality estimation
Tuesday, June 27
Upinder Bhalla
Title: The Neuron as a Network: Dendritic and Synaptic Computation through Molecules and Ions
Neurons perform extremely complex computations, and it takes a large neural network to represent even a subset of their capacity in the limited domain of electrical signaling. There is increasing support for dendritic clusters and spines as key units of subcellular computation. In this lecture I'll draw upon several lines of work to show the occurrence of subcellular neuronal computation, and then dive into the likely mechanisms through a series of models. I will look at sequence computation, plasticity rules, and spine formation rules. Finally, I will link back to circuits and ask how neuronal connectivity can be set up to take advantage of very local dendritic computations (a toy model of one such dendritic nonlinearity is sketched after the readings).
Suggested Readings:
- Beniaguev, Segev and London Neuron, 2021
- Pulikkottil, Somashekar and Bhalla, Curr. Op. Neuro, 2021
- Bhalla, eLife, 2017
- Branco, Clark and Hausser, Science, 2010
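The toy model referred to above: a minimal two-layer sketch in Python in which each dendritic branch applies a sigmoidal nonlinearity to its summed local input before the soma sums the branch outputs. All parameters are invented for illustration and are not drawn from the papers listed:

```python
import numpy as np

# Toy two-layer "neuron as a network": branches are nonlinear subunits,
# the soma sums branch outputs linearly.
rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_branches, syn_per_branch, n_active = 10, 20, 20
w = rng.uniform(0.2, 0.5, (n_branches, syn_per_branch))  # synaptic weights

def somatic_drive(pattern):
    """pattern: boolean (branch, synapse) array of active synapses."""
    local = (w * pattern).sum(axis=1)            # summed input per branch
    branch_out = sigmoid(4.0 * (local - 2.0))    # local "dendritic spike" threshold
    return branch_out.sum()                      # linear somatic summation

clustered = np.zeros((n_branches, syn_per_branch), bool)
clustered[0, :n_active] = True                   # all active synapses on one branch

dispersed = np.zeros((n_branches, syn_per_branch), bool)
dispersed[:, :n_active // n_branches] = True     # same synapse count, spread out

print("clustered drive:", somatic_drive(clustered))   # crosses branch threshold
print("dispersed drive:", somatic_drive(dispersed))   # stays subthreshold
```

The point of the sketch: the same number of active synapses drives the cell strongly when clustered on one branch but only weakly when dispersed, one way in which very local dendritic computation can matter.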
I've been doing computational neuroscience with a dash of experiments since my PhD days, and the field continues to inspire and excite. I've worked on olfaction, simulators, memory, and hippocampus over the years. One of my long-term interests has been how single neurons perform such powerful computations. This question has become yet more acute when we see the amazing things that AI does, and then step back to figure out a) how the brain did it first, and b) how the brain does so much on about 20 watts. I'll be delighted to go wildly off topic and speculate with all of you about this intersection of AI, neuroscience, and much more.
Wednesday, June 28 (16:00-18:00)
Claudia Clopath
Title: TBD
Thursday, June 29
Kazumasa Tanaka
Title: Introduction to hippocampal memory and its representation
This three-hour lecture covers basic topics in hippocampal memory, including synaptic plasticity, neuronal activity mapping, the types of memory the hippocampus is involved in, systems consolidation, and hippocampal physiology.
Suggested readings:
- The Neurobiology of Learning and Memory by Jerry W. Rudy
- Neves G, Cooke SF, Bliss TV. Synaptic plasticity, memory and the hippocampus: a neural network approach to causality. Nat Rev Neurosci. 2008 Jan;9(1):65-75. doi: 10.1038/nrn2303. Erratum in: Nat Rev Neurosci. 2012 Dec;13(12):878. PMID: 18094707.
- Josselyn SA, Tonegawa S. Memory engrams: Recalling the past and imagining the future. Science. 2020 Jan 3;367(6473):eaaw4325. doi: 10.1126/science.aaw4325. PMID: 31896692; PMCID: PMC7577560.
- Chapter 11 in The Hippocampus Book by Per Andersen, Richard Morris, David Amaral, Tim Bliss & John O’Keefe.
Welcome to OIST! My lecture focuses on experimental neuroscience, with the aim of helping you better and more deeply apply what you will learn in the computational and theoretical lectures at OCNC. As it is only a three-hour lecture, I will have to stay within a superficial introduction to the hippocampal literature, covering physiological, molecular/cellular biological, and psychological approaches, but I look forward to exciting discussions among participants with diverse backgrounds.
Friday, June 30
Sam Reiter
Title: Why do we sleep?
We (and most, if not all, other animals) spend a significant fraction of our lives asleep. Alarmingly, it’s not clear why! In my lecture I will introduce a range of ideas about the function of sleep, including synaptic homeostasis, offline practice and the concept of 'savings', memory consolidation and replay, SWS/REM sleep stages, the scanning hypothesis, and the reduction of metabolic waste. The ubiquity of sleep across animals has made it a fruitful subject for comparative work. I will discuss how ecological niche appears to affect sleep, unihemispheric sleep, and evolutionary considerations.
Related readings:
- Joiner, W. J. Unraveling the Evolutionary Determinants of Sleep. Curr. Biol. 26, R1073–R1087 (2016).
- Findlay, G., Tononi, G. & Cirelli, C. The evolving view of replay and its functions in wake and sleep. Sleep Adv 1, zpab002 (2020).
- Blumberg, M. S., Lesku, J. A., Libourel, P.-A., Schmidt, M. H. & Rattenborg, N. C. What Is REM Sleep? Curr. Biol. 30, R38–R49 (2020).
Saturday, July 1
Yukiko Goda
Title: Features of synaptic strength regulation
Synapses are the key mediators of information transmission in the brain. The efficacy of synaptic transmission, i.e. synaptic strength, determines the extent of information received by the target neuron, and changes in synaptic strengths are thought to constitute the neural substrate for learning. Although synaptic strength changes associated with learning are, in principle, thought to be specific to active inputs, single synapses most often do not operate in isolation, and nearby synapses influence each other. Such local interactions, in turn, shape the dendritic integration of information by the target neuron. Given the ambiguities in the spatial spread of synaptic strength changes and the types of plasticity associated with them, the minimal operating unit of synaptic plasticity and the rules for its implementation remain enigmatic. The lecture will highlight and discuss experimental insights into features of synaptic strength regulation that are consequential for neural circuit properties.
Suggested readings:
- Chater TE, Goda Y. (2021) My Neighbour Hetero - deconstructing the mechanisms underlying heterosynaptic plasticity. Curr Op Neurobiol 67, 106-114. doi: 10.1016/j.conb.2020.10.007
- Larsen RS, Sjostrom PJ (2015) Synapse-type-specific plasticity in local circuits. Curr Op Neurobiol 35, 127-135. doi: 10.1016/j.conb.2015.08.001
- Chipman PH, Fung CCA, Fernandez A, Sawant A, Tedoldi A, Kawai A, Gautam SG, Kurosawa M, Abe M, Sakimura K, Fukai T, Goda Y. (2021) Astrocyte GluN2C NMDA receptors control basal synaptic strengths of hippocampal CA1 pyramidal neurons in the stratum radiatum. eLife 10, e70818. doi: 10.7554/eLife.70818
- Zador AM (2019) A critique of pure learning and what artificial neural networks can learn from animal brains. Nat Commun 10:3770. doi: 10.1038/s41467-019-11786-6
Week 3
Monday, July 3
Steve Prescott
Title: Dynamical analysis of neuronal excitability
Contrary to textbook explanations of neuronal excitability, action potentials (or spikes) are not generated when membrane voltage reaches a fixed threshold. Instead, spike generation represents the nonlinear interaction between diverse ion channels. Channels with different kinetics and voltage-sensitivities compete or cooperate with one another to shape the voltage trajectory. Phase-plane analysis offers a valuable way to study those interactions by visualizing how slow- and fast-changing variables evolve relative to one another. Significant insight is gleaned by considering how nullclines intersect, including how those intersections change as parameters (e.g. stimulus intensity) are varied. My lecture will introduce phase plane and bifurcation analysis using simple (reduced) neuron models, and will also address how to relate simple models to more complicated models and to experiments (e.g. using dynamic clamp). We will also discuss the basis for and consequences of different spike initiation dynamics.
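As a concrete starting point, here is a minimal phase-plane sketch in Python using the FitzHugh-Nagumo model, a standard two-variable reduced neuron model (parameter values are conventional textbook choices, not specific to the lecture):

```python
import numpy as np
import matplotlib.pyplot as plt

# FitzHugh-Nagumo model: fast voltage-like variable v, slow recovery variable w.
#   dv/dt = v - v**3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5

v = np.linspace(-2.5, 2.5, 400)
plt.plot(v, v - v**3 / 3 + I, label="v-nullcline (dv/dt = 0)")
plt.plot(v, (v + a) / b, label="w-nullcline (dw/dt = 0)")

# Integrate a trajectory (forward Euler) to see how it tracks the nullclines
vt, wt, dt = -1.0, -0.5, 0.01
traj = []
for _ in range(20000):
    vt += dt * (vt - vt**3 / 3 - wt + I)
    wt += dt * eps * (vt + a - b * wt)
    traj.append((vt, wt))
traj = np.array(traj)
plt.plot(traj[:, 0], traj[:, 1], lw=0.8, label="trajectory")
plt.xlabel("v (fast)"); plt.ylabel("w (slow)"); plt.legend(); plt.show()
```

How the nullcline intersection moves as I is varied determines whether the model rests, fires repetitively, or sits near a bifurcation, which is exactly the kind of analysis the lecture develops.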
Suggested Readings:
- Rinzel J, Ermentrout GB. Analysis of neural excitability and oscillations. In: Koch C, Segev I, eds. Methods in neuronal modeling: From ions to networks. The MIT Press; 1998: 251-291. https://www.researchgate.net/publication/237128320_Analysis_of_Neural_Excitability_and_Oscillations
- Prescott SA, De Koninck Y, Sejnowski TJ. Biophysical basis for three distinct dynamical mechanisms of action potential initiation. PLoS Comput. Biol. 2008; 4: e1000198.
- Rho Y-A & Prescott SA. Identification of molecular pathologies sufficient to cause neuropathic excitability in primary somatosensory afferents using dynamical systems theory. PLoS Comput Biol 2012; 8: e1002524.
I am a Professor at the University of Toronto and Senior Scientist at the Hospital for Sick Children. I did my MD/PhD at McGill University (2005) and postdoctoral training at the Salk Institute (2008) before setting up my own lab at the University of Pittsburgh. I moved to Toronto in 2012. Using experiments and simulations, my lab studies somatosensory coding, especially how pathological changes in coding lead to chronic pain. At a cellular level, I’m interested in how neurons generate action potentials (spikes) through the nonlinear interaction of their ion channels. We also study how excitability is homeostatically regulated and how it impacts neural coding (e.g. synchronization).
Tuesday, July 4
Mike Häusser
Title: Dendritic computation
One of the central questions in neuroscience is how particular tasks, or “computations”, are implemented by neural networks to generate behaviour, and how patterns of activity are stored during learning. Over the past century, the prevailing view has been that information processing and storage in neural networks result primarily from the properties of synapses and the connectivity of neurons within the network. As a consequence, the contribution of single neurons to computation in the brain has long been underestimated. I will describe experimental and theoretical work that challenges this view by showing that the active and passive properties of mammalian dendrites allow them to implement sophisticated computations (including rules for synaptic plasticity). This work has helped to lay the foundation for understanding how computations in single neurons can contribute to guiding behaviour.
Suggested Readings:
- Bicknell BA, Häusser M (2021). A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron Dec 15;109(24):4001-4017.e10.
- Fisek et al. (2023). Cortico-cortical feedback engages active dendrites in visual cortex. Nature doi: 10.1038/s41586-023-06007-6
- Häusser M and Mel B (2003). Dendrites: bug or feature? Current Opinion in Neurobiology 13(3):372-83.
- London M, Häusser M. (2005). Dendritic computation. Annual Review of Neuroscience 28:503-32.
I grew up in Canada and Germany and did my PhD work at Oxford University under the supervision of Julian Jack. I then was a postdoc in Heidelberg and Paris in the labs of Bert Sakmann and Philippe Ascher, respectively. My group uses a combination of two-photon imaging, two-photon optogenetics and Neuropixels recordings during behaviour in order to understand the cellular and circuit basis of neural computations in the mammalian brain, with a strong focus on dendritic computation.
In my spare time I like to read and play the cello.
Wednesday, July 5
Rosalyn Moran
Title: The Free Energy Principle: A Neurobiological Generative AI?
The era of Generative AI is certainly upon us. In this talk, I will present a theory of cortical function and beyond, known as the Free Energy Principle, which offers an alternative rationale and implementation for a Generative AI based on the brain. The Free Energy Principle has been proposed as an ‘all-purpose model’ of the brain and human behavior that crucially closes the loop, with action informing inference. As a formal and technical ‘first principles’ mathematical account of how brains work, it has garnered increasing attention from computer science to philosophy. The theory is based on the mathematical formulation of surprise minimization: a brain can minimize its Free Energy (a computable bound on surprise) and thereby drive not only perception and cognition but, crucially, also action. As a framework, the Free Energy Principle and its corollary ‘Active Inference’ thus represent a fundamental departure from current systems in Artificial Intelligence, as they call for the implementation of a top-down system, rather than the bottom-up systems (driven by masses of training data) that are currently the state of the art in AI research. In this talk I will demonstrate how we utilised the Free Energy Principle and Active Inference as an AI solution to simulated real-world problems.
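For readers who want the formal object: in its simplest form the variational free energy is
\[
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = -\ln p(o) + D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right] \;\ge\; -\ln p(o),
\]
where $o$ are observations, $s$ hidden states, and $q$ the agent's approximate posterior. Minimizing $F$ with respect to internal states (perception) tightens the bound on surprise $-\ln p(o)$; minimizing it through action changes $o$ itself, closing the loop described above.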
Suggested Reading:
- Cullen, M., Davey, B., Friston, K. J., & Moran, R. J. (2018). Active inference in OpenAI Gym: A paradigm for computational investigations into psychiatric illness. Biological psychiatry: cognitive neuroscience and neuroimaging, 3(9), 809-818.
Rosalyn Moran is a Professor of Computational Neuroscience at the IOPPN, King’s College London, and the Deputy Director of King's Institute for Artificial Intelligence. Her work spans engineering and cognitive and computational neuroscience. In her lab she uses the Free Energy Principle to develop new methods in artificial intelligence and in modelling the brain’s normative and pathological function. She has previously held faculty positions at Virginia Tech and the University of Bristol.
Thursday, July 6
Jinny Kim
Title: Introduction to neural connectivity mapping
Mapping neural connectivity is crucial for understanding brain function and the changes that occur during learning, aging, and disease. To achieve this, powerful technologies have been developed for labeling, visualizing, and analyzing the complex connections between neurons in the brain. This lecture will provide an introduction to the field of connectomics, including the underlying principles and techniques used to map neural circuits. We will begin by discussing the basic anatomy of the brain and the types of neurons found in it. We will then cover various techniques used to map neural circuits, including tracer, genetic, and computational methods.
Additionally, we will delve into the different types and rules of neural connections, such as excitatory and inhibitory connections, and how they contribute to neural function. We will also discuss how neural connectivity mapping is helping us to better understand complex brain functions and neurological disorders. By the end of this lecture, you will have a basic understanding of the field of neural connectivity mapping and its applications, as well as an appreciation for the complexity of neural circuitry in the brain.
My research interests focus on developing advanced imaging techniques to investigate the neural circuits underlying behavior in model organisms, including mice and the mouse lemur. One of my major contributions to the field of neuroscience is the development of synaptic connectivity mapping techniques (mGRASP and neuTube), which have been widely used to map neural circuits in a range of organisms, including mice, fruit flies, and zebrafish. These tools have helped researchers to identify previously unknown connections between neurons and to understand how neural circuits are altered in various diseases and conditions.
I am a neuroscientist who obtained my B.S. and M.S. degrees in Biology from Sungkyunkwan University, Korea, in 1995 and 1997, respectively. I then pursued my interest in neuroscience and completed my PhD at the Max Planck Institute for Medical Research, Germany, in 2001. Following my doctoral studies, I conducted postdoctoral research at the National Institutes of Health, USA (2002-2007). I also worked as a Research Specialist at the Janelia Research Campus, Howard Hughes Medical Institute, USA, from 2008 to 2010. In 2011, I was appointed by the Korea Institute of Science and Technology (KIST) in Seoul, Korea, where I currently serve as the Director of the Brain Science Institute.