Lecturers and Abstracts

Week 1

Bernd Kuhn

Title: 1. Ion channel physiology and the Hodgkin-Huxley model of neuronal activity

In my lecture I will talk about electrical activity in neurons. I will start with the basics of ion channels, focusing specifically on voltage-gated channels and their dynamics in response to membrane voltage. Neurons use a combination of different voltage-gated channels to generate fast (about 1 ms), depolarizing action potentials. I will explain the first action potential model, by Hodgkin and Huxley. Finally, I will discuss more recent additions to, and fine-tuning of, the time-honored Hodgkin-Huxley model.
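
To make the model concrete, below is a minimal Python sketch of the classic Hodgkin-Huxley equations with the standard squid-axon parameters and a simple forward-Euler integrator; the pulse amplitude, step size, and initial gating values are illustrative choices, not values from the lecture.

    import numpy as np

    # Standard squid-axon parameters (Hodgkin & Huxley 1952); units: mV, ms,
    # uF/cm^2, mS/cm^2, uA/cm^2. Forward Euler is used purely for brevity.
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 50.0, -77.0, -54.387

    # Voltage-dependent opening/closing rates of the gating variables m, h, n
    a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

    dt, T = 0.01, 50.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32              # approximate resting state
    trace = []
    for step in range(int(T / dt)):
        I_ext = 10.0 if 5.0 <= step * dt <= 45.0 else 0.0   # current pulse
        I_ion = (g_Na * m**3 * h * (V - E_Na)        # Na+: m^3 activation, h inactivation
                 + g_K * n**4 * (V - E_K)            # K+: n^4 activation
                 + g_L * (V - E_L))                  # passive leak
        V += dt * (I_ext - I_ion) / C_m              # membrane equation
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)  # gating kinetics
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        trace.append(V)

    print(max(trace))   # peaks near +40 mV: a ~1 ms depolarizing spike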

Title: 2. Functional optical imaging

Functional optical imaging has become one of the key techniques in neuroscience. In my second lecture I will introduce fluorescence and the most important imaging methods. I will explain what we can learn from them, but also discuss their limitations.

Suggested Readings:

  • Johnston and Wu: Foundations of Cellular Neurophysiology, MIT Press
  • Helmchen, Konnerth: Imaging in Neuroscience, 2011
  • Yuste, Lanni, Konnerth: Imaging Neurons, 2000

Erik De Schutter

Title: Introduction to modeling neurons

I will discuss methods to model single neurons, going from very simple to morphologically detailed. I will briefly introduce cable theory, the mathematical description of current flow in dendrites. By discretizing the cable equation we arrive at compartmental modeling, the standard method to simulate morphologically detailed models of neurons. I will also give an overview of dendritic properties predicted by cable theory and the experimental data confirming these predictions. I will discuss the challenges in fitting compartmental models to experimental data, with an emphasis on active properties.
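
As a hedged illustration of the discretization step, here is a minimal passive compartmental model in Python: the cable is split into coupled compartments, and the axial current becomes a discrete Laplacian. All parameter values are invented for the example, not taken from any fitted morphology.

    import numpy as np

    # Passive cable split into N compartments (illustrative units): each
    # compartment has capacitance C_m and leak conductance g_m, and couples
    # to its neighbours with axial conductance g_ax.
    N, dt, T = 50, 0.01, 200.0
    C_m, g_m, g_ax, E_L = 1.0, 0.1, 5.0, -65.0
    V = np.full(N, E_L)

    for step in range(int(T / dt)):
        # Discretized cable equation: axial current is a discrete Laplacian
        I_ax = np.zeros(N)
        I_ax[1:-1] = g_ax * (V[:-2] - 2 * V[1:-1] + V[2:])
        I_ax[0] = g_ax * (V[1] - V[0])        # sealed ends
        I_ax[-1] = g_ax * (V[-2] - V[-1])
        I_inj = np.zeros(N)
        I_inj[0] = 1.0                        # steady current into compartment 0
        V += dt * (I_ax - g_m * (V - E_L) + I_inj) / C_m

    # The steady-state depolarization decays roughly exponentially with
    # distance, with a length constant (in compartments) of ~sqrt(g_ax / g_m),
    # as cable theory predicts.
    print(V[:10] - E_L)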

Suggested Readings:

  • Several chapters in Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston (2009).
  • Y. Zang, S. Dieudonné and E. De Schutter: Voltage- and Branch-specific Climbing Fiber Responses in Purkinje Cells. Cell Reports 24: 1536–1549 (2018).

Tomoki Fukai

Title: Neural network modeling of cognitive functions

The brain's ability to learn and memorize is crucial for the cognitive behavior of animals. Though our understanding of the underlying mechanisms of learning is limited, researchers have gained many insights into these mechanisms in the last few decades. In my lecture, I will explain the basic properties of several (both classic and recent) models of neural information processing. These models range from feedforward network models with error-correction learning and backpropagation to reservoir computing in recurrent neural networks. Then, I will show how these models can account for cognitive behaviors of animals such as pattern recognition, spatial navigation and decision making. I want to emphasize the essential role of low-dimensional features of neural dynamics in learning.
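
As a taste of the reservoir idea, the following is a minimal echo-state-network sketch: a fixed random recurrent network transforms the input, and only a linear readout is trained. The network size, the sine-wave task, and the ridge parameter are arbitrary illustrative choices, not from the lecture.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 200, 1000                         # reservoir size, time steps
    # Random recurrent weights, rescaled to spectral radius ~0.9 (echo-state regime)
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.normal(0, 0.5, N)

    u = np.sin(np.arange(T) * 0.1)                   # input signal
    y_target = np.sin(np.arange(T) * 0.1 + 1.0)      # shifted copy to be learned

    # Run the reservoir; the recurrent weights W are never trained
    x = np.zeros(N)
    X = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(W @ x + W_in * u[t])
        X[t] = x

    # Ridge-regression readout: the only learned parameters
    lam = 1e-3
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y_target)
    y_pred = X @ W_out
    print("readout MSE:", np.mean((y_pred - y_target) ** 2))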

Related readings:

  • Anthony Joseph Decostanzo, Chi Chung Fung and Tomoki Fukai (2019) Hippocampal neurogenesis reduces the dimensionality of sparsely coded representations to enhance memory encoding. Front Comput Neurosci 12: 1-21.
  • Tomoki Kurikawa, Tatsuya Haga, Takashi Handa, Rie Harukuni and Tomoki Fukai (2018) Neuronal stability in medial frontal cortex sets individual variability in decision-making. Nat Neurosci, 21:1764-1773.
  • Toshitake Asabuki, Naoki Hiratani and Tomoki Fukai (2018) Interactive reservoir computing for chunking information streams. PLoS Comput Biol, 14(10):e1006400.
  • Tatsuya Haga and Tomoki Fukai (2018) Recurrent network model for learning goal-directed sequences through reverse replay. Elife 7: e34171.
  • Mastrogiuseppe F, Ostojic S (2018) Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron 99: 609-623.
  • Song HF, Yang GR, Wang XJ. Reward-based training of recurrent neural networks for cognitive and value-based tasks. Elife. 2017 Jan 13;6. pii: e21492.
  • Sussillo D, Abbott LF (2009) Generating coherent patterns of activity from chaotic neural networks. Neuron. 63: 544-557.

Erik De Schutter

Title: Modeling biochemical reactions, diffusion and reaction-diffusion systems

In this talk I will use calcium dynamics modeling as a way to introduce deterministic solution methods for reaction-diffusion systems. The talk covers exponentially decaying calcium pools, diffusion, calcium buffers and buffered diffusion, and calcium pumps and exchangers. I will describe properties of buffered diffusion systems and ways to characterize them experimentally. Finally, I will compare the different modeling approaches.

In the second talk I will turn to stochastic reaction-diffusion modeling. Two methods will be described: Gillespie's stochastic simulation algorithm (SSA) extended to simulate diffusion, and particle-based methods. I will briefly describe the STEPS software and give some examples from our research.

I will finish by describing how the STEPS framework can be used to go beyond the compartmental model and simulate neurons in 3D.
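
To illustrate the Gillespie algorithm itself (independently of STEPS), here is a minimal sketch for a single reversible calcium-buffer reaction, Ca + B <-> CaB; molecule counts and rate constants are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    # State: molecule counts of free calcium, free buffer, and bound complex
    ca, b, cab = 100, 200, 0
    k_on, k_off = 0.001, 0.5          # illustrative per-event rates
    t, t_end = 0.0, 10.0
    while t < t_end:
        a1 = k_on * ca * b            # propensity of binding
        a2 = k_off * cab              # propensity of unbinding
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)    # waiting time to the next event
        if rng.random() < a1 / a0:        # choose which reaction fires
            ca, b, cab = ca - 1, b - 1, cab + 1
        else:
            ca, b, cab = ca + 1, b + 1, cab - 1
    print(t, ca, b, cab)

Extending the SSA to diffusion amounts to adding "jump to the neighbouring voxel" events with their own propensities, which is the approach the lecture describes.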

Suggested Readings:

  • U.S. Bhalla and S. Wils: Reaction-diffusion modeling. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
  • E. De Schutter: Modeling intracellular calcium dynamics. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
  • F. Santamaria, S. Wils, E. De Schutter and G.J. Augustine: Anomalous diffusion in Purkinje cell dendrites caused by dendritic spines. Neuron 52: 635–648 (2006).   
  • A.R. Gallimore, et al.: Switching on depression and potentiation in the cerebellum. Cell Reports 22: 722-733 (2018).

Kenji Doya

Title: Introduction to reinforcement learning and Bayesian inference

The aim of this tutorial is to present the theoretical cores for modeling animal/human action and perception. In the first half of the tutorial, we will focus on "reinforcement learning", a theoretical framework for an adaptive agent to learn behaviors from exploratory actions and the resulting reward or punishment. Reinforcement learning has played an essential role in understanding the neural circuits and neurochemical systems behind adaptive action learning, most notably the basal ganglia and the dopamine system. In the second half, we will familiarize ourselves with the framework of Bayesian inference, which is critical in understanding the process of perception from noisy, incomplete observations.
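
As a concrete, hedged preview of both halves, the sketch below implements tabular TD(0) learning on a toy five-state chain (the TD error delta being the quantity commonly linked to phasic dopamine), followed by a two-hypothesis Bayesian update; all numbers are arbitrary.

    import numpy as np

    # Part 1: TD(0) value learning on a 5-state chain, reward in the last state
    n_states, alpha, gamma = 5, 0.1, 0.9
    V = np.zeros(n_states)
    for episode in range(500):
        s = 0
        while s < n_states - 1:
            s_next = s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            delta = r + gamma * V[s_next] - V[s]   # reward prediction error
            V[s] += alpha * delta
            s = s_next
    print(V)    # V[s] approaches gamma**(3 - s) for s = 0..3

    # Part 2: Bayesian inference over two hypotheses from one noisy observation
    prior = np.array([0.5, 0.5])
    likelihood = np.array([0.8, 0.3])    # P(observation | hypothesis)
    posterior = prior * likelihood / np.sum(prior * likelihood)
    print(posterior)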

Suggested Readings:

  • Doya K: Reinforcement learning: Computational theory and biological mechanisms. HFSP Journal, 1(1), 30-40 (2007). Free on-line access: http://dx.doi.org/10.2976/1.2732246
  • Doya K, Ishii S: A probability primer. In Doya K, Ishii S, Pouget A, Rao RPN eds. Bayesian Brain: Probabilistic Approaches to Neural Coding, pp. 3-13. MIT Press (2007). Free on-line access: http://mitpress.mit.edu/catalo/item/default.asp?ttype=2&tid=11106

Week 2

Gerald Pao

Title: Manifolds of brain activity dynamics and dimensionality estimation

Sukbin Lim

Title: Exploring Perception and Working Memory: Circuit Models with Modular Sensory-Memory Interaction

While higher association areas have long been considered the locus of working memory, recent human studies have found memory signals in early sensory areas, prompting a re-evaluation of their role in working memory. In this lecture, I will first outline two distinct frameworks used to understand perception and working memory: Bayesian perception models and recurrent network models of working memory. Subsequently, I will demonstrate how integrating these two frameworks to account for sensory-memory interactions can clarify changes in the internal representation of stimuli and their behavioral readout during memory tasks, addressing a challenge in conventional models.
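
For the Bayesian half, here is a minimal Bayesian-observer sketch: a noisy measurement of a stimulus is combined with a nonuniform prior, and the posterior-mean estimate is biased away from the measurement. The grid, prior width, and noise level are invented for illustration.

    import numpy as np

    theta = np.linspace(-90, 90, 721)              # stimulus values (deg)
    prior = np.exp(-theta**2 / (2 * 30**2))        # prior favouring 0 deg
    prior /= prior.sum()

    m, sigma = 20.0, 10.0                          # measurement, noise sd
    likelihood = np.exp(-(theta - m)**2 / (2 * sigma**2))
    posterior = prior * likelihood
    posterior /= posterior.sum()

    estimate = np.sum(theta * posterior)           # posterior-mean estimate
    print(estimate)   # ~18: pulled from m = 20 toward the prior peak at 0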

Suggested readings:

  • Wang, X. J. (2001). "Synaptic reverberation underlying mnemonic persistent activity." Trends Neurosci 24(8): 455-463.
  • Khona, M. and I. R. Fiete (2022). "Attractor and integrator networks in the brain." Nat Rev Neurosci 23(12): 744-766.
  • Wei, X. X. and A. A. Stocker (2015). "A Bayesian observer model constrained by efficient coding can explain 'anti-Bayesian' percepts." Nat Neurosci 18(10): 1509-1517.
  • Yang, J., Zhang, H. and S. Lim (2024). "Sensory-memory interactions via modular structure explain errors in visual working memory." https://elifesciences.org/reviewed-preprints/95160

Message to Students:

I'm an Assistant Professor of Neural Science at NYU Shanghai and a Global Network Assistant Professor at NYU. Prior to joining NYU Shanghai, I was a postdoctoral researcher at the University of California, Davis, and the University of Chicago. I hold a PhD in Mathematics from NYU and a BS from Seoul National University.

My core research interests lie in modeling and analyzing neuronal systems, with emphasis on network interactions among neurons and synapses that play a crucial role in brain computation. Utilizing a range of theories, including dynamical systems, information, and control theory, I develop and analyze neural network models and synaptic plasticity rules for cognitive functions such as learning, memory, and decision-making.

Samuel Reiter

Title: Why do we sleep?

We (and most, if not all, other animals) spend a significant fraction of our lives asleep. Alarmingly, it’s not clear why! In my lecture I will introduce a range of ideas about the function of sleep, including synaptic homeostasis, offline practice and the concept of 'savings', memory consolidation and replay, SWS/REM sleep stages, the scanning hypothesis, and the reduction of metabolic waste. The ubiquity of sleep across animals has made it a useful area of comparative work. I will discuss how ecological niche appears to affect sleep, unihemispheric sleep, and evolutionary considerations.

Related readings:

  • Joiner, W. J. Unraveling the Evolutionary Determinants of Sleep. Curr. Biol. 26, R1073–R1087 (2016).
  • Findlay, G., Tononi, G. & Cirelli, C. The evolving view of replay and its functions in wake and sleep. Sleep Adv 1, zpab002 (2020).
  • Blumberg, M. S., Lesku, J. A., Libourel, P.-A., Schmidt, M. H. & Rattenborg, N. C. What Is REM Sleep? Curr. Biol. 30, R38–R49 (2020).

Hakwan Lau

Title: How (not) to Model Consciousness

Once ridiculed, the science of consciousness has grown substantially over the past 35 years. However, a central conceptual confusion remains: we systematically conflate the target theoretical notion of consciousness - in the sense of having ongoing subjective experiences - with the everyday and clinical notion of being able to respond meaningfully to external stimuli. Early on, philosophical analysis helpfully clarified this issue. I will describe some research from my lab which, following that tradition, aims to tackle these problems experimentally. However, paradoxically, in recent years we have also seen considerable regression, as the relevant concepts have become actively lumped together again. For example, in the cases of AI and animal consciousness, views that are evidently inconsistent or unscientific have been fervently promoted to the general public. Instead of trying to hold individual researchers accountable, I highlight that these are just the inevitable consequences of the uniquely problematic funding mechanisms adopted by this field, which are at this point largely biased towards populism over rigorous academic evaluation. The said conceptual confusion, despite being quite obvious, benefits certain kinds of theories, and accordingly their proponents form powerful alliances. One likely consequence is that some of the best work on consciousness may no longer explicitly carry the label ‘consciousness’ - a trend that is already visible in both science and philosophy today.

Suggested Readings:

  • H Lau (2023) What is a Pseudoscience of Consciousness? Lessons from Recent Adversarial Collaborations. PsyArXiv
  • H Lau, M Michel, JE LeDoux, SM Fleming (2022) The mnemonic basis of subjective experience. Nature Reviews Psychology 1 (8), 479-488

Message to Students:

Hello, I am a PI at the RIKEN Center for Brain Science, but by September this year I will move to Korea to co-direct a center there. If you are interested in coming to work in Korea, please let me know. We will also start some work on perception in rodents.

Kazumasa Tanaka

Title: Introduction to hippocampal memory and its representation

This three-hour lecture covers basic topics in hippocampal memory, including synaptic plasticity, neuronal activity mapping, the types of memory the hippocampus supports, systems consolidation, and hippocampal physiology.

Suggested readings:

  • The Neurobiology of Learning and Memory by Jerry W. Rudy
  • Neves G, Cooke SF, Bliss TV. Synaptic plasticity, memory and the hippocampus: a neural network approach to causality. Nat Rev Neurosci. 2008 Jan;9(1):65-75. doi: 10.1038/nrn2303. Erratum in: Nat Rev Neurosci. 2012 Dec;13(12):878. PMID: 18094707.
  • Josselyn SA, Tonegawa S. Memory engrams: Recalling the past and imagining the future. Science. 2020 Jan 3;367(6473):eaaw4325. doi: 10.1126/science.aaw4325. PMID: 31896692; PMCID: PMC7577560.
  • Chapter 11 in The Hippocampus Book by Per Andersen, Richard Morris, David Amaral, Tim Bliss & John O’Keefe.

Message to Students:

Welcome to OIST! My lecture focuses on experimental neuroscience, with the aim of helping you better and more deeply apply what you learn in the computational and theoretical lectures at OCNC. As it is only a three-hour lecture, I will have to stay within a brief introduction to the hippocampal literature, drawing on physiological, molecular/cellular biological, and psychological approaches, but I look forward to exciting discussions among participants with diverse backgrounds.

Yukiko Goda

Title: Features of synaptic strength regulation

Week 3

Matthew Larkum

Title: Single-cell computation: exploring the powerful properties of biological neurons and the implications for networks

This lecture will discuss the remarkable computational capabilities of single neurons, with a focus on the active and intrinsic properties of pyramidal neurons in the rodent cerebral cortex. The roles of dendritic spikes in enabling complex intracellular signalling and sophisticated input/output transformation will be highlighted. The lecture will focus on neurons of the cerebral cortex and how their intrinsic properties serve the cortex's ability to process and integrate sensory data. The significance of neuromodulation and thalamocortical loops in sustaining this integration will also be presented, particularly emphasizing the disruptions observed under anaesthesia. Through this presentation, it will be demonstrated how these cellular mechanisms contribute to broader neural circuit functions, impacting our understanding of perception, attention, and memory. This discussion will bridge cellular biophysics with neural network operations, offering insights into how the detailed workings of single neurons challenge and inform the modelling of artificial neural networks.
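
As a toy illustration of the kind of single-cell input/output transformation at stake (not a model from the lecture), the sketch below shows how a per-branch sigmoidal nonlinearity, loosely inspired by NMDA spikes, makes a cell sensitive to the spatial arrangement of its inputs in a way a linear point neuron cannot be; all numbers are invented.

    import numpy as np

    def dendritic_unit(x, threshold=3.0, gain=2.0):
        # NMDA-spike-like sigmoid applied to the summed input of one branch
        return 1.0 / (1.0 + np.exp(-gain * (np.sum(x) - threshold)))

    # Two branches, five synapses each; same total input, different layouts
    clustered = [np.ones(5), np.zeros(5)]             # all input on one branch
    dispersed = [np.ones(5) * 0.5, np.ones(5) * 0.5]  # input spread evenly

    for name, branches in [("clustered", clustered), ("dispersed", dispersed)]:
        soma_input = sum(dendritic_unit(b) for b in branches)
        print(name, round(soma_input, 3))
    # The clustered pattern drives the soma much harder even though the total
    # synaptic input is identical: the branch nonlinearity adds computation.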

Suggested Readings:

  • The Guide to Dendritic Spikes of the Mammalian Cortex In Vitro and In Vivo. Larkum, M.E., Wu, J., Duverdin, S.A., Gidon, A., 2022. Neuroscience 489, 15–33. https://doi.org/10.1016/j.neuroscience.2022.02.009.
  • Dendritic action potentials and computation in human layer 2/3 cortical neurons. Gidon, A., Zolnik, T.A., Fidzinski, P., Bolduan, F., Papoutsi, A., Poirazi, P., Holtkamp, M., Vida, I., Larkum, M.E., 2020. Science 367, 83–87. https://doi.org/10.1126/science.aax6239.
  • Active cortical dendrites modulate perception. Takahashi, N., Oertner, T.G., Hegemann, P., Larkum, M.E., 2016. Science 354, 1587–1590. https://doi.org/10.1126/science.aah6066.
  • NMDA spikes enhance action potential generation during sensory input. Palmer, L.M., Shai, A.S., Reeve, J.E., Anderson, H.L., Paulsen, O., Larkum, M.E., 2014. Nat Neurosci 17, 383–390. https://doi.org/10.1038/nn.3646.
  • Active Properties of Neocortical Pyramidal Neuron Dendrites. Major, G., Larkum, M.E., Schiller, J., 2013. Annu. Rev. Neurosci. 36, 1–24. https://doi.org/10.1146/annurev-neuro-062111-150343.
  • Active dendritic currents gate descending cortical outputs in perception. Takahashi, N., Ebner, C., Sigl-Glöckner, J., Moberg, S., Nierwetberg, S., Larkum, M.E., 2020. Nat Neurosci 23, 1277–1285. https://doi.org/10.1038/s41593-020-0677-8.
  • General Anesthesia Decouples Cortical Pyramidal Neurons. Suzuki, M., Larkum, M.E., 2020. Cell 180, 666-676.e13. https://doi.org/10.1016/j.cell.2020.01.024.
  • Perirhinal input to neocortical layer 1 controls learning. Doron, G., Shin, J.N., Takahashi, N., Drüke, M., Bocklisch, C., Skenderi, S., De Mont, L., Toumazou, M., Ledderose, J., Brecht, M., Naud, R., Larkum, M.E., 2020. Science 370, eaaz3136. https://doi.org/10.1126/science.aaz3136.
  • Memories off the top of your head. Shin, J.N., Doron, G., Larkum, M.E., 2021. Science 374, 538–539. https://doi.org/10.1126/science.abk1859.
  • Are Dendrites Conceptually Useful?. Larkum, M.E., 2022. Neuroscience 489, 4–14. https://doi.org/10.1016/j.neuroscience.2022.03.008.
  • Layer 6b controls brain state via apical dendrites and the higher-order thalamocortical system. Zolnik, T.A., Bronec, A., Ross, A., Staab, M., Sachdev, R.N.S., Molnár, Z., Eickholt, B.J., Larkum, M.E., 2024. Neuron 112, 805-820.e4. https://doi.org/10.1016/j.neuron.2023.11.021.
  • A Perspective on Cortical Layering and Layer-Spanning Neuronal Elements. Larkum, M.E., Petro, L.S., Sachdev, R.N.S., Muckli, L., 2018. Front. Neuroanat. 12, 56. https://doi.org/10.3389/fnana.2018.00056.
  • Dendritic calcium spikes are clearly detectable at the cortical surface. Suzuki, M., Larkum, M.E., 2017. Nat Commun 8, 276. https://doi.org/10.1038/s41467-017-00282-4.

Brief Bio:

Professor Matthew Larkum received a Bachelor of Science with honours from Sydney University in 1991 and a PhD from the University of Bern in 1996. He was a postdoc in the laboratory of Nobel laureate Bert Sakmann until 2003, performing experiments involving direct dendritic recordings from cortical pyramidal neurons. In 2004, he started his own lab in Switzerland and moved to Berlin in 2011. His research utilises cutting-edge electrophysiological and imaging techniques for understanding the computational properties of cortical neurons underlying cognition, learning and memory, and conscious perception. His laboratory is currently investigating a unifying hypothesis that the incredible cognitive power of the cortex derives from an associative mechanism built in at the cellular level, such that the architecture of the cortex is tightly coupled with the computational capabilities of single cells. He proposes a Dendritic Integrated Information Theory of consciousness, derived from experiments showing a neural correlate of loss of consciousness involving the coupling across dendrites in cortical neurons.

Alex Cayco Gajic

Title: Dimensionality reduction beyond neural subspaces

Recent technical advances in electrophysiology and imaging methods now give access to tens of thousands of neurons recorded in behaving animals. Yet as the number of simultaneously recorded neurons grows, we need quantitative tools that can help us find structure in such high-dimensional data. In the first part of this lecture I will present an overview of classic linear and nonlinear methods, their relationships to each other, and recent extensions, to help you navigate the vast array of methods in your future research. In the second part, I will present sliceTCA, a recent method we developed that extends PCA to data tensors to disentangle different classes of neural, temporal, and trial covariability.
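
To fix ideas for the first part, here is a minimal PCA-via-SVD sketch on synthetic data in which 100 "neurons" mix three latent signals; the sizes, signals, and noise level are invented for illustration, and sliceTCA itself is not shown.

    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic recording: 100 neurons driven by 3 shared latent signals
    T, N, K = 500, 100, 3
    t = np.linspace(0, 10, T)
    latents = np.stack([np.sin(t), np.cos(2 * t), np.sin(0.5 * t)], axis=1)
    mixing = rng.normal(0, 1, (K, N))
    X = latents @ mixing + 0.1 * rng.normal(0, 1, (T, N))   # time x neurons

    # PCA via SVD of the mean-centred data matrix
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_explained = S**2 / np.sum(S**2)
    print(var_explained[:5])     # nearly all variance in the first 3 components

    scores = Xc @ Vt[:3].T       # trajectory in the low-dimensional subspace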

Thomas Parr

Title: An Introduction to Active Inference

To interpret sensory data, our brains make use of internal world models that attempt to explain how those data were generated. When we act, we change the dynamics of the external world, and consequently the sensory data it delivers to us. In theoretical neurobiology, Active Inference has emerged as a way to describe these bidirectional interactions with our environments. This lecture sets out the key ideas that underwrite Active Inference, including the idea that the central imperative for living creatures is to maintain a good fit between their models and their world. Sometimes this is framed simply as the need to avoid things that would be surprising under a given world model - where avoidance of surprise may be achieved both by changing our model to fit the world and by acting upon the world to make it fit our model. We will start by discussing the motivation behind the Active Inferential approach, including perspectives based upon stochastic dynamical systems and 'Bayesian Mechanics'. Next, we consider the forms of generative models we might use, including those formulated in continuous time and those that deal with sequential, prospective planning. The latter offers an opportunity to consider how we evaluate future trajectories through planning as inference, and how we might choose our data so as to optimise our models. We address the relationship between the forms of these models and the computational architectures that result from their solutions via Bayesian message passing - something that is key to understanding the relationship between generative modelling and neuroanatomy. Finally, we will consider some specific models in the context of several perceptual, cognitive, and motor phenomena to illustrate these ideas more concretely. As such, this lecture aims to provide an overview of Active Inference in theory and in practice.
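
As one concrete, hedged illustration of the perception half of this story, the sketch below performs gradient descent on the variational free energy of a one-dimensional Gaussian generative model, a standard tutorial toy with arbitrary numbers, not a model from the lecture. The hidden cause v has prior N(3, 1) and generates observations through g(v) = v**2.

    # Toy "perception as inference": minimise free energy F with respect to
    # the estimate v, where F = (v - v_prior)**2 / (2 s_prior)
    #                          + (u - v**2)**2 / (2 s_obs).
    v_prior, s_prior = 3.0, 1.0
    s_obs = 1.0
    u = 5.0                      # actual sensory observation

    v = v_prior                  # current estimate of the hidden cause
    lr = 0.01
    for _ in range(2000):
        eps_prior = (v - v_prior) / s_prior     # precision-weighted prior error
        eps_obs = (u - v**2) / s_obs            # precision-weighted sensory error
        v -= lr * (eps_prior - eps_obs * 2 * v) # descend the free-energy gradient
    print(v)   # settles near 2.27, between the prior (3) and sqrt(u) ~ 2.24

Acting on the world to reduce the sensory error term, rather than revising v, is the complementary "active" route the abstract describes.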

Suggested Readings:

  • Active Inference: The Free Energy Principle in Mind, Brain, and Behavior, Thomas Parr, Giovanni Pezzulo, Karl J. Friston, https://doi.org/10.7551/mitpress/12441.001.0001
  • The Anatomy of Inference: Generative Models and Brain Structure, Thomas Parr and Karl J. Friston, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6243103/
  • A Free Energy Principle for a Particular Physics, Karl Friston, https://arxiv.org/pdf/1906.10184

Message to Students:

I am a theoretical neurobiologist and physician based at the Nuffield Department of Clinical Neurosciences, University of Oxford, as an Academic Clinical Fellow in Neurology. My research utilises the Active Inference approach to try to understand how internal world models determine the function of the healthy brain, and how pathological behaviour can be understood in terms of disruption to these models. I completed my medical degree and PhD at University College London, where I worked in the Theoretical Neurobiology group led by Karl Friston in Queen Square. With Karl and Giovanni Pezzulo, I co-wrote the textbook on Active Inference - published by MIT Press.

Arvind Kumar

Title: Brain dynamics and representation of information in the brain

How information is represented in brain activity is a fundamental question. Trial-by-trial variability and co-variability pose a major challenge and constrain how neurons may encode information in their spike patterns. In the first part of my talk I will provide a synthesis of how neural diversity (in excitability and connectivity) shapes brain dynamics. Given the stochastic nature of brain dynamics, firing-rate-based coding emerges as a robust way to encode information in spiking activity. In terms of their firing rate responses, neurons are highly selective and are tuned to specific subsets of features. At the population level, the notion of tuning curves generalizes to population activity manifolds. It is interesting to note that neurons in different brain regions have different tuning curves. In the second part of my talk I will address why this may be, and show how tuning curve shapes may form the basis of speed-accuracy trade-offs.
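
As a hedged illustration of rate coding with tuning curves, the sketch below decodes a stimulus from one window of Poisson spike counts in a population of Gaussian-tuned neurons; the population size, tuning width, peak rate, and decoder are arbitrary choices, and repeating the draw (or varying the width) shows the accuracy side of the trade-off.

    import numpy as np

    rng = np.random.default_rng(4)
    n_neurons = 50
    preferred = np.linspace(-180, 180, n_neurons)   # preferred stimuli (deg)

    def rates(stim, width):
        # Gaussian tuning curves with a 20 Hz peak rate
        return 20.0 * np.exp(-(preferred - stim)**2 / (2 * width**2))

    stim, width, window = 30.0, 40.0, 0.1           # true stimulus, 100 ms window
    spikes = rng.poisson(rates(stim, width) * window)

    # Simple population-vector-style decoder: spike-count-weighted average
    estimate = np.sum(preferred * spikes) / np.sum(spikes)
    print(estimate)    # scatter of this estimate across repeats = accuracy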

I am aware that I am the last speaker at the summer school. I hope that with these two topics (brain dynamics and neural coding) I will also be able to provide a nice synthesis of many topics covered in the summer school.

Suggested readings:

  • Lenninger Movitz, Skoglund Mikael, Herman Pawel, Kumar Arvind (2023) Are single-peaked tuning curves tuned for speed rather than accuracy? eLife 12:e84531.
  • Guo Lihao, Kumar Arvind (2023) Role of interneuron subtypes in controlling trial-by-trial output variability in the neocortex. Communications Biology 6:874.
  • Doiron B, Litwin-Kumar A, Rosenbaum R, Ocker GK, Josić K. The mechanics of state-dependent neural correlations. Nature neuroscience. 2016 Mar;19(3):383-93.

I grew up in India and studied electrical engineering there. A random discovery of the book From Neuron to Brain inspired me to switch to neuroscience. I was trained in neuroscience by Ad Aertsen, Stefan Rotter and Mayank Mehta during my PhD and postdoctoral years. I use computational models to ask questions about brain function. Mainly, I am interested in understanding the dynamical properties of, and information processing in, biological neural networks. My research group in Stockholm is investigating the functional and dynamical consequences of neuronal diversity, and how the interplay of network connectivity and network dynamics affects information exchange across the brain. In parallel, we are developing computational models of brain disorders such as Parkinson’s disease and exploiting these models to build model-driven data analysis pipelines. Previously, I spent some time developing a theoretical framework to understand communication among brain areas and neural coding. When not doing neuroscience, I still aspire to play some cricket, but these days I spend more of my time helping a new brain learn.