Lecturers and Abstract
Week 1
Bernd Kuhn
Title: 1. Ion channel physiology and the Hodgkin-Huxley model of neuronal activity
In my lecture I will talk about electrical activity in neurons. I will start with the basics of ion channels, focusing specifically on voltage-gated channels and their dynamics in response to membrane voltage. Neurons use a combination of different voltage-gated channels to generate fast (about 1 ms), depolarizing action potentials. I will explain the first action potential model, by Hodgkin and Huxley.
Finally, I will discuss more recent additions to, and fine-tuning of, the time-honored Hodgkin-Huxley model.
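As a taste of the material, here is a minimal Hodgkin-Huxley simulation in Python. It is an illustrative sketch only: the rate functions and parameter values follow the classic 1952 squid-axon formulation, while the function names, stimulus amplitude and forward-Euler integration are arbitrary choices made for this example.

```python
# Minimal Hodgkin-Huxley membrane model (classic squid-axon parameters),
# integrated with forward Euler. Voltages in mV, time in ms.
import math

g_na, g_k, g_l = 120.0, 36.0, 0.3      # maximal conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.4    # reversal potentials (mV)
c_m = 1.0                              # membrane capacitance (uF/cm^2)

# Voltage-dependent opening/closing rates of the gating variables m, h, n (1/ms)
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Integrate the HH equations under a constant current; return V(t) in mV."""
    v = -65.0                                       # start at rest
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))       # gates at steady state
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = g_na * m**3 * h * (v - e_na)         # fast sodium current
        i_k = g_k * n**4 * (v - e_k)                # delayed-rectifier potassium
        i_l = g_l * (v - e_l)                       # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak voltage: {max(trace):.1f} mV")  # overshoots 0 mV: an action potential
```

With a suprathreshold current (10 uA/cm^2 here) the voltage trace shows the fast, roughly 1 ms depolarizing spikes discussed in the lecture; with zero current it stays near rest.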
Title: 2. Functional optical imaging
Functional optical imaging has become one of the key techniques in neuroscience. In my second lecture I will introduce fluorescence and the most important imaging methods. I will explain what we can learn from them, but also discuss their limitations.
Suggested Readings:
- Johnston and Wu: Foundations of Cellular Neurophysiology, MIT Press
- Helmchen, Konnerth: Imaging in Neuroscience, 2011
- Yuste, Lanni, Konnerth: Imaging Neurons, 2000
Erik De Schutter
Title: Introduction to modeling neurons
I will discuss methods to model single neurons, going from very simple to morphologically detailed. I will briefly introduce cable theory, the mathematical description of current flow in dendrites. Discretizing the cable equation leads to compartmental modeling, the standard method for simulating morphologically detailed models of neurons. I will also give an overview of dendritic properties predicted by cable theory, and experimental data confirming these predictions. Finally, I will discuss the challenges of fitting compartmental models to experimental data, with an emphasis on active properties.
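The discretization idea can be sketched in a few lines: a passive cable is split into N iso-potential compartments, each leaking current through the membrane and exchanging axial current with its neighbours. This is a minimal illustration with arbitrary units and parameter values; real compartmental simulators use implicit integration schemes and biophysically calibrated parameters.

```python
# Passive cable discretized into N coupled compartments, forward Euler.
# Arbitrary illustrative units: r_m = membrane resistance, c_m = capacitance,
# r_a = axial resistance between neighbouring compartments.
def simulate_cable(n=50, r_m=10.0, c_m=1.0, r_a=0.5, i_inj=1.0,
                   t_max=100.0, dt=0.01):
    """Inject constant current into compartment 0; return voltages near steady state."""
    v = [0.0] * n                                  # deviation from rest
    for _ in range(int(t_max / dt)):
        new_v = v[:]
        for i in range(n):
            i_leak = -v[i] / r_m                   # membrane leak
            i_axial = 0.0                          # current from neighbours
            if i > 0:
                i_axial += (v[i - 1] - v[i]) / r_a
            if i < n - 1:
                i_axial += (v[i + 1] - v[i]) / r_a
            i_ext = i_inj if i == 0 else 0.0       # injection at one end only
            new_v[i] = v[i] + dt * (i_ext + i_leak + i_axial) / c_m
        v = new_v
    return v

v = simulate_cable()
# Voltage attenuates with distance from the injection site, as cable theory predicts
print(v[0], v[10], v[40])
```

The exponential attenuation with distance visible in the output is one of the dendritic properties predicted by cable theory that the lecture covers.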
Suggested Readings:
- Several chapters in Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston (2009).
- Y. Zang, S. Dieudonné and E. De Schutter: Voltage- and Branch-specific Climbing Fiber Responses in Purkinje Cells. Cell Reports 24: 1536–1549 (2018).
Tomoki Fukai
Title: Neural network modeling of cognitive functions
The brain's ability to learn and memorize things is crucial for the cognitive behavior of animals. Although our understanding of the underlying mechanisms of learning is still limited, researchers have gained many insights into them over the last few decades. In my lecture, I will explain the basic properties of several (both classic and recent) models of neural information processing. These models range from feedforward network models with error correction learning and backpropagation to reservoir computing in recurrent neural networks. Then, I will show how these models can account for cognitive behaviors of animals such as pattern recognition, spatial navigation and decision making. I want to emphasize the essential role of low-dimensional features of neural dynamics in learning.
Related readings:
- Anthony Joseph Decostanzo, Chi Chung Fung and Tomoki Fukai (2019) Hippocampal neurogenesis reduces the dimensionality of sparsely coded representations to enhance memory encoding. Front Comput Neurosci 12: 1-21.
- Tomoki Kurikawa, Tatsuya Haga, Takashi Handa, Rie Harukuni and Tomoki Fukai (2018) Neuronal stability in medial frontal cortex sets individual variability in decision-making. Nat Neurosci, 21:1764-1773.
- Toshitake Asabuki, Naoki Hiratani and Tomoki Fukai (2018) Interactive reservoir computing for chunking information streams. PLoS Comput Biol, 14(10):e1006400.
- Tatsuya Haga and Tomoki Fukai (2018) Recurrent network model for learning goal-directed sequences through reverse replay. Elife 7: e34171.
- Mastrogiuseppe F, Ostojic S (2018) Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron 99: 609-623.
- Song HF, Yang GR, Wang XJ. Reward-based training of recurrent neural networks for cognitive and value-based tasks. Elife. 2017 Jan 13;6. pii: e21492.
- Sussillo D, Abbott LF (2009) Generating coherent patterns of activity from chaotic neural networks. Neuron. 63: 544-557.
Erik De Schutter
Title: Modeling biochemical reactions, diffusion and reaction-diffusion systems
In this talk I will use calcium dynamics modeling as a way to introduce deterministic solution methods for reaction-diffusion systems. The talk covers exponentially decaying calcium pools, diffusion, calcium buffers and buffered diffusion, and calcium pumps and exchangers. I will describe properties of buffered diffusion systems and ways to characterize them experimentally. Finally I will compare the different modeling approaches.
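The simplest of these descriptions, the exponentially decaying calcium pool, is a one-line differential equation: influx drives the concentration up, and the pool relaxes back to rest with a single time constant. A minimal sketch with arbitrary units and parameter values:

```python
# Single-pool calcium model: d[Ca]/dt = j_in(t) - ([Ca] - ca_rest) / tau.
# Arbitrary illustrative units (concentration in uM, time in ms).
def ca_pool(j_in, ca_rest=0.05, tau=20.0, t_max=200.0, dt=0.1):
    """Integrate the pool equation with forward Euler; return the [Ca] trace."""
    ca = ca_rest
    trace = []
    for step in range(int(t_max / dt)):
        t = step * dt
        ca += dt * (j_in(t) - (ca - ca_rest) / tau)
        trace.append(ca)
    return trace

# A brief 2 ms pulse of influx, then free exponential decay back towards rest
pulse = lambda t: 0.5 if t < 2.0 else 0.0
trace = ca_pool(pulse)
print(f"peak {max(trace):.2f} uM, final {trace[-1]:.3f} uM")
```

Buffers, buffered diffusion, pumps and exchangers, discussed in the talk, each add terms to this basic balance equation.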
In the second talk I will turn to stochastic reaction-diffusion modeling. Two methods will be described: Gillespie's stochastic simulation algorithm (SSA) extended to simulate diffusion, and particle-based methods. I will briefly describe the STEPS software and give some examples from our research.
I will finish by describing how the STEPS framework can be used to go beyond the compartmental model and simulate neurons in 3D.
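To show the exact SSA on the simplest possible system, here is a minimal sketch for a well-mixed birth-death reaction (production at rate k_prod, degradation at rate k_deg * n), with no diffusion and arbitrary parameter values; this is an illustration of the algorithm, not STEPS code.

```python
# Minimal Gillespie SSA for a birth-death process:
#   production: n -> n + 1 at rate k_prod
#   degradation: n -> n - 1 at rate k_deg * n
import random

def gillespie(k_prod=10.0, k_deg=1.0, t_max=200.0, seed=1):
    rng = random.Random(seed)
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_max:
        a1 = k_prod                        # propensity of production
        a2 = k_deg * n                     # propensity of degradation
        a0 = a1 + a2
        t += rng.expovariate(a0)           # exponential waiting time to next event
        if rng.random() * a0 < a1:         # pick which reaction fires
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = gillespie()
# The copy number fluctuates around the stationary mean k_prod / k_deg = 10
tail = counts[len(counts) // 2:]           # discard the burn-in from n = 0
print(sum(tail) / len(tail))
```

The spatial extension described in the talk partitions space into voxels and treats diffusive hops between neighbouring voxels as additional first-order "reactions" in the same framework.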
Suggested Readings:
- U.S. Bhalla and S. Wils: Reaction-diffusion modeling. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
- E. De Schutter: Modeling intracellular calcium dynamics. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
- F. Santamaria, S. Wils, E. De Schutter and G.J. Augustine: Anomalous diffusion in Purkinje cell dendrites caused by dendritic spines. Neuron 52: 635–648 (2006).
- A.R. Gallimore, et al.: Switching on depression and potentiation in the cerebellum. Cell Reports 22: 722-733 (2018).
Kenji Doya
Title: Introduction to reinforcement learning and Bayesian inference
The aim of this tutorial is to present the theoretical core concepts for modeling animal and human action and perception. In the first half of the tutorial, we will focus on "reinforcement learning", a theoretical framework in which an adaptive agent learns behaviors from exploratory actions and the resulting reward or punishment. Reinforcement learning has played an essential role in understanding the neural circuits and neurochemical systems behind adaptive action learning, most notably the basal ganglia and the dopamine system. In the second half, we will familiarize ourselves with the framework of Bayesian inference, which is critical for understanding the process of perception from noisy, incomplete observations.
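The two halves of the tutorial can each be previewed with a toy sketch: tabular Q-learning on a short chain of states, and a sequential Bayesian update for inferring a hidden binary state from noisy observations. All parameter values and names below are arbitrary illustrative choices.

```python
# (1) Reinforcement learning: tabular Q-learning on a 5-state chain,
#     reward delivered only at the rightmost state.
import random

def q_learning(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]      # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy exploration
            a = rng.randrange(2) if rng.random() < eps else int(q[s][1] >= q[s][0])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0  # reward at the goal
            # TD update: move Q(s,a) towards r + gamma * max_a' Q(s',a')
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# The learned policy prefers "right" everywhere, and values grow towards the goal

# (2) Bayesian inference: posterior over a hidden binary state from noisy cues,
#     where each observation reports the true state with probability p_hit.
def posterior(observations, p_hit=0.8, prior=0.5):
    """Return P(state = 1 | observations) after sequential Bayes updates."""
    p = prior
    for o in observations:
        like1 = p_hit if o == 1 else 1.0 - p_hit    # P(o | state = 1)
        like0 = 1.0 - p_hit if o == 1 else p_hit    # P(o | state = 0)
        p = like1 * p / (like1 * p + like0 * (1.0 - p))
    return p

print(posterior([1, 1, 0, 1]))  # evidence mostly favours state 1
```

The TD error in (1) is the quantity most often compared to dopamine signaling, and the likelihood-weighted update in (2) is the basic operation behind Bayesian accounts of perception.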
Suggested Readings:
- Doya K: Reinforcement learning: Computational theory and biological mechanisms. HFSP Journal, 1(1), 30-40 (2007). Free on-line access: http://dx.doi.org/10.2976/1.2732246
- Doya K, Ishii S: A probability primer. In Doya K, Ishii S, Pouget A, Rao RPN eds. Bayesian Brain: Probabilistic Approaches to Neural Coding, pp. 3-13. MIT Press (2007). Free on-line access: http://mitpress.mit.edu/catalo/item/default.asp?ttype=2&tid=11106
Week 2
Alexander Mathis
Title: Elucidating skill learning in primates
Motor control, the art of coordinating muscles to produce intricate movements, is a marvel of biological intelligence. From the graceful dance of a ballerina to the dexterous manipulation of objects, these movements are a testament to the brain's command of numerous degrees of freedom, a skill that can take years of training and coaching to acquire and involves both explicit and implicit learning. Yet understanding how the brain achieves skilled behavior remains one of the fundamental challenges in neuroscience. I will discuss methods for quantifying skilled behavior, as well as modeling and experimental approaches to elucidate the principles of sensorimotor control. In particular, we will discuss recent advances in reinforcement learning and what they tell us about motor control.
Suggested Readings:
- Decoding the brain: From neural representations to mechanistic models, MW Mathis, AP Rotondo, EF Chang, AS Tolias, A Mathis, Cell
- Task-driven neural network models predict neural dynamics of proprioception, AM Vargas, A Bisi, AS Chiappa, C Versteeg, LE Miller, A Mathis, Cell
- Acquiring musculoskeletal skills with curriculum-based reinforcement learning, AS Chiappa, P Tano, N Patel, A Ingster, A Pouget, A Mathis, Neuron
Brief Bio / Message to Students
Alexander Mathis is an Assistant Professor at EPFL. His group works at the intersection of computational neuroscience and machine learning. Ultimately, his group is interested in understanding the statistics of behavior as well as sensorimotor control. Along the way, he strives to develop easy-to-use open-source software tools such as DeepLabCut. Previously, he was a Marie Curie Fellow at Harvard University and the University of Tübingen. In 2012, he completed his doctoral training at Ludwig Maximilian University in Munich (LMU), after studying pure mathematics in Munich. With his students, he won the MyoChallenge for learning contact-rich manipulation skills with physiologically realistic musculoskeletal models at NeurIPS in 2022 and 2023. He was awarded the Frontiers of Science Award for DeepLabCut, the Eric Kandel Young Neuroscientists Prize and the Robert Bing Prize.
In 2011, he learned a lot by attending the OIST Computational Neuroscience Course and is looking forward to being back!
Mackenzie Mathis
Title: Measuring and modeling the sensorimotor system to study learning
In this lecture I will start with the core question my lab works on: how does the brain enable adaptive behavior? Answering this required a series of new methods. I will first discuss pose estimation for measuring movement from video. Next, I will discuss recent advances in joint modeling of neural and behavioral data. Finally, I will discuss how we can map pose estimation to muscle-level control, and map this back to the neural dynamics during learning.
Suggested Readings:
- A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives. Neuron, 2020. Alexander Mathis, Steffen Schneider, Jessy Lauer, Mackenzie W Mathis. https://www.cell.com/neuron/fulltext/S0896-6273(20)30717-0
- Learnable latent embeddings for joint behavioural and neural analysis. Nature, 2023. Steffen Schneider, Jin Hwa Lee & Mackenzie Weygandt Mathis. https://www.nature.com/articles/s41586-023-06031-6
- Measuring and modeling the motor system with machine learning. Current Opinion in Neurobiology, 2021. Sebastien B. Hausmann, Alessandro Marin Vargas, Alexander Mathis, Mackenzie W. Mathis. https://www.sciencedirect.com/science/article/pii/S0959438821000519
Brief Bio / Message to Students:
Prof. Mackenzie W. Mathis is the Bertarelli Foundation Chair of Integrative Neuroscience and an Assistant Professor at the Swiss Federal Institute of Technology, Lausanne (EPFL). Following the award of her PhD at Harvard University in 2017 with Prof. Naoshige Uchida, she was awarded the prestigious Rowland Fellowship at Harvard to start her independent laboratory (2017-2020). Before starting her group, she worked with Prof. Matthias Bethge at the University of Tübingen in the summer of 2017 with the support of the Women & the Brain Project ALS Fellowship. She is an ELLIS Scholar, Vallee Scholar, and a former NSF Graduate Fellow, and her work has been featured in Bloomberg Businessweek, Nature, and The Atlantic. She was awarded the FENS EJN Young Investigator Prize in 2022, the Eric Kandel Young Neuroscientist Prize in 2023, the Robert Bing Prize in 2024, and the Swiss Science Prize Latsis in 2024.
Her lab works on mechanisms underlying adaptive behavior in intelligent systems. Specifically, the laboratory combines machine learning, computer vision, and experimental work in rodents with the combined goal of understanding the neural basis of adaptive motor control.
Padmini Rangamani
Title: Spatial models of calcium dynamics in dendritic spines
Dendritic spines are small, bulbous compartments that function as postsynaptic sites and undergo intense biochemical and biophysical activity. The role of the myriad signaling pathways that are implicated in synaptic plasticity is well studied. A recent abundance of quantitative experimental data has made the events associated with synaptic plasticity amenable to quantitative biophysical modeling. In this talk, I will describe how we can build spatial models of spine signaling and the sorts of questions we can answer with them. I will spend some time talking about reaction-diffusion equations and how they can be used to get insights into the role of spine geometry, and then switch to MCell as a powerful tool for particle-based simulations.
Gerald Pao
Title: Manifolds of brain activity dynamics and dimensionality estimation
Samuel Reiter
Title: Why do we sleep?
We (and most, if not all other animals) spend a significant fraction of our life asleep. Alarmingly, it’s not clear why! In my lecture I will introduce a range of ideas about the function of sleep, including synaptic homeostasis, offline practice and the concept of 'savings', memory consolidation and replay, SWS/REM sleep stages, the scanning hypothesis, and the reduction of metabolic waste. The ubiquity of sleep across animals has made it a useful area of comparative work. I will discuss how ecological niche appears to affect sleep, unihemispheric sleep, and evolutionary considerations.
Related readings:
- Joiner, W. J. Unraveling the Evolutionary Determinants of Sleep. Curr. Biol. 26, R1073–R1087 (2016).
- Findlay, G., Tononi, G. & Cirelli, C. The evolving view of replay and its functions in wake and sleep. Sleep Adv 1, zpab002 (2020).
- Blumberg, M. S., Lesku, J. A., Libourel, P.-A., Schmidt, M. H. & Rattenborg, N. C. What Is REM Sleep? Curr. Biol. 30, R38–R49 (2020).
Kazumasa Tanaka
Title: Introduction to hippocampal memory and its representation
This three-hour lecture covers basic topics in hippocampal memory, including synaptic plasticity, neuronal activity mapping, the types of memory the hippocampus is involved in, systems consolidation, and hippocampal physiology.
Suggested readings:
- The Neurobiology of Learning and Memory by Jerry W. Rudy
- Neves G, Cooke SF, Bliss TV. Synaptic plasticity, memory and the hippocampus: a neural network approach to causality. Nat Rev Neurosci. 2008 Jan;9(1):65-75. doi: 10.1038/nrn2303. Erratum in: Nat Rev Neurosci. 2012 Dec;13(12):878. PMID: 18094707.
- Josselyn SA, Tonegawa S. Memory engrams: Recalling the past and imagining the future. Science. 2020 Jan 3;367(6473):eaaw4325. doi: 10.1126/science.aaw4325. PMID: 31896692; PMCID: PMC7577560.
- Chapter 11 in The Hippocampus Book by Per Andersen, Richard Morris, David Amaral, Tim Bliss & John O’Keefe.
Message to Students:
Welcome to OIST! My lecture focuses on experimental neuroscience, aiming to ground and deepen what you will learn in the computational and theoretical lectures at OCNC. As it is only a three-hour lecture, I can offer only an introductory survey of the hippocampal literature, spanning physiological, molecular/cellular biological, and psychological approaches, but I look forward to exciting discussions among participants with diverse backgrounds.
Week 3
Lucy Palmer
Title: Dendritic Integration: from mice to humans
One of the great mysteries of neuroscience is how neurons integrate information to drive brain function. Since they are the site of synaptic input on neurons, dendrites provide an ideal substrate for the dynamic encoding of information required during behaviour. However, due to their small size, measuring the dynamics of dendritic processing is only possible using advanced cellular techniques. Here, I will discuss the use of patch clamp electrophysiology and two-photon microscopy to measure dendritic integration in both mice and human cortical neurons. I will give an overview of dendritic properties and synaptic integration, and will discuss recent results from my laboratory which illustrate the important role of dendritic integration in sensory processing, learning, memory and error prediction.
Suggested Readings:
- Stuart, G. J. and Spruston, N. Dendritic integration: 60 years of progress. Nat Neurosci 18, 1713–1721 (2015).
- Larkum, M. Are dendrites conceptually useful? Neuroscience (2022) doi:10.1016/j.neuroscience.2022.03.008.
Brief Bio / Message to Students:
Lucy Palmer heads the Neural Network Laboratory and Synaptic Biology Theme at the Florey Institute of Neuroscience and Mental Health, University of Melbourne, Australia. She completed her Master of Science at the University of Minnesota and PhD at the Australian National University, and was a postdoctoral researcher at the University of Bern, Switzerland and Charité University, Berlin. Her research uses patch clamp electrophysiology and calcium imaging in vivo to investigate dendritic integration during behaviour, focusing on learning and memory formation in health and disease.
Yu Takagi
Title: Bridging Minds: Human Brain, Generative AI, and Foundational Models
Generative AI is advancing at a remarkable pace, showcasing impressive abilities to interpret its environment, respond to complex instructions, and even act upon the world. Interestingly, although generative AI can match or even surpass human performance in certain tasks, its underlying mechanisms differ not only from traditional human-inspired AI but also substantially from those of the human brain. Therefore, our understanding of the computations these models carry out, and of the extent to which their "brains" resemble the human brain, remains limited.
Our research aims to bridge this gap by using generative AI both as a lens to illuminate brain function and as a target of neuroscientific inquiry. In this talk, I will first review studies that map the correspondence between human neural activity and generative AI representations. I will then highlight our ongoing efforts to build foundational models, and to decode their emergent computations with the same analytic tools we apply to the brain.
Suggested Readings:
- Zador, A., Escola, S., Richards, B. et al. Catalyzing next-generation Artificial Intelligence through NeuroAI. Nat Commun 14, 1597 (2023). https://doi.org/10.1038/s41467-023-37180-x
- Lindsey, Jack, et al. "On the biology of a large language model." Transformer Circuits Thread (2025)
- Takagi and Nishimoto, "High-resolution image reconstruction with latent diffusion models from human brain activity." CVPR (2023)

