Lecturers and Abstracts
Lecturers
- Erik De Schutter (OIST)
- Kenji Doya (OIST)
- Tomoki Fukai (OIST)
- Wenbiao Gan (New York University, USA)
- Geoff Goodhill (University of Queensland, Australia)
- Jun Izawa (University of Tsukuba, Japan)
- Michael Häusser (University College London, UK)
- Bernd Kuhn (OIST)
- Sukbin Lim (NYU Shanghai, China)
- Eve Marder (Brandeis University, USA)
- Peggy Seriès (University of Edinburgh, UK)
- Greg Stephens (OIST)
- Sebastian Seung (Princeton University, USA)
- Taro Toyoizumi (RIKEN CBS, Japan)
- Marylka Yoe Uusisaari (OIST)
- Yi Zeng (Chinese Academy of Sciences, China)
Abstract
Week 1. Methods (Doya(1), Uusisaari, Kuhn, Fukai, De Schutter(1), Doya(2), De Schutter(2))
Week 2. Neurons, Networks and Behavior I (Seung, Zeng, Toyoizumi, Marder, Gan, Izawa)
Week 3. Neurons, Networks and Behavior II (Lim, Stephens, Häusser, Goodhill, Doya(3))
Lecturers / Brief Bio & Message to students
Erik De Schutter (OIST)
I was born in Antwerp, Belgium, where I studied medicine and received my MD in 1984. I subsequently specialized as a neuropsychiatrist but started work on computational modeling at the same time. This led to a research career in computational neuroscience, as a postdoc at Caltech, running my own group at the University of Antwerp and since 2007 at OIST.
I taught for 19 years at European CNS summer schools and was part of the last twelve OCNCs. It is always exciting to meet the diverse groups of highly motivated young scientists attending our courses. Summer courses have an important function in teaching computational methods and approaches, and in establishing social networks among the participants. Ideally every neuroscientist, including experimentalists and clinicians, should attend a CNS course because computational methods have become essential tools to understand the complex systems we are studying. There is a broad range of modeling approaches available. I have specialized in data-driven, bottom-up methods that are more accessible to experimentalists because they are mainly parameter-driven. This includes large compartmental models of neurons with active dendrites, networks with realistic connectivity using conductance-based neuron models, and stochastic reaction-diffusion models of molecular interactions. I will not have time to present much of our research, but feel free to ask me or my collaborators about our work! Most of the work in my lab concerns the cerebellum, including its main neuron, the Purkinje cell.
Kenji Doya (OIST)
Kenji Doya received his B.S. in 1984, M.S. in 1986, and Ph.D. in 1991 from the University of Tokyo. He became a research associate at the University of Tokyo in 1986, at UC San Diego in 1991, and at the Salk Institute in 1993. He joined ATR in 1994 and became the head of the Computational Neurobiology Department, ATR Computational Neuroscience Laboratories, in 2003. In 2004, he was appointed principal investigator of the Neural Computation Unit, Okinawa Institute of Science and Technology (OIST), and started the Okinawa Computational Neuroscience Course (OCNC) as its chief organizer. As OIST re-established itself as a graduate university in 2011, he became a professor and the vice provost for research.
He has served as co-editor-in-chief of Neural Networks since 2008 and received the Donald O. Hebb Award from the International Neural Network Society in 2018.
He is interested in understanding the functions of basal ganglia and neuromodulators based on the theory of reinforcement learning.
Tomoki Fukai (OIST)
I was born in Tokyo in 1958. I studied physics and received a Ph.D. from Waseda University in 1985. Then, I spent two years in India as a visiting fellow of the Tata Institute, where I encountered the statistical physics of associative memory models. I was appointed as an associate professor at Tokai University in 1991, and then as a professor in the Information Communication Engineering Department and the Brain Research Center of Tamagawa University in 2001. In 2005, I became the director of the Theoretical Neuroscience Group at the RIKEN Brain Science Institute, where I also ran the Laboratory for Neural Circuit Theory and served as an adjunct professor at the University of Tokyo and Kyoto University. In 2019, I was appointed as a professor at OIST and launched the Neural Coding and Brain Computing Unit. My continuing interest is in understanding how neural networks in the brain create cognitive functions, especially episodic memory. My challenge at OIST will be to link neuroscience findings to novel AI technologies.
OCNC is an exciting experience not only for students but also for lecturers. I am very much looking forward to meeting enthusiastic and motivated students from around the world.
Wenbiao Gan (New York University, USA)
I was born in China and studied laser physics at Tsinghua University. I became interested in how the brain works during my senior year of college, and went on to obtain a Ph.D. in Neurobiology from Columbia University in 1995. I am now a professor in the Department of Neuroscience and Physiology and Skirball Institute at New York University School of Medicine. My research focuses on understanding how the brain is able to integrate new information continuously while stably maintaining previously stored memories. By imaging changes of postsynaptic dendritic spines in living mouse cerebral cortex, we have found that the majority of dendritic spines in diverse regions of the cortex could persist throughout adulthood and serve as a structural basis for long-term information storage. Over the years, we have investigated how motor learning, fear learning and extinction, stress hormone glucocorticoids, microglia, and sleep affect synaptic plasticity and maintenance in the living mouse cortex.
Bernd Kuhn (OIST)
I studied physics at the University of Ulm, Germany. For my diploma and PhD I moved to the Max Planck Institute of Biochemistry, Martinsried, Germany, focusing on the development of novel voltage-sensitive dyes and their application in cultured neurons. To optimize voltage imaging and to use voltage-sensitive dyes with two-photon excitation, I accepted a postdoctoral fellowship at the Max Planck Institute for Medical Research, Heidelberg, Germany. In my second postdoc position, at Princeton University, NJ, USA, I made viral vectors delivering the genes of calcium indicators and used them for in vivo imaging in the cerebellum. Additionally, I used voltage-sensitive dyes for in vivo two-photon imaging in the barrel cortex of the mouse. Since 2010 I have been working at OIST (Optical Neuroimaging Unit). We mainly work on in vivo voltage and calcium two-photon imaging and on methods development for neuroscience.
Sukbin Lim (NYU Shanghai, China)
Sukbin Lim is an assistant professor of neural and cognitive sciences at NYU Shanghai. She obtained her Ph.D. at New York University. Her postdoctoral work was in the Center for Neuroscience at University of California, Davis, and in the Department of Neurobiology at The University of Chicago.
Professor Lim’s research focuses on the modeling and analysis of neuronal systems. Utilizing a broad spectrum of dynamical systems theory, the theory of stochastic processes, and information and control theories, she develops and analyses neural network models and synaptic plasticity rules for learning and memory. Her work is accompanied by analysis of neural data and collaboration with experimentalists to provide and test biologically plausible models.
Research Interests:
Network modeling and analysis for short-term memory
Modeling long-term synaptic plasticity for learning and long-term memory
Analysis of variability or noise in neuronal systems
Eve Marder (Brandeis University, USA)
I started my scientific life as a biologist, but have been working with theorists since the late 1980s, including people like Nancy Kopell and Larry Abbott. For the past 15-20 years my lab has been combining experimental and computational work, and I usually have 2-3 postdocs who are exclusively doing computational work. In general, our computational work has "led" and suggested some of the most useful and important experimental work coming from my lab.
Greg Stephens (OIST)
Trained with a PhD in the theoretical physics of the very early universe, I became fascinated first with neuroscience and then more broadly with the physics of living systems. Work in our group is helping to pioneer a new field – the physics of behavior: from individual organisms to entire societies. The science of the living world is overwhelmingly focused on the microscopic: the structure of DNA, the machinery of cells that converts energy and transports materials, or the patterns of electrical activity in our brains from which thoughts arise. Yet all of these processes serve the greater evolutionary goals of the organism: to find food, avoid predators and reproduce. This is the behavioral scale, and despite its importance, our quantitative understanding of behavior remains rudimentary. How do we quantify the emergent dynamics of entire organisms? What principles characterize living movement? Research in our group addresses these fundamental questions with a modern biophysics approach and model systems ranging from the nematode C. elegans to zebrafish and honeybee collectives. We combine theoretical ideas from statistical physics, information theory and dynamical systems, and work in close collaboration with scientists from OIST and around the world to develop and analyze novel, quantitative experiments of organisms in natural motion. Finally, as I moved to my new field I was fortunate to experience a course such as this one, and I hope that you will enjoy a similarly remarkable experience.
Geoff Goodhill (The University of Queensland)
Professor Geoff Goodhill holds a joint appointment between the Queensland Brain Institute and the School of Mathematics and Physics at The University of Queensland. He is a computational neuroscientist interested in how brains process information, particularly during neural development. He originally trained in Mathematics, Physics, Artificial Intelligence and Cognitive Science in the UK. Following postdoctoral work at the Salk Institute he spent 8 years as an Assistant and then Associate Professor at Georgetown University in Washington DC, and then moved to The University of Queensland in 2005. He has published over 100 peer-reviewed papers and trained over 30 PhD students and postdocs. From 2005 to 2010 he was Editor-in-Chief of the journal Network: Computation in Neural Systems, and he is currently on the editorial boards of several journals including Communications Biology and Neural Computation.
Jun Izawa (University of Tsukuba, Japan)
Jun Izawa is an Associate Professor in the Faculty of Engineering, Information and Systems at the University of Tsukuba. From 2005 to 2010, he was a Post-Doctoral Research Fellow at the Johns Hopkins Medical Institute. From 2011 to 2012, he was a researcher at ATR Computational Neuroscience Laboratories. He received his Ph.D. in Computational Intelligence and Systems Science from the Tokyo Institute of Technology in 2004.
His research interests include motor learning, reinforcement learning, and neuro-rehabilitation.
Michael Häusser (University College London, UK)
I grew up in Canada and Germany and did my PhD in Oxford with Julian Jack (working on dopamine neurons). I did a postdoc in the lab of Bert Sakmann in Heidelberg (patching dendrites) and a second postdoc with Philippe Ascher in Paris (patching interneuron circuits). I then started my lab at UCL and have remained there ever since. I’m now a Professor of Neuroscience and a Wellcome Trust Principal Research Fellow. In addition to running my lab (http://www.dendrites.org), I’m also the Facilitator of the International Brain Laboratory (https://www.internationalbrainlab.com).
In my spare time I play cello (working on Bach suites) and tennis (working on my backhand).
I’m very much looking forward to returning to Okinawa - which is an amazing place - and to meeting you all.
Taro Toyoizumi (RIKEN Center for Brain Science)
Taro Toyoizumi is a Team Leader at the RIKEN Center for Brain Science. He received his B.S. in physics from Tokyo Institute of Technology in 2001, and his M.S. and Ph.D. in computational neuroscience from The University of Tokyo in 2003 and 2006, respectively. He studied at the Center for Theoretical Neuroscience at Columbia University as a JSPS and Patterson Trust Postdoctoral Fellow. He then came to the RIKEN Brain Science Institute as a Special Postdoctoral Researcher in 2010, was promoted to Lab Head in 2011, and has held his current position since 2018. He has been studying the theory of neural plasticity, asking how neural circuits self-organize in the environment. Toyoizumi received the International Neural Network Society Young Investigator Award in 2008 and the Commendation for Science and Technology (Young Scientists’ Prize) from MEXT Japan in 2016.
Sebastian Seung (Princeton University)
Sebastian Seung is Anthony B. Evnin Professor in the Neuroscience Institute and Department of Computer Science at Princeton University. Seung has done influential research in both computer science and neuroscience. Over the past decade, he has helped pioneer the new field of connectomics, applying deep learning and crowdsourcing to reconstruct neural circuits from electron microscopic images. His lab created EyeWire.org, a site that has recruited over 250,000 players from 150 countries to a game to map neural connections. His book Connectome: How the Brain's Wiring Makes Us Who We Are was chosen by the Wall Street Journal as one of the Top Ten Nonfiction Books of 2012. Before joining the Princeton faculty in 2014, Seung studied at Harvard University, worked at Bell Laboratories, and taught at the Massachusetts Institute of Technology. He is an External Member of the Max Planck Society and a winner of the Ho-Am Prize in Engineering.
Marylka Yoe Uusisaari (OIST)
Dr. Marylka Yoe Uusisaari entered the neuroscience field from a computer science and programming background in Helsinki, Finland, intrigued by the similarities and differences between artificial and neuronal computation and intelligence. This interest in the brain as the computational controller of the body machinery evolved into a quest to understand how abstract and distributed patterns of neural activity in the various planning-related brain structures are transformed into concrete and timely motor actions. The main focus of her Neuronal Rhythms in Movement (nRIM; https://groups.oist.jp/nrim) unit at OIST is the network dynamics of the olivo-cerebellar system (OCS), believed to be the structure that constructs the temporal framework for behavior. The nRIM unit employs methods ranging from genetic manipulation and morphological analysis to in vitro investigation of network dynamics with electrophysiology and imaging, as well as probing and control of OCS clock functions in behaving animals. To bring whole-body kinematics into experimental focus, the unit uses high-speed recording of body motion and motion-capture systems, and complements these paradigms with virtual/augmented-reality environments.
Yi Zeng (Chinese Academy of Sciences)
Yi Zeng is a Professor and Deputy Director at the Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, a principal investigator at the Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, and a board member of the National Governance Committee for the New Generation Artificial Intelligence, Ministry of Science and Technology, China. His major research interests are technical models for brain-inspired AI, brain simulation, and AI ethics and governance.
Lecture Abstracts
Week 1
Monday, June 24
Parallel Sessions:
Kenji Doya
Title: Introduction to numerical methods for differential equations
This tutorial introduces the basic concepts of differential equations and how to solve them, or simulate their behavior in time, using a computer. Key concepts like eigenvalues and stability are explained while solving simple differential equations. Some examples of Hodgkin-Huxley type neuron models are also introduced.
Suggested Reading:
- Koch C: Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press (1999).
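To make the tutorial's starting point concrete, here is a minimal sketch (an illustration, not course material) of forward-Euler integration of a linear leaky-membrane equation. Its single eigenvalue, -1/τ, governs both the stability of the fixed point and the largest step size the method tolerates (dt < 2τ); all parameter values below are invented for illustration.

```python
import numpy as np

# Forward-Euler integration of a leaky membrane equation
#   dV/dt = (-(V - E_L) + R*I) / tau
# The single eigenvalue of this linear system is -1/tau < 0, so the
# fixed point V = E_L + R*I is stable; forward Euler itself remains
# stable only while dt < 2*tau.

tau, E_L, R = 10.0, -70.0, 10.0    # ms, mV, MOhm (illustrative values)
dt, T = 0.1, 100.0                 # ms
I = 1.5                            # nA, constant input current

t = np.arange(0.0, T, dt)
V = np.empty_like(t)
V[0] = E_L
for k in range(1, len(t)):
    dVdt = (-(V[k-1] - E_L) + R * I) / tau
    V[k] = V[k-1] + dt * dVdt      # Euler step

V_exact = E_L + R * I * (1 - np.exp(-t / tau))   # closed-form solution
print("max |Euler - exact| =", np.max(np.abs(V - V_exact)), "mV")
```

Swapping the right-hand side for the Hodgkin-Huxley equations turns the same loop into a spiking-neuron simulator, as in the examples the tutorial mentions.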
Marylka Yoe Uusisaari
Title: Foundations of computation by biological neurons
Is a brain a computer? If so, what kind of computer?
Clearly, brains process information, and we can assign descriptors like "memory", "execution", and "learning" to various parts of an animal brain; but how far does the analogy go between a human-designed computer and a brain that evolved over millions of years?
In this lecture, we will lightly cover many fundamental topics of neuroscience in an attempt to provide students from theoretical, physical or mathematical backgrounds with the conceptual tools to understand how biological neuronal networks store and process information. You will learn about the fundamental building blocks of the brain (neurons, their structure and functions), about communication between neurons, and about the basics of sensory-motor function as well as evolution and development.
Suggested Reading:
- https://scienceblogs.com/developingintelligence/2007/03/27/why-the-brain-is-not-like-a-co
Tuesday, June 25
Bernd Kuhn
Title: 1. Ion channel physiology and the Hodgkin-Huxley model of neuronal activity
In my lecture I will talk about electrical activity in neurons. I will start with the basics of ion channels, and specifically focus on voltage-gated channels and their dynamics in response to membrane voltage. Neurons use a combination of different voltage-gated channels to generate fast (about 1 ms), depolarizing action potentials. I will explain the first action potential model, by Hodgkin and Huxley. Finally, I will discuss more recent additions and fine-tuning of the time-honored Hodgkin-Huxley model.
Title: 2. Functional optical imaging
Functional optical imaging has become one of the key techniques in neuroscience. In my second lecture I will introduce fluorescence and the most important imaging methods. I will explain what we can learn from them, but also discuss their limitations.
Suggested Readings:
- Johnston and Wu: Foundations of Cellular Neurophysiology, MIT Press
- Helmchen, Konnerth: Imaging in Neuroscience, 2011
- Yuste, Lanni, Konnerth: Imaging Neurons, 2000
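As a companion to the first lecture, here is a compact sketch of the Hodgkin-Huxley model with standard textbook parameters (voltages shifted so rest sits near -65 mV). The gating variables m, h and n follow their voltage-dependent rate functions, and their interplay produces the roughly 1-ms action potentials described above; the drive current and run length are arbitrary choices for illustration.

```python
import numpy as np

# Classic Hodgkin-Huxley point neuron, forward-Euler integration.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4           # mV

def rates(V):
    """Voltage-dependent opening/closing rates for m, h, n (1/ms)."""
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

dt, T, I_ext = 0.01, 50.0, 10.0            # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32        # initial conditions near rest
spikes = 0
for _ in range(int(T / dt)):
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)      # gating kinetics
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = (gNa * m**3 * h * (V - ENa)
             + gK * n**4 * (V - EK) + gL * (V - EL))
    V_new = V + dt * (I_ext - I_ion) / C
    if V < 0.0 <= V_new:                   # upward crossing of 0 mV = spike
        spikes += 1
    V = V_new
print("spikes in", T, "ms:", spikes)
```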
Wednesday, June 26
Tomoki Fukai
Title: Neural network modeling of cognitive functions
The brain's ability to learn and memorize is crucial for the cognitive behavior of animals. Though our understanding of the underlying mechanisms of learning is limited, researchers have gained many insights into these mechanisms over the last few decades. In my lecture, I will explain the basic properties of several (both classic and recent) models of neural information processing. These models range from feedforward network models with error-correction learning and backpropagation to reservoir computing in recurrent neural networks. Then, I will show how these models can account for cognitive behaviors of animals such as pattern recognition, spatial navigation and decision making. I want to emphasize the essential role of low-dimensional features of neural dynamics in learning.
Related readings:
- Anthony Joseph Decostanzo, Chi Chung Fung and Tomoki Fukai (2019) Hippocampal neurogenesis reduces the dimensionality of sparsely coded representations to enhance memory encoding. Front Comput Neurosci 12: 1-21.
- Tomoki Kurikawa, Tatsuya Haga, Takashi Handa, Rie Harukuni and Tomoki Fukai (2018) Neuronal stability in medial frontal cortex sets individual variability in decision-making. Nat Neurosci, 21:1764-1773.
- Toshitake Asabuki, Naoki Hiratani and Tomoki Fukai (2018) Interactive reservoir computing for chunking information streams. PLoS Comput Biol, 14(10):e1006400.
- Tatsuya Haga and Tomoki Fukai (2018) Recurrent network model for learning goal-directed sequences through reverse replay. Elife 7: e34171.
- Mastrogiuseppe F, Ostojic S (2018) Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron 99: 609-623.
- Song HF, Yang GR, Wang XJ. Reward-based training of recurrent neural networks for cognitive and value-based tasks. Elife. 2017 Jan 13;6. pii: e21492.
- Sussillo D, Abbott LF (2009) Generating coherent patterns of activity from chaotic neural networks. Neuron. 63: 544-557.
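The reservoir-computing idea mentioned in the abstract can be stated in a few lines: drive a fixed random recurrent network with an input, and train only a linear readout. Below is a minimal echo-state-style sketch, with every size, signal and regularization constant invented for illustration (not any specific model from the readings):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000                           # reservoir size, time steps

# Fixed random recurrent weights, rescaled to spectral radius ~0.9
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0, 1.0, N)

u = np.sin(np.arange(T) * 0.1)             # toy input signal
target = np.sin(np.arange(T) * 0.1 + 1.0)  # phase-shifted target

# Run the reservoir; its weights are never trained.
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Ridge-regression readout: the only trained parameters
lam = 1e-4
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
pred = X @ w_out
print("training MSE:", np.mean((pred - target) ** 2))
```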
Thursday, June 27
Erik De Schutter
Title: Introduction to modeling neurons
I will discuss methods to model single neurons, going from very simple to morphologically detailed. I will briefly introduce cable theory, the mathematical description of current flow in dendrites. By discretizing the cable equation we arrive at compartmental modeling, the standard method to simulate morphologically detailed models of neurons. I will also give an overview of dendritic properties predicted by cable theory and of experimental data confirming these predictions. I will discuss the challenges in fitting compartmental models to experimental data, with an emphasis on active properties.
Suggested Readings:
- Several chapters in Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston (2009).
- Y. Zang, S. Dieudonné and E. De Schutter: Voltage- and Branch-specific Climbing Fiber Responses in Purkinje Cells. Cell Reports 24: 1536–1549 (2018).
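To illustrate the discretization step named in the abstract, here is a toy passive cable: chopping the cable equation into N compartments turns the PDE into N coupled ODEs, each an RC circuit exchanging current with its neighbors through an axial conductance. A sketch with made-up units and parameters, not a calibrated model:

```python
import numpy as np

# Passive cable discretized into N compartments: each compartment has
# membrane capacitance C_m and leak g_leak, and is coupled to its
# neighbours by an axial conductance g_ax. Current injected at one end
# produces a steady-state voltage that decays with distance (the
# discrete analogue of the cable length constant).
N = 50
C_m, g_leak, g_ax, E_L = 1.0, 0.05, 2.0, -65.0   # illustrative units
dt, T = 0.05, 200.0

V = np.full(N, E_L)
I_inj = np.zeros(N)
I_inj[0] = 0.5                                    # injection at one end

for _ in range(int(T / dt)):
    I_axial = np.zeros(N)                         # current from neighbours;
    I_axial[1:] += g_ax * (V[:-1] - V[1:])        # sealed ends arise because
    I_axial[:-1] += g_ax * (V[1:] - V[:-1])       # edges have one neighbour
    V = V + dt * (-g_leak * (V - E_L) + I_axial + I_inj) / C_m

print("depolarization (mV) at compartments 0, 10, 20:")
print(V[0] - E_L, V[10] - E_L, V[20] - E_L)
```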
Friday, June 28
Kenji Doya
Title: Introduction to reinforcement learning and Bayesian inference
The aim of this tutorial is to present the theoretical core for modeling animal and human action and perception. In the first half of the tutorial, we will focus on "reinforcement learning," a theoretical framework by which an adaptive agent learns behaviors from exploratory actions and the resulting reward or punishment. Reinforcement learning has played an essential role in understanding the neural circuits and neurochemical systems behind adaptive action learning, most notably the basal ganglia and the dopamine system. In the second half, we will familiarize ourselves with the framework of Bayesian inference, which is critical for understanding the process of perception from noisy, incomplete observations.
Suggested Readings:
- Doya K: Reinforcement learning: Computational theory and biological mechanisms. HFSP Journal, 1(1), 30-40 (2007). Free on-line access: http://dx.doi.org/10.2976/1.2732246
- Doya K, Ishii S: A probability primer. In Doya K, Ishii S, Pouget A, Rao RPN eds. Bayesian Brain: Probabilistic Approaches to Neural Coding, pp. 3-13. MIT Press (2007). Free on-line access: http://mitpress.mit.edu/catalo/item/default.asp?ttype=2&tid=11106
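A minimal sketch of the core reinforcement-learning computation: tabular Q-learning on a hypothetical two-state world (the environment, rewards and parameters below are invented purely for illustration). The bracketed term in the update is the temporal-difference error, the quantity often compared to dopamine signals.

```python
import numpy as np

# Tabular Q-learning on a toy world: taking action a moves the agent to
# state a with probability 0.8; action 1 in state 1 yields reward.
rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    r = 1.0 if (s == 1 and a == 1) else 0.0
    s_next = int(a) if rng.random() < 0.8 else int(rng.integers(n_states))
    # TD update: move Q toward reward plus discounted best next value
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

print("learned Q values:\n", Q)   # action 1 should dominate in both states
```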
Saturday, June 29
Erik De Schutter
Title: Modeling biochemical reactions, diffusion and reaction-diffusion systems
In the first talk I will use calcium dynamics modeling as a way to introduce deterministic solution methods for reaction-diffusion systems. The talk covers exponentially decaying calcium pools, diffusion, calcium buffers and buffered diffusion, and calcium pumps and exchangers. I will describe properties of buffered diffusion systems and ways to characterize them experimentally. Finally, I will compare the different modeling approaches.
In the second talk I will turn to stochastic reaction-diffusion modeling. Two methods will be described: Gillespie's Stochastic Simulation Algorithm extended to simulate diffusion, and particle-based methods. I will briefly describe the STEPS software and give some examples from our research.
I will finish by describing how the STEPS framework can be used to go beyond the compartmental model and simulate neurons in 3D.
Suggested Readings:
- U.S. Bhalla and S. Wils: Reaction-diffusion modeling. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
- E. De Schutter: Modeling intracellular calcium dynamics. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
- F. Santamaria, S. Wils, E. De Schutter and G.J. Augustine: Anomalous diffusion in Purkinje cell dendrites caused by dendritic spines. Neuron 52: 635–648 (2006).
- A.R. Gallimore, et al.: Switching on depression and potentiation in the cerebellum. Cell Reports 22: 722-733 (2018).
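As a pointer to how the stochastic methods above work, here is a minimal Gillespie SSA for a single reversible buffer-binding scheme (rate constants and counts are invented; the lecture describes how this sampling scheme is extended with diffusion, as in STEPS):

```python
import numpy as np

# Gillespie's Stochastic Simulation Algorithm for Ca + B <-> CaB
rng = np.random.default_rng(1)
k_on, k_off = 0.002, 0.1        # per-pair and per-complex rates (toy values)
Ca, B, CaB = 100, 50, 0         # molecule counts
t, t_end = 0.0, 50.0

while t < t_end:
    a1 = k_on * Ca * B          # propensity of binding
    a2 = k_off * CaB            # propensity of unbinding
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)      # waiting time to the next reaction
    if rng.random() < a1 / a0:          # choose which reaction fires
        Ca, B, CaB = Ca - 1, B - 1, CaB + 1
    else:
        Ca, B, CaB = Ca + 1, B + 1, CaB - 1

print("t =", round(t, 2), " Ca:", Ca, " B:", B, " CaB:", CaB)
```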
Week 2
Monday, July 01
Sebastian Seung
Title: Petascale neural circuit reconstruction
Neuronal circuits can be reconstructed from brain images acquired by 3D electron microscopy. The technique has never been widespread, chiefly because image analysis has required so much painstaking human labor. Over the past dozen years, image analysis has become much more automated through the application of convolutional neural networks. I will illustrate the state of the art with reconstructions from a whole fly brain. I will also present progress in mapping the activity and connectivity of large neural populations in mammalian cortex.
Tuesday, July 02
Yi Zeng
Title: From Brain Knowledge Engine to Brain-inspired Artificial Intelligence
Brain-inspired artificial intelligence designs and implements brain-inspired computational models for realizing AI through brain-inspired structures (morphology, connectome, etc.), mechanisms, and operational principles. It is based on findings from brain and neuroscience research on how the brain works at multiple scales. In this talk, I will first introduce the cross-species, multi-scale brain knowledge engine. Secondly, I will introduce the brain-inspired cognitive engine, a multi-scale spiking neural network (SNN) model that is inspired by the cognitive brain at multiple scales. SNNs for pattern recognition, reinforcement learning and decision making will be covered. Thirdly, I will introduce simulations of some higher-level cognitive functions, including self-recognition and theory of mind. Last but not least, I will talk about AI ethical principles, and how they can be grounded in technical models for brain-inspired AI to ensure its responsible development and keep it human-friendly.
Wednesday, July 03
Taro Toyoizumi
Title: Exploring the learning principle in the brain
Animals adapt to the environment for survival. Synaptic plasticity is considered a major mechanism underlying this process. However, the best-known form of synaptic plasticity, i.e., Hebbian plasticity that depends on pre- and post-synaptic activity, can drive coincident activity in model neurons beyond a physiological range. Our lab has explored how neural circuits learn about the environment by synaptic plasticity. The instability of Hebbian plasticity could be mitigated by a global factor that modulates its outcome. For example, TNF-alpha, which mediates homeostatic synaptic scaling, is released by glia, reflecting the activity level of surrounding neurons. I show that a specific interaction of Hebbian plasticity with this global factor accounts for the time course of adaptation to an altered environment (Toyoizumi et al. 2015). At a more theoretical level, I ask what is the optimal synaptic plasticity rule for achieving an efficient representation of the environment. A solution is the error-gated Hebbian rule, whose update is proportional to the product of a Hebbian change and a specific global factor. I show that this rule, suitable also for neuromorphic devices, robustly extracts hidden independent sources in the environment (Isomura and Toyoizumi 2016, 2018, 2019).
Finally, I show that synapses change through intrinsic spine dynamics, even in the absence of synaptic plasticity. Physiological spine-volume distributions and stable cell assemblies are both achieved when intrinsic spine dynamics are augmented in a model (Humble et al. 2019).
Suggested readings:
- Ł. Kuśmierz, T. Isomura, and T. Toyoizumi, Current Opinion in Neurobiology 46, 170-177 (2017). "Learning with three factors: modulating Hebbian plasticity with errors"
- T. Keck et al., Philosophical Transaction of the Royal Society B 372, 1715 (2017). "Integrating Hebbian and homeostatic plasticity: the current state of the field and future research directions"
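The gated-Hebbian idea described above, a Hebbian term multiplied by a global scalar, fits in a few lines. Below is a schematic three-factor update with a made-up global factor E that suppresses growth when output power is high; it conveys the shape of the computation, not the specific rule from the cited papers:

```python
import numpy as np

# Schematic three-factor plasticity: Hebbian term (post x pre) gated by
# a single global scalar E.  Here E is a toy homeostatic signal that is
# positive when output power is low and negative when it is high, which
# keeps runaway Hebbian growth in check.
rng = np.random.default_rng(0)
n_in, n_out, eta = 20, 5, 1e-3
W = rng.normal(0, 0.1, (n_out, n_in))

for _ in range(1000):
    x = rng.normal(0, 1, n_in)          # presynaptic activity
    y = np.tanh(W @ x)                  # postsynaptic activity
    E = 1.0 - np.mean(y ** 2)           # toy global factor (illustrative)
    W += eta * E * np.outer(y, x)       # gated Hebbian update

y_test = np.tanh(W @ rng.normal(0, 1, n_in))
print("mean output power after learning:", np.mean(y_test ** 2))
```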
Thursday, July 04
Eve Marder
Title: Variability, Robustness and Modulation in a Simple Central Pattern Generating Circuit
I will first review the background of what is known about small rhythmic circuits that generate motor patterns. Experimental work on the crustacean stomatogastric ganglion (STG), one of the best-studied small central pattern generators, has revealed a 2-6 fold variability in many of the parameters that are important for circuit dynamics. Theoretical work shows that similar network performance can arise from diverse underlying parameter sets. Together, these lines of evidence suggest that each individual animal, at any moment in its lifetime, has found a different solution to producing “good enough” motor patterns for healthy performance in the world. This poses the question of the extent to which animals with different sets of underlying circuit parameters can respond reliably and robustly to environmental perturbations and neuromodulation. We use both experimental and computational methods to study the effects of temperature, pH, high K+ concentrations, and neuromodulation on the networks of the STG from the crab Cancer borealis. While all animals are remarkably robust and reliable under substantial perturbations, extreme perturbations produce "crashes". These crashes vary substantially across animals and across models with different underlying parameters. The idiosyncratic nature of the crashes provides heuristic insight into the diverse responses of individuals to extreme perturbations. Moreover, models of homeostatic regulation of intrinsic excitability give insight into the kinds of mechanisms that could give rise to the highly variable solutions to stable circuit performance. The underlying parameter differences across the animals in a population, and their differences in crash behavior, provide a necessary substrate for evolution.
Suggested readings:
- Gutierrez, G.J., O’Leary, T., and Marder, E. (2013) Multiple mechanisms switch an electrically coupled, synaptically inhibited neuron between competing oscillator networks. Neuron, 77: 845-858. PMCID: PMC3664401.
- O’Leary, T., Williams, A.H., Franci, A., and Marder, E. (2014) Cell types, network homeostasis and pathological compensation from a biologically plausible ion channel expression model. Neuron, 82: 809–821. PMCID: PMC4109293.
- O’Leary, T. and Marder, E. (2016) Temperature-robust neural function from activity-dependent ion channel regulation. Curr. Biol., 26: 2935-2941. PMCID: PMC5111818.
- Haddad, S.A. and Marder, E. (2018) Circuit robustness to temperature perturbation is altered by neuromodulators. Neuron, 100: 609-623. NIHMS 990593.
- Otopalik, A., Pipkin, J.A. and Marder, E. (2019) Neuronal morphologies built for reliable physiology in a rhythmic motor circuit. eLife 2019; 8:e41728 DOI: 10.7554/eLife.41728
- Alonso, L.M. and Marder, E. (2019) Visualization of the relative contributions of conductances in neuronal models with similar behavior and different conductance densities. eLife, 2019; 8:e42722. DOI: https://doi.org/10.7554/eLife.42722
Friday, July 05
Wenbiao Gan
Title: Experience and sleep-dependent neuronal plasticity and stability
The mammalian brain not only undergoes rapid synaptic changes critical for information encoding, but is also capable of maintaining most synaptic connections important for long-term information storage. Dendritic spines are the postsynaptic sites of most excitatory synapses in the mammalian brain. In this lecture, I will discuss how novel experiences regulate the remodeling of dendritic spines, as well as the role of sleep in dendritic spine plasticity and maintenance. I will also discuss how inhibitory neurons control the generation and duration of branch-specific dendritic Ca2+ spikes of pyramidal neurons and ensure a balance between neuronal plasticity and stability in the cortex. Because synapses are the key elements for information acquisition and retention, understanding how they are formed and maintained in the living brain provides important insights into the structural basis of learning and memory.
Suggested readings:
- Grutzendler, J., Kasthuri, N., Gan, W-B. (2002) Long-term dendritic spine stability in the adult cortex. Nature. 420:812-816. PMID: 12490949.
- Yang G, Lai CS, Cichon J, Ma L, Li W, Gan W-B. (2014) Sleep promotes branch-specific formation of dendritic spines after learning. Science. 344(6188):1173-8. PMID: 24904169.
- Cichon J, Gan W-B. (2015) Branch-specific dendritic Ca2+ spikes cause persistent synaptic plasticity. Nature 520:180-5. PMID: 25822789.
Saturday, July 06
Jun Izawa
Title: Computational Basis of Neuro-Motor Control
The brain can generate quick and smooth motor movements with surprisingly short latencies. When the movement trajectory is perturbed by a mechanical or visual disturbance, the brain immediately changes the motor commands to correct the error. This error correction changes not only the ongoing trajectory but also the planned motor commands for subsequent movements, by updating motor memory. How does the brain achieve such adaptive motor control? This lecture will provide a framework for understanding these sophisticated features of brain function in motor control, using the language of mathematics and tools from engineering. After reviewing a brief history of computational motor control, we will discuss how the brain makes a motor plan, executes motor commands, and updates motor memory.
Suggested readings:
- The Computational Neurobiology of Reaching and Pointing, Reza Shadmehr and Steven P. Wise (MIT Press)
- Human Robotics: Neuromechanics and Motor Control, Etienne Burdet, David W. Franklin, and Theodore E. Milner (MIT Press)
- Biological learning and control: how the brain builds representations, predicts events, and makes decisions. Shadmehr, Reza, and Sandro Mussa-Ivaldi. (MIT Press)
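One standard formalization of the trial-by-trial error correction described above, in the spirit of the suggested readings, is a linear state-space model of motor memory: on each trial the memory retains a fraction A of its value and learns a fraction B of the latest error. The parameters and perturbation schedule below are illustrative:

```python
import numpy as np

# Trial-by-trial state-space model of motor adaptation:
#   e(n) = p(n) - x(n)           error between perturbation and memory
#   x(n+1) = A*x(n) + B*e(n)     retention plus error-driven learning
A, B = 0.99, 0.1                 # retention and learning rates (toy values)
n_trials = 200
perturb = np.zeros(n_trials)
perturb[50:150] = 1.0            # perturbation switched on, then removed

x = 0.0
memory = np.empty(n_trials)
for n in range(n_trials):
    e = perturb[n] - x           # movement error on this trial
    x = A * x + B * e            # motor-memory update
    memory[n] = x

print("adaptation at trial 149:", round(memory[149], 3))
print("aftereffect at trial 150 (washout begins):", round(memory[150], 3))
```

The model predicts both gradual adaptation toward the perturbation and the aftereffect once it is removed, two signatures discussed in the motor-learning literature.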
Week 3
Monday, July 08
Sukbin Lim
Title: Network models for learning and memory
Reorganization of neuronal circuits through experience-dependent modification of synaptic connections is thought to be one of the mechanisms underlying learning and memory. This idea is supported by in vitro experiments showing long-term changes in synaptic strengths in different slice preparations. However, a single neuron in cortical circuits receives inputs from many neurons, and it is difficult to identify the rule governing the plasticity of an individual synapse from in vivo studies. In this talk, I will discuss a novel method to infer synaptic plasticity rules and principles of neural dynamics from neural activities obtained in vivo. The method was applied to data collected in monkeys performing visual learning tasks. This study connects several experimental works on learning and long-term memory at the cellular and systems level, and could be applied to other cortical circuits to further our understanding of the interactions between circuit dynamics and synaptic plasticity rules.
Greg Stephens
Title: Principles and Possibilities in the Phase Space of Animal Behavior
We all instinctively recognize behavior: it’s what organisms do, from the motility of single cells to the stunning displays of bird flocks. To understand behavior, however, we must characterize complex, living movement as precisely and completely as its underlying molecular, cellular and network mechanisms. Here, we leverage a low-dimensional but complete representation of the posture of the nematode C. elegans to reconstruct a continuous 6D phase space of crawling behavior. The reconstruction separates short- and long-time dynamics, untangles subtle movement trajectories, and offers a quantitative arena for addressing questions of variability and stereotypy. We find that the phase-space dimensions are organized into 3 conjugate pairs, with two positive Lyapunov exponents approximately balanced by dissipative dimensions. We suggest that a near-Hamiltonian dynamics of coupled, chaotic oscillators underlies the motor control of C. elegans. Beyond behavior, we emphasize a new approach to the understanding of complex systems that is grounded in the theory of dynamical systems but incorporates modern ideas from statistical inference to infer dynamical principles directly from data.
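A sketch of the first step in this kind of analysis, delay embedding: reconstructing a multi-dimensional state from a single observed signal. The signal here is a toy noisy oscillation standing in for a posture mode; for a chaotic system, the separation of initially close reconstructed states would grow exponentially at the rate of the largest Lyapunov exponent.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 200, 0.05)
s = np.sin(t) + 0.05 * rng.normal(size=t.size)   # observed scalar signal

def delay_embed(x, dim, lag):
    """Stack lagged copies of x into points of a dim-dimensional space."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

Y = delay_embed(s, dim=6, lag=10)     # a 6-D reconstructed state space
print("reconstructed phase-space points:", Y.shape)

# Two states roughly one oscillation period apart are nearly identical
# here; tracking how such pairs separate over time probes the Lyapunov
# spectrum.  For this regular oscillation the separation stays bounded.
i, j = 0, 126                         # ~one period apart (2*pi / 0.05)
for k in (0, 40, 80):
    print("separation after", k, "steps:", np.linalg.norm(Y[i+k] - Y[j+k]))
```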
Tuesday, July 09
Michael Häusser
Title: All-optical interrogation of neural circuits in behaving animals
Understanding the causal relationship between activity patterns in neural circuits and behavior requires the ability to perform rapid and targeted interventions in ongoing neuronal activity. I will discuss a new closed-loop all-optical strategy for dynamically controlling neuronal activity patterns in awake mice. This involves rapid tailoring and delivery of two-photon optogenetic stimulation based on a readout of activity from simultaneous two-photon imaging of the same neural population. This closed-loop feedback control can be used to clamp spike rates at pre-defined levels, boost weak sensory-evoked responses, and activate network ensembles based on detected activity. By optically 'yoking together' neighboring neurons, it can also be used to induce long-term changes in network dynamics. This approach thus allows the rate and timing of activity patterns in neural circuits to be flexibly manipulated ‘on the fly’ during behavior, and will enable a range of new experiments aimed at cracking the neural code.
Suggested readings:
- London M, Häusser M (2005). Dendritic computation. Annual Review of Neuroscience 28:503-32.
- Mainen ZF, Häusser M, Pouget A (2016). A better way to crack the brain. Nature 539(7628):159-161.
- Packer A.M., Russell L.E., Dalgleish H.W., Häusser M. (2015). Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nature Methods 12(2):140-6.
- Zhang Z, Russell LE, Packer AM, Gauld OM, Häusser M (2018). Closed-loop all-optical interrogation of neural circuits in vivo. Nature Methods 15(12):1037-1040
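The closed-loop logic described above (read out activity, compare to a target, adjust the stimulus) can be caricatured in a few lines. Below is a toy rate clamp with a Poisson neuron and integral feedback; every number is invented, and the real system acts on imaged populations with two-photon optogenetics rather than on a scalar drive:

```python
import numpy as np

# Toy closed-loop rate clamp: estimate the firing rate online from
# observed spikes, then accumulate the error into a stimulation drive
# (integral feedback) so the rate tracks a pre-defined target.
rng = np.random.default_rng(0)
dt, T = 0.01, 20.0               # s
target = 5.0                     # desired firing rate (Hz)
gain, tau_est = 2.0, 0.5         # feedback gain, rate-estimate time constant

drive, rate_est = 0.0, 0.0
for _ in range(int(T / dt)):
    rate_true = max(0.0, 2.0 + drive)               # neuron's rate vs. drive
    spike = 1.0 if rng.random() < rate_true * dt else 0.0   # Poisson spiking
    # Leaky online estimate of the firing rate from the spike train
    rate_est += dt / tau_est * (-rate_est + spike / dt)
    # Integral feedback on the stimulation drive
    drive += dt * gain * (target - rate_est)

print("target:", target, "Hz;  estimated rate:", round(rate_est, 2), "Hz")
```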
Wednesday, July 10
Geoff Goodhill
Title: Modelling neural development: axon guidance, cortical maps and neural coding
To function properly, the brain must wire up billions of neurons in just the right patterns to represent and process information about the world and generate appropriate behaviours. How does it do this? In the first part of the talk I will discuss how growing axons are steered by molecular gradients, and how noise constrains this process. Once axons reach their targets, initially crude patterns of activity are refined by sensory experience. In the second part of the talk I will discuss how simple theoretical principles can explain the impact of experience on visual maps in the cortex. In the last part of the talk I will discuss our most recent work on how neural coding develops in the larval zebrafish brain, and how to separate spontaneous from evoked activity in calcium imaging of large neural populations. All parts of the talk will be based on a tight conversation between mathematical models and experimental data.
Reviews:
- Avitan, L. & Goodhill, G.J. (2018). Code under construction: neural coding over development. Trends in Neurosciences, 41, 599-609.
- Goodhill, G.J. (2016). Can molecular gradients wire the brain? Trends in Neurosciences, 39, 202-211.
Primary research articles:
- Avitan, L., Pujic, Z., Moelter, J., Van De Poll, M., Sun, B., Teng, H., Amor, R., Scott, E.K. & Goodhill, G.J. (2017). Spontaneous activity in the zebrafish tectum reorganizes over development and is influenced by visual experience. Current Biology, 27, 2407-2419.
- Cloherty, S.J., Hughes, N.J., Hietanen, M.A., Bhagavatula, P.S., Goodhill, G.J. & Ibbotson, M.R. (2016). Sensory experience modifies feature map relationships in visual cortex. eLife, 5:e13911.
- Bicknell, B.A., Dayan, P. & Goodhill, G.J. (2015). The limits of chemosensation vary across dimensions. Nature Communications, 6, 7468.
- Forbes, E.M., Thompson, A.W., Yuan, J, & Goodhill, G.J. (2012). Calcium and cAMP levels interact to determine attraction versus repulsion in axon guidance. Neuron, 74, 490-503.
- Mortimer D, Feldner J, Vaughan T, Vetter I, Pujic Z, Rosoff WJ, Burrage K, Dayan P, Richards LJ, Goodhill GJ (2009). A Bayesian model predicts the response of axons to molecular gradients. Proc. Natl. Acad. Sci. USA, 106, 10296-10301.
Thursday, July 11
Kenji Doya
Title: Bayesian brain and meta-learning
Following up on the basics of reinforcement learning and Bayesian inference from the first week, we now look into behavioral studies suggesting that people and animals use Bayesian-like computation in multimodal or dynamic sensory integration, and into theories of how such computation can be implemented by populations of neurons.
We will further explore meta-aspects of learning, such as how to tune the parameters of learning algorithms, how to find contexts or latent variables behind complex tasks, and how to select and combine appropriate modules as required.
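As a worked example of the Bayesian computation behind multimodal integration: two Gaussian cues about the same quantity combine by precision weighting, and the posterior is sharper than either cue alone. The numbers are illustrative:

```python
# Bayesian cue integration with two Gaussian cues about one quantity.
mu_v, sigma_v = 10.0, 2.0     # e.g. a visual cue (illustrative numbers)
mu_a, sigma_a = 14.0, 4.0     # e.g. an auditory cue

w_v = sigma_v**-2 / (sigma_v**-2 + sigma_a**-2)   # precision weight = 0.8
mu_post = w_v * mu_v + (1 - w_v) * mu_a           # precision-weighted mean
sigma_post = (sigma_v**-2 + sigma_a**-2) ** -0.5  # combined uncertainty

print("posterior mean:", mu_post)                 # 10.8, pulled toward the
                                                  # more reliable visual cue
print("posterior s.d.:", round(sigma_post, 3))    # ~1.789, below both cues
```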