Lecturers & Abstracts / OCNC2018


Lecturers and Abstracts

Lecturers

Abstracts

Week1. Methods (Kawato, Doya(1), Yazaki-Sugiyama, De Schutter(1), Kuhn, Stephens, De Schutter(2), and Doya(2))

Week2. Neurons, Networks, and Behavior I (Tsaneva-Atanasova, Friston, Brecht, Sporns, Druckmann, and Shibata)

Week3. Neurons, Networks, and Behavior II (Gütig, Giugliano and Fukai, and Bazhenov)

Lecturers / Brief Bio & Message to students

Maxim Bazhenov (University of California San Diego, USA)

I have a background in physics and mathematics and am currently a Professor of Medicine at the University of California, San Diego. The ultimate goal of my lab's research is to understand how the brain processes information and learns, and the mechanisms underlying brain activity in normal and pathological states. To address these questions, we use a broad spectrum of approaches including computational modeling and in vitro electrophysiology. My specific research interests include sleep and memory consolidation, reinforcement learning and decision making, information coding by neuronal networks, and neuronal mechanisms of epileptic seizures. More recently we have expanded our research toward understanding what neuroscience can teach us about solving existing problems in artificial intelligence, such as catastrophic forgetting of past memories after new training.


In my lecture, I will first talk about the basic properties of sleep and the cellular- and circuit-level mechanisms behind characteristic sleep EEG rhythms. I will then discuss the role of sleep rhythms in the consolidation of new memories. Finally, I will present some of my lab's recent results on how sleep can protect old memories from interference and forgetting after new learning.

 

Michael Brecht (Bernstein Center for Computational Neuroscience Berlin, Germany)

I grew up in Muenster, Germany and started to study biochemistry at the University of Tuebingen. I soon became interested in animal behavior and did a one-year internship at Hubbs Sea World Research Institute, San Diego, working on sperm whale bioacoustics. Thus, I decided to finish my degree in biology rather than biochemistry. My diploma work with Bruno Preilowski and Michael Merzenich at the University of California San Francisco was concerned with rat vibrissal behaviors. Subsequently I did a PhD in the lab of Wolf Singer on the role of synchronization of neural activity in the superior colliculus in the control of eye movements. After my PhD I felt that we would not succeed in understanding population activity without improved knowledge of cellular computations in vivo. I therefore joined Bert Sakmann’s lab as a postdoc to do in vivo whole-cell recordings in rat barrel cortex. From there I went first to Erasmus University, Rotterdam as an assistant professor and then to Humboldt University, Berlin, where I became a full professor.
Our research group works on (1) the meaning of single-neuron activity, (2) cellular mechanisms of complex somatosensory-mediated behaviors, (3) spatial representation, and (4) social representations in the forebrain.

Erik De Schutter (OIST)

I was born in Antwerp, Belgium, where I studied medicine and received my MD in 1984. I subsequently specialized as a neuropsychiatrist but started working on computational modeling at the same time. This led to a research career in computational neuroscience, first as a postdoc at Caltech, then running my own group at the University of Antwerp, and since 2007 at OIST.

I taught for 19 years at European CNS summer schools and was part of the last ten OCNCs. It is always exciting to meet the diverse groups of highly motivated young scientists attending our courses. Summer courses have an important function in teaching computational methods and approaches, and in establishing social networks among the participants. Ideally every neuroscientist, including experimentalists and clinicians, should attend a CNS course, because computational methods have become essential tools for understanding the complex systems we study. There is a broad range of modeling approaches available. I have specialized in data-driven, bottom-up methods that are more accessible to experimentalists because they are mainly parameter driven. These include large compartmental models of neurons with active dendrites, networks with realistic connectivity using conductance-based neuron models, and stochastic reaction-diffusion models of molecular interactions. I will not have time to present much of our research, but feel free to ask me or my collaborators about our work! Most of the work in my lab concerns the cerebellum, including its principal neuron, the Purkinje cell.


Kenji Doya (OIST)

Kenji Doya received his BS in 1984, MS in 1986, and PhD in 1991 from the University of Tokyo. He became a research associate at the University of Tokyo in 1986, UC San Diego in 1991, and the Salk Institute in 1993. He joined ATR in 1994 and became head of the Computational Neurobiology Department at ATR Computational Neuroscience Laboratories in 2003. In 2004, he was appointed principal investigator of the Neural Computation Unit at the Okinawa Institute of Science and Technology (OIST) and started the Okinawa Computational Neuroscience Course (OCNC) as its chief organizer. When OIST was re-established as a graduate university in 2011, he became a professor and the vice provost for research.

He has served as co-editor-in-chief of Neural Networks since 2008 and received the Donald O. Hebb Award from the International Neural Network Society in 2018.
He is interested in understanding the functions of the basal ganglia and neuromodulators based on the theory of reinforcement learning.


Michele Giugliano (University of Antwerp, Belgium)

Michele Giugliano graduated in Electronic Engineering in 1997 from the University of Genova (Italy) and received his PhD in Bioengineering and Computational Neuroscience in 2001 from the Polytechnic of Milan (Italy).

He then received an award from the Human Frontier Science Program Organisation to pursue postdoctoral training in experimental electrophysiology at the Institute of Physiology of the University of Bern (Switzerland), working with Prof. Hans-Rudolf Luescher and Prof. Stefano Fusi.

In 2005, he joined the experimental laboratory of Prof. Henry Markram at the Brain Mind Institute of the Swiss Federal Institute of Technology in Lausanne as a junior group leader, and in 2008 he was appointed a faculty member at the University of Antwerp (Belgium), taking over the Theoretical Neurobiology lab founded by Prof. Erik De Schutter and extending its scope to interdisciplinary research in experimental neuroscience and neuroengineering. From 2013 to 2015, he was a visiting scientist at the Neuroelectronics Flanders Institute at IMEC, Leuven.

Today, he is a full professor in Antwerp and retains visiting academic positions in Lausanne and at the University of Sheffield (UK).


 

Bernd Kuhn (OIST)

I studied physics at the University of Ulm, Germany. For my diploma and PhD I moved to the Max Planck Institute of Biochemistry in Martinsried, Germany, focusing on the development of novel voltage-sensitive dyes and their application in cultured neurons. To optimize voltage imaging and to use voltage-sensitive dyes with two-photon excitation, I accepted a postdoctoral fellowship at the Max Planck Institute for Medical Research, Heidelberg, Germany. In my second postdoc position, at Princeton University, NJ, USA, I made viral vectors delivering the genes of calcium indicators and used them for in vivo imaging in the cerebellum. Additionally, I used voltage-sensitive dyes for in vivo two-photon imaging in the barrel cortex of the mouse. Since 2010 I have been working at OIST (Optical Neuroimaging Unit). We mainly work on in vivo voltage and calcium two-photon imaging and on methods development for neuroscience.


Kazuhisa Shibata (Nagoya University, Japan)

I am broadly interested in how we see (perception), how we make a decision (decision-making), and how we change through experience (learning) with an emphasis on conscious and unconscious processing. Our research group takes an interdisciplinary approach including psychophysics, magnetic resonance imaging (MRI), machine learning, and computational modeling.


Olaf Sporns (Indiana University, USA)

Network neuroscience is an emerging field at the intersection of traditional neuroscience and more recent approaches to studying the structure and function of complex networks. I hope to convey some of the exciting, cutting-edge research currently pursued in the field. I also hope to discuss some promising future directions as well as some inherent limitations.


Greg Stephens (OIST)

My research in theoretical biophysics is focused on the interface of physics and biology, broadly defined, and I explore a diverse set of problems ranging from the behavior of small organisms to the natural dynamics of human cognition. In all of my efforts I combine cross-disciplinary experimental collaborations with ideas drawn broadly from areas such as statistical physics, dynamical systems, and information theory to seek unifying principles and simple, functional understanding in living systems. Trained in quantum gravity and early-universe cosmology, I was fortunate, as I switched focus, to experience a course such as this one, and I hope that you will enjoy a similarly remarkable experience.


Krasimira Tsaneva-Atanasova (University of Exeter, UK)

After studying in Bulgaria (MSc Mathematics) and New Zealand (PhD Applied Mathematics), Professor Krasimira Tsaneva-Atanasova trained further as a postdoctoral fellow at the National Institutes of Health in the US and the École Normale Supérieure in Paris before obtaining an academic position in the UK. She spent six years at the University of Bristol before joining the University of Exeter in 2013, where she currently holds a personal chair in Mathematics for Healthcare. She enjoys the beauty of mathematics and its applications and tries to share it at every occasion with her students and colleagues and, whenever there is an opportunity, with the public.


Yoko Yazaki-Sugiyama (OIST)

My research interest is to understand the brain functions that allow animals to behave properly. Our research unit is currently working on the neuronal mechanisms of the developmental ‘critical period’, during which neurons are plastic and actively shape their networks, using bird song learning as a model system. Why do birds learn to sing only within a limited time window? We are exploring the mysteries of the bird brain to answer these questions.




Lecture Abstracts

Week1

Monday, June 25

Mitsuo Kawato

Title: Computational Neuroscience contributing to Causal System Neuroscience

Abstract:

I will give a brief description of my research history around computational neuroscience and causal system neuroscience. 40 years ago, when I started theoretical studies in neuroscience, I was not certain whether they could lead to a serious and respectable discipline. 35 years ago, I read David Marr’s arguments for the necessity of three levels of brain research (computational, algorithmic, and hardware) and became at least conceptually confident about the necessity and validity of computational neuroscience. 30 years ago, I proposed a cerebellar internal model and felt that the theory incorporated Marr’s three levels. 25 years ago, I started integrating experimental and computational approaches. 20 years ago, I coined the term “computational model-based experiment”. 15 years ago, I started brain-machine interface studies integrating robotics, computation, and experimental approaches. 10 years ago, I proposed a causal system neuroscience based on these previous studies.
One of the most important hypotheses in neuroscience is that the human mind is caused by specific spatiotemporal activity patterns in the brain. This is a central hypothesis for computational and system neuroscience, but it has never been experimentally examined. A major reason for this failure is that most neuroscientists, from the beginning, gave up on the possibility of experimentally controlling spatiotemporal brain activity in humans; note that optogenetics is not capable of this. Sophisticated manipulation of the firing patterns of many neurons across a whole brain is a very ambitious but essential experimental tool for making neuroscience causal. Decoded neurofeedback (DecNef) is a novel method that fulfills this requirement by combining real-time fMRI neurofeedback, decoding of multi-voxel patterns by sparse machine-learning algorithms, and reinforcement learning by human participants, while avoiding the “curse of dimensionality”.

Kenji Doya  

Title: Introduction to numerical methods for differential equations

Abstract:

This tutorial introduces the basic concepts of differential equations and how to solve them, or simulate their behavior in time, using a computer. Key concepts like eigenvalues and stability are explained while solving simple differential equations. Some examples of Hodgkin-Huxley-type neuron models are introduced.
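To make this concrete, here is a minimal sketch (an illustration, not course code) of the simplest such solver, forward Euler, applied to a passive membrane equation; because this linear system's single eigenvalue is -1/tau, the scheme is stable only when dt < 2*tau:

    import numpy as np

    # Passive membrane: dV/dt = (-(V - E_L) + R*I) / tau
    tau, E_L, R, I = 10.0, -70.0, 1.0, 15.0   # ms, mV, MOhm, nA (placeholder values)
    dt, T = 0.1, 100.0                        # time step and duration (ms)

    t = np.arange(0.0, T, dt)
    V = np.empty_like(t)
    V[0] = E_L
    for k in range(1, len(t)):
        dVdt = (-(V[k - 1] - E_L) + R * I) / tau
        V[k] = V[k - 1] + dt * dVdt           # forward-Euler update

    print(f"V(T) = {V[-1]:.1f} mV; analytic steady state = {E_L + R * I:.1f} mV")

Rerunning with dt > 2*tau (e.g., dt = 25) makes the numerical solution oscillate and diverge, which is exactly the stability concept the tutorial develops.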

Suggested Reading:

  • Koch C: Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press (1999).

Yoko Yazaki-Sugiyama

Title: Neuronal basis for information processing.

Abstract:

We acquire visual information at the eyes, auditory information at the ears, olfactory information at the nose, and so on; this information is conveyed to the brain, where it is processed and transformed so that we recognize it as a sense. The brain also generates and controls complicated behaviors, and is responsible for aspects of behavior such as feelings and abstract thought.

Neurons are the smallest components of the brain and, by wiring to each other, are the key players in the signal processing that accomplishes these difficult tasks.

In this lecture we will learn about the basic physiological characteristics and mechanisms of neurons to see how these complicated tasks can be performed. We will also try to get an idea of how neurons can compute signals by connecting to each other in clever ways.

Suggested Reading:

  • The Neuron: Cell and Molecular Biology. I.B. Levitan and L.K. Kaczmarek, Oxford University Press

Tuesday, June 26

Erik De Schutter   

Title: Modeling biochemical reactions, diffusion and reaction-diffusion systems

Abstract:

In my first talk I will use calcium dynamics modeling as a way to introduce deterministic solution methods for reaction-diffusion systems. The talk covers exponentially decaying calcium pools, diffusion, calcium buffers and buffered diffusion, and calcium pumps and exchangers. I will describe the properties of buffered diffusion systems and ways to characterize them experimentally. Finally, I will compare the different modeling approaches.
In the second talk I will turn to stochastic reaction-diffusion modeling. Two methods will be described: Gillespie's stochastic simulation algorithm (SSA) extended to simulate diffusion, and particle-based methods. I will briefly describe the STEPS software and give some examples from our research.
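To fix ideas, here is a minimal sketch (an illustration, not STEPS code) of Gillespie's SSA for a single, well-mixed calcium-buffer binding reaction Ca + B -> CaB; the rate constant and molecule counts are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)
    c = 0.001                       # stochastic rate constant (placeholder)
    nCa, nB, nCaB = 100, 80, 0      # molecule counts (placeholders)
    t, t_end = 0.0, 50.0

    while t < t_end and nCa > 0 and nB > 0:
        a = c * nCa * nB                 # propensity of the sole reaction
        t += rng.exponential(1.0 / a)    # exponential waiting time to next event
        nCa, nB, nCaB = nCa - 1, nB - 1, nCaB + 1

    print(f"t = {t:.2f}: Ca = {nCa}, B = {nB}, CaB = {nCaB}")

With several reactions, the SSA additionally draws which reaction fires next in proportion to its propensity; the spatial extension mentioned in the talk treats diffusive jumps between subvolumes as extra first-order "reactions".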

Suggested Readings:

  • U.S. Bhalla and S. Wils: Reaction-diffusion modeling. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
  • E. De Schutter: Modeling intracellular calcium dynamics. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)
  • G. Antunes and E. De Schutter: A stochastic signaling network mediates the probabilistic induction of cerebellar long-term depression. Journal of Neuroscience 32: 9288–9300. (2012).
  • F. Santamaria, S. Wils, E. De Schutter and G.J. Augustine: Anomalous diffusion in Purkinje cell dendrites caused by dendritic spines. Neuron 52: 635–648 (2006).

 

Wednesday, June 27

Bernd Kuhn

1. Ion channel physiology and the Hodgkin-Huxley model of neuronal activity

In my first lecture I will talk about electrical activity in neurons. I will start with the basics of ion channels, specifically focusing on voltage-gated channels and their dynamics in response to membrane voltage. Neurons use a combination of different voltage-gated channels to generate fast (about 1 ms), depolarizing action potentials. I will explain the first action potential model, by Hodgkin and Huxley. Finally, I will discuss more recent additions and fine-tuning of the time-honored Hodgkin-Huxley model.
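For reference, here is a minimal sketch (an illustration, not lecture code) of the classic Hodgkin-Huxley model with the standard squid-axon parameters (shifted so that rest is near -65 mV), integrated by forward Euler:

    import numpy as np

    # Voltage-dependent rate constants (V in mV, rates in 1/ms).
    def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
    def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3    # uF/cm^2 and mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4          # reversal potentials (mV)
    dt, T, I = 0.01, 50.0, 10.0               # ms, ms, uA/cm^2

    V, n, m, h = -65.0, 0.32, 0.05, 0.6       # resting state
    spikes = 0
    for _ in np.arange(0.0, T, dt):
        INa = gNa * m**3 * h * (V - ENa)      # fast sodium current
        IK = gK * n**4 * (V - EK)             # delayed-rectifier potassium
        IL = gL * (V - EL)                    # leak
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        V_new = V + dt * (I - INa - IK - IL) / C
        if V < 0.0 <= V_new:                  # upward zero crossing = one spike
            spikes += 1
        V = V_new

    print(f"{spikes} action potentials in {T:.0f} ms")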

2. Functional optical imaging

Functional optical imaging has become one of the key techniques in neuroscience. In my second lecture I will introduce fluorescence and the most important imaging methods. I will explain what we can learn from them, but also discuss their limitations.

Suggested Readings:

  • Johnston and Wu: Foundations of Cellular Neurophysiology, MIT Press
  • Helmchen, Konnerth: Imaging in Neuroscience, 2011
  • Yuste, Lanni, Konnerth: Imaging Neurons, 2000

Thursday, June 28

Greg Stephens   

Title: An introduction to dynamical systems: from neural activity to natural behavior

Abstract:

My lecture will consist of two parts: an introduction to dynamical systems, focused in particular on the power of qualitative analysis, and a novel, quantitative approach to understanding the motion of the nematode C. elegans.

Indeed, while there has been an explosion in our ability to characterize the dynamics of molecules, cells, and circuits, our understanding of behavior on the organism scale is remarkably less advanced. Here, we use high-resolution video microscopy to show that the space of worm shapes is low-dimensional, with just four dimensions accounting for 95% of the shape variance. Projections of worm shape along these four “eigenworms” provide a precise yet substantially complete description of locomotory behavior, capturing both classical motions such as forward crawling, reversals, and Ω-turns and novel behaviors such as “pause” states at particular postures. We use the eigenworms to construct a stochastic model of the body-wave dynamics that predicts transitions between attractors corresponding to abrupt reversals in crawling direction, and we show that the noise amplitude decreases systematically with increasing time away from food, resulting in longer bouts of forward crawling and suggesting that worms use noise to adaptive benefit.
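The eigenworm decomposition is, at heart, principal component analysis of posture time series. Here is a minimal sketch (an illustration on random stand-in data, not the actual worm recordings):

    import numpy as np

    # Rows: video frames; columns: tangent angles along the body.
    # Random data stands in for real posture measurements.
    rng = np.random.default_rng(1)
    angles = rng.standard_normal((5000, 100))

    X = angles - angles.mean(axis=0)          # center each body segment
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / np.sum(s**2)                 # variance explained per mode

    eigenworms = Vt[:4]                       # the four leading posture modes
    coeffs = X @ eigenworms.T                 # per-frame projections onto them
    print(f"first 4 modes explain {var[:4].sum():.1%} of the shape variance")

On real posture data the first four modes capture about 95% of the variance; on this random stand-in they capture far less, which is precisely the point of the low-dimensionality result.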

Friday, June 29

Erik De Schutter

Title: Introduction to modeling neurons.

Abstract:

I will discuss methods to model single neurons, going from very simple to morphologically detailed. I will briefly introduce cable theory, the mathematical description of current flow in dendrites. By discretizing the cable equation we arrive at compartmental modeling, the standard method to simulate morphologically detailed models of neurons. I will discuss the challenges in fitting compartmental models to experimental data, with an emphasis on active properties, and give several examples from my own work. I will also give an overview of dendritic properties predicted by cable theory and experimental data confirming these predictions.
I will finish by describing approaches that go beyond the compartmental model by simulating neurons in 3D.
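For orientation (added here; a standard textbook form, not taken from the lecture), the passive cable equation for the membrane potential V(x, t) is

\[
\lambda^2 \frac{\partial^2 V}{\partial x^2} \;=\; \tau_m \frac{\partial V}{\partial t} + V,
\qquad \lambda = \sqrt{r_m / r_a}, \quad \tau_m = r_m c_m ,
\]

and discretizing space into compartments j of length \Delta x yields the compartmental form

\[
c_m \frac{dV_j}{dt} \;=\; \frac{V_{j-1} - 2V_j + V_{j+1}}{r_a\,\Delta x^2} \;-\; \frac{V_j}{r_m} \;+\; I_j ,
\]

where \lambda is the length constant, \tau_m the membrane time constant, and r_m, r_a, c_m the usual membrane and axial cable parameters.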

Suggested Reading:

  • Several chapters in Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston (2009).

Week2

Monday, July 02

Krasimira Tsaneva-Atanasova

Title: Modelling Calcium Dynamics

Abstract:

Calcium is a ubiquitous second messenger that controls a plethora of cellular processes, such as gene expression, secretion of neurotransmitters and hormones, and synaptic plasticity. Calcium signals are not only mediated via, but also generate, complex temporal and spatial interactions between ion channels, receptors, pumps, exchangers, and buffers located on the surface of and within various cellular compartments. In order to accomplish their function, calcium ions flow across cellular membranes and diffuse and react with calcium-sensitive proteins inside the cell as well as on the cellular membrane(s). Local calcium micro-domains and gradients are therefore extremely important for normal cell function. However, due to the irregularity of cellular geometry and the nature of calcium dynamics, it is very difficult, and in some cases impossible, to experimentally resolve local calcium concentrations with the precision necessary to understand vital physiological processes, such as the induction of synaptic plasticity in the brain. The complexity of calcium dynamics has motivated a great body of mathematical modelling work employing partial differential equations (PDEs) and/or dynamical systems theory. Parabolic PDEs of reaction-diffusion type have been used to model intracellular calcium dynamics. Systems of coupled nonlinear ordinary differential equations (ODEs), very often involving multiple time scales, have been used to describe the temporal evolution of various calcium release mechanisms while roughly approximating, or largely ignoring, the spatial dependence of such processes. Irregular geometrical domains can be handled efficiently by, for example, finite-element spatial discretisation. In this lecture I will introduce the mathematical techniques commonly used in modelling calcium dynamics, with particular emphasis on applications to neuroscience.
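As a concrete example of such a parabolic reaction-diffusion PDE (a standard textbook form, added here for orientation), free calcium c reacting with a single mobile buffer (free concentration b, total concentration b_T) obeys

\[
\frac{\partial c}{\partial t} = D_c \nabla^2 c - k^{+} c\,b + k^{-}(b_T - b) + J(\mathbf{x}, t),
\qquad
\frac{\partial b}{\partial t} = D_b \nabla^2 b - k^{+} c\,b + k^{-}(b_T - b),
\]

where D_c and D_b are diffusion coefficients, k^{+} and k^{-} the buffer binding and unbinding rates, and J collects the fluxes through channels, pumps, and exchangers. Dropping the diffusion terms recovers the ODE descriptions mentioned above.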

Suggested Readings:

  • Blackwell, K.T. (2013). Approaches and tools for modeling signaling pathways and calcium dynamics in neurons. J Neurosci Methods 220(2), 131-140. doi: 10.1016/j.jneumeth.2013.05.008.
  • De Schutter, E., and Smolen, P. (1998). Calcium dynamics in large neuronal models. Methods in neuronal modeling: From ions to networks 2.
  • Rudiger, S. (2013). Stochastic models of intracellular calcium signals. Physics Reports.
  • Saftenku, E., and Friel, D.D. (2012). "Combined Computational and Experimental Approaches to Understanding the Ca2+ Regulatory Network in Neurons," in Calcium Signaling (Springer), 569-601.
  • Thul, R. (2014). Translating intracellular calcium signaling into models. Cold Spring Harbor Protocols, 2014(5), pdb-top066266.
  • Thul, R., Bellamy, T. C., Roderick, H. L., Bootman, M. D., & Coombes, S. (2008). Calcium oscillations. In Cellular Oscillatory Mechanisms (pp. 1-27). Springer, New York, NY.
  • Keener, J., & Sneyd, J. (2010). Mathematical Physiology: I: Cellular Physiology. Springer Science & Business Media. (Chapter 7 Calcium Dynamics pp. 273)

Tuesday, July 03

Karl Friston

Title: Active inference and the free energy principle
 
This presentation comprises three parts, covering various aspects of the free energy principle. The first part deals with biological self-organisation in dynamical systems like the brain and how this leads to predictive coding and embodied (active) inference. The second part deals with equivalent formulations of active inference for models of the world that can be described in terms of discrete states (and time steps). We will conclude by considering recent advances in Bayesian model selection and averaging in the brain that provide a formalism for artificial curiosity and insight.
 
Part 1: I am therefore I think
 
Abstract: This overview of the free energy principle offers an account of embodied exchange with the world that associates conscious operations with actively inferring the causes of our sensations. Its agenda is to link formal (mathematical) descriptions of dynamical systems to a description of perception in terms of beliefs and goals. The argument has two parts: the first calls on the lawful dynamics of any (weakly mixing) ergodic system – from a single cell organism to a human brain. These lawful dynamics suggest that (internal) states can be interpreted as modelling or predicting the (external) causes of sensory fluctuations. In other words, if a system exists, its internal states must encode probabilistic beliefs about external states. Heuristically, this means that if I exist (am) then I must have beliefs (think). The second part of the argument is that the only tenable beliefs I can entertain about myself are that I exist. This may seem rather obvious; however, if we associate existing with ergodicity, then (ergodic) systems that exist by predicting external states can only possess prior beliefs that their environment is predictable. It transpires that this is equivalent to believing that the world – and the way it is sampled – will resolve uncertainty about the causes of sensations. We will conclude by looking at the epistemic behaviour that emerges under these beliefs, using simulations of active inference.
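For orientation (added here; the standard form from the variational literature), the free energy referred to throughout is

\[
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\Vert\,p(s \mid o)\right] - \ln p(o),
\]

so that minimising F with respect to the beliefs q(s) drives them toward the posterior over external states s given sensations o, while F itself is an upper bound on surprise, -\ln p(o).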
 
 
Part 2: Active inference and belief propagation in the brain
 
Abstract: This part considers deep temporal models in the brain. It builds on previous formulations of active inference to simulate behaviour and electrophysiological responses under deep (hierarchical) generative models of discrete state transitions. The deeply structured temporal aspect of these models means that evidence is accumulated over distinct temporal scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour in terms of Bayesian belief updating, and the associated neuronal processes, to reproduce the epistemic foraging seen in reading. These simulations reproduce the sort of perisaccadic delay-period activity and local field potentials seen empirically, including evidence accumulation and place-cell activity. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively. These simulations are presented as an example of how to use basic principles to constrain our understanding of system architectures in the brain, and of the functional imperatives that may apply to neuronal networks.
 
Part 3: Active inference and artificial curiosity
 
Abstract: This part offers a formal account of insight and learning in terms of active (Bayesian) inference. It deals with the dual problem of inferring states of the world and learning its statistical structure. In contrast to current trends in machine learning (e.g., deep learning), we focus on how agents learn from a small number of ambiguous outcomes to form insight. I will use simulations of abstract rule-learning and approximate Bayesian inference to show that minimising (expected) free energy leads to active sampling of novel contingencies. This epistemic, curiosity-directed behaviour closes ‘explanatory gaps’ in knowledge about the causal structure of the world, thereby reducing ignorance in addition to resolving uncertainty about states of the known world. We then move from inference to model selection, or structure learning, to show how abductive processes emerge when agents test plausible hypotheses about symmetries in their generative models of the world. The ensuing Bayesian model reduction evokes mechanisms associated with sleep and has all the hallmarks of ‘aha moments’.
 
Key words: active inference ∙ cognitive ∙ dynamics ∙ free energy ∙ epistemic value ∙ self-organization

Wednesday, July 04

Michael Brecht

Title: Sex, touch & tickle: The cortical neurobiology of physical contact

Abstract:

The cerebral cortex is the largest structure in mammalian brains. While we know much about cortical responses to controlled, experimenter-imposed sensory stimuli, we have only a limited understanding of cortical responses evoked by complex social interactions. In my lecture, I will focus on response patterns evoked by social touch in the somatosensory cortex of interacting rats. We find that social touch evokes stronger responses than object touch or free whisking. Moreover, we find prominent sex differences in responsiveness. We observe a modulation of cortical activity with the estrus cycle in females, and in particular a modulation of fast-spiking interneurons by estrogens. The prominent sex differences are unexpected, since the somatosensory cortex is not anatomically sexually dimorphic. Recently, we confirmed the absence of anatomical sex differences in somatosensory cortex through an analysis of somatosensory representations of the genitals. Despite the marked external sexual dimorphism of the genitals, we observed a stunning similarity between the cortical maps representing the clitoris and the penis. In the final part of my presentation I will discuss the involvement of somatosensory cortex in ticklishness. In these experiments we habituated rats to being tickled and found that the animals respond to such stimulation with vocalizations. Importantly, rats seem to enjoy tickling and seek out such tactile contacts. In our physiological recordings we observed numerous cells in trunk somatosensory cortex that were either inhibited or excited by tickling. Most interestingly, excitatory or inhibitory responses to tickling predicted excitatory or inhibitory cortical response patterns during play behavior. Microstimulation of deep-layer neurons in the somatosensory cortex evoked vocalizations similar to those evoked by tickling. Thus, stimulation and recording data suggest a critical role of somatosensory cortex in mediating ticklishness and in the control of playful behaviors.

Suggested readings:

  • Ishiyama S, Brecht M (2016) Neural correlates of ticklishness in the rat somatosensory cortex. Science 354(6313):757-760
  • Lenschow C, Copley S, Gardiner JM, Talbot ZN, Vitenzon A, Brecht M (2016) Sexually Monomorphic Maps and Dimorphic Responses in Rat Genital Cortex. Curr Biol.26(1):106-13. Epub 2015 Dec 24.
  • Lenschow C, Sigl-Glöckner J, Brecht M (2017) Development of rat female genital cortex and control of female puberty by sexual touch. PLoS Biology, 15(9), e2001283.

Thursday, July 05

Olaf Sporns

Title: Network Neuroscience: Structure and Dynamics of Complex Brain Networks

Abstract:

Modern neuroscience is in the middle of a transformation, driven by the development of novel high-resolution brain mapping and recording technologies that deliver increasingly large and detailed “big neuroscience data”. Network science has emerged as one of the principal approaches to model and analyze neural systems, from individual neurons to circuits and systems spanning the whole brain. A core theme of network neuroscience is the comprehensive mapping of anatomical and functional brain connectivity, also called connectomics. In this lecture I will review the foundations, current themes, and future directions of network neuroscience. Topics will include: basic concepts and methods of network science; an introduction to connectomics; comparative studies of brain networks across different species; mapping and analysis of human brain networks; functional connectivity and brain dynamics; and computational models for mapping information flow and communication dynamics. The goal is to provide an introduction to these different aspects of network neuroscience, and to critically examine and discuss the promise and limitations of the approach.
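To make the basic measures of network science concrete, here is a minimal sketch (an illustration, not course material) using the networkx library, with a synthetic small-world graph standing in for a measured brain network:

    import networkx as nx

    # Watts-Strogatz small-world graph as a stand-in for a brain network:
    # 100 nodes, each wired to 6 neighbors, 10% of edges randomly rewired.
    G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

    # High clustering combined with a short characteristic path length is
    # the classic small-world signature reported for many brain networks.
    print("average clustering coefficient:", nx.average_clustering(G))
    print("characteristic path length:", nx.average_shortest_path_length(G))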

Suggested Readings:

  • Bassett DS, Sporns O (2017) Network neuroscience. Nature Neuroscience 20, 353-364.
  • van den Heuvel MP, Bullmore ET, Sporns O (2016) Comparative connectomics. Trends Cogn Sci 20, 345-361.
  • Sporns O, Betzel RF (2016) Modular brain networks. Annu Rev Psychol 67, 613-640.
  • Petersen SE, Sporns O (2015) Brain networks and cognitive architectures. Neuron 88, 207-219.
  • Sporns O (2014) Contributions and challenges for network models in cognitive neuroscience. Nature Neurosci 17, 652-660.

Saturday, July 07

Kazuhisa Shibata

Title: Mechanisms of visual perceptual learning

Abstract:

Visual perceptual learning (VPL) is defined as a long-term increase in visual performance resulting from visual experience, together with the processes that govern it. VPL is regarded as a manifestation of visual and brain plasticity. Clarifying VPL would lead to a better understanding of the basic mechanisms behind visual and brain plasticity, which may, in turn, lead to interventions that ameliorate diseases affecting vision and other pathological or age-related visual declines.

In this lecture, I will first give an introduction to VPL: how VPL is characterized in terms of behavioral changes and what neural processes are involved in it. Second, I will describe how different computational models have accounted for certain aspects of VPL. Finally, I will present our recent work aimed at building a unified model of VPL.

Suggested Readings:

  • Sasaki Y, Nanez JE, Watanabe T. Advances in visual perceptual learning and plasticity. Nat Rev Neurosci, 2009.
  • Sagi D. Perceptual learning in vision research. Vis Res, 2011.
  • Shibata K, Sagi D, Watanabe T. Two-stage model in perceptual learning: toward a unified theory. Ann NY Acad Sci, 2014.

Week3

Monday, July 09

Robert Gütig

Title: Information processing and learning in spiking neural networks.

Suggested Readings:

  • The tempotron: a neuron that learns spike timing–based decisions. R Gütig, H Sompolinsky, Nature Neuroscience 9 (3), 420
  • Spiking neurons can discover predictive features by aggregate-label learning. R Gütig, Science 351 (6277), aab4113
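Since no abstract was provided, here is a minimal sketch (my construction, a simplified version of the tempotron from the first reading above; all pattern statistics and learning parameters are placeholders). The neuron sums PSP kernels from its input spikes, classifies a pattern by whether the peak voltage crosses threshold, and on errors shifts each weight by that afferent's summed kernel at the time of the peak:

    import numpy as np

    tau, tau_s = 15.0, 3.75                  # membrane / synaptic time constants (ms)
    T, dt = 500.0, 1.0                       # pattern duration and time grid (ms)
    times = np.arange(0.0, T, dt)

    # Normalized double-exponential PSP kernel with peak value 1.
    t_pk = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)
    V0 = 1.0 / (np.exp(-t_pk / tau) - np.exp(-t_pk / tau_s))
    def psp(t):
        return V0 * np.where(t > 0, np.exp(-t / tau) - np.exp(-t / tau_s), 0.0)

    rng = np.random.default_rng(0)
    n_in, n_pat = 50, 10
    patterns = [[rng.uniform(0, T, 5) for _ in range(n_in)]  # 5 spikes per afferent
                for _ in range(n_pat)]
    labels = rng.integers(0, 2, n_pat)       # arbitrary binary target classes
    w = rng.normal(0.0, 0.01, n_in)          # synaptic efficacies
    theta, lr = 1.0, 0.005                   # firing threshold and learning rate

    for epoch in range(200):
        errors = 0
        for pat, y in zip(patterns, labels):
            # K[i, k]: summed PSP of afferent i at grid time k.
            K = np.array([psp(times[:, None] - st[None, :]).sum(axis=1)
                          for st in pat])
            V = w @ K                        # voltage trace (no reset, as in the model)
            k_max = int(np.argmax(V))
            if (V[k_max] >= theta) != y:     # classification error
                w += lr * (1.0 if y else -1.0) * K[:, k_max]
                errors += 1
        if errors == 0:
            break
    print(f"epoch {epoch + 1}: {errors} errors")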

Tuesday, July 10

Michele Giugliano  

Title: The dynamical response properties of cortical neurons

Abstract:

Earlier theoretical studies of simplified neuronal models suggested that the joint firing activity of ensembles of cortical neurons may relay rapidly varying components of their synaptic inputs downstream with no attenuation. Information transmission in networks of weakly coupled model neurons may in fact overcome the limits imposed by spike refractoriness and the slow integration of individual cells, effectively extending their input-output bandwidth.

My lab was the first to experimentally explore and test this hypothesis. We designed a stimulation protocol to directly probe the (dynamical) response properties of pyramidal cells of the rat neocortex in vitro, by means of patch-clamp recordings. This identifies the linear transfer function of neurons, linking (recreated) synaptic inputs to the firing probability. In the Fourier domain, this corresponds to the magnitude and phase of the response to progressively faster oscillating inputs. Interestingly, such a characterisation offers deeper access to the biophysics of information processing (e.g., relevant for predicting correlations) than the (stationary) frequency-current curves widely used to classify neuronal phenotypes.

To our surprise, not only did we confirm that pyramidal neurons can track and relay inputs varying in time faster than the cut-off imposed by the membrane's passive electrical properties (~50 cycles/s), but we found that they do so substantially faster (up to ~200 cycles/s) than explained by their ensemble mean firing rates (~10 spikes/s). In addition, above 200 cycles/s neurons attenuate their response with a power-law relationship and a linear phase lag.

Such an unexpectedly broad bandwidth of neuronal dynamics could be qualitatively related to the dynamics of action potential initiation. Exploring and testing this possibility, we found a first indirect confirmation of it in terms of a correlation between action potential onset rapidness and neuronal bandwidth, over a large set of experiments.

A second, more direct confirmation, which will conclude the presentation, came from our recent study in which we applied the same protocols to (healthy) human cortical tissue in vitro, exceptionally obtained from therapeutic resective brain surgery. We found that human L2/3 cortical neurons fire much “steeper” action potentials than rodent neurons of the same layer, and have a much more extended bandwidth, reaching 1000 cycles/s, violating the predictions of existing models and opening intriguing new directions for the phylogenetics of neuronal dynamics.
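For intuition, here is a minimal sketch (my construction under strong simplifications, not the lab's experimental protocol) of the same measurement in a model: a leaky integrate-and-fire neuron receives noisy current whose mean oscillates at a probe frequency, and the gain and phase of the firing-rate modulation are read out from the Fourier component of the spike train at that frequency; all parameter values are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-4, 50.0                    # time step and duration (s)
    tau, Vth, Vr = 20e-3, 1.0, 0.0        # membrane time constant, threshold, reset
    mu0, eps, sigma = 0.9, 0.1, 0.5       # mean drive, modulation depth, noise level
    f = 50.0                              # probe frequency (Hz)

    t = np.arange(0.0, T, dt)
    I = mu0 * (1.0 + eps * np.sin(2 * np.pi * f * t)) \
        + sigma * np.sqrt(tau / dt) * rng.standard_normal(t.size)
    V, spikes = 0.0, []
    for k in range(t.size):
        V += dt * (-V + I[k]) / tau       # Euler step of the LIF dynamics
        if V >= Vth:
            V = Vr
            spikes.append(t[k])

    s = np.asarray(spikes)
    r0 = s.size / T                       # mean firing rate
    r1 = np.sum(np.exp(-2j * np.pi * f * s)) / T   # modulation at f
    print(f"rate {r0:.1f} Hz, relative gain {2 * np.abs(r1) / r0 / eps:.2f}, "
          f"phase {np.angle(r1, deg=True):.0f} deg")

Sweeping f and plotting the resulting gain and phase traces out exactly the kind of transfer function described above.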

Suggested Readings:

  • Brunel et al. (2001) - https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.2186
  • Fourcaud-Trocme' et al. (2003) - http://www.jneurosci.org/lookup/pmidlookup?view=long&pmid=14684865
  • Casti (1977) Linear Dynamical Systems, chapters 1-2 - https://goo.gl/aHejhs

Wednesday, July 11

Tomoki Fukai

Title: Different forms of synaptic plasticity orchestrate cognitive function

Abstract:

The brain's ability to learn and memorize is crucial for the cognitive behavior of animals. In the last few decades, neuroscience has gained many insights into the brain mechanisms of learning and memory. In the first part, I will introduce various types of synaptic plasticity, which are thought to underlie learning and memory in the brain. In particular, I will present some computational models to discuss the role of structural plasticity in neural computation. In the second part, I will consider synaptic plasticity rules in recurrent networks to explore the underlying mechanisms of hippocampal memory processing. In spatial navigation tasks, replay sequences of hippocampal neurons are known to play a pivotal role in the formation of place memory. A conventional theory claims that recurrent networks learn sequential firing patterns through spike-timing-dependent plasticity (STDP) with asymmetric time windows. However, STDP has a symmetric time window in CA3, and the standard view needs to be revised. We show in a computational model that goal-directed sequence memory is formed during reverse replay events under symmetric, but not asymmetric, STDP if combined with short-term synaptic plasticity. Because reverse replay frequently occurs after reward, this form of sequence memory navigates animals to rewarded locations. Finally, I will discuss how a circuit-level understanding of memory processing will advance our understanding of the brain's mechanisms for modeling the external and internal worlds.
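To make the contrast between the two window shapes concrete, here is a minimal sketch (an illustration with placeholder amplitudes and time constants) of asymmetric versus symmetric STDP windows, written as weight change versus the spike-timing difference dt = t_post - t_pre:

    import numpy as np

    A_plus, A_minus, tau = 1.0, 1.0, 20.0    # amplitudes and time constant (ms)

    def stdp_asymmetric(dt):
        # Classic Hebbian window: potentiation only when pre precedes post.
        return np.where(dt >= 0, A_plus * np.exp(-dt / tau),
                        -A_minus * np.exp(dt / tau))

    def stdp_symmetric(dt):
        # Symmetric window (as reported in CA3): independent of spike order.
        return A_plus * np.exp(-np.abs(dt) / tau)

    for d in (-20.0, -5.0, 5.0, 20.0):
        print(f"dt = {d:+5.1f} ms   asymmetric: {float(stdp_asymmetric(d)):+.3f}"
              f"   symmetric: {float(stdp_symmetric(d)):+.3f}")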

Thursday, July 12

Maxim Bazhenov

Title: New learning, sleep and memory consolidation

Abstract:

Memory depends on three general processes: encoding, consolidation and retrieval. Although the vast majority of research has been devoted to understanding encoding and retrieval, recent novel approaches have been developed in both human and animal research to probe mechanisms of consolidation. A story is emerging in which important functions of consolidation occur during sleep and that specific features of sleep appear critical for successful retrieval across a range of memory domains, tasks, and species.
Previously encoded memories can be damaged by the encoding of new memories, especially when the old memories are relevant to the new data and hence can be disrupted by new training, a phenomenon called “catastrophic forgetting”. Sleep can prevent this damage by replaying recent memories along with old, relevant memories. Though multiple lines of evidence point to the role of sleep in memory consolidation, the exact mechanisms remain to be understood.
In my talk, I will first discuss the neuronal- and network-level mechanisms behind the major sleep EEG rhythms and experimental data on memory consolidation. I will then present our new results, obtained in computer simulations, revealing neural substrates of memory consolidation that involve replay of memory-specific spike sequences. Our study predicts that spontaneous reactivation of learned sequences during the sleep spindles and slow waves of NREM sleep represents a key mechanism of memory consolidation, that sleep replay helps to avoid catastrophic forgetting, and that the basic structure of sleep stages provides an optimal environment for the consolidation of competing memories.

Suggested Readings:

  • Wei Y, Krishnan GP, Komarov M, Bazhenov M. Differential roles of sleep spindles and sleep slow oscillations in memory consolidation. bioRxiv 153007; doi: https://doi.org/10.1101/153007 (in press in PLoS Comp Bio)
  • Wei Y, Krishnan GP, Bazhenov M. Synaptic Mechanisms of Memory Consolidation during Sleep Slow Oscillations. J Neurosci. 2016 Apr 13;36(15):4231-47. doi: 10.1523/JNEUROSCI.3648-15.2016.
  • Krishnan GP, Chauvette S, Shamie I, Soltani S, Timofeev I, Cash SS, Halgren E, Bazhenov M. Cellular and neurochemical basis of sleep stages in the thalamocortical network. Elife. 2016 Nov 16;5. pii: e18607. doi: 10.7554/eLife.18607.
  • Timofeev I, Grenier F, Bazhenov M, Sejnowski TJ, Steriade M. Origin of slow cortical oscillations in deafferented cortical slabs. Cereb Cortex. 2000 Dec;10(12):1185-99.
  • Bazhenov M, Timofeev I, Steriade M, Sejnowski T. Spiking-bursting activity in the thalamic reticular nucleus initiates sequences of spindle oscillations in thalamic networks. J Neurophysiol. 2000 Aug;84(2):1076-87.
