OCNC2007


Top | Schedule | Lectures | Projects | People

Okinawa Computational Neuroscience Course 2007

The aim of the Okinawa Computational Neuroscience Course is to provide opportunities for young researchers with theoretical backgrounds to learn the latest advances in neuroscience, and for those with experimental backgrounds to have hands-on experience in computational modeling.

We invite graduate students and postgraduate researchers to participate in the course, held from June 26th through July 13th at an oceanfront seminar house of the Okinawa Institute of Science and Technology.

In the previous three years, OCNC focused on the brain's computation at different levels: Bayesian computation by neural populations (2004), learning and prediction for behavior (2005), and single neurons as computational devices (2006). This year, OCNC will be a comprehensive three-week course covering single neurons, networks, and behaviors with more time for student projects. We invite those who are interested in integrating experimental and computational approaches at each level, as well as in bridging different levels of complexity.

Date:

  • June 25th to July 13th, 2007

Place:

  • OIST Seaside House, Onna Village, Okinawa, Japan

Sponsors:

  • Okinawa Institute of Science and Technology
  • Nara Institute of Science and Technology
  • Japanese Neural Network Society

Co-organizers:

  • Erik De Schutter, Okinawa Institute of Science and Technology
  • Kenji Doya, Okinawa Institute of Science and Technology
  • Klaus Stiefel, Okinawa Institute of Science and Technology
  • Jeff Wickens, Okinawa Institute of Science and Technology

Advisory Board:

  • Sydney Brenner, Okinawa Institute of Science and Technology
  • Mitsuo Kawato, ATR Computational Neuroscience Laboratories
  • Terrence Sejnowski, Salk Institute
  • Torsten Wiesel, Rockefeller University

Theme:

"Neurons, Networks and Behaviors"


Lecturers:

  • Ad Aertsen, University of Freiburg
  • Gordon Arbuthnott, Okinawa Institute of Science and Technology
  • Tom Bartol, Salk Institute
  • Hagai Bergman, Hebrew University of Jerusalem
  • Nathaniel Daw, New York University
  • Sophie Deneve, ENS Paris
  • Erik De Schutter, Okinawa Institute of Science and Technology
  • Markus Diesmann, RIKEN Brain Science Institute
  • Kenji Doya, Okinawa Institute of Science and Technology
  • Michael Hausser, University College London
  • Shin Ishii, Nara Institute of Science and Technology
  • Dieter Jaeger, Emory University
  • Mitsuo Kawato, ATR Computational Neuroscience Laboratories
  • Eve Marder, Brandeis University
  • Read Montague, Baylor College of Medicine
  • Klaus Stiefel, Okinawa Institute of Science and Technology
  • David Terman, Ohio State University
  • Jeff Wickens, Okinawa Institute of Science and Technology


Schedule

Monday, June 25th Check-in
   
   
Tuesday, June 26th Basic Tutorials
   
Track 1 : Math and computing for experimentalists
 
09:00-12:00 Shin Ishii (NAIST)
Introduction: statistical and machine learning based approaches to neurobiology
 
14:00-17:00 Markus Diesmann (RIKEN)
Differential equations, phase planes, and numerics for Computational Neuroscience
   
   
Track 2 : Neurobiology for theoreticians
 
09:00-12:00 Gordon Arbuthnott (OIST)
Brain structure and function - an introduction for non-biologists
   
14:00-17:00 Jeff Wickens (OIST)
Synaptic plasticity and behaviour
   
17:00-19:00 Project Orientation
   
19:00-21:00 Welcome
   
   
Wednesday, June 27th  
   
09:00-12:00 David Terman (Ohio State U)
Bifurcation Analysis of Conductance-Based Neuron Models
   
14:00-16:00 Poster Session 1
 
 
Thursday, June 28th  
   
09:00-12:00 Klaus Stiefel (OIST)
Dendritic computation: From the basics to structure - function relationships
 
14:00-16:00 Poster Session 2
 
 
Friday, June 29th  
   
09:00-12:00 Dieter Jaeger (Emory U)
Ionic and synaptic currents: combined experimental/modeling analysis
 
14:00-16:00 Poster Session 3

16:30-18:30 Klaus Stiefel (OIST)
NEURON Tutorial
 
 
Saturday, June 30th  
   
09:00-12:00 Tom Bartol (Salk Institute)
Realistic 3D simulation of neuronal cell signaling
 
Sunday, July 1st Day off
 
 
Monday, July 2nd  
09:00-12:00 Ad Aertsen (U Freiburg)
Cortical dynamics
 
 
14:00-16:00 Carson Roberts (University of Texas at San Antonio)
GENESIS Tutorial
 
 
Tuesday, July 3rd  
 
09:00-12:00 Eve Marder (Brandeis U)
Understanding small circuit dynamics: the role of neuromodulation and homeostasis
 
14:00-16:00 Werner Van Geit (OIST)
PC Cluster Tutorial
 
 
 
Wednesday, July 4th  
 
09:00-12:00 Sophie Deneve (ENS Paris)
How do neurons and populations of neurons deal with uncertainty? Probabilistic theories of neural coding and computation
 
13:00-18:00 Excursion
 
 
Thursday, July 5th    
 
09:00-12:00 Erik De Schutter (OIST)
The Purkinje cell in the olivocerebellar network
 
 
14:00-16:00 Special Interest Session: Parameter landscape and autotuning
 
 
 
Friday, July 6th  
 
09:00-12:00 Hagai Bergman (Hebrew U)
The neural networks of the basal ganglia: From reinforcement learning to Parkinson's disease
 
 
Saturday, July 7th  
 
09:00-12:00 Michael Hausser (UCL)
Single neuron computation
 
 
14:00-15:00 Jerome Friedman (MIT)
Special Lecture: Particle physics and cosmology
 
 
Sunday, July 8th Day off
 
   
Monday, July 9th  
   
09:00-12:00 Kenji Doya (OIST)
Reinforcement learning and decision making
 
 
Tuesday, July 10th  
09:00-10:00 Torsten Wiesel (Rockefeller University)
Special Lecture: Do we learn to see? The role of nature and nurture in brain development
 
 
   
10:00-13:00 Mitsuo Kawato (ATR)
Computational motor learning
 
 
Wednesday, July 11th  
   
09:00-12:00 Gail Tripp and Jeff Wickens (OIST)
Reinforcement mechanisms: from human behaviour to cellular mechanisms
 
14:00-18:00 Project Presentation 1
   
   
Thursday, July 12th  
   
09:00-12:00 Nathaniel Daw (NYU)
Cognition and planning
 
14:00-18:00 Project Presentation 2
   
19:00-21:00 Farewell
   
   
Friday, July 13th Check-out


Lectures

Tutorial 1: Math and Computing for Experimentalists
Shin Ishii (NAIST)
Markus Diesmann (RIKEN)
Tutorial 2: Neurobiology for Theoreticians
Gordon Arbuthnott (OIST)
Jeff Wickens (OIST)
David Terman (Ohio State University)
Klaus Stiefel (OIST)
Dieter Jaeger (Emory University)
Tom Bartol (Salk Institute)
Eve Marder (Brandeis University)
Ad Aertsen (Universität Freiburg)
Sophie Deneve (ENS Paris)
Erik De Schutter (OIST)
Hagai Bergman (Hebrew University)
Michael Häusser (University College London)
Jerome Friedman (MIT)
Kenji Doya (OIST)
Torsten Wiesel (Rockefeller University)
Mitsuo Kawato (ATR)
Gail Tripp and Jeff Wickens (OIST)
Nathaniel Daw (New York University)

Lecture Title, Abstract, and Suggested Readings:

Shin Ishii

Introduction: statistical and machine learning based approaches to neurobiology

In this introductory lecture, I present some basic ideas and techniques in statistics and machine learning, which can be applied to modeling and data analyses in neurobiological studies. Statistical inference, e.g., maximum likelihood estimation and Bayesian estimation, are important and useful not only for model fitting to data but also for various kinds of modeling of neurobiological systems. Information theoretical methods are also important for evaluating the amount of information processed by such systems.
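As a toy illustration of the contrast between maximum likelihood and Bayesian estimation (not course material; the simulated data, the Gamma prior, and all constants are invented for this sketch), consider estimating a neuron's firing rate from Poisson spike counts:

```python
import numpy as np

# Maximum likelihood vs. Bayesian estimation of a Poisson firing rate from
# spike counts. The data are simulated and the Gamma prior is an arbitrary
# choice made for this sketch.
rng = np.random.default_rng(0)
true_rate = 5.0                                # spikes/s, ground truth
counts = rng.poisson(true_rate, size=20)       # 20 one-second trials

mle = counts.mean()                            # Poisson MLE = sample mean

# Conjugate Gamma(a, b) prior: the posterior is Gamma(a + sum, b + n),
# so the posterior mean has the closed form below.
a, b = 2.0, 1.0
post_mean = (a + counts.sum()) / (b + len(counts))

print(f"MLE {mle:.2f}, posterior mean {post_mean:.2f}")
```

With more trials the two estimates converge; with few trials the prior pulls the Bayesian estimate toward its own mean.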

Suggested Readings:

F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek (1999). Spikes: Exploring the Neural Code. The MIT Press.

K. Doya, S. Ishii, A. Pouget, and R.P.N. Rao, eds. (2007). Bayesian Brain. The MIT Press. [pdf]

Lecture slides
Lecture movie (313.7 MB)

 

Markus Diesmann

Differential equations, phase planes, and numerics for Computational Neuroscience

Modern neuroscience relies on the language of nonlinear dynamics to formulate accurate and intelligible descriptions of the complex processes in the brain. This introductory lecture, suitable for students from different disciplines, covers basic techniques for solving the dynamical systems that typically occur in Computational Neuroscience. Unfortunately, these systems can often only be solved numerically; therefore, an emphasis is put on qualitative analysis and simulation. The examples start with components of simple neuron models. Finally, because the brain brings about its unparalleled functions through the interaction of many neurons, we also look at methods for large neuronal networks.
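The point about numerics can be made with the simplest neural dynamical system, a passive leaky membrane. The sketch below (illustrative, with made-up constants) contrasts forward Euler with the exact one-step update that exists for any linear system, in the spirit of reference [3]:

```python
import numpy as np

# Leaky membrane dV/dt = -(V - E)/tau, integrated two ways with a coarse
# step: forward Euler versus the exact one-step propagator exp(-dt/tau).
# Constants (ms, mV) are illustrative only.
tau, E, dt, T = 10.0, -70.0, 1.0, 100.0
steps = int(T / dt)
v_euler = np.empty(steps + 1)
v_exact = np.empty(steps + 1)
v_euler[0] = v_exact[0] = -50.0

prop = np.exp(-dt / tau)                 # exact propagator over one step
for k in range(steps):
    v_euler[k + 1] = v_euler[k] + dt * (E - v_euler[k]) / tau
    v_exact[k + 1] = E + (v_exact[k] - E) * prop

# Both relax toward E, but only the propagator update is step-size exact.
print(v_euler[-1], v_exact[-1])
```

For nonlinear conductance-based models no such closed-form propagator exists, which is why qualitative analysis and careful simulation both matter.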

Suggested Readings:

[1] Kaplan, D. and Glass, L. (1995) Understanding Nonlinear Dynamics. Springer, New York.

[2] Strogatz, S. H. (1994) Nonlinear Dynamics and Chaos. Perseus Books, Reading, Massachusetts.

[3] Rotter, S. and Diesmann, M. (1999) Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biological Cybernetics 81:381-402.

[4] Morrison, A., Straube, S., Plesser, H. E., and Diesmann, M. (2007) Exact subthreshold integration with continuous spike times in discrete time neural network simulations. Neural Computation 19:47-79.

 

Gordon Arbuthnott

"Brain structure and function - an introduction for non-biologists"

This session will have very modest aims. We should cover the entire neuroscience literature from the 1890s to today! We'll try to pick the good bits and to look at some of the assumptions that have become 'reasonable' as the subject has progressed.
It will have to be selective and so will not cover anything in detail, but it should at least introduce you to the main players in the field - neurons, glial cells, synapses, channels: the basic machinery of brains. We will leave out the 'higher functions', to be dealt with in the afternoon, but they need somehow to be a consequence of the properties that we will discuss in the morning.

We'll use a recent paperback book, "The Architecture of Brains", as a way of sorting the various parts of the system into groups. Most of what I'll talk about is very fundamental ideas that are covered in standard neuroscience textbooks. I've listed the best of them below, but I haven't read any of them from cover to cover! You will only need one of them, and then only if you don't have access to a decent university library where you can look up particular problems.

I hate textbooks - heavy, expensive and out of date - but they do let you catch up quickly on what is safe to assume as common knowledge. It may not be right, but it will be safe.

Suggested Readings:

Kandel, E.R., Schwartz, J.H., and Jessell, T.M. Principles of Neural Science.

Jeff Wickens

Synaptic plasticity and behaviour

In the lecture on synapses and synaptic plasticity, the classical idea of the synapse will be extended to include neuromodulatory actions of neurotransmitters. Quantitative neuroanatomy of synapses will be discussed, as it is one of the few clues we have about the connectivity of real neural networks. The experimental study of synaptic plasticity will also be reviewed. This has encouraged our speculations about mechanisms for learning and memory. So, what are the biological rules governing synaptic plasticity, and how important are the details, for example, of timing? Second messengers and dendritic spines will be reviewed, as they suggest mechanisms that may constrain the possible rules.

In the lecture on cognition and behaviour we will consider how these rules for synaptic plasticity may be engaged at synapses deep within the brain during ongoing behaviour. At the macroscopic level, the brain is composed of many entities - large masses of grey matter and broad connecting tracts. We need to consider how these major entities interact to produce purposeful behaviour. I will give an overview of the anatomical organisation of the central nervous system, and a brief introduction to the structure of the cerebral cortex, basal ganglia and cerebellum. Then I will discuss regional specialization of function based on evidence from different types of experiments, from single unit recordings to lesion and behaviour studies.
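How much the details of timing matter can be made concrete with a spike-timing-dependent plasticity (STDP) rule. The snippet below is a generic textbook pair-based rule with invented amplitudes and time constant, not a rule endorsed in the lecture:

```python
import math

# Pair-based spike-timing-dependent plasticity (STDP): the sign and size of
# the weight change depend on the pre/post spike interval. Amplitudes and
# time constants are generic illustration values.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau)    # pre before post: LTP
    return -a_minus * math.exp(dt_ms / tau)       # post before pre: LTD

for dt in (5.0, 20.0, -5.0, -20.0):
    print(f"dt = {dt:+.0f} ms  dw = {stdp_dw(dt):+.5f}")
```

A few milliseconds' change in spike order flips potentiation into depression, which is why the timing question in the abstract is not a detail.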

Suggested Readings:

Brembs, B., Lorenzetti, F. D., Reyes, F. D., Baxter, D. A. & Byrne, J. H. Operant reward learning in Aplysia: neuronal correlates and mechanisms. Science 296, 1706-9 (2002).

Kandel, ER, Schwartz JH, Jessell, TM. Principles of Neural Science. Chapters on The Neurobiology of Behaviour, The Neural Basis of Cognition, and Learning and Memory.

Matsuzaki, M., Honkura, N., Ellis-Davies, G. C. & Kasai, H. Structural basis of long-term potentiation in single dendritic spines. Nature 429, 761-6 (2004).

Reynolds, J. N. J., Hyland, B. I. & Wickens, J. R. A cellular mechanism of reward-related learning. Nature 413, 67-70 (2001).

Squire, L. R. Memory systems of the brain: a brief history and current perspective. Neurobiol Learn Mem 82, 171-7 (2004).

David Terman

"Bifurcation Analysis of Conductance-Based Neuron Models"

The primary goal of my lectures is to show how to use dynamical systems methods to analyze both single-cell and network models. I will begin by introducing some of the basic concepts of dynamical systems theory, including phase space analysis, stability, limit cycles, bifurcation theory and geometric singular perturbation theory. These will be used to analyze bursting oscillations and propagating action potentials. I will then discuss small synaptically coupled networks and characterize when these networks exhibit synchronous, antiphase and other more complex rhythms. The lectures will be complemented with numerical simulations, primarily using XPPAUT. Time permitting, I will discuss specific applications such as models for sleep rhythms, Parkinsonian tremor and olfaction.

Suggested Readings:

Terman, D. (2004), An Introduction to Dynamical Systems and Neuronal Dynamics. [pdf]

Lecture movie (143.0 MB)
 

Klaus Stiefel

Dendritic Computation: From the Basics to Structure - Function Relationships

In the first part of my lecture, I will review the integration of electrical and biochemical signals in dendrites. Topics covered will be the generation of these signals at synapses, the integration of passive electrical signals as described by the cable equation, and active dendritic properties. Furthermore, I will talk about the signal processing conducted by second-messenger cascades in dendrites and their interaction with electrical signal integration. I will also briefly review the role these second-messenger cascades play in synaptic plasticity. I will stress how different spatial and temporal scales interact in dendritic signal integration.
In the second part of my lecture, I will give you a taste of the diversity of dendritic properties and structures, and how it reflects the different computational functions neurons perform. I will cover neurons from the fly's visual system, the owl's auditory brainstem and the mammalian cortex. Finally, I will talk about a theoretical approach my research group and I use to determine structure-function relationships in dendrites.
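A first quantitative handle on passive dendritic integration is the cable equation's space constant, which sets how far a steady voltage spreads before decaying. A small sketch with plausible but arbitrary membrane and axial resistivities:

```python
import math

# Steady-state attenuation along a semi-infinite passive cable:
# V(x) = V0 * exp(-x / lam), with space constant lam = sqrt((d/4) * Rm / Ri).
# Rm, Ri and the diameter below are plausible but arbitrary illustration values.
def space_constant_um(d_um, Rm=20000.0, Ri=150.0):
    """d_um: diameter in um; Rm: ohm*cm^2; Ri: ohm*cm. Returns lam in um."""
    d_cm = d_um * 1e-4
    return math.sqrt((d_cm / 4.0) * Rm / Ri) * 1e4

lam = space_constant_um(2.0)          # a 2 um dendrite
for x in (0.0, lam, 2 * lam):
    print(f"x = {x:7.1f} um   V/V0 = {math.exp(-x / lam):.3f}")
```

Thinner dendrites have shorter space constants, which is one reason dendritic geometry shapes what a neuron computes.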

Suggested Readings:

Foundations of Cellular Neurophysiology, Johnston & Wu (1994), MIT Press

Markram, H., Toledo-Rodriguez, M., Wang, Y., Gupta, A., Silberberg, G., and Wu, C., Interneurons of the neocortical inhibitory system. Nature Reviews Neuroscience, 5: 793-807 (2004). [pdf]


Single, S, Borst, A, Dendritic Integration and Its Role in Computing Image Velocity, Science, 281: 1848-1850 (1998)


Mainen, Z.F., Sejnowski, T.J., Influence of Dendritic Structure on Firing Patterns in Model Neocortical Neurons, Nature, 382: 363-366 (1996)


Stiefel, KM, Sejnowski, TJ Mapping Function onto Neuronal Morphology. J Neurophysiology, in press (2007). [pdf]


Lecture slides
Lecture movie (273.9 MB)


 


Dieter Jaeger

"Ionic and synaptic currents: combined experimental/modeling analysis"

Morphologically realistic compartmental models can be studied in many ways, just like real neurons; however, unlike in experimental settings, we have complete control over all inputs to the model as well as access to all output variables. Such models can be used to study properties of synaptic coding in a precisely defined way that is not available in experiments. Nonetheless, the model may not reflect reality accurately, and thus experimental testing of modeling predictions is essential. I will demonstrate the cycle of building a model, generating model predictions, and testing these predictions experimentally. I will introduce concepts of compartmental modeling using the GENESIS simulator, with an emphasis on creating biologically realistic outcomes. I will also introduce the technique of dynamic current clamping, which allows the testing of modeling predictions by injecting artificial conductances into neurons during brain slice recordings.
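The dynamic-clamp idea can be sketched in a few lines: at every time step, measure the membrane potential, compute the current an artificial conductance would pass at that voltage, and inject it. Below, both the "cell" and the clamp loop are simulated with made-up constants; a real rig replaces the simulated cell with an amplifier and electrode:

```python
# Simulated dynamic clamp (all constants invented): at each step, read the
# membrane potential, compute the current an artificial synaptic conductance
# would carry at that voltage, and inject it back into the cell.
C, gL, EL, E_syn = 1.0, 0.1, -70.0, 0.0   # capacitance, leak, reversal (a.u.)
tau_syn, g_max, dt = 5.0, 0.05, 0.1
v, g = EL, 0.0
trace = []
for step in range(5000):
    if step == 1000:
        g += g_max                            # trigger the artificial "synapse"
    g -= dt * g / tau_syn                     # conductance decays exponentially
    I_inj = -g * (v - E_syn)                  # clamp current from measured V
    v += dt * (-gL * (v - EL) + I_inj) / C    # the cell integrates it
    trace.append(v)

print(max(trace))   # transient depolarization from the injected conductance
```

Because the injected current is recomputed from the measured voltage at every step, the cell experiences a true conductance rather than a fixed current waveform.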

Suggested Readings:
PDF files of these suggested readings and many other papers can be found at:
http://www.biology.emory.edu/research/Jaeger/

Herz, A.V.M, Gollisch, T., Machens, C.K., and Jaeger, D. (2006) Review: Modeling single-neuron dynamics and computations: A balance of detail and abstraction. Science. 314: 80-85.

D. Jaeger, E. De Schutter, J.M. Bower. (1997) The role of synaptic and voltage-gated currents in the control of Purkinje cell spiking: a modeling study. J. Neuroscience. 17: 91-106.

D. Jaeger and J.M. Bower (1999) Synaptic control of spiking in cerebellar Purkinje cells: Dynamic current clamp based on model conductances. J. Neurosci. 19: 6090-6101.

 

Thomas M. Bartol Jr.

Realistic 3D simulation of neuronal cell signaling

Biochemical signaling pathways are integral to the information storage, transmission, and transformation roles played by neurons in the nervous system.
Far from behaving as well-mixed bags of biochemical soup, the intra- and inter-cellular environments in and around neurons are highly organized reaction-diffusion systems, with some subcellular specializations consisting of just a few copies each of the various molecular species they contain. For example, glutamatergic dendritic spines in area CA1 hippocampal pyramidal cells contain perhaps 100 AMPA receptors, 20 NMDA receptors, 10 CaMKII complexes, and 5 free Ca++ ions in the spine head. Much experimental data has been gathered about the neuronal signaling pathways involved in processes such as synaptic plasticity, especially recently, thanks to new molecular probes and advanced imaging techniques. Yet fitting these observations into a clear and consistent picture that is more than just a cartoon, and can instead provide biophysically accurate predictions of function, has proven difficult due to the complexity of the interacting pieces and their relationships. Gone are the days when one could do a simple thought experiment based on the known quantities and imagine the possibilities with any degree of accuracy. This is especially true of biological reaction-diffusion systems where the number of discrete interacting particles is small, the spatial relationships are highly organized, and the reaction pathways are non-linear and stochastic. Biophysically accurate computational experiments performed on cell signaling pathways are a powerful way to study such systems and to help formulate and test new hypotheses in conjunction with bench experiments. MCell was designed for the purpose of simulating exactly these sorts of cell signaling systems.
Here I will introduce fundamental concepts of cell signaling processes in organized, compact spaces by presenting three specific examples that have been studied using MCell: 1) ectopic neurotransmission in the chick ciliary ganglion; 2) glutamatergic synaptic transmission and calcium dynamics in hippocampal area CA1 dendritic spines; and 3) presynaptic calcium dynamics and modulation of release probability in Schaffer collateral multisynaptic boutons. Finally, I will present an introduction to the computational algorithms employed in MCell Version 3, and show how to use MCell's Model Description Language to build 3D models of virtually any biochemical signaling pathway in the context of its cellular ultrastructure.
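MCell performs spatially realistic Monte Carlo simulation; the non-spatial sketch below only conveys why stochastic methods matter at the quoted copy numbers. It runs a Gillespie simulation of reversible binding, R + L <-> C, in a well-mixed volume with invented rate constants:

```python
import numpy as np

# Gillespie simulation of reversible binding R + L <-> C in a well-mixed
# volume, at copy numbers like those quoted for a spine head. MCell adds
# 3D diffusion and geometry; this sketch only shows the stochastic kinetics.
# Rate constants and counts are invented.
rng = np.random.default_rng(1)
R, L, C = 20, 5, 0
kon, koff = 0.1, 0.05
t, t_end = 0.0, 200.0
while t < t_end:
    a1, a2 = kon * R * L, koff * C       # propensities of the two reactions
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)       # waiting time to the next event
    if rng.random() < a1 / a0:
        R, L, C = R - 1, L - 1, C + 1    # binding event
    else:
        R, L, C = R + 1, L + 1, C - 1    # unbinding event

print(R, L, C)   # note R + C and L + C are conserved
```

At these copy numbers the trajectory fluctuates by whole molecules, which a deterministic rate equation would average away.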

Suggested Readings:
PDF files of these suggested readings can be found at:
http://www.mcell.cnl.salk.edu/Publications/

Jay S. Coggan, Thomas M. Bartol, Eduardo Esquenazi, Joel R. Stiles, Stephan Lamont, Maryann E. Martone, Darwin K. Berg, Mark H. Ellisman, Terrence J. Sejnowski. 2005. Evidence for Ectopic Neurotransmission at a Neuronal Synapse. Science, 15 July 2005, 446-451.

Joel R. Stiles, Thomas M. Bartol. 2001. Monte Carlo Methods for Simulating
Realistic Synaptic Microphysiology Using MCell. in: Computational Neuroscience: Realistic Modeling for Experimentalists, editor Erik De Schutter. CRC Press, 87-127.

Joel R. Stiles, Thomas M. Bartol, Miriam M. Salpeter, Edwin E. Salpeter, Terrence J. Sejnowski. 2001. Synaptic Variability: New Insights from Reconstructions and Monte Carlo Simulations with MCell. in: Synapses, editors W. Maxwell Cowan, Thomas C. Sudhof, Charles F. Stevens. Johns Hopkins University Press, 681-731.

Lecture movie (278.3 MB)

Ad Aertsen

Cortical dynamics

Lecture slides
Lecture movie (309.9 MB)

Eve Marder

"Pattern generation and neuromodulation"

In my presentation I will describe: a) the basic mechanisms by which simple rhythmic networks generate behavior, b) how neuromodulators alter synaptic and intrinsic properties of neurons to influence circuit dynamics, c) the kinds of homeostatic mechanisms which are needed to provide stable neuronal and network function over an animal's lifetime. I will use examples from theoretical and experimental studies using the crustacean stomatogastric nervous system to argue that synaptic and intrinsic currents can vary far more than the output of the circuit in which they are found. These data have significant implications for the mechanisms that maintain stable function over the animal's lifetime, and for the kinds of changes that allow the nervous system to recover function after injury.

Suggested Readings:
Original Papers:

Prinz, A.A., Bucher, D. and Marder, E. (2004) Similar network activity from disparate circuit parameters. Nat. Neurosci., 7: 1345-1352.

Bucher, D., Prinz, A.A., and Marder, E. (2005) Animal-to-animal variability in motor pattern production in adults and during growth. J. Neurosci., 25: 1611-1619. (cover picture).

Thirumalai, V., Prinz, A.A., Johnson, C. and Marder, E. (2006) Red pigment concentrating hormone strongly enhances the strength of the feedback to the pyloric rhythm oscillator but has little effect on pyloric rhythm period. J. Neurophysiol., 95: 1762-1770.

Schulz, D.J., Goaillard, J.-M., and Marder, E. (2006) Variance in channel expression in the same neuron in different animals. Nature Neuroscience, 9: 356-362 (cover picture).


Review Articles:

Goaillard, J.-M. and Marder, E. (2006) Dynamic clamp analyses of cardiac, endocrine, and neural function. Physiology, 21: 197-207.

Marder, E. and Goaillard, J-M. (2006) Variability, compensation, and homeostasis in neuron and network function. Nature Neuroscience Reviews, 7:563-574.

Marder, E. and Bucher, D. (2007) Understanding circuit dynamics using the stomatogastric nervous system of lobsters and crabs. Annu Rev Physiol, 69: 291-316.

Sophie Deneve

How do neurons and populations of neurons deal with uncertainty? Probabilistic theories of neural coding and computation

Understanding how human perception is translated into actions and how our experience forms our worldview has been one of the central questions of psychology, cognitive science and neuroscience. In particular, it is very hard to understand how our perceptions and strategies persist and/or change in the face of our continuous experience as active agents in an unpredictable, perpetually changing world. Theories of Bayesian inference and learning have recently been very successful in describing the behaviors of humans and animals, and particularly their perceptual and motor biases. Indeed, the tasks humans face in "natural" situations (as opposed to simplistic laboratory settings) require the combination of multiple noisy and ambiguous sensory cues, as well as the use of prior knowledge, either innate or acquired from previous experience. Such incomplete and imperfect sensory cues and prior knowledge can only provide probabilistic information (such as object structures that are more likely, or the probability of moving in a particular direction given the observed optic flow).

How the neural substrates perform these probabilistic inference tasks is still an open question. Arguably, one cannot deny that the brain performs efficient probabilistic computations. However, to do so it uses computational units that are slow (neurons have time constants on the order of several tens of milliseconds, as opposed to processors that are millions of times faster) and unreliable (synaptic transmission in the cortex fails 20% of the time on average, and spike counts are highly variable from trial to trial). Moreover, its code is not composed of continuous values (such as a probability) but of trains of discrete and rare events (spikes).

During this lecture we will first consider the sources of neural noise and how neural structures can cope with this variability to provide precise estimates of sensory and motor variables. Population coding is one solution: many behaviorally relevant variables, such as movement direction, are represented by large groups of neurons with wide, overlapping tuning curves and very noisy responses. Interconnected populations of such neurons could "clean up" the noise and converge to the most probable estimate of the variable, e.g. the most likely direction. When several population codes represent statistically related variables, for example the direction of motion of an object on the skin and on the retina, these networks could perform optimal cue integration and converge to the most precise joint estimate.
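The "clean up by population" argument can be illustrated with a population-vector readout of noisy direction-tuned neurons (the tuning curves and counts here are invented; this is not a model from the lecture):

```python
import numpy as np

# 64 direction-tuned neurons with wide, overlapping tuning curves and
# Poisson spike counts; a population-vector readout recovers the stimulus
# direction despite very noisy single-neuron responses. All numbers invented.
rng = np.random.default_rng(2)
prefs = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)   # preferred directions
stim = 1.0                                                # true direction (rad)
rates = 5 + 20 * np.exp(2 * (np.cos(prefs - stim) - 1))   # tuning curves
counts = rng.poisson(rates)                               # one noisy trial

# Each neuron "votes" for its preferred direction, weighted by its count.
est = np.angle(np.sum(counts * np.exp(1j * prefs)))
print(f"true {stim:.2f} rad, population estimate {est:.2f} rad")
```

A single neuron's count is almost useless on its own; the pooled vote lands close to the true direction on nearly every trial.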

Eventually, making behavioral decisions might involve convergence to an attractor. Unfortunately, this provides a very incomplete theory of how the brain deals with uncertainty, for two main reasons. First, convergence to an attractor (or any other form of neural decision) implies the loss of probabilistic information: we know what we voted for, but not with which confidence margin. This makes it hard to integrate the current state of knowledge with prior knowledge and with new information. Second, convergence implies a static task, with plenty of time to perform an inference, whereas the state of the real world perpetually changes. For example, visual direction of motion has to be estimated on-line from streams of noisy V1 responses.

Alternatively, rather than making decisions "right away", single neurons and neural populations could be representing, implicitly or explicitly, probability distributions. We will review several alternative models for how this could be performed by neurons, with various degrees of biophysical realism. First, population coding with noisy neurons can be interpreted as implicitly representing probabilities. More surprisingly, even single integrate-and-fire neurons can be interpreted as computing probabilities over time. Relatively simple and straightforward computations, such as receptive fields, linear synaptic integration, threshold crossing and Hebbian plasticity, could lead to elaborate probabilistic inference and learning based on the statistical regularities of the world.

Numerous challenges and questions remain: alternative hypotheses regarding the neural basis of probabilistic computation will need extensive experimental investigation and validation. The Bayesian framework has its own limitations, particularly visible when applied to real-world problems. Importantly, despite its current popularity, the Bayesian framework is not a theoretical model of the brain. It is a tool that can sometimes be useful to formalize and clarify theories of neural computation.



Suggested Readings:

Basic readings:

Shadlen MN, Newsome WT.
Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey.
J Neurophysiol. 2001 Oct;86(4):1916-36.

Pouget A, Dayan P, Zemel RS.
Inference and computation with population codes.
Annu Rev Neurosci. 2003;26:381-410. Epub 2003 Apr 10. Review.

More advanced readings :

Zemel RS, Dayan P, Pouget A.
Probabilistic interpretation of population codes.
Neural Comput. 1998 Feb 15;10(2):403-30. Review.

Barber MJ, Clark JW, Anderson CH.
Neural representation of probabilistic information.
Neural Comput. 2003 Aug;15(8):1843-64.

Rao RP.
Bayesian computation in recurrent neural circuits.
Neural Comput. 2004 Jan;16(1):1-38.

Ma WJ, Beck J, Latham PE & Pouget A
Bayesian inference with probabilistic population codes.
Nature Neuroscience. 2006 Nov;9:1432-1438.

Lecture slides

Lecture movie (253.9MB)

Erik De Schutter

"The Purkinje cell in the olivocerebellar network"

My lectures will be at the interface between detailed single cell and network modeling and focus on the role of the Purkinje cell (PC) in cerebellar learning.
In the first lecture I will start with a technical issue: how to select parameters for a neuron model. I will describe our development of an automated parameter search method that is now distributed as the package NeuroFitter. Using this method we have been exploring the parameter space of the 1994 PC model (De Schutter and Bower 1994) and, to our surprise, have identified over 250 different versions of this model, all replicating its firing pattern in great detail and distributed over a very complex parameter space (Achard and De Schutter 2006). This result will be considered in the context of cellular homeostasis and experimental data showing heterogeneity among PCs. Finally, I will demonstrate that within these different models calcium influx is constrained in a manner supporting the induction of plasticity at the parallel fiber (PF) to PC synapse (long-term depression, LTD), but excluding a role of calcium as homeostatic sensor.
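The flavor of an automated parameter search, and of why many disparate parameter sets can fit the same target, can be conveyed with a toy problem (this stands in for NeuroFitter only schematically; the "model" is a made-up two-parameter function, not a compartmental simulation):

```python
import numpy as np

# Toy automated parameter search: find maximal conductances (g_na, g_k)
# whose "model output" matches a target firing rate. The surrogate model
# below is a made-up smooth function, standing in only schematically for a
# full compartmental model evaluated inside a fitting package.
rng = np.random.default_rng(3)

def firing_rate(g_na, g_k):
    return 40.0 * g_na / (1.0 + g_k)     # hypothetical model response

target = 20.0
best, best_cost = None, float("inf")
for _ in range(2000):                    # blind random search over the space
    g_na, g_k = rng.uniform(0, 2), rng.uniform(0, 2)
    cost = (firing_rate(g_na, g_k) - target) ** 2
    if cost < best_cost:
        best, best_cost = (g_na, g_k), cost

# The target is met along a whole curve g_na = 0.5 * (1 + g_k), so repeated
# runs find many disparate parameter sets with near-identical output.
print(best, best_cost)
```

Even in two dimensions the solutions form a curve rather than a point, a miniature of the complex solution landscape described for the Purkinje cell model.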
In the second lecture I will present combined modeling and experimental studies of the temporal structure of PC simple spike (SS) trains, starting from how they may affect the target neurons in the deep cerebellar nuclei (DCN). DCN neurons are known to produce strong rebound spikes upon release from inhibition. Such release requires a synchronized pause in the firing of afferent PCs; indeed, we found that 13% of pauses were synchronized among neighboring PCs (Shin and De Schutter 2006). Therefore the pauses may form a temporal code in SS trains. If evoking rebound spikes is an important coding principle, PCs must be able to regulate them. Surprisingly, we found that SS firing contains long stretches of regular firing, mostly at faster rates (Shin et al. submitted). These regular patterns, together with the known fast depression of PC to DCN synapses, form a perfect rate code, where firing rate translates into steady-state inhibition of DCN neurons. We propose that this rate code controls the amplitude of rebound spikes that may be evoked shortly afterwards.
Another way to regulate rebound spikes is by changing their duration through synaptic plasticity. Indeed, we found that induction of LTD of the PF synapse affects the duration of the SS pause evoked by strong PF stimuli (Steuber et al. 2007). The PF input causes a short burst of spikes followed by a pause that is caused by a calcium-activated hyperpolarisation. LTD of PF synapses causes a reduction in calcium influx and in the consequent hyperpolarisation. This leads to the counter-intuitive result that LTD shortens the pause and increases the spiking output of the Purkinje cell. These modeling predictions have been confirmed with in vitro slice studies and analysis of awake in vivo recordings.

Suggested Readings:

E. De Schutter and J.M. Bower: An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. Journal of Neurophysiology 71: 375-400 (1994).

P. Achard and E. De Schutter: Complex parameter landscape for a complex neuron model. PLoS Computational Biology 2: e94, 794-804 (2006).

S.-L. Shin and E. De Schutter: Dynamic synchronization of Purkinje cell simple spikes. Journal of Neurophysiology 96: 3485-3491 (2006).

V. Steuber, W. Mittmann, F.E. Hoebeek, R.A. Silver, C.I. De Zeeuw, M. Hausser and E. De Schutter: Cerebellar LTD and pattern recognition by Purkinje cells. Neuron 54: 121-136 (2007).

Hagai Bergman

The neural networks of the basal ganglia: From reinforcement learning to Parkinson's disease

The critical role played by the basal ganglia in the pathogenesis of various movement disorders such as Parkinson's and Huntington's diseases has been known for many years. Recent studies have indicated that the neural networks of the basal ganglia participate in everyday complex behaviors that require coordination between cognition, motivation and movement. Our research is therefore aimed in both directions: first, we try to provide a better understanding of the role and mode of action of the basal ganglia-cortical networks in normal behavior, and second, we study these networks following the induction of clinical disorders such as Parkinson's disease and dyskinesia.

Multi-electrode recordings in the globus pallidus of normal primates revealed mainly uncorrelated activity. We therefore propose a Reinforcement Driven Dimensionality Reduction (RDDR) model, which postulates that the neural networks of the basal ganglia compress cortical information according to a reinforcement signal using optimal extraction methods. The reinforcement signal, representing the mismatch between expectation and reality for a stimulus-action pair, is mainly provided by the midbrain dopaminergic projections to the basal ganglia. However, this dopamine error signal and basal ganglia encoding are limited to the positive domain. We therefore suggest that negative reinforcement learning is carried by another neuronal system (e.g., the amygdala).

Parkinson's disease is a common and disabling movement disorder in humans, caused by dopaminergic denervation of the striatum. However, how this denervation perverts normal functioning to cause poverty and slowing of voluntary movements (akinesia and bradykinesia), muscle rigidity and tremor has remained unclear. Recent work in tissue slice preparations, animal models and humans with Parkinson's disease has demonstrated abnormally synchronized oscillatory activity at multiple levels of the basal ganglia-cortical loop. These excessive oscillations correlate with the motor deficits, and their suppression by dopaminergic therapies, ablative surgery or deep brain stimulation ameliorates the motor symptoms of Parkinson's disease. The excessive synchronization of the basal ganglia networks can be explained in the framework of the RDDR model. Nevertheless, a persistent and robust correlation between the basal ganglia oscillations and the Parkinsonian tremor has not been established. This is probably because the basal ganglia gate, rather than drive, thalamo-cortical activity. We therefore suggest that abnormal basal ganglia oscillations and synchronization disrupt motor cortex and brain stem activity, leading to akinesia and bradykinesia (the core negative symptoms of the disease). The positive motor symptoms of Parkinsonism, rigidity and tremor, are probably generated by compensatory mechanisms downstream of the basal ganglia.

Suggested Readings:

1. Daw ND, Doya K (2006) The computational neurobiology of learning and reward. Curr Opin Neurobiol 16:199-204.

2. Arbuthnott GW, Wickens J (2007) Space, time and dopamine. Trends Neurosci 30:62-69.

3. Schultz W (2007) Behavioral dopamine signals. Trends Neurosci 30:203-210.

4. Bergman H, Wichmann T, DeLong MR (1990) Reversal of experimental parkinsonism by lesions of the subthalamic nucleus. Science 249:1436-1438.

5. Bergman H, Feingold A, Nini A, Raz A, Slovin H, Abeles M, Vaadia E (1998) Physiological aspects of information processing in the basal ganglia of normal and parkinsonian primates. Trends Neurosci 21:32-38

6. Morris G, Arkadir D, Nevet A, Vaadia E, Bergman H (2004) Coincident but distinct messages of midbrain dopamine and striatal tonically active neurons. Neuron 43:133-143.

Michael Häusser

Single neuron computation

One of the central questions in neuroscience is how particular tasks, or computations, are implemented by neural networks to generate behaviour, and how patterns of activity are stored during learning. In the past, the prevailing view has been that information processing and storage in neural networks results mainly from properties of synapses and connectivity of neurons within the network. As a consequence, the contribution of single neurons to computation in the brain has long been underestimated. I will describe recent work providing evidence that the dendritic processes of single neurons, which receive most of the synaptic input, display an extremely rich repertoire of behaviour, and actively integrate their synaptic inputs to define the input-output relation of the neuron. Moreover, the signalling mechanisms which have been discovered in dendrites have suggested new ways in which patterns of network activity could be stored and transmitted.

Suggested Readings:

Sjöström PJ, Häusser M (2006). A cooperative switch determines the sign of synaptic plasticity in distal dendrites of neocortical pyramidal neurons. Neuron 51(2):227-38.

London M, Häusser M (2005). Dendritic computation. Annu Rev Neurosci 28:503-32.

Häusser M, Mel B (2003). Dendrites: bug or feature? Curr Opin Neurobiol 13(3):372-83.

Jerome Friedman

Special Lecture: Particle physics and cosmology

Kenji Doya

Reinforcement Learning and Decision Making

In the first part of the lecture, I will present the basic framework of reinforcement learning and several variants of algorithms, such as actor-critic, Q-learning, and model-based planning.
In the second part, I will present our working hypothesis about how reinforcement learning can be implemented in the network linking the cerebral cortex and the basal ganglia and report our neural recording experiments in monkeys and rats to test that hypothesis.
In the last part, I will present another hypothesis about how parameters of reinforcement learning can be regulated by neuromodulators and report our human brain imaging and rat neural recording experiments to test the role of serotonin in delayed reward tasks.
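As a concrete reference point for the algorithms in the first part, here is a minimal tabular Q-learning sketch on a toy five-state chain task; the task, state count, and parameter values are illustrative assumptions, not material from the lecture.

```python
import random

random.seed(1)

N_STATES = 5          # states 0..4; state 4 is terminal and rewarded
ACTIONS = (-1, +1)    # move left, move right
alpha, gamma, eps = 0.1, 0.9, 0.1

# Q-table: Q[state][action_index]
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the greedy value of s_next
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
```

After training, the learned values prefer "right" in every state, and they fall off with distance from the reward as powers of the discount factor gamma. Actor-critic and model-based planning differ mainly in what is learned (a policy plus a value critic, or an explicit world model used for lookahead), but the temporal-difference update above is the common core.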
Suggested Readings:
Doya, K. (2007). Reinforcement learning: Computational theory and biological principles. HFSP Journal, doi:10.2976/1.2723643. [pdf]

Samejima, K., Ueda, K., Doya, K., Kimura, M. (2005). Representation of action-specific reward values in the striatum. Science, 310, 1337-1340.

Tanaka, S., Doya, K., Okada, G., Ueda, K., Okamoto, Y., Yamawaki, S. (2004). Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops. Nature Neuroscience, 7(8), 887-893.

Doya K. (2002). Metalearning and neuromodulation. Neural Networks, 15, 495-506.

Doya K. (2000). Complementary roles of basal ganglia and cerebellum in learning and motor control. Current Opinion in Neurobiology, 10, 732-739.

Doya K. (1999). What are the computations of the cerebellum, the basal ganglia, and the cerebral cortex? Neural Networks, 12, 961-974.

Torsten Wiesel

Special Lecture: Do we learn to see? The role of nature and nurture in brain development [pdf]




Mitsuo Kawato

Computational motor learning

The cerebellar internal model theory postulates that the cerebellar cortex acquires many internal models of controlled objects, of dynamical processes in the external world, and even of other individuals' brains, through long-term depression (LTD) of Purkinje cells. A specific version of this theory, the feedback-error-learning model, postulates that the climbing fiber inputs to Purkinje cells carry the feedback motor command, which can be regarded as an approximation to the error signal for motor commands and can supervise learning of inverse dynamics models. Much experimental support has been obtained from the ventral paraflocculus of the cerebellum in monkeys controlling ocular following responses. For arm movements under multiple force fields, the firing of many Purkinje cells correlates with dynamics [1]. fMRI studies have mapped forward and inverse models of manipulated objects and tools in the cerebellar cortex.
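The feedback-error-learning scheme can be sketched in a few lines under toy assumptions: a scalar linear plant with unknown gain and a fixed proportional feedback controller (all names and values here are hypothetical, not from the lecture). The feedback command serves as the teaching signal for the feedforward inverse model, and as the inverse model converges the feedback contribution vanishes.

```python
import random

random.seed(2)

b = 2.0    # unknown plant gain: x = b * u
K = 1.0    # fixed feedback controller gain
eta = 0.1  # learning rate for the inverse model
w = 0.0    # inverse-model weight; the ideal value is 1/b = 0.5

for _ in range(2000):
    x_des = random.uniform(-1.0, 1.0)  # desired state for this trial
    u_ff = w * x_des                   # feedforward (inverse model) command
    # Plant under proportional feedback: x = b * (u_ff + u_fb) with
    # u_fb = K * (x_des - x); solving for the closed-loop state gives:
    x = b * (u_ff + K * x_des) / (1.0 + b * K)
    u_fb = K * (x_des - x)
    # Feedback-error learning: the feedback command, not the sensory
    # error itself, trains the feedforward inverse model.
    w += eta * u_fb * x_des

print(round(w, 3))
```

The design choice mirrors the theory: the feedback controller both keeps the system stable during learning and supplies an approximate motor-command error, so no explicit teacher for the inverse model is needed.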

Kinetic models of LTD [2,3] suggest a cascade of excitable and bistable dynamical processes, which may resolve the plasticity-stability dilemma at the single-spine level. That is, even a single pulse of climbing fiber input, combined with an early train of several parallel fiber pulses, can induce Ca2+-induced Ca2+ release via inositol 1,4,5-trisphosphate receptors on the endoplasmic reticulum. The mitogen-activated protein kinase (MAPK) positive feedback loop leakily integrates the resulting large Ca2+ elevation, and if this crosses a threshold the state moves to a depressed equilibrium. These models explain diverse LTD experiments and clearly demonstrate that LTD is a supervised learning rule, not anti-Hebbian as erroneously characterized. The MAPK positive-feedback-loop model [2] was recently supported by a Ca2+ photo-uncaging and imaging experiment [4] demonstrating the all-or-none character of LTD at the single-spine level.
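The threshold, all-or-none character of such a bistable positive-feedback loop can be sketched with a one-variable toy model; this is not the published kinetic model, and the decay rate and Hill-type feedback term below are illustrative choices. A brief input pulse either pushes the activity variable past the unstable threshold, switching the spine into the high ("depressed") state, or lets it decay back to rest.

```python
def simulate(pulse_amp, pulse_dur=1.0, t_end=50.0, dt=0.01):
    """Euler-integrate dx/dt = -x + 2*x**4 / (1 + x**4) + input(t).

    The Hill-type positive feedback makes the system bistable:
    x = 0 (rest) and x ~ 1.86 (switched) are stable fixed points,
    and x = 1 is the unstable threshold between them.
    """
    x = 0.0
    steps = int(t_end / dt)
    for i in range(steps):
        s = pulse_amp if i * dt < pulse_dur else 0.0
        x += dt * (-x + 2.0 * x**4 / (1.0 + x**4) + s)
    return x

low = simulate(pulse_amp=0.3)   # subthreshold pulse: relaxes back to rest
high = simulate(pulse_amp=3.0)  # suprathreshold pulse: switches and stays on
print(round(low, 3), round(high, 3))
```

Long after the pulse has ended, the weakly stimulated system has returned to rest while the strongly stimulated one remains latched in the high state: a graded, transient Ca2+ signal converted into a binary, persistent plasticity decision.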

Suggested Readings:

Kawato M, Samejima K: Efficient reinforcement learning: Computational Theories, Neuroscience and Robotics. Current Opinion in Neurobiology 17, 205-212 (2007).

Kawato M: Internal models for motor control and trajectory planning. Current Opinion in Neurobiology, 9, 718-727 (1999).

Kawato M: From "understanding the brain by creating the brain" toward manipulative neuroscience. Philosophical Transactions of the Royal Society B, submitted (2007).

Gail Tripp and Jeff Wickens

Reinforcement mechanisms: from human behaviour to cellular mechanisms

All organisms learn to adapt to their environment on the basis of the consequences of their actions. When this process goes awry, disordered behaviour emerges. We will discuss examples of altered reinforcement in humans and animal models of pathological conditions, and how these can be studied behaviourally. We will then describe cellular mechanisms of reinforcement as studied by electrophysiological approaches in model systems. In order to elucidate the underlying neurobiological mechanisms of disordered behaviour, we need to link data from the cellular level to the whole organism. In making such links, computational neuroscience has the potential to play an important role.

References:
Tripp G and Alsop B (2001) Sensitivity to reward delay in children with attention deficit hyperactivity disorder (ADHD). J Child Psychol Psychiat, 42: 691-8.

Tripp G and Alsop B (1999) Sensitivity to reward frequency in boys with attention-deficit hyperactivity disorder. J Clin Child Psychol, 28: 366-75.

Pan, W. X., Schmidt, R., Wickens, J. R., & Hyland, B. I. (2005) Dopamine cells respond to predicted events during classical conditioning: evidence for eligibility traces in the reward-learning network. Journal of Neuroscience, 25, 6235-6242.

Wickens, J.R., Begg, A.J. and Arbuthnott G.W. (1996) Dopamine reverses the depression of rat cortico-striatal synapses which normally follows high frequency stimulation of cortex in vitro. Neuroscience 70, 1-5.

Nathaniel Daw

Cognition and planning

Simple reinforcement learning models of the sort associated with the dopamine system and striatum provide at best an incomplete picture of decision making in the brain. Psychologically, such theories essentially mirror stimulus-response models of the sort favored by early behaviorists like Hull and Thorndike, prior to the cognitive revolution in psychology and neuroscience. Not just in psychology but also in systems neuroscience and behavioral economics, the idea is now ubiquitous that such simple processes are accompanied by (and perhaps compete against) more cognitive, reflective processes that are behaviorally and neurally dissociable.

Within the broad setting of learned decision making, we review evidence concerning higher cognitive functions such as inference and planning, and attempts to bring these into contact with computational ideas from reinforcement learning. Particularly important will be model-based reinforcement learning methods, which learn and exploit some representation of the underlying task structure or contingencies. We consider behavioral and neural evidence for cognitive maps and goal-directed behavior; views of fractionation and competition between multiple brain systems in memory, navigation, and reinforcement learning; different conceptions of the role of model-based inference and planning in reinforcement learning; and finally model-based iterative reasoning about opponents in multiplayer competitive interactions.
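To make the model-based contrast concrete, here is a small sketch on a hypothetical two-step task (all states, actions, and rewards are illustrative assumptions): the agent tallies its experience into an explicit transition and reward model, then plans over that model, so a change in the value of an outcome can alter the choice at the start state without any new experience of the start-state transitions themselves. A purely model-free learner, by contrast, would have to re-experience the whole sequence to update its cached values.

```python
# Experience on a toy two-step task: from start state 0, action 0
# leads to state 1 (reward 0.0), action 1 to state 2 (reward 1.0).
experience = [(0, 0, 1, 0.0), (0, 1, 2, 1.0)] * 10  # (s, a, s', r)

# Learn an explicit model: deterministic transition and reward tables.
T = {}  # (s, a) -> s'
R = {}  # s' -> reward received on arrival
for s, a, s_next, r in experience:
    T[(s, a)] = s_next
    R[s_next] = r

def plan(T, R, terminals=(1, 2), gamma=0.9, n_sweeps=20):
    """Value iteration over the learned model; greedy action at state 0."""
    V = {s: 0.0 for s in (0, 1, 2)}
    for _ in range(n_sweeps):
        for s in (0, 1, 2):
            if s in terminals:
                continue
            V[s] = max(R[T[(s, a)]] + gamma * V[T[(s, a)]]
                       for a in (0, 1) if (s, a) in T)
    q = {a: R[T[(0, a)]] + gamma * V[T[(0, a)]] for a in (0, 1)}
    return max(q, key=q.get)

choice_before = plan(T, R)  # prefers action 1 (leads to reward 1.0)
R[1] = 2.0                  # "revaluation": state 1 is now worth more
choice_after = plan(T, R)   # replanning flips the choice immediately
print(choice_before, choice_after)
```

This immediate sensitivity to outcome revaluation is the classic behavioral signature used to dissociate goal-directed (model-based) from habitual (model-free) control.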

Suggested Readings:

Daw, N.D., Niv, Y., and Dayan, P. (2005) "Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control." Nature Neuroscience 8:1704-1711. [pdf]

Hampton, A.N., Bossaerts, P., O'Doherty, J.P. (2006) "The Role of the Ventromedial Prefrontal Cortex in Abstract State-Based Inference during Decision Making in Humans." Journal of Neuroscience 26:8360-8367. [pdf]

Camerer, C.F., Ho, T.-H., Chong, J.-K. (2004) "A Cognitive Hierarchy Model of Games." Quarterly Journal of Economics 119:861-898. [pdf]


Projects

Student Projects
Most of the evening hours will be spent on student projects. Below are the three groups:

A) Single neuron modeling and analysis

Tutors:
Robert Cannon
Justin Kinney
Carson Roberts

Students:
Jun Ding
Eeva Makiraatikka
Abdellatif Nemri
Shani Ross
Adrien Schramm
Jan Michael Schulz
Shun Tsuruno
David Wipf


B) Psychophysics experiments and/or network modeling

Tutors:
Quentin Huys
Naoyuki Sato
Kristy Sundberg

Students:
Sebastian Brandt
Moritz Buerck
Cristina Domnisoru
Dimitri Fisher
Nikos Green
Johannes Hjorth
Minoru Honda
Steffen Kandler
Szymon Leski
Jason Moyer
Srikanth Ramaswamy
Jing Shao
Yunkyu Sohn
Yoshiyuki Yamada


C) Behavioral experiments and modeling

Tutors:
Daniela Schiller
Quentin Huys
Makoto Ito

Students:
Hanneke den Ouden
Laurent Doll
Robert Froemke
David Milstein
Tobias Potjans
Urvashi Raheja
Diego Shalom
Kazuhisa Shibata
Corinne Teeter

* Students will present posters on their current work early in the course, and the results of their projects at the end of the course.


People

Co-organizers:

  • Erik De Schutter, Okinawa Institute of Science and Technology
  • Kenji Doya, Okinawa Institute of Science and Technology
  • Klaus Stiefel, Okinawa Institute of Science and Technology
  • Jeff Wickens, Okinawa Institute of Science and Technology

Lecturers:

  • Ad Aertsen, Universität Freiburg
  • Gordon Arbuthnott, Okinawa Institute of Science and Technology
  • Tom Bartol, Salk Institute
  • Hagai Bergman, Hebrew University of Jerusalem
  • Nathaniel Daw, New York University
  • Sophie Deneve, ENS Paris
  • Erik De Schutter, Okinawa Institute of Science and Technology
  • Markus Diesmann, RIKEN Brain Science Institute
  • Kenji Doya, Okinawa Institute of Science and Technology
  • Michael Häusser, University College London
  • Shin Ishii, Nara Institute of Science and Technology
  • Dieter Jaeger, Emory University
  • Mitsuo Kawato, ATR Computational Neuroscience Laboratories
  • Eve Marder, Brandeis University
  • Klaus Stiefel, Okinawa Institute of Science and Technology
  • David Terman, Ohio State University
  • Gail Tripp, Okinawa Institute of Science and Technology
  • Jeff Wickens, Okinawa Institute of Science and Technology

Tutors:

  • Robert Cannon, University of Edinburgh
  • Quentin Huys, Gatsby Computational Neuroscience Unit
  • Makoto Ito, Okinawa Institute of Science and Technology
  • Justin Kinney, Salk Institute
  • Carson Roberts, University of Texas at San Antonio
  • Naoyuki Sato, RIKEN Brain Science Institute
  • Daniela Schiller, New York University
  • Kristy Sundberg, Salk Institute

Students:

  • Sebastian Brandt, Washington University in St. Louis
  • Moritz Buerck, Technical University of Munich
  • Hanneke den Ouden, University College London
  • Jun Ding, Northwestern University
  • Laurent Doll, Université Pierre et Marie Curie, Paris VI
  • Cristina Domnisoru, MIT
  • Dimitri Fisher, Weizmann Institute of Science
  • Robert Froemke, University of California San Francisco
  • Nikos Green, Max Planck Institute for Human Development
  • Johannes Hjorth, Royal Institute of Technology
  • Minoru Honda, University of Tokyo
  • Steffen Kandler, University of Freiburg
  • Szymon Leski, Polish Academy of Sciences
  • Eeva Mäkiraatikka, Tampere University of Technology
  • David Milstein, Queen's University
  • Jason Moyer, University of Pennsylvania
  • Abdellatif Nemri, University of Montreal
  • Tobias Potjans, RIKEN Brain Science Institute
  • Urvashi Raheja, National Centre for Biological Sciences
  • Srikanth Ramaswamy, Brain Mind Institute, EPFL
  • Shani Ross, University of Michigan
  • Adrien Schramm, Université Pierre et Marie Curie
  • Jan Michael Schulz, University of Otago
  • Diego Shalom, INECO - UBA
  • Jing Shao, Washington University in St. Louis
  • Kazuhisa Shibata, Nara Institute of Science and Technology
  • Yunkyu Sohn, Korea Advanced Institute of Science and Technology
  • Corinne Teeter, University of California, San Diego; The Salk Institute
  • Shun Tsuruno, Kyoto University
  • David Wipf, University of California San Francisco
  • Yoshiyuki Yamada, University of Tokyo; Riken BSI
  • Shizhen Zhu, National University of Singapore