OCNC 2013 Program

Program

This program is provisional and subject to change. Program updates will be posted on the OCNC forum at http://workshop.oist.jp

*All the lectures take place in the Seminar Room, OIST Seaside House, unless otherwise indicated.

*Faculty meetings take place in Meeting Room 1 (and Meeting Room 2 if there are two meetings at the same time).

 

Week 1 (Jun 17-23) : Methods

Monday, June 17

09:30-09:45   Greetings from the organizers

10:00-13:00   Parallel sessions

                          Biologists: Kenji Doya   Introduction to numerical methods for ordinary and partial differential equations

                          Theoreticians: Yoko Yazaki-Sugiyama   Neuronal basis for information processing (Meeting Room 1)

14:00-18:00   Student poster presentations (14:00-16:00 Group 1 / 16:00-18:00 Group 2)

19:00-21:00   Reception & Dinner

  

Tuesday, June 18

09:30-12:30   Bernd Kuhn 

                          1. Optical Imaging Methods

                          2. Ion channel physiology and the Hodgkin-Huxley model of neuronal activity

14:00-15:00   Introduction of the tutors

15:30-18:00   Tutorial: Python

 

Wednesday, June 19

09:30-12:30   Kenji Doya   Introduction to reinforcement learning and Bayesian inference

14:00-16:00   Tutorial: MATLAB 1

16:00-18:00   Tutorial: NEURON 1

 

Thursday, June 20

09:30-12:30   Erik De Schutter  Modeling biochemical reactions, diffusion and reaction-diffusion systems

14:00-16:00   Tutorial: MATLAB 2

16:00-18:00   Tutorial: NEST

 

Friday, June 21

09:30-12:30   Erik De Schutter  Introduction to modeling neurons and networks

14:00-16:00   Tutorial: STEPS

16:00-18:00   Project work

 

Saturday, June 22 (Day off)

 

Sunday, June 23

09:30-12:30   Karl Deisseroth  Optical deconstruction of fully-assembled biological systems

14:00-16:00   Faculty meeting with Dr. Karl Deisseroth

 

Week 2 (Jun 24-30) : Neurons, Networks and Behavior I

Monday, June 24

09:30-12:30   Greg Stephens  An introduction to dynamical systems: from neural activity to natural behavior

14:00-16:00   Project work or meeting with Dr. Erik De Schutter or Dr. Greg Stephens

16:00-18:00   Project work

 

Tuesday, June 25

09:30-12:30   Sophie Deneve  Representation of sensory signals and uncertainty in spiking neural networks.

14:00-16:00  Project work or meeting with faculty

16:00-18:00   Project work

 

Wednesday, June 26

09:30-12:30   William (Bill) Holmes   Modeling neurons: from theory to practice. Experiences with hippocampal and vestibular neuron models

14:00-18:00   Visit to OIST campus + faculty meeting with Dr. Bernd Kuhn or Dr. Yoko Yazaki-Sugiyama

 

Thursday, June 27

09:30-12:30   Avrama Blackwell  Molecular Mechanisms of Synaptic Plasticity

14:00-16:00   Project work or meeting with Dr. Sophie Deneve or Dr. William (Bill) Holmes.

16:00-18:00   Project work or Discussion on publishing computational neuroscience papers (Seminar room)

 

Friday, June 28

09:30-12:30   Mike Hasselmo  Memory mechanisms in the entorhinal cortex and hippocampus: Oscillations, grid cells and acetylcholine.

14:00-16:00   Project work or meeting with Dr. Avrama Blackwell or Dr. Kenji Doya.

16:00-18:00   Project work

 

Saturday, June 29

09:30-12:30   Mitsuo Kawato  From computational model based neuroscience to manipulative neuroscience

14:00-16:00   Project work or meeting with Dr. Mike Hasselmo.

 

Sunday, June 30 (Day off)

 

 

Week 3 (Jul 1-4) : Neurons, Networks and Behavior II

Monday, July 1

09:30-12:30   Gaute Einevoll   Modelling and analysis of the local field potential (LFP)

14:00-16:00   Project work or meeting with Dr. Mitsuo Kawato.

16:00-18:00   Project work

 

Tuesday, July 2

09:30-12:30   Jonathan Pillow   Neural coding and statistical models for neural spike trains

14:00-16:00   Project work or meeting with Dr. Gaute Einevoll.

16:00-18:00   Project work

 

Wednesday, July 3

09:30-12:30   Angelo Arleo   Neural coding at the early stages of the somatosensory pathway: biological evidence and computational modeling

14:00-16:00   Project work or meeting with Dr. Angelo Arleo or Dr. Jonathan Pillow.

16:00-18:00   Project work

 

Thursday, July 4

09:30-11:00   Jeff Wickens   Basal ganglia structure and computations

11:20-12:20   Student project presentations (first 10 students)

13:30-14:30   Student project presentations (second 10 students)

14:50-15:45   Student project presentations (last 9 students)

16:00-17:00   Tutor meeting

19:00-21:00   Banquet & Dinner


 

 

 

Kenji Doya

Introduction to numerical methods for ordinary and partial differential equations

This tutorial introduces the basic concepts of differential equations and how to solve them, or simulate their behavior in time, using a computer. Key concepts like eigenvalues and stability are explained while solving simple differential equations using the MATLAB programming language. Some examples of Hodgkin-Huxley type neuron models and cable equations are also introduced.
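As a taste of what the tutorial covers, here is a minimal sketch of the forward Euler method applied to an exponential-decay equation (written in Python rather than MATLAB; the time constant and step size are illustrative choices, not values from the tutorial):

```python
import numpy as np

# Forward Euler for dx/dt = -x/tau, whose exact solution is x(t) = x0*exp(-t/tau).
tau, x0, dt, t_end = 20.0, 1.0, 0.1, 100.0

t = np.arange(0.0, t_end, dt)
x = np.empty_like(t)
x[0] = x0
for i in range(1, len(t)):
    x[i] = x[i - 1] + dt * (-x[i - 1] / tau)   # x_{n+1} = x_n + dt * f(x_n)

print("max error:", np.abs(x - x0 * np.exp(-t / tau)).max())
# Halving dt roughly halves the error (first-order accuracy); for dt > 2*tau
# the iteration diverges, illustrating the stability issues the lecture covers.
```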

Introduction to reinforcement learning and Bayesian inference

The aim of this tutorial is to present the theoretical cores for modeling animal/human action and perception. In the first half of the tutorial, we will focus on "reinforcement learning", a theoretical framework for an adaptive agent to learn behaviors from exploratory actions and the resulting reward or punishment. Reinforcement learning has played an essential role in understanding the neural circuits and neurochemical systems behind adaptive action learning, most notably the basal ganglia and the dopamine system. In the second half, we will familiarize ourselves with the framework of Bayesian inference, which is critical in understanding the process of perception from noisy, incomplete observations.
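To make the reinforcement-learning half concrete, here is a minimal sketch of temporal-difference (TD) learning on a toy five-state chain (the task and all parameters are invented for illustration); the TD error computed below is the quantity commonly compared with dopamine responses:

```python
import numpy as np

# TD(0) value learning on a 5-state chain with a reward of 1 at the right end.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)

for episode in range(1000):
    s = 0
    while s < n_states - 1:
        s_next = s + 1                                # deterministic walk right
        r = 1.0 if s_next == n_states - 1 else 0.0
        v_next = V[s_next] if s_next < n_states - 1 else 0.0
        delta = r + gamma * v_next - V[s]             # TD (reward prediction) error
        V[s] += alpha * delta                         # nudge estimate toward target
        s = s_next

print(V)  # converges toward the discounted returns [0.729, 0.81, 0.9, 1.0, 0.0]
```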

 

Suggested reading:

Doya K: Reinforcement learning: Computational theory and biological mechanisms. HFSP Journal, 1(1), 30-40 (2007)

Free on-line access: http://dx.doi.org/10.2976/1.2732246

Doya K, Ishii S: A probability primer. In Doya K, Ishii S, Pouget A, Rao RPN eds. Bayesian Brain: Probabilistic Approaches to Neural Coding, pp. 3-13. MIT Press (2007).

Free on-line access: http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11106

 

 

 

Yoko Yazaki-Sugiyama

Neuronal basis for information processing.

We acquire visual information with the eyes, auditory information with the ears, olfactory information with the nose, and so on; this information is conveyed to the brain, where it is processed and transformed into what we recognize as the senses. The brain also generates and controls complicated behaviors, and is responsible for aspects of behavior such as feelings and abstract thought.

Neurons are the smallest components of the brain and, by wiring to one another, are the key players in the signal processing that accomplishes these difficult tasks.

In this lecture we will learn the basic physiological characteristics and mechanisms of neurons to see how such complicated tasks can be performed. We will also try to get an idea of how neurons can compute signals by connecting to each other in clever ways.

 

Suggested reading

The Neuron: Cell and Molecular Biology. I.B. Levitan and L.K. Kaczmarek, Oxford University Press

 
 

Bernd Kuhn

Optical Imaging Methods

Functional optical imaging has become one of the key techniques in neuroscience. I will introduce the most important methods and explain what we can learn from them, but also their limitations.

Ion channel physiology and the Hodgkin-Huxley model of neuronal activity

I will give an introduction to ion channels, focusing specifically on voltage-gated channels and their dynamics in response to membrane voltage. Neurons use a combination of different voltage-gated channels to generate fast (about 1 ms) voltage changes. I will talk about the first model by Hodgkin and Huxley describing this electrical activity, and about more recent additions to, and fine-tuning of, this time-honored model.
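For orientation, here is a minimal forward-Euler simulation of the classic Hodgkin-Huxley equations with the standard squid-axon parameters (a sketch only; the step current and duration are illustrative, and the tutorials use dedicated simulators for serious work):

```python
import numpy as np

# Minimal Hodgkin-Huxley simulation with the standard squid-axon parameters,
# integrated by forward Euler (dt = 0.01 ms keeps the scheme stable).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3            # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                  # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_ext = 0.01, 50.0, 10.0                   # ms, ms, uA/cm^2
V = -65.0
m = a_m(V) / (a_m(V) + b_m(V))                    # gating variables start at rest
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

spikes = 0
for _ in range(int(T / dt)):
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V_new = V + dt * (I_ext - I_ion) / C          # membrane equation
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)   # gating kinetics
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    if V < 0.0 <= V_new:                          # count upward zero crossings
        spikes += 1
    V = V_new

print("spikes in 50 ms:", spikes)                 # tonic firing at this current
```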

 

Suggested reading

Helmchen, Konnerth: Imaging in Neuroscience, 2011

Yuste, Lanni, Konnerth: Imaging Neurons, 2000

Johnston and Wu: Foundations of Cellular Neurophysiology, MIT Press

 
 

Erik De Schutter

Modeling biochemical reactions, diffusion and reaction-diffusion systems

In my first talk I will use calcium dynamics modeling as a way to introduce deterministic solution methods for reaction-diffusion systems. The talk covers exponentially decaying calcium pools, diffusion, calcium buffers and buffered diffusion, and calcium pumps and exchangers. I will describe properties of buffered diffusion systems, ways to characterize them and new approximations that we developed for use in large neuron models.

In the second talk I will turn towards stochastic reaction-diffusion modeling. Two methods will be described: Gillespie's stochastic simulation algorithm (SSA) extended to simulate diffusion, and particle-based methods. I will discuss some of the problems in generating correct descriptions of microscopic 3D geometries and briefly describe the STEPS software. I will then describe two applications: stochastic reaction modeling of LTD induction in Purkinje cells and stochastic diffusion modeling of anomalous diffusion in spiny dendrites.
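For a flavor of the stochastic approach, here is a minimal sketch of Gillespie's SSA for a single well-mixed reversible reaction A <-> B (the rate constants are invented for illustration); spatial simulators such as STEPS extend this scheme by subdividing space into voxels and treating diffusive hops between neighboring voxels as additional "reactions":

```python
import numpy as np

# Gillespie SSA for the reversible isomerization A <-> B (well mixed, no space).
rng = np.random.default_rng(1)
k_f, k_b = 0.5, 0.2                    # per-molecule rate constants, 1/s
nA, nB, t, t_end = 100, 0, 0.0, 50.0

while t < t_end:
    a1, a2 = k_f * nA, k_b * nB        # propensities of the two reactions
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)     # exponentially distributed waiting time
    if rng.random() < a1 / a0:         # choose which reaction fires
        nA, nB = nA - 1, nB + 1
    else:
        nA, nB = nA + 1, nB - 1

# Copy numbers fluctuate around the deterministic equilibrium,
# nA = 100*k_b/(k_f+k_b) ~= 29; with small numbers, the fluctuations matter.
print("A, B at t_end:", nA, nB)
```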

Introduction to modeling neurons and networks

In the first talk I will discuss methods to model morphologically detailed neurons. I will briefly introduce cable theory, the mathematical description of current flow in dendrites. By discretizing the cable equation we come to compartmental modeling, the standard method to simulate morphologically detailed models of neurons. I will discuss the challenges in fitting compartmental models to experimental data with an emphasis on active properties. The talk will finish with a brief overview of dendritic properties predicted by cable theory and experimental data confirming these predictions.

The second talk will briefly introduce network modeling. I will introduce simpler neuron models like integrate-and-fire neurons and then move on to modeling synaptic currents. I will wrap up with an overview of network connectivity.
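As a concrete example of the simpler neuron models mentioned above, here is a minimal leaky integrate-and-fire neuron driven by a constant current (all parameter values are illustrative):

```python
import numpy as np

# Leaky integrate-and-fire neuron: tau_m dV/dt = -(V - E_L) + R*I,
# with a spike emitted and V reset whenever V crosses threshold.
tau_m, E_L, R = 20.0, -65.0, 10.0      # ms, mV, MOhm
V_th, V_reset = -50.0, -65.0           # threshold and reset, mV
dt, T, I = 0.1, 200.0, 2.0             # ms, ms, nA  (R*I = 20 mV drive)

V, spike_times = E_L, []
for step in range(int(T / dt)):
    V += dt / tau_m * (-(V - E_L) + R * I)
    if V >= V_th:                      # threshold crossing: spike and reset
        spike_times.append(step * dt)
        V = V_reset

print("firing rate (Hz):", 1000.0 * len(spike_times) / T)
```

In network models, the constant current is replaced by synaptic currents triggered by the spikes of presynaptic neurons, which the lecture's final part takes up.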

 

Suggested reading

• U.S. Bhalla and S. Wils: Reaction-diffusion modeling. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)

• E. De Schutter: Modeling intracellular calcium dynamics. In Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston. 61–92 (2009)

• G. Antunes and E. De Schutter: A stochastic signaling network mediates the probabilistic induction of cerebellar long-term depression. Journal of Neuroscience: in press (2012).

• F. Santamaria, S. Wils, E. De Schutter and G.J. Augustine: Anomalous diffusion in Purkinje cell dendrites caused by dendritic spines. Neuron 52: 635-648 (2006).

• Several chapters in Computational Modeling Methods for Neuroscientists, E. De Schutter ed., MIT Press, Boston (2009).

• V. Steuber et al.: Cerebellar LTD and pattern recognition by Purkinje cells. Neuron 54: 121–136 (2007).

 

 

Karl Deisseroth

Optical deconstruction of fully-assembled biological systems

(Abstract: TBA)

 

Suggested reading : TBA
 
 
 

Gregory Stephens

An introduction to dynamical systems: from neural activity to natural behavior

My lecture will consist of two parts: an introduction to dynamical systems, focused in particular on the power of qualitative analysis, and a novel, quantitative approach to understanding the motion of the nematode C. elegans.

Indeed, while there has been an explosion in our ability to characterize the dynamics of molecules, cells, and circuits, our understanding of behavior on the organism scale is remarkably less advanced. Here, we use high-resolution video microscopy to show that the space of worm shapes is low-dimensional, with just four dimensions accounting for 95% of the shape variance. Projections of worm shape along these four “eigenworms” provide a precise yet substantially complete description of locomotory behavior, capturing both classical motions such as forward crawling, reversals, and Ω-turns, and novel behaviors such as “pause” states at particular postures. We use the eigenworms to construct a stochastic model of the body wave dynamics that predicts transitions between attractors corresponding to abrupt reversals in crawling direction, and we show that the noise amplitude decreases systematically with increasing time away from food, resulting in longer bouts of forward crawling and suggesting that worms use noise to adaptive benefit.
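The "eigenworm" decomposition is ordinary principal component analysis applied to body postures. A minimal sketch on synthetic data (a noisy traveling body wave invented for illustration; real worm data would be tangent angles extracted from video):

```python
import numpy as np

# PCA of postures: each row is one video frame, each column the tangent
# angle at one of 100 points along the body (synthetic traveling wave here).
rng = np.random.default_rng(0)
n_frames, n_points = 5000, 100
s = np.linspace(0.0, 1.0, n_points)
phase = rng.uniform(0.0, 2.0 * np.pi, n_frames)
angles = np.sin(4.0 * np.pi * s[None, :] + phase[:, None]) \
         + 0.1 * rng.standard_normal((n_frames, n_points))

X = angles - angles.mean(axis=0)          # center each body coordinate
cov = X.T @ X / n_frames                  # covariance across postures
evals = np.linalg.eigvalsh(cov)[::-1]     # eigenvalues, largest first
explained = np.cumsum(evals) / evals.sum()
print("variance in first 2 modes:", explained[1])
# A pure traveling wave lives in 2 dimensions (its sine and cosine components);
# real worm postures need about 4 "eigenworms" to reach 95% of the variance.
```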

 

Suggested reading: TBA

 

 

 

Sophie Deneve

Representation of sensory signals and uncertainty in spiking neural networks.

Understanding how human perception is translated into actions and how our experience forms our worldview has been one of the central questions of psychology, cognitive science and neuroscience. In particular, it is very hard to understand how our perceptions and strategies persist and/or change in the face of our continuous experience as active agents in an unpredictable and perpetually changing world. Indeed, the tasks humans are facing in “natural” situations (as opposed to simplistic laboratory settings) require the combination of multiple noisy and ambiguous sensory cues, as well as the use of prior knowledge, either innate or acquired from previous experiences. Such incomplete and imperfect sensory cues and priors can only provide probabilistic information (such as object structures that are more likely, or the probability of moving in a particular direction given the observed optic flow).

How the neural substrates perform these probabilistic inference tasks is still an open question. In this lecture, we will explore the hypothesis that rather than representing a single estimate of sensory or motor variables, single neurons or neural populations could be representing, implicitly or explicitly, probabilities or probability distributions. Many behaviorally relevant variables, such as movement direction, are represented by large groups of neurons with wide, overlapping tuning curves, and very noisy responses. Interestingly, this kind of code is ideally suited to perform Bayesian inference. In particular, when several population codes represent statistically related variables, for example the direction of motion of an object on the skin and on the retina, their information can be combined simply by summing inputs from the different populations.

We will consider how probabilistic inference could be implemented by networks of integrate-and-fire neurons. Such models will have important implications for our views of the neural code and the dynamics of cortical networks. We will show in particular that the high variability of sensory and motor spike trains may in fact not be noise, but a translation, in “neural language”, of sensory and motor uncertainty. Meanwhile, population coding is a natural and necessary consequence of interpreting noisy and ambiguous sensory inputs, not a redundant code to deal with neural noise. In fact, neural representations at the level of the entire population could be orders of magnitude more precise than they appear to be when extrapolating from single-neuron recordings. This would also explain the very tight balance between excitation and inhibition observed in cortical neurons, as well as the Poisson statistics of their spike trains.
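The claim that statistically related cues can be combined "simply by summing inputs" has a simple Gaussian counterpart: adding log-likelihoods is equivalent to precision-weighted averaging of the cues. A minimal sketch with invented numbers:

```python
import numpy as np

# Combining two noisy Gaussian cues (e.g., skin- and retina-based motion
# estimates) by precision weighting; all values are illustrative.
mu1, sigma1 = 10.0, 2.0        # cue 1: estimate and noise s.d.
mu2, sigma2 = 14.0, 4.0        # cue 2: less reliable, so weighted less
w1 = sigma2**2 / (sigma1**2 + sigma2**2)
mu_post = w1 * mu1 + (1.0 - w1) * mu2
sigma_post = np.sqrt(1.0 / (1.0 / sigma1**2 + 1.0 / sigma2**2))
print(mu_post, sigma_post)     # 10.8, ~1.79: combined estimate beats either cue
```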

 

Suggested reading : TBA

 

 

 

William (Bill) Holmes

Modeling neurons: from theory to practice. Experiences with hippocampal and vestibular neuron models

Dendritic modeling begins with the cable equation and the models of Rall. I will review some of the key solutions to the cable equation and discuss their importance for gaining an intuitive understanding of dendritic function. In particular, there are some simple formulas that allow insight into certain cell characteristics. As useful as these insights are for providing understanding, the assumption of passive membrane is rarely satisfied and this means that single neuron models with voltage- and calcium-dependent conductances in the dendrites and soma, as well as in the axon, are required. I will share some practical ideas for modeling neurons based on my experiences with hippocampal and vestibular neuron models. I will discuss things that “bug” me about single neuron modeling, including the plethora of ion channel types, the use and abuse of Boltzmann fits for modeling conductances, experimental issues with channel kinetics data, unnecessarily complicated time constant expressions and the fact that models that appear to reproduce observed behavior may not always be physiologically realistic.
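One target of the "use and abuse" remark above is the Boltzmann function routinely fit to steady-state channel activation data. A minimal sketch (the half-activation voltage and slope factor below are invented for illustration; fitted values depend strongly on recording conditions):

```python
import numpy as np

# Boltzmann function commonly fit to steady-state activation curves:
# m_inf(V) = 1 / (1 + exp((V_half - V) / k)).
def boltzmann(V, V_half=-30.0, k=8.0):   # example parameters, mV
    return 1.0 / (1.0 + np.exp((V_half - V) / k))

for V in (-60.0, -30.0, 0.0):
    print(V, boltzmann(V))               # 0.5 at V = V_half; slope set by k
```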

 

Suggested reading: TBA

 

Avrama Blackwell

Molecular Mechanisms of Synaptic Plasticity

Long-term synaptic plasticity is a long-lasting change in the strength of a synaptic connection, and is a proposed mechanism of memory storage. Many paradigms are used to induce plasticity, and they vary by brain region, type of plasticity, and molecular mechanism. Calcium is a crucially important molecule for all types of synaptic plasticity: an elevation in intracellular calcium concentration is critical for the induction of both potentiation and depression. Calcium activates many other molecules with a documented role in synaptic plasticity, including calcium/calmodulin-dependent protein kinase II (CaMKII), protein kinase C (PKC), and cAMP-dependent protein kinase (PKA). I will describe a few characteristics of hippocampal synaptic plasticity and a few of the signaling pathways implicated, as well as various computational models that have been developed to evaluate whether the identified signaling pathways can explain the sensitivity to stimulation pattern.
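A crude way to capture the calcium dependence described above, in the spirit of calcium-control models (a generic illustration, not any specific model from the lecture), is a rule in which moderate calcium depresses a synaptic weight and high calcium potentiates it; all thresholds below are invented:

```python
# Calcium-threshold plasticity sketch: low [Ca2+] leaves the weight alone,
# intermediate [Ca2+] gives depression (LTD), high [Ca2+] gives potentiation (LTP).
theta_d, theta_p, eta = 0.35, 0.55, 0.1   # illustrative thresholds (uM) and rate

def dw(ca, w):
    if ca > theta_p:
        return eta * (1.0 - w)             # potentiation, saturating at w = 1
    if ca > theta_d:
        return -eta * w                    # depression, saturating at w = 0
    return 0.0                             # below both thresholds: no change

w = 0.5
for ca in [0.2, 0.4, 0.4, 0.7, 0.7]:       # a toy calcium trace
    w += dw(ca, w)
    print(f"ca={ca:.1f}  w={w:.3f}")
```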

 

Suggested reading: TBA
 
 
 

Michael E. Hasselmo

Memory mechanisms in the entorhinal cortex and hippocampus: Oscillations, grid cells and acetylcholine.

Entorhinal cortex, the hippocampus and the medial septum play an important role in memory for the location and time of events in episodic memory. The physiological properties of these regions help us to understand the mechanisms of episodic memory function. The spatial location of memories may be coded by place cells in the hippocampus that respond when a rat is in a single location (O’Keefe and Burgess, 2005) and grid cells in entorhinal cortex that fire when a rat visits an array of locations in the environment that fall in a hexagonal pattern (Moser and Moser, 2008). Hebbian modification of synapses could form associations between locations and items or events in memory. The effects of acetylcholine could enhance the encoding of new associations between locations and items. I will review the data on the behavioral effects of acetylcholine receptor blockade and models that link the behavioral role to specific cellular effects of acetylcholine (Hasselmo, 2006; Hasselmo and Stern, 2006).

The coding of space could depend on the spacing and size of grid cell firing fields, which become progressively larger at more ventral anatomical locations. Models show how grid cells could arise from oscillations and resonance in entorhinal cortex (Burgess, Barry and O’Keefe, 2007; Hasselmo, 2008). Loss of oscillations with inactivation of the medial septum is associated with a loss of spatial periodicity of grid cells (Brandon et al., 2011). The spacing of grid cells could depend upon the intrinsic frequency of oscillations and resonance. Whole-cell patch data show that layer II stellate cells in entorhinal cortex have higher frequencies of resonance and subthreshold membrane potential oscillations in dorsal compared to ventral entorhinal cortex (Giocomo et al., 2007). A new model shows how this resonance could contribute to the spacing of grid cell firing fields.
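A one-dimensional sketch of the oscillatory interference idea (after Burgess, Barry and O'Keefe, 2007): a dendritic oscillation whose frequency grows with running speed beats against somatic theta, and firing fields recur wherever the two drift back into phase. All parameters are illustrative; a dorsal-to-ventral decrease in the speed-sensitivity beta maps onto the observed increase in field spacing:

```python
import numpy as np

# Oscillatory interference on a linear track (illustrative parameters).
f_theta = 8.0                      # somatic theta frequency, Hz
beta = 0.05                        # Hz per (cm/s); field spacing = 1/beta = 20 cm
v, dt, T = 10.0, 0.001, 10.0       # constant running speed (cm/s), step (s), total (s)

t = np.arange(0.0, T, dt)
x = v * t                                            # position on the track
soma = np.cos(2 * np.pi * f_theta * t)
dend = np.cos(2 * np.pi * (f_theta + beta * v) * t)  # speed-shifted dendritic oscillator
rate = np.maximum(soma + dend, 0.0)                  # thresholded interference

# cos A + cos B beats with envelope 2*|cos(pi*beta*v*t)|, which peaks every
# 1/(beta*v) seconds, i.e. every 1/beta = 20 cm of travel: periodic firing fields.
print("predicted field spacing (cm):", 1.0 / beta)
```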

Brandon, M.P., Bogaard, A.R., Libby, C.P., Connerney, M.A., Gupta, K., Hasselmo, M.E. (2011) Reduction of theta rhythm dissociates grid cell spatial periodicity from directional tuning. Science 332: 595-599.

Burgess, N., Barry, C., O'Keefe, J. (2007) An oscillatory interference model of grid cell firing. Hippocampus 17(9): 801-812.

Giocomo, L.M., Zilli, E.A., Fransen, E., Hasselmo, M.E. (2007) Temporal frequency of subthreshold oscillations scales with entorhinal grid cell field spacing. Science 315: 1719-1722.

Hasselmo, M.E. (2006) The role of acetylcholine in learning and memory. Current Opinion in Neurobiology 16(6): 710-715.

Hasselmo, M.E. (2008) Grid cell mechanisms and function: contributions of entorhinal persistent spiking and phase resetting. Hippocampus 18(12): 1213-1229.

Hasselmo, M.E., Stern, C.E. (2006) Mechanisms underlying working memory for novel information. Trends in Cognitive Sciences 10(11): 487-493.

Moser, E.I., Moser, M.B. (2008) A metric for space. Hippocampus 18: 1142-1156.

O'Keefe, J., Burgess, N. (2005) Dual phase and rate coding in hippocampal place cells: theoretical significance and relationship to entorhinal grid cells. Hippocampus 15: 853-866.

 

Suggested reading: TBA

 

 

 

Mitsuo Kawato

From computational model based neuroscience to manipulative neuroscience

My lecture will first summarize computational-model-based neuroimaging and neurophysiology studies from our group related to cerebellar internal models and the basal ganglia reinforcement learning model. After pointing out the pros and cons of this approach, I will advocate the necessity of causal tools in systems neuroscience. Decoded neurofeedback and decoded connectivity neurofeedback are new experimental tools for realizing specific spatiotemporal patterns of neural activity in the human brain. Experimental and theoretical issues will be presented.

 

Suggested reading: TBA
 
 

Gaute Einevoll

Modelling and analysis of the local field potential (LFP)

The past decade has witnessed a renewed interest in local field potentials (LFPs), i.e., extracellularly recorded potentials with frequencies up to 500 Hz. This is due both to the advent of multielectrodes allowing recording of LFPs at tens to hundreds of sites simultaneously, and to the insight that LFPs offer a unique window into how neurons in cortical populations integrate synaptic inputs. However, owing to its numerous potential neural sources, the LFP is difficult to interpret. Careful mathematical modelling and analysis are thus needed to take full advantage of the opportunities that this signal offers for understanding signal processing in cortical circuits and, ultimately, the neural basis of perception and cognition. In the lecture I will go through the biophysical origin of the LFP, how it can be mathematically modelled, and new methods for analysing the signal.
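The biophysical starting point for such forward modelling is volume conduction: in an infinite, homogeneous, ohmic medium of conductivity sigma, a point current source I at distance r produces a potential phi = I/(4*pi*sigma*r), and a neuron's LFP contribution is the sum of such terms over all its membrane currents. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Point-source forward model for extracellular potentials.
sigma = 0.3                                # extracellular conductivity, S/m

def phi(I, r):
    return I / (4.0 * np.pi * sigma * r)   # potential in volts

# A 100 pA return current 50 um from the electrode (illustrative values):
print(phi(100e-12, 50e-6) * 1e6, "uV")     # ~0.53 uV from this one source
# Multicompartment LFP models sum such contributions over all compartments
# and all neurons in the population.
```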

 

Suggested reading: TBA
 
 
 

Jonathan Pillow

Neural coding and statistical models for neural spike trains

A central problem in systems neuroscience is to understand the probabilistic relationship between environmental stimuli and neural spike responses. A powerful approach to this problem is to develop explicit statistical models of neural spike trains. Such models allow us to determine how stimulus information is encoded in neural activity and how it might be read out by downstream brain areas.

In this talk, I will provide a general introduction to neural encoding models and review classical methods like reverse correlation (spike-triggered averaging), Volterra/Wiener kernels, and maximum likelihood estimation. Additional topics will include spike-triggered covariance (STC) analysis and generalized linear models (GLMs) for multi-neuron data. I will discuss several open problems and invite discussion about the use of probabilistic models for understanding the neural code.
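For concreteness, here is a minimal reverse-correlation example on synthetic data: spikes are generated from a white-noise stimulus through a hidden linear filter and an exponential nonlinearity (a GLM-style generator; the filter and all parameters are invented), and averaging the stimulus preceding each spike recovers the filter:

```python
import numpy as np

# Spike-triggered average (STA) on synthetic data from a linear-nonlinear model.
rng = np.random.default_rng(0)
T, lag = 100_000, 20
stim = rng.standard_normal(T)                      # Gaussian white-noise stimulus
kernel = np.exp(-np.arange(lag) / 5.0) * np.sin(np.arange(lag) / 2.0)  # hidden filter

drive = np.convolve(stim, kernel)[:T]              # causally filtered stimulus
rate = 0.05 * np.exp(0.5 * drive - 1.0)            # exponential nonlinearity
spikes = rng.poisson(np.clip(rate, 0.0, 1.0))      # Poisson spike counts per bin

sta, count = np.zeros(lag), 0
for t in range(lag, T):
    if spikes[t] > 0:                              # average stimulus before spikes
        sta += spikes[t] * stim[t - lag + 1 : t + 1][::-1]
        count += spikes[t]
sta /= count

# For Gaussian white noise, the STA is proportional to the true kernel:
print("correlation with true filter:", np.corrcoef(sta, kernel)[0, 1])
```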

 

Suggested reading: TBA

 

 

Angelo Arleo

Neural coding at the early stages of the somatosensory pathway: biological evidence and computational modeling

Fine touch sensing relies on peripheral-to-central transmission of somesthetic percepts. First, we will see how mechanoreceptors encode contact features to mediate early tactile processing in humans. We will quantify neurotransmission reliability at the level of primary tactile afferents through a metrics-based information analysis, focusing on relative spike-time coding. Second, we will consider a model of how primary afferent signals are processed by the cuneate neurons of the brainstem, prior to their transmission to central networks serving perceptual and sensorimotor functions. Finally, we will see how findings on the encoding/decoding principles underpinning neural information processing along the ascending somatosensory pathway can give rise to possible applications for fine touch sensing in neurorobotic and neuroprosthetic devices.
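The metrics-based analysis builds on spike-train distances such as the Victor-Purpura metric (Victor and Purpura, 1996): the minimal cost of transforming one spike train into another when inserting or deleting a spike costs 1 and shifting a spike by dt costs q*|dt|. A minimal sketch (the spike times and q are invented):

```python
import numpy as np

# Victor-Purpura spike-time distance via dynamic programming.
def vp_distance(s1, s2, q):
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)          # delete all spikes of s1
    D[0, :] = np.arange(m + 1)          # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1.0,                   # delete a spike
                          D[i, j - 1] + 1.0,                   # insert a spike
                          D[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))  # shift
    return D[n, m]

a = [0.010, 0.250, 0.400]               # spike times in seconds
b = [0.015, 0.260, 0.700]
print(vp_distance(a, b, q=20.0))        # -> 2.3; 1/q sets the temporal precision
```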

 

Suggested reading:

Bengtsson F, Brasselet R, Johansson RS, Arleo A, Jorntell H (2013) Integration of sensory quanta in cuneate nucleus neurons in vivo. PLoS ONE, 8(2):e56630.

Brasselet R, Johansson RS, Arleo A (2011) Quantifying neurotransmission reliability through metrics based information analysis. Neural Comput 23(4):852-81.

Johansson RS, Birznieks I (2004) First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nat Neurosci 7:170-7.

Johansson RS, Flanagan JR (2009) Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat Rev Neurosci 10:345-59.

Quian Quiroga R, Panzeri S (2009) Extracting information from neuronal populations: Information theory and decoding approaches. Nat Rev Neurosci 10:173-85.

Victor J, Purpura K (1996) Nature and precision of temporal coding in visual cortex: A metric-space analysis. J Neurophysiol 76:1310-26.

 

Jeff Wickens

Basal ganglia structure and computations

 

The basal ganglia constitute a major brain center for learning on the basis of positive reinforcement. The neuromodulators dopamine and acetylcholine play a central role in basal ganglia operations. In the first lecture I will discuss the cellular and circuit mechanisms underlying reinforcement learning in the striatum, the major input nucleus of the basal ganglia. I will discuss the implications of neuromodulator mechanisms for computational models of reinforcement learning.

 

Suggested reading:

Wickens, J.R. (1997). Basal ganglia: Structure and computations. Network: Computation in Neural Systems 8: 77-109.

Reynolds, J.N., Hyland, B.I., and Wickens, J.R. (2001). A cellular mechanism of reward-related learning. Nature 413: 67-70.

Glimcher, P.W. (2011). Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis. Proc Natl Acad Sci U S A 108 Suppl 3: 15647-15654.