Program & Abstracts
OCNC 2014 Program
*All the lectures take place in the Seminar Room, OIST Seaside House unless otherwise indicated.
*Faculty meetings take place in Meeting Room 1 (and 2 if we have two meetings at the same time).

Week 1 (Jun 16-22): Methods
Monday, June 16
09:30-09:45 Greetings from the organizers
10:00-13:00 Parallel Sessions
Biologists: Kenji Doya Introduction to numerical methods for ordinary and partial differential equations
Theoreticians: Gordon Arbuthnott An exploration of real neuronal systems / Meeting Room 1
14:00-18:00 Student poster presentations (14:00-16:00 Group 1 / 16:00-18:00 Group 2)
19:00-21:00 Reception & Dinner
Tuesday, June 17
09:30-12:30 Erik De Schutter Modeling biochemical reactions, diffusion and reaction-diffusion systems
14:00-15:00 Introduction of the tutors
15:30-18:00 Tutorial: MATLAB 1
Wednesday, June 18
09:30-12:30 Bernd Kuhn
14:00-16:00 Tutorial: Python
16:00-18:00 Tutorial: NEST
Thursday, June 19
09:30-12:30 Erik De Schutter Introduction to modeling neurons and networks
14:00-16:00 Tutorial: Neuron
16:00-18:00 Tutorial: MATLAB 2
Friday, June 20
09:30-12:30 Kenji Doya Introduction to reinforcement learning and Bayesian inference
14:00-16:00 Tutorial: STEPS / Cluster Use (optional)
16:00-18:00 Q&A Session
Saturday, June 21
09:30-12:30 Upinder Bhalla Computing with Chemistry (and also electricity as a minor side effect)
14:00-16:00 Meeting with Dr. Doya
Sunday, June 22 (Day off) Optional excursion
Week 2 (Jun 23-29): Neurons, Networks and Behavior I
Monday, June 23
09:30-12:30 Greg Stephens An introduction to dynamical systems: from neural activity to organism-scale behavior
14:00-16:00 Project work or meeting with Dr. Bhalla or Dr. De Schutter
16:00-18:00 Project work
Tuesday, June 24
09:30-12:30 Yael Niv Advanced topics in neural reinforcement learning
14:00-18:00 Visit to OIST campus + faculty meeting with Dr. Kuhn and Dr. Stephens
Wednesday, June 25
09:30-12:30 Ivan Soltesz Data-driven large-scale modeling of hippocampal networks
14:00-16:00 Project work or meeting with Dr. Niv
16:00-18:00 Project work
Thursday, June 26
09:30-12:30 Claudia Clopath Single neuron modeling
14:00-17:00 Project work or meeting with Dr. Soltesz
17:00-18:00 Project work
Friday, June 27
09:30-12:30 Tony Prescott Embodied Computational Neuroscience for Sensorimotor and Social Cognition
14:00-16:00 Project work or meeting with Dr. Clopath
16:00-18:00 Project work
Saturday, June 28 (Day off) Optional excursion
Sunday, June 29
09:30-12:30 Jason Kerr Imaging neuronal activity in vivo: how do we make sure we get the right answer and avoid delusion?
14:00-16:00 Project work or meeting with Dr. Prescott
Week 3 (Jun 30 – Jul 3): Neurons, Networks and Behavior II
Monday, June 30
09:30-12:30 Javier Medina Can we build an artificial cerebellum for motor control and adaptation?
14:00-16:00 Project work or meeting with Dr. Kerr
16:00-18:00 Project work
Tuesday, July 1
09:30-12:30 Hiroyuki Nakahara Neural computation and social decision-making
14:00-16:00 Project work or meeting with Dr. Medina or Dr. Fukai
16:00-18:00 Project work
Wednesday, July 2
09:30-12:30 Greg Stuart Single neuron computation
14:00-16:00 Project work or meeting with Dr. Nakahara or Dr. Stuart
16:00-18:00 Project work
Thursday, July 3
09:30-11:00 Taro Toyoizumi Brain State Modulation by Closed Loop Sensory Feedback
11:15-12:30 Student project presentations
14:00-16:00 Student project presentations
19:00-21:00 Banquet & Dinner
Abstracts
Lecturer | Kenji Doya
Title | (Parallel Session) Introduction to numerical methods for ordinary and partial differential equations
Abstract | This tutorial introduces the basic concepts of differential equations and how to solve them, or simulate their behavior in time, using a computer. Key concepts such as eigenvalues and stability are explained while solving simple differential equations using the MATLAB programming language. Some examples of Hodgkin-Huxley-type neuron models and cable equations are also introduced.
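To make the numerical ideas above concrete, here is a minimal sketch (in Python rather than the MATLAB used in the tutorial, with an illustrative matrix and step size of my own choosing) of forward-Euler integration of the linear system dx/dt = A x, whose eigenvalues determine stability:

```python
import numpy as np

# Minimal sketch: forward-Euler integration of the linear system dx/dt = A x.
# The eigenvalues of A determine stability; for the explicit Euler scheme the
# step dt must satisfy |1 + dt*lambda| < 1 for every eigenvalue lambda.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])          # illustrative matrix (damped oscillator)
print("eigenvalues of A:", np.linalg.eigvals(A))

def euler(A, x0, dt, n_steps):
    """Integrate dx/dt = A x with the explicit Euler method."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + dt * (A @ x)          # x_{k+1} = x_k + dt * A x_k
        traj.append(x.copy())
    return np.array(traj)

traj = euler(A, x0=[1.0, 0.0], dt=0.01, n_steps=1000)
print("final state:", traj[-1])       # decays toward 0 if Re(lambda) < 0
```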
Lecturer | Kenji Doya
Title | Introduction to reinforcement learning and Bayesian inference
Abstract | The aim of this tutorial is to present the theoretical core for modeling animal/human action and perception. In the first half of the tutorial, we will focus on "reinforcement learning", a theoretical framework in which an adaptive agent learns behaviors from exploratory actions and the resulting reward or punishment. Reinforcement learning has played an essential role in understanding the neural circuits and neurochemical systems behind adaptive action learning, most notably the basal ganglia and the dopamine system. In the second half, we will familiarize ourselves with the framework of Bayesian inference, which is critical in understanding the process of perception from noisy, incomplete observations.
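As a rough illustration of the two halves of this tutorial, the sketch below pairs a tabular Q-learning update on a hypothetical two-armed bandit with a Beta-Bernoulli posterior update; the task, reward probabilities and parameter values are invented for the example and are not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Reinforcement learning half: tabular Q-learning on a 2-armed bandit ---
true_reward_prob = np.array([0.3, 0.7])    # hypothetical task, not from the lecture
Q = np.zeros(2)                            # action values
alpha, epsilon = 0.1, 0.1                  # learning rate, exploration rate
for t in range(1000):
    a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q))
    r = float(rng.random() < true_reward_prob[a])       # stochastic reward
    Q[a] += alpha * (r - Q[a])                           # prediction-error update
print("learned action values:", Q)

# --- Bayesian half: Beta-Bernoulli posterior over one arm's reward probability ---
a_post, b_post = 1.0, 1.0                  # uniform Beta(1, 1) prior
for _ in range(100):
    r = float(rng.random() < 0.7)
    a_post += r                            # count successes
    b_post += 1.0 - r                      # count failures
print("posterior mean of reward probability:", a_post / (a_post + b_post))
```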
Lecturer | Gordon Arbuthnott
Title | (Parallel Session) An exploration of real neuronal systems
Abstract | A hero of my graduate years, Ben DeL. Burns, wrote a book called ‘The Uncertain Nervous System’. He claimed he loved writing books because you avoided peer review and could say what you thought. Most of my contemporaries thought he had lost his mind: nothing was ‘uncertain’ about the nervous system, just unknown! As we consider what you should expect from the real nervous systems you model, his insight is worth keeping in mind. He was Tim Bliss's Ph.D. supervisor, but when Tim discovered LTP with Lomo, Ben said: “We have been looking for the wrong thing all this time! You don't learn by coincidences, you learn by consequences!” He (Burns) and Alison Webb then began a long series of studies on the measurement of behavior in cats and, for a time, abandoned electrophysiology as he had practiced it before. What does all this have to do with computational neuroscience? Mainly, it has to do with the importance of asking the right question, but it also has to do with the best strategy for looking for answers. After some trouble I convinced an editor to accept a purely methodological chapter in a neuroscience text because “the only facts are the methods and their consequences.” The rest is just speculation; important, insightful interpretation (speculation?) is exactly what I think computational neuroscience can give us, but it is even more useful if it can be tested by some means against real nervous systems in live animals.
Lecturer | Erik De Schutter
Title | Modeling biochemical reactions, diffusion and reaction-diffusion systems
Abstract | In my first talk I will use calcium dynamics modeling as a way to introduce deterministic solution methods for reaction-diffusion systems. The talk covers exponentially decaying calcium pools, diffusion, calcium buffers and buffered diffusion, and calcium pumps and exchangers. I will describe properties of buffered diffusion systems and ways to characterize them experimentally. Finally I will compare the different modeling approaches. In the second talk I will turn towards stochastic reaction-diffusion modeling. Two methods will be described: Gillespie's stochastic simulation algorithm (SSA) extended to simulate diffusion, and particle-based methods. I will briefly describe the STEPS software. I will then describe two applications: stochastic reaction modeling of LTD induction in Purkinje cells and stochastic diffusion modeling of anomalous diffusion in spiny dendrites.
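For orientation, here is a minimal sketch of Gillespie's stochastic simulation algorithm applied to a single reversible calcium-buffer binding reaction; the species, copy numbers and rate constants are illustrative placeholders, and the code does not use the STEPS interface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of Gillespie's stochastic simulation algorithm (SSA) for the
# reversible buffering reaction  Ca + B <-> CaB.  Rate constants and copy
# numbers are illustrative only (this is not the STEPS interface).
x = {"Ca": 100, "B": 50, "CaB": 0}         # molecule counts
k_on, k_off = 0.001, 0.1                   # per-event rate constants

def propensities(x):
    return np.array([k_on * x["Ca"] * x["B"],   # binding
                     k_off * x["CaB"]])         # unbinding

t, t_end = 0.0, 10.0
while t < t_end:
    a = propensities(x)
    a_total = a.sum()
    if a_total == 0:
        break
    t += rng.exponential(1.0 / a_total)         # time to next reaction event
    if rng.random() < a[0] / a_total:           # choose which reaction fires
        x["Ca"] -= 1; x["B"] -= 1; x["CaB"] += 1
    else:
        x["Ca"] += 1; x["B"] += 1; x["CaB"] -= 1
print("counts at t =", round(t, 2), ":", x)
```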
Lecturer | Erik De Schutter
Title | Introduction to modeling neurons and networks
Abstract | In the first talk I will discuss methods to model morphologically detailed neurons. I will briefly introduce cable theory, the mathematical description of current flow in dendrites. By discretizing the cable equation we arrive at compartmental modeling, the standard method to simulate morphologically detailed models of neurons. I will discuss the challenges in fitting compartmental models to experimental data, with an emphasis on active properties. The talk will finish with a brief overview of dendritic properties predicted by cable theory and the experimental data confirming these predictions. The second talk will briefly introduce network modeling. I will introduce simpler neuron models such as integrate-and-fire neurons and then move on to modeling synaptic currents. I will wrap up with an overview of network connectivity.
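As an example of the simpler neuron models mentioned above, the sketch below simulates a leaky integrate-and-fire neuron driven by a constant current; all parameter values are generic textbook-style choices rather than numbers from the lecture.

```python
import numpy as np

# Minimal sketch of a leaky integrate-and-fire (LIF) neuron.
tau_m    = 20e-3      # membrane time constant (s)
R_m      = 100e6      # membrane resistance (ohm)
V_rest   = -70e-3     # resting potential (V)
V_thresh = -50e-3     # spike threshold (V)
V_reset  = -65e-3     # reset potential (V)
dt       = 0.1e-3     # time step (s)

V = V_rest
spike_times = []
I_ext = 0.25e-9       # constant input current (A)
for step in range(int(0.5 / dt)):            # simulate 500 ms
    dV = (-(V - V_rest) + R_m * I_ext) / tau_m
    V += dt * dV
    if V >= V_thresh:                        # threshold crossing -> spike
        spike_times.append(step * dt)
        V = V_reset
print(f"{len(spike_times)} spikes, firing rate ~ {len(spike_times)/0.5:.1f} Hz")
```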
Lecturer | Bernd Kuhn
Title |
Abstract | Functional optical imaging has become one of the key techniques in neuroscience. I will introduce the most important methods and explain what we can learn from them, but also their limitations.
Lecturer | Bernd Kuhn
Title | Ion channel physiology and the Hodgkin-Huxley model of neuronal activity
Abstract | I will give an introduction to ion channels, focusing specifically on voltage-gated channels and their dynamics in response to membrane voltage. Neurons use a combination of different voltage-gated channels to generate fast (about 1 ms) voltage changes. I will talk about the first model by Hodgkin and Huxley describing this electrical activity, and also about more recent additions to and fine-tuning of this time-honored model.
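For reference, here is a compact sketch of the classic Hodgkin-Huxley squid-axon model integrated with forward Euler; the parameters are the standard published values, but the driving current and simulation settings are illustrative choices of my own.

```python
import numpy as np

# Minimal sketch of the classic Hodgkin-Huxley (1952) squid-axon model.
C_m = 1.0                            # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # max conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials (mV)

def alpha_beta(V):
    """Voltage-dependent rate constants for gating variables m, h, n."""
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

dt, T = 0.01, 50.0                   # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting-state initial values
spikes = 0
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0   # step current (uA/cm^2)
    am, bm, ah, bh, an, bn = alpha_beta(V)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V_new = V + dt * (I_ext - I_ion) / C_m
    if V < 0.0 <= V_new:                       # count upward zero crossings
        spikes += 1
    V = V_new
print("spikes in 50 ms:", spikes)
```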
Lecturer | Upinder Bhalla
Title | Computing with Chemistry (and also electricity as a minor side effect)
Abstract | The computations of the brain are almost exclusively thought of in electrical terms. This viewpoint misses out on some of the most elaborate and rapid computations in the brain, as well as the most long-lasting ones. Subcellular computations range from the possibly dull but important job of keeping neurons alive, to building the cells and networks, and most of the key steps in synaptic transmission and plasticity. I will introduce chemical signaling and the kinds of computation that it supports. I'll discuss the kinds of models that represent chemical signaling in different contexts, and introduce how these models are built. I'll then bring in the interface with electrical signaling, and stress how neuronal computation is really a continuum in which all these forms of computation are essential.
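As a toy example of the chemical signaling models discussed above, the sketch below integrates a deterministic mass-action scheme for a single enzymatic reaction; the rate constants and concentrations are invented for illustration.

```python
# Minimal sketch of a deterministic mass-action model of chemical signaling:
# an enzyme E converts substrate S to product P via the complex ES
# (E + S <-> ES -> E + P).  Rate constants and concentrations are illustrative
# placeholders, not values from the lecture.
k_f, k_b, k_cat = 1.0, 0.5, 0.2          # rate constants (1/uM/s, 1/s, 1/s)
E, S, ES, P = 1.0, 10.0, 0.0, 0.0        # concentrations (uM)

dt = 0.001                               # integration step (s)
for _ in range(int(200.0 / dt)):         # simulate 200 s
    v_bind   = k_f * E * S               # E + S -> ES
    v_unbind = k_b * ES                  # ES -> E + S
    v_cat    = k_cat * ES                # ES -> E + P
    dE  = -v_bind + v_unbind + v_cat
    dS  = -v_bind + v_unbind
    dES =  v_bind - v_unbind - v_cat
    dP  =  v_cat
    E, S, ES, P = E + dt * dE, S + dt * dS, ES + dt * dES, P + dt * dP
print(f"after 200 s: S = {S:.2f} uM, P = {P:.2f} uM")
```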
Lecturer | Gregory Stephens
Title | An introduction to dynamical systems: from neural activity to organism-scale behavior
Abstract | My lecture will consist of two parts: an introduction to dynamical systems, focused in particular on the power of quantitative analysis, and a novel, quantitative approach towards understanding the wiggling, motile behavior of C. elegans. We apply a low-dimensional yet complete representation of body shape (eigenworms) to construct a principled parameterization of the worm's 2D movements. Despite its simplicity, we show that a linear dynamical model of the eigenworm projections captures long-range temporal correlations and reveals two periodic dynamics: the primary body wave and an oscillation between head and body curvature that underlies arcs in the centroid trajectory. We parameterize the movement phenospace by constructing dynamical systems locally in time and show that variation within this space is remarkably restrained; with increasing window size, a single behavioral mode dominates the variance and represents the coupled control of speed and turning. The distribution of this primary mode is bimodal, suggesting a correspondence to roaming and dwelling states. Finally, we apply our behavioral parameterization to show that the worm's response to a strong impulsive heat shock includes a Hopf-like bifurcation corresponding to an early-time growth of the amplitude of the crawling wave.
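To illustrate the eigenworm idea, here is a minimal sketch that runs PCA on synthetic body-bending angles (a traveling wave plus noise) and projects each frame onto the leading modes; it uses made-up data, not the actual C. elegans recordings analyzed in the lecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of the "eigenworm" idea: PCA on body-posture angles to find a
# low-dimensional shape representation.  The data here are synthetic bending
# angles (a traveling wave plus noise), not real C. elegans recordings.
n_frames, n_segments = 2000, 100
t = np.arange(n_frames)[:, None]
s = np.arange(n_segments)[None, :]
angles = np.sin(2 * np.pi * (0.02 * t - s / n_segments))   # traveling body wave
angles += 0.2 * rng.standard_normal(angles.shape)          # measurement noise

# PCA via SVD of the mean-centered posture matrix.
X = angles - angles.mean(axis=0)
U, S_vals, Vt = np.linalg.svd(X, full_matrices=False)
explained = S_vals**2 / np.sum(S_vals**2)
print("variance explained by first 4 modes:", np.round(explained[:4], 3))

# Project each frame onto the leading "eigenworms" (rows of Vt).
projections = X @ Vt[:4].T          # low-dimensional posture time series
print("projection matrix shape:", projections.shape)
```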
Lecturer | Yael Niv
Title | Advanced topics in neural reinforcement learning
Abstract | Following up on the lecture from Prof. Doya, we will discuss trial-by-trial model fitting of reinforcement learning models, model comparison, and some recent neural findings using these methods. We will continue with multiple decision-making systems in the brain (model-free and model-based reinforcement learning) and, in the third hour, discuss Bayesian inference and partially observable Markov decision processes (POMDPs).
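As a concrete illustration of trial-by-trial model fitting, the sketch below simulates choices from a softmax Q-learning agent on a hypothetical two-armed bandit and recovers the learning rate and inverse temperature by maximum likelihood; the task and parameter values are invented, and model comparison (e.g., via BIC) would build on the same likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical two-armed bandit task.
reward_prob = np.array([0.3, 0.7])

def simulate(alpha, beta, n_trials=500):
    """Generate choices and rewards from a softmax Q-learning agent."""
    Q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p_right = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))   # softmax, 2 arms
        c = int(rng.random() < p_right)
        r = float(rng.random() < reward_prob[c])
        Q[c] += alpha * (r - Q[c])
        choices.append(c); rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_likelihood(params, choices, rewards):
    """Trial-by-trial negative log likelihood of the observed choices."""
    alpha, beta = params
    Q = np.zeros(2)
    nll = 0.0
    for c, r in zip(choices, rewards):
        p_right = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))
        p_choice = p_right if c == 1 else 1.0 - p_right
        nll -= np.log(p_choice + 1e-12)
        Q[c] += alpha * (r - Q[c])
    return nll

choices, rewards = simulate(alpha=0.2, beta=4.0)
fit = minimize(neg_log_likelihood, x0=[0.5, 1.0], args=(choices, rewards),
               bounds=[(0.01, 1.0), (0.1, 20.0)])
print("recovered alpha, beta:", np.round(fit.x, 2))   # compare to (0.2, 4.0)
```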
Lecturer | Ivan Soltesz
Title | Data-driven large-scale modeling of hippocampal networks
Abstract | The close integration of experimental findings with large-scale, data-driven computational simulations of neuronal networks offers a powerful tool for identifying the key circuit parameters that control behavior under normal conditions and in various neurological and psychiatric disorders. To this end, we have been developing realistic microcircuit-based network models of the control and injured hippocampus in order to investigate questions related to normal hippocampal microcircuit function and the mechanistic bases of epilepsy. We will discuss the conceptual framework and biological basis of full-scale model development and show specific applications, including computational and experimental results concerning model validation, cell-type-specific hippocampal chronocircuit properties, and the role of hub neurons. The talk will highlight the predictive and analytic power of freely shared, highly realistic, large-scale computational models in understanding normal and abnormal circuit functions.
Lecturer | Jason Kerr
Title | Imaging neuronal activity in vivo: how do we make sure we get the right answer and avoid delusion?
Abstract | This lecture will be divided into four sections: 1) an introduction to the imaging techniques available and what they can provide; 2) the physics of an image: how do we know what we see is real?; 3) the application of imaging tools to real neuroscience problems; and 4) what's coming up in the future?
Lecturer | Claudia Clopath
Title | Single neuron modeling
Abstract | I will cover the different integrate-and-fire neuron models and their properties.
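As one example of the integrate-and-fire family, here is a minimal sketch of an adaptive exponential integrate-and-fire (AdEx) neuron; the parameter values are generic ones from the modeling literature, chosen only for illustration and not taken from the lecture.

```python
import numpy as np

# Minimal sketch of the adaptive exponential integrate-and-fire (AdEx) neuron.
C       = 200e-12        # capacitance (F)
g_L     = 10e-9          # leak conductance (S)
E_L     = -70e-3         # leak reversal (V)
V_T     = -50e-3         # exponential threshold (V)
Delta_T = 2e-3           # slope factor (V)
a, b    = 2e-9, 0.02e-9  # subthreshold / spike-triggered adaptation
tau_w   = 100e-3         # adaptation time constant (s)
V_reset, V_spike = -58e-3, 0.0

V, w = E_L, 0.0
dt, I_ext = 0.05e-3, 500e-12     # time step (s), input current (A)
n_spikes = 0
for _ in range(int(1.0 / dt)):   # simulate 1 s
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
          - w + I_ext) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_spike:             # spike: reset V and increment adaptation
        V = V_reset
        w += b
        n_spikes += 1
print("spikes in 1 s:", n_spikes)
```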
Lecturer | Tony Prescott
Title | Embodied Computational Neuroscience for Sensorimotor and Social Cognition
Abstract | Our brains have evolved to guide our interaction with the world about us, both physical and social. This lecture addresses the complementary topics of "embodied computation in the brain" and "robots as a tool for computational neuroscience". Embodiment has multiple aspects, most notably the notion that sensory and cognitive systems evolved to process data in a form the body needs to execute various tasks, and that some computations are simplified by exploiting the body's own dynamics. Robotics can be used to test the sufficiency of a brain model to reproduce a target behavior in a way that complements computer simulation. The lecture will be illustrated by example research on biomimetic and brain-based robots, including work from my own laboratory on mammal-like and human-like robots. The aim will be to show how experimental, computational and robotic approaches can operate together to advance our understanding of sensorimotor and social cognition in behaving systems.
Lecturer | Javier Medina
Title | Can we build an artificial cerebellum for motor control and adaptation?
Abstract | If there is one circuit in the brain that has captured the imagination of neuroscientists, mathematicians and engineers alike, it is the cerebellum. In the last 50 years we have made a tremendous amount of progress in elucidating the connectivity, synaptic features and intrinsic properties of neurons in the cerebellum. But are we any closer to understanding how the cerebellar circuit actually works? In my lecture, I will first review computational theories of cerebellar function, which are based on the idea that the cerebellum is a neural machine for adapting our movements and keeping them finely tuned. Then, I will describe a number of studies of eyeblink conditioning, a prototypical example of cerebellum-dependent motor learning. These studies provide an excellent opportunity to evaluate some of the key predictions made by cerebellar learning theories, particularly with regard to the neural mechanisms and computations mediating motor timing and error processing. The format of the lecture will be interactive. My goal is not to present a bunch of facts one by one, or even to tell a complete story. Instead, I hope to raise a number of questions and open the floor for discussion. At the end, I'd like to ask: do you think we understand the cerebellum well enough to build one?
Lecturer | Hiroyuki Nakahara
Title | Neural computation and social decision-making
Abstract | A fundamental challenge in social cognition is elucidating how one learns and predicts the mind of others, and what the underlying neural computations and mechanisms are. We approach this challenge by using computational frameworks to map behavioral-level complexity and the related neural systems, i.e., extending reinforcement learning theory into the realm of social cognition and combining human fMRI with modeling. The first part of my talk will cover the basics of using model-based analysis in human fMRI experiments, and the second part will describe our research on social decision-making. I will discuss how these efforts can open up new avenues of inquiry, such as identifying critical differences in the neural computations for understanding the intentions and behavior of others between people with and without mental disorders.
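To illustrate the model-based fMRI analysis mentioned in the first part, the sketch below derives trial-by-trial reward prediction errors from a simple Q-learning model and turns them into a parametric regressor on an fMRI time grid; the task, parameters and the single-gamma HRF are deliberate simplifications, not the methods used in the actual studies.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal sketch of model-based analysis: generate trial-by-trial reward
# prediction errors (RPEs) from a Q-learning model, then build a parametric
# regressor by convolving RPE-weighted impulses with a crude gamma-shaped HRF.
n_trials, alpha = 60, 0.2
trial_onsets = 10.0 * np.arange(n_trials)        # one trial every 10 s
reward_prob = np.array([0.3, 0.7])

Q, rpes = np.zeros(2), []
for _ in range(n_trials):
    c = int(rng.random() < 0.5)                   # random choices for brevity
    r = float(rng.random() < reward_prob[c])
    rpe = r - Q[c]                                # model-derived regressor value
    Q[c] += alpha * rpe
    rpes.append(rpe)

TR, duration = 2.0, trial_onsets[-1] + 30.0
t_hr = np.arange(0.0, duration, 0.1)              # high-resolution time grid (s)
impulses = np.zeros_like(t_hr)
impulses[np.searchsorted(t_hr, trial_onsets)] = rpes
t_hrf = np.arange(0.0, 30.0, 0.1)
hrf = (t_hrf / 5.0) ** 5 * np.exp(-t_hrf)         # crude HRF shape, peak ~5 s
hrf /= hrf.sum()
regressor = np.convolve(impulses, hrf)[: len(t_hr)]
regressor_TR = regressor[:: int(round(TR / 0.1))] # downsample to scanner TRs
print("regressor length (volumes):", len(regressor_TR))
```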
Lecturer | Greg Stuart
Title | Single neuron computation
Abstract | In my presentation I will describe how neurons integrate synaptic inputs. As synaptic inputs are made primarily onto the dendritic tree of neurons, the first part of my presentation will focus on the properties of neuronal dendrites and how these properties influence synaptic integration. The second part of my presentation will be on action potential generation, with a focus on the properties of the axon initial segment. If there is time, I will finish by describing some new results we have obtained on synaptic integration in vivo during the processing of binocular visual information.
Lecturer | Taro Toyoizumi
Title | Brain State Modulation by Closed Loop Sensory Feedback
Abstract | Animals use passive and active sensing to navigate in their environment. Unlike passive sensing, during active sensing the brain processes sensory information resulting from self-generated action. Neuronal variability and sensitivity are reduced during active sensing, but it is unknown how brain-environment interactions are involved. Here, we present a theory describing the dynamic modulation of neuronal variability and sensitivity by closed-loop sensory feedback from the environment. Our theory, which describes a novel form of neuronal gain control, was validated in several species and sensory modalities. These results demonstrate that closed-loop brain-environment interactions are crucial for neuronal dynamics and pave the way for new neurofeedback technologies for therapeutic and behavioral enhancement.