OCNC2004
Top  Schedule  Lectures  Projects  People
Okinawa Computational Neuroscience Course 2004
The aim of the Okinawa Computational Neuroscience Course is to provide opportunities for young researchers with theoretical backgrounds to learn up-to-date neurobiological findings, and for those with experimental backgrounds to gain hands-on experience in computational modeling. The course is also a good opportunity for theoretical and experimental neuroscientists to meet and to enjoy the attractive nature and culture of Okinawa, the southernmost island prefecture of Japan.
This course is the second of a series of tutorial courses that the Cabinet Office of the Japanese Government is sponsoring as a precursory activity for the Okinawa Institute of Science and Technology.
The sponsor will provide lodging expenses during the course and support for travel to Okinawa.
Schedule
Tuesday, November 9th
11:30  Registration
13:20-13:40  Opening
13:40-16:40  Alex Pouget: Population coding
17:30-20:00  Welcome party

Wednesday, November 10th
9:00-12:00  Jeff Bilmes: Dynamic graphical models: Theory and applications
13:30-16:00  Student presentations on their own work, part 1
19:00-22:00  Adrianne Fairhall: Spike coding

Thursday, November 11th
9:00-12:00  Peter Latham: Computing with population codes
13:30-16:00  Student presentations on their own work, part 2
19:00-22:00  Jonathan Pillow: Estimating neuron models from spike trains

Friday, November 12th
9:00-12:00  Richard Zemel: Coding and decoding uncertainty
13:30-16:30  Excursion to the OIST campus site in Onna village
19:00-22:00  Michael Shadlen: A Neural Mechanism for Making Decisions

Saturday, November 13th
9:00-12:00  Emanuel Todorov: Optimality principles in sensorimotor control
13:30-16:30  Karl Friston: Dynamic causal modelling
18:00-20:00  Barbecue party

Sunday, November 14th: Day off

Monday, November 15th
9:00-12:00  Shunichi Amari: Statistical approach to neural learning and population coding
19:00-22:00  David Knill: Bayesian models of sensory cue integration

Tuesday, November 16th
9:00-12:00  Rajesh Rao: Probabilistic Models of Cortical Computation
13:30-17:00  Excursion to the OIST Initial Research Project Lab in Gushikawa city
19:00-22:00  Konrad Koerding: Bayesian combination of priors and perception: Optimality in sensorimotor integration

Wednesday, November 17th
9:00-12:00  Wolfgang Maass: Computational properties of neural microcircuit models
19:00-22:00  Barry Richmond: Neural coding: Determinism vs. stochasticity

Thursday, November 18th
9:00-12:00  Bruno Olshausen: Representing what and where in time-varying images
19:00-22:00  Tai Sing Lee: Cortical mechanisms of visual scene segmentation: a hierarchical Bayesian perspective

Friday, November 19th
9:00-12:00  Anthony Bell: Unsupervised machine learning with spike timings
13:30-17:00  Presentations of student projects
19:00-21:00  Farewell party
Lectures
Lecture Papers
 Alex Pouget "Population Coding"
 Jeff Bilmes "Dynamic graphical models: Theory and applications"
 Adrianne Fairhall "Spike coding"
 Peter Latham "Computing with population codes"
 Jonathan Pillow "Estimating neural models from spike trains"
 Richard Zemel "Coding and decoding uncertainty"
 Michael Shadlen "A Neural Mechanism for Making Decisions"
 Emanuel Todorov "Optimality principles in sensorimotor control"
 Karl Friston "Dynamic causal modelling"
 Shunichi Amari "Statistical approach to neural learning and population coding"
 David Knill "Bayesian models of sensory cue integration"
 Rajesh Rao "Probabilistic Models of Cortical Computation"
 Konrad Kording "Bayesian combination of priors and perception: Optimality in sensorimotor integration"
 Wolfgang Maass "Computational Properties of Cortical Microcircuit Models"
 Barry Richmond "Neural Coding: Determinism vs. stochasticity"
 Bruno Olshausen "Representing what and where in time-varying images"
 Tai Sing Lee "Cortical mechanisms of visual scene segmentation: a hierarchical Bayesian perspective"
 Anthony Bell "Unsupervised machine learning with spike timings"
Abstracts of Lectures
            9 (Tue)         10 (Wed)            11 (Thu)          12 (Fri)           13 (Sat)         14 (Sun)
Morning     Registration    Jeff Bilmes         Peter Latham      Rich Zemel         Emo Todorov      Day off
Afternoon   Alex Pouget     Student pres.       Student pres.     Onna excursion     Karl Friston
Evening     Welcome party   Adrianne Fairhall   Jonathan Pillow   Michael Shadlen    Barbecue party

            15 (Mon)        16 (Tue)            17 (Wed)          18 (Thu)           19 (Fri)
Morning     Shunichi Amari  Rajesh Rao          Wolfgang Maass    Bruno Olshausen    Anthony Bell
Afternoon                   Gushikawa exc.                                           Student projects
Evening     David Knill     Konrad Koerding     Barry Richmond    Tai Sing Lee       Farewell party
Nov. 9th 14:00-17:00 Alex Pouget
"Population Coding"
Numerous variables in the brain are encoded with population codes, that is to say, through the concerted activity of large populations of neurons. This includes the orientation of visual contours, color, depth, spatial location, numbers, direction of arm movements, frequency of tactile stimuli, and arm position, to cite only a few. Characterizing the properties of those codes is critical to our understanding of the relationship between behavior and neural activity. Accordingly, the last 20 years have witnessed a surge of research on this topic, resulting in the development of several statistical and computational theories of population codes. In this lecture, we will review the major concepts that have emerged from this research. We will start by considering the various decoding techniques for reading out population codes. In the process, we will discuss how to characterize the information content of a code, using concepts such as Fisher and Shannon information. Decoding, however, is only part of the story. Population codes emerge in the brain as a result of computations, and they are themselves used for further computations, eventually leading to behavioral responses. In the second half of the lecture, we will review some of the theories that have been developed to understand how these computations are performed with population codes. We will focus in particular on the theory of basis functions and review the experimental evidence supporting this particular approach.
Download the lecture slides (ppt) here
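The decoding ideas above can be made concrete with a small simulation. The sketch below (all tuning parameters invented for illustration, not taken from the lecture) generates Poisson spike counts from a population with bell-shaped tuning curves, then reads the stimulus back out with a population-vector decoder and a maximum-likelihood decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population of 64 neurons with bell-shaped (von Mises) tuning curves,
# preferred directions tiled uniformly around the circle.
n_neurons = 64
pref = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def rates(theta, peak=50.0, width=0.5):
    # circular analogue of a Gaussian tuning curve
    return peak * np.exp((np.cos(theta - pref) - 1) / width**2)

theta_true = 1.3
counts = rng.poisson(rates(theta_true))   # noisy spike counts in a 1 s window

# Population-vector decoder: sum of preferred-direction unit vectors,
# each weighted by its neuron's spike count.
theta_pv = np.arctan2(np.sum(counts * np.sin(pref)),
                      np.sum(counts * np.cos(pref))) % (2 * np.pi)

# Maximum-likelihood decoder under independent Poisson noise:
# maximize sum_i [ n_i * log f_i(theta) - f_i(theta) ] over theta.
grid = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
loglik = [np.sum(counts * np.log(rates(t)) - rates(t)) for t in grid]
theta_ml = grid[np.argmax(loglik)]
```

With this many neurons both estimates land close to theta_true; shrinking `peak` (the gain) widens the error distribution, which is exactly what Fisher information quantifies.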
Nov. 10th 9:00-12:00 Jeff Bilmes
"Dynamic graphical models: Theory and applications"
Graphical models are a general statistical abstraction that can represent the inherent uncertainty within many scientific and natural phenomena. In this tutorial, we will first overview graphical models, Bayesian networks, dynamic Bayesian networks (DBNs), and probabilistic inference on graphs. This will include examples of how graphical models generalize many common statistical techniques including PCA, LDA, factor analysis, Gaussians, certain neural networks, mixture models, hidden Markov models (HMMs), and vector autoregressive HMMs. It will also be shown how various forms of "information fusion" (such as probabilistic interpretations of cue combination) can be represented using Bayesian networks. In particular, we will discuss methods to represent early, middle, and late-term information integration. The last part of this tutorial will delve further into the details of DBNs, and how they can be used to represent time signals in general, and speech and language processing as a particular application. We will discuss how these models can easily generalize to other domains such as temporal cue combination.
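As a minimal worked example (toy numbers, not from the tutorial): a hidden Markov model is the simplest dynamic Bayesian network, and exact filtering on it is the forward algorithm.

```python
import numpy as np

# A two-state HMM: hidden state S_t with Markov dynamics, observation O_t
# depending only on S_t.
A = np.array([[0.9, 0.1],      # transition matrix P(S_t | S_{t-1})
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],      # emission matrix P(O_t | S_t)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])      # initial state distribution

def forward_filter(obs):
    """Exact inference of P(S_t | O_1..O_t) by the forward algorithm."""
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for o in obs[1:]:
        belief = (A.T @ belief) * B[:, o]   # predict, then weight by evidence
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

posterior = forward_filter([0, 0, 1, 1, 1])
```

After the observations switch from 0 to 1, the filtered belief shifts toward state 1, lagging by an amount set by the transition probabilities.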
Recommended Readings:
J. A. Bilmes and C. Bartels, On Triangulating Dynamic Graphical Models
http://ssli.ee.washington.edu/people/bilmes/mypapers/uai2003_final.pdf
Download the lecture text (pdf) here
Nov. 10th 19:00-22:00 Adrianne Fairhall
"Spike coding"
Recommended Readings:
Spikes: Exploring the Neural Code. Rieke et al., Bradford Books 1996
Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems by Peter Dayan and L. F. Abbott. MIT Press 2001.
Relevant references from my work:
A. L. Fairhall, G. Lewen, W. Bialek and R. de Ruyter van Steveninck, Efficiency and ambiguity in an adaptive neural code, Nature, 412:787-792 (2001)
B. Aguera y Arcas, A. L. Fairhall and W. Bialek, Computation in a single neuron: Hodgkin and Huxley revisited. Neural Comp. 15:1789-1807 (2003)
Download the lecture text (pdf) here
Nov. 11th 9:00-12:00 Peter Latham
"Computing with population codes"
One of the few things in systems neuroscience we are fairly certain about is that information is encoded in population activity. For example, a population of neurons coding for the orientation of a bar (theta) might have firing rates r_i = f(theta - theta_i) + noise. Here f is a smooth, bell-shaped function (often taken to be Gaussian), the index i labels neurons, and theta_i is the preferred orientation of neuron i. See [1] for discussion.
Representing variables in population codes, however, is only one step; just as important are computations with population codes. Sensorimotor transformations are a natural example: the brain receives information about the outside world; that information is represented in population activity at the sensory level; and to perform an action in response to that input, such as reaching for an object, population codes in motor cortex must be generated to drive the appropriate joint movements. The transformation from the activity in sensory cortex to the activity in motor cortex is a computation based on population codes.
We are beginning to understand how these computations might take place in the brain, and in particular how they might take place efficiently; that is, with very little information loss. I will discuss several types of computations with population codes, including basis function networks [2], computations that take into account correlations among neurons [3], and ones that represent and manipulate uncertainty. If there is time, I will also discuss more esoteric computations such as the "liquid state machine" [4].
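A toy version of the basis-function idea (all numbers invented): intermediate units tuned jointly to a retinal location r and an eye position e can feed a fixed linear readout that pools along the diagonals r + e = h, yielding a population over head-centred location.

```python
import numpy as np

# Basis-function sketch: each intermediate unit multiplies a Gaussian tuned
# to retinal location r with one tuned to eye position e; a linear readout
# pooling along r + e = h recovers head-centred location.
r_true, e_true = 2.0, -1.0

grid = np.linspace(-5, 5, 41)
R, E = np.meshgrid(grid, grid, indexing="ij")

sigma = 1.0
basis = np.exp(-((R - r_true)**2 + (E - e_true)**2) / (2 * sigma**2))

# Each output unit sums the basis units whose (r, e) satisfy r + e = h.
h_axis = np.linspace(-10, 10, 81)
readout = np.array([basis[np.isclose(R + E, h)].sum() for h in h_axis])

h_est = h_axis[np.argmax(readout)]   # peak of the head-centred population
```

The same basis layer could feed readouts for any other function of (r, e); only the fixed pooling weights change, which is the sense in which these units form a basis.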
References:
1. Dayan and Abbott, "Theoretical Neuroscience," MIT press (2001). See especially chapter 3.
2. Pouget and Sejnowski, "Spatial transformations in the parietal cortex using basis functions," Journal of Cognitive Neuroscience 9:222-237 (1997); Deneve, Latham, and Pouget, "Efficient computation and cue integration with noisy population codes," Nature Neurosci. 4:826-831 (2001).
3. Latham, Deneve, and Pouget, "Optimal computation with attractor networks," J. Physiol. Paris 97:683-694 (2003); Shamir and Sompolinsky, "Nonlinear population codes," Neural Comput. 16:1105-1136 (2004).
4. Maass, Natschläger, and Markram, "Real-time computing without stable states: A new framework for neural computation based on perturbations," Neural Comput. 14:2531-2560 (2002); Jaeger and Haas, "Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication," Science 304:78-80 (2004).
Download the lecture text (pdf) here
Download the lecture slides (pdf) here
Nov. 11th 19:00-22:00 Jonathan Pillow
"Estimating neural models from spike trains"
One of the fundamental problems in systems neuroscience is characterizing the functional relationship between environmental variables and neural spike responses. One approach seeks to characterize the transformation between stimuli and spike trains using computational models, which provide a precise mathematical framework for understanding the computation a neuron performs on its input. In this approach, we seek complete models which generalize to predict responses to arbitrary novel stimuli, in contrast to models which only describe responses to a restricted set of parametrically varying stimuli (e.g. orientation tuning curves). In this talk, I will review some of the classical approaches to the problem of neural characterization, including reverse correlation / spike-triggered averaging, Volterra/Wiener kernels, and LN models. I will then discuss several more recent approaches, including PCA / spike-triggered covariance and the use of non-Poisson models such as integrate-and-fire, and show applications to data collected in retina and primary visual cortex. Finally, I will invite a discussion of probabilistic models as a tool for understanding the neural code, and discuss several extensions and applications of current techniques.
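A minimal sketch of the reverse-correlation approach mentioned above (simulated data, made-up filter): drive an LN model neuron with Gaussian white noise and recover its linear filter by spike-triggered averaging.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an LN (linear-nonlinear) neuron driven by Gaussian white noise.
T, d = 50_000, 20
k_true = np.exp(-np.arange(d) / 5.0) * np.sin(np.arange(d) / 2.0)  # toy filter
stim = rng.standard_normal(T)

# Filter output at time t uses the d stimulus samples stim[t : t+d].
drive = np.convolve(stim, k_true[::-1], mode="valid")   # length T - d + 1
rate = np.exp(drive - 2.0)                              # exponential nonlinearity
spikes = rng.poisson(np.clip(rate, 0, 5))               # spike counts per bin

# Spike-triggered average: mean stimulus segment around each spike.
segments = np.lib.stride_tricks.sliding_window_view(stim, d)  # (T-d+1, d)
sta = (spikes[:, None] * segments).sum(0) / spikes.sum()
```

For Gaussian stimuli the STA is proportional to the true filter (up to the scale absorbed by the nonlinearity), so the recovered `sta` should be strongly correlated with `k_true`.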
Relevant Readings:
1. "Characterization of neural responses with stochastic stimuli." E. P. Simoncelli, L. Paninski, J. W. Pillow, and O. Schwartz. In The Cognitive Neurosciences, 3rd edition, ed. M. Gazzaniga. MIT Press, November 2004 (to appear).
http://www.cns.nyu.edu/~pillow/pubs/CharacterizingNeuralResps_03.pdf
2. Chichilnisky, E. J. (2001). A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2), 199-213.
3. "Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model." J. W. Pillow, L. Paninski, and E. P. Simoncelli. To appear in Advances in Neural Information Processing Systems, eds. S. Thrun, L. Saul, and B. Schölkopf, v. 16, pages 1311-1318, May 2004, MIT Press, Cambridge MA.
Download the lecture text (pdf) here
Download the lecture slides (ppt) here
Nov. 12th 9:00-12:00 Richard Zemel
"Coding and decoding uncertainty"
As animals interact with their environments, they must constantly update estimates about relevant states of the world. For example, a batter must rapidly re-estimate the velocity of a baseball as he decides whether and when to swing at a pitch. Bayesian models provide a description of statistically optimal updating based on prior probabilities, a dynamical model, and sensory evidence, and have proved to be consistent with the results of many diverse psychophysical studies. In this lecture we will review various schemes that have been proposed for how populations of neurons can represent the uncertainty that underlies this probabilistic formulation. We will also consider some examples of Bayesian computations that can be carried out with these representations, such as cue combination and noise removal. A focus of the lecture will be on how models based on standard neural architecture and activations can effectively implement, or at least approximate, this optimal computation.
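One concrete scheme for representing uncertainty (a sketch with invented parameters, not a specific proposal from the lecture): under independent Poisson spiking, a pattern of spike counts implies a full posterior over the stimulus, and lowering the population gain broadens that posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gaussian tuning curves; Poisson spike counts; posterior over the stimulus
# obtained from the population log-likelihood.
pref = np.linspace(-10, 10, 81)

def tuning(s, gain):
    return gain * np.exp(-0.5 * (s - pref)**2)

def posterior(counts, gain, s_grid):
    logp = np.array([np.sum(counts * np.log(tuning(s, gain) + 1e-12)
                            - tuning(s, gain)) for s in s_grid])
    p = np.exp(logp - logp.max())
    return p / p.sum()

s_grid = np.linspace(-5, 5, 201)
s_true = 0.7

post_hi = posterior(rng.poisson(tuning(s_true, 40.0)), 40.0, s_grid)
post_lo = posterior(rng.poisson(tuning(s_true, 4.0)), 4.0, s_grid)

def std(p):
    m = (s_grid * p).sum()
    return np.sqrt(((s_grid - m)**2 * p).sum())
```

Fewer spikes (lower gain) mean weaker evidence, so `post_lo` comes out wider than `post_hi`: the population activity carries not just an estimate but its uncertainty.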
Relevant readings:
"Inference and computation with population codes" by A. Pouget, P. Dayan, & R. Zemel. Annual Review of Neuroscience, 26: 381-410, 2003. http://www.cs.toronto.edu/pub/zemel/Papers/AnnRev03.pdf
"Spiking Boltzmann machines" by G. Hinton & A. Brown. In Advances in Neural Information Processing Systems, 12, 122-129, 2000.
http://www.cs.toronto.edu/~hinton/absps/nips00ab.ps.gz
"Bayesian computation in recurrent neural circuits" by R. Rao. Neural Computation, 16: 1-38, 2004.
http://www.cs.washington.edu/homes/rao/nc_bayes_reprint.pdf
"Velocity likelihoods in biological and machine vision" by Y. Weiss & D. Fleet. In Statistical Theories of the Cortex, ed. R Rao, B Olshausen, MS Lewicki, pp. 77-96. Cambridge, MA: MIT Press, 2002.
http://www.cs.huji.ac.il/~yweiss/rao7.ps.gz
Nov. 12th 19:00-22:00 Michael Shadlen
"A Neural Mechanism for Making Decisions"
Neurobiology is beginning to furnish an understanding of the brain mechanisms that give rise to such higher cognitive functions as planning, remembering, and deciding. Progress has come about mainly by measuring the electrical activity from parts of the brain that lie between the sensory and motor areas. The neurons in these brain areas operate on a time scale that is not controlled by external events: their electrical activity can outlast sensory input for many seconds, and they do not cause any overt change in behavior. Put simply, these neurons play neither a purely sensory nor a purely motor role but appear instead to control mental states. My lecture will focus on neurons in the parietal lobe that underlie a simple kind of decision-making: forming a commitment to one of two competing hypotheses about a visual scene. We have discovered that these neurons work by accumulating “evidence” from the sensory cortex as a function of time. The brain makes a decision when the accumulated evidence represented by the electrical discharge from these neurons reaches a criterion level. These neurons therefore explain both what is decided and when a decision is reached. Interestingly, the neural computations that underlie such a decision process were anticipated during WWII by Alan Turing and Abraham Wald. Turing applied this tool to break the German Navy’s Enigma cipher, while Wald invented the field of sequential analysis. In addition to mathematical elegance and winning wars, our experiments suggest that this computational strategy may lie at the root of higher brain function.
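The accumulate-to-bound computation can be sketched in a few lines (all parameters invented): noisy momentary evidence is summed until it reaches a criterion, which jointly fixes the choice and the decision time, as in Wald's sequential test.

```python
import numpy as np

rng = np.random.default_rng(3)

def decide(drift, bound=3.0, dt=0.01, noise=1.0, max_steps=100_000):
    """Accumulate noisy evidence until it hits +bound or -bound."""
    x, t = 0.0, 0
    while abs(x) < bound and t < max_steps:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += 1
    return (1 if x > 0 else 0), t * dt   # (choice, decision time)

# Positive drift means hypothesis 1 is correct; run 200 simulated trials.
choices, times = zip(*(decide(drift=1.0) for _ in range(200)))
accuracy = np.mean(choices)
mean_rt = np.mean(times)
```

Raising the bound trades speed for accuracy: decisions take longer but errors become rarer, the signature speed-accuracy trade-off of this mechanism.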
Model:
Mazurek, M. E., Roitman, J. D., Ditterich, J., and Shadlen, M. N. (2003). A role for neural integrators in perceptual decision-making. Cereb Cortex 13:1257-1269.
http://www.shadlen.org/%7Emike/papers/mine/mazurek_cerebralCtx2003.pdf
Physiology/Psychophysics primary source:
Roitman, JD and Shadlen, MN (2002) Response of neurons in posterior parietal cortex (area LIP) during a combined reaction-time direction-discrimination task. J. Neurosci 22:9475-9489.
http://www.shadlen.org/%7Emike/papers/mine/roitman_shadlen2002.pdf
Other theory of interest:
Gold, JI and Shadlen, MN (2003). The influence of behavioral context on the representation of a perceptual decision in developing oculomotor commands. J. Neurosci 23:632-651.
http://www.shadlen.org/%7Emike/papers/mine/gold_shadlen2003.pdf
Gold, JI and Shadlen, MN (2001) Neural computations that underlie decisions about sensory stimuli. Trends in Cog Sci 5:10-16. http://www.shadlen.org/%7Emike/papers/mine/gold_shadlen2001c.pdf
Mazurek, ME and Shadlen, MN (2002) Limits to the temporal fidelity of cortical spike rate signals. Nature Neurosci. 5:463-471 (plus web suppl.). http://www.shadlen.org/%7Emike/papers/mine/mazurek_shadlen2002.pdf
Download the lecture text (pdf) here
Nov. 13th 9:00-12:00 Emanuel Todorov
"Optimality principles in sensorimotor control"
The sensorimotor system is a product of evolution, development, learning and adaptation – processes that work on different time scales to improve behavioral performance. The limit of this never-ending improvement is captured by the notion of optimal performance. Indeed, many behavioral phenomena have been explained as reflecting the best possible control strategy for the given task. In this talk I will summarize the applications of optimal control theory to the study of the neural control of movement. I will then focus on our recent approach, in which the object being optimized is not the average movement trajectory but the sensorimotor loop that generates trajectories online. Applications of this theoretical framework will be illustrated in the context of reaching and via-point movements, eye-hand coordination, bimanual coordination, hitting and throwing tasks, and target perturbation experiments. I will also discuss recent numerical methods for constructing approximately optimal sensorimotor control laws for complex biomechanical systems.
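A deterministic special case of optimizing the control loop rather than a fixed trajectory is the finite-horizon LQR, solvable by a backward Riccati recursion. The sketch below (toy point-mass reach with invented costs, not from the lecture) computes the optimal feedback gains and closes the loop:

```python
import numpy as np

# Point mass reaching position 0 from position 1 in 1 s, with a quadratic
# penalty on terminal error and on control effort.
dt, N = 0.01, 100
A = np.array([[1, dt], [0, 1]])     # state: [position, velocity]
B = np.array([[0], [dt]])           # control: force on a unit mass
Q = np.zeros((2, 2))                # no running state cost
Qf = np.diag([1e4, 1e2])            # heavy penalty on terminal error
R = np.array([[1e-3]])              # small effort cost

# Backward Riccati recursion for the time-varying feedback gains L_t.
S = Qf
gains = []
for _ in range(N):
    L = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ A - A.T @ S @ B @ L
    gains.append(L)
gains.reverse()

# Simulate the closed loop: feedback corrects deviations online.
x = np.array([1.0, 0.0])
for L in gains:
    u = -L @ x
    x = A @ x + B @ u
```

The product of the optimization is the gain schedule `gains`, not a trajectory: if the state were perturbed mid-movement, the same feedback law would generate a corrected trajectory automatically.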
References:
Todorov E (2004) Optimality principles in sensorimotor control (Review). Nature Neuroscience 7(9): 907-915.
Todorov E and Jordan M (2002) Optimal feedback control as a theory of motor coordination. Nature Neuroscience 5(11): 1226-1235.
Todorov E (2002) Cosine tuning minimizes motor errors. Neural Computation 14: 1233-1260.
Harris C and Wolpert D (1998) Signal-dependent noise determines motor planning. Nature 394: 780-784.
Nov. 13th 13:00-16:00 Karl Friston
"Dynamic causal modelling"
I will present a general approach to the identification of dynamic inputstateoutput systems. Identification of the parameters proceeds in a Bayesian framework given the known, deterministic inputs and the observed responses of the system.
We develop this approach for the analysis of effective connectivity using experimentally designed inputs and fMRI responses. In this context, parameters correspond to effective connectivity and, in particular, bilinear parameters reflect the changes in connectivity induced by inputs. The ensuing framework allows one to characterise fMRI and EEG experiments, conceptually, as an experimental manipulation of integration among brain regions (by contextual or trial-free inputs, like time or attentional set) that is perturbed or probed using evoked responses (to trial-bound inputs like stimuli).
As with previous analyses of effective connectivity, the focus is on experimentally induced changes in coupling (cf. psychophysiologic interactions). However, unlike all previous approaches to connectivity in neuroimaging, the causal model ascribes responses to designed deterministic inputs, as opposed to treating inputs as unknown and stochastic.
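The bilinear form at the heart of this model, dx/dt = (A + u2*B)x + C*u1, can be simulated directly. In the sketch below (coupling values invented for illustration), a driving input excites region 1 and a contextual input u2 strengthens the region 1 to region 2 connection:

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [0.2, -1.0]])     # intrinsic coupling: self-decay plus 1 -> 2
B = np.array([[0.0, 0.0],
              [0.8, 0.0]])      # modulatory effect of u2 on the 1 -> 2 link
C = np.array([1.0, 0.0])        # driving input u1 enters region 1

def simulate(u1, u2, dt=0.01, T=2000):
    """Euler integration of dx/dt = (A + u2*B) x + C*u1 with constant inputs."""
    x = np.zeros(2)
    xs = []
    for _ in range(T):
        x = x + dt * ((A + u2 * B) @ x + C * u1)
        xs.append(x.copy())
    return np.array(xs)

baseline = simulate(u1=1.0, u2=0.0)    # weak 1 -> 2 coupling
modulated = simulate(u1=1.0, u2=1.0)   # contextual input opens the connection
```

Region 2's steady-state response grows when u2 is on even though its direct input is unchanged; in DCM it is exactly this kind of input-dependent change in coupling that the bilinear parameters B estimate.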
Friston KJ, Harrison L, Penny W. Dynamic causal modelling. Neuroimage. 2003 Aug;19(4):1273-1302.
Friston KJ, Penny W. Posterior probability maps and SPMs. Neuroimage. 2003 Jul;19(3):1240-1249.
Friston KJ. Bayesian estimation of dynamical systems: an application to fMRI. Neuroimage. 2002 Jun;16(2):513-530.
Friston KJ, Penny W, Phillips C, Kiebel S, Hinton G, Ashburner J. Classical and Bayesian inference in neuroimaging: theory. Neuroimage. 2002 Jun;16(2):465-483.
Download the lecture text (pdf) here
Nov. 15th 9:00-12:00 Shunichi Amari
"Statistical approach to neural learning and population coding"
Neural learning is understood as a stochastic phenomenon taking place in a population of neurons. Population coding is likewise a stochastic representation of information. These subjects are analyzed in terms of probability theory and statistics, Bayesian and non-Bayesian. The lecture will give an accessible overview of the dynamics of neural learning and self-organization, as well as the statistical theory of population coding in terms of Fisher information and information geometry. The role of Bayesian statistics will be elucidated in a wider perspective. No prior knowledge of information geometry is required.
The lecture addresses topics such as:
1) dynamics of self-organization
2) stochastic equations for neural learning
3) singular structure in neural networks
4) Fisher information in population coding
5) synchronization and higher-order interactions of neurons.
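Topic 4 has a compact closed form: for independent Poisson neurons with rates f_i(theta), the Fisher information is I(theta) = sum_i f_i'(theta)^2 / f_i(theta), and the Cramer-Rao bound caps any unbiased decoder at variance 1/I(theta). A numerical sketch with invented tuning parameters:

```python
import numpy as np

# Population of Poisson neurons with Gaussian tuning curves.
pref = np.linspace(-5, 5, 101)
gain, sigma = 20.0, 1.0

def f(theta):
    return gain * np.exp(-0.5 * ((theta - pref) / sigma)**2)

def fisher(theta, eps=1e-5):
    """I(theta) = sum_i f_i'(theta)^2 / f_i(theta), derivative by central difference."""
    df = (f(theta + eps) - f(theta - eps)) / (2 * eps)
    return np.sum(df**2 / f(theta))

I = fisher(0.0)
cramer_rao_std = 1.0 / np.sqrt(I)   # best achievable decoding error (std)
```

Because Fisher information scales with the gain and with neuron density, doubling either halves the best achievable decoding variance, a relation that holds independently of which unbiased decoder is used.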
References:
 S. Amari, H. Nakahara, S. Wu and Y. Sakai, Synchronous firing and higher-order interactions in neuron pool, Neural Computation, 15, 127-142, 2003
 H. Nakahara and S. Amari, Information-geometric measure for neural spikes, Neural Computation, 14, 2269-2316, 2002
 S. Wu, H. Nakahara, and S. Amari, Population coding with correlation and an unfaithful model, Neural Computation, 13, 775-797, 2001
 S. Amari, Natural gradient works efficiently in learning, Neural Computation, 10, 251-276, 1998
 A. Takeuchi and S. Amari, Formation of topographic maps and columnar microstructure, Biological Cybernetics, 35, 63-72, 1979
 S. Amari and A. Takeuchi, Mathematical theory on formation of category detecting nerve cells, Biological Cybernetics, 29, 127-136, 1978
Download the lecture text (pdf) here
Download the lecture slides, part 1 (ppt) here, part 2 (ppt) here
Nov. 15th 19:00-22:00 David Knill
"Bayesian models of sensory cue integration"
Probability theory provides a calculus for constructing optimal models of perceptual inference in the face of uncertain sensory data. These models characterize how a perceptual system "should" work. We have been using the framework to also construct models of perceptual performance, that is, of how the brain actually does work. I will describe how the probabilistic approach applies to the problem of integrating multiple sources of sensory information (sensory cues) about objects in the world. In particular, I will describe a Bayesian taxonomy of strategies for integrating depth cues (stereo, shading, etc.). I will also describe psychophysical approaches that allow us to test and compare different Bayesian theories of cue integration, and will illustrate the theory with specific examples from perceptual phenomenology and psychophysics on depth cue integration. Specific examples will include simple weighted averaging of the information provided by different cues, how the weights in such a scheme depend on cue uncertainty, nonlinear forms of cue integration, the role of prior assumptions about the world in cue integration, and integrating information over time. Each new theoretical idea discussed will be coupled with examples of psychophysical experiments designed to test the theoretical predictions.
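The weighted-averaging strategy is easy to state exactly: with Gaussian likelihoods, each cue is weighted by its inverse variance. A sketch with made-up cue values (the slant estimates and variances below are illustrative, not data):

```python
# Optimal combination of two conflicting Gaussian cues: weight each by its
# reliability (inverse variance); the combined estimate is more precise
# than either cue alone.
def combine(mu_a, var_a, mu_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    mu = w_a * mu_a + (1 - w_a) * mu_b
    var = 1 / (1 / var_a + 1 / var_b)
    return mu, var

# Stereo says slant 30 deg (reliable), texture says 40 deg (noisier).
mu, var = combine(30.0, 4.0, 40.0, 16.0)   # mu = 32.0, var = 3.2
```

The estimate lands four times closer to the stereo cue because stereo is four times more reliable here, and the combined variance (3.2) is smaller than either cue's own variance, which is the experimentally testable signature of optimal integration.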
Recommended Readings:
Knill, D.C. / Saunders, J.A., Do humans optimally integrate stereo and texture information for judgments of surface slant?, Vision Research, Nov 2003
Knill, D.C. / Saunders, J.A., Perception of 3D surface orientation from skew symmetry, Vision Research, Nov 2001
Knill, D.C., Mixture models and the probabilistic structure of depth cues, Vision Research, Mar 2003
Download the lecture text (pdf) here
Nov. 16th 9:00-12:00 Rajesh Rao
"Probabilistic Models of Cortical Computation"
There is growing evidence that the brain utilizes probabilistic principles for perception, action, and learning. How such principles are implemented within neural circuits has become a topic of active interest in recent years. In this lecture, I will review two models of cortical computation that my collaborators and I have investigated over the past few years. The first model, based on the statistical principle of predictive coding, ascribes a prominent role to feedback connections in cortical computation. It provides new interpretations of neurophysiological properties such as non-classical receptive field effects. The second model describes how networks of cortical neurons may perform hierarchical Bayesian inference by implementing the belief propagation algorithm in the logarithmic domain. In this model, the spiking probability of a noisy cortical neuron approximates the posterior probability of the preferred state encoded by the neuron. I will discuss the application of this model to understanding cortical phenomena such as direction selectivity, decision making, and attentional modulation.
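A toy linear sketch in the spirit of predictive coding (not the published implementation; dimensions and weights invented): feedback carries a prediction of the input, feedforward carries the residual error, and the higher-level estimate relaxes by gradient descent on that error.

```python
import numpy as np

rng = np.random.default_rng(4)

# Generative model: an image is a linear combination of a few causes r.
n_inputs, n_causes = 50, 5
U = rng.standard_normal((n_inputs, n_causes)) / np.sqrt(n_inputs)
r_true = rng.standard_normal(n_causes)
image = U @ r_true + 0.01 * rng.standard_normal(n_inputs)

# Predictive-coding loop: the top-down prediction is U @ r; the bottom-up
# signal is the prediction error; r is updated to reduce that error.
r = np.zeros(n_causes)
for _ in range(500):
    error = image - U @ r       # residual the lower level sends upward
    r += 0.5 * U.T @ error      # error-driven update of the estimate
```

At convergence the feedforward error signal is nearly silent: only the unexplained part of the input propagates upward, which is the model's proposed account of response suppression by predictable context.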
Recommended Readings:
R. P. N. Rao. “Bayesian Computation in Recurrent Neural Circuits.” Neural Computation, 16(1), 1-38, 2004.
R. P. N. Rao and D. H. Ballard. “Predictive Coding in the Visual Cortex: A Functional Interpretation of Some Extra-Classical Receptive Field Effects.” Nature Neuroscience, 2(1), 79-87, 1999.
R. P. N. Rao. “An Optimal Estimation Approach to Visual Perception and Learning.” Vision Research, 39(11), 1963-1989, 1999.
Download the lecture text (pdf) here
Download the lecture slides (ppt) here
Nov. 16th 19:00-22:00 Konrad Koerding
"Bayesian combination of priors and perception : Optimality in sensorimotor integration"
When we move our hands, we have to contend with variability inherent in our sensory feedback. Because our sensors do not provide perfect information, we can only estimate our hand's position. Bayesian statistics dictates that, to generate an optimal estimate, information about the distribution of positions (the prior) should be combined with the evidence provided by sensory feedback. I will summarize a number of experiments showing that humans use such an optimal strategy. In such studies it is usually assumed that people have a certain criterion of optimality; often it is assumed that people want to minimize the mean squared error or maximize the amount of money they earn. In the human sensorimotor system it is possible to measure this optimality criterion, called the utility function, and I will review experiments that do so. We find that for the range of errors we used, the optimality criterion was close to our initial assumptions. The optimality criterion should, however, depend not only on the errors made but also on the effort needed: which forces had to be held, and for how long, should influence the optimal behaviour. Understanding how people use Bayesian statistics, along with understanding what they are trying to optimize, is an important step toward putting human behaviour into a more quantitative framework.
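The core computation can be written exactly for Gaussian priors and feedback (the numbers below are invented, not the experimental values): the posterior-mean estimate is a reliability-weighted mix of the prior mean and the observation, sliding toward the prior as feedback noise grows.

```python
# Prior over hand-position shifts (e.g. learned over many trials).
prior_mean, prior_var = 1.0, 0.25

def estimate(observed, feedback_var):
    """Posterior mean for Gaussian prior x Gaussian likelihood."""
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / feedback_var)
    return w_prior * prior_mean + (1 - w_prior) * observed

clear = estimate(2.0, feedback_var=0.01)   # sharp feedback: trust the senses
blurry = estimate(2.0, feedback_var=1.0)   # noisy feedback: lean on the prior
```

With reliable feedback the estimate stays near the observed shift of 2.0; with blurred feedback it is pulled most of the way back toward the prior mean of 1.0, which is the behavioural signature reported in the Nature paper below.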
Recommended Readings:
Kording, KP. and Wolpert, D. (2004) Bayesian Integration in Sensorimotor Learning, Nature 427:244-247
http://www.koerding.com/pubs/koerdingNature2004.pdf
Trommershauser, J., Maloney, L. T. & Landy, M. S. (2003), Statistical decision theory and tradeoffs in motor response. Spatial Vision, 16, 255-275.
Kording, KP. and Wolpert, D. (2004) The loss function of sensorimotor learning, Proceedings of the National Academy of Sciences 101:9839-9842
http://www.koerding.com/pubs/KorWol_PNAS_04.pdf
Download the lecture text (pdf) here
Nov. 17th 9:00-12:00 Wolfgang Maass
"Computational Properties of Cortical Microcircuit Models"
I will focus on the challenge of understanding computations in realistic models of cortical microcircuits, for example computer models based on the detailed experimental data of [Thomson et al, 2002], [Markram et al, 1998], and [Gupta et al, 2000]. It is commonly assumed that such microcircuits are the computational modules of cortical computation (see [Mountcastle, 1998], [Silberberg et al, 2002]).
Cortical microcircuits can be viewed as special cases of dynamical systems, but as dynamical systems that continuously receive external inputs (in contrast to the autonomous dynamical systems that have been studied extensively in dynamical systems theory). I will discuss theoretical results and computer simulations of online computations in such dynamical systems (see [Bertschinger et al, 2004], [Maass et al, 2004]).
This analysis suggests a new systems-oriented perspective on neural computation and neural coding that complements the classical approaches. It also suggests new methods for the design of experiments and for analyzing data from multi-unit neural recordings.
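A toy simulation of this online-computation view (rate units rather than detailed spiking neurons; all parameters invented): a fixed random recurrent network holds a fading memory of its input stream, and a trained linear readout reports the input from a few steps back.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fixed random recurrent network with spectral radius below 1 so that the
# influence of past inputs fades rather than persisting as stable states.
n = 200
W = rng.standard_normal((n, n)) / np.sqrt(n) * 0.9
w_in = rng.standard_normal(n)

def run(u):
    x = np.zeros(n)
    states = []
    for ut in u:
        x = np.tanh(W @ x + w_in * ut)
        states.append(x)
    return np.array(states)

u = rng.standard_normal(2000)
X = run(u)

# Train only a linear readout (least squares) to recover the input from
# 3 steps earlier out of the circuit's current state.
delay = 3
target = u[:-delay]
w_out, *_ = np.linalg.lstsq(X[delay:], target, rcond=None)
pred = X[delay:] @ w_out
```

The recurrent weights are never trained: the circuit's transient dynamics already carry the recent input history, and different linear readouts could extract different functions of it from the same state trajectory.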
Recommended Readings:
[Bertschinger et al, 2004]
Nils Bertschinger and Thomas Natschläger, Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks. Neural Computation, Vol. 16, Issue 7, July 2004. http://www.igi.tugraz.at/nilsb/publications/NCpaper.pdf
[Gupta et al, 2000]
Gupta A, Wang Y, Markram H. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science. 2000 Jan 14;287(5451):273-278.
[Maass et al, 2004]
W. Maass, R. A. Legenstein, and N. Bertschinger. Methods for estimating the computational power and generalization capability of neural microcircuits. In Proc. of NIPS 2004, Advances in Neural Information Processing Systems, volume 17. MIT Press, 2005. http://www.igi.tugraz.at/Abstracts/MaassETAL:04/
[Markram et al, 1998]
Markram H, Wang Y, Tsodyks M. Differential signaling via the same axon of neocortical pyramidal neurons. Proc Natl Acad Sci U S A. 1998 Apr 28;95(9):5323-5328.
[Mountcastle, 1998]
V.B. Mountcastle. Perceptual Neuroscience: The Cerebral Cortex. Harvard University Press, 1998
[Silberberg et al, 2002]
Silberberg G, Gupta A, Markram H. Stereotypy in neocortical microcircuits. Trends Neurosci. 2002 May;25(5):227-230. Review.
[Thomson et al, 2002]
Thomson AM, West DC, Wang Y, Bannister AP. Synaptic connections and small circuits involving excitatory and inhibitory neurons in layers 2-5 of adult rat and cat neocortex: triple intracellular recordings and biocytin labelling in vitro. Cereb Cortex. 2002 Sep;12(9):936-953.
Download the lecture text (pdf) here
Nov. 17th 19:00-22:00 Barry Richmond
"Neural Coding: Determinism vs. stochasticity"
Ever since the advent of single-neuron recording, there has been speculation and debate about what aspects of neuronal responses carry information. It has always been clear that the number of spikes is related to the function of neurons. However, because neuronal responses vary in their rate, it has also seemed likely that the pattern of spikes over time, for example, through changes in rate, also carries information. A lot of recent work has focused on what the natural time resolution for representing the neural spike train might be. Under some circumstances, such as viewing stationary objects or scenes, a time resolution of 10-20 ms seems adequate, whereas with rapidly changing random stimulus sequences the spikes seem to be considerably more precise, with resolution of under 5 ms. In many circumstances the number of spikes in each response can vary considerably. For interpreting whether spikes are stochastic samples from a rate function or are more deterministic, it is critical to understand the relation between the number and patterns of spikes. Taking this relation into account leads to understanding spike trains in terms of order statistics, allowing development of a strategy for decoding stochastic spike trains millisecond by millisecond. Further work leads to the observation that spike trains, which often seem stochastic but seldom if ever seem Poisson, can often be modeled as a mixture of nonhomogeneous, or rate-varying, Poisson processes, making decoding and simulation easy and quick.
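The order-statistics view makes simulation of a rate-varying Poisson process trivial (toy rate profile, invented for illustration): draw the total count from a Poisson with mean equal to the integrated rate, then lay the spike times down as sorted i.i.d. draws from the normalized rate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Time-varying firing rate over a 1 s trial (Hz).
dt = 0.001
t = np.arange(0, 1, dt)
rate = 20 + 15 * np.sin(2 * np.pi * 3 * t)
p = rate * dt / (rate * dt).sum()        # normalized rate profile

def spike_train():
    # Count ~ Poisson(integrated rate); times = order statistics of
    # i.i.d. draws from the normalized rate profile.
    n = rng.poisson((rate * dt).sum())
    return np.sort(rng.choice(t, size=n, p=p))

trains = [spike_train() for _ in range(500)]
counts = np.array([len(s) for s in trains])
```

This factorization (count first, times second) is exactly what makes millisecond-by-millisecond decoding tractable: given the count, the spike times are conditionally independent draws from the rate profile.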
Recommended Readings:
Matthew C. Wiener and Barry J. Richmond, Decoding Spike Trains Instant by Instant Using Order Statistics and the Mixture-of-Poissons Model, The Journal of Neuroscience, March 15, 2003; 23(6):2394-2406.
http://www.jneurosci.org/cgi/content/full/23/6/2394
M. W. Oram, M. C. Wiener, R. Lestienne, B. J. Richmond, Stochastic Nature of Precisely Timed Spike Patterns in Visual System Neuronal Responses, Journal of Neurophysiology, Jun. 1999; 81(6):3021-33.
Download the lecture text (pdf) here
Nov. 18th 9:00-12:00 Bruno Olshausen
"Representing what and where in timevarying images"
Recent work on sparse coding and ICA has shown how unsupervised
learning principles, when combined with the statistics of natural
scenes, can account for receptive field properties found in the
visual cortex of mammals. Here I will show how sparse coding may
be incorporated into a more global framework for doing Bayesian
inference in the cortex. I will also discuss more recent work
attempting to model both the invariant ("what") and variant ("where")
structure in images.
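The sparse coding principle can be made concrete with a small example. The sketch below is an assumed setup, not Olshausen's original algorithm: it uses a random overcomplete dictionary in place of learned receptive fields, and the generic ISTA solver for the L1-penalized reconstruction objective that sparse coding minimizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy overcomplete dictionary (hypothetical, random): more basis
# functions than "pixels", as in sparse coding of image patches.
n_pix, n_basis = 16, 32
Phi = rng.standard_normal((n_pix, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm basis functions

def sparse_code(x, lam=0.1, n_iter=500):
    """ISTA: minimize 0.5*||x - Phi a||^2 + lam*||a||_1 over coefficients a."""
    a = np.zeros(n_basis)
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ a - x)
        a = a - grad / L                      # gradient step on the quadratic term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# Encode a signal that truly is a combination of two basis functions.
a_true = np.zeros(n_basis)
a_true[[3, 17]] = [1.5, -2.0]
x = Phi @ a_true
a_hat = sparse_code(x)
print(np.count_nonzero(np.abs(a_hat) > 0.05))  # only a few coefficients stay active
```

The point of the exercise is the output's sparsity: although 32 coefficients are available, the L1 penalty drives almost all of them to zero, mirroring the idea that only a small fraction of V1 neurons need respond to any given patch.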
Recommended Readings:
Olshausen BA, Field DJ (2004) Sparse coding of sensory inputs.
Current Opinion in Neurobiology, 14: 481-487.
ftp://redwood.ucdavis.edu/pub/papers/currentopinion.pdf
Olshausen BA, Field DJ (2004) What is the other 85% of V1 doing?
In: Problems in Systems Neuroscience. T.J. Sejnowski, L. van Hemmen, eds. Oxford University Press. (in press)
ftp://redwood.ucdavis.edu/pub/papers/V1article.pdf
Murray SO, Kersten D, Olshausen BA, Schrater P, Woods DL (2002).
Shape Perception Reduces Activity in Human Primary Visual Cortex.
Proceedings of the National Academy of Sciences, 99(23):15164-15169. ftp://redwood.ucdavis.edu/pub/papers/scottpnas.pdf
Johnson JS, Olshausen BA (2003). Timecourse of Neural Signatures of Object Recognition. Journal of Vision, 3: 499-512.
http://www.journalofvision.org/3/7/4/
Karklin Y, Lewicki MS. (2003) Learning higherorder structures in
natural images. Network, 14(3):483-99.
Lewicki MS. (2002) Efficient coding of natural sounds.
Nat Neurosci., 5(4):356-63.
Hyvarinen A, Hurri J, Vayrynen J. (2003) Bubbles: a unifying
framework for low-level statistical properties of natural image
sequences. J Opt Soc Am A Opt Image Sci Vis., 20(7):1237-52.
Download the lecture text (pdf) here
Download the lecture slides, part 1 (pdf) here, part 2 (pdf) here
Nov. 18th 19:00-22:00 Tai Sing Lee
"Cortical mechanisms of visual scene segmentation
 a hierarchical Bayesian perspectiveierarchical "
Scene segmentation is the visual process that organizes and parses a visual scene into different coherent parts. It identifies and localizes the boundaries between objects or surfaces of objects, delineating their shapes and forms. Thus it is fundamental for object recognition, the interpretation of form, as well as the analysis of the spatial arrangement of objects in the visual scene.
In this lecture, I will review some computational ideas and models on scene segmentation and neurophysiological evidence on cortical mechanisms underlying each of the computational component processes that are thought to be important for scene segmentation. I will discuss how these data might suggest a hierarchical Bayesian framework for cortical inference and present some new experimental and computational results on these issues.
Supplementary Readings:
Hochstein, S., Ahissar, M. (2002) View from the top: hierarchies and
reverse hierarchies in the visual system.
Neuron 36, 791-804.
Lee, T.S., Mumford, D. Romero, R. Lamme, V.A.F. (1998).
The role of the primary visual cortex in higher level vision.
Vision Research, 38(15-16): 2429-54.
Lee, T.S., Yang, C., Romero, R. and Mumford, D. (2002). Neural
activity in early visual cortex reflects behavioral experience and higher order perceptual saliency
Nature Neuroscience 5(6): 589-597.
Kelly, R. and Lee, T.S. (2004) Decoding V1 Neuronal Activity using Particle Filtering with Volterra Kernels. Advances in Neural Information Processing Systems, MIT Press. In Press.
Lee, T.S., Nguyen, M. (2001). Dynamics of subjective contour formation in early visual cortex. Proceedings of the National Academy of Sciences, U.S.A., 98(4): 1907-1911.
Nov. 19th 9:00-12:00 Anthony Bell
"Unsupervised machine learning with spike timings"
Abstract of Bell A.J. & Parra L.C. 2004. Maximising Information yields Spike Timing Dependent Plasticity (not final version!)
Experiments show that a synaptic weight potentiates if its presynaptic spike just precedes its postsynaptic one, and depresses if it comes just after, with a sharp transition at synchrony. To understand why, we would like to derive this rule from first principles. To do this, we first calculate the dependency of the postsynaptic spike timing on the presynaptic spike timing in a linear spiking model called the Spike Response Model. We then use this to calculate the gradient of the information transfer in a spiking network. This produces a nonlocal learning rule for the weights which has the correct signs of potentiation and depression, but without the sharp transition. Since the rule is analogous to ICA, we follow Amari in transforming the gradient by an approximate Hessian to get a Natural Gradient rule. This yields an almost-local learning algorithm in which a sharp transition between potentiation and depression now appears. The main mismatch between our rule and the experiment is an offset of the sharp transition from synchrony. We believe this is due to a mismatch between the Spike Response Model and the real neuron. We propose that information maximisation occurs across time through a causal network of spike timings.
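The experimental learning window that the abstract sets out to explain can be illustrated with a minimal pairwise STDP sketch. This is the textbook exponential window, not the information-maximisation rule derived in the paper, and the amplitudes, time constants, and spike times below are invented for illustration.

```python
import numpy as np

# Toy STDP window (hypothetical parameters): potentiation when the
# presynaptic spike precedes the postsynaptic one (dt > 0), depression
# when it follows (dt < 0), with a sharp transition at synchrony.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0   # ms

def stdp_dw(dt):
    """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)    # pre before post: potentiate
    elif dt < 0:
        return -A_minus * np.exp(dt / tau_minus)  # post before pre: depress
    return 0.0

# Accumulate pairwise updates over the spike trains of one synapse.
pre = [10.0, 50.0, 90.0]    # presynaptic spike times (ms)
post = [12.0, 48.0]         # postsynaptic spike times (ms)
w = 0.5
for tp in pre:
    for tq in post:
        w += stdp_dw(tq - tp)
w = min(max(w, 0.0), 1.0)   # keep the weight bounded in [0, 1]
print(round(w, 4))
```

Note the sign flip exactly at dt = 0: this sharp transition is the feature that the plain information-gradient rule in the abstract fails to produce and that the Natural Gradient transformation recovers.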
Gerstner W. & Kistler W.M. 2002. Spiking Neuron Models, Camb. Univ. Press
Bell A.J. 2003. The coinformation lattice, Proc. ICA 2003, Nara, Japan
Bell A.J. & Parra L.C. 2004. Maximising Information yields Spike Timing Dependent Plasticity, NIPS 2004, to appear.
Download the lecture text (pdf) here
Projects
Student Projects
Most of the afternoon hours will be spent on student projects. Below
are the three groups and the tutors in charge of them:
Head: Tomohiro Shibata (NAIST)
A) Modeling Group
Shinichi Maeda (NAIST)
Angela Yu (Gatsby)
Peggy Series (Gatsby)
B) Analysis Group
Masami Tatsuno (U Arizona)
Wakako Nakamura (Hokkaido U)
Yoichi Miyawaki (RIKEN)
C) Psychophysics Group
Yukiyasu Kamitani (ATR)
Ryota Kanai (Utrecht U)
Thierry Chaminade (ATR)
People
 Co-organizers:
 Kenji Doya Initial Research Project, OIST
 Shin Ishii Nara Institute of Science and Technology
 Alex Pouget University of Rochester
 Rajesh Rao University of Washington
 Lecturers:
 Shunichi Amari, RIKEN Brain Science Institute
 Anthony Bell, Redwood Neuroscience Institute
 Jeff Bilmes University of Washington
 Adrienne Fairhall University of Washington
 Karl Friston University College London
 David Knill University of Rochester
 Konrad Koerding MIT
 Peter Latham Gatsby Computational Neuroscience Unit,UCL
 Tai Sing Lee Carnegie Mellon University
 Wolfgang Maass Technische Universitaet Graz
 Bruno Olshausen, University of California, Davis
 Jonathan Pillow New York University
 Alex Pouget University of Rochester
 Rajesh Rao University of Washington
 Barry Richmond National Institutes of Health
 Michael Shadlen University of Washington
 Emo Todorov University of California, San Diego
 Richard Zemel University of Toronto
 Tutors:
 Thierry Chaminade, ATR
 Yukiyasu Kamitani ATR
 Ryota Kanai Universiteit Utrecht
 Shinichi Maeda Nara Institute of Science and Technology
 Yoichi Miyawaki RIKEN /PRESTO JST
 Wakako Nakamura, Hokkaido University
 Peggy Series Gatsby Computational Neuroscience Unit, UCL
 Tomohiro Shibata Nara Institute of Science and Technology
 Masami Tatsuno University of Arizona
 Angela Yu Gatsby Computational Neuroscience Unit, UCL
 Students:
 Jeffrey Beck, University of Rochester
 Ulrik Beierholm, California Institute of Technology
 Fredrik Bissmarck, ATR
 Tansu Celikel, Max Planck Institute for Medical Research
 Pammi Chandrasekhar, University of Hyderabad
 Pierre Dangauthier, INRIA (The French National Institute for Research in Computer Science and Control)
 Gaelle Desbordes, Boston University
 JeanClaude Dreher, CNRS, Institut des Sciences Cognitives
 Michael Fine, Washington University in St. Louis
 Surya Ganguli University of California, San Francisco
 Alan Hampton, California Institute of Technology
 Junichiro Hirayama, Nara Institute of Science and Technology
 Mark Histed, RIKEN-MIT
 Timothy Hospedales, University of Edinburgh
 Anne Hsu, University of California, Berkeley
 Quentin Huys, Gatsby Computational Neuroscience Unit, UCL
 Adam Johnson, University of Minnesota
 Mehdi Khamassi, College de France / Universite de Paris 6
 Steffen Klingenhoefer, PhilippsUniversity Marburg
 Nedialko Krouchev, Universite de Montreal
 Mauktik Kulkarni Johns Hopkins University
 Hakwan Lau, University of Oxford
 Timm Lochmann, Max Planck Institute for Mathematics in the Sciences
 Tomas Maul, University Malaya
 Michael Mistry University of Southern California
 Keiji Miura , Kyoto University / RIKEN BSI
 Marta Moita, Instituto Gulbenkian Ciencia
 Riadh Mtibaa, University of Electro-Communications
 Jonathan Nelson University of California, San Diego
 Uta Noppeney, Institute of Neurology, Wellcome Department of Imaging Neuroscience , UCL
 Gergo Orban, Collegium Budapest
 Felix Polyakov, Weizmann Institute of Science
 Maribel Pulgarin-Montoya, Technische Universitaet Graz
 Constantin Rothkopf, University of Rochester
 Ausra Saudargiene, Vytautas Magnus University
 Robert Schmidt, University of Otago, Dunedin
 Sean Slee, University of Washington
 Martin Spacek, University of British Columbia
 Hirokazu Tanaka, Columbia University
 Lili Tcheang, University of Oxford
 Roberto Valerio, CNRS, Institut de Neurobiologie Alfred Fessard
 Nobuhiko Wagatsuma, University of Tsukuba
 Hatim Zariwala, State University of New York, Stony Brook