Seminar: Mechanisms of model-based reinforcement learning: Prospection and episodes, by Prof. Nathaniel D. Daw

Date

Thursday, March 19, 2015 - 10:00 to 11:00

Location

Seminar Room C210, Level C, Center bldg

Description

The Graduate School would like to invite you to a seminar by Prof. Nathaniel D. Daw of Neural Science and Psychology at New York University. The talk will be introduced by Prof. Kenji Doya of the Neural Computation Unit.

--------------------------------------------------------------------------------------
Date:   Thursday, March 19, 2015
Time:   10 am – 11 am
Venue:  Seminar Room C210, Level C, Center bldg
--------------------------------------------------------------------------------------

Speaker:

Prof. Nathaniel D. Daw
Neural Science and Psychology, New York University

Title:

Mechanisms of model-based reinforcement learning: Prospection and episodes

Abstract:

Decisions and neural correlates of decision variables both indicate that humans and animals take task structure into account when making decisions, but there is still little evidence about the algorithmic or neural mechanisms that support such "model-based" learning. I discuss recent studies attempting to address these questions. First, although it is widely envisioned that model-based choices are supported by prospective computations at decision time, there are also indications that such behaviors may instead be produced by various sorts of precomputation. I present fMRI data from a sequential decision task in which states are tagged with decodable stimulus categories; these data demonstrate a correspondence between predictive neural activity and other behavioral and neural signatures of model-based and model-free learning, supporting the widespread supposition that these behaviors are indeed supported by prospection. Second, I present some early and ongoing studies examining to what extent decisions are informed by representations of individual episodes, versus statistics aggregated over multiple experiences as learned by typical algorithms, both model-based and model-free. Memory for episodes could support distinct computational approaches to the decision problem, including Monte Carlo and kernel methods, and might also support some apparently model-based behaviors.
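To make the model-based/model-free distinction concrete, below is a minimal, illustrative sketch (not the speaker's implementation) on a toy two-step task: a model-free agent caches action values from experienced rewards via temporal-difference updates, while a model-based agent learns transition and reward statistics and evaluates actions prospectively, by looking ahead through its learned model at choice time. All task details (state names, the 0.7 common-transition probability, the reward probabilities) are hypothetical.

import random
from collections import defaultdict

ACTIONS = [0, 1]
REWARD_PROB = {"B": 0.7, "C": 0.3}  # hypothetical reward probabilities at second-stage states

def step(state, action):
    """Toy two-step environment: from 'A', action 0 usually leads to 'B', action 1 to 'C'."""
    if state == "A":
        common = "B" if action == 0 else "C"
        rare = "C" if action == 0 else "B"
        next_state = common if random.random() < 0.7 else rare
        return next_state, 0.0
    # second-stage states deliver a probabilistic terminal reward, independent of action
    return None, float(random.random() < REWARD_PROB[state])

class ModelFreeAgent:
    """Caches Q-values directly from experienced rewards (temporal-difference updates)."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.Q = defaultdict(float)

    def value(self, state, action):
        return self.Q[(state, action)]

    def update(self, state, action, reward, next_state):
        target = reward
        if next_state is not None:
            target += max(self.Q[(next_state, a)] for a in ACTIONS)
        self.Q[(state, action)] += self.alpha * (target - self.Q[(state, action)])

class ModelBasedAgent:
    """Learns transition counts and reward estimates, then evaluates prospectively at choice time."""
    def __init__(self):
        self.trans = defaultdict(lambda: defaultdict(int))   # (state, action) -> next-state counts
        self.reward = defaultdict(lambda: [0.0, 0])          # state -> [reward sum, visit count]

    def value(self, state, action):
        # prospective evaluation: expected reward computed from the learned model
        counts = self.trans[(state, action)]
        total = sum(counts.values())
        if total == 0:
            return 0.0
        v = 0.0
        for s2, n in counts.items():
            r_sum, visits = self.reward[s2]
            v += (n / total) * (r_sum / visits if visits else 0.0)
        return v

    def update(self, state, action, reward, next_state):
        if next_state is not None:
            self.trans[(state, action)][next_state] += 1
        else:
            self.reward[state][0] += reward
            self.reward[state][1] += 1

def run(agent, episodes=2000, epsilon=0.1):
    """Train an agent with epsilon-greedy choices on the toy task."""
    for _ in range(episodes):
        state = "A"
        while state is not None:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: agent.value(state, a))
            next_state, reward = step(state, action)
            agent.update(state, action, reward, next_state)
            state = next_state
    return agent

if __name__ == "__main__":
    random.seed(0)
    mf = run(ModelFreeAgent())
    mb = run(ModelBasedAgent())
    print("model-free  Q(A,0), Q(A,1):", mf.value("A", 0), mf.value("A", 1))
    print("model-based V(A,0), V(A,1):", mb.value("A", 0), mb.value("A", 1))

In a task of this kind, the behavioral signature alluded to in the abstract is that only the prospective (model-based) agent's choices are immediately sensitive to changes in the learned transition or reward structure, whereas cached (model-free) values adjust only through further direct experience.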

Biography: 

Nathaniel Daw is Associate Professor of Neural Science and Psychology and Affiliated Associate Professor of Computer Science at New York University. He received his Ph.D. in computer science from Carnegie Mellon University, working at the Center for the Neural Basis of Cognition, before conducting postdoctoral research at the Gatsby Computational Neuroscience Unit at UCL. His research concerns computational approaches to reinforcement learning and decision making, and particularly the application of computational models in the laboratory to the design of experiments and the analysis of behavioral and neural data. He is the recipient of a McKnight Scholar Award, a NARSAD Young Investigator Award, a Scholar Award in Understanding Human Cognition from the McDonnell Foundation, and the Young Investigator Award from the Society for Neuroeconomics.

Sponsor or Contact: academic@oist.jp