[Seminar 1] "Stable adaptation and learning" by Prof. Slotine
Dear all,
The Neural Computation Unit (Doya Unit) would like to invite you to the following seminar.
Speaker: Prof. Jean-Jacques Slotine
Massachusetts Institute of Technology
Registration is required to participate via Zoom:
https://oist.zoom.us/meeting/register/tJAudeihpz4jE9E0H4EedDm-q23rozMZ74Qj
After registering, you will receive a confirmation email containing information about joining the meeting.
Title: Stable adaptation and learning
Abstract: The human brain still largely outperforms robotic algorithms in most tasks, using computational elements 7 orders of magnitude slower than their artificial counterparts. Similarly, current large-scale machine learning algorithms require millions of examples and close proximity to power plants, compared to the brain's few examples and 20W consumption. We study how modern nonlinear systems tools, such as contraction analysis, virtual dynamical systems, and adaptive nonlinear control, can yield quantifiable insights about collective computation, adaptation, and learning in large dynamical networks.
Stable concurrent learning and control of dynamical systems is the subject of adaptive nonlinear control. When multiple parameter choices are consistent with the data (whether because the task is insufficiently rich or because the model is aggressively overparametrized), stable Riemannian adaptation laws can be designed to implicitly regularize the learned model. Thus, local geometry imposed during learning may be used to select parameter vectors with desired properties such as sparsity. The results can also be systematically applied to predictors for dynamical systems. Stable implicit sparse regularization can likewise be exploited to select relevant dynamic models out of plausible physically based candidates.
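To make the mechanism concrete, here is a minimal sketch in standard adaptive-control notation (the symbols below are illustrative assumptions, not taken from the announcement). For a system linear in its unknown parameters, \dot{x} = f(x) + Y(x,t)\,a, a Riemannian adaptation law replaces the usual constant-gain update with one shaped by a strictly convex potential \psi:

    \frac{d}{dt}\,\nabla\psi(\hat{a}) = -\gamma\, Y(x,t)^{\top} s,

where s is a tracking (or prediction) error signal and \gamma > 0 a gain. When many parameter vectors are consistent with the data, laws of this form drive \hat{a} toward the consistent parameter vector minimizing the Bregman divergence of \psi from the initial guess; choosing \psi with an \ell_1-like geometry therefore biases the learned parameters toward sparsity, which is one concrete reading of the implicit regularization described above.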
In optimization, most elementary results on gradient descent based on convexity of a time-invariant cost can be replaced by much more general results based on contraction. Semi-contraction of a natural gradient flow in some metric implies convergence to a global minimum, and furthermore that all global minima are path-connected. Adaptive controllers or predictors can also be used in transfer learning or sim2real contexts, where an optimizer has been carefully learned for a nominal system but needs to remain effective in real time in the presence of significant but structured parameter variations.
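As an illustrative restatement of this point (again with assumed notation), a natural gradient flow on a cost L(\theta) in a metric M(\theta) takes the form

    \dot{\theta} = -M(\theta)^{-1}\,\nabla L(\theta).

The abstract's claim is that if this flow is semi-contracting in the metric M, so that nearby trajectories cannot diverge, then convergence to a global minimum of L follows, and the set of global minima is path-connected, without requiring L to be convex in the usual Euclidean sense.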
Finally, a key aspect of contraction tools is that they also suggest systematic mechanisms to build progressively more refined networks and novel algorithms through stable accumulation of functional building blocks and motifs.
Biography: Jean-Jacques Slotine is Professor of Mechanical Engineering and Information Sciences, Professor of Brain and Cognitive Sciences, and Director of the Nonlinear Systems Laboratory. He received his Ph.D. from the Massachusetts Institute of Technology in 1983, at age 23. After working at Bell Labs in the computer research department, he joined the faculty at MIT in 1984. Professor Slotine teaches and conducts research in the areas of dynamical systems, robotics, control theory, computational neuroscience, and systems biology. He has been a Distinguished Faculty at Google AI since 2019. One of the most cited researchers in systems science, he was a member of the French National Science Council from 1997 to 2002, a member of Singapore's A*STAR SigN Advisory Board from 2007 to 2010, and has been a member of the Scientific Advisory Board of the Italian Institute of Technology since 2010.
Prof. Slotine will also give a talk on Monday 16th from 10:00 to 11:30 in Seminar Room B503, Lab 1 Building.
Please visit HERE for more information regarding the second seminar.
We hope to see many of you at the seminar.
Sincerely,
Kikuko Matsuo
Neural Computation Unit