[Seminar] "Prediction with expert advice: a PDE perspective on a model problem from machine learning" by Prof. Robert V. Kohn

Date

Thursday, November 17, 2022 - 10:00

Location

Zoom

Description

Geometric PDE and Applied Analysis Seminar (November 17, 2022)

Title: Prediction with expert advice: a PDE perspective on a model problem from machine learning

Speaker: Prof. Robert V. Kohn (Courant Institute of Mathematical Sciences, New York University)

Abstract: I’ll discuss recent work with Nadejda Drenska, Vladimir Kobzar, and Zhilei Wang, concerning a model problem from the machine learning literature known as “prediction with expert advice.” It is a repeated two-person game involving a “predictor” and his “adversary.” The predictor has access to guidance from N experts, whose outcomes are chosen by the adversary. At each time step the predictor must combine the experts’ guidance using a mixed strategy, with the goal of minimizing his worst-case final-time shortfall (“regret”) with respect to the best-performing expert. The optimal policies and the predictor’s worst-case regret are determined by a dynamic programming principle. To understand the asymptotic behavior of the game over many time steps, it is convenient to consider a scaled version of the dynamic programming principle. This leads to a characterization of the predictor’s worst-case regret as the unique viscosity solution of a nonlinear parabolic PDE.
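To fix ideas, here is a schematic version of the regret and the dynamic programming principle described above; the notation (state x, increment r, action set A) is illustrative and not taken from the talk. The state x \in \mathbb{R}^N records the predictor's cumulative shortfall relative to each of the N experts, and the value function V gives the worst-case final-time regret:

    V(x, T) = \max_{1 \le i \le N} x_i,  \qquad  V(x, t) = \min_{p \in \Delta_N} \, \max_{a \in A} \, V\big(x + r(p, a), \, t + 1\big),

where \Delta_N is the probability simplex (the predictor's mixed strategies over the experts), A is the adversary's action set, and r(p, a) is the one-step change in the relative regret. The scaled version of this recursion mentioned in the abstract is what produces the nonlinear parabolic PDE in the limit of many time steps.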

The PDE has been solved explicitly in a few special cases involving small numbers of experts (N = 2, 3, 4). But the PDE viewpoint also suggests another approach, namely to seek upper and lower bounds using explicit supersolutions and subsolutions. The upper bounds obtained this way are in some sense familiar, as they involve potential-based strategies for the predictor (using time-dependent potentials). The lower bounds are somewhat less familiar: they involve potential-based strategies for the adversary. When the potentials (i.e. supersolutions or subsolutions) are smooth enough, there is actually no need to consider a scaling limit, and no need for viscosity theory: the upper and lower bounds follow directly from the dynamic programming principle using little more than Taylor expansion.
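To make "potential-based strategies for the predictor" concrete, the following minimal Python sketch (not taken from the talk; the loss model, horizon, and random environment are illustrative assumptions) runs the classical exponentially weighted forecaster, whose expert weights are the gradient of the potential \Phi(L) = -\eta^{-1} \log \sum_i e^{-\eta L_i}, and compares its realized regret with the classical worst-case guarantee \sqrt{(T \ln N)/2} for losses in [0, 1].

    # Minimal sketch of a potential-based predictor strategy: the classical
    # exponentially weighted forecaster. Losses, horizon, and the (random,
    # non-adversarial) environment below are illustrative choices.
    import numpy as np

    def exponential_weights_regret(losses, eta):
        """Run the exponentially weighted forecaster on a T x N loss matrix."""
        T, N = losses.shape
        cumulative = np.zeros(N)        # cumulative loss of each expert so far
        predictor_loss = 0.0
        for t in range(T):
            # Weights proportional to exp(-eta * cumulative loss): a potential-based rule.
            w = np.exp(-eta * (cumulative - cumulative.min()))  # shift for numerical stability
            p = w / w.sum()
            predictor_loss += p @ losses[t]
            cumulative += losses[t]
        return predictor_loss - cumulative.min()   # regret vs. the best expert

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        T, N = 10_000, 4
        losses = rng.random((T, N))                # random losses in [0, 1]
        eta = np.sqrt(8 * np.log(N) / T)           # classical tuning for bounded losses
        regret = exponential_weights_regret(losses, eta)
        bound = np.sqrt(T * np.log(N) / 2)         # classical worst-case guarantee
        print(f"realized regret: {regret:.2f}, worst-case bound: {bound:.2f}")

A smooth supersolution of the PDE plays an analogous role in the work described above: used as a time-dependent potential, it yields an upper bound on the regret directly from the dynamic programming principle via Taylor expansion.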

Zoom registration: registration is required to receive the Zoom link.
