[Online Machine Learning Insights and Innovations (MLII) Seminar Series] How to Detect Out-of-Distribution Data in the Wild? Challenges, Research Progress, and Path Forward, By Assistant Prof. Sharon Yixuan Li, University of Wisconsin-Madison
Description
Zoom: https://oist.zoom.us/j/94680619697
Meeting ID: 946 8061 9697 Passcode: 482744
Speaker: Dr. Sharon Yixuan Li, Assistant Professor, Department of Computer Sciences, University of Wisconsin-Madison
Title: How to Detect Out-of-Distribution Data in the Wild? Challenges, Research Progress, and Path Forward
Abstract: When deploying machine learning models in the open and non-stationary world, their reliability is often challenged by the presence of out-of-distribution (OOD) samples. Since data shifts are prevalent in the real world, identifying OOD inputs has become an important problem in machine learning. In this talk, I will discuss challenges, research progress, and opportunities in OOD detection. Our work is motivated by the insufficiency of existing learning objectives such as empirical risk minimization (ERM), which focuses on minimizing error only on the in-distribution (ID) data but does not explicitly account for the uncertainty that arises outside the ID data. To mitigate this fundamental limitation, I will introduce a new algorithmic framework that jointly optimizes for both accurate classification of ID samples and reliable detection of OOD data. The learning framework integrates distributional uncertainty as a first-class construct in the learning process, thus enabling both accuracy and safety guarantees.
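For readers unfamiliar with joint objectives of this kind, the sketch below illustrates the general idea in PyTorch: a standard cross-entropy loss on ID data is combined with an energy-based margin regularizer on auxiliary outlier data. This is a minimal, illustrative sketch in the spirit of energy-based OOD regularization; the margins, weighting, and overall structure are assumptions for exposition and do not necessarily reflect the specific framework presented in the talk.

```python
# Illustrative sketch only: a generic joint training objective combining
# (1) cross-entropy on in-distribution (ID) data with
# (2) an energy-based margin regularizer on auxiliary outlier (OOD) data.
# Margins, temperature, and weighting below are illustrative assumptions.
import torch
import torch.nn.functional as F

def energy_score(logits, temperature=1.0):
    # Free energy E(x) = -T * logsumexp(f(x) / T); tends to be lower for ID inputs.
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

def joint_loss(logits_id, labels_id, logits_ood,
               m_in=-25.0, m_out=-7.0, lam=0.1):
    # Accurate classification of ID samples.
    ce = F.cross_entropy(logits_id, labels_id)
    # Reliable detection: push ID energies below m_in and OOD energies above m_out.
    e_id = energy_score(logits_id)
    e_ood = energy_score(logits_ood)
    reg = (torch.relu(e_id - m_in) ** 2).mean() + (torch.relu(m_out - e_ood) ** 2).mean()
    return ce + lam * reg

# Toy usage with random tensors standing in for a model's logits.
if __name__ == "__main__":
    logits_id = torch.randn(8, 10)
    labels_id = torch.randint(0, 10, (8,))
    logits_ood = torch.randn(8, 10)
    print(joint_loss(logits_id, labels_id, logits_ood).item())
```

At test time, the same energy score can serve as the detection statistic: inputs whose energy exceeds a chosen threshold are flagged as OOD, while the classifier's predictions are used for the remaining ID inputs.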
Bio: Sharon Yixuan Li is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison. She received her Ph.D. from Cornell University in 2017, advised by John E. Hopcroft. Subsequently, she was a postdoctoral scholar in the Computer Science department at Stanford University. Her research focuses on the algorithmic and theoretical foundations of learning in open worlds. She has served as Area Chair for ICLR, NeurIPS, and ICML, and as Program Chair for the Workshop on Uncertainty and Robustness in Deep Learning. She is the recipient of the AFOSR Young Investigator Program (YIP) award, the NSF CAREER award, the MIT Technology Review TR-35 Award, Forbes 30 Under 30 in Science, and multiple faculty research awards from Google, Meta, and Amazon. Her work received a NeurIPS Outstanding Paper Award and an ICLR Outstanding Paper Award Honorable Mention in 2022.