[Seminar] MLDS Unit Seminar 2025-4 by Mr. Rémi Surat (Université Paris Cité, ENSAE Paris), Ms. Klea Ziu (MBZUAI)
Description
Speaker 1: Mr. Rémi Surat (Université Paris Cité, ENSAE Paris)
Title: Flow-based generative models: how to train them efficiently using flow matching and optimal transport
Abstract: This talk presents flow-based generative models, which are currently state-of-the-art methods in generative AI for continuous spaces. We will begin with a brief overview of energy-based models, one of the earliest approaches to modeling probability distributions, which provide the basic ideas for understanding the difficulties of data generation. We will then focus on flow-based models and their efficient training through flow matching, while also explaining their connections to the well-known diffusion models. Particular emphasis will be placed on how optimal transport can serve as a framework for defining dynamic flows in probability space. Finally, we will discuss further links with Schrödinger bridges, highlighting the unifying perspective they bring to modern generative modeling.
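As a rough sketch of the flow matching objective mentioned in the abstract (the notation v_θ, p_0, p_1 is ours, not the speaker's): given a reference distribution p_0 (e.g., a standard Gaussian) and the data distribution p_1, one draws x_0 ~ p_0 and x_1 ~ p_1, interpolates linearly between them, and regresses a velocity field v_θ onto the displacement:

\[
\mathcal{L}_{\mathrm{CFM}}(\theta) = \mathbb{E}_{t \sim \mathcal{U}[0,1],\ x_0 \sim p_0,\ x_1 \sim p_1} \left\| v_\theta\big((1-t)x_0 + t x_1,\, t\big) - (x_1 - x_0) \right\|^2 .
\]

Sampling the pair (x_0, x_1) from an optimal transport coupling, rather than independently, straightens the interpolation paths; this is one concrete way the optimal transport viewpoint enters the training procedure.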
Speaker 2: Ms. Klea Ziu (Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), UAE)
Title: ψDAG: Projected Stochastic Approximation Iteration for Linear DAG Structure Learning
Abstract: Learning the structure of Directed Acyclic Graphs (DAGs) presents a significant challenge due to the vast combinatorial search space of possible graphs, which scales exponentially with the number of nodes. Recent advances have recast this problem as a continuous optimization task by incorporating differentiable acyclicity constraints. These methods commonly rely on algebraic characterizations of DAGs, such as matrix exponentials, to enable the use of gradient-based optimization techniques. Despite these innovations, existing methods often face optimization difficulties due to the highly non-convex nature of DAG constraints and the high per-iteration computational cost of evaluating them. In this work, we present a novel framework for learning DAGs, employing a Stochastic Approximation approach integrated with Stochastic Gradient Descent (SGD)-based optimization techniques. Our framework introduces new projection methods tailored to efficiently enforce DAG constraints, ensuring that the algorithm converges to a feasible local minimum. With its low iteration complexity, the proposed method is well suited to large-scale problems and offers improved computational efficiency. We demonstrate the effectiveness and scalability of our framework through comprehensive experiments, which confirm its superior performance across various settings.
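For context, a standard instance of the matrix-exponential characterization the abstract alludes to is the NOTEARS acyclicity function of Zheng et al. (2018): for a weighted adjacency matrix W ∈ R^{d×d},

\[
h(W) = \operatorname{tr}\!\left(e^{W \circ W}\right) - d, \qquad h(W) = 0 \iff W \text{ is the adjacency matrix of a DAG},
\]

where ∘ denotes the Hadamard (entrywise) product. Evaluating h and its gradient requires a matrix exponential, roughly O(d^3) per iteration, which illustrates the kind of per-iteration cost that projection methods with low iteration complexity, such as those described above, are designed to sidestep.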

