[Seminar] From Gradient-Free Federation to Leveraging Deep Learning Geometry
Speaker: Assistant Professor Mirko Polato
University of Turin, Department of Computer Science
Abstract: Federated learning is typically built around gradient exchange and parameter averaging, yet collaboration does not have to rely on gradients alone. In the first part of this talk, I explore gradient-free approaches to federation, including federated boosting and Support Vector Federation, in which models are aggregated in function space or through perturbed support vectors rather than through shared weights. I also discuss margin-promoting objectives that reshape local optimization to reduce client drift and improve stability under non-i.i.d. data.
The second (shorter) part shifts to deep representation learning and focuses on Neural Collapse (NC), a geometric regularity that emerges late in training. Rather than treating it as a byproduct of optimization, we use NC-related metrics as training-time signals to identify when and where networks can be simplified without loss of performance.

