SEMINAR

Mean field limit in neural network learning

Speaker
Phan-Minh Nguyen
Affiliation
Stanford University
Timeline
Fri, Oct 23 2020 - 10:00 am (GMT + 7)
About Speaker

Phan-Minh Nguyen (Nguyễn Phan Minh) recently obtained his PhD in Electrical Engineering from Stanford University, advised by Andrea Montanari, and previously earned his bachelor’s degree from the National University of Singapore. Over the years, his work has spanned information and coding theory, statistical inference, and, more recently, theoretical aspects of neural networks. His research fuses some imagination from his days as a high-school physics competitor, some flexibility from an engineering education, and some rigor from his years struggling with maths. He now works in the finance industry at the Voleon Group.

Abstract

Neural networks are among the most powerful classes of machine learning models, but their analysis is notoriously difficult: their optimization is highly non-convex, preventing one from decoupling the optimization and statistical aspects as is usually done in traditional statistics, and their model size is typically huge, allowing them, as observed empirically, to fit large training datasets perfectly. A curious question has emerged in recent years: can we turn some of these difficulties to our advantage and say something meaningful about the behavior of neural networks during training?

In this talk, we present one such viewpoint. In the limit of a large number of neurons per layer, under suitable scaling, the training dynamics of the neural network tends to a meaningful and nonlinear dynamical limit, known as the mean field limit. This viewpoint not only removes a major source of analytical difficulty, namely the model's large width, from the picture, but also opens a path to rigorous studies of the neural network's properties. These include proofs of convergence to the global optimum, which shed light on why neural networks can be optimized well despite non-convexity, and a precise mathematical characterization of the data representation learned by a simple autoencoder.
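
For orientation, here is a minimal sketch of what "suitable scaling" means in the two-layer case; the notation is illustrative and not necessarily the one used in the talk. A width-$N$ two-layer network is written as

\[ f_N(x) \;=\; \frac{1}{N} \sum_{i=1}^{N} a_i \,\sigma(\langle w_i, x \rangle), \]

so the parameters enter only through their empirical distribution \( \hat\rho_N = \frac{1}{N} \sum_{i=1}^{N} \delta_{(a_i, w_i)} \). As \( N \to \infty \), \( \hat\rho_N \) converges to a probability measure \( \rho \), the network converges to \( f_\rho(x) = \int a\, \sigma(\langle w, x \rangle)\, \mathrm{d}\rho(a, w) \), and gradient-descent training of the \( N \) parameters converges to a gradient flow of the risk over the space of measures, described by a continuity-type partial differential equation for \( \rho_t \). This limiting dynamics is the mean field limit referred to above.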

This talk will be a tour through the story of two-layer neural networks, a simple two-layer autoencoder, and how new non-trivial ideas arise in the multilayer case. We shall freely draw analogies with the physics of interacting particles, with some light mathematical content. This is based on joint works with Andrea Montanari, Song Mei, and Huy Tuan Pham. A complementary technical talk on the mean field limit of multilayer networks, given by Huy Tuan Pham at the OneWorld Series on the Mathematics of Machine Learning, can be found here: https://www.oneworldml.org/thematic-d…

Related seminars

Coming soon
Niranjan Balasubramanian

Stony Brook University

Towards Reliable Multi-step Reasoning in Question Answering
Fri, Nov 03 2023 - 10:00 am (GMT + 7)
Nghia Hoang

Washington State University

Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms
Fri, Oct 27 2023 - 10:00 am (GMT + 7)
Jey Han Lau

University of Melbourne

Rumour and Disinformation Detection in Online Conversations
Thu, Sep 14 2023 - 10:00 am (GMT + 7)
Tan Nguyen

National University of Singapore

Principled Frameworks for Designing Deep Learning Models: Efficiency, Robustness, and Expressivity
Mon, Aug 28 2023 - 10:00 am (GMT + 7)