"Learning Low-rank Functions With Neural Networks" by Rebecca Willett
Join us for an insightful lecture by Rebecca Willett, Professor of Statistics and Computer Science and Faculty Director of AI at the Data Science Institute at the University of Chicago, delivered as part of the workshop honoring Andrew Barron at Yale University.

Speaker Bio:
Rebecca Willett is a Professor of Statistics and Computer Science and the Faculty Director of AI at the Data Science Institute, with a courtesy appointment at the Toyota Technological Institute at Chicago. Her research focuses on machine learning foundations, scientific machine learning, and signal processing. She is the Deputy Director for Research at the NSF-Simons Foundation National Institute for Theory and Mathematics in Biology and a member of the Executive Committee for the NSF Institute for the Foundations of Data Science. Willett has received numerous awards, including the National Science Foundation CAREER Award and the Air Force Office of Scientific Research Young Investigator Program award. She was named a Fellow of the Society for Industrial and Applied Mathematics in 2021 and a Fellow of the IEEE in 2022. Her work is recognized internationally for its contributions to the mathematical foundations of machine learning and computational imaging. She advocates for diversity in STEM and AI and has organized multiple events supporting women in these fields.

Abstract:
Neural network architectures play a key role in determining which functions are fit to training data and the generalization properties of the learned predictors. For instance, when an overparameterized neural network is trained with weight decay to interpolate a set of training samples, the network architecture influences which interpolating function is learned. In this talk, Rebecca Willett describes new insights into the role of network depth in machine learning using the notion of representation costs – the "cost" for a neural network to represent a function. She shows that adding linear layers to a ReLU network results in a representation cost that favors functions with latent low-dimensional structure, such as single- and multi-index models.
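
To make the setup concrete, here is a minimal sketch (not from the talk) of the kind of experiment the abstract alludes to: a ReLU network with extra linear layers prepended, trained with weight decay on data from a single-index target. The architecture, hyperparameters, and target function below are illustrative assumptions, not Willett's construction.

```python
# Minimal illustrative sketch: fit a single-index target f(x) = g(<w*, x>)
# with a ReLU network that has extra linear (activation-free) layers
# prepended, trained with weight decay. All choices here are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

d = 20                      # ambient input dimension
n = 512                     # number of training samples
w_star = torch.randn(d)     # hidden single-index direction
w_star /= w_star.norm()

X = torch.randn(n, d)
y = torch.tanh(X @ w_star)  # target depends on x only through <w*, x>

# ReLU network with L extra linear layers in front; the abstract's claim is
# that these layers bias weight-decay training toward functions with latent
# low-dimensional structure.
L = 2
width = 128
linear_layers = [nn.Linear(d, d, bias=False) for _ in range(L)]
model = nn.Sequential(*linear_layers,
                      nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))

# Weight decay is the squared-parameter-norm penalty whose induced
# function-space penalty is the "representation cost" discussed in the talk.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()

# Inspect the learned linear layers: under the low-rank bias described in the
# abstract, their product should be approximately low-rank and aligned with w*.
with torch.no_grad():
    W = torch.eye(d)
    for layer in linear_layers:
        W = layer.weight @ W
    print("Top singular values of the product of linear layers:",
          torch.linalg.svdvals(W)[:5])
```

A decaying singular-value profile in the printed output would be consistent with the low-rank bias the abstract describes, though this toy run is only a rough illustration.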

🔗 Related Links: fds.yale.edu

📅 Event: Workshop Honoring Andrew Barron: Forty Years at the Interplay of Information Theory, Probability, and Statistical Learning

📍 Location: Yale University, Kline Tower

🗓 Date: April 26-28, 2024