DLS: Peter Bartlett • Gradient Optimization Methods: The Benefits of a Large Step-size

  • Faculty of Mathematics, University of Waterloo
  • 2026-01-30


Video description

Deep learning, the technology underlying the recent progress in AI, has revealed some major surprises from the perspective of theory. Optimization in deep learning relies on simple gradient descent algorithms that are traditionally viewed as a time discretization of gradient flow. However, in practice, large step sizes — large enough to cause oscillation of the loss — exhibit performance advantages.
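The stability threshold behind this tension can be seen on a one-dimensional quadratic. This is an illustrative sketch, not material from the talk: for L(w) = (λ/2)w², each gradient step multiplies w by (1 − ηλ), so step sizes below 1/λ decay smoothly, step sizes between 1/λ and 2/λ oscillate in sign yet still converge, and step sizes above 2/λ diverge.

```python
# Toy sketch (not from the talk): gradient descent on L(w) = (lam/2) * w^2.
# The update w <- w - eta * lam * w multiplies w by (1 - eta*lam) each step,
# so eta = 2/lam is the classical stability threshold.

def gd_trajectory(lam, eta, w0=1.0, steps=50):
    """Run gradient descent on L(w) = (lam/2) w^2 and return all iterates."""
    w, traj = w0, [w0]
    for _ in range(steps):
        w = w - eta * lam * w
        traj.append(w)
    return traj

lam = 10.0
small = gd_trajectory(lam, eta=0.05)  # eta < 1/lam: smooth monotone decay
osc   = gd_trajectory(lam, eta=0.19)  # 1/lam < eta < 2/lam: sign flips, still converges
big   = gd_trajectory(lam, eta=0.21)  # eta > 2/lam: sign flips and diverges

print(abs(small[-1]), abs(osc[-1]), abs(big[-1]))
```

The "large step size" regime the abstract refers to lives near that 2/λ boundary, where the loss oscillates but optimization can still make progress.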

This talk will review recent results on gradient descent with logistic loss with a step size large enough that the optimization trajectory is at the “edge of stability.” We show the benefits of this initial oscillatory phase for linear functions and for multi-layer networks, and identify an asymptotic implicit bias that gradient descent imposes for a large family of deep networks.
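The flavor of "edge of stability" dynamics can be sketched on a two-dimensional quadratic (a simplification, not the logistic-loss analysis of the talk): with a step size just below 2/(top Hessian eigenvalue), the iterate flips sign every step along the sharp direction, yet the loss keeps falling because both coordinate multipliers stay below 1 in magnitude.

```python
# Hedged toy sketch (quadratic, not the logistic-loss setting of the talk):
# L(w) = 0.5 * (sharp * w1^2 + flat * w2^2), with eta just under the
# stability threshold 2/sharp.  The sharp coordinate oscillates in sign
# while the total loss still decreases monotonically.

sharp, flat = 10.0, 0.1   # Hessian eigenvalues of the toy quadratic
eta = 0.19                # just under the threshold 2/sharp = 0.2

def loss(w1, w2):
    return 0.5 * (sharp * w1**2 + flat * w2**2)

w1, w2 = 1.0, 1.0
losses, signs = [loss(w1, w2)], [w1 > 0]
for _ in range(40):
    w1 -= eta * sharp * w1  # multiplier 1 - eta*sharp = -0.9: sign flips each step
    w2 -= eta * flat * w2   # multiplier 1 - eta*flat = 0.981: slow steady decay
    losses.append(loss(w1, w2))
    signs.append(w1 > 0)

print(losses[0], losses[-1])
print(signs[:6])  # alternating signs along the sharp axis
```

In deep networks the sharpness itself adapts toward 2/η rather than staying fixed, which is part of what the talk's analysis addresses; this fixed-curvature toy only illustrates why oscillation and progress can coexist.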

Based on joint work with Yuhang Cai, Michael Lindsey, Song Mei, Matus Telgarsky, Jingfeng Wu, Bin Yu and Kangjie Zhou.

Bio: Peter Bartlett is Professor of Statistics and Computer Science at UC Berkeley and Principal Scientist at Google DeepMind. At Berkeley, he is the Machine Learning Research Director at the Simons Institute for the Theory of Computing, Director of the Foundations of Data Science Institute, and Director of the Collaboration on the Theoretical Foundations of Deep Learning, and he has served as Associate Director of the Simons Institute. He is President of the Association for Computational Learning, Honorary Professor of Mathematical Sciences at the Australian National University, and co-author with Martin Anthony of the book Neural Network Learning: Theoretical Foundations.

He was awarded the Malcolm McIntosh Prize for Physical Scientist of the Year in Australia, and has been an Institute of Mathematical Statistics Medallion Lecturer, an IMS Fellow and Australian Laureate Fellow, a Fellow of the ACM, a recipient of the UC Berkeley Chancellor’s Distinguished Service Award, and a Fellow of the Australian Academy of Science.

🩷💛 Engage with us online! 🖤🩷
Instagram: instagram.com/waterloomath
LinkedIn: www.linkedin.com/showcase/faculty-of-math
Facebook: facebook.com/waterloomath

--

As North America's only dedicated Faculty of Math, we are nationally and internationally recognized as one of the top schools for Mathematics and Computer Science.

With nearly $30 million in research funding (2019/20) and an alumni network of over 45,000 across more than 100 countries, our students, faculty, and graduates continue to push the boundaries of research to discover new ways to harness the power of mathematics, computer science, and statistics.

Visit our website at uwaterloo.ca/math
