
Sparse Neural Networks: From Practice to Theory

  • Communications and Signal Processing Seminar Series
  • 2022-10-21
  • 4814 views


Video description

Atlas Wang
Assistant Professor, Electrical and Computer Engineering
The University of Texas at Austin

Abstract: A sparse neural network (NN) has most of its parameters set to zero and is traditionally viewed as the product of NN compression (i.e., pruning). Recently, however, sparsity has emerged as an important bridge for modeling the underlying low dimensionality of NNs and for understanding their generalization, optimization dynamics, implicit regularization, expressivity, and robustness. Deep NNs learned with sparsity-aware priors have also demonstrated significantly improved performance through a full stack of applied work on algorithms, systems, and hardware. In this talk, I plan to cover some of our recent progress on the practical, theoretical, and scientific aspects of sparse NNs. I will scratch the surface of three questions: (1) practically, why one should love a sparse NN beyond its use as a post-training compression tool; (2) theoretically, what guarantees one can expect from sparse NNs; and (3) what the future prospects of exploiting sparsity are.
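The abstract's starting point, a network whose smallest-magnitude parameters have been zeroed out by pruning, can be made concrete with a short sketch. Below is a minimal, illustrative example of one-shot global magnitude pruning in PyTorch; the two-layer model and the 90% sparsity level are assumptions for illustration, not details from the talk.

    # Minimal sketch (assumed setup, not from the talk) of one-shot global
    # magnitude pruning: turning a dense NN into a sparse one by zeroing
    # its smallest-magnitude parameters.
    import torch
    import torch.nn as nn

    def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
        """Zero the smallest-magnitude parameters globally across the model
        (for simplicity, this sketch prunes biases as well as weights)."""
        all_mags = torch.cat([p.detach().abs().flatten()
                              for p in model.parameters()])
        k = int(sparsity * all_mags.numel())     # number of entries to zero
        threshold = all_mags.kthvalue(k).values  # k-th smallest magnitude
        with torch.no_grad():
            for p in model.parameters():
                p.mul_((p.abs() > threshold).float())  # apply binary mask

    # Illustrative two-layer model; the layer sizes are arbitrary.
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    magnitude_prune(model, sparsity=0.9)

    total = sum(p.numel() for p in model.parameters())
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    print(f"achieved sparsity: {zeros / total:.2%}")  # roughly 90% zeros

After pruning, the surviving weights are typically fine-tuned or retrained; the talk's broader point is that such sparse networks are interesting well beyond this compression recipe.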

Bio: Professor Zhangyang “Atlas” Wang is currently the Jack Kilby/Texas Instruments Endowed Assistant Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin, leading the VITA group (https://vita-group.github.io/). Meanwhile, in a part-time role, he serves as the Director of AI Research & Technology for Picsart. During 2021 – 2022, he held a visiting researcher position at Amazon Search. He received his Ph.D. degree in ECE from UIUC in 2016, advised by Professor Thomas S. Huang, and his B.E. degree in EEIS from USTC in 2012. Prof. Wang has broad research interests spanning the theory and application aspects of machine learning. Most recently, he studies efficient ML / learning with sparsity, robust & trustworthy ML, AutoML / learning to optimize (L2O), and graph ML, as well as their applications in computer vision and interdisciplinary science. His research is supported by NSF, DARPA, ARL, ARO, IARPA, DOE, as well as dozens of industry and university grants. He and his students have received many research awards and scholarships, as well as extensive media coverage.

