Jonathan Frankle of MosaicML — Neural Network Pruning and Training

Jonathan Frankle, Chief Scientist at MosaicML and Assistant Professor of Computer Science at Harvard University, joins Lukas Biewald on this episode of Gradient Dissent. With comprehensive infrastructure and software tools, MosaicML aims to help businesses train complex machine learning models on their own proprietary data.

In this episode of Gradient Dissent they discuss:

Details of Jonathan’s Ph.D. dissertation, which explores his “Lottery Ticket Hypothesis.”
The role of neural network pruning and how it impacts the performance of ML models.
Why transformers will be the go-to way to train NLP models for the foreseeable future.
Why the process of speeding up neural net learning is both scientific and artisanal.
What MosaicML does, and how it approaches working with clients.
The challenges for developing AGI.
Details around ML training policy and ethics.
Why data brings the magic to customized ML models.
The many use cases for companies looking to build customized AI models.
And much more.

Full episode notes, links, and the transcript are available at http://wandb.me/gd-jonathan
