Toward Efficient Deep Neural Network Deployment: Deep Compression and EIE, Song Han


Neural networks are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. Song Han explains how deep compression addresses this limitation by reducing the storage requirement of neural networks by 10x-49x without affecting their accuracy, and proposes an energy-efficient inference engine (EIE) that runs inference directly on the sparse, compressed model, achieving 13x higher speed and 3000x better energy efficiency than a GPU.
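To make the idea concrete, here is a minimal NumPy sketch of two of the core steps deep compression applies to a weight matrix: magnitude pruning (zeroing small weights to create sparsity) and weight sharing (clustering the surviving weights to a small codebook so each weight is stored as a short index). The threshold, cluster count, and function names below are illustrative assumptions, not the talk's actual implementation.

```python
import numpy as np

def prune(weights, threshold):
    """Magnitude pruning: zero out weights below the threshold."""
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def share_weights(weights, n_clusters):
    """Weight sharing: quantize nonzero weights to n_clusters shared
    centroids via a simple 1-D k-means (Lloyd's algorithm)."""
    nonzero = weights[weights != 0]
    # Initialize centroids linearly over the observed weight range.
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters)
    for _ in range(20):  # a few Lloyd iterations are enough for a sketch
        assign = np.argmin(np.abs(nonzero[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = nonzero[assign == k].mean()
    # Snap each surviving weight to its nearest shared centroid.
    quantized = weights.copy()
    mask = weights != 0
    idx = np.argmin(np.abs(weights[mask][:, None] - centroids[None, :]), axis=1)
    quantized[mask] = centroids[idx]
    return quantized

rng = np.random.default_rng(0)
w = rng.normal(0, 1, size=(64, 64))
w_pruned = prune(w, threshold=0.5)          # illustrative threshold
w_compressed = share_weights(w_pruned, 16)  # 16 centroids -> 4-bit indices
sparsity = np.mean(w_compressed == 0)
n_distinct = len(np.unique(w_compressed[w_compressed != 0]))
```

After these two steps, the remaining weights can be stored as 4-bit codebook indices in a sparse format (the full pipeline in the talk additionally applies Huffman coding), which is where the 10x-49x storage reduction comes from.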
