Machine Learning and Imaging Lecture 11: Back-propagation and auto-differentiation software

This lecture covers the computational methods that make it practical to train today's machine learning and deep learning models. "Auto-differentiation" software carries out the key training step of "back-propagation", in which the weights of a neural network are updated, iteration by iteration, so that the network performs better with respect to a defined loss function. Simple examples borrowed from Stanford's CS231n course are used to walk through the auto-differentiation process step by step and build a solid understanding of the basic mechanism. Additional resources are available at deepimaging.github.io
#machinelearning #cameras #medicalimaging #ai
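
The lecture walks through examples of this kind. As an informal illustration (not taken from the lecture slides), here is a minimal Python sketch of back-propagation on the classic CS231n toy graph f(x, y, z) = (x + y) * z: the forward pass computes the value of every node, and the backward pass traverses the graph in reverse, applying the chain rule node by node to obtain the gradient of the output with respect to each input.

# Forward pass: evaluate every node in the computational graph.
x, y, z = -2.0, 5.0, -4.0
q = x + y          # intermediate node: q = 3
f = q * z          # output node:       f = -12

# Backward pass: apply the chain rule in reverse order through the graph.
df_df = 1.0              # seed gradient: d f / d f = 1
df_dq = z * df_df        # multiply gate: d f / d q = z  -> -4
df_dz = q * df_df        # multiply gate: d f / d z = q  ->  3
df_dx = 1.0 * df_dq      # add gate routes the gradient unchanged -> -4
df_dy = 1.0 * df_dq      # add gate routes the gradient unchanged -> -4

Auto-differentiation software builds and traverses this computational graph automatically. The same toy example written with PyTorch (used here purely as an illustration; the lecture itself may use a different framework) looks like this:

import torch

x = torch.tensor(-2.0, requires_grad=True)
y = torch.tensor(5.0, requires_grad=True)
z = torch.tensor(-4.0, requires_grad=True)

f = (x + y) * z    # forward pass records the computational graph
f.backward()       # back-propagation: reverse-mode auto-differentiation

print(x.grad, y.grad, z.grad)   # tensor(-4.), tensor(-4.), tensor(3.)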
