Machine Learning and Imaging Lecture 9: “Deep” Networks – Theoretical Motivations

In this lecture, Dr. Horstmeyer first reviews material from Lectures 7-8 that supports the need to extend machine learning networks beyond the basic linear and logistic classifier approaches. He then presents some of the theoretical and mathematical background that underpins the foundations of machine learning. Using this background, he makes a case for why machine learning is possible at all and motivates the need for specific features within neural networks to achieve successful performance. Concepts such as network capacity, the bias-variance tradeoff, and the idea of manifolds are introduced to support this motivation. A significant portion of this lecture builds upon material from the Caltech course, “Learning from Data,” by Prof. Y. Abu-Mostafa. Additional course material is available at deepimaging.github.io
#machinelearning #cameras #medicalimaging #ai #tensorflow
