4. Stochastic Gradient Descent


A recurring theme in machine learning is formulating a learning problem as an optimization problem; empirical risk minimization was our first example of this. Thus, to do learning, we need to do optimization. In this lecture we present stochastic gradient descent (SGD), which is today's standard optimization method for large-scale machine learning problems.
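As a rough illustration of the idea (not code from the lecture itself), here is a minimal SGD sketch: it minimizes the empirical risk of a one-parameter least-squares model by stepping against the gradient of one example at a time. The data, step size, and loop counts are all illustrative choices.

```python
import random

random.seed(0)

# Hypothetical synthetic data generated from y = 3x, so SGD should recover w ≈ 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

w = 0.0    # single parameter; per-example squared-error loss is (w*x - y)^2 / 2
lr = 0.05  # step size (learning rate)

for epoch in range(200):
    random.shuffle(data)        # visit the examples in a fresh random order each pass
    for x, y in data:
        grad = (w * x - y) * x  # gradient of the per-example loss with respect to w
        w -= lr * grad          # SGD update: one small step against that gradient

print(round(w, 3))
```

The key contrast with full-batch gradient descent is that each update uses the gradient of a single example rather than an average over the whole dataset, which is what makes SGD scale to large training sets.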

Access the full course at https://bloom.bg/2ui2T4q
