Why Do We Use the Sigmoid Function for Binary Classification?

This video explains why we use the sigmoid function in neural networks, especially for binary classification. We cover the practical side: ensuring we get a consistent gradient from the standard cross-entropy loss function, and keeping the equation cheap to compute. We also cover the statistical side, giving an interpretation of the logits (the values passed into the sigmoid function): they can be thought of as normally distributed values whose means are shifted one way or the other depending on which class they belong to.
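As a companion to the video, here is a minimal sketch (not from the video itself) of the sigmoid function, written in a numerically stable form so that `exp()` never overflows for large negative logits:

```python
import math

def sigmoid(x: float) -> float:
    """Map a logit x to a probability in (0, 1), numerically stably."""
    if x >= 0:
        # exp(-x) <= 1 here, so no overflow.
        return 1.0 / (1.0 + math.exp(-x))
    # For x < 0, rewrite using exp(x) <= 1 to stay stable.
    e = math.exp(x)
    return e / (1.0 + e)

# A logit of 0 maps to probability 0.5; large positive logits
# approach 1, and large negative logits approach 0.
print(sigmoid(0.0))              # 0.5
print(round(sigmoid(4.0), 3))    # 0.982
print(round(sigmoid(-4.0), 3))   # 0.018
```

Note the symmetry `sigmoid(-x) == 1 - sigmoid(x)`, which is what lets the same output serve as the probability of one class and its complement as the probability of the other.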

My other video, "Derivative of Sigmoid and Softmax Explained Visually":
📼    • Derivative of Sigmoid and Softmax Exp...  

The Desmos graph of the sigmoid function:
📈https://www.desmos.com/calculator/hjc...

Connect with me:
🐦 Twitter -   / elliotwaite  
📷 Instagram -   / elliotwaite  
👱 Facebook -   / elliotwaite  
💼 LinkedIn -   / elliotwaite  

Join our Discord community:
💬   / discord  

🎵 Kazukii - Return
→   / ohthatkazuki  
→ https://open.spotify.com/artist/5d07M...
→    / officialkazuki  
