Quantifying the Uncertainty in Model Predictions

Neural networks are infamous for making wrong predictions with high confidence. Ideally, when a model encounters difficult inputs, or inputs unlike the data it saw during training, it should signal to the user that it is unconfident about the prediction. Better yet, the model could offer alternative predictions when it is unsure about its best guess. Conformal prediction is a general-purpose method for quantifying the uncertainty in a model's predictions and generating alternative outputs. It is versatile: it requires no assumptions about the model and applies to classification and regression alike. It is statistically rigorous, providing a mathematical guarantee on model confidence. And it is simple, involving an easy three-step procedure that can be implemented in 3-5 lines of code. In this talk I will introduce conformal prediction and the intuition behind it, along with examples of how it can be applied in real-world use cases.
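
A minimal sketch of the three-step procedure described above, using split conformal prediction for classification with a softmax-based score. The dataset, model, and variable names are illustrative assumptions, not taken from the talk.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data split into train / calibration / test sets (illustrative only).
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
alpha = 0.1  # target miscoverage: prediction sets should contain the true label ~90% of the time

# Step 1: compute conformal scores on a held-out calibration set
# (here, 1 minus the softmax score assigned to the true class).
cal_scores = 1.0 - model.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

# Step 2: take the (1 - alpha) quantile of the scores, with a finite-sample correction.
n = len(cal_scores)
qhat = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Step 3: at test time, include every class whose score clears the threshold.
prediction_sets = model.predict_proba(X_test) >= 1.0 - qhat

coverage = prediction_sets[np.arange(len(y_test)), y_test].mean()
print(f"Empirical coverage: {coverage:.3f}")  # should be close to 1 - alpha
```

The same recipe works with any model that outputs per-class scores; only the score function in step 1 changes for regression or other tasks.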

Jesse Cresswell, Sr. Machine Learning Scientist, Layer 6 AI
