Selecting the best model in scikit-learn using cross-validation


In this video, we'll learn about K-fold cross-validation and how it can be used for selecting optimal tuning parameters, choosing between models, and selecting features. We'll compare cross-validation with the train/test split procedure, and we'll also discuss some variations of cross-validation that can result in more accurate estimates of model performance.
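To make the workflow concrete, here is a minimal sketch of using cross_val_score for parameter tuning and model comparison. It assumes the iris dataset, a KNN neighbor sweep, and the modern sklearn.model_selection import path; these are illustrative choices, not necessarily what the notebook uses.

# Minimal sketch of K-fold cross-validation with scikit-learn.
# The iris dataset and the KNN parameter range are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# 1) Selecting an optimal tuning parameter (number of neighbors for KNN)
k_scores = []
for k in range(1, 31):
    knn = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
    k_scores.append(scores.mean())
best_k = k_scores.index(max(k_scores)) + 1
print('Best K:', best_k, 'mean accuracy:', max(k_scores))

# 2) Choosing between models: KNN (with best K) vs. logistic regression
knn = KNeighborsClassifier(n_neighbors=best_k)
print('KNN:', cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())
logreg = LogisticRegression(max_iter=1000)
print('LogReg:', cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())

Setting cv=10 gives the 10-fold procedure discussed in the video; averaging the ten fold accuracies yields a more stable performance estimate than a single train/test split.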

Download the notebook: https://github.com/justmarkham/scikit...
Documentation on cross-validation: http://scikit-learn.org/stable/module...
Documentation on model evaluation: http://scikit-learn.org/stable/module...
GitHub issue on negative mean squared error: https://github.com/scikit-learn/sciki...
An Introduction to Statistical Learning: http://www-bcf.usc.edu/~gareth/ISL/
K-fold and leave-one-out cross-validation: (video)
Cross-validation the right and wrong ways: (video)
Accurately Measuring Model Prediction Error: http://scott.fortmann-roe.com/docs/Me...
An Introduction to Feature Selection: http://machinelearningmastery.com/an-...
Harvard CS109: https://github.com/cs109/content/blob...
Cross-validation pitfalls: http://www.jcheminf.com/content/pdf/1...

WANT TO GET BETTER AT MACHINE LEARNING? HERE ARE YOUR NEXT STEPS:

1) WATCH my scikit-learn video series:
   • Machine learning in Python with sciki...  

2) SUBSCRIBE for more videos:
https://www.youtube.com/dataschool?su...

3) JOIN "Data School Insiders" to access bonus content:
  / dataschool  

4) ENROLL in my Machine Learning course:
https://www.dataschool.io/learn/

5) LET'S CONNECT!
Newsletter: https://www.dataschool.io/subscribe/
Twitter:   / justmarkham  
Facebook:   / datascienceschool  
LinkedIn:   / justmarkham  
