How to Deploy ML Models in Production with BentoML


In this video, you'll learn how to deploy machine learning models to production using BentoML. I explain how to install BentoML, save ML models to BentoML's local store, create a BentoML service, build a bento, and containerise a bento with Docker. I also send requests to the BentoML service and receive inferences back.

Check code:
https://github.com/musikalkemist/mlde...

=================

Join The Sound Of AI Slack community:
https://valeriovelardo.com/the-sound-...

Interested in hiring me as a consultant/freelancer?
https://valeriovelardo.com/

Connect with Valerio on Linkedin:
  / valeriovelardo  

Follow Valerio on Facebook:
  / thesoundofai  

Follow Valerio on Twitter:
  / musikalkemist

=================

Content

0:00 Intro
0:19 BentoML deployment steps
1:03 Installing BentoML and other requirements
2:11 Training a simple ConvNet model on MNIST
5:52 Saving Keras model to BentoML local store
10:26 Creating BentoML service
15:28 Sending requests to BentoML service
22:06 Creating a bento
25:46 Serving a model through a bento
28:25 Dockerise a bento
30:46 Run BentoML service via Docker
32:41 Deployment options: Kubernetes + Cloud
33:43 Outro
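The CLI side of the workflow in the chapters above can be sketched as follows. This is a hedged outline, not the video's exact commands: the file name `service.py`, the service name `mnist_classifier`, and the request payload are illustrative assumptions, and `bentoml build` expects a `bentofile.yaml` in the working directory.

```shell
# Install BentoML (the video also installs TensorFlow for the MNIST model).
pip install bentoml tensorflow

# Serve the service defined in service.py for local testing.
bentoml serve service.py:svc

# Build a bento (reads bentofile.yaml), then containerise it with Docker.
bentoml build
bentoml containerize mnist_classifier:latest

# Run the container and send a request to the predict endpoint.
docker run -p 3000:3000 mnist_classifier:latest
curl -X POST -H "Content-Type: application/json" \
     -d '[[0.0, 0.1]]' http://localhost:3000/predict
```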
