Deploying ML Models in Production: An Overview

Deploying ML models in production is a delicate process filled with challenges. You can deploy a model via a REST API, on an edge device, or as an offline unit used for batch processing. You can build the deployment pipeline from scratch, or use ML deployment frameworks.
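As an illustration of the REST API strategy, here is a minimal sketch using only the Python standard library. The model, the `predict` function, and the `/predict` route are all hypothetical stand-ins (a real service would load a trained model from disk and use a production server, as the tools below provide):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical "model": in a real pipeline this would be a trained
# model loaded from disk (e.g. with joblib or pickle).
def predict(features):
    # Dummy linear scoring rule standing in for model.predict().
    weights = [0.4, 0.2, 0.4]
    score = sum(w * x for w, x in zip(weights, features))
    return {"label": "positive" if score > 0.5 else "negative",
            "score": score}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        # Read the JSON request body and run inference.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        result = predict(payload["features"])
        # Return the prediction as a JSON response.
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

Clients would then POST `{"features": [...]}` to `/predict`. Hand-rolling this wiring for every model is exactly the "basic ML deployment" whose disadvantages the video discusses, and what the frameworks below automate.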

In this video, you'll learn about the different strategies to deploy ML models in production. I provide a short review of the main ML deployment tools on the market (TensorFlow Serving, MLflow Models, Seldon Deploy, KServe from Kubeflow). I also present BentoML - the focus of this mini-series - describing its features in detail.

=================

The 1st Sound of AI Hackathon (register here!):
https://musikalkemist.github.io/theso...

Join The Sound Of AI Slack community:
https://valeriovelardo.com/the-sound-...

Interested in hiring me as a consultant/freelancer?
https://valeriovelardo.com/

Connect with Valerio on Linkedin:
  / valeriovelardo  

Follow Valerio on Facebook:
  / thesoundofai  

Follow Valerio on Twitter:
  / musikalkemist

=================

Content:

0:00 Intro
0:36 ML deployment strategies
1:32 Basic ML deployment
3:27 Disadvantages of basic ML deployment
4:57 Overview of ML deployment tools
9:54 BentoML
14:00 What's next?
