Improve Kubernetes Uptime and Resilience with a Canary Deployment

Your organization is successfully delivering apps in Kubernetes, and now the team is ready to roll out v2 of a backend service. But there are concerns about traffic interruptions (a.k.a. downtime) and the possibility that v2 might be unstable. As the Kubernetes engineer, you need to find a way to ensure v2 can be tested and rolled out with little to no impact on customers.

You need to implement a gradual, controlled migration – and what better way than with the traffic-splitting technique known as a "canary deployment"! Canary deployments provide a safe and agile way to test the stability of a new feature or version. Because your use case involves traffic moving between two Kubernetes services, you know a service mesh will yield the easiest and most reliable results. You use NGINX Service Mesh to send 10% of your traffic to v2, while the remaining 90% goes to v1. Then you gradually transition larger percentages of traffic to v2 until you reach 100%. Problem solved!
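If you want to script that ramp-up rather than edit manifests by hand, here is a minimal sketch using the official Kubernetes Python client to write the SMI TrafficSplit resource that NGINX Service Mesh acts on, starting v2 at 10% and stepping up to 100%. The namespace, service names (backend, backend-v1, backend-v2), TrafficSplit name, and API version are illustrative assumptions, not values taken from the lab itself.

import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

GROUP = "split.smi-spec.io"
VERSION = "v1alpha3"        # assumption: check `kubectl api-resources` for the version your mesh serves
PLURAL = "trafficsplits"
NAMESPACE = "default"       # assumption
NAME = "backend-canary"     # assumption

def traffic_split(v2_weight: int) -> dict:
    """Build a TrafficSplit manifest sending v2_weight% of traffic to backend-v2."""
    return {
        "apiVersion": f"{GROUP}/{VERSION}",
        "kind": "TrafficSplit",
        "metadata": {"name": NAME, "namespace": NAMESPACE},
        "spec": {
            "service": "backend",  # the root Service the frontend calls (assumption)
            "backends": [
                {"service": "backend-v1", "weight": 100 - v2_weight},
                {"service": "backend-v2", "weight": v2_weight},
            ],
        },
    }

def apply_split(api: client.CustomObjectsApi, body: dict) -> None:
    """Create the TrafficSplit if it does not exist yet, otherwise replace its spec."""
    try:
        existing = api.get_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, NAME)
        existing["spec"] = body["spec"]
        api.replace_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, NAME, existing)
    except ApiException as exc:
        if exc.status == 404:
            api.create_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, body)
        else:
            raise

def main() -> None:
    config.load_kube_config()          # uses your current kubeconfig context
    api = client.CustomObjectsApi()
    # Start the canary at 10% and ramp up in steps, pausing between steps so you
    # can check error rates and latency (e.g. in Jaeger) before shifting more traffic.
    for v2_weight in (10, 25, 50, 75, 100):
        apply_split(api, traffic_split(v2_weight))
        print(f"backend-v2 now receives {v2_weight}% of traffic")
        if v2_weight < 100:
            time.sleep(300)            # in practice, gate each step on real health checks

if __name__ == "__main__":
    main()

Run it with kubectl access to the cluster, and treat the fixed sleep as a placeholder: between steps you would watch v2's error rate and latency before letting more traffic shift.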

In This Lab You Will:
◆ Deploy minikube and NGINX Service Mesh
◆ Deploy two apps and use NGINX Service Mesh to observe traffic (a rough client-side check is sketched after this list)
◆ Use NGINX Service Mesh to implement a canary deployment
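As a complement to the mesh's own observability (Jaeger traces), here is a rough client-side sanity check you could run while the split is active: it calls the frontend repeatedly and tallies which backend version answered. The URL and the assumption that the response JSON carries a "version" field are purely illustrative; adjust them to however the lab's apps actually expose themselves (e.g. via kubectl port-forward or minikube service).

from collections import Counter
import requests

FRONTEND_URL = "http://localhost:8080/"   # assumption: adjust to however you expose the frontend
SAMPLES = 200

def sample_versions() -> Counter:
    """Call the frontend repeatedly and tally which backend version answered."""
    seen = Counter()
    for _ in range(SAMPLES):
        resp = requests.get(FRONTEND_URL, timeout=5)
        resp.raise_for_status()
        seen[resp.json().get("version", "unknown")] += 1
    return seen

if __name__ == "__main__":
    for version, count in sample_versions().most_common():
        print(f"{version}: {count / SAMPLES:.0%} of responses")

With the 90/10 split in place you would expect roughly 90% of responses from v1 and 10% from v2, shifting as the canary ramps up.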

Technologies Used:
◆ NGINX Service Mesh
◆ Helm
◆ Jaeger

Try this demo for yourself and register today for Microservices March 2022!
⬡ https://bit.ly/3rVAz74
Get Started with NGINX Ingress Controller
⬡ https://bit.ly/35BHoSi
Free eBook: Taking Kubernetes from Test to Production
⬡ https://bit.ly/3HpvaJL

Chapters:
0:00 - How to Improve Kubernetes Uptime and Resilience
0:06 - Deploy a Cluster and NGINX Service Mesh
2:50 - Deploy Two Apps (a Frontend and a Backend)
8:06 - Use NGINX Service Mesh to Implement a Canary Deployment
15:13 - Try the Lab! Register for Microservices March 2022
