Kubernetes Deployment Strategy

Yash Bindlish
6 min read · Apr 29, 2020

Container adoption is already at a scale where enterprises are rapidly building containerization into their application modernization strategies. And when it comes to orchestration, Kubernetes, also known as K8s, has gained enormous popularity.

Every organization today is racing to leverage technology disruption to meet customer expectations and bring agility to its business deliverables. Organizations want to shorten time to market, make deployments more resilient without incurring downtime, release apps and features swiftly, and operate with great agility. Choosing the right Kubernetes deployment strategy is key to delivering resilient applications and infrastructure at the scale organizations need in order to deliver the best results to their end customers.

Before we dive into the various deployment strategies, it is important to understand how a Kubernetes rollout works.

Kubernetes Rollout workflow

1) Today, infrastructure is treated no differently than software code. Infrastructure as Code (IaC) is the practice of managing and provisioning data centers through machine-readable definition files. You start by writing YAML files describing the desired state of the cluster (see the minimal example after this list).

2) The next step is to apply the YAML file to the Kubernetes cluster using the command-line interface, kubectl.

3) Kubectl submits the request to the API server (kube-apiserver), the main management point of the entire cluster. Kube-apiserver is responsible for authenticating the request, and before making any changes to the worker nodes it records the desired changes in a database, etcd.

4) The Kube Controller Manager runs as a daemon and continuously monitors incoming requests. Each controller watches the state of the cluster through the API server's watch feature and, when notified, makes the changes needed to move the current state towards the desired state.

5) After the controllers have run, the kube-scheduler sees that there are pods in the "Pending" state because they have not yet been scheduled to a node. The scheduler finds suitable nodes for the pods, and the kubelet on each chosen node then takes over and starts them.

6) Kubelet is the worker node agent. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
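
To make the workflow concrete, here is a minimal sketch of the kind of manifest you would write in step 1 and apply in step 2. The file name, image, and port are illustrative assumptions rather than something prescribed above.

# hello-world-pod.yaml: a minimal manifest for step 1 (illustrative)
# Step 2 applies it with: kubectl apply -f hello-world-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  containers:
    - name: hello-world
      image: nginx:alpine      # assumed sample image
      ports:
        - containerPort: 80    # nginx listens on port 80 by default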

Deployment Strategies

There are several different deployment strategies you can take advantage of, depending on your goal. For example, you may need to roll out changes to a specific environment for more testing, or to a subset of users or customers, or you may want to do some user testing before making a feature "generally available."

- Rolling Deployment

Rolling deployment is the default deployment strategy in Kubernetes. It slowly replaces pods one at a time to avoid downtime; old pods are scaled down only once the new pods are up and running fine.

Rolling Update Strategy

Rolling updates wait for new pods to become Ready before scaling down the old pods one by one, and the deployment can be aborted without bringing the whole cluster down. How aggressively pods are replaced can be tuned with the strategy fields shown in the sketch after the manifest below.

YAML File:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: nginx:alpine      # image names must be lowercase
          ports:
            - containerPort: 80    # nginx:alpine serves on port 80
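
The manifest above relies on the default strategy. If you want to control the pace of the rollout explicitly, you can spell out the RollingUpdate strategy and its knobs; the values below (maxSurge: 1, maxUnavailable: 0) are an illustrative choice rather than something mandated by Kubernetes.

spec:
  strategy:
    type: RollingUpdate        # the default for Deployments
    rollingUpdate:
      maxSurge: 1              # at most one extra pod above the desired replica count
      maxUnavailable: 0        # never remove an old pod before its replacement is Ready

With maxUnavailable set to 0, an old pod is terminated only after its replacement passes its readiness check, which is exactly the zero-downtime behaviour described above.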

Advantages:

- The application faces no downtime, because old pods are scaled down one by one only after the new pods are up and running

- When your application is stateful in nature, it is one of the most recommended deployment strategies to adopt

Disadvantages:

- It is a time-consuming strategy, since new pods must be provisioned before the old pods are torn down

- Versioning can be difficult to manage

- Recreate Deployment

In this deployment strategy, all the old pods are killed at once and replaced with new ones in one go. It is one of the simplest deployment strategies available and can be useful for workloads that are not production related.

Recreate Deployment Strategy

YAML File:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: nginx:alpine
          ports:
            - containerPort: 80

Advantages:

- Suitable when you do not have a production-scale application and can afford downtime

- Suitable when the application does not support running multiple versions at the same time

Disadvantages:

- The application will incur downtime during the rollout

- No version control during the rollout

- Blue/Green Deployment

In a blue/green deployment, the old version (green) and the new version (blue) are deployed together at the same time. While both versions are active, end users continue to access the old (green) version, and the new (blue) version is available only to the QA team through a separate service or via port forwarding.

Blue/Green Deployment Type

This strategy allows you to test the blue version in production while only exposing users to the green, stable version. Once the blue version is tested, the service is switched over to it and the green version is scaled down.

YAML File:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-02
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
      version: "02"
  template:
    metadata:
      labels:
        app: hello-world    # must match the Service selector shown below
        version: "02"
    spec:
      containers:
        - name: hello-world
          image: nginx:alpine
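
For context, before the switch the Service keeps routing traffic to the old (green) version. A minimal sketch, assuming the original deployment's pods carry the label version: "01" (that label is an assumption; the green manifest is not shown in this article):

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
    version: "01"       # still pointing at the old (green) pods
  ports:
    - port: 80
      targetPort: 80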

After the application has been successfully tested, the Service is switched to the blue version and the old green version is scaled down:

apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
    version: "02"
...

Advantages:

- Instant rollout/rollback

- Avoids versioning issues, since the entire cluster state changes in one go

Disadvantages:

- Requires more resources, as both versions run side by side during validation

- Validation/testing requires manual intervention, which can extend to hours depending on the features released

- Canary Deployment

The canary deployment type is very similar to the blue/green deployment type. A canary deployment is mainly used when you need to test new functionality on the backend of your application. Traditionally, you may have had two almost identical servers: one that serves all users and another, with the new features, that is rolled out to a subset of users and then compared. When no errors are reported, the new version can gradually roll out to the rest of the infrastructure.

Canary Deployment Strategy

Canary deployments allow a small group of your customers to test the new version of your application. You run one ReplicaSet of the new version alongside the current version and then, after a specified period of time without errors, scale up the new version as you remove the old one (a minimal sketch of this pattern follows below).
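
Kubernetes has no dedicated canary object, so a common pattern is to run a second Deployment with a small replica count and let a Service that selects only the shared app label spread traffic across both versions. The manifests below are a minimal sketch of that pattern, reusing the hello-world names from the earlier examples; the replica split and the nginx image are illustrative assumptions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-canary
spec:
  replicas: 1                     # roughly 10% of traffic if the stable Deployment runs 9 replicas
  selector:
    matchLabels:
      app: hello-world
      track: canary
  template:
    metadata:
      labels:
        app: hello-world          # shared label, so the Service below also selects these pods
        track: canary
    spec:
      containers:
        - name: hello-world
          image: nginx:alpine     # assumed sample image, standing in for the new version
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world              # matches both stable and canary pods
  ports:
    - port: 80
      targetPort: 80

Because the Service selects only the shared app label, the canary receives a share of traffic roughly proportional to its replica count; scaling the canary Deployment up while scaling the stable one down completes the rollout.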

Conclusion

There are different ways to deploy an application. When releasing to development or staging environments, a recreate or rolling (ramped) deployment is usually a good choice. When it comes to production, a rolling or blue/green deployment is usually a good fit, but proper testing of the new platform is necessary. If you are not confident in the stability of the platform, or in the possible impact of releasing a new software version, then a canary release is the way to go: it lets the consumer test the application and its integration with the platform.


Yash Bindlish

Principal Solution Architect with over 14 years of extensive IT architecture experience who shares the enthusiasm for exploiting technology to create business value.