Under the Hood: An Introduction to Kubernetes Architecture
Understanding Kubernetes architecture is crucial for enterprises seeking agility in deploying and maintaining containerized applications.
As enterprises rapidly adopt microservice architectures for their applications, modern applications are increasingly built using containers. Containerization packages an application component (service) together with all of its essential dependencies, such as libraries and configuration, into a container that is easy to deploy and manage.
Once an application is deployed in a container environment, it still needs scalability, reliability, and orchestration to coordinate and manage the application across multiple containers. Kubernetes (often abbreviated as K8s) is a container orchestration framework for containerized applications. Think of Kubernetes as a shipping port that manages all communication and scheduling, loading your containers onto whichever ship has the resources available.
Not only does Kubernetes have all the needed capabilities to support your complex containerized apps, but it’s also the most appropriate framework in the industry today for both developers and operations.
Kubernetes helps you group the containers that make up an application into logical units for easy management and discovery.
Its flexible architecture for distributed systems lets it automatically orchestrate scaling and failover, along with easy deployment patterns.
Kubernetes Architecture and Components
Kubernetes is extremely flexible and is capable of being deployed in many different configurations. It supports clusters as small as a single node and as large as a few thousand. It can be deployed using either physical or virtual machines on premises or in the cloud.
Kubernetes consists of two main components:
- Master (Control Plane)
- Worker Nodes
What is Master Node in Kubernetes Architecture?
The Kubernetes master node is considered the brain (control plane) of the whole system. This is where all the decisions are made, such as scheduling and detecting/responding to cluster events. The master receives input from developers and system administrators through a CLI or user interface via an API. Using the master node, you define the pods, deployments, configurations, and replica sets that you want Kubernetes to manage and maintain.
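The desired state you hand to the master is typically expressed as a YAML manifest. Below is a minimal sketch of a Deployment; the name `nginx-demo` and the image tag are illustrative choices, not from any particular cluster:

```yaml
# A minimal Deployment manifest: asks Kubernetes to keep
# 3 identical replicas of an nginx pod running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo        # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Submitting this with `kubectl apply -f deployment.yaml` sends it to the API server; from there the control plane works continuously to make the cluster match it.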
Master Components:
· Kube-apiserver: the Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replica sets, and others), serving as the frontend to the cluster. It acts as a gateway to the cluster and supports lifecycle orchestration.
· Etcd cluster: a simple, distributed key-value store used to hold the Kubernetes cluster data (such as the number of pods, their state, namespaces, etc.). In simple words, it is the database of Kubernetes. For security reasons, it is only accessible through the API server. etcd notifies the cluster about configuration changes with the help of watchers: notifications are API requests sent to each etcd cluster node that trigger an update of the information in the node's storage.
· Kube-controller-manager: runs a number of distinct controller processes in the background (for example, the replication controller maintains the correct number of replicas for a pod, the endpoints controller populates Endpoints objects joining services and pods, and so on) to regulate the shared state of the cluster and perform routine tasks. When a change to a service configuration occurs (for example, replacing the image the pods are running, or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.
· Kube-scheduler: schedules pods (a co-located group of containers in which our application processes run) onto the various nodes based on resource utilization. It reads a workload's operational requirements and schedules it on the best-fit node. For example, if the application needs 1 GB of memory and 2 CPU cores, its pods will be scheduled on a node with at least those resources available. The scheduler runs each time there is a need to schedule pods, and it must know the total resources available as well as the resources already allocated to existing workloads on each node. kube-scheduler is like a port captain, responsible for knowing which container needs to be scheduled on which node according to available capacity and required resources. It watches for new requests coming from the API server and assigns them to healthy nodes. If no node satisfies the specification, kube-scheduler leaves the pod in a Pending state until such a node appears.
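The 1 GB / 2 CPU example above corresponds to resource requests in a pod spec. A minimal sketch, where the pod name and image are illustrative:

```yaml
# Resource requests drive kube-scheduler's node filtering:
# nodes that cannot satisfy them are excluded before scoring.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo     # illustrative name
spec:
  containers:
    - name: app
      image: my-app:latest   # illustrative image
      resources:
        requests:
          memory: "1Gi"   # only nodes with >= 1 GiB unallocated qualify
          cpu: "2"        # and >= 2 CPU cores unallocated
```

kube-scheduler first filters out nodes that cannot satisfy these requests, then scores the remaining candidates to pick the best fit; if none qualify, the pod stays Pending.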
What is Worker Node in Kubernetes Architecture?
A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet and kube-proxy.
Below are the main components found on a (worker) node:
· Kubelet: the main service on a node, regularly taking in new or modified pod specifications (primarily through the kube-apiserver) and ensuring that pods and their containers are healthy and running in the desired state. It is the principal Kubernetes agent. This component also reports to the master on the health of the host where it is running.
· kube-proxy: a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.
· Container runtime: pulls images from a container image registry and starts and stops containers. Third-party software or a plugin, such as Docker, usually performs this function.
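The request forwarding that kube-proxy performs is driven by Service objects. A minimal sketch of a Service that load-balances traffic across matching pods (the name and label are illustrative):

```yaml
# A Service gives a stable virtual IP in front of a set of pods;
# kube-proxy on each node forwards traffic sent to it.
apiVersion: v1
kind: Service
metadata:
  name: web-service      # illustrative name
spec:
  selector:
    app: nginx-demo      # forwards to pods carrying this label
  ports:
    - port: 80           # port the Service exposes inside the cluster
      targetPort: 80     # port the pod's container listens on
```

kube-proxy on every node programs the host's networking rules so that traffic sent to this Service's cluster IP is forwarded to one of the matching pods.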
Pod:
A pod is the smallest unit of scheduling in Kubernetes. Without it, a container cannot be part of a cluster. If you need to scale your app, you can only do so by adding or removing pods. When a pod unexpectedly fails to perform its tasks, Kubernetes does not attempt to fix it. Instead, it creates and starts a new pod in its place.
Due to the flexible nature of Kubernetes architecture, applications no longer need to be tied to a particular instance of a pod. Instead, applications need to be designed so that an entirely new pod, created anywhere within the cluster, can seamlessly take its place. Containers within a Pod share an IP address and port space, and can find each other via localhost.
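That shared IP and port space can be seen in a two-container pod, where a sidecar reaches the main container over localhost. Container names and images below are illustrative:

```yaml
# Both containers share one network namespace: the sidecar can
# reach the web container at http://localhost:80 without any
# Service or extra networking.
apiVersion: v1
kind: Pod
metadata:
  name: shared-network-demo   # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25            # listens on port 80
    - name: sidecar
      image: curlimages/curl       # could curl http://localhost:80
      command: ["sleep", "infinity"]
```

Because the two containers share the pod's address, they must also avoid binding the same port, which is one reason pods are kept small and focused.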
Conclusion
Kubernetes operates using a very simple and flexible model. We simply describe how we would like our system to function; Kubernetes compares that desired state to the current state within the cluster, and its services automatically work to align the two, achieving and maintaining the desired state.
I hope you now have a better understanding of Kubernetes architecture. Keep practicing hands-on scenarios to deepen that understanding.