Kubernetes Ingress — 101
Containerization has created a buzz in the massive digital transformation space. Enterprises are widely adopting container orchestration as part of their application modernization strategy. However, when it comes to exposing services to the external world, are we still looking at the traditional approach of using load balancers?
Ingress has picked up momentum, enabling enterprises to consolidate a number of services behind a single entry point.
Kubernetes Ingress exposes HTTP and HTTPS routes from outside the cluster to Services created inside the cluster. Ingress essentially implements rules to control how traffic is routed. Typically, Ingress is set up to give Services externally reachable URLs, load balance traffic, offer name-based virtual hosting and terminate Secure Sockets Layer (SSL) or Transport Layer Security (TLS). It’s also important to note that Ingress doesn’t expose arbitrary ports or protocols, only HTTP and HTTPS.
Before diving deeper into the Ingress controller, it is important to understand the difference between ClusterIP, NodePort and LoadBalancer.
ClusterIP: — A single internal IP that exposes your service only within the cluster. ClusterIP is the default Kubernetes Service type.
However, we can still access a ClusterIP Service from outside the cluster through the Kubernetes API server proxy (kubectl proxy).
When would we use ClusterIP scenario?
- Debugging: — when a developer wants to debug a service by connecting to it from their local development environment through the Kubernetes proxy.
- Allowing internal traffic, displaying an internal dashboard, etc.
Note: this is not a recommended approach for production use, as it would expose your service to the internet directly and could lead to security issues.
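For reference, a minimal ClusterIP Service manifest might look like the following (the Service and app names, and the container port, are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-example-service   # illustrative name
spec:
  type: ClusterIP    # this is the default and may be omitted
  selector:
    app: my-app      # matches pods labelled app=my-app
  ports:
  - name: http
    port: 80         # port exposed inside the cluster
    targetPort: 8080 # port the pod's container listens on
    protocol: TCP
```

During debugging you can run `kubectl proxy` locally and reach the Service through the API server's proxy path, without exposing it permanently.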
NodePort: — Exposes the Service on each worker node’s IP at a static port, reachable at an address of the form NODE_IP:NODE_PORT. Note that NodePorts are allocated from the range 30000–32767 by default. A NodePort Service is the simplest way to get external traffic to your service. As the name implies, it opens a specific port on all the worker nodes, and any traffic that is sent to this port is forwarded to the Service.
Sample YAML for NodePort Type will look like this:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-example-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30082
    protocol: TCP
When would we use NodePort scenario?
A temporary application which does not have a long lifespan, or a service which does not need to be always available and has low business significance. However, from a best-practices point of view, a NodePort Service implementation is not recommended. It has several downsides:
- You can only use the port range 30000–32767
- You can only expose one Service per port
- If a node’s IP address changes, you need to reconfigure everything that points at the affected NodePort Services
LoadBalancer: — Exposes the Service externally using a cloud provider’s load balancer. The NodePort and ClusterIP Services to which the external load balancer routes are created automatically. On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service’s status.loadBalancer field. Traffic from the external load balancer is directed at the backend Pods; the cloud provider decides how it is load balanced.
For LoadBalancer Services, when there is more than one port defined, all ports must have the same protocol, and the protocol must be one of TCP, UDP or SCTP.
Sample YAML for LoadBalancer Type will look like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
When would we use LoadBalancer scenario?
This is the most widely adopted scenario, where a service is directly exposed to the external world using a load balancer. All traffic on the port you specify will be forwarded to the service listening behind the load balancer.
Ingress: — Kubernetes Ingress controllers are Layer 7 load balancers that run on Kubernetes itself. The load balancer runs inside the cluster, and the Ingress manages external access to the services in the cluster. It’s basically a collection of rules that allow inbound traffic to reach cluster services. While not as full-featured as a dedicated load balancer, it gives you most of what you would need from one and is much easier and faster to deploy. In fact, it’s so tidy that it lets you set up a new Layer 7 route from your terminal with one command. If running smoothly, it removes the headache of setting up a load balancer and makes for a tidy, self-contained application delivery.
An Ingress controller sits in front of all your services, acting as a “smart router” or entry checkpoint for your cluster.
When would we use Ingress scenario?
Ingress is the most powerful method available to expose your services to the external world. There are many Ingress controllers supported by Kubernetes, which can be found in the official Kubernetes documentation.
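To make the rules concrete, a minimal Ingress resource routing two paths to two backend Services might look like the following (the host, path and Service names are illustrative; the networking.k8s.io/v1 schema shown here requires a reasonably recent Kubernetes version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-example-ingress   # illustrative name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api           # /api traffic goes to the API backend
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /              # everything else goes to the web frontend
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

Note that this resource is only a set of rules; an Ingress controller must be running in the cluster to actually implement them.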
Kubernetes Ingress with AWS ALB Ingress Controller:
AWS ALB Ingress Controller for Kubernetes is a controller that triggers the creation of an Application Load Balancer and the necessary supporting AWS resources whenever an Ingress resource is created on the cluster. The Ingress resource uses the ALB to route HTTP or HTTPS traffic to different endpoints within the cluster.
The actual ALB will not be created until you create an Ingress object, which is the expected behaviour.
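As a sketch, an Ingress annotated for the ALB Ingress Controller might look like the following (the names and annotation values are illustrative; consult the controller’s documentation for the full annotation set supported by your version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-alb-example-ingress   # illustrative name
  annotations:
    kubernetes.io/ingress.class: alb             # hand this Ingress to the ALB controller
    alb.ingress.kubernetes.io/scheme: internet-facing  # provision a public ALB
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service   # illustrative backend Service
            port:
              number: 80
```

Once this object is created, the controller provisions the ALB and the supporting AWS resources asynchronously.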
Conclusion
Ingress is a set of rules that authorize external access to the services in a Kubernetes cluster, providing Layer 7 load-balancing capabilities. From a best-practices point of view, always plan a highly available architecture for your Ingress-controller-based deployments.