Understanding Kubernetes

What is Kubernetes?

Kubernetes is a platform for running containers. It takes care of starting your containerized applications, rolling out updates, maintaining service levels, scaling to meet demand, securing access, and much more. The two core concepts in Kubernetes are the API you use to define your applications and the cluster that runs your applications. A cluster is a set of individual servers that have all been configured with a container runtime like Docker and then joined together into a single logical unit with Kubernetes.

For a developer, Kubernetes provides a managed execution environment for deploying, running, managing, and orchestrating containers across a cluster. For DevOps engineers and administrators, Kubernetes provides a complete set of building blocks for automating many of the operations involved in managing development, test, and production environments. Container orchestration coordinates containers across clusters of multiple nodes when complex containerized applications are deployed. This matters not just for the initial deployment, but also for managing the containers as a single entity for purposes such as scalability and availability.

What problems does Kubernetes solve?

Kubernetes manages more than just containers, which makes it a complete application platform. The cluster includes a distributed database, which you can use to store configuration files and secrets like API keys and connection credentials; settings that contain confidential data can be managed securely in the cluster. Kubernetes delivers those seamlessly to your containers, which lets you use the same container images in every environment and apply the correct configuration from the cluster. Kubernetes also provides storage so your applications can maintain data outside of containers, giving you high availability for stateful apps; that storage is physically backed by disks in the cluster nodes or by a shared storage system. And Kubernetes manages network traffic coming into the cluster, routing it to the right containers for processing.

We haven’t said what the applications inside those containers look like, and that’s because Kubernetes doesn’t care. You can run a new application built with a cloud-native design across microservices in multiple containers, or a legacy application built as a monolith in one big container. They can be Linux apps or Windows apps. You define all types of applications in YAML files using the same API, and you can run them all on a single cluster. The joy of working with Kubernetes is that it adds a layer of consistency on top of all your apps.
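As a small illustration, an application definition sent to that API might be a Deployment manifest like the sketch below. The name, labels, and image are placeholders, not from any real project:

```yaml
# A minimal, hypothetical Deployment: keep three replicas of a web app running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # placeholder name
spec:
  replicas: 3              # Kubernetes maintains three Pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25    # any container image works here
          ports:
            - containerPort: 80
```

Sending this file with `kubectl apply -f deployment.yaml` asks the cluster to converge to that desired state.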

What features does Kubernetes offer?

Orchestration tools like Kubernetes provide the following features. The first is high availability: in simple words, this means the application has no downtime and is always accessible to its users. The second is scalability, which means the application performs well, loads fast, and users get very fast responses from it even as demand grows. The third is disaster recovery: if the infrastructure has problems — data is lost, servers fail, or something happens to the data centre — the infrastructure has to have some mechanism to back up the data and restore it to the latest state, so that the application doesn’t lose any data and the containerized application can run from the latest state after the recovery. All of these are functionalities that container orchestration technologies like Kubernetes offer.

Kubernetes Basic Architecture?

A Kubernetes cluster is made up of at least one master node, with a number of worker nodes connected to it. Each node runs a kubelet process; the kubelet is the Kubernetes agent that makes it possible for the cluster and the node to communicate with each other and execute tasks. Kubernetes follows a client-server architecture. It’s possible to have a multi-master setup (for high availability), but by default there is a single master server that acts as the controlling node and point of contact. The master server consists of various components, including the kube-apiserver, etcd storage, the kube-controller-manager, the cloud-controller-manager, the kube-scheduler, and a DNS server for Kubernetes services. Node components include the kubelet and kube-proxy, running on top of a container runtime such as Docker.

Kubernetes components

Node and Pod

Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster.

Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod’s resources. Generally, running multiple containers in a single Pod is an advanced use case.
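To make this concrete, here is a sketch of a single-container Pod manifest; the name and image are illustrative only:

```yaml
# A hypothetical single-container Pod -- the smallest deployable unit.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello          # one container; a Pod may list several
      image: nginx:1.25    # illustrative image
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods; controllers such as Deployments create and replace them for you.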

Cluster administrators manage the individual servers — called nodes in Kubernetes. You can add nodes to expand the capacity of the cluster, take nodes offline for servicing, or roll out an upgrade of Kubernetes across the cluster. In a managed service like Microsoft’s Azure Kubernetes Service or Amazon’s Elastic Kubernetes Service, those functions are wrapped in simple web interfaces or command lines. In normal usage, you forget about the underlying nodes and treat the cluster as a single entity.

The Kubernetes cluster is there to run your applications. You define your apps in YAML files and send those files to the Kubernetes API. Kubernetes looks at what you’re asking for in the YAML, compares it to what’s already running in the cluster, and makes any changes needed to reach the desired state, such as updating configuration, removing containers, or creating new ones. Containers are distributed around the cluster for high availability, and they can all communicate over virtual networks managed by Kubernetes.

Service and Ingress

A Service provides a static, permanent IP address that sits in front of a set of Pods: the app can have its own Service and the database its own Service. The good thing is that the life cycles of the Service and the Pod are not connected, so even if a Pod dies, the Service and its IP address stay, and you don’t have to change the endpoint. In other words, a Service is a stable interface to a logical set of Pods. Services use a virtual IP address local to the cluster; external clients would have no way to reach these IP addresses without an Ingress.
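A Service selecting Pods by label could be sketched like this; the name, label, and ports are assumptions for illustration:

```yaml
# Hypothetical Service: a stable cluster IP in front of matching Pods.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-app      # routes to any Pod carrying this label
  ports:
    - port: 80        # port exposed inside the cluster
      targetPort: 80  # container port on the selected Pods
```

Pods come and go, but clients keep using `web-service` and its cluster IP unchanged.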

An Ingress in K8s is an object that allows access to Services within your cluster from outside the cluster: an API object that manages external access to the Services, typically over HTTP. An Ingress may provide load balancing, SSL termination, and name-based virtual hosting, with traffic routing defined by rules specified on the Ingress resource. Ingress objects only allow HTTP or HTTPS traffic through to your cluster Services; they do not expose other ports or protocols to the wider world. For that, a Service of type LoadBalancer or NodePort should be used.
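A routing rule sending traffic for one hostname to a Service might be sketched as follows; the hostname and names are placeholders:

```yaml
# Hypothetical Ingress: routes HTTP traffic for one host to a Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.local        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service   # the Service to route to
                port:
                  number: 80
```

Note that an Ingress resource only describes the rules; an Ingress controller (such as ingress-nginx) must be running in the cluster to enforce them.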

ConfigMap and Secret

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. A ConfigMap allows you to decouple environment-specific configuration from your container images so that your applications are easily portable.
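For example, a ConfigMap holding a couple of non-confidential settings might look like this; the keys and values are illustrative:

```yaml
# Hypothetical ConfigMap with two non-confidential settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  DB_HOST: "db.internal"   # placeholder hostname
```

A Pod can then consume these keys as environment variables (for example via `envFrom` with a `configMapRef` to `app-config`) or mount them as files in a volume.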

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or a container image. Using a Secret means that you don’t need to include confidential data in your application code.

Because Secrets can be created independently of the Pods that use them, there is less risk of the Secret (and its data) being exposed during the workflow of creating, viewing, and editing Pods. Kubernetes, and applications that run in your cluster, can also take additional precautions with Secrets, such as avoiding writing secret data to nonvolatile storage.
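A Secret manifest looks much like a ConfigMap, except the values under `data` are base64-encoded; the credential below is only an example, not a recommendation:

```yaml
# Hypothetical Secret; note that base64 is encoding, not encryption.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64 for "password" -- example only
```

Pods reference the Secret the same way they reference a ConfigMap, as environment variables or as files mounted from a volume.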

Secrets are similar to ConfigMaps but are specifically intended to hold confidential data.

Volumes

In Kubernetes, a volume can be thought of as a directory that is accessible to the containers in a Pod. Kubernetes supports different types of volumes, and the type defines how the volume is created and what it contains.

The concept of a volume was already present in Docker; the issue was that such a volume was limited to a particular container, and as soon as the container’s life ended, the volume was lost too.

The volumes created through Kubernetes, on the other hand, are not limited to any one container: they are available to any or all of the containers deployed inside a Pod. A key advantage of Kubernetes volumes is that they support different kinds of storage, and a Pod can use several of them at the same time.
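As a simple sketch, an emptyDir volume shared by two containers in one Pod could look like this; all names and images here are placeholders:

```yaml
# Hypothetical Pod with an emptyDir volume shared by two containers.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}          # exists for as long as the Pod does
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data  # both containers see the same directory
```

An emptyDir disappears with the Pod; for data that must outlive the Pod, persistent volume types backed by node disks or shared storage are used instead.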


Kubernetes is a great tool for orchestrating containerized applications. It automates the very complex task of dynamically scaling an application in real time. The problem with K8s is that it’s a complex system in itself, which becomes a hindrance when things are not working as expected. In this article, we explained what Kubernetes basically is and what its basic components and characteristics are.

The series continues soon, and we’ll bring you some more practical examples of using Kubernetes! Stay up to date with our updates by following our social media from the links below!

LinkedIn | Facebook | Instagram



Breaking boundaries between industry and technology. waltercode.com
