Kubernetes Tutorial - Learn Kubernetes from Experts

Kubernetes is arguably the most widely used container orchestration tool. It was originally developed by Google and later made open source for the community to use. Because of the numerous features it offers, Kubernetes is used in a large share of IT infrastructures today. Since it is open source and popular around the world, Kubernetes has an extremely active community, which keeps driving innovation in the tool.

In this blog, we will discuss why Kubernetes is needed, what it is, its key features, its architecture, pods and deployments, and how to set up a Kubernetes cluster, step by step.

Check out our Kubernetes Tutorial video for beginners, now on YouTube.

Why did the need for Kubernetes arise?

If you go back in time a little to see how organizations used to run their IT infrastructures, you will see that it all started with physical servers. This is known as the traditional deployment era. The problem with this kind of deployment was that it was very costly and hardware utilization was poor. The entire setup was also highly vulnerable to attacks.

Virtualization was developed to make this process much more efficient. In this kind of deployment, multiple virtual machines, each with its own OS, run on top of your base hardware. This is called the virtualized deployment era, and it is much better optimized, as users can run multiple workloads on a single set of hardware. Soon, people realized that applications did not need the full capabilities of an entire OS; they only needed a small, isolated set of capabilities. These lightweight, isolated environments came to be known as containers.

People also realized that containers are far more useful than originally thought. Not only are they lightweight, but they also make it easier to secure an application, as it can be broken down into microservices, which improves the distribution of work among developers and limits the impact of a faulty component. Another outstanding merit of containers is that they remove the environment discrepancy between the development team and the operations team.

Now, where does Kubernetes fit in?

Imagine the IT infrastructure of a company. Let us take the example of Amazon. Think about the number of applications and services that it must be running. Now, think about the number of containers that would be required to run everything properly. Managing all of those containers by hand is a difficult task, is it not? This is where Kubernetes comes in to help. Now, let us learn Kubernetes step by step, in our comprehensive guide.

Read about the Docker Cheat Sheet in this blog by Intellipaat.

What is Kubernetes?

In a nutshell, Kubernetes is a container orchestration tool. It has a set of functionalities that allow you to manage and maintain any number of containers present in your infrastructure.

Kubernetes helps with workload management and the scheduling of containers. Created by Google, it was made open source so that the public could use it and, in turn, improve it further; the Kubernetes community is outstanding. Its compatibility with all the major cloud providers makes it an efficient solution for container management.

Go through our blog on What is Azure Kubernetes Service to learn more.

Features of Kubernetes

Kubernetes comes with many features that make it such a joy to work with. Some of the most notable features are given below.

Kubernetes Features
  • Automated scheduling: One of the greatest features of Kubernetes is that it comes with automated scheduling, which is exactly what it sounds like. A Kubernetes cluster can have any number of nodes, and when a container is launched, it has to be placed on one of them. Kubernetes decides which node the pod should run on based on constraints such as the resources it requires.
  • Self-healing capabilities: Self-healing is a feature of Kubernetes that is a dream for all developers. It reschedules and replaces containers when a node dies, and it eliminates containers that fail user-defined health checks. When it kills those containers, it also makes sure that clients are not routed to the faulty containers. If you are using deployments, it also respawns containers so that the desired number of replicas stated by the creator is always met.
  • Automated rollbacks and rollouts: This feature is most handy when it comes to updating the application that you have running. Imagine that you have created an app that has multiple pods and containers running different things. Now, just like every other app, you decide to update it. What Kubernetes does to help you here is that, without giving your app any downtime, it brings down the old instances of your application one by one and replaces each with a new one. Now, what if the recent update you created has some flaw? As soon as you realize it, you can roll back the update, and you need not worry, as Kubernetes has you covered here as well: it allows you to move back to the older version of the application without any downtime.
  • Horizontal scaling and load balancing: This is another dream feature for developers. Let us say that you work for an e-commerce company. There will be some days, such as holidays and sales, when the traffic on your website is greater than on other days. On days with increased traffic, you need more instances running so that your application can bear the load generated by the traffic. Kubernetes allows you to scale up or down with simple commands in such scenarios; moreover, it distributes the load across the running instances so that no single pod faces heavier traffic than the other replicas. A couple of the commands involved are sketched right after this list.
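As a quick illustration of the scaling and rollback features, and assuming a deployment named my-app (a hypothetical name used only for this example), the commands involved look roughly like this:

kubectl scale deployment my-app --replicas=5                             # manually scale out to 5 replicas
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80    # scale automatically based on CPU usage
kubectl rollout status deployment/my-app                                 # watch an update roll out
kubectl rollout undo deployment/my-app                                   # roll back to the previous version if the update misbehaves

The built-in Service abstraction then spreads incoming traffic across whichever replicas are currently running.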

Preparing for job interviews? Head to our most-asked Kubernetes Interview Questions and Answers.


Architecture of Kubernetes

In Kubernetes, the various subcomponents can be grouped into two main components:

  • Master node
  • Worker node

The image given below depicts the overall architectural components of Kubernetes:

Kubernetes Architecture

Master Node

The management of the cluster is the responsibility of the master node, as it is the first point of contact for almost all administrative tasks in the cluster. Depending on the setup, there can be one or more master nodes in a cluster; running more than one improves fault tolerance.

As shown in the diagram, the master node comprises different components such as Controller-manager, ETCD, Scheduler, and API Server.

  • API Server: It is the first point of contact for all the REST commands that are used to manage and manipulate the cluster.
  • Controller-manager: It is a daemon that regulates the state of the cluster by running the various non-terminating control loops of Kubernetes.
  • Scheduler: The scheduler, as its name suggests, is responsible for scheduling pods onto the worker nodes. It also keeps track of the resource utilization data for each of the worker or slave nodes.
  • ETCD: It is a distributed key-value store that is mainly used for shared configuration and service discovery.
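On a running cluster you can see these control-plane components for yourself. Here is a small sketch, assuming a kubeadm-based setup like the one built later in this tutorial, where these components run as pods in the kube-system namespace:

kubectl get pods -n kube-system    # lists kube-apiserver, kube-controller-manager, kube-scheduler, etcd, and more
kubectl cluster-info               # prints the API server endpoint the cluster is using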

Master this top container orchestration tool by enrolling in Intellipaat’s Kubernetes certification program.

Worker or Slave Node

A worker or slave node consists of all the services that are required to manage networking among containers. These services communicate with the master node and allocate resources to the scheduled containers. As shown in the architecture diagram above, a worker node has the following components:

  • Docker container
  • Kubelet
  • Kube-proxy
  • Pods

Docker container: Docker must be installed and running on every worker node in the cluster; it is what actually runs the containers of the configured pods.

Kubelet: The job of the kubelet is to get the pod specifications from the API server and to ensure that the described containers are up and running.

Kube-proxy: Kube-proxy acts as a network proxy and a load balancer for services on its worker node.

Pods: A pod can be thought of as one or more containers that logically run together on a node.
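To make the idea of a pod concrete, here is a minimal pod manifest. It is only a sketch for illustration: the pod name and labels are hypothetical, and the public nginx image is used just as an example.

apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod        # hypothetical name used only for illustration
  labels:
    app: demo
spec:
  containers:
  - name: web
    image: nginx:1.25       # any container image works here
    ports:
    - containerPort: 80

Saved as pod.yaml, it can be created with kubectl apply -f pod.yaml and inspected with kubectl get pods.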

Check out this blog on Docker to get an in-depth view of Docker containerization.

Let us now discuss Kubernetes pods in a bit more depth.

What is a Pod?

A pod is the smallest and most elementary execution unit of Kubernetes. Pods are also the simplest unit in the Kubernetes object model that you can create and deploy. Pods represent the processes that are running on the cluster.

Every pod goes through different phases that define where it is in its life cycle. A pod's phase is not a comprehensive rollup of the state of the pod or its containers; it is simply meant to summarize the pod's condition at the current point in time.

The various phases of a pod are shown in the image below:

Kubernetes Pod Phases
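The phases a pod can report are Pending, Running, Succeeded, Failed, and Unknown. You can read the current phase of a pod directly; for example, reusing the hypothetical pod name from the earlier sketch:

kubectl get pod my-first-pod -o jsonpath='{.status.phase}'
kubectl describe pod my-first-pod    # shows the phase along with detailed conditions and events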


What is a Deployment in Kubernetes?

A deployment in Kubernetes is a set of multiple identical pods. A deployment is responsible for running multiple replicas of your application; if one of the instances fails, crashes, or becomes unresponsive, the deployment replaces it. This amazing feature makes sure that at least one instance of your application is always available. The Kubernetes deployment controller manages all deployments.

To run these replicas, deployments use pod templates. A pod template specifies how the pods should look and behave, for example, which volumes they mount, their labels, their tolerations, and so on.

When you change a deployment’s pod template, new pods are created automatically one by one.
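For illustration, a minimal deployment manifest with three replicas might look like the sketch below; the deployment name, labels, and image are hypothetical placeholders, not values required by this tutorial.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # hypothetical deployment name
spec:
  replicas: 3                # desired number of identical pods
  selector:
    matchLabels:
      app: my-app
  template:                  # the pod template described above
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80

Changing anything under template (for example, the image tag) triggers a rolling update, and kubectl rollout undo deployment/my-app brings back the previous revision.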

How to set up a Kubernetes cluster?

For this installation, we will use one master node and one worker (slave) node; you can add more worker nodes if required. Some commands need to be run on both types of nodes, while others are specific to the master or the slave, so each step below states where it should be run. Now, let us begin.

Step 1: Run the following commands on both master and slave instances:

sudo su
apt-get update

Step 2: Get Docker on both master and slave instances. For that, run the following commands:

apt-get install docker.io    # install Docker
apt-get update && apt-get install -y apt-transport-https curl

Step 3: On both master and slave instances, run the following commands to get Kubernetes essentials:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update

Step 4: Install kubelet, kubeadm, and kubectl by running the following command on both master and slave instances:

apt-get install -y kubelet kubeadm kubectl

Now, it is time to create the cluster. To do that, the first step is to initialize Kubeadm on the master node.

Step 5: Initialize Kubeadm by running the below-mentioned command on the master node only:

kubeadm init --apiserver-advertise-address=<enter_your_master_private_ip_here> --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU

Step 6: Once you are done with Step 5, the output of kubeadm init will include a kubeadm join command containing a token. You need to copy this command from the master node and run it on the slave node.

Once you run the join command on your slave node, you will see a confirmation message saying that the node has joined the cluster.
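The join command printed by kubeadm init has roughly the following shape; the address, token, and hash here are placeholders, so always copy the exact command from your own output:

kubeadm join <master_private_ip>:6443 --token <your_token> \
    --discovery-token-ca-cert-hash sha256:<your_hash>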

Step 7: Now, you are at the final stage. Here, you need to exit the root shell on the master node, create a folder for the Kubernetes configuration, set the required permissions, and then list the nodes:

Ctrl + D    (to exit the root shell)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

By doing this, you should see the nodes of your cluster, but they will not be in the Ready state yet because no network plug-in has been installed.

Step 8: So, the next step is to install the network plug-in, which will enable communication between the pods. To install the Calico network plug-in, you can use the following command:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Step 9: Now, you need to check the status of everything by running the following command (Note: It might require a couple of minutes to get everything up and running):

kubectl get pods --all-namespaces
kubectl get nodes

There it is! Your first Kubernetes cluster is up and running!
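As an optional sanity check (not part of the original steps), you can deploy something small and expose it; the deployment name below is arbitrary, and nginx is used only as an example image:

kubectl create deployment hello-nginx --image=nginx
kubectl expose deployment hello-nginx --port=80 --type=NodePort
kubectl get pods,svc    # the new pod should reach the Running state

Deleting the test resources afterward with kubectl delete deployment hello-nginx and kubectl delete svc hello-nginx leaves the cluster clean.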


If you are interested in becoming a DevOps professional, you should consider joining our DevOps Certification program, designed by industry experts to equip you with everything you need to be successful in the industry. We hope you had a great time learning Kubernetes from our Kubernetes Tutorial for beginners.
