Kubernetes Tutorial
Updated on 15th Jul, 21

Kubernetes is probably the most widely used container orchestration tool. It was developed by Google and made open source for the community to use. Thanks to the enormous number of features it offers, Kubernetes is used in the majority of IT infrastructures today. Because it is open source and so popular around the world, Kubernetes has an extremely active community, which keeps driving innovation in the tool.

In this Kubernetes Tutorial blog, we will discuss, step by step, why the need for Kubernetes arose, what Kubernetes is, its features, its architecture, pods, deployments, and how to set up a Kubernetes cluster.


Kubernetes Tutorial: Why did the need for Kubernetes arise?

If you go back a little in time to see how people used to run their IT infrastructure, you will find that it all started with applications deployed directly on physical servers. This era of deployment is called the traditional deployment era. The problem with this kind of deployment was that it was very costly, hardware utilization was poorly optimized, and the entire setup was highly vulnerable to attacks.

To make the process more efficient, people came up with virtualization. In this kind of deployment, multiple virtual machines, each with its own OS, run on top of a single physical host. This era is called the virtual deployment era, and it is much more optimized because a single set of hardware can run multiple workloads. Soon, people realized that most applications did not need the capabilities of a full OS (which is very heavy); they needed only a small subset of them. Packaging an application with just that subset is what came to be known as a container.

People also realized that containers are far more useful than they first appear. Not only are they lightweight, but they also make applications more secure: an application can be broken down into microservices, which improves the distribution of workload for developers and isolates the application's parts, so penetrating many separate containers is a whole new ball game for an attacker. Another outstanding merit of containers is that they remove the environment discrepancy between the development team and the operations team.

Now, where does Kubernetes fit in?

Imagine the IT infrastructure of a company. Let’s take the example of Amazon. Think about the number of things and services it must be running! Now think about the number of containers that would be required to run everything properly! It is a difficult task, isn’t it? This is where Kubernetes comes in to help. Now, let’s learn Kubernetes step-by-step in our comprehensive guide.

 

Kubernetes Tutorial: What is Kubernetes?

Kubernetes is, in a nutshell, a container orchestration tool. It provides a set of functionalities that allow you to manage and maintain any number of containers in your infrastructure.

Kubernetes helps with workload management and with scheduling work onto containers. Created by Google, it was made open source so that the public could use it and, in turn, improve it further. The Kubernetes community is outstanding, and its compatibility with all the major cloud providers in the market makes it an efficient solution for container management.

Go through our blog on What is Azure Kubernetes Service to learn more.


Kubernetes Tutorial: Features of Kubernetes

Kubernetes is built with many features, which make it a joy to work with. Some of the most notable ones are given below.

Kubernetes Features
  • Automated scheduling: One of the greatest features of Kubernetes is automated scheduling, which is exactly what it sounds like. A Kubernetes cluster can have any number of nodes, and when a container is launched, it has to be placed on one of them. Kubernetes decides which node the pod should be attached to, based on constraints such as the resources it requires.
  • Self-healing capabilities: Kubernetes has another feature that is a dream for all developers: self-healing. This ability reschedules and replaces containers when nodes die. It also kills containers that fail user-defined health checks and, while doing so, makes sure that clients never see the faulty containers. If you are using deployments, it also respawns containers to meet the desired number of replicas stated by the creator.
  • Automated rollbacks and rollouts: This feature comes in most handy when you update a running application. Imagine that you have created an app with multiple pods and containers running different things. Now, just like with every other app, you decide to update it. What Kubernetes does here is replace every old instance of your application with a new one, without giving your app any downtime. And what if the recent update has a flaw? As soon as you realize it, you can roll back the update, and you need not worry because Kubernetes has you covered there as well: it lets you move back to the previous version of your application, again without any downtime.
  • Load balancing and horizontal scaling: This is another dream feature for developers. Let's say you work for an e-commerce company. As you can imagine, on some days the traffic on your website will be greater than on others (festival season, sale days, etc.). On those high-traffic days, you need more instances running so that your application can handle the load. Kubernetes allows you to scale up or down, using simple commands, for such scenarios, and moreover, it distributes the load across the running instances so that none of your pods faces heavier traffic than the other replicas.
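The scaling described above boils down to a couple of kubectl commands. Here is a sketch; the deployment name web is only an example, and the commands assume such a deployment already exists in your cluster:

```shell
# Manually scale the hypothetical "web" deployment to 5 replicas for a sale day
kubectl scale deployment web --replicas=5

# Or let Kubernetes scale it automatically between 2 and 10 replicas,
# targeting 70% average CPU utilization
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70
```

When the festival traffic dies down, the same scale command with a smaller --replicas value brings the deployment back down.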

Preparing for job interviews? Head to our most asked Kubernetes Interview Questions and Answers.

 

Kubernetes Tutorial: Architecture of Kubernetes

In Kubernetes, the various sub-components can be grouped under two main components:

  • Master nodes
  • Worker nodes

Each of these has separate components, which together build up the entire architecture. Both are discussed further in this blog, and the image below depicts the overall architectural components of Kubernetes.

Kubernetes Architecture

Master Node

The master node is responsible for managing the cluster, as it is the first point of contact for almost all administrative tasks. Depending on the setup, a cluster may have one or more master nodes; running more than one improves fault tolerance.

As shown in the diagram, a master node comprises different components such as Controller-manager, ETCD, Scheduler, and API Server.

  • API Server: It is the first point of contact for the entirety of the REST commands, which are used to manage and manipulate the cluster.
  • Controller-manager: It is a daemon responsible for regulating the state of the cluster in Kubernetes; it runs the various non-terminating control loops.
  • Scheduler: The scheduler, as its name suggests, is responsible for scheduling tasks to the worker nodes. It also keeps the resource utilization data for each of the slave nodes.
  • ETCD: It is majorly employed for shared configuration, as well as for service discovery. It is basically a distributed key-value store.
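On a kubeadm-based cluster, such as the one set up later in this tutorial, these control-plane components run as pods in the kube-system namespace, so you can inspect them with kubectl. A quick sketch (the exact pod names vary by setup):

```shell
# List the control-plane components (API server, controller-manager,
# scheduler, etcd) running in the kube-system namespace
kubectl get pods -n kube-system

# Every kubectl command is itself a REST call to the API server;
# raising the verbosity shows the underlying HTTP requests
kubectl get nodes -v=6
```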

Master this top container orchestration tool by enrolling for Intellipaat’s Kubernetes certification program.

Worker/Slave Nodes

Worker or slave nodes run all the services required to manage networking among containers, communicate with the master node, and allocate resources to the scheduled containers. As shown in the architecture diagram above, worker nodes have the following components:

  • Docker container
  • Kubelet
  • Kube-proxy
  • Pods

Docker container: Docker must be installed and running on each worker node in the cluster; it runs the containers that make up the configured pods.

Kubelet: The job of the kubelet is to get the configuration of pods from the API server and to ensure that the containers described there are up and running.

Kube-proxy: Kube-proxy behaves like a network proxy and as a load balancer for a service on any single worker node.

Pods: A pod can be thought of as one or more containers that logically run together on a node.
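To make pods concrete, here is a minimal sketch that creates a single-container pod; the pod name nginx-pod and the image are just examples:

```shell
# Define and create a minimal one-container pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.21
EOF

# Verify that the pod was created
kubectl get pod nginx-pod
```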

Check out this blog on Docker to get an in-depth view of Docker containerization.

Let’s discuss and learn Kubernetes pods in a bit more depth now.

 

Kubernetes Tutorial: What is a pod?

A pod is the smallest and the most elementary execution unit of Kubernetes. Pods are also the simplest unit in the Kubernetes object model, which you can create and deploy. It represents the processes that are running on the cluster.

Every pod goes through phases that define where it lies in its life cycle. A pod's phase is not a comprehensive rollup of the state of the pod or its containers; it is only meant to depict the pod's condition at the current timestamp.

Various phases of a pod are shown in the image below:

Kubernetes Pod Phases
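You can query a pod's current phase directly from its status. A sketch, assuming a pod named nginx-pod exists in the cluster:

```shell
# Print only the phase field from the pod's status:
# Pending, Running, Succeeded, Failed, or Unknown
kubectl get pod nginx-pod -o jsonpath='{.status.phase}'
```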
 

Kubernetes Tutorial: What is a deployment in Kubernetes?

Deployments in Kubernetes are sets of multiple identical pods. A deployment is responsible for running multiple replicas of your application; in the event that one of the instances fails, crashes, or becomes unresponsive, the deployment replaces it. This amazing feature makes sure that an instance of your application is always available. The Kubernetes deployment controller manages all deployments.

To run these replicas, deployments use pod templates. A pod template specifies how the pods should look and behave, e.g., which volumes a pod mounts, its labels, tolerations, etc.

When you change a deployment’s pod template, new pods are created automatically one by one.
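The rolling update and rollback behavior described above maps onto a few kubectl commands. A sketch, assuming a deployment named web whose pod template has a container named nginx (both names are examples):

```shell
# Trigger a rolling update by changing the image in the pod template
kubectl set image deployment/web nginx=nginx:1.22

# Watch the old pods being replaced one by one, with no downtime
kubectl rollout status deployment/web

# If the new version turns out to be flawed, roll back to the previous revision
kubectl rollout undo deployment/web
```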


Kubernetes Tutorial: How to set up a Kubernetes cluster?

For this installation, you need a master node and one worker node (you can add more if required), and certain commands must be run on both. Now, let's begin.

Step 1: Run the following commands on both the master and slave instances:

sudo su
apt-get update

Step 2: Now, let’s get Docker on both master and slave. For that, run the following commands:

apt-get install docker.io   # install Docker
apt-get update && apt-get install -y apt-transport-https curl

Step 3: On both master and slave, run the following commands to get Kubernetes essentials:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update

Step 4: Now, install kubeadm by running the following command on both master and slave:

apt-get install -y kubelet kubeadm kubectl

Now, it is time for creating the cluster. To do that, the first step is initializing kubeadm on the master node.

Step 5: Initialize kubeadm by running the below command on the master node only:

kubeadm init --apiserver-advertise-address=<your_master_private_ip> --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=NumCPU

Step 6: Once Step 5 completes, kubeadm prints a join command containing a token. Copy that command from your master and run it on your slave node.

Once you run the command on your slave node, you will see a message confirming that the node has joined the cluster.
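The join command printed by kubeadm init follows the general shape below; the IP address, token, and hash here are placeholders, so substitute the actual values from your master's output:

```shell
# Run on the slave node, as root, using the values kubeadm init printed
kubeadm join 10.0.0.1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash-from-master-output>
```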

Step 7: Now, you are in the final stage. Here, you need to exit the root shell on the master, create a folder for the Kubernetes configuration, and set the following permissions:

Ctrl + D   (to exit the root shell)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

With this, you should see the nodes of your cluster, but they will be in the NotReady state because the network plugin has not been installed yet.

Step 8: So, the next step is to install the network plugin, which will allow communication. For installing the network plugin, you can use the following command:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Step 9: Now, check the status of everything by running the following commands (Note: it might take a couple of minutes for everything to come up and run):

kubectl get pods --all-namespaces
kubectl get nodes

There it is! Your first Kubernetes cluster is up and running!

If you are interested in becoming a DevOps professional, consider joining our DevOps Certification program, designed by industry experts to equip you with everything you need to succeed in the industry. We hope you had a great time learning Kubernetes from our Kubernetes tutorial for beginners.

