
What is Azure Kubernetes Service (AKS)?


For a head start, you can also watch the introductory Kubernetes video on our YouTube channel.

What is Kubernetes?

According to Wikipedia, Kubernetes "is an open-source container orchestration system for automating computer application deployment, scaling, and management".

Kubernetes is commonly called K8s. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It works with a range of container tools and runs containers in clusters, often with images built using Docker. Kubernetes was created by Joe Beda, Brendan Burns, and Craig McLuckie, was first announced in mid-2014, and v1.0 was released on July 21, 2015.

Kubernetes is employed because it makes organizing and scheduling applications across multiple machines much easier. It can automatically mount a storage system, carries out automated rollouts and rollbacks, and is self-healing. It supports public, private, and hybrid cloud deployments.

What is Azure Kubernetes Service (AKS)?

Azure Kubernetes Service is a fully managed service that allows you to run Kubernetes in Azure without having to manage the Kubernetes control plane yourself.

The basic features of Azure Kubernetes Services are:

  • You pay only for the agent nodes (VMs); the managed control plane is free.
  • It works with various Azure and OSS tools and services.
  • As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance.
  • AKS can scale the number of nodes automatically using the cluster autoscaler (see the sketch after this list).
  • During deployment, AKS automatically configures the Kubernetes control plane components that control and manage the worker nodes.
  • Users can monitor a cluster directly or view all clusters with Azure Monitor.
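
A sketch of enabling the cluster autoscaler on an existing cluster, assuming the KAR resource group and KARCluster cluster names used in the tutorial later in this article:

# Enable the cluster autoscaler and keep the node count between 1 and 3
az aks update --resource-group KAR --name KARCluster --enable-cluster-autoscaler --min-count 1 --max-count 3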

Users can access AKS in three different ways:

  • Through the Azure portal
  • Through the Azure CLI
  • By using templates, such as Azure Resource Manager templates

Become an expert in Azure. Enroll now in the Post Graduate Program in Azure from Belhaven University

Pros and Cons of Azure Kubernetes Service (AKS)

  • Pros / Strengths:

-> AKS has very good support for Windows Server containers.
-> Configuring the virtual network and subnet is very simple.
-> Robust command-line support.
-> Azure Active Directory integration for cluster authentication.

  • Cons / Weaknesses:

-> Being a relatively new service, many AKS features are still in preview.
-> The virtual machines cannot be customized directly; there is no way to provide a cloud-init or user-data script.
-> The server type cannot be changed once it has been deployed.
-> Node updates are not applied automatically.
-> Nodes do not recover automatically after failure.

Let’s learn more about Azure! Check out our Azure Administrator Course!


The following concepts will give you a head start in understanding Azure Kubernetes Service:

Control Plane

The control plane is automatically created and configured when we create an AKS cluster, and it is provided free of charge; we pay only for the nodes attached to the cluster. The control plane exists only in the region where it is created. We can review control plane logs through Azure Monitor logs to troubleshoot possible issues.
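
As a rough sketch, control plane logs such as kube-apiserver can be routed to a Log Analytics workspace by creating a diagnostic setting on the cluster; the KAR and KARCluster names are those used later in this tutorial and the workspace ID is a placeholder:

# Send kube-apiserver logs from the managed control plane to Azure Monitor logs
# (replace the workspace value with the resource ID of your Log Analytics workspace)
az monitor diagnostic-settings create --name aks-control-plane-logs --resource $(az aks show --resource-group KAR --name KARCluster --query id --output tsv) --workspace <log-analytics-workspace-resource-id> --logs '[{"category": "kube-apiserver", "enabled": true}]'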

Resource Reservations

AKS itself uses node resources, which creates a discrepancy between a node's total resources and the resources that remain allocatable for workloads. Suppose the node we are using is named node1; we can find the allocatable resources for that node with:

kubectl describe node node1

The larger the node, the larger the resource reservation, because a larger node typically hosts more user-deployed pods that need to be managed.

The resources reserved are of two types:

  • CPU

-> Reserved CPU depends on the node type and cluster configuration.

  • Memory

-> Reserved memory is the sum of two values:

  1. kubelet daemon:
    • This is installed on Kubernetes agent nodes to manage container creation and termination.
    • A node must have at least 750 Mi of memory allocatable at all times.
    • If available memory drops below this threshold, kubelet terminates a running pod to free up memory on the host machine.
  2. Regressive rate of memory reservations:
    • 25% of the first 4 GB of memory
    • 20% of the next 4 GB of memory (till 8 GB)
    • 10% of the next 8 GB of memory (till 16 GB)
    • 6% of the next 112 GB of memory (till 128 GB)
    • 2% of any memory above 128 GB
  • Allocation rules

-> These reservations keep agent nodes healthy, but they mean a node reports less allocatable memory and CPU than it would if it were not part of a Kubernetes cluster.
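
To see the effect of these reservations, compare a node's total capacity with its allocatable resources (node1 is the illustrative node name used above):

# The difference between capacity and allocatable is what AKS reserves on the node
kubectl get node node1 -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'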

Become a Kubernetes Cluster Manager by enrolling in the Kubernetes Cluster Management Certification.


Nodes and Node Pools

We need Kubernetes nodes to run the applications and supporting services. An AKS cluster has at least one node, which runs the Kubernetes node components and the container runtime. It is important to scale out the number of nodes in the AKS cluster to meet demand.

When we create an AKS cluster or scale out the number of nodes, the agent nodes are billed as regular virtual machines, so any VM size discounts are applied automatically. If AKS does not fit the current needs, aks-engine can be used instead to configure and deploy a self-managed Kubernetes cluster on Azure.
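
For example, assuming the KAR resource group and the KARCluster cluster created later in this tutorial, the node count can be scaled out with a single Azure CLI command:

# Scale the default node pool to three nodes
az aks scale --resource-group KAR --name KARCluster --node-count 3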

Node Pools

Nodes with the same configuration are grouped together into node pools. A Kubernetes cluster always contains one or more node pools. The initial number of nodes and their size are defined when the cluster is created, which creates a default node pool containing the VMs that run your agent nodes.

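A sketch of adding a second node pool to an existing cluster (the pool name userpool1 is illustrative; KAR and KARCluster are the names used in the tutorial below):

# Add a new node pool with two nodes to the cluster
az aks nodepool add --resource-group KAR --cluster-name KARCluster --name userpool1 --node-count 2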

Node Selectors

When an AKS cluster contains multiple node pools, we may need to tell the Kubernetes scheduler which node pool to use. With node selectors we can define parameters, such as the node OS, that control where a pod is scheduled.
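
A minimal sketch of a node selector (the pod name and nginx image are illustrative; the tutorial manifest later in this article uses the same idea with the older beta.kubernetes.io/os label):

# Schedule a pod onto Linux nodes only
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod
spec:
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
  - name: web
    image: nginx
EOF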

Explore the top Kubernetes Interview Questions and ace your next interview to get your dream job!

Pod

A pod represents a single instance of your application and is used to run that instance. Pods and containers usually have a 1:1 mapping with each other, although a pod can contain more than one container.

Resource requests can be defined when creating a pod to request a certain amount of CPU or memory. Maximum resource limits can also be defined so that a pod cannot consume too much compute from the underlying node. Pods are only logical resources; the application workloads actually run in the containers. Pods are deployed and managed by Kubernetes controllers.
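
Requests and limits are usually set in the pod or deployment manifest, as the azure-vote manifest later in this tutorial does; purely as a sketch, they can also be applied to an existing deployment (azure-vote-front here is the deployment created in the tutorial below):

# Set resource requests and limits on an existing deployment
kubectl set resources deployment azure-vote-front --requests=cpu=100m,memory=128Mi --limits=cpu=250m,memory=256Mi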


Networking

Let us look into the networking part of Azure Kubernetes Service:

Services

  • ClusterIP
    • Creates an internal IP address for use inside the cluster.
  • NodePort
    • Creates a port mapping on the underlying node so the application can be reached through that port.
  • LoadBalancer
    • Connects the requested pods to an Azure load balancer; it creates a load balancer resource and configures an external IP address.
  • ExternalName
    • Helps in creating a specific DNS entry, which further helps in easier application access.
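
The service type is chosen in the service manifest, as in the azure-vote-front service defined later in this tutorial; purely as an illustration, a deployment can also be exposed from the command line:

# Expose a deployment through an Azure load balancer with a public IP
kubectl expose deployment azure-vote-front --type=LoadBalancer --port=80 --target-port=80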

Azure virtual networks

  1. Kubenet networking
    • The network resources are typically created and configured as the AKS cluster is deployed.
  2. Azure Container Networking Interface (CNI) networking
    • The AKS cluster is connected to existing virtual network resources and configurations.
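
The network plugin is chosen when the cluster is created; a hedged sketch of both options, where the subnet resource ID is a placeholder:

# Kubenet (default): AKS creates and configures the network resources for you
az aks create --resource-group KAR --name KARCluster --network-plugin kubenet --generate-ssh-keys

# Azure CNI: attach the cluster to an existing subnet in your virtual network
az aks create --resource-group KAR --name KARCluster --network-plugin azure --vnet-subnet-id <subnet-resource-id> --generate-ssh-keys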


Azure CNI networking

With Azure CNI, every pod receives an IP address from the subnet and can be accessed directly. Every node has a configuration parameter for the maximum number of pods it supports. Unlike kubenet, traffic to endpoints in the same virtual network is not NAT'd to the node's primary IP.
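
The configured maximum can be checked on an existing cluster; as a sketch (it is normally set with --max-pods when the cluster or node pool is created):

# Show the maximum pods per node for each node pool in the cluster
az aks show --resource-group KAR --name KARCluster --query "agentPoolProfiles[].maxPods" --output tsv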

Network security groups

An Azure network security group filters traffic for VMs like the AKS nodes. We do not need to manually configure network security group rules. The Azure platform creates or updates the appropriate rules. We can also use network policies to automatically apply traffic rules to pods.

Network policies

Network policy is an AKS feature that lets you control the traffic flow between pods; traffic can be allowed or denied based on the required settings. For example, backend applications can be exposed only to the required frontend services, and database components can be made accessible only to the application tiers that connect to them.
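
A minimal sketch of such a policy (the app labels are illustrative, and it assumes network policy support was enabled when the cluster was created, for example with --network-policy azure):

# Allow ingress to backend pods only from frontend pods
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF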

Learn and gain knowledge,
But do not forget to implement it,
Because learning might be fun,
But what’s the use when you don’t know how to use it?
– Anonymous

Azure Kubernetes Service tutorial

It is very important to put into practice everything that we learn. Let us proceed with some Hands-On:


Before we Begin

  1. If you do not have an Azure subscription, you can create a free account.
  2. You can use the bash environment in the Azure Cloud Shell or if you want you can also install the Azure CLI to run the commands.
  3. Run az version to check the version of the CLI.
  4. This Hands-On requires version 2.0.64 or above.
  5. To install the latest version, run az upgrade.
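
If you are working from a local installation rather than the Cloud Shell, a typical setup looks like this:

# Sign in to Azure (not needed in the Cloud Shell)
az login
# Check the CLI version; this Hands-On needs 2.0.64 or above
az version
# Upgrade to the latest version if required
az upgrade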

Enroll in Azure Training in Bangalore to get Microsoft Azure certifications.

Let’s Start

1. Create a resource group

az group create --name KAR --location eastus

KAR is the name given to the resource group and eastus is the location. You can choose your own resource group name, but the location must be one of the available Azure regions.

The command returns a JSON description of the new resource group.

2. Enable Cluster Monitoring

Verify if Microsoft.OperationsManagement and Microsoft.OperationalInsights are registered on your subscription:
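
A likely form of this check with the Azure CLI:

az provider show --namespace Microsoft.OperationsManagement --query registrationState --output tsv
az provider show --namespace Microsoft.OperationalInsights --query registrationState --output tsv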


If they are not registered, register them:
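
The corresponding registration commands look like this:

az provider register --namespace Microsoft.OperationsManagement
az provider register --namespace Microsoft.OperationalInsights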


3. Create AKS Cluster

az aks create --resource-group KAR --name KARCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys

KARCluster is the cluster name.


Wait a few minutes; when the command completes, a long JSON description of the cluster is printed on the screen.

4. Connect to the Cluster

First install kubectl if it is not already available, then get the credentials required to connect to the cluster.
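
Assuming the KAR resource group and the KARCluster cluster created above, the commands look like this:

# Install kubectl (the Azure Cloud Shell already includes it)
az aks install-cli
# Download the credentials and configure kubectl to use them
az aks get-credentials --resource-group KAR --name KARCluster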

Get the list of available nodes:

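With the credentials merged into your kubeconfig, a standard check is:

kubectl get nodes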

Wait for the node status to change to Ready before moving to the next step.

5. Run the Application

Create a file named azure-vote.yaml. I have used nano; you can also use vi or code.


Paste the following code into your file and save it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

Deploy the application using kubectl:
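
Assuming the manifest was saved as azure-vote.yaml in the current directory:

kubectl apply -f azure-vote.yaml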

kubectl prints four lines confirming that the two deployments and the two services were created.

6. Test the Application

Monitor the progress:

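The service to watch is the azure-vote-front LoadBalancer defined in the manifest; a typical command is:

kubectl get service azure-vote-front --watch
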
Wait until the EXTERNAL-IP changes from pending to an actual IP address, then press Ctrl+C.

Open the external IP address in your browser to view the running application.

The browser should display the Azure Voting App page.

You can check the cluster nodes’ and pods’ health metrics captured in the Azure Portal.

7. Delete the Cluster

To avoid unnecessary Azure charges, remove the resources that are no longer needed.

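Deleting the KAR resource group removes the cluster and everything created alongside it:

az group delete --name KAR --yes --no-wait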

Prepare for the Azure interview and crack it like a pro with these Azure Interview Questions.

Conclusion

Businesses are quickly shifting from on-premises infrastructure to the cloud while also adopting modern, cloud-native applications. Kubernetes is an open-source solution for deploying and managing such applications.

Azure Kubernetes Service is a robust, cost-effective container orchestration service for running containers in the cloud. We hope this tutorial helped you learn Azure Kubernetes Service step by step. Do let us know your thoughts in the comment section below.

Learn from Intellipaat’s Azure Community


About the Author

Application Architect

Rupinder is a certified IT expert in AWS and Azure, working as a DevOps Architect and specializing in cloud and infrastructure. He designs and builds complete IT environments for critical applications in banking, insurance, and finance.