Top 70+ Kubernetes Interview Questions and Answers (2026)

Kubernetes sits at the center of the 2026 DevOps and cloud-native landscape. Hiring managers have moved past basic Pod definitions and now look for Site Reliability Engineers (SREs) who can handle cluster security, service meshes, dynamic auto-scaling, and production debugging. Master these top 70+ Kubernetes interview questions and FAANG-level troubleshooting scenarios to ace your next DevOps technical round.

Q1. What is the difference between Kubernetes and Docker Swarm? [Asked in Amazon]

Both are container orchestration tools, but they cater to different scales and complexities. Docker Swarm is native to Docker, making it easy to set up but limited for enterprise use. Kubernetes offers robust auto-healing and scaling for massive microservice architectures.

| Feature | Kubernetes (K8s) | Docker Swarm |
|---|---|---|
| Setup & Complexity | High learning curve, complex setup. | Fast setup, easy to learn. |
| Scalability | Extremely high (ideal for massive clusters). | Good, but struggles at high scale. |
| Auto-scaling | Built-in Horizontal Pod Autoscaler (HPA). | Not natively supported. |
| Load Balancing | Requires manual Service/Ingress configuration. | Built-in automatic load balancing. |

Q2. What is a pod in Kubernetes?

A Pod is the smallest, most basic deployable computing unit in Kubernetes. While Docker manages individual containers, Kubernetes manages Pods.

  • Structure: A Pod encapsulates either a single container (most common) or multiple tightly coupled containers (like a main app and a logging “sidecar”).
  • Shared Context: All containers within the exact same Pod share the same IP address, network namespace, and storage volumes, allowing them to communicate via localhost.
  • Ephemeral Nature: Pods are mortal and disposable. If a node fails, the Pod dies, and Kubernetes schedules a new replica to replace it.
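The points above can be tied together with a minimal Pod manifest. This is an illustrative sketch; the names and image are placeholders, and in practice you rarely create bare Pods because controllers such as Deployments manage them for you.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app          # hypothetical name
  labels:
    app: my-app
spec:
  containers:
  - name: web
    image: nginx:1.25   # any container image works here
    ports:
    - containerPort: 80
```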

Q3. What is a Kubernetes cluster?

A Kubernetes cluster is a group of nodes that run containerized applications across various environments and machines—cloud-based, physical, virtual, and on-premises. It makes it easy to develop, manage, and move applications.


Module 1: Kubernetes Fundamentals (Core Concepts)

Q4. What are the benefits of Kubernetes?

Kubernetes provides immense value for managing containerized applications at an enterprise scale. Its primary advantages include:

  • Self-Healing: Automatically restarts failed containers, replaces them, and kills containers that don’t respond to health checks.
  • Horizontal Scaling: Instantly scales applications up or down based on CPU usage or custom metrics (HPA).
  • Storage Orchestration: Automatically mounts your chosen storage system, whether local, cloud provider (AWS/GCP), or network-attached storage.

Q5. What is Kubernetes used for?

Kubernetes is used to automate the manual operations involved in deploying, managing, and scaling containerized applications. It tracks the containers deployed into the cloud, restarts failed or orphaned ones, shuts down unused ones, and automatically provisions resources such as storage, memory, and CPU when required.

Q6. How does Kubernetes work?

Kubernetes operates on a declarative model managed by a Control Plane and Worker Nodes.

You define the desired state of your application (e.g., “I need 3 replicas of Nginx”) using a YAML file and submit it to the API server. The Scheduler then assigns these Pods to optimal Worker Nodes. The Kubelet agent on each node continuously ensures the containers match your desired state.

If a node crashes, the Control Plane detects the drift and automatically spins up new Pods on healthy nodes to compensate.
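The "I need 3 replicas of Nginx" desired state described above can be written as a manifest like the following sketch (names are placeholders; apply it with kubectl apply -f):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3            # the desired state the Control Plane maintains
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```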

Q7. What is orchestration in software?

Application orchestration in software means integrating two or more applications and automating the arrangement, coordination, and management of the software involved. The goal of any orchestration process is to streamline and optimize frequent, repeatable processes.

Q8. What is a Kubernetes namespace?

A Kubernetes namespace divides cluster resources between multiple users or teams. It is used in environments where many users, often spread across geographically distant locations, work on multiple projects within the same cluster.
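As a sketch, a namespace is created declaratively and then targeted with the -n flag (team-a is a placeholder name):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
# Usage: kubectl apply -f namespace.yaml
#        kubectl get pods -n team-a
```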

Q9. What are federated clusters?

Federated clusters (often managed via KubeFed) allow you to coordinate and manage multiple distinct Kubernetes clusters as a single, unified entity from one API endpoint.

Q10. What is a node in Kubernetes?

A node in Kubernetes is a worker machine, historically also known as a minion. A node can be a physical or a virtual machine. Each node runs the services needed to host Pods and is managed by the control plane components. These node services include the kubelet, kube-proxy, and the container runtime.

Q11. What is a container cluster?

A container cluster lets us place and manage containers in a dynamic setup. It can be considered a set of nodes (for example, Compute Engine instances in GKE). In managed offerings such as GKE, the Kubernetes API server does not run on the cluster's worker nodes; instead, the managed control plane hosts it.

Q11. What is Minikube?

Minikube is a lightweight, open-source tool that allows you to run a single-node Kubernetes cluster locally on your personal machine (Windows, macOS, or Linux).

It is the ultimate sandbox environment for developers. Instead of paying for expensive cloud infrastructure (like AWS EKS or Azure AKS), Minikube runs the control plane and the worker node within a local Virtual Machine (VM) or Docker container.

Module 2: Architecture and Control Plane

Q12. Explain the Kubernetes architecture.

Kubernetes features a centralized control plane managing multiple worker nodes.

  • User Access: Users interact through a User Interface (UI) or Kubectl CLI, routing all commands to the API Server.
  • Control Components: This central block consists of the API Server, Scheduler, Controller-Manager, and etcd data store.
  • Worker Nodes: These machines execute the actual workloads. Each node runs a container runtime (such as containerd), a kubelet, and a kube-proxy. Inside these nodes are Pods, which encapsulate the individual Containers.

Q13. What are the components of a Kubernetes Master?

The components of the Kubernetes Master include the API server, the controller manager, the Scheduler, and the etcd components. The Kubernetes Master components are responsible for running and managing the Kubernetes cluster.

Q14. What is the role of the API Server in Kubernetes?

The API Server acts as the central communication point in Kubernetes. It processes requests from users and internal components, validates them, and updates etcd.

Q15. What is etcd in Kubernetes, and why is it important?

etcd is a key-value store that Kubernetes uses to store all cluster data. It keeps track of the cluster’s state, configurations, and deployed objects. If etcd fails, Kubernetes may lose critical data.

Q16. Where is the Kubernetes cluster data stored?

The primary data store of Kubernetes is etcd, which is responsible for storing all Kubernetes cluster data.

Q17. What is a kubelet?

The kubelet is the lowest-level agent in Kubernetes, responsible for making individual machines run their assigned workloads. Given a set of PodSpecs, the kubelet's sole purpose is to ensure that the containers described in them are running and healthy.

Q18. What does a kube-scheduler do? 

The kube-scheduler acts as the default matchmaking engine for your Kubernetes cluster. Its primary job is to watch for newly created Pods with no assigned Node and select the optimal machine for them to run on.

It makes this decision through a two-step process:

  • Filtering: It eliminates Nodes that don’t meet the Pod’s specific requirements (e.g., insufficient CPU/Memory, missing hardware like GPUs, or matching Taints).
  • Scoring: It ranks the remaining healthy Nodes based on rules like Pod affinity/anti-affinity and resource utilization, assigning the Pod to the highest-scoring Node.
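The filtering step works off fields like these in the Pod spec. This is an illustrative sketch; the gpu label and image name are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  nodeSelector:
    gpu: "true"                # filters out Nodes without this label
  containers:
  - name: worker
    image: my-ml-image:latest  # hypothetical image
    resources:
      requests:
        cpu: "2"               # Nodes with less free CPU are filtered out
        memory: 4Gi
```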

Q19. How to write a Kubernetes scheduler?

While the default kube-scheduler works for most, you can write a Custom Scheduler for highly specialized workloads.

To implement one:

  • Write a program (often in Go) that watches the Kubernetes API for newly created Pods where the schedulerName matches your custom name.
  • Program your custom logic to find a suitable Node based on your unique business rules (e.g., specific hardware routing, compliance restrictions, or cost-optimization).
  • Once a Node is selected, your scheduler sends a Binding object back to the API Server, which then commands the Kubelet to start the Pod.
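A Pod opts into a custom scheduler via the schedulerName field. In this sketch, my-custom-scheduler is a placeholder name that your scheduler's watch loop would match; the default kube-scheduler ignores such Pods:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: special-workload
spec:
  schedulerName: my-custom-scheduler   # matched by your custom scheduler
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
```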

Q20. What is the use of kube-controller-manager?

The kube-controller-manager is a daemon that embeds the core control loops (controllers) shipped with Kubernetes. Each controller watches the cluster state through the API server and, in a non-terminating loop, works to move the current state toward the desired state.

Q21. What is kube-proxy?

kube-proxy runs on each node. It maintains the network rules that implement the Service abstraction, performing simple TCP, UDP, and SCTP stream forwarding or round-robin forwarding. It reflects the Services defined in the Kubernetes API on every node.

Q22. What is a Heapster?

Heapster was a tool for container cluster monitoring and cluster-wide event data aggregation, with native Kubernetes support. Note that Heapster is deprecated; it has been replaced by the Metrics Server (for resource metrics) and tools such as Prometheus (for full monitoring).

Q23. What is GKE?

GKE is Google Kubernetes Engine, Google's managed service for orchestrating containerized applications. It lets us deploy, manage, and scale container clusters within the Google Cloud.

Q24. What is the difference between containerd and Docker in the context of modern Kubernetes? [2026 Trend]

As of v1.24 (released in 2022), Kubernetes removed the dockershim component, dropping Docker Engine as a supported container runtime.

  • Docker is a complete, heavy software stack that includes a CLI, API, and image-building tools. It was not originally designed specifically for Kubernetes orchestration.
  • containerd is the lightweight, core runtime engine that actually runs the containers (which Docker itself uses under the hood).

Modern Kubernetes directly communicates with containerd (or CRI-O) via the Container Runtime Interface (CRI). This eliminates the heavy middleman (“dockershim”), resulting in lower node resource consumption, faster pod startup times, and enhanced cluster security.

Q25. Explain the role of the Cloud Controller Manager (CCM). 

The Cloud Controller Manager (CCM) is the bridge that links your Kubernetes cluster to a specific cloud provider’s API (like AWS, Azure, or GCP).

It decouples the core Kubernetes code from cloud-specific dependencies, allowing providers to release updates independently. The CCM handles:

  • Node Controller: Checks the cloud provider to determine if a Node has been deleted in the cloud after it stops responding.
  • Route Controller: Sets up underlying network routes in the cloud infrastructure.
  • Service Controller: Automatically provisions and destroys cloud load balancers when you create a Service of type LoadBalancer.

Q26. What happens to the cluster if the etcd database crashes? How do you recover it? [Asked in Red Hat]

etcd is the cluster’s brain. If it completely crashes, the control plane effectively freezes: nothing about the cluster can be read or changed through the API.

  • The Impact: Existing Pods and applications will continue running without interruption. However, you cannot make any changes. You cannot deploy new Pods, scale existing ones, or update configurations, because the API Server has nowhere to read or write state.
  • The Recovery: You must restore the database from a backup. You use the etcdctl snapshot restore command, pointing it to your latest .db backup file, and then restart the kube-apiserver and etcd pods to resync the cluster.

Module 3: Commands, YAML, & Workloads

Q27. What is Kubectl?

Kubectl is a Kubernetes command-line tool that is used for deploying and managing applications on Kubernetes. Kubectl is especially useful for inspecting the cluster resources and for creating, updating, and deleting the components.

Q28. What is a Kubernetes context?

A Kubernetes context is a group of access parameters that has a cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl, and all kubectl commands run against that cluster.

Q29. What is a Kubernetes deployment?

A Kubernetes deployment provides a declarative way to deploy and manage pods and replicas. Key characteristics of a deployment:

  • Specifies the desired state – number of replica pods to deploy
  • The deployment controller handles scaling up/down and rolling updates of pods
  • Supports rollback to previous versions
  • Maintains revision history of deployments
  • Offers availability and scaling guarantees for pods
  • Used in conjunction with pods, replicas, and replication controllers
  • Allows defining update strategies like rolling updates or blue-green deployments

Q30. What is the command to create a new deployment in Kubernetes?

To imperatively create a new deployment in Kubernetes, you use the kubectl create deployment command. This is the fastest way to get an application running without writing a full YAML manifest from scratch.

kubectl create deployment my-deployment --image=nginx:1.16 --replicas=3

In this command, --image specifies the exact container image to pull from your container registry, and --replicas tells the Control Plane how many identical Pods to spin up. For production, however, it is highly recommended to use the declarative approach (kubectl apply -f deployment.yaml) for better version control.

Q31. How can you list all deployments in the current namespace?

To view all active deployments within your currently active namespace, you utilize the basic get command.

kubectl get deployments

This command outputs a highly scannable table detailing critical metrics. You will see the NAME of the deployment, the READY state (showing how many replicas are currently available versus desired), the UP-TO-DATE count (replicas matching the latest desired state), and the AVAILABLE count. If you need to search across all namespaces simultaneously, you can append the --all-namespaces or -A flag to the command.

Q32. How do you check the status of a deployment rollout in Kubernetes?

When you update a deployment (like changing the image version), Kubernetes performs a rolling update. To monitor this progression in real-time, use the rollout status command:

kubectl rollout status deployment/my-deployment

This command is crucial for CI/CD pipelines. It blocks the terminal and streams the live status, showing you exactly how many old replicas have been terminated and how many new ones are online. If the rollout gets stuck (e.g., due to a CrashLoopBackOff error), you can easily halt it and use kubectl rollout undo to revert.

Q33. How can you update the image of a container in a Kubernetes deployment using kubectl?

You can update a running container’s image seamlessly using the imperative kubectl set image command. This triggers a Zero-Downtime Rolling Update automatically.

kubectl set image deployment/my-deployment nginx=nginx:1.17

Alternatively, you can edit the live configuration directly in your terminal using your default text editor (like Vim) by running:

kubectl edit deployment my-deployment

While these imperative commands are excellent for quick troubleshooting or hotfixes, modern DevOps best practices dictate updating the image tag directly in your version-controlled Git repository and letting a GitOps tool apply the change declaratively.

Q34. How does Kubernetes scale?

The kubectl scale command lets you instantly change the number of replicas running for an application. When using this command, specify the new replica count with the --replicas flag.

Q35. What is the difference between a replica set and a replication controller?

Both are controllers that ensure a specified number of Pod replicas are running at any given time. However, the Replication Controller is a legacy technology, while the ReplicaSet is the modern standard used under the hood by Kubernetes Deployments.

| Feature | ReplicaSet (Modern) | Replication Controller (Legacy) |
|---|---|---|
| Selector Type | Supports set-based selectors (e.g., environment in (prod, qa)). | Supports only equality-based selectors (e.g., environment = prod). |
| Usage | Managed automatically by Deployments. | Managed directly by the user. |
| Recommendation | The current industry standard. | Deprecated; avoid using in modern clusters. |
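The set-based selector difference looks like this in a ReplicaSet spec (illustrative snippet; names are placeholders):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchExpressions:        # set-based: not possible with a Replication Controller
    - key: environment
      operator: In
      values: [prod, qa]
  template:
    metadata:
      labels:
        environment: prod    # satisfies the In (prod, qa) expression
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```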

Q36. What’s an Init Container in Kubernetes?

An Init Container is a special type of container that runs first, doing setup tasks like loading configurations or checking if services are ready. It completes its tasks before the main application container starts running.

Q37. When to use a DaemonSet instead of a Deployment?

Use a DaemonSet to run exactly one Pod on each node (or a chosen subset of nodes). This is perfect for tools that collect logs, monitor systems, or manage networking. Unlike Deployments, which scale by increasing the number of identical Pod copies wherever the scheduler places them, a DaemonSet ties Pod placement to the nodes themselves.

Q38. How to utilize the kubectl command to eliminate a pod within a Kubernetes cluster?

To remove a specific Pod from your cluster, you use the kubectl delete command followed by the resource type and name.

kubectl delete pod my-pod-name

Important Note: If this Pod is managed by a higher-level controller (like a Deployment or ReplicaSet), Kubernetes will immediately notice the desired state has drifted. It will automatically spin up a brand-new Pod to replace the deleted one. If your goal is to permanently eliminate the application, you must delete the parent Deployment itself (kubectl delete deployment my-deployment), which subsequently terminates all associated Pods.

Q39. What command would you use to view the logs of a specific pod in Kubernetes?

To view the logs of a specific pod in Kubernetes, you can use the kubectl logs command. Syntax:

kubectl logs <pod-name>

Q40. Can you explain the purpose and usage of the kubectl describe pod command?

The kubectl describe pod command provides in-depth information about a specific pod within a Kubernetes cluster. It displays the pod’s status, events, configuration, and other relevant details. To utilize this command, you need to specify the name of the pod you want to examine. For instance, to describe a pod named my-pod, you would use the following command:

kubectl describe pod my-pod

Q41. How to use kubectl to monitor the health and status of a Kubernetes cluster?

To monitor the core health of your Kubernetes cluster, you can use several kubectl commands. The most fundamental is kubectl get nodes, which verifies if all worker and master nodes are in a Ready state.

For a broader system overview, kubectl cluster-info displays the health and addresses of the master and essential services like CoreDNS. Additionally, running kubectl get pods -n kube-system allows you to inspect the status of critical control plane components (like the API server and etcd) to ensure no underlying system crashes are occurring.

Q42. How to utilize kubectl to retrieve a list of all pods within the active namespace?

To instantly retrieve a list of all running, pending, or failing Pods within your currently active namespace, execute the following command:

kubectl get pods

This command returns a clean table displaying the Pod’s NAME, its READY status (e.g., 1/1 containers running), its current STATUS (Running, CrashLoopBackOff), the number of RESTARTS, and its AGE. If you need detailed operational insights—such as the IP address assigned to each Pod or the specific Node hosting it—simply append the wide output flag: kubectl get pods -o wide.

Q43. Write a YAML file to deploy a PHP/Nginx web application with 3 replicas. [Asked in Microsoft]

This requires a declarative Deployment YAML that utilizes a multi-container pod pattern.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      - name: php-fpm
        image: php:8.1-fpm

This file specifies replicas: 3 for high availability. The containers block deploys both an Nginx web server and a PHP-FPM processor within the exact same Pod, allowing them to communicate seamlessly over localhost.

Q44. What is the difference between a StatefulSet and a Deployment? 

Both manage Pod replicas, but they serve entirely different architectures. Deployments manage stateless applications (like web servers), whereas StatefulSets are designed for stateful applications (like MySQL or MongoDB databases).

| Feature | Deployment | StatefulSet |
|---|---|---|
| Pod Identity | Random hashes (e.g., web-8a9b). | Sticky, sequential network IDs (e.g., db-0, db-1). |
| Storage | Pods typically share the same volume. | Each Pod gets its own dedicated persistent volume (PVC). |
| Scaling Order | Simultaneous creation/deletion. | Strict, ordered creation and graceful deletion. |

Q45. What are Liveness, Readiness, and Startup Probes? Write a YAML snippet for a Readiness probe. 

Probes are diagnostic checks performed by the Kubelet to determine a Pod’s health:

  • Liveness Probe: Checks if the container is deadlocked. If it fails, Kubelet automatically restarts the container.
  • Readiness Probe: Checks if the app is fully initialized and ready to accept traffic. If it fails, the Pod is removed from the Service load balancer.
  • Startup Probe: Used for slow-starting legacy apps, disabling the other probes until the app successfully starts.

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
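For completeness, liveness and startup probes follow the same structure; the values below are illustrative:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10        # restart the container if this keeps failing
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30     # allow up to 30 x 10s = 300s for slow startup
  periodSeconds: 10        # liveness/readiness stay disabled until this succeeds
```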

Q46. What are Taints and Tolerations? 

Taints and Tolerations work together to ensure Pods are not scheduled onto inappropriate Nodes. They act as a lock-and-key mechanism for cluster placement.

| Concept | Applied To | Primary Function | Example Use Case |
|---|---|---|---|
| Taints | Nodes | Acts as a repellent, actively rejecting Pods from scheduling unless they have a matching toleration. | Dedicating a Node strictly for Master/Control Plane duties or specialized GPU workloads. |
| Tolerations | Pods | Acts as a “VIP pass,” allowing a Pod to bypass a specific taint and schedule on that restricted Node. | Allowing a cluster monitoring agent (DaemonSet) to run on a tainted Master node. |
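The lock-and-key pairing can be sketched as follows; the taint key dedicated=gpu and the image are placeholders:

```yaml
# Applied to the Node imperatively:
#   kubectl taint nodes node1 dedicated=gpu:NoSchedule
# The matching "VIP pass" on the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: trainer
    image: my-gpu-image:latest   # hypothetical image
```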

Q47. What is Node Affinity vs. Pod Anti-Affinity? 

While Taints repel Pods, Affinity attracts or intelligently distributes them based on specific architectural rules and labels.

| Feature | Target Rule | Primary Goal | Real-World Example |
|---|---|---|---|
| Node Affinity | Node Labels | Attracts a Pod to a specific set of underlying Worker Nodes. | Forcing a heavy data-processing Pod to only schedule on Nodes labeled disktype=ssd. |
| Pod Anti-Affinity | Pod Labels | Repels Pods from other Pods to spread workloads across the cluster. | Ensuring three replicas of a database land on three different Nodes to prevent a single point of failure (High Availability). |
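Both patterns from the table can be sketched in a single Pod spec; the labels and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-replica
  labels:
    app: db
spec:
  affinity:
    nodeAffinity:                      # attract to SSD Nodes
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: [ssd]
    podAntiAffinity:                   # repel from other db Pods
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: db
        topologyKey: kubernetes.io/hostname   # one replica per Node
  containers:
  - name: db
    image: postgres:16
```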

Module 4: Networking, Services and Security

Q48. How do you create a new service to expose a deployment in Kubernetes?

You can create a new service imperatively using the kubectl expose command. For example:

kubectl expose deployment my-deployment --port=80 --target-port=8080 --type=NodePort

This instantly creates a Service that routes traffic to the labels associated with my-deployment. However, for production environments, it is highly recommended to use a declarative YAML file (kind: Service). This allows you to define selectors, ports, and the specific Service type (like ClusterIP or LoadBalancer) while keeping the configuration stored in version control (GitOps).
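The declarative equivalent of the expose command above might look like this sketch (the name and selector are placeholders that must match your Deployment's Pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-deployment-svc
spec:
  type: NodePort
  selector:
    app: my-deployment      # must match the Pod labels of the Deployment
  ports:
  - port: 80                # Service port
    targetPort: 8080        # container port
```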

Q49. How can you list all services in the current namespace?

To retrieve a list of all active Services within your currently configured namespace, execute the following command:

kubectl get services

Alternatively, you can use the shorthand command kubectl get svc. The output displays a highly scannable table containing critical network information: the Service NAME, the TYPE (e.g., ClusterIP, NodePort), the assigned internal CLUSTER-IP, the EXTERNAL-IP (if applicable), and the mapped PORT(S). To list Services across every namespace in the cluster, append the --all-namespaces or -A flag.

Q50. What command is used to delete a service in Kubernetes?

To delete a service in Kubernetes, you can use the kubectl delete service command.

Q51. What is load balancing on Kubernetes?

Load balancing in Kubernetes ensures network traffic is efficiently distributed across multiple Pod replicas, preventing any single Pod from becoming a bottleneck.

  • Internal Load Balancing: Managed by ClusterIP Services and kube-proxy, distributing East-West traffic between internal microservices.
  • External Load Balancing: Managed by Ingress controllers or cloud-provider LoadBalancer Services. It directs North-South traffic (from the outside internet) to the correct backend Pods based on configured routing rules.

Q52. How to set a static IP for Kubernetes load balancer?

By default, the cloud provider assigns a new ephemeral IP each time a load balancer is created. To set a static IP, reserve a static address with your cloud provider and reference it in the Service, either via the spec.loadBalancerIP field or via provider-specific annotations (for example, a reserved Elastic IP on AWS or a reserved static address on GCP).

Q53. What is a Headless Service?

A headless service is like a normal Service but without a cluster IP (you set clusterIP: None in the spec). Instead of load-balancing through a virtual IP, DNS returns the individual Pod IPs, enabling direct access to Pods without a proxy. This pattern is commonly used with StatefulSets.

Q54. What is Ingress and how does it help route traffic in a Kubernetes cluster?

Ingress is an API object that manages external access (HTTP/HTTPS) to Services within a Kubernetes cluster. Unlike a standard LoadBalancer that operates at Layer 4 (IP/Port), an Ingress controller operates at Layer 7 (Application Layer).

Key Benefits:

  • Path-Based Routing: Routes traffic based on URLs (e.g., example.com/api maps to Service A, example.com/web to Service B).
  • Host-Based Routing: Directs traffic based on specific subdomains.
  • SSL/TLS Termination: Manages SSL certificates centrally, offloading decryption work from backend Pods.
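Path-based routing from the first bullet could be expressed as the following sketch; it assumes an NGINX Ingress controller is installed, and the Service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: service-a     # placeholder backend Services
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: service-b
            port:
              number: 80
```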

Q55. How do you secure a Kubernetes cluster?

Securing a Kubernetes cluster requires a defense-in-depth approach. Highly recommended measures include:

  • Implement RBAC: Enforce Role-Based Access Control to follow the principle of least privilege for users and service accounts.
  • Network Policies: Restrict East-West traffic by defining strict ingress/egress rules between Pods (Default Deny).
  • Pod Security Standards: Prevent Pods from running as the root user or mounting sensitive host file systems.
  • Audit Logging: Enable API server audit logs to track all kubectl interactions.
  • Image Scanning: Continuously scan container images for CVEs before deploying them.

Q56. What are the ways to provide API Security on Kubernetes?

The following are some of the ways that provide API Security:

  • Using the correct authorization mode with the API server (--authorization-mode=Node,RBAC)
  • Ensuring that all traffic is protected by TLS
  • Using API authentication
  • Ensuring that the kubelet protects its API via --authorization-mode=Webhook
  • Monitoring RBAC failures
  • Removing default Service Account permissions
  • Ensuring that the Kubernetes Dashboard applies a restrictive RBAC policy
  • Implementing Pod Security Standards for container restrictions and node protection
  • Using the latest version of Kubernetes

Q57. What are ConfigMaps and Secrets in Kubernetes?

Both objects decouple configuration artifacts from container images, making applications portable.

| Feature | ConfigMap | Secret |
|---|---|---|
| Purpose | Stores non-confidential configuration data. | Stores highly sensitive data. |
| Examples | Environment variables, config files (nginx.conf). | Passwords, SSH keys, OAuth tokens, TLS certificates. |
| Encoding | Stored as plain text. | Encoded in Base64 (not encrypted by default). |
| Usage | Injected as environment variables or mounted files. | Ideally mounted strictly as read-only memory volumes. |
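A minimal Secret manifest showing the Base64 encoding (the value below is simply base64 of the placeholder string s3cr3t; Base64 is encoding, not encryption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  password: czNjcjN0     # produced by: echo -n 's3cr3t' | base64
```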

Q58. How do you list ConfigMaps and Secrets in a Kubernetes cluster?

You can list these configuration objects using standard kubectl get commands:

kubectl get configmaps

kubectl get secrets

You can also use their shortnames: kubectl get cm and kubectl get secret. The output will display the name, the number of data items (key-value pairs) stored inside, and the age of the object.

Note: Listing secrets will not reveal the actual Base64 encoded values. To view the contents, you must describe the specific secret and output it as YAML: kubectl get secret <name> -o yaml.

Q59. How to inject secrets securely in Kubernetes (not env vars)?

Injecting Secrets as Environment Variables is a major security risk because they can be easily exposed in crash logs or via the printenv command inside the container.

Instead, you should inject Secrets strictly as Read-Only Volume Mounts. When a Secret is mounted as a volume, Kubernetes uses a tmpfs (RAM-backed filesystem). The sensitive data is never written to the underlying Node’s physical disk. Furthermore, if you update the Secret object in the API, the mounted volume automatically updates within the running Pod.
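A sketch of the recommended volume-mount pattern (db-secret and the image are placeholder names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:1.0        # hypothetical image
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true         # each Secret key appears as a file under /etc/creds
  volumes:
  - name: creds
    secret:
      secretName: db-secret
```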

Q60. Difference between ClusterIP, NodePort, and LoadBalancer?

These are the three primary Service types used to expose applications in Kubernetes:

| Service Type | Accessibility | Primary Use Case |
|---|---|---|
| ClusterIP (Default) | Internal only. | East-West traffic (e.g., Frontend Pods communicating with Backend Database Pods). |
| NodePort | External (via Node IP + static port). | Quick debugging or exposing services in bare-metal environments without a cloud provider. |
| LoadBalancer | External (via Cloud Provider IP). | Production web apps. Automatically provisions an external Load Balancer (AWS ELB, Azure ALB) to route North-South traffic. |

Q61. What is the Kubernetes Gateway API, and how does it differ from Ingress?

The Gateway API is the modern, highly extensible successor to the traditional Ingress resource.

| Feature | Gateway API (Modern) | Ingress (Legacy) |
|---|---|---|
| Design Model | Role-oriented: separates concerns (infra admins manage Gateways, devs manage HTTPRoutes). | Single object managing everything. |
| Protocol Support | HTTP, HTTPS, gRPC, TCP, UDP. | Limited primarily to HTTP/HTTPS. |
| Traffic Splitting | Native support for advanced Canary/Blue-Green deployments (weighted routing). | Requires vendor-specific custom annotations (e.g., NGINX snippets). |
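The native traffic splitting mentioned in the table looks roughly like this HTTPRoute sketch; the Gateway and Service names are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route
spec:
  parentRefs:
  - name: my-gateway          # the Gateway managed by the infra team
  rules:
  - backendRefs:
    - name: app-stable
      port: 80
      weight: 90              # 90% of traffic to the stable version
    - name: app-canary
      port: 80
      weight: 10              # 10% canary traffic
```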

Q62. What are Kubernetes Network Policies?

Network Policies act as internal firewalls for your Pods. By default, Kubernetes operates on a “flat network” where all Pods can communicate with all other Pods (Default Allow).

A Network Policy uses Pod labels and namespaces to restrict East-West traffic. For security, you should implement a “Default Deny” policy to block all incoming and outgoing traffic, and then explicitly whitelist connections (e.g., allowing the Frontend Pods to only talk to the Backend Pods on port 3306). They require a supporting CNI plugin (like Calico or Cilium) to enforce the rules.
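The "Default Deny" baseline described above is a short manifest; it applies to every Pod in the namespace it is created in:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}             # empty selector = all Pods in the namespace
  policyTypes:
  - Ingress                   # no ingress rules listed = deny all inbound
  - Egress                    # no egress rules listed = deny all outbound
```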

Q63. How do you configure RBAC (Role-Based Access Control) in Kubernetes? 

RBAC regulates who can access the Kubernetes API and what actions they can perform. You configure it using four primary objects:

  • Role: Defines permissions (e.g., get, create, delete pods) within a specific namespace.
  • RoleBinding: Connects a Role to a User, Group, or ServiceAccount within that namespace.
  • ClusterRole: Similar to a Role, but applies globally across the entire cluster (e.g., permission to view Nodes).
  • ClusterRoleBinding: Connects a ClusterRole to a subject across the entire cluster.
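The first two objects pair up like this (sketch; the namespace, names, and ServiceAccount are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]             # "" = the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: ServiceAccount
  name: ci-bot                # hypothetical CI service account
  namespace: dev
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```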

Module 5: Storage and CI/CD

Q64. In what ways can Kubernetes be paired with a CI/CD pipeline?

Kubernetes can be paired with CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, and ArgoCD. The pipeline builds a new container image, pushes it to a registry, and then uses kubectl or Helm to deploy it to Kubernetes.

Q65. Describe ArgoCD and its role in Kubernetes deployments.

ArgoCD is a GitOps tool that provides continuous deployment automation for Kubernetes. It watches a Git repository for changes and mirrors them into the cluster, so deployments stay synchronized with the repository.

Q66. Define a Helm Chart and explain its purpose in CI/CD.

A Helm Chart is a collection of all the components needed to deploy an application on Kubernetes; it contains YAML files that define the deployments, services, and configuration. It simplifies application deployment and is commonly used in CI/CD pipelines to enable versioned, repeatable releases.
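A typical chart layout, as scaffolded by `helm create` (the chart name is hypothetical):

```
mychart/
├── Chart.yaml          # chart name, version, description
├── values.yaml         # default configuration values
├── charts/             # dependency (sub)charts
└── templates/          # templated Kubernetes manifests
    ├── deployment.yaml
    ├── service.yaml
    └── _helpers.tpl    # shared template snippets
```

In a pipeline, `helm upgrade --install` renders `templates/` with values from `values.yaml` (or overrides) and applies the result to the cluster.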

Q67. What are the approaches to managing Rollbacks with Kubernetes CI/CD?

Rollbacks can be performed by:

  • Running kubectl rollout undo deployment/<name> to revert a Deployment to an earlier revision.
  • Running helm rollback <release> <revision> if the application was deployed with Helm.
  • Using GitOps tools such as ArgoCD to revert to the last known good state stored in Git.

Q68. What are some of the most effective methods for CI/CD in Kubernetes?

  • Use immutable container images for better version control.
  • Automate deployments with Helm or Kustomize.
  • Implement progressive delivery (Canary, Blue-Green deployments).
  • Use GitOps for declarative deployments.
  • Monitor deployments with Prometheus and Grafana.
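The “GitOps for declarative deployments” practice is usually expressed as an ArgoCD Application manifest; a minimal sketch (the repository URL, path, and namespace are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config   # hypothetical repo
    targetRevision: main
    path: k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc              # the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```

With `automated` sync enabled, every merged commit to the config repo becomes a deployment, and `git revert` becomes the rollback mechanism.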

Q69. Explain the difference between Persistent Volume (PV) and Persistent Volume Claim (PVC).

Storage in Kubernetes is decoupled into two objects to separate infrastructure management from application deployment.

| Feature | Persistent Volume (PV) | Persistent Volume Claim (PVC) |
| --- | --- | --- |
| Definition | A piece of physical/cloud storage provisioned by an administrator. | A request for storage made by a developer/user. |
| Role | The actual storage resource (e.g., AWS EBS, NFS). | The “ticket” claiming a specific size and access mode from a PV. |
| Lifecycle | Independent of any individual Pod. | Bound to a PV; destroyed or retained based on reclaim policy. |
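A minimal PV/PVC pairing, assuming an NFS backend (the server address and export path are hypothetical):

```yaml
# Admin side: the actual storage resource.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep data after the claim is deleted
  nfs:                                    # hypothetical NFS backend
    server: 10.0.0.5
    path: /exports/data
---
# Developer side: the request that binds to a matching PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

A Pod then mounts the storage by referencing only the PVC name, never the PV itself.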

Q70. What is a StorageClass and how does it enable dynamic provisioning?

A StorageClass eliminates the need for administrators to manually pre-provision Persistent Volumes (PVs).

It acts as a storage “profile” (e.g., AWS gp3 for standard SSDs or io1 for high-performance). When a developer creates a PVC referencing a specific StorageClass, Kubernetes automatically provisions the underlying cloud storage block on demand. This Dynamic Provisioning binds a new PV to the claim instantly, saving administrative time and preventing unused storage from sitting idle in your cloud environment.
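A sketch of dynamic provisioning with the AWS EBS CSI driver (the class name and size are hypothetical):

```yaml
# Admin defines the storage "profile" once.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com          # AWS EBS CSI driver
parameters:
  type: gp3                           # standard SSD tier
volumeBindingMode: WaitForFirstConsumer   # provision only when a Pod needs it
---
# Developer requests storage; Kubernetes creates the EBS volume on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

No pre-provisioned PV exists here: the CSI driver creates one and binds it to the claim automatically.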

Module 6: Cloud (AKS) and Scenario-Based Troubleshooting (FAANG Level)

Q71. If an organization is looking for ways to improve its deployment methods and desires a more scalable and responsive platform, what should be done?

The company should migrate to a cloud environment and adopt a microservice architecture packaged in Docker containers. Once that foundation is in place, Kubernetes can orchestrate the containers, letting teams develop and ship applications quickly and independently.

Q72. If an organization has a large distributed system with several data centers, virtual machines, and a huge number of employees working on various tasks, how can the tasks be managed with consistency with the help of Kubernetes?

The company needs a platform that brings scale-out capability, agility, and DevOps practices to its cloud-based applications. Kubernetes fits this scenario well: it allows the scheduling architecture to be customized and supports multiple container runtimes, which improves efficiency while providing support for various container networking and storage solutions.

Q73. Explain Azure Kubernetes Service (AKS) and its key features.

AKS is a managed Kubernetes service from Azure that simplifies deploying, managing, and scaling containerized applications. Key features of AKS include:

  1. Scalability: AKS enables auto-scaling of applications by dynamically adjusting the number of containers.
  2. Integration: It integrates seamlessly with other Azure services such as Azure Monitor, Azure Active Directory, and Azure Policy.
  3. Hybrid Cloud Support: AKS supports hybrid cloud scenarios, allowing both on-premises and Azure cloud deployments.
  4. Cost Efficiency: AKS follows a pay-as-you-go model; you pay only for the resources you use.

Q74. What are the advantages of using AKS for deploying containerized applications compared to managing your own Kubernetes cluster?

Azure Kubernetes Service (AKS) dramatically reduces the operational overhead of managing a “vanilla” cluster.

  • Managed Control Plane: Microsoft handles API server and etcd patching, scaling, and backups; on the Free tier, you pay only for worker nodes.
  • Ecosystem Integration: It seamlessly integrates with Azure Active Directory (RBAC), Azure Monitor, and Azure Policy.
  • Automation: It features built-in node auto-repair and one-click Kubernetes version upgrades.
  • Elasticity: The built-in Cluster Autoscaler dynamically adds or removes worker nodes based on real-time traffic demands.

Q75. How do you configure and deploy a multi-container application on Azure Kubernetes Service (AKS)?

At a high level, the workflow is:

  1. Build and Push: Build each container image and push it to a registry such as Azure Container Registry (ACR).
  2. Define the Pod: Write a Deployment manifest whose Pod template lists multiple containers (e.g., the main app plus a sidecar); containers in the same Pod share the network namespace and volumes.
  3. Deploy: Fetch cluster credentials with az aks get-credentials, then apply the manifest with kubectl apply -f (or install it via a Helm chart) and expose it with a Service or Ingress.
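A minimal multi-container Deployment suitable for AKS might look like this (the registry and image names are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-with-sidecar
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app                                    # main application container
          image: myregistry.azurecr.io/web:1.0         # hypothetical ACR image
          ports:
            - containerPort: 8080
        - name: log-agent                              # sidecar in the same Pod
          image: myregistry.azurecr.io/log-agent:1.0   # shares network and volumes
```

Because both containers live in one Pod, the sidecar can reach the app on `localhost:8080` without any Service in between.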

Q76. Explain the concept of a pool in AKS.

In AKS, a node pool is a group of nodes (virtual machines) within a cluster that share the same configuration, such as VM size and OS. Node pools enable resource optimization, scalability, availability, fault tolerance, and cost management.

Q77. How can you achieve high availability in Azure Kubernetes Service, and what considerations should be taken into account?

Achieving High Availability (HA) in AKS requires multi-layered infrastructure planning:

  • Availability Zones (AZs): Distribute your worker nodes across multiple physical AZs to survive datacenter-level power or network failures.
  • Pod Anti-Affinity: Configure YAML rules to ensure replica Pods never schedule on the same node, preventing single points of failure.
  • Global Routing: Deploy replica AKS clusters across multiple geographic regions, routing user traffic via Azure Traffic Manager.
  • Node Auto-Repair: Enable this to let AKS automatically replace failing VMs.
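The anti-affinity rule mentioned above is a fragment of the Pod template spec; a sketch (the `app: web` label is hypothetical):

```yaml
# Inside spec.template.spec of a Deployment:
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname   # replicas land on distinct nodes
```

Using `topologyKey: topology.kubernetes.io/zone` instead spreads replicas across Availability Zones rather than just across nodes.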

Q78. How do you reduce Kubernetes costs without hurting performance?

To avoid wasting resources, set limits on how much your applications can use. Use tools like KubeCost or Prometheus to track usage, remove anything that’s not needed, and set up systems that adjust automatically based on demand.
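The “adjust automatically based on demand” part is typically a HorizontalPodAutoscaler; a minimal sketch (the target Deployment name `web` is hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2          # keep a floor for availability
  maxReplicas: 10         # cap spend at peak load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Pairing an HPA like this with the Cluster Autoscaler lets both Pods and nodes shrink during quiet periods, which is where most of the savings come from.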

Q79. How to fix a CrashLoopBackOff in Kubernetes?

A CrashLoopBackOff indicates a Pod is repeatedly crashing and restarting with an exponential delay. To resolve it:

  • Check App Logs: Run kubectl logs <pod-name> to identify application-level crashes (e.g., missing dependencies).
  • Check Previous Logs: Run kubectl logs <pod-name> --previous to see the exact fatal error that killed the prior instance.
  • Inspect Events: Run kubectl describe pod <pod-name> and scroll to “Events” to spot system-level issues, such as failed Liveness Probes or missing Secrets.

Q80. Your pod is stuck in a CrashLoopBackOff state. Walk me through your exact debugging steps.[FAANG Level]

“I isolate the root cause methodically:

  • Application Code Check: I execute kubectl logs <pod-name> --previous. If I see a stack trace (like ‘DB Connection Refused’), it’s a code or config error.
  • Environment Check: If logs are empty, the container failed to boot. I run kubectl describe pod to check for failed Probes or unmounted ConfigMaps.
  • Live Debugging: If it’s still unclear, I override the YAML command with sleep 3600, keeping the Pod alive so I can kubectl exec inside.”

Q81. A pod is evicted with an OOMKilled status. What does this mean and how do you resolve it?

OOMKilled (Out Of Memory Killed) occurs when a container attempts to consume more RAM than its resources.limits.memory setting allows. To protect the node, the Linux kernel terminates the process.

Resolution:

  • Analyze Limits: Review the deployment YAML. If the memory limit is unreasonably low (e.g., 128Mi), increase it to match the application’s actual baseline.
  • Profile the Code: If the limit is already high (e.g., 4Gi), the application likely has a memory leak requiring developer profiling to fix.
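As a sketch, a container-spec fragment with a sane request/limit pair (the exact numbers depend on the application’s measured baseline):

```yaml
# Inside a container definition in the Pod template:
resources:
  requests:
    memory: "256Mi"   # guaranteed baseline, used by the scheduler
    cpu: "250m"
  limits:
    memory: "512Mi"   # hard cap; exceeding it triggers OOMKilled
```

Keeping the limit reasonably close to the request also gives the Pod a better QoS class (Burstable rather than BestEffort), making it less likely to be evicted under node pressure.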

Q82. You deployed a service, but you cannot access it from outside the cluster. How do you troubleshoot the network path? 

“I troubleshoot from the inside out to isolate the broken link:

  • Pod Check: I kubectl exec into a temporary pod and curl the target Pod’s IP. If it fails, the app isn’t listening on the right port.
  • Service Check: I run kubectl get endpoints <service-name>. If this is empty, the Service’s selector labels don’t match the Pod’s labels.
  • Ingress/Firewall Check: I verify the Ingress object rules match the Service, then check cloud security groups (e.g., AWS/Azure firewalls).”

Q83. Your pod is stuck in the Pending state for 10 minutes. What are the most likely causes? 

A Pending state means the Control Plane’s Scheduler cannot find a suitable Worker Node for the Pod.

Most Likely Causes:

  • Resource Exhaustion: No Node has enough unallocated CPU/Memory to meet the Pod’s strict requests.
  • Taints & Tolerations: The Pod lacks the required tolerations to schedule on tainted Nodes.
  • Storage Issues: The Pod is waiting on a Persistent Volume Claim (PVC) that hasn’t bound.
  • Affinity Rules: Strict nodeAffinity constraints cannot be met by the current cluster topology.

Q84. You need to update a live PHP deployment without dropping any user requests. How do you implement a Zero-Downtime Rolling Update?[Asked in Amazon]

“To guarantee true zero-downtime during an update, I implement strict Pod Lifecycle hooks in the YAML:

  • Strategy: Set maxUnavailable: 0 to ensure Kubernetes spins up new Pods before terminating old ones.
  • Readiness Probes: Prevent the Service from routing traffic to the new Pod until it returns a 200 OK status.
  • PreStop Hook: Add a preStop: sleep 10 hook. This pauses termination, allowing the Ingress controller enough time to update routing tables before the PHP process actually dies.”
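The three points above can be sketched in a single Deployment (the image name and health endpoint are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # create one extra Pod during the rollout
  selector:
    matchLabels:
      app: php
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
        - name: php
          image: myregistry.example.com/php-app:2.0   # hypothetical image
          readinessProbe:
            httpGet:
              path: /healthz                          # hypothetical health endpoint
              port: 80
            periodSeconds: 5
          lifecycle:
            preStop:
              exec:
                command: ["sleep", "10"]   # let the Ingress drain before SIGTERM
```

With `maxUnavailable: 0`, Kubernetes only terminates an old Pod once its replacement passes the readiness probe, and the preStop sleep covers the routing-table propagation gap.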

Conclusion

Cracking a Kubernetes interview requires deep, hands-on architectural knowledge. By mastering these 70+ questions, from internal control plane mechanics to resolving complex CrashLoopBackOff scenarios, you are fully equipped to manage enterprise-grade clusters. Bookmark this page, review the critical kubectl snippets, and ace your 2026 Kubernetes interview.

About the Author

Software Developer | Technical Research Analyst Lead | Full Stack & Cloud Systems

Ayaan Alam is a skilled Software Developer and Technical Research Analyst Lead with 2 years of professional experience in Java, Python, and C++. With expertise in full-stack development, system design, and cloud computing, he consistently delivers high-quality, scalable solutions. Known for producing accurate and insightful technical content, Ayaan contributes valuable knowledge to the developer community.