The aim of this Docker tutorial is to teach you the basics of Docker. Additionally, we will dive into more concepts such as virtualization, containerization, the need for Docker, Docker architecture, Docker installation, Docker images, and so on.
Let’s begin by discussing each topic, module by module.
Module 1: What is Docker?
Docker is a containerization platform used to package software, applications, or code into Docker containers. Docker enables seamless deployment across environments and prevents the dependency issues that often arise between development and operations teams.
In other words, Docker is a free tool that makes the DevOps methodology easy by allowing developers or Docker experts to maintain consistency, automate infrastructure, deploy applications in isolated spaces, and improve resource utilization. Using Docker, developers can convert raw code into a portable template called a “Docker image,” which they can later use to create lightweight, portable, and isolated environments called “Docker containers.”
You might wonder: if deployment and other SDLC tasks could already be performed on individual machines/servers or through virtualization, why choose Docker over these methods? The reason is that Docker uses the containerization methodology, which is why it is considered one of the most popular containerization tools. So, let us understand containerization in detail.
Module 2: What is Containerization in DevOps?
Containerization is a technology used by Docker that allows you to package an application and all its dependencies (such as libraries, frameworks, and settings) into a single unit called an image, which then runs in an isolated space within your system called a container. This container runs consistently on any system, regardless of the underlying environment. To be clear, containerization and virtualization, or Docker and virtual machines, are not the same. Let's check how they differ.
Containerization vs Virtualization
Having been introduced to containerization and virtualization, we know that both let us run multiple isolated environments on a single host machine.
Now, what are the differences between containerization and virtualization? Let us check out the below table to understand the differences.
| Containerization | Virtualization |
|---|---|
| Virtualizes only OS resources | Virtualizes hardware resources |
| Installs the container only on a host OS | Requires a complete OS installation for every VM |
| Uses only the kernel of the underlying host OS | A kernel is installed for every virtualized OS |
| Lightweight | Heavyweight, as every VM bundles a full OS |
| Native performance | Performance overhead from the hypervisor |
| Process-level isolation | Fully isolated |
In containerization, all containers share the host OS kernel. A container is created for each application, which makes containers fast without wasting resources, unlike virtualization, where every OS needs its own kernel and consumes a large share of the host's resources.
In order to create and run containers on our host OS, we require software that enables us to do so. This is where Docker comes into the picture.
Docker vs Virtual Machine
There is always a discussion about choosing the environment for hosting an application, whether to go with Docker or Virtual Machines.
| Docker | Virtual Machines |
|---|---|
| Lightweight; uses fewer resources by sharing the host OS kernel. | Heavier; requires resources for a full OS in each VM. |
| Starts quickly, in seconds, as it doesn't need to boot a full OS. | Takes longer to start, as it requires a full OS boot. |
| Ideal for running microservices, isolated apps, and fast deployments. | Best for running full operating systems and applications requiring strong isolation. |
Module 3: What is the use of Docker in DevOps?
We have adopted Docker in DevOps due to its features and versatility. Let’s understand its significance by comparing Traditional Deployment and Docker Deployment.
Traditional Deployment
In traditional deployment, there is a huge gap between the development team and the operations team. Code that works well on the developer's computer often fails on the tester's machine because of differences in environment settings and machine configurations. So, in traditional deployment, everyone has to configure each machine manually, installing the same software, libraries, and settings. This approach is time-consuming and error-prone, as several departments are involved in the production environment.
Docker Deployment
In Docker deployment, developers simply bundle the raw code, libraries, and configuration files together as one unit, called a Docker image, and send it to whoever needs to run that code. The recipients don't have to set up the environment manually on their machines, because the image already contains the code, libraries, and every other file required to run it. This is how Docker makes shipping and deploying code easy.
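As a minimal sketch of this idea, suppose the application is a small Python script (the file name app.py and the image tag myteam/myapp are illustrative, not from the original article). A Dockerfile bundling it might look like this:
FROM python:3.12-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
Building and sharing the image is then two commands:
docker build -t myteam/myapp:1.0 .
docker push myteam/myapp:1.0
Anyone can now reproduce the exact same environment with docker run myteam/myapp:1.0.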
Let’s explore more features of Docker in simple terms.
Module 4: Features of Docker
- Docker containers work the same on any computer or server.
- Containers are smaller and use less memory than virtual machines.
- You can start or stop apps in seconds with Docker.
- Each app runs in its own space without interfering with others.

These features give us different advantages, but at the same time, Docker has some limitations too.
Module 5: Advantages and Disadvantages of Docker
Let’s explore the advantages and disadvantages of Docker.
| Advantages | Disadvantages |
|---|---|
| Containers use less memory and CPU compared to virtual machines. | Managing many containers at the same time is complex and may require additional tools like Kubernetes. |
| Docker containers work the same on any system. | Containers depend on Docker being installed, so not all systems can support them easily. |
| Docker helps to package and deploy applications quickly. | Debugging and monitoring containers is harder than in traditional setups. |
Module 6: Docker Architecture
Docker uses a client-server architecture. The Docker client issues commands such as docker build, docker pull, and docker run to the Docker daemon, which does the work of building, running, and distributing Docker containers. The client and the daemon can run on the same system, or the client can connect to a remote daemon. The two communicate through a REST API, over UNIX sockets or a network.
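You can even observe this REST API directly. As a quick sketch (assuming Docker is running locally and curl is installed; the call may need sudo depending on socket permissions), querying the daemon's UNIX socket returns version information as JSON:
curl --unix-socket /var/run/docker.sock http://localhost/version
This is the same channel the docker CLI uses under the hood.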
Docker Client
The Docker Client serves as the primary interface for interacting with the Docker Engine. It provides a user-friendly command-line interface (CLI) that enables users to execute various operations, such as building, running, managing, and deleting Docker images and containers. The Docker Client acts as a bridge between the user and the underlying Docker Engine, translating user commands into instructions that the Docker Engine can understand and execute.
Key Functions of the Docker Client
Manage Containers: Run, create, stop, start, and delete containers effortlessly with intuitive commands. For instance, docker run, docker stop, and docker rm are some basic commands to manage containers.
Work with Images: Pull, push, build, and manage Docker images using commands like docker pull, docker push, docker build, and docker rmi. These commands help you handle the blueprints for your containers.
Control Networks and Volumes: Create and manage networks for your containers and handle data persistence using volumes. Commands like docker network and docker volume come in handy for these tasks.
View System Information: Check the status of your Docker environment, see running containers, monitor resources, and troubleshoot using commands like docker ps, docker info, and docker stats.
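A few of these commands in action, as a minimal sketch (the container name demo-nginx is illustrative):
docker run -d --name demo-nginx nginx            # create and start a container
docker ps                                        # list running containers
docker stats --no-stream                         # one-shot resource usage snapshot
docker stop demo-nginx && docker rm demo-nginx   # stop and remove the container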
Docker Host
The Docker host is a machine where Docker is installed. On the Docker host, we have the Docker daemon and Docker objects such as Docker containers and Docker images. First, let us understand the objects on the Docker host, and then we will proceed to the functioning of the Docker daemon.
Let us understand Docker objects like Docker Images and Docker Container with the Swiggy Box Analogy:
Docker Image
Imagine the Docker image is like a restaurant’s menu. It defines exactly what ingredients and recipes are needed to make a meal (your application). The Docker image includes all the required software and dependencies (like the ingredients), ready to be packaged into a box. Docker image is a template that contains the libraries, dependencies, and code that can be used for creating Docker containers to run your application code within it.
Docker Container
The container is like the Swiggy box that holds the meal (your application) and delivers it to your home (your machine). When you order food (run a Docker container), the restaurant (the Docker engine) packages the meal in a box (container) and sends it to you. The Docker container is based on the Docker image, which defines everything the application needs to run—just like a food delivery box that includes all the necessary ingredients, no matter where it’s delivered. To understand Docker containers in-depth, read “What is Docker Container?”.
Running the Container:
When you order food (run the Docker container), it gets delivered to your home (your computer). You can open the box (accessing the container), and everything you need is inside.
Docker daemon
The Docker daemon listens for Docker API requests and manages Docker objects such as images, containers, and volumes. On a user's instruction, the daemon builds an image and saves it in the registry.
If we do not want to create an image, we can simply pull one from Docker Hub that was built by another user. If we want a running instance of our Docker image, we issue a run command, which creates a Docker container. A Docker daemon can also communicate with other daemons to manage Docker services.
Docker Registry (Docker Hub)
The Docker registry is a repository for Docker images that are used for creating Docker containers. We can use private registries like ACR, ECR, JFrog, Harbor, or the most popular official registry Docker Hub.
Docker Hub is an online cloud-based container registry where anyone can save and share Docker images. You just need to create an account on https://hub.docker.com/, and you are ready to store your images. You can find various pre-built images at https://docs.docker.com/trusted-content/official-images/, which are ready to be pulled into your environment.
Now that we have gone through the Docker architecture and understand how Docker works, let us get started with the installation and workflow of Docker and practice important Docker commands.
Module 7: How to Install Docker on Windows, MacOS, and Linux?
Docker can be installed as Docker Engine or Docker Desktop; which one you use is up to you. Let's understand the process of installing both of them.
Docker Engine
Docker Engine manages containers on your system. It is the Docker daemon plus CLI used in Linux environments, and it comprises a daemon process, a REST API, and a CLI for interacting with the daemon.
Daemon Process
The daemon process is the long-running process of the Docker Engine. It continuously listens for requests from clients and manages the containers; requests can involve tasks like creating, running, stopping, and deleting containers.
REST API
This API acts as a language for other programs to talk to the Docker daemon, giving instructions on what actions to perform.
Command Line Interface (CLI) Client
The Docker CLI is the tool you interact with to control the Docker daemon.
To install Docker Engine, you can visit https://docs.docker.com/engine/install/.
Docker Desktop
Docker Engine is also available for Windows, macOS, and Linux through Docker Desktop. In addition to the Docker CLI from Engine, Docker Desktop includes a graphical interface for managing your containers.
For installing Docker on Windows and macOS, the process is quite simple. All we have to do is download and install Docker from Download Docker, which includes Docker client, Docker machine, Compose (Mac only), Kitematic, and VirtualBox.
Installation on MAC OS:
Step 1: Check System Requirements
Before installing Docker, ensure your Mac meets the necessary requirements. You need MacOS Yosemite 10.10.3 or newer.
Step 2: Download Docker Desktop
Visit the official Docker website (docker.com) and navigate to the Docker Desktop page. Click on the download button for MacOS.
Step 3: Install Docker
Once the download completes, open the Docker.dmg file. Drag the Docker icon to the Applications folder to install Docker on your Mac.
Step 4: Launch Docker
Locate Docker in your Applications folder and double-click to open it. You might be asked to grant permissions during installation; follow the on-screen instructions.
Step 5: Login to Docker
After launching Docker, you’ll be asked to sign in using your Docker Hub credentials. You can create one for free if you don’t have an account.
Installation on Windows:
Step 1: Visit: https://docs.docker.com/desktop/install/windows-install/. Check system requirements before you proceed with the installation.
Step 2: Launch the Docker Desktop installer that you have just downloaded and click OK when prompted.
Installation on Linux:
To install Docker on the Ubuntu distribution, first, we need to update its packages. To do so, type the below command in the terminal:
sudo apt-get update
Since this command uses sudo, it may ask for a password after you hit Enter. Provide the password and then follow the remaining steps in this Docker tutorial.
Now, install the Docker package by typing the command below:
sudo apt-get install docker.io -y
The Docker installation process is complete now. Use the below-mentioned command to verify if Docker is installed correctly.
docker --version
You will get an output like "Docker version <version>". This means that Docker has been installed successfully.
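Optionally, you can confirm the daemon is running and enable sudo-less usage; this is a sketch assuming a systemd-based Ubuntu install:
sudo systemctl status docker      # confirm the Docker daemon is active
sudo usermod -aG docker $USER     # run docker without sudo (log out and back in to apply)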
Module 8: How Does Docker Work? (Docker Workflow)
Docker’s typical local workflow allows users to create images, pull images, publish images, and run containers.
A Dockerfile consists of the configuration for an image and the name of a base image pulled from a Docker registry such as Docker Hub. Docker builds an image from this file, which encodes the instructions for the container's configuration; alternatively, a ready-made image can be pulled directly from a registry.
Let us understand this process in a little detailed way:
- The workflow starts by building an image from a Dockerfile that contains the container configuration instructions, or by pulling an image from a Docker registry.
- Once this image is built in our Docker environment, we can run it, which creates a container.
- In our container, we can do any operations such as:
- Stopping the container
- Starting the container
- Restarting the container
- These runnable containers can be started, stopped, or restarted just like how we operate a virtual machine or a computer.
- Any manual changes made inside a container, such as configuration tweaks or software installations, can be committed to create a new image, which can later be used to create containers, as the sketch after this list shows.
- At last, when we want to share our image with our team or the world, we can easily push our image into the Docker registry.
- One can easily pull this image from the Docker registry using the pull command.
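Here is the whole loop as concrete commands, a minimal sketch (the names my-ubuntu and my-custom are illustrative, and docker6767 stands in for your Docker Hub username):
docker pull ubuntu                                   # pull a base image from the registry
docker run -it -d --name my-ubuntu ubuntu            # run a container from it
docker exec my-ubuntu apt-get update                 # make a change inside the container
docker commit my-ubuntu docker6767/my-custom:1.0     # commit the change to a new image
docker push docker6767/my-custom:1.0                 # share the image via the registry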
Module 9: Docker Commands
Here is a list of the most commonly used Docker commands when working with Docker.
Docker Run Command
After installing Docker, we should be able to run containers. Initially, we don't have any container, so we need to create our first one, which is only possible with a Docker image. If you don't have a Docker image yet, you can run the simple "hello-world" container to check that everything is working properly. For that, run the command below:
docker run hello-world
Output:
Hello from Docker!
This message shows that the installation appears to be working correctly. Now, let us move forward in this Docker tutorial to understand many other Docker operations.
Pulling an Image from the Docker Registry
The easiest way to obtain an image to build a container from is to find an already prepared image on Docker's official website.
We can choose from various common software, such as MySQL, Node.js, Java, Nginx, or WordPress, on Docker Hub, as well as from hundreds of open-source images made by people around the globe.
For example, if we want to download the image for MySQL, then we can use the pull command:
docker pull mysql
In case we want the exact version of the image, then we can use:
docker pull mysql:5.5.45
To check the output, try the below-mentioned command:
docker images
When we run this command, we can observe the pulled image listed under the repository name mysql.
Output:
REPOSITORY   TAG      IMAGE ID       VIRTUAL SIZE
<none>       <none>   4b9b8b27fb42   214.4 MB
mysql        5.5.45   0da0b10c6fd8   213.5 MB
Next in this Docker tutorial, we shall customize an image manually by installing software or changing configurations. After that, we can run the docker commit command to create an image of the running container.
Running a Container Using the MySQL Image
In order to run a Docker container, all we need to do is use the run command followed by our local image name or the one we retrieved from the Docker hub.
Usually, a Docker image requires some additional environment variables, which can be specified with the -e option. For long-running processes, such as daemons, we also need to use the -d option.
To run a container named "intellipaat_container" using the MySQL image, we run the command shown below (the MySQL image requires the MYSQL_ROOT_PASSWORD environment variable; the password value here is illustrative):
docker run -it -d --name intellipaat_container -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql:5.5.45
Listing Containers
The ps command lists the running containers, but now we want to list all the containers, regardless of their state. To do that, all we have to do is add the -a option, as shown below:
docker ps -a
Now, we can easily distinguish between which container we want to start with and which container we want to remove.
To List All the Running Containers:
docker ps
This command lists all of our running containers, the image each was created from, the command it runs, the ports the software listens on, and the container's name.
If we don't name a container explicitly, Docker auto-generates a name for it, such as eloquent_darwin in this example.
When we want to name the container explicitly, the best practice is to use the --name option, which sets the name of our choice at container startup:
docker run -it -d --name intellipaat-container ubuntu
We can easily name our container with this command.
Stopping and Starting Containers
Once we have our Docker container up and running, we can stop it by typing the docker stop command with the container name, as shown below:
docker stop eloquent_darwin
In case we want to run our container again from the state in which we shut it down, we can use the start command, as the entire container state is preserved on disk:
docker start eloquent_darwin
Now, let us see how we can tag an image.
Tagging an Image
Once we have our image up and running, we can tag it with a username, image name, and version number before we push it into the repository by using the docker tag command:
docker tag centos docker6767/mycentos
Now, in this Docker Tutorial, let us see how we can push an image into the repository.
Pushing an Image into the Repository
Now, we are ready to push our image to Docker Hub, either publicly for anyone to use or via a private repository.
- First, go to https://hub.docker.com/ and create a free account
- Next, log in to the account using the login command:
docker login
- Then, input the username, password, and email address that we are registered with
- Finally, push our image, with our username, image, and version name, by using the push command.
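Using the image tagged in the previous step, the push command looks like this:
docker push docker6767/mycentos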
Within a few minutes, we will receive a message stating that our repository has been successfully pushed.
When we go back to our Docker Hub account, we will see the new repository listed there.
Removing Containers
After using a container, we would usually want to remove it rather than have it lying around consuming disk space.
We can use the rm command to remove a container as shown below:
docker rm eloquent_darwin
Removing Images
We already know how to list all the locally cached images by using the images command. These cached images can occupy a significant amount of space, so in order to free up some space by removing unwanted images, we can use the rmi command as shown below:
docker rmi centos
Exposing Ports
Use the command below to publish a container's port to a specific host port (here, host port 81 maps to container port 80):
docker run -it -d --name container1 -p 81:80 ubuntu
Listing Processes
To display the processes running in a container, we can use Docker's top command, which is very similar to the top command in Linux.
docker top container_name
Executing Commands
To execute commands in a running container, we can use the exec command. For example, to list the contents of the root of the container's filesystem, we can use exec as shown below:
docker exec container_name ls /
We can also gain access to the bash shell if we wish to work inside the container as root, much like SSHing into a machine. To do so, we can use the following command:
docker exec -it container_name bash
Note: Communication between the Docker client and daemon happens over a local UNIX socket by default; connections to remote daemons should be secured with TLS so that the traffic is encrypted.
Module 10: Dockerfile
A Dockerfile contains all the instructions, e.g., the Linux commands, needed to install and configure the software. Writing a Dockerfile, as we already know, is the primary way of generating a Docker image. When we use the build command to create an image, it can refer to a Dockerfile on our path or at a URL, such as a GitHub repository.
Instructions: The instructions in a Dockerfile are executed in the same order as they are found in the Dockerfile.
In order to create a Dockerfile, follow these steps:
- Create a new file and name it "Dockerfile".
- Write the below script inside this Dockerfile:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
- Save this file. This Dockerfile script pulls the Ubuntu base image, updates the package lists, and installs the Apache2 server.
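To build and run an image from this Dockerfile, a quick sketch (the image and container names are illustrative; since the Dockerfile defines no ENTRYPOINT or CMD, Apache is started explicitly in the foreground):
docker build -t my-apache2 .                       # build the image from the Dockerfile in the current directory
docker run -d -p 8080:80 --name apache-demo my-apache2 apachectl -D FOREGROUND   # serve on host port 8080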
There can also be comments starting with the # character in the Dockerfile.
The following table contains the list of instructions available:
| Instruction | Description |
|---|---|
| FROM | The first instruction in the Dockerfile; identifies the image to inherit from |
| MAINTAINER | Provides visibility and credit to the author of the image (now deprecated in favor of LABEL) |
| RUN | Executes a Linux command to install and configure software |
| ENTRYPOINT | The final script or application used to bootstrap the container, making it an executable application |
| CMD | Uses a JSON array to provide default arguments to the ENTRYPOINT |
| LABEL | Contains name/value metadata about the image |
| ENV | Sets environment variables |
| COPY | Copies files into the container |
| ADD | An alternative to COPY that can also fetch remote URLs and extract archives |
| WORKDIR | Sets the working directory for RUN, CMD, ENTRYPOINT, COPY, and/or ADD instructions |
| EXPOSE | Documents the ports on which the container listens |
| VOLUME | Creates a mount point |
| USER | Sets the user (name or UID) used to run subsequent RUN, CMD, and/or ENTRYPOINT instructions |
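A sketch tying several of these instructions together (the file layout and port are illustrative):
FROM ubuntu
LABEL maintainer="you@example.com"
ENV APP_HOME=/app
WORKDIR /app
COPY . .
RUN apt-get update && apt-get install -y python3
EXPOSE 8000
ENTRYPOINT ["python3", "-m", "http.server"]
CMD ["8000"]
Here CMD supplies the default port argument to the ENTRYPOINT, so docker run <image> 9000 could override it at startup.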
Module 11: Docker Networking
Docker networking allows containers to communicate with each other and with the outside world. Understanding Docker networking is crucial for building multi-container applications.
Network drivers
Bridge
It is the default network driver used for standalone container communication.
Host
It removes network isolation between the container and the Docker host, so the container uses the host's network stack directly.
Overlay
An overlay is used for connecting multiple Docker daemons together.
Macvlan
Used for assigning a MAC address to a container.
None
Disables all networking for a container.
Listing and inspecting networks
List networks
docker network ls
Inspect a network
docker network inspect bridge
Creating custom networks
docker network create --driver bridge my-network
Connecting containers to networks
1. Connect a running container to a network:
docker network connect my-custom-network my-container
2. Disconnect a container from a network:
docker network disconnect my-custom-network my-container
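As a quick sketch of why user-defined networks matter (the names app-net and web are illustrative): containers on the same user-defined bridge network can reach each other by container name through Docker's built-in DNS.
docker network create --driver bridge app-net
docker run -d --name web --network app-net nginx
docker run --rm --network app-net alpine ping -c 2 web   # "web" resolves to the nginx container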
Module 12: Docker Volumes and Management
By default, any data created inside a Docker container is lost when the container is removed. To overcome this, Docker provides volumes and bind mounts, which retain data and allow it to be shared with other containers.
Types of data persistence in Docker
Volumes
Managed by Docker and stored in a part of the host filesystem.
Bind mounts
File or directory on the host machine mounted into a container.
tmpfs mounts
Stored in the host system’s memory only.
Working with Docker volumes
Below are a few Docker volume commands tailored for an Intellipaat-named volume:
Create a volume
docker volume create intellipaat-vol
List volumes
docker volume ls
Inspect the volume
docker volume inspect intellipaat-vol
Remove the volume
docker volume rm intellipaat-vol
Run a container with the volume:
docker run -v intellipaat-vol:/app/data nginx
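A small sketch showing that data in the named volume outlives any single container (the file name test.txt is illustrative):
docker run --rm -v intellipaat-vol:/app/data alpine sh -c 'echo hello > /app/data/test.txt'
docker run --rm -v intellipaat-vol:/app/data alpine cat /app/data/test.txt   # prints "hello"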
Working with Bind Mounts
To use bind mounts with Docker:
Create a directory on your host system
mkdir -p /path/to/intellipaat-data
Run a container with the bind mount:
docker run -v /path/to/intellipaat-data:/app/data nginx
Inspect the mounted directory by entering the container:
docker exec -it <container_id> bash
Stop and remove the container if needed:
docker stop <container_id>
docker rm <container_id>
Delete the host directory to clean up
rm -rf /path/to/intellipaat-data
Module 13: Docker Compose
Docker Compose is a tool that is used for running and managing multiple Docker containers at once. We just need to create one simple YAML file and define all the requirements like the services, networks, and storage configurations that your app needs to run properly.
Docker Compose working: The Docker Compose file
Docker Compose is driven by a configuration file called docker-compose.yml, which uses the YAML format to define the environment configuration your application needs to run seamlessly.
Syntax of Compose file:
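The original file isn't reproduced here, so the following is a minimal reconstruction matching the description below (the password value is illustrative):
version: "3.8"
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example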
This Compose file sets up a simple Nginx web server and a PostgreSQL database with basic configurations.
version: Specifies the version of Docker Compose syntax to use.
services: Defines the different containers (services) in your application.
web: A service using the nginx image, exposing port 80.
db: A service using the Postgres image with an environment variable to set the database password.
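With this file saved in the project directory, one command brings up both services (use docker-compose on older installs):
docker compose up -d    # start the web and db services in the background
docker compose down     # stop and remove them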
Module 14: Docker Swarm
Docker Swarm is a tool that helps you manage and control a cluster of Docker containers across multiple computers, machines, or servers. It treats a group of Docker hosts as a single virtual host, which allows Docker experts to deploy and scale their applications easily, ensuring that containers keep running across all machines and that the application stays live even if one machine fails. If you want to learn more, read the Docker Swarm guide.
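A minimal sketch of getting a swarm going (the service name and replica count are illustrative):
docker swarm init                                               # make this host a swarm manager
docker service create --name web --replicas 3 -p 80:80 nginx    # run 3 nginx replicas across the swarm
docker service ls                                               # check the service status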
Module 15: Docker Machine
Docker Machine is a command-line utility used to manage one or more local machines, which usually run in separate VirtualBox instances, or remote machines hosted on cloud providers, e.g., Amazon Web Services and Microsoft Azure.
How to Create a Local Machine?
Docker Toolbox comes with a default Docker machine named default. This is just to give us a taste of it and to get us started, but we may need multiple machines later on to segment the different containers that are running. To do that, we can use the following command:
docker-machine create -d virtualbox intellipaat
This command will create a local machine using a VirtualBox image and name it intellipaat.
Listing Machines
If we want to list the machines that we have configured, we can run the following command:
docker-machine ls
Starting and Stopping Machines
We can start the Docker machine that we have created by using the following command:
docker-machine start intellipaat
Now that the Docker machine has started, we have to configure the Docker command line so that it knows which Docker daemon to interact with. We can use the following commands to do this:
docker-machine env intellipaat
eval "$(docker-machine env intellipaat)"
Now, to stop a machine, use the following command:
docker-machine stop intellipaat
Note: These start and stop commands will start and stop our VirtualBox VMs, and we can watch the state of the VMs changing while we run the commands if we have the VirtualBox manager open.
Module 16: Career for Docker Experts
Docker experts can pursue various career roles. A few of them are listed below:
- You can become a DevOps Engineer as this role’s responsibility is to automate the deployment of applications where Docker is used.
- You can look to become a Cloud Architect and design cloud-based systems using Docker to make apps scalable and easy to manage.
- Site Reliability Engineer (SRE) will be a good choice for Docker experts. Your job will be to keep applications running smoothly where you can use your skills in Docker.
- Being a Software Engineer or a Developer, your role will be to create code and package software in Docker containers to make sure it runs consistently across different systems.
Conclusion
In this Docker tutorial, we provided a detailed understanding of Docker concepts, such as its workflow, the need for it, and useful Docker commands, along with Docker images and containers.
While we covered quite a bit of Docker's core functionality here, there is still a lot more to learn. If you are looking forward to learning Docker, you should go for the structured DevOps Training provided by Intellipaat, where you will work on various case-based scenarios, along with exhaustive topic-wise assignments, hands-on sessions, and various industry-based projects that will prepare you to grab a DevOps job in any reputed MNC.