Most of us, especially developers and administrators working in the software industry, are already familiar with the term Docker. Docker has become a standard in the IT industry for packaging, deploying, and running distributed applications with ease. Today, we will be learning about Docker end-to-end.
The aim of this Docker tutorial is to teach you the basics of Docker. Additionally, we will dive into further concepts such as virtualization, containerization, the need for Docker, Docker architecture, Docker installation, Docker images, and so on.
Let us learn Docker from scratch!
What is Docker?
Docker is a containerization platform used to package software, applications, or code into Docker containers. Docker enables seamless deployment to various environments and prevents the dependency mismatches that often arise between development and operations.
Docker simplifies the DevOps methodology by allowing developers to create templates called “images,” from which we can create lightweight, isolated runtime environments called “containers.” Docker makes things easier for software developers by giving them the capability to automate infrastructure, isolate applications, maintain consistency, and improve resource utilization. A question might arise: such tasks can also be done through virtualization, so why choose Docker over it? The short answer is that full virtualization is not as efficient for this purpose.
Why? We shall discuss this as we move along in this Docker tutorial.
To begin with, let us understand what virtualization is.
Containerization vs Virtualization
Virtualization uses a hypervisor to run multiple guest operating systems on a single physical host, each inside its own virtual machine. Containerization, in contrast, packages applications and their dependencies into isolated containers that all share the host OS kernel. Both approaches let us run multiple isolated workloads on one machine, but they do so very differently.
Now, what are the differences between containerization and virtualization? Let us check out the table below to understand the differences.
| Virtualization | Containerization |
|---|---|
| Virtualizes hardware resources | Virtualizes only OS resources |
| Requires a complete OS installation for every VM | Installs containers on a single host OS |
| A kernel is installed for every virtualized OS | Uses only the kernel of the underlying host OS |
| Heavyweight | Lightweight |
| Limited performance | Native performance |
| Fully isolated | Process-level isolation |
In containerization, all containers share the same host OS kernel. A separate container is created for each application, which makes containers fast to start and light on resources, unlike virtualization, where every virtual machine needs its own kernel and consumes a large share of the host's resources.
We can easily figure out the difference from the architecture of containers given below:
In order to create and run containers on our host OS, we require software that enables us to do so. This is where Docker comes into the picture!
Now, in this tutorial, let us understand the Docker architecture.
Docker Architecture
Docker uses a client-server architecture. The Docker client consists of Docker build, Docker pull, and Docker run. The client approaches the Docker daemon which further helps in building, running, and distributing Docker containers. Docker client and Docker daemon can be operated on the same system; otherwise, we can connect the Docker client to the remote Docker daemon. Both communicate with each other by using the REST API, over UNIX sockets or a network.
Learn from the Docker architecture diagram given below. The main components of the Docker architecture are:
- Docker Client
- Docker Host
- Docker Images
- Docker Container
- Docker Registry
Docker Client
The Docker Client serves as the primary interface for interacting with the Docker Engine. It provides a user-friendly command-line interface (CLI) that enables users to execute various operations, such as building, running, managing, and deleting Docker images and containers. The Docker Client acts as a bridge between the user and the underlying Docker Engine, translating user commands into instructions that the Docker Engine can understand and execute.
Key Functions of the Docker Client
Manage Containers: Run, create, stop, start, and delete containers effortlessly with intuitive commands. For instance, docker run, docker stop, and docker rm are some basic commands to manage containers.
Work with Images: Pull, push, build, and manage Docker images using commands like docker pull, docker push, docker build, and docker rmi. These commands help you handle the blueprints for your containers.
Control Networks and Volumes: Create and manage networks for your containers and handle data persistence using volumes. Commands like docker network and docker volume come in handy for these tasks (a short sketch follows this list).
View System Information: Check the status of your Docker environment, see running containers, monitor resources, and troubleshoot using commands like docker ps, docker info, and docker stats.
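As a minimal sketch of the networks-and-volumes point above, the commands below create a user-defined network and a named volume and attach both to a container. The names my-network, my-volume, and web are made-up placeholders, and the nginx image is used purely as a convenient example:
docker network create my-network
docker volume create my-volume
docker run -d --name web --network my-network -v my-volume:/usr/share/nginx/html nginx
docker network ls
docker volume ls
The last two commands simply list the networks and volumes so that we can confirm they were created.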
Docker Host
In the Docker host, we have a Docker daemon and Docker objects such as containers and images. First, let us understand the objects on the Docker host, and then we will proceed toward the functioning of the Docker daemon.
- Docker objects:
- Docker image: A template that can be used for creating Docker containers. It includes the instructions for installing and configuring the software the container needs.
- Docker container: A running instance of a Docker image. Unlike a virtual machine, it is an isolated process that consists of the entire package required to run an application.
- Docker daemon:
- The Docker daemon listens for Docker API requests and manages Docker objects such as images, containers, volumes, etc. When a user issues a build, the daemon builds the image and stores it locally, from where it can be pushed to a registry.
- In case we do not want to create an image, we can simply pull one from Docker Hub that was built by some other user. When we want to create a running instance of our Docker image, we issue a run command, which creates a Docker container.
- A Docker daemon can also communicate with other daemons to manage Docker services.
Docker Registry
- The Docker registry is a repository for Docker images that are used for creating Docker containers.
- We can use a local or private registry or Docker Hub, which is the most popular public registry for Docker images.
Now that we are through the Docker architecture and understand how Docker works, let us get started with the installation and workflow of Docker and implement important Docker commands.
How to Install Docker on Windows and MacOS?
For installing Docker on Windows and macOS, the process is quite simple: download and install Docker Desktop from the official Docker website. The installer bundles the Docker client, Docker Compose, and the other tooling needed to get started.
Installation on MAC OS:
Step 1: Check System Requirements
Before installing Docker, ensure your Mac meets the necessary requirements. You need MacOS Yosemite 10.10.3 or newer.
Step 2: Download Docker Desktop
Visit the official Docker website (docker.com) and navigate to the Docker Desktop page. Click on the download button for MacOS.
Step 3: Install Docker
Once the download completes, open the Docker.dmg file. Drag the Docker icon to the Applications folder to install Docker on your Mac.
Step 4: Launch Docker
Locate Docker in your Applications folder and double-click to open it. You might be asked to grant permissions during the first launch; follow the on-screen instructions.
Step 5: Login to Docker
After launching Docker, you’ll be asked to sign in using your Docker Hub credentials. You can create one for free if you don’t have an account.
Installation on Windows:
Step 1: Visit https://docs.docker.com/desktop/install/windows-install/ and download the Docker Desktop installer. Check the system requirements before you proceed with the installation.
Step 2: Launch the installer that you have just downloaded, follow the prompts, and click OK.
Installation on Linux:
To install Docker on the Ubuntu distribution, first, we need to update its packages. To do so, type the below command in the terminal:
sudo apt-get update
As we are running this command with sudo, we may be asked for a password after hitting Enter. Provide the password and then follow the steps given further in this Docker tutorial.
Now, we must install its recommended packages. For that, just type the below-mentioned command:
sudo apt-get install docker.io -y
The Docker installation process is complete now. Use the below-mentioned command to verify if Docker is installed correctly.
docker --version
You will get an output such as “Docker version <followed by the version>”. This means that Docker has been installed successfully.
Docker Workflow
Docker’s typical local workflow allows users to create images, pull images, publish images, and run containers.
Let us understand this typical local workflow from the diagram below:
The Dockerfile, here, contains the configuration and the name of the base image pulled from a Docker registry such as Docker Hub. Building this file produces an image that includes the instructions about container configuration; alternatively, a ready-made image can simply be pulled from a Docker registry.
Let us understand this process in a little detailed way:
- It basically involves building an image from a Dockerfile that consists of instructions about container configuration or image pulling from a Docker registry.
- When this image is built in our Docker environment, then we should be able to run the image, which, further, creates a container.
- In our container, we can do any operations such as:
- Stopping the container
- Starting the container
- Restarting the container
- These runnable containers can be started, stopped, or restarted just like how we operate a virtual machine or a computer.
- Whatever manual changes we make inside a container, such as configurations or software installations, can be committed to create a new image, which can later be used for creating containers.
- At last, when we want to share our image with our team or the world, we can easily push our image into the Docker registry.
- One can easily pull this image from the Docker registry using the pull command (a command sketch of this workflow follows the list).
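As a rough sketch of this workflow, the commands below build an image from a Dockerfile in the current directory, run it, commit a modified container to a new image, and push that image to a registry. All of the names (my-image, my-container, myuser) are placeholders chosen for illustration:
docker build -t my-image .
docker run -it -d --name my-container my-image
docker commit my-container myuser/my-image:v2
docker push myuser/my-image:v2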
Running a Docker Container
After installing Docker, we should be able to run containers. Initially, we don’t have any containers, so we need to create our first one, which is only possible if we have a Docker image. If we don’t have an image yet, we can run the simple “hello-world” container to cross-check if everything is working properly. For that, run the below-mentioned command:
docker run hello-world
Output:
Hello from Docker!
This message shows that the installation appears to be working correctly. Now, let us move forward in this Docker tutorial to understand many other Docker operations.
Pulling an Image from the Docker Registry
The easiest way to obtain an image to build a container from is to find an already prepared image on Docker Hub, Docker’s official registry.
We can choose from images of various common software, such as MySQL, Node.js, Java, Nginx, or WordPress, on Docker Hub, as well as from the hundreds of open-source images contributed by people around the globe.
For example, if we want to download the image for MySQL, then we can use the pull command:
docker pull mysql
In case we want the exact version of the image, then we can use:
docker pull mysql:5.5.45
To check the output, try the below-mentioned command:
docker images
When we run this command, we can observe the created image with the repository name mysql.
Output:
REPOSITORY   TAG      IMAGE ID       VIRTUAL SIZE
<none>       <none>   4b9b8b27fb42   214.4 MB
mysql        5.5.45   0da0b10c6fd8   213.5 MB
Now, in this Docker Tutorial, we shall customize an image manually by installing software or by changing configurations. After completion, we can run the Docker commit command to create an image of the running container.
Running a container using MySQL image
In order to run a Docker container, all we need to do is use the run command followed by our local image name or the one we retrieved from the Docker hub.
Usually, a Docker image requires some additional environment variables, which can be specified with the -e option. For long-running processes, such as daemons, we also need to use the -d option, and -it attaches an interactive terminal.
To run a container such as “intellipaat_container”, we need to run the command shown below, substituting the container name and the image name:
docker run -it -d --name <container_name> <image_name>
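For instance, the official mysql image expects a root password to be supplied through an environment variable, so a concrete run could look like the following. The container name and password here are only illustrative placeholders:
docker run -it -d --name intellipaat_container -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql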
To list all the running containers:
docker ps
This command lists all of our running containers, along with the image each one was created from, the command that is run, the ports that the software is listening on, and the name of the container.
We can figure out, from the above output, that the name of the container is eloquent_darwin, which is an auto-generated one.
When we want to explicitly name the container, the best practice is to use the --name option, which assigns the name of our choice at container startup:
docker run -it -d --name intellipaat-container ubuntu
We can easily name our container with this command.
Stopping and Starting Containers
Once we have our Docker container up and running, we can stop it by typing the docker stop command with the container name as shown below:
docker stop eloquent_darwin
In case we want to run our container again from the state in which we shut it down, we can use the start command as our entire container is written on a disk:
docker start eloquent_darwin
Now, let us see how we can tag an image.
Tagging an Image
Once we have our image up and running, we can tag it with a username, image name, and version number before we push it into the repository by using the docker tag command:
docker tag centos docker6767/mycentos
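If we also want an explicit version number, it can be appended after a colon; the tag below is just an illustrative example:
docker tag centos docker6767/mycentos:1.0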
Now, in this Docker Tutorial, let us see how we can push an image into the repository.
Pushing an Image into the Repository
Now, we are ready to push our image into Docker Hub so that anyone can use it, or into a private repository if we want to restrict access.
- First, go to https://hub.docker.com/ and create a free account
- Next, log in to the account using the login command:
docker login
- Then, input the username, password, and email address that we are registered with
- Finally, push our image, with our username, image, and version name, by using the push command (an example follows this list).
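Using the image tagged earlier, the push command could look like this; the username and repository name come from the earlier, illustrative tag:
docker push docker6767/mycentos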
Within a few minutes, we will receive a message about our repository stating that our repository has been successfully pushed.
When we go back to our Docker hub account, we will see that there is a new repository as shown below:
Docker Engine
Docker Engine is like a control center that manages containers on your system. It consists of:
- Daemon Process (Server): The Docker Engine runs as a background process called a daemon. This daemon continuously listens for requests from clients and manages the containers, handling tasks like creating, running, stopping, and deleting containers.
- REST API: Docker Engine provides a RESTful API that defines interfaces, allowing programs to communicate with the daemon. This API acts as a language for other programs to talk to the Docker daemon, giving instructions on what actions to perform (see the example after this list).
- Command Line Interface (CLI) Client: The Docker CLI is the tool you interact with to control the Docker daemon. It’s the bridge between you and the daemon, letting you send commands and instructions to manage containers through a simple command-line interface.
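To make the REST API point concrete, the daemon can be queried directly over its local UNIX socket with curl. This is only a sketch; the socket path and the availability of curl’s --unix-socket option depend on your installation:
curl --unix-socket /var/run/docker.sock http://localhost/version
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
The first call returns the daemon’s version information, and the second returns the same list of running containers that docker ps shows, as JSON.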
Docker Commands
Listing Containers
We have already seen, in this Docker tutorial, how to list the running containers using the ps command, but now what we want is to list all the containers, regardless of their state. Well, to do that, all we have to do is add the -a option as shown below:
docker ps -a
Now, we can easily distinguish between which container we want to start with and which container we want to remove.
Removing Containers
After using a container, we would usually want to remove it rather than have it lying around consuming disk space.
We can use the rm command to remove a container as shown below:
docker rm eloquent_darwin
Removing Images
We already know how to list all the locally cached images by using the images command. These cached images can occupy a significant amount of space, so in order to free up some space by removing unwanted images, we can use the rmi command as shown below:
docker rmi centos
Exposing Ports
Use the below command to publish a container port on a specific host port; here, port 80 inside the container is mapped to port 81 on the host:
docker run -it -d --name container1 -p 81:80 ubuntu
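As an illustrative check of port publishing, the nginx image (used here purely as an example) can be mapped to a host port and then tested with curl:
docker run -d --name web-test -p 8080:80 nginx
curl http://localhost:8080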
Listing Processes
To display the processes running inside a container, we can use the docker top command, which is very similar to the top command in Linux.
docker top <container_name>
Executing Commands
To execute commands in a running container, we can use the exec command.
For example, if we want to list the contents of the root of the hard drive, we can use the exec command as shown below:
docker exec <container_name> ls /
We can gain access to the bash shell if we want an interactive root session inside the container, much like SSHing into a machine. To do so, we can use the following command:
docker exec -it <container_name> bash
Note: By default, the Docker client communicates with the Docker daemon over a local UNIX socket. When the daemon is exposed over a network, the connection should be secured with TLS so that the communication is encrypted.
Docker Run Command
The run command is one of the most complicated commands of all the Docker commands. By using this command, we can perform various tasks such as configuring security and managing network settings and system resources such as memory, file systems, and CPU. We can refer to the official Docker run reference to see and understand how to do all of the above, and more, by using the run command.
Dockerfile
A Dockerfile contains all the instructions, e.g., the Linux commands to install and configure the software. Dockerfile creation, as we already know, is the primary way of generating a Docker image. When we use the build command to create an image, it can refer to a Dockerfile available on our path or to a URL such as a GitHub repository.
Instructions:
The instructions in a Dockerfile are executed in the same order as they are found in the Dockerfile.
In order to create a Dockerfile, follow these steps:
- Create a new file and name it “Dockerfile”.
- Write the below script inside this Dockerfile.
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
- Save this file. This Dockerfile script pulls the Ubuntu base image, updates the package lists, and installs the Apache2 server. The commands to build and run it are shown below.
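Once the Dockerfile is saved, we can build an image from it and start a container from that image. The image and container names below are just illustrative placeholders:
docker build -t my-apache-image .
docker run -it -d --name my-apache-container my-apache-image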
There can also be comments starting with the # character in the Dockerfile.
The following table contains the list of instructions available:
| Instruction | Description |
|---|---|
| FROM | The first instruction in the Dockerfile; it identifies the base image to inherit from |
| MAINTAINER | Provides visibility as well as credit to the author of the image |
| RUN | Executes a Linux command to install and configure the image |
| ENTRYPOINT | The final script or application used to bootstrap the container and make it an executable application |
| CMD | Uses a JSON array to provide default arguments to the ENTRYPOINT |
| LABEL | Contains name/value metadata about the image |
| ENV | Sets environment variables |
| COPY | Copies files into the container |
| ADD | An alternative to COPY that can also fetch remote URLs and extract archives |
| WORKDIR | Sets the working directory for RUN, CMD, ENTRYPOINT, COPY, and ADD instructions |
| EXPOSE | Documents the ports on which the container listens |
| VOLUME | Creates a mount point |
| USER | Sets the user that RUN, CMD, and ENTRYPOINT instructions run as |
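As a minimal sketch of how ENTRYPOINT and CMD work together, the Dockerfile below always runs echo, while CMD only supplies a default argument that can be overridden on the docker run command line:
FROM ubuntu
ENTRYPOINT ["echo"]
CMD ["Hello from the default argument"]
Running docker run <image> prints the default message, whereas docker run <image> "some other text" replaces only the CMD part and echoes the new argument instead.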
Docker Machine
Docker machine is a command-line utility that is used to manage one or more local machines, which are usually run in separate VirtualBox instances, or remote machines that are hosted on Cloud providers, e.g., Amazon Web Services, Microsoft Azure, etc.
How to Create a Local Machine?
Docker Toolbox comes with a default Docker machine named default. This is just to give us a taste of it and to get us started, but we may need multiple machines later on to segment the different containers that are running. To do that, we can use the following command:
docker-machine create -d virtualbox intellipaat
This command will create a local machine named intellipaat using the VirtualBox driver.
Listing Machines
If we want to list the machines that we have configured, we can run the following command:
docker-machine ls
Starting and Stopping Machines
We can start the Docker machine that we have created by using the following command:
docker-machine start intellipaat
Now that the Docker machine has started, we have to point the Docker command line at the daemon running inside it. We can use the following commands to do this:
docker-machine env intellipaat
eval "$(docker-machine env intellipaat)"
Now, to stop a machine, use the following command:
docker-machine stop intellipaat
Note: These start and stop commands will start and stop our VirtualBox VMs, and we can watch the state of the VMs changing while we run the commands if we have the VirtualBox manager open.
Conclusion
In this Docker tutorial, we provided a detailed understanding of Docker concepts, such as the need for Docker, its workflow, and useful Docker commands, along with Docker images and containers.
While, here, we covered quite a bit of Docker’s core functionality, there is still a lot to know. If you are looking forward to learning Docker, then you must go for a structured DevOps Training provided by Intellipaat, where you will work on various case-based scenarios, along with exhaustive topic-wise assignments, hands-on sessions, and various industry-based projects that will prepare you to grab a DevOps job in any reputed MNC.
If you are planning to enter the DevOps domain or to upskill yourself in it, then you should go for this DevOps Certification Training, as it will help you understand the most important tools and services that must be learned and practiced to become a successful and productive team member at your workplace.
We hope this tutorial helps you build your DevOps knowledge. If you are looking to learn DevOps in a systematic manner with expert guidance and support, then you can enroll in our DevOps course.
Happy learning! 😊