Docker! Most of you working in the software industry must be familiar with this term, especially developers and administrators. Docker has become the de facto standard in the IT industry for packaging, deploying, and running distributed applications with ease. Today, we will learn Docker end-to-end in this Docker tutorial.
In this Docker tutorial, we will walk you through the main concepts, from virtualization and the need for Docker to what Docker is, its architecture, installation, Docker images, and much more.
Here we have the list of topics if you want to jump into a specific one:
- Introduction to Docker
- Docker Architecture
- Getting started
- Typical Workflow
- Docker Commands
- Docker Machine
Here you will also learn the basics of Docker and its key features.
Docker is a platform for containerizing your software: you can easily build your application, package it together with its dependencies into a container, and then ship that container to run on other machines.

Docker simplifies the DevOps methodology by allowing developers to create templates called images, from which you can create lightweight, virtual-machine-like environments called containers. Docker makes things easier for software teams by letting them automate infrastructure, isolate applications, maintain consistency, and improve resource utilization. You might wonder: such tasks can also be done through virtualization, so why choose Docker over it? The answer is that virtualization turned out to be far less efficient. How? We shall discuss this as we move along this Docker tutorial.
Now that we have understood what Docker is, let's understand what virtualization is.
What is Virtualization?
Virtualization refers to running a guest operating system on top of a host operating system, allowing developers to run multiple OSes in different VMs while all of them share the same host hardware, thereby eliminating the need for extra hardware resources.
When virtual machines were introduced, they helped the industry in many ways:
- Enabling multiple operating systems on the same machine.
- It was cheaper than earlier approaches because of the smaller infrastructure footprint.
- It was easy to recover from failures and to perform maintenance.
- Faster provisioning of applications and resources required for the tasks.
- Increase in IT productivity, efficiency, and responsiveness.
Let's look at its architecture and understand the issues with it.

From the VM architecture above, you can see that three guest operating systems, acting as virtual machines, run on a single host operating system. In virtualization, the process of manually reconfiguring hardware and firmware and installing a new operating system can be entirely automated; all these steps are stored as data in files on a disk.
Virtualization lets you run your applications on fewer physical servers.
In virtualization, each application and operating system lives in a separate software container called a VM. VMs are completely isolated, while all computing resources such as CPUs, storage, and networking are pooled together and delivered dynamically to each VM by a piece of software called a hypervisor.
However, running multiple VMs on the same host degrades performance: each guest OS has its own kernel, libraries, and dependencies running on top of a single host OS, which consumes a large share of resources such as the processor and hard disk.

So, running multiple virtual machines over a host OS proved unstable and led to poor performance. Also, VMs take a long time to boot, which hurts efficiency for real-time applications. To overcome these limitations, containerization was introduced.
How? That we shall discuss in our further topics in this docker tutorial.
First, let's understand what exactly containerization is.
What is Containerization?
Containerization is a technique that brings virtualization up to the operating-system level. In containerization, you virtualize operating-system resources. It is more efficient because there is no guest operating system consuming host resources: containers use only the host's operating system and share the relevant libraries and resources only when required. The binaries and libraries a container needs run directly on the host kernel, leading to faster processing and execution.

In a nutshell, containerization is a lightweight virtualization technology and an alternative to hypervisor-based virtualization: bundle any application in a container and run it without worrying about dependencies, libraries, and binaries.
So, if we look into its advantages:
- Containers are small and lightweight because they share the same OS kernel.
- They boot up in a fraction of a second.
- They deliver high performance with lower resource utilization.
Now let’s understand the difference between Containerization and Virtualization, in this Docker Tutorial.
Containerization vs Virtualization
Now that we have been introduced to containerization and virtualization, note that both let you run multiple isolated workloads on a single host machine.
Let's check the table below to understand the differences.
| Virtualization | Containerization |
|---|---|
| Virtualizes the hardware resources | Virtualizes only the OS resources |
| Requires a complete OS installation for every VM | Installs containers only on top of the host OS |
| A kernel is installed for every virtualized OS | Only the underlying host operating system's kernel is used |
In containerization, all containers share the same host operating system, and multiple containers are created for each type of application, making them faster without wasting resources, unlike virtualization, where a kernel is required for every OS and a lot of host resources are consumed.
You can easily figure out the difference from the architecture of containers below:
To create and run containers on your host operating system, you need software that enables you to do so.
This is where Docker comes into the picture!
Now, in this Docker tutorial, let's understand the Docker container architecture.
Docker uses a client-server architecture. The Docker client (the docker command, with subcommands such as build, pull, and run) talks to the Docker daemon, which does the work of building, running, and distributing Docker containers. The client and the daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon; the two communicate using a REST API, over UNIX sockets or a network interface.
The basic architecture in Docker consists of 3 parts:
- Docker Client
- Docker Host
- Docker Registry
Docker Client

- It is the primary way many Docker users interact with Docker.
- It uses the command-line utility or other tools that use the Docker API to communicate with the Docker daemon.
- The Docker client can communicate with more than one daemon.
In the Docker Host, we have the Docker daemon, containers, and images.
First, let's understand the objects on the Docker host; then we will proceed to the functioning of the Docker daemon.
- What is a Docker image: A Docker image is a kind of recipe or template that can be used to create Docker containers; it includes the steps for installing and configuring the necessary software.
- What is a Docker container: A running instance of a Docker image, containing the entire package required to run an application. It behaves much like a lightweight virtual machine, but shares the host kernel.
Docker Daemon

- The Docker daemon listens for Docker API requests.
- It manages Docker objects such as images, containers, and volumes. When instructed, the daemon builds an image based on the user's input and saves it in the registry.
- If you don't want to create an image, you can simply pull one from Docker Hub (built by some other user).
- When you want a running instance of your Docker image, you issue a run command, which creates a Docker container.
- The Docker daemon can also communicate with other daemons to manage Docker services.
Docker Registry

- A Docker registry is a repository for the Docker images used to create Docker containers.
- You can use a local/private registry or Docker Hub, the most popular public registry.
Now that we understand how Docker works, let's get started with Docker's installation, workflow, and important Docker commands.
Installing Docker on Windows and Mac is quite simple: just download and install Docker Toolbox from https://docs.docker.com/toolbox/, which includes the Docker client, Docker Machine, Compose (Mac only), Kitematic, and VirtualBox.
In the case of Linux, there are several steps you need to follow; let's check them.
In order to install docker on Ubuntu box, first, we need to update its packages.
Use the below command on your terminal:
sudo apt-get update
Since we are running this command with sudo, after you hit Enter it will ask for your password; enter the password and then follow the further steps in this Docker tutorial.
Before installing Docker, we must install its recommended packages. Just type the command below:
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
Press "y" when prompted to continue. With that, we have installed the prerequisites for Docker.

Let's move forward in this Docker tutorial and install the Docker engine:

sudo apt-get install docker-engine

Your Docker installation is now complete.
Now use the command below to start Docker and verify that it was installed correctly:
sudo service docker start
You will get an output like: start: Job is already running: docker
This means your Docker has been started successfully.
Running a Container
After installing Docker, you should be able to run containers. If the image for the container you want to run is not available locally, Docker will download it from Docker Hub, then build and run the container.
You can run a simple hello-world container to check that everything is working properly:
docker run hello-world
You should get an output like this:

Hello from Docker!
This message shows that your installation appears to be working correctly.
Now let's move forward in this Docker tutorial to understand the Docker workflow.
Typical Local Workflow
Docker's typical local workflow allows users to create images, pull images, publish images, and run containers.
Let’s understand this typical local workflow from the diagram below:
Typical local workflow
The Dockerfile here contains the configuration instructions from which an image is built; alternatively, an image can be pulled from a Docker registry such as Docker Hub.
Let's understand this process in a bit more detail:
- It basically involves building an image from a Dockerfile which consists of instructions about container configuration or image pulling from a Docker registry like Docker hub.
- When this image is built in your docker environment, you should be able to run the image which further creates a container.
- In your container, you can perform operations such as stopping, starting, and restarting it, just as if you were operating a virtual machine or a computer.
- Any manual changes made inside a container, such as configuration changes or software installations, can be committed to make a new image, which can later be used to create containers.
- At last, when you want to share your image with your team or world, you can easily push your image into a Docker Registry.
- One can easily pull this image from the Docker Registry using the pull command.
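The workflow above can be sketched as a short sequence of Docker commands (the image name `myapp` and the Docker Hub username `myuser` are hypothetical placeholders, not taken from this tutorial):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Run a container from the image (detached)
docker run -d --name myapp-1 myapp

# Stop, start, or restart the container like a small VM
docker stop myapp-1
docker start myapp-1
docker restart myapp-1

# Commit manual changes inside the container as a new image
docker commit myapp-1 myapp:v2

# Tag and push the image to a registry, then pull it elsewhere
docker tag myapp:v2 myuser/myapp:v2
docker push myuser/myapp:v2
docker pull myuser/myapp:v2
```

Running these commands requires a working Docker daemon; each step is covered in detail in the sections that follow.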
Pull an Image from Docker Registry
The easiest way to obtain an image is to find an already prepared one on the Docker official website and build a container from it.
You can choose from various common software such as MySQL, Node.js, Java, Nginx, or WordPress on Docker Hub, as well as hundreds of open-source images made by people across the globe.
For example, to download the image for MySQL, you can use the pull command:
docker pull mysql
In case you want an exact version of the image, you can use:
docker pull mysql:5.5.45
REPOSITORY   TAG      IMAGE ID       VIRTUAL SIZE
<none>       <none>   4b9b8b27fb42   214.4 MB
mysql        5.5.45   0da0b10c6fd8   213.5 MB
When you list your images, you may observe an image with a repository name of <none>. To give your image a repository identity, build it with the -t option:
docker build -t test-intellipaat .
After -t you can add any name of your choice to identify your repository.
REPOSITORY         TAG      IMAGE ID       VIRTUAL SIZE
test-intellipaat   latest   4b9b8b27fb42   214.4 MB
mysql              5.5.45   0da0b10c6fd8   213.5 MB
Now, in this Docker tutorial, we shall customize an image manually by installing software or changing configurations; after that, you can run the docker commit command to create an image of the running container.
Running an Image
In order to run a docker image, all you need to do is use the run command followed by your local image name or the one you retrieved from Docker hub.
Usually, a Docker image requires some additional environment variables, which can be specified with the -e option. For long-running processes such as daemons, you also need to use the -d option.
To start the test-intellipaat image, run the command shown below, which configures the MySQL root user's password, as documented in the Docker Hub mysql repository's documentation:

docker run -e MYSQL_ROOT_PASSWORD=root+1 -d test-intellipaat
To check that the container is running, use the docker ps command:

docker ps

This command lists all of your running containers: the image they were created from, the command that was run, any ports the software is listening on, and the name of the container.

CONTAINER ID   IMAGE              COMMAND                  PORTS      NAMES
30645F307114   test-intellipaat   "/entrypoint.sh mysql"   3306/tcp   shubham_rana
You can figure out from the above output that the name of the container is shubham_rana.
This name of the container is auto-generated.
When you want to name the container explicitly, the best practice is to use the --name option, which assigns your name at container startup:

docker run --name intellipaat-sql -e MYSQL_ROOT_PASSWORD=root+1 -d test-intellipaat
You can easily name your container with this command.
Stopping and starting containers
Once you have your Docker container up and running, you can stop it by using the docker stop command followed by the container name, as shown below:
docker stop intellipaat-sql
Since the container's entire state was written to disk, if you want to run your container again from the state in which you shut it down, use the start command:
docker start intellipaat-sql
Now let’s see how you can tag an image.
Tagging an Image
Once you have your image up and running, you can tag it with a username, image name, and version number before pushing it to a repository, using the docker tag command:
docker tag intellipaat-sql javajudd/est-mysql:1.0
Now let’s see in this docker tutorial, how you can push an image to the registry.
Push Image to the Repository
Now you are ready to push your image to Docker Hub, either for the world to use or for just your team via a private repository. First, go to https://hub.docker.com/ and create a free account; then log in using the login command, which will prompt you for:

- The username
- The password
- The email address you registered with
Then push your image using the push command, with your username, image, and the version name.
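Putting this together with the tag created in the previous section, the login and push steps look like this:

```shell
# Log in to Docker Hub (prompts for your credentials)
docker login

# Push the tagged image to your Docker Hub repository
docker push javajudd/est-mysql:1.0
```

These commands require a Docker Hub account and a running Docker daemon.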
In a few minutes, you will receive a message stating that your repository has been successfully pushed.
When you go back to your Docker Hub account, you will see that there is a new repository as shown below:
You have already seen in this Docker tutorial how to list the running containers using the ps command, but what if you want to list all the containers, regardless of their state? To do that, just add the -a option as shown below:
docker ps -a
Now, you can easily decide which of the containers you want to start and which to remove.
Speaking of removing containers: after using a container, you will often want to remove it rather than leave it lying around consuming disk space.
You can use the rm command to remove a container as shown below:
docker rm intellipaat-sql
You already know how to list all the locally cached images using the images command. These cached images can occupy a significant amount of space, so in order to free up some space by removing unwanted images, you can use the rmi command as shown below:
docker rmi test-intellipaat
Now you know how to remove cached images, but what about the unwanted and unnamed images that you may end up generating during the debugging cycle of creating a new image? These images are denoted with the name of <none>. You can remove them all by using the following command.
docker rmi $(docker images -q -f dangling=true)
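On Docker 1.13 and later, the same cleanup of dangling images can be done with a single built-in command:

```shell
# Remove all dangling (untagged) images; asks for confirmation first
docker image prune
```

This is equivalent in effect to the rmi command above, but requires a newer Docker version.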
Knowing which ports are exposed by a container beforehand might make your work a lot easier and faster, for example, port 3306 for accessing a MySQL database or port 80 for accessing a web server. Using the port command, as shown below, you can display all the exposed ports.
docker port intellipaat-sql
To display the processes running in a container, you can use Docker's top command, much like the top command in Linux:
docker top intellipaat-sql
To execute commands in a running container, you can use the exec command. For example, to list the contents of the root of the file system, you can use exec as shown below:
docker exec intellipaat-sql ls /
If you wish to get a root shell inside the container (similar to SSHing into it), you can gain access to a bash shell using the following command:

docker exec -it intellipaat-sql bash

Note: Communication between the Docker client and the Docker daemon is not encrypted by default; if the daemon is exposed over a network, it should be secured with TLS.
The run command is one of the most complicated of all the Docker commands. Using it, you can perform various tasks such as configuring security and managing network settings and system resources like memory, filesystems, and CPU. You can consult the Docker documentation to see how to do all of the above and more with the run command.
A Dockerfile contains the instructions, for example the Linux commands, to install and configure software. Dockerfile creation, as you already know, is the primary way of generating a Docker image. When you use the build command to create an image, it can refer to a Dockerfile on your path or at a URL such as a GitHub repository.
The instructions in a Dockerfile are executed in the order in which they appear in the file.
A Dockerfile can also contain comments, which start with the # character.
The following table in this Docker tutorial contains the list of instructions available:

| Instruction | Description |
|---|---|
| FROM | The first instruction in the Dockerfile; it identifies the image to inherit from |
| MAINTAINER | Provides visibility of, and credit to, the author of the image |
| RUN | Executes a Linux command to install and configure software |
| ENTRYPOINT | The final script or application used to bootstrap the container, making it an executable application |
| CMD | Uses a JSON array to provide default arguments to the ENTRYPOINT |
| LABEL | Contains name/value metadata about the image |
| ENV | Sets environment variables |
| COPY | Copies files into the container |
| ADD | An alternative to COPY |
| WORKDIR | Sets the working directory for RUN, CMD, ENTRYPOINT, COPY, and/or ADD instructions |
| EXPOSE | The ports on which the container will listen |
| VOLUME | Creates a mount point |
| ONBUILD | Registers RUN, CMD, and/or ENTRYPOINT instructions to run later, when the image is used as a base |
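As an illustration, a small Dockerfile using several of these instructions might look like the sketch below (the base image, file names, and port are hypothetical examples, not taken from this tutorial):

```dockerfile
# Inherit from an official base image
FROM ubuntu:16.04
MAINTAINER you@example.com

# Install and configure software
RUN apt-get update && apt-get install -y nginx

# Metadata and environment variables
LABEL description="Example Nginx image"
ENV APP_ENV=production

# Copy content in and set the working directory
COPY index.html /usr/share/nginx/html/
WORKDIR /usr/share/nginx/html

# The port the container will listen on
EXPOSE 80

# Bootstrap the container, with default arguments via CMD
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
```

Building this with docker build -t would produce an image that starts Nginx in the foreground when run.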
Docker Machine is a command-line utility used to manage one or more local machines (which usually run in separate VirtualBox VMs) or remote machines hosted on cloud providers such as Amazon Web Services and Microsoft Azure.
How to Create a Local Machine?
Docker Toolbox comes with a default Docker machine named "default". This is just to give you a taste of Docker Machine and get you started, but later you may need multiple machines to segment the different containers that are running. To create one, you can use the following command:
docker-machine create -d virtualbox intellipaat
This command will create a local machine using a VirtualBox image named intellipaat.
If you want to list the machines that you have configured, you can run the following command:

docker-machine ls
Start and Stop Machines
You can start a Docker machine you have created using the following command:

docker-machine start intellipaat
Now that the Docker machine has started, you have to configure the Docker command line so that it knows which Docker daemon to interact with. You can use the following commands to do that:

docker-machine env intellipaat
eval "$(docker-machine env intellipaat)"
Now, to stop a machine, use the following command:
docker-machine stop intellipaat
From this Docker tutorial, you have gained a detailed understanding of Docker's workflow, the need for it, and useful commands for working with Docker images and containers.
While we covered quite a bit of Docker's core functionality here, there is still a lot more to know about Docker. If you are looking forward to learning more, you can go for the structured DevOps training provided by Intellipaat, where you will work on case-based scenarios along with exhaustive topic-wise assignments, hands-on sessions, and industry-based projects that take you from scratch to a top-notch understanding of DevOps.
If you are willing to enter the DevOps domain or upskill yourself in it, this DevOps certification training will help you understand the most important tools and services that you must learn and practice to become a successful and productive team member at your workplace.
Happy Learning! 😊