
Introduction

Docker! Most of you working in the software industry must be familiar with this term, especially developers and administrators. Docker has become a de facto standard in the IT industry when it comes to packaging, deploying, and running distributed applications with ease.

Docker Tutorial

In this Docker tutorial, we will dive into the main concepts, from virtualization to the need for Docker, along with its architecture, installation, Docker images, and much more.


Introduction to Docker

Docker is a platform used to containerize your software. With it, you can easily build your application, package it together with the dependencies it requires into a container, and ship that container to run on other machines. Docker simplifies the DevOps methodology by allowing developers to create templates called images, from which lightweight, VM-like environments called containers are created. Docker makes things easier for software industries by giving them the capability to automate infrastructure, isolate applications, maintain consistency, and improve resource utilization. You might be wondering whether such tasks can also be done through virtualization, and why to choose Docker over it. They can, but virtualization did not turn out to be as efficient an idea.

How? We shall discuss this as we move along in this tutorial.


So, let's first understand what virtualization is.

What is Virtualization?

When we talk of virtualization, it refers to importing a guest operating system onto your host operating system, allowing developers to run multiple OSes in different VMs while all of them run on the same host, thereby eliminating the need to provide extra hardware resources.
When virtual machines were introduced, they helped the industry in many ways:

  • Enabling multiple operating systems to run on the same machine.
  • Cheaper than the previous methods, due to a smaller and more compact infrastructure setup.
  • Easy recovery and maintenance in case of a failure state.
  • Faster provisioning of applications and of the resources required for tasks.
  • An increase in IT productivity, efficiency, and responsiveness.

Let's look at its working with an architecture diagram and understand what the issues with it were.

Virtualization architecture

From the above VM architecture, you can see that three guest operating systems, acting as virtual machines, are running on a host operating system. In virtualization, the process of manually reconfiguring hardware and firmware and installing a new operating system can be entirely automated; all these steps get stored as data in files on a disk.

Virtualization lets you run your applications on fewer physical servers.

In virtualization, each application and operating system live in a separate software container called a VM. VMs are completely isolated, while all the computing resources, such as CPUs, storage, and networking, are pooled together and delivered dynamically to each VM by a piece of software called a hypervisor.

But running multiple VMs on the same host led to degradation in performance: the guest operating systems each have their own kernel, libraries, and many dependencies running on top of a single host OS, and these take up a large share of resources such as the processor, the hard disk, and, especially, the RAM.

So, running multiple virtual machines over a host OS was quite unstable, leading to poor performance. Also, VMs take a long time to boot up, which affects efficiency in the case of real-time applications. In order to overcome such limitations, containerization was introduced.

How? That we shall discuss in our further topic.

First, let’s understand what exactly is containerization?

What is Containerization?

Containerization is a technique in which virtualization is brought to the level of the operating system. In containerization, you virtualize the operating system's resources. It is more efficient because there is no guest operating system consuming host resources: containers utilize only the host's operating system and share the relevant libraries and resources only when required. The required binaries and libraries of a container run directly on the host kernel, leading to faster processing and execution.
So, in a nutshell, containerization (containers) is a lightweight virtualization technology that acts as an alternative to hypervisor virtualization: bundle any application in a container and run it without worrying about dependencies, libraries, and binaries.

So, if we look into its advantages:

  • Containers are small and lightweight, as they share the same OS kernel.
  • They don't take much time to boot up (only a fraction of a second).
  • They offer high performance with lower resource utilization.
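You can see this kernel sharing in action yourself (assuming Docker is installed): a container reports the same kernel release as the host, because there is no guest kernel at all.

```shell
# Kernel release as seen by the host
uname -r

# Kernel release as seen from inside a minimal Alpine container;
# it is identical, because the container shares the host's kernel
docker run --rm alpine uname -r
```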


Now let’s understand the difference between Containerization and Virtualization.

Containerization vs Virtualization

As we have now been introduced to both containerization and virtualization, note that both let you run multiple operating systems on a host machine.

Now let’s check the below table to understand the difference.

Virtualization                                     Containerization
Virtualizes the hardware resources                 Virtualizes only the OS resources
Requires a complete OS installation for every VM   Installs containers only over a host OS
A separate kernel is loaded for every guest OS     Only the underlying host OS kernel is used
Heavyweight                                        Lightweight
Limited performance                                Native performance
Fully isolated                                     Process-level isolation

In the case of containerization, all the containers share the same host operating system. Multiple containers get created for each type of application, which makes them faster, without wasting resources, unlike virtualization, where a kernel is required for every guest OS, utilizing a lot of the host's resources.

You can easily figure out the difference from the architecture of containers below :

Container architecture

In order to create and run containers on your host operating system, you require software that enables you to do so.

This is where Docker comes into the picture!

Now let’s understand the Docker architecture.


Docker Architecture

Docker uses a client-server architecture. The Docker client consists of commands such as docker build, docker pull, and docker run. The client talks to the Docker daemon, which does the work of building, running, and distributing Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon; the two communicate with each other using a REST API, over UNIX sockets or a network interface.

Docker architecture diagram

The basic architecture in Docker consists of 3 parts:

  • Docker Client
  • Docker Host
  • Docker Registry

Docker Client

  • It is the primary way for many Docker users to interact with Docker.
  • It uses the command-line utility, or other tools that use the Docker API, to communicate with the Docker daemon.
  • A Docker client can communicate with more than one daemon.

In Docker Host, we have Docker Daemon, Containers and Images

First, let's understand the objects on the Docker host, and then we will proceed to the functioning of the Docker daemon.

Docker Objects

  • Docker Image: A Docker image is a kind of recipe or template that can be used for creating Docker containers; it includes the steps for installing the necessary software.
  • Container: A container is a running instance of a Docker image; it holds the entire package required to run an application and behaves much like a small virtual machine created from the instructions found within the image.

Docker Daemon

  • The Docker daemon listens for Docker API requests.
  • It manages Docker objects such as images, containers, and volumes. The daemon builds an image based on the user's input and can then save it in a registry.
  • In case you don't want to create an image, you can simply pull one from Docker Hub (built by some other user).
  • If you then want to create a running instance of your Docker image, issue a run command, which creates a Docker container.
  • A Docker daemon can also communicate with other daemons to manage Docker services.
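You can see both halves of this client-daemon architecture with a single command: docker version prints a Client section (the CLI you invoked) and a Server section (the daemon it reached over the socket or network).

```shell
# Prints a "Client:" block and a "Server:" block, one for each
# half of Docker's client-server architecture
docker version
```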

Docker Registry

  • A Docker registry is a repository for the Docker images used to create Docker containers.
  • One can use a local/private registry or Docker Hub, which is the most popular public registry.

Now that we understand how Docker works, let's get started with Docker's installation, workflow, and important commands.

Getting Started

Let’s start off from the installation of docker.

Installing Docker

For Mac and Windows, the installation is quite simple: all you have to do is download and install the Docker Toolbox from https://docs.docker.com/toolbox/, which includes the Docker Client, Docker Machine, Compose (Mac only), Kitematic, and VirtualBox.

In the case of Linux, there are several steps that you need to follow. Let's check them.

In order to run Docker on your Ubuntu box, first, we need to update its packages.
Use the below command on your terminal:

sudo apt-get update

As this command is run with sudo, after you hit Enter it will ask for your password; enter the password and then follow the further steps.

Now before installing docker, we must install its recommended packages, for that just type the below command:

sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

With this, we have successfully installed the prerequisites for Docker; press "y" when prompted to continue. Now let's move forward and install the Docker engine:

sudo apt-get install docker-engine

Your Docker installation is now complete.

Now use the below command to start the Docker service and verify that Docker was installed correctly:

sudo service docker start

You will get an output such as: start: Job is already running: docker

This means your Docker has been started successfully.

Running a Container

After installing Docker, you should be able to run containers. If you don't yet have the image for a container you want to run, Docker will download it from Docker Hub and then build and run the container from it.

You can run the simple hello-world container to cross-check that everything is working properly; run the command below:

docker run hello-world

OUTPUT:

Hello from Docker!

This message shows that your installation appears to be working correctly.

You must be getting an output like this.

Now let’s move towards understanding its workflow.

Typical Local Workflow

Docker’s typical local workflow allows users to create images, pull images, publish images and run the containers.

Let’s understand this typical local workflow from the diagram below:


Typical local workflow

The Dockerfile here holds the instructions from which an image is built, covering the container's configuration; alternatively, an image can be pulled from a Docker registry like Docker Hub.

Let’s understand this process in a little-detailed way:

  • It basically involves building an image from a Dockerfile which consists of instructions about container configuration or image pulling from a Docker registry like Docker hub.
  • When this image is built in your docker environment, you should be able to run the image which further creates a container.
  • In your container, you can do any operations such as:
    • You can stop the container.
    • You can start the container.
    • As well as you can restart the container.
  • These runnable containers can be started, stopped or restarted just like you’re operating a Virtual Machine or a Computer.
  • Any manual changes made in a container, such as configuration changes or software installations, can be committed to create a new image, which can later be used for creating containers.
  • At last, when you want to share your image with your team or world, you can easily push your image into a Docker Registry.
  • One can easily pull this image from the Docker Registry using the pull command.
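The whole workflow above can be sketched as a handful of commands; the image name "my-nginx" and the account name "myuser" here are illustrative placeholders, not names used elsewhere in this tutorial:

```shell
docker pull nginx                         # pull a prepared image from Docker Hub
docker run -d --name web nginx            # run a container from it
docker stop web                           # stop the container...
docker start web                          # ...and start it again, like a small VM
docker commit web my-nginx                # commit manual changes as a new image
docker tag my-nginx myuser/my-nginx:1.0   # tag it for sharing
docker push myuser/my-nginx:1.0           # publish it to a registry
```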

Pull an Image from Docker Registry

The easiest way to obtain an image is to find an already prepared one on Docker's official registry and build a container from it.

You can choose from various common software such as MySQL, Node.js, Java, Nginx, or WordPress on Docker Hub, as well as hundreds of open-source images made by people across the globe.

For example, if you want to download the image for MySQL, you can use the pull command:

docker pull mysql

In case you want the exact version of the image, then you can use:

docker pull mysql:5.5.45

Output:

REPOSITORY   TAG      IMAGE ID       VIRTUAL SIZE
<none>       <none>   4b9b8b27fb42   214.4 MB
mysql        5.5.45   0da0b10c6fd8   213.5 MB

When you run this command, you will notice that an image was created with a repository name of <none>. In order to add the identity of the repository, we will use the following command:

docker build -t test-intellipaat .

After -t you can add any name of your choice to identify your repository.

Output:

REPOSITORY         TAG      IMAGE ID       VIRTUAL SIZE
test-intellipaat   latest   4b9b8b27fb42   214.4 MB
mysql              5.5.45   0da0b10c6fd8   213.5 MB

Now you can customize the image manually, by installing software or changing configurations; after that, run the docker commit command to create an image of the running container.
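For example, assuming a running container named mysql-one (a hypothetical name; use whatever docker ps reports for yours), the commit step looks like this, with an illustrative name for the new image:

```shell
# Snapshot the running container's current state as a new image
docker commit mysql-one test-intellipaat:custom

# The new image now appears in the local image cache
docker images
```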

Running an Image

In order to run a Docker image, all you need to do is use the run command followed by the name of a local image or one you retrieved from Docker Hub.

Usually, a Docker image requires some added environment variables, which can be specified with the -e option. For long-running processes like daemons, you also need to use the -d option.

To start the test-intellipaat image, run the command shown below, which configures the MySQL root user's password, as documented in the Docker Hub mysql repository's documentation:

docker run -e MYSQL_ROOT_PASSWORD=root+1 -d test-intellipaat

To check the container running, use the command below:

docker ps

This command lists all of your running containers: the image they were created from, the command that was run, any ports on which software is listening, and the name of the container.

CONTAINER ID   IMAGE              COMMAND                  PORTS      NAMES
30645F307114   test-intellipaat   "/entrypoint.sh mysql"   3306/tcp   shubham_rana

You can figure out from the above output that the name of the container is shubham_rana.

This name of the container is auto-generated.

When you want to name the container explicitly, the best practice is to use the --name option, which assigns your name at container startup:

docker run --name intellipaat-sql -e MYSQL_ROOT_PASSWORD=root+1 -d test-intellipaat

You can easily name your container with this command.

Stopping and starting containers

Once you have your Docker container up and running, you can stop it using the docker stop command with the container name, as shown below:

docker stop intellipaat-sql

As your container's entire state was written to disk, if you want to run the container again from the state in which you shut it down, you can use the start command:

docker start intellipaat-sql

Now let’s see how you can tag an image.

Tagging an Image

Once you have your image up and running, you can tag it with a username, image name and the version number before you push it into the repository using the docker tag command:

docker tag intellipaat-sql javajudd/est-mysql:1.0

Now let’s see how you can push an image to the registry.

Push Image to the Repository

Now, you are ready to push your image to Docker Hub for the world, or just your team via a private repository, to use. First, go to https://hub.docker.com/ and create a free account; next, log in using the login command:

docker login
  • Input the username
  • Password
  • The email address you registered with

Then push your image using the push command, with your username, image, and the version name.
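Following the tag created in the previous section, the push might look like this ("javajudd" stands in for your own Docker Hub username):

```shell
# Push the tagged image to Docker Hub (requires a prior docker login)
docker push javajudd/est-mysql:1.0
```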

In a few minutes, you will receive a message about your repository stating that your repository has been successfully pushed.

When you go back to your Docker Hub account, you will see that there is a new repository as shown below:

Push Image to the Repository

Other Helpful Commands

List Containers:

You already know how to list the running containers using the ps command, but what if you want to list all the containers, regardless of their state? To do that, all you have to do is add the -a option, as shown below:

docker ps -a

Now, you can easily decide which of the containers you want to start and which to remove.

Remove containers:

Talking about removing containers: after you are done using a container, you will usually want to remove it, rather than having it lie around taking up disk space.

You can use the rm command to remove a container as shown below:

docker rm intellipaat-sql

Remove images:

You already know how to list all the locally cached images using the images command. These cached images can occupy a significant amount of space, so in order to free up some space by removing unwanted images, you can use the rmi command as shown below:

docker rmi test-intellipaat

Now you know how to remove cached images, but what about the unwanted and unnamed images that you may end up generating during the debugging cycle of creating a new image? These images are denoted with the name of <none>. You can remove them all by using the following command.

docker rmi $(docker images -q -f dangling=true)

List Ports:

Knowing which ports are exposed by a container beforehand might make your work a lot easier and faster, for example, port 3306 for accessing a MySQL database or port 80 for accessing a web server. Using the port command, as shown below, you can display all the exposed ports.

docker port intellipaat-sql

List processes:

To display the processes running in a container, you can use the top command in Docker, much like the top command in Linux.

docker top intellipaat-sql

Execute commands:

To execute commands in a running container, you can use the exec command.

For example, if you want to list the contents of the root of the container's filesystem, you can use exec as shown below:

docker exec intellipaat-sql ls /

You can gain access to a bash shell inside the container, as if you had SSH'd in as root, using the following command:

docker exec -it intellipaat-sql bash

Note: By default, the Docker client talks to a local daemon over a UNIX socket; when a client connects to a remote daemon over a network, the connection should be secured with TLS.

Run Container:

The run command is one of the most complicated of all the Docker commands. Using it, you can perform various tasks like configuring security, managing network settings, and managing system resources such as memory, filesystems, and CPU. You can refer to Docker's run command reference documentation to see how to do all of the above and more.
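For instance, a single run invocation can name the container, publish a port, cap its memory, and detach, all at once; the nginx image and the values here are illustrative:

```shell
# Name the container, map host port 8080 to container port 80,
# limit memory to 512 MB, and run detached in the background
docker run --name web-test -d -p 8080:80 --memory 512m nginx
```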

Dockerfile

A Dockerfile contains all the instructions, for example, the Linux commands, needed to install and configure software. The Dockerfile, as you already know, is the primary way of generating a Docker image. When you use the build command to create an image, it can refer to a Dockerfile available on your path or at a URL such as a GitHub repository.

Instructions:

The instructions in a Dockerfile are executed in the same order as they are found in the Dockerfile.

There can also be comments, starting with the # character, in the Dockerfile.

The following table contains the list of instructions available:

INSTRUCTION   DESCRIPTION
FROM          The first instruction in the Dockerfile; it identifies the image to inherit from
MAINTAINER    Provides visibility and credit to the author of the image
RUN           Executes a Linux command to install and configure software
ENTRYPOINT    The final script or application used to bootstrap the container, making it an executable application
CMD           Provides default arguments to the ENTRYPOINT, using a JSON array
LABEL         Name/value metadata about the image
ENV           Sets environment variables
COPY          Copies files into the container
ADD           An alternative to COPY that can also unpack local archives and fetch remote URLs
WORKDIR       Sets the working directory for RUN, CMD, ENTRYPOINT, COPY, and ADD instructions
EXPOSE        Declares the ports on which the container listens
VOLUME        Creates a mount point
USER          Sets the user to use when running RUN, CMD, and ENTRYPOINT instructions
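Put together, a minimal Dockerfile using several of these instructions might look like the following; the base image, the package, and the file names are illustrative. Here it is written out from the shell with a heredoc:

```shell
# Create an illustrative page for the image to serve
echo '<h1>Hello from Docker</h1>' > index.html

# Write a small example Dockerfile to the current directory
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
MAINTAINER you@example.com
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y nginx
COPY index.html /var/www/html/
WORKDIR /var/www/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
EOF
```

You can then build an image from it with docker build -t test-intellipaat . as shown earlier in the tutorial.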

Docker machine

Docker Machine is a command-line utility that is used to manage one or more local machines (which usually run in separate VirtualBox instances) or remote machines hosted on cloud providers, for example, Amazon Web Services or Microsoft Azure.

How to create Local Machine?

Docker Toolbox comes with a default Docker machine named "default". This is just to give you a taste of Docker Machine and to get you started, but you may need multiple machines later on, to segment the different containers that are running. To do that, you can use the following command:

docker-machine create -d virtualbox intellipaat

This command will create a local machine using a VirtualBox image, named intellipaat.

List Machines

If you want to list the machines that you have configured you can run the following command:

docker-machine ls

Start and Stop Machines

You can start the Docker machine that you have created using the following command:

docker-machine start intellipaat

Now that the Docker machine has started, you will have to configure the Docker command line so that it knows which Docker daemon it should interact with. You can use the following commands to do that:

docker-machine env intellipaat
eval "$(docker-machine env intellipaat)"

Now, to stop a machine, use the following command:

docker-machine stop intellipaat

Note: These start and stop commands will start and stop your VirtualBox VMs and you can watch the state of the VM change while you run the commands if you have the VirtualBox manager open.

Conclusion

So, from this tutorial, you got a detailed understanding of Docker's workflow, the need for it, and the useful commands for working with Docker images and containers.

While we covered quite a bit of Docker's core functionality here, there is still a lot to know about Docker. If you're looking forward to learning more about it, you can go for the structured DevOps training provided by Intellipaat, where you will work on various case-based scenarios along with exhaustive topic-wise assignments, hands-on sessions, and various industry-based projects that take you from scratch to a top-notch understanding of DevOps.

If you're willing to enter the DevOps domain or up-skill yourself in it, you can go with this DevOps Certification Training, which will help you learn and practice the most important tools and services, making you a successful and productive team member at your workplace.

Happy Learning! 😊
