Performing the analysis once is cool. How about having a continuous analysis run every time a change is made? How about finding bugs at each step of the way rather than waiting till the end? Sounds interesting, doesn't it? This is exactly what CI, or continuous integration, is! Now let's see how we can use Jenkins to accomplish it!
In this tutorial we shall cover everything from an introduction to the architecture and how to create a master-slave system. We will also go over one of Jenkins' most distinctive features, plugins, and how to manage and handle them. Moving forward, we will check out how to create, schedule, and run builds. Finally, we shall move on to CI/CD pipelines and how to implement them in Jenkins. This tutorial should give you a start on setting up and using Jenkins and on making the most of this powerful tool for continuous integration.
Before getting into our main topic let’s begin with Continuous Integration.
Continuous integration happens to be one of the most vital parts of DevOps and is primarily used to integrate the various stages of DevOps. In other words, it is a coding practice that essentially drives development teams to make small code changes and check them into version control quite frequently. It is usually done by having all the developers push code to a shared repository, often multiple times a day. It is pretty standard for a project to be coded and developed across different platforms and tools, so it becomes important to have a mechanism in place to integrate and validate the changes made.
So what exactly are the benefits of continuous integration? Why would you adopt this practice? To help answer this question, we have listed some of the advantages of CI:
Reduction of integration risks: All projects employ more than one developer, which greatly increases the risk of errors surfacing during integration. Depending on the complexity of the code, a lot of changes may have to happen at once. Here CI comes to the rescue and helps alleviate these issues, as it allows for regular integration.
Higher quality of code: As the risks drastically reduce, a lot of time and manpower can be diverted to producing more functionally oriented code.
Code in version control that works: committing something that breaks the build immediately triggers a notification, thereby preventing anyone from pulling broken code.
Easier testing: retaining the different versions and builds of the code makes it easier for QA to understand, locate, and trace bugs efficiently.
Decreased deployment time: automating the deployment process frees up a lot of time and manpower.
Increased confidence: Not having to fear a failed build or breaking something eases the developers' minds, thereby helping productivity and yielding a higher-quality product as a whole.
Consider an existing healthcare system whose build server pulls the code from the shared repository at a certain time every day and builds it. This resembles CI, except that it builds only once, at that fixed time. Bugs will therefore only be found at one point in the day.
An alteration to this system to make it CI compliant would be to enable the system to push the code onto the shared repository every time a change is made to it and build it.
This will make sure that all the developers who work on the healthcare system make their changes against the latest code and find and resolve bugs as soon as they appear in the system. This ensures an up-to-date software rollout, which is especially important for critical systems such as healthcare.
Jenkins is an automation tool written in Java with built-in plugins for continuous integration tasks. It is used to continuously build and test projects, making it easier to integrate changing code. It allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies, and it accelerates the development phase via automation of tasks. It is primarily a server-based application and requires a web server such as Tomcat. Jenkins rose to fame for its handling of repeated tasks: if a team is developing a project, Jenkins will constantly check and evaluate the code, thereby surfacing possible errors and failures early in the development phase.
A standalone Jenkins instance can be a disk- and CPU-intensive process. To avoid this, we scale it by implementing a slave-node architecture, which essentially lets us offload part of the master node's responsibilities. A slave is just a device configured to act as an executor on behalf of the master. The master is the base installation of Jenkins; it performs basic operations and serves the user interface, while the slaves do the actual work.
In the example below, you may note that the Jenkins master is in charge of the UI while the slave nodes run on different OS types.
So how would one go about creating a typical Jenkins master and slave system on AWS?
To create a master:
Step 1) Create a new EC2 instance with the Amazon Linux AMI 2016.03
Step 2) Update the system and install Jenkins on it with the following commands:
Note: the commands can be run either as root or with sudo.
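The original commands are not reproduced here; a minimal sketch, assuming the official Jenkins yum repository for Red Hat-based systems (repository URL, key location, and package versions reflect a typical setup of that era and may have changed since), would look like this:

```shell
# Bring the system packages up to date
sudo yum update -y

# Add the official Jenkins yum repository and import its signing key
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key

# Jenkins runs on the JVM, so install a JDK first, then Jenkins itself
sudo yum install -y java-1.8.0-openjdk jenkins
```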
Step 3) Update /etc/sysconfig/jenkins to authorize Jenkins to access the environment variables used by the plugins. Also remember to change the default time zone to the one you live in.
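As an illustration, the time zone can be passed to the Jenkins JVM through the options variable in that file (the variable name below follows the stock /etc/sysconfig/jenkins shipped with the RPM package; substitute your own time zone):

```shell
# /etc/sysconfig/jenkins -- excerpt
# Pass the desired time zone to the Jenkins JVM
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Duser.timezone=Asia/Kolkata"
```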
Step 4) Now that we have updated it, we only need to register the service and start it. Do so with the following commands:

chkconfig jenkins on
service jenkins start
Step 5) Jenkins is now enabled and running. Proceed to http://<server-ip>:8080
You will be redirected to the setup screen. Retrieve the initial admin password and unlock Jenkins.
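With a stock package install, the initial admin password is written to a file under the Jenkins home directory and can be read like this (the path assumes the default /var/lib/jenkins home):

```shell
# Print the one-time password generated at first startup
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```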
You now have an empty master running!
Now let’s configure the slaves!
Step 1) You will have to configure the slave and update it in the master configuration.
You will have to create a new EC2 instance just as before, with either sudo or root access.
Step 2) Add the base dependencies such as Java, Git, Docker, and so on with the following commands:
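The commands were not shown here; a sketch, assuming Amazon Linux package names of the time, might look like this:

```shell
# Install the tool chain the slave will need for typical builds
sudo yum update -y
sudo yum install -y java-1.8.0-openjdk git docker

# Start the Docker daemon and have it come up on boot
sudo service docker start
sudo chkconfig docker on
```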
Step 3) Create the AMI on AWS: on the EC2 panel, go to Instances, click on the slave that you recently configured, and create a new image. Once created, note the AMI ID.
Step 4) Now we would have to configure the master to use the AMI.
To do so, follow the steps below:
Manage Jenkins > Configure System > Add a new cloud > Amazon EC2. Complete the form as needed.
Now you would have a master and a slave configured!
Refer to the screenshot below to see the sample setup screen.
You may practice with other specifications of your choice. This is only the default and the generalized format.
One of the core features Jenkins boasts is its plugin system. Plugins add functionality on top of the core to give you more powerful tools for your project.
Now let’s look into how we can list, add, modify, update and remove these plugins from Jenkins. To list all the plugins supported by Jenkins go to https://wiki.jenkins-ci.org/display/JENKINS/Plugins.
Once you log in, head over to the "Manage Jenkins" tab on the left-hand side. This is where you handle all the installed plugins as well as add or remove new ones.
Under the Manage Plugins tab you can search for a plugin or see all the available plugins. Now select the plugin and click on "Install without restart". This lets you install it and check its functionality sooner, without having to wait for Jenkins to restart.
Alternatively, if you do need to uninstall a plugin, head over to the Installed tab, select the plugins that you would like to remove, and click on Uninstall. However, make sure to restart Jenkins for the changes to take effect.
In some cases, you may want to use an older version of a certain plugin; in such a situation, download the needed plugin from the plugin site and then upload it into Jenkins manually.
If you have created your own plugin, you may also upload it to the site and help grow the community.
A build is the process by which source code is converted into a usable, runnable form; it includes compiling the code into an executable. The build process is typically handled by a build tool. Builds are usually done when we reach a critical point such as the integration of a feature. As Jenkins is CI-based, it has a powerful feature whereby we can automate the build process to happen at a particular time or event. These are called "scheduled builds".
Now let’s find out how we can schedule the builds at certain times and triggers.
To schedule a build, follow the steps below:
The general syntax is MINUTE (0-59), HOUR (0-23), DAY OF MONTH (1-31), MONTH (1-12), DAY OF WEEK (0-7, where both 0 and 7 mean Sunday)
So let’s look at some examples on how to schedule builds.
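A few illustrative schedules follow (the H token, which Jenkins supports in place of a fixed value, hashes the job name to spread load across the period):

```
# Every 15 minutes
H/15 * * * *

# Every weekday (Monday-Friday) at 10 PM
0 22 * * 1-5

# Once a day, at a hash-chosen minute somewhere between 00:00 and 07:59
H H(0-7) * * *
```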
We have now seen how to automate builds in Jenkins with respect to a preset time or date. Now what if we could have it build every time we push code? To do exactly that, we need to install the "GitLab Hook Plugin", just as we have seen in the sections above.
Start a new project if you haven't already. Then scroll down to Source Code Management and add the URL of the Git repository along with any credentials needed. Now add the steps that you want performed during the build: go to the Build section and add a build step. Note that this step depends on what you want to do and on your environment. Check the screenshot below to see the setup page.
Now we have to add a webhook for our repository in GitLab. Navigate to the project, select the cog icon, and choose Webhooks. Now fill in the URL field with your Jenkins endpoint.
Now click on the “Add webhook” button at the end of the page.
Enable the required triggers by checking the boxes, as shown in the screenshot above. And that's it! You're all set to go!
So what exactly are continuous integration and delivery? To put it plainly, CI is a coding practice that drives the development team to make small changes and check them into version control much more often. The primary goal of CI is to have an automated way to build and test applications. Higher software quality is expected, since it encourages frequent changes and better collaboration within the team. Continuous delivery, on the other hand, is an extension of CI in that it picks up where CI leaves off. CD lets you automate releases to the defined infrastructure and ensures an automated way to push code changes. It also performs the required service calls to servers, databases, and anything else that may need to be restarted.
Another term we come across most often when talking about CI/CD is the “CI/CD pipeline”. So what exactly is it? And what does it do?
The tasks and jobs that transform source code into a releasable form are typically strung together into a pipeline of sorts, where the completion of a task with a successful status kicks off the next automated process in the sequence. Such pipelines are also referred to as CD pipelines, deployment pipelines, or software development pipelines. A supervisory application takes charge of managing, running, monitoring, and reporting on the different parts of the pipeline as they execute. Now let's look into how this actually works.
The real-world implementation can differ based on the type of project. Although the overall workflow is the same, the details vary with how source tracking, building, metrics gathering, and testing are done. The supervisory application mentioned above manages processes that each have their own jobs. A job is created to perform functions such as testing, building, and so on; the primary point is that all the jobs are automated, repeatable, and efficient. Once a job returns a successful status, the supervisory app triggers the next job or task in the sequence. Thanks to automation, the errors at each step of the way can be identified and fixed early. This is most often referred to as "failing fast".
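In Jenkins, such a pipeline can itself be written down as code in a Jenkinsfile. A minimal declarative sketch of the build-test-deploy sequence described above (the stage names, shell steps, and deploy script are illustrative placeholders, not taken from this tutorial's project):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile and package the application
                sh 'mvn -B package'
            }
        }
        stage('Test') {
            steps {
                // Run the automated test suite; a failure here stops the pipeline
                sh 'mvn -B test'
            }
        }
        stage('Deploy') {
            steps {
                // Hand the artifact over to the deployment environment
                sh './deploy.sh'
            }
        }
    }
    post {
        failure {
            // "Fail fast": surface a broken stage to the team immediately
            echo 'Build failed -- notify the responsible team'
        }
    }
}
```

Each stage only runs if the previous one succeeded, which is exactly the supervisory behavior described above.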
"Fail fast" usually refers to the idea of finding errors as soon as possible and notifying the respective team. Another advantage is that the software looks at the process history and assigns the error to the respective team so that they can handle and resolve it.
Now that we know what it is, let's dig deeper and find out how to create a pipeline from scratch.
Jenkins is a powerful tool that we can use to automate the entire process with the help of various interfaces and tools. The Git repository is typically where the dev team commits the code. Jenkins takes over, with the help of a front-end tool used to define the job. Jenkins then pulls the code and moves it through the commit phase. Next comes the build phase, where the code is compiled.
Now it moves on to the staging area with the help of Docker to deploy it.
So now we’ll get into creating the CI/CD pipeline with the help of Jenkins and Docker.
Step 1) Open the terminal in the VM and run the following commands:
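The exact commands are not reproduced here; a common way to bring Jenkins up for an exercise like this, assuming Docker is already installed, is to run the official image:

```shell
# Pull and start the official Jenkins LTS image,
# publishing the web UI on port 8080 and persisting the home directory
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```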
Step 2) Open Jenkins on the specified port and click on new item which will help you create a new job.
Step 3) Select the "Freestyle project" option and enter an item name of your choice.
Step 4) Select "Source Code Management" and provide the Git repository. Now click on Apply and Save.
Step 5) Go to Build and select "Execute shell".
Step 6) Now provide the shell commands. They will generate a war file: the code is pulled, the packages are installed along with the dependencies, and the application is compiled.
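As an example, for a Maven-based Java web application the shell step might look like this (the Maven layout and the artifact name are assumptions about the sample project):

```shell
# Fetch dependencies, compile, run the tests, and package the app as a .war
mvn clean package

# The resulting artifact lands under target/, e.g. target/sample.war
ls target/*.war
```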
Step 7) Run steps 3-5 again and provide the shell commands. This will start the integration and build inside Docker.
Step 8) Run steps 3-5 again with a different job name and provide the shell commands as before. Here it will check the Docker container file and deploy it to the pre-defined port.
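The shell steps for these two jobs could be sketched like this (the image name, tag, and container name are placeholders; a Dockerfile is assumed to be present in the job workspace, and port 8180 matches the URL used at the end of this tutorial):

```shell
# Job 2: build a Docker image from the checked-out workspace
docker build -t sample-app:latest .

# Job 3: replace any old container and deploy on the pre-defined port
docker rm -f sample-app 2>/dev/null || true
docker run -d --name sample-app -p 8180:8080 sample-app:latest
```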
Step 9) Now we have to chain the jobs. Click on Job 1, then on "Post-build Actions" and "Build other projects" respectively.
Step 10) Provide the project name and then save.
Step 11) Perform step 9 again to configure Job 2.
Step 12) Let's now create a pipeline view for the jobs. Click on the (+) symbol, select Build Pipeline View, and provide a view name.
Step 13) Now select "Run", which will start the CI/CD process in the system.
Step 14) After the build is complete, go to localhost:8180/sample.text. It will now run the application.
As we come to the end of this tutorial, we have seen what exactly Jenkins is and its role in the CI/CD space. We have also learned how to set up Jenkins and create masters and slaves, and we have dabbled in the huge world of plugins available for Jenkins. I would advise you to try installing newer plugins, and to upload any plugin you have created to keep the community alive. Further on, we learned how to create builds and how to schedule them as per our needs. The creation of CI/CD pipelines is one of the most important aspects of Jenkins that we have looked at. I would advise creating your own jobs and scheduling them with different parameters to get in-depth working knowledge.
Jenkins is one of the most valuable DevOps tools at our disposal, and learning the skills that come with it can be an invaluable asset.