Case Study on Jenkins(CI/CD) Tool

Neeteesh Yadav
8 min read · Mar 12, 2021

Jenkins is an open-source automation server written entirely in Java. It lets you execute a series of actions to achieve the continuous integration process in an automated fashion.

Jenkins offers a simple way to set up a continuous integration or continuous delivery (CI/CD) environment for almost any combination of languages and source code repositories using pipelines, as well as automating other routine development tasks. While Jenkins doesn’t eliminate the need to create scripts for individual steps, it does give you a faster and more robust way to integrate your entire chain of build, test, and deployment tools than you can easily build yourself.

Jenkins automates the software builds in a continuous manner and lets the developers know about the errors at an early stage. A strong Jenkins community is one of the prime reasons for its popularity. Jenkins is not only extensible but also has a thriving plugin ecosystem.

Some of the possible steps that can be performed using Jenkins are listed below; a minimal pipeline sketch follows the list:

  • Software build using build systems such as Gradle, Maven, and more.
  • Automation testing using test frameworks such as Nose2, PyTest, Robot, Selenium, and more.
  • Execute test scripts (using Windows terminal, Linux shell, etc.).
  • Archive test results and perform post actions such as publishing test reports, and more.
  • Execute test scenarios against different input combinations for obtaining improved test coverage.
  • Continuous Integration (CI) where the artifacts are automatically created and tested. This aids in identification of issues in the product at an early stage of development.
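
To make these steps concrete, here is a minimal sketch of a declarative Jenkins pipeline (a Jenkinsfile kept in the repository) that builds with Maven, runs the tests, and archives the results. The build tool, shell commands, and report path are illustrative assumptions, not taken from the article:

```groovy
// Minimal declarative pipeline sketch (illustrative; the Maven commands
// and the surefire report path are assumptions about the project layout).
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                // Build the software using a build system such as Maven
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                // Execute the automated test suite
                sh 'mvn -B test'
            }
            post {
                always {
                    // Archive JUnit-style test results so reports show up in Jenkins
                    junit 'target/surefire-reports/*.xml'
                }
            }
        }
    }
}
```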

Salient Features Of Jenkins

Jenkins is functionality-driven rather than UI-driven; hence, there is a learning curve involved in getting to know it. Here are the powerful developer-centric features offered by Jenkins:

1. Easy Installation & Configuration

Jenkins is a self-contained Java program that is agnostic of the platform on which it is installed. It is available for almost all the popular operating systems such as Windows, different flavors of Unix, and Mac OS.

2. Open-Source

As it is open-source, it is free for use. There is a strong involvement of the community which makes it a powerful CI/CD tool. You can take support from the Jenkins community, whether it is for extensibility, support, documentation, or any other feature related to Jenkins.

3. Thriving Plugin Ecosystem

The backbone of Jenkins is its community, and community members have been instrumental in developing (and testing) the more than 1,500 plugins available in the Update Center.

4. Easy Distribution

Jenkins is designed in such a manner that makes it relatively easy to distribute work across multiple machines and platforms for accelerated build, testing, and deployment.
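
As a sketch of what this distribution can look like in practice, a declarative pipeline can pin individual stages to differently labelled agents. The labels and commands below are assumptions for illustration only:

```groovy
// Sketch: running stages on different build agents by label
// (the 'linux' and 'windows' labels and the commands are illustrative).
pipeline {
    agent none   // no global agent; each stage picks its own

    stages {
        stage('Build on Linux') {
            agent { label 'linux' }
            steps {
                sh 'make build'
            }
        }
        stage('Test on Windows') {
            agent { label 'windows' }
            steps {
                bat 'run_tests.bat'
            }
        }
    }
}
```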

How Does Jenkins Work?

In this section, we look at the internal functioning of Jenkins, i.e., what happens once a developer commits changes to the repository and how CI/CD is realized in Jenkins. We also look at the Master-Agent architecture of Jenkins.

Architecture Of Jenkins

Before we dive into how Jenkins works, we must understand its architecture. The following series of steps outlines the interaction between the different elements in Jenkins (a pipeline sketch of this flow follows the list):

  • Developers do the necessary modifications in the source code and commit the changes to the repository. A new version of that file will be created in the version control system that is used for maintaining the repository of source code.
  • The repository is continuously checked by the Jenkins CI server for any changes (either in the form of code or libraries), and the changes are pulled by the server.
  • In the next step, the build server performs a build with the pulled changes and checks whether it succeeds. An executable is generated if the build process is successful; in case of a build failure, an automated email with a link to the build logs and other build artifacts is sent to the developer.
  • In case of a successful build, the built application (or executable) is deployed to the test server. This step helps in realizing continuous testing where the newly built executable goes through a series of automated tests. Developers are alerted in case the changes have caused any breakage in functionality.
  • If there are no build, integration, and testing issues with the checked-in code, the changes and tested application are automatically deployed to the Prod/Production server.
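
The flow above maps naturally onto a declarative pipeline. The sketch below is illustrative only: the polling schedule, deployment scripts, and e-mail address are assumptions, and a real setup would more likely use webhooks and a dedicated deployment tool instead of shell scripts:

```groovy
// Illustrative pipeline mirroring the commit -> build -> test -> deploy flow.
// Scripts, schedule, and mail address are assumptions, not a real configuration.
pipeline {
    agent any

    triggers {
        // Jenkins checks the repository for changes every 15 minutes
        pollSCM('H/15 * * * *')
    }

    stages {
        stage('Build') {
            steps {
                sh './build.sh'            // produce the executable/artifact
            }
        }
        stage('Deploy to test server') {
            steps {
                sh './deploy.sh test'      // push the new build to the test environment
            }
        }
        stage('Automated tests') {
            steps {
                sh './run_tests.sh'        // continuous testing against the new build
            }
        }
        stage('Deploy to production') {
            steps {
                sh './deploy.sh prod'      // only reached if all previous stages passed
            }
        }
    }

    post {
        failure {
            // Alert the developer with a link to the build logs, as described above
            mail to: 'dev-team@example.com',
                 subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for logs and artifacts."
        }
    }
}
```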

How Does Netflix Use Jenkins for Continuous Delivery?

How Netflix continuously delivers code that serves TV shows and movies to more than 75 million viewers is explained in a blog post by three Netflix employees: Ed Bukoski, Brian Moyles and Mike McGarr.

The Immutable Server pattern is the basis for Netflix deployment. Each deployment creates a brand new Amazon Machine Image (AMI).

Netflix’s microservice architecture allows Netflix teams to be loosely coupled. Changes are pushed at the speed with which each team is comfortable.

Netflix does not require any team to use any set of tools, but they are responsible for maintaining the tools they do implement. Centralized teams at Netflix offer tools as part of a “paved road” to reduce the cognitive load of the majority of Netflix engineers.

The “paved road” code delivery process consists of several steps. Code is built and tested locally using Nebula. Changes are committed to a central Git repository. A Jenkins job builds, tests, and packages the application for deployment. Using Spinnaker, Netflix’s global continuous delivery platform, these packages are baked into Amazon Machine Images (AMIs) and deployed.

Build

Nebula is a set of plugins for the Gradle build system which builds, tests, and packages Java applications. Most of Netflix’s code is written in Java. These plugins extend Gradle’s automation functionality to include dependency management, release management, and packaging. A project’s build file declares the dependencies and plugins to be used.

Integrate

The next step is to push the locally built, tested, and packaged source code to a Git repository. The particular workflow is chosen by the team.

Upon commit, a Jenkins job is triggered to build, test, and package the code for deployment. The appropriate package type will be chosen based on whether a library or an application is being built.
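
The article does not show Netflix’s actual job definitions, but a commit-triggered Jenkins job of this kind could be sketched as a declarative pipeline that runs the Nebula-powered Gradle build and then produces an OS package. The trigger, Gradle invocation, and the `buildDeb` packaging task below are assumptions for illustration:

```groovy
// Sketch of a commit-triggered Jenkins job for a Nebula/Gradle project.
// The polling trigger, Gradle tasks, and result paths are illustrative assumptions.
pipeline {
    agent any

    triggers {
        pollSCM('H/5 * * * *')   // in practice a Git webhook would trigger the job
    }

    stages {
        stage('Build & test') {
            steps {
                sh './gradlew clean build'   // Nebula plugins hook into the Gradle build
            }
            post {
                always {
                    junit 'build/test-results/test/*.xml'
                }
            }
        }
        stage('Package') {
            steps {
                sh './gradlew buildDeb'      // produce the package handed on for baking/deployment
            }
        }
    }
}
```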

Deploy

The Netflix “Bakery” exposes an API that is used to create an AMI. The actual image is created by using Aminator. The user specifies what foundation image and packages are to be put into the AMI. The foundation image is a Linux environment with the common conventions, tools, and services required for integration with the Netflix ecosystem.

How Did Yahoo! Implement Continuous Delivery?

Several years ago, continuous delivery was a new concept being tried by a small group of early adopters. Boy have things changed! Now, continuous delivery, or CD, is a practice that companies big and small are embracing as part of a new, faster moving, on-demand business culture.

We’re now hearing the term described often as being a “table stakes” obligation that companies need to pursue to compete. Companies that ante up have an advantage. Companies that don’t, fall behind. Continuous delivery is now seen as the first stepping-stone of a DevOps transformation.

The Need

Before the move to continuous delivery, coding at Yahoo! would take place in intense six-week sessions, followed by eight weeks of testing. QA would sign off with a long list of exceptions, and it would usually take another four weeks to launch into production.

The system had a long windup and was not very good at course-correcting once parts were in motion. This made for a fairly non-responsive and cumbersome process that took a long time to deliver middling updates.

The Challenge

You’re not going to find many joint advertising and data platforms that are busier or more complex than Yahoo! Ad Exchange. The media titan’s platforms are backed by a massively distributed system that processes over 100 billion events each day.

The system consists of hundreds of unique software components and thousands of servers. You have hundreds of programmers working in more than a dozen languages, on different teams with different priorities and schedules twenty-four hours a day.

Bringing this type of complex corporate structure into the DevOps Age is by no means impossible, but it’s most assuredly the stuff of logistical nightmares.

The Process

The first thing that needed to be tackled was the culture. To successfully implement continuous delivery, Yahoo! would need to build a culture of both inter- and intra-team collaboration, procedural controls, peer review, task chunking, smart change management, and automation.

These sorts of wholesale changes are sure to cause a little bit of turbulence and cannot be realized overnight. Credit, therefore, must be given to the decision makers at Yahoo! who had clarity of vision and fortitude of conviction to stay the course and see the process through.

With culture a work in progress, attention was then given to the build process and the tools that facilitate it. Yahoo! developed Screwdriver.cd to serve as their dynamic infrastructure build system. The system was specially designed for the needs of Yahoo! to smoothly and conveniently handle deployment pipelines, to make trunk development easier and better, and to remove the hassle and delays from rollbacks.

From there, it was mostly just a matter of bringing everyone up to speed, continuing to push for the required culture, securing employee buy-in, sharing wins to develop a sense of collective purpose, and building out core competencies from within this new working model. With each passing month, the system ran better and Ad Exchange moved closer to a model for truly continuous delivery.

The Results

Today, the system hums — thanks to a process overhaul that started with the decision to implement continuous delivery. The Ad Exchange team generates more than 8,000 builds a day, committing code to production without human intervention.

Before the CD initiative, it was a very different story. “The same team today launches regularly,” says Yahoo! senior product architect Stas Zvinyatsokovsky.

The team goes from commit to certification in about six hours and they queue up to launch every day. If there’s a break or a security issue, it’s fixed the same day.

Zvinyatsokovsky says the Ad Exchange team has gone “from continuous debacle to continuous delivery” in the space of two years.

“Continuous delivery has become part of the culture,” he said. “Nowadays, when a new product is delivered we expect them to launch to production multiple times a day.”

Thanks for reading this article.
