Arth-task-30(Case Study on OpenShift)

OpenShift is a cloud development Platform as a Service (PaaS) developed by Red Hat. It is an open source development platform that enables developers to build and deploy their applications on cloud infrastructure, and it is very helpful for developing cloud-enabled services. This tutorial will help you understand OpenShift and how it can be used with your existing infrastructure. All the examples and code snippets used in this tutorial have been tested and work, and can be used in any OpenShift setup simply by changing the currently defined names and variables.

OpenShift Container Platform is an on-premises platform as a service built around Docker containers, orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products deliver this platform in different environments: OKD serves as the community-driven upstream (akin to the way Fedora is upstream of Red Hat Enterprise Linux), OpenShift Online is the platform offered as software as a service, and OpenShift Dedicated is the platform offered as a managed service.

OpenShift supports a very large variety of applications, which can be easily developed and deployed on the OpenShift cloud platform. To understand where it fits, consider the three kinds of service models available to developers and users.

Infrastructure as a Service (IaaS)

In this model, the service provider supplies hardware-level virtual machines with a pre-defined virtual hardware configuration. There are multiple competitors in this space, including AWS, Google Cloud, Rackspace, and many more.

The main drawback of IaaS is that, even after a long procedure of setup and investment, one is still responsible for installing and maintaining the operating system and server packages, managing the infrastructure network, and taking care of basic system administration.

Software as a Service (SaaS)

With SaaS, one worries least about the underlying infrastructure. It is as simple as plug and play: the user just has to sign up for the service and start using it. The main drawback of this setup is that only the minimal amount of customization allowed by the service provider is possible. One of the most common examples of SaaS is Gmail, where the user just needs to log in and start using it; the user can also make some minor modifications to the account. However, SaaS is not very useful from the developer's point of view.

Platform as a Service (PaaS)

PaaS can be considered a middle layer between SaaS and IaaS. It is primarily targeted at developers, for whom a development environment can be spun up with a few commands. These environments are designed in such a way that they can satisfy all development needs, from a web application server down to a database. In most cases a single command is enough, and the service provider does the rest for you.

Why Use OpenShift?

OpenShift provides a common platform for enterprise units to host their applications in the cloud without worrying about the underlying operating system. This makes it very easy to use, develop, and deploy applications in the cloud. One of its key features is that it provides managed hardware and network resources for all kinds of development and testing. With OpenShift, PaaS developers have the freedom to design the environment they require, to their own specifications.

OpenShift provides different kinds of service level agreements when it comes to service plans.

Free − This plan is limited to three gears with 1GB space for each.

Bronze − This plan starts with 3 gears and expands up to 16 gears, with 1GB space per gear.

Silver − This is the 16-gear bronze plan; however, it has a storage capacity of 6GB at no additional cost.

In addition to the above plans, OpenShift also offers an on-premises version known as OpenShift Enterprise. In OpenShift, developers have the leverage to design scalable and non-scalable applications, and these designs are implemented using HAProxy servers.


There are multiple features supported by OpenShift. A few of them are −

  • Multiple Language Support
  • Multiple Database Support
  • Extensible Cartridge System
  • Source Code Version Management
  • One-Click Deployment
  • Multi Environment Support
  • Standardized Developers’ workflow
  • Dependency and Build Management
  • Automatic Application Scaling
  • Responsive Web Console
  • Rich Command-line Toolset
  • Remote SSH Login to Applications
  • Rest API Support
  • Self-service On Demand Application Stack
  • Built-in Database Services
  • Continuous Integration and Release Management
  • IDE Integration
  • Remote Debugging of Applications

OpenShift Architecture

OpenShift is a layered system wherein each layer is tightly bound to the next using Kubernetes and Docker clustering. The architecture of OpenShift is designed in such a way that it can support and manage Docker containers, which are hosted on top of all the layers using Kubernetes. Unlike the earlier OpenShift V2, the new OpenShift V3 supports containerized infrastructure. In this model, Docker helps in the creation of lightweight Linux-based containers, and Kubernetes supports the task of orchestrating and managing containers on multiple hosts.

Components of OpenShift

A key concern of the OpenShift architecture is managing containerized infrastructure with Kubernetes, which is responsible for the deployment and management of that infrastructure. In any Kubernetes cluster, we can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.

Kubernetes Master Machine Components

Etcd − It stores configuration information that can be used by each of the nodes in the cluster. It is a highly available key-value store that can be distributed among multiple nodes. Because it may hold sensitive information, it should only be accessible by the Kubernetes API server.
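The essence of etcd's role is a key-value store that components can watch for changes. The toy sketch below illustrates only that key-value-with-watch model in a single process; real etcd is a distributed store replicated via the Raft protocol, and the key names used here are purely illustrative.

```python
# Toy sketch of etcd's key-value-with-watch model (illustrative only;
# real etcd is distributed and replicated via Raft).

class ToyKVStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}  # key prefix -> list of callbacks

    def put(self, key, value):
        self._data[key] = value
        # Notify watchers whose prefix matches, as etcd does for watched ranges.
        for prefix, callbacks in self._watchers.items():
            if key.startswith(prefix):
                for cb in callbacks:
                    cb(key, value)

    def get(self, key):
        return self._data.get(key)

    def watch(self, prefix, callback):
        self._watchers.setdefault(prefix, []).append(callback)


store = ToyKVStore()
events = []
store.watch("/registry/pods/", lambda k, v: events.append((k, v)))
store.put("/registry/pods/default/web", "Running")
print(events)  # the watcher observed the change
```

This watch mechanism is what lets the API server and controllers react to cluster state changes without polling.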

API Server − The Kubernetes API server provides all operations on the cluster through its API. The API server implements an interface, which means different tools and libraries can readily communicate with it. A kubeconfig file, along with the server-side tools, can be used for this communication. The API server exposes the Kubernetes API.

Controller Manager − This component is responsible for most of the controllers that regulate the state of the cluster and perform routine tasks. It can be considered a daemon that runs in a non-terminating loop, collecting information and sending it to the API server. It works towards obtaining the shared state of the cluster and then makes changes to bring the current state of the cluster to the desired state. The key controllers are the replication controller, endpoints controller, namespace controller, and service account controller. The controller manager runs these different kinds of controllers to handle nodes, endpoints, and so on.
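The "compare desired state with current state, then correct the difference" loop described above can be sketched as a minimal reconciliation function in the style of the replication controller. This is an assumption-laden simplification: the function name and action tuples are invented for illustration, not real Kubernetes APIs.

```python
# Sketch of the reconciliation idea behind the controller manager:
# observe the current state, compare it with the desired state, and
# emit the corrective actions needed to converge. Names are illustrative.

def reconcile(desired_replicas, running_pods):
    """Return the actions needed to move toward the desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create the missing ones.
        return [("create_pod", i) for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus.
        return [("delete_pod", pod) for pod in running_pods[:-diff]]
    return []  # already converged, nothing to do

print(reconcile(3, ["pod-a"]))           # two pods missing -> two creates
print(reconcile(1, ["pod-a", "pod-b"]))  # one pod too many -> one delete
```

A real controller runs this loop continuously against the API server rather than once, which is why the text describes it as a non-terminating daemon.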

Scheduler − This is a key component of the Kubernetes master. It is a service on the master responsible for distributing the workload. It tracks resource utilization on the cluster nodes and then places the workload on a node where the required resources are available. In other words, the scheduler is the mechanism responsible for allocating pods to available nodes.
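A drastically simplified sketch of that placement decision follows: pick the node with the most free CPU that can still fit the pod's request. The real Kubernetes scheduler filters and scores nodes on many more criteria (memory, affinity, taints, and so on); the node names and millicore values here are made up for illustration.

```python
# Simplified sketch of a scheduling decision: place a pod on the node
# with the most free CPU that can still satisfy the request.

def schedule(pod_cpu_request, nodes):
    """nodes: mapping of node name -> free CPU in millicores."""
    candidates = {name: free for name, free in nodes.items()
                  if free >= pod_cpu_request}
    if not candidates:
        return None  # pod stays pending until resources free up
    return max(candidates, key=candidates.get)

nodes = {"node-1": 500, "node-2": 1500, "node-3": 250}
print(schedule(1000, nodes))  # only node-2 has enough free CPU
```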

Kubernetes Node Components

Following are the key components of the Node server, which are necessary to communicate with the Kubernetes master.

Docker − The first requirement of each node is Docker, which helps in running the encapsulated application containers in a relatively isolated but lightweight operating environment.

Kubelet Service − This is a small service on each node that is responsible for relaying information to and from the control plane. It interacts with the etcd store to read configuration details and write values, and it communicates with the master components to receive commands and work. The kubelet process then assumes responsibility for maintaining the desired state of work on the node server: it manages the node's pods, their volumes and secrets, container creation, and health checks.

Kubernetes Proxy Service − This is a proxy service that runs on each node and helps in making services available to external hosts. It helps in forwarding requests to the correct containers and is capable of carrying out primitive load balancing. It makes sure that the networking environment is predictable and accessible, while at the same time remaining isolated. It manages the node's network rules, port forwarding, and so on.
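The "primitive load balancing" mentioned above is, at its simplest, round-robin over a service's backend endpoints. The sketch below illustrates just that idea; the class name and the pod IP addresses are invented for the example, and real kube-proxy works at the iptables/IPVS level rather than in application code.

```python
# Sketch of the primitive load balancing a node proxy performs:
# round-robin over a service's backend pod endpoints.

import itertools

class ToyServiceProxy:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        """Pick the next backend for an incoming request."""
        return next(self._cycle)

proxy = ToyServiceProxy(["10.1.0.4:8080", "10.1.0.7:8080"])
print([proxy.route() for _ in range(4)])  # alternates between the two backends
```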

Integrated OpenShift Container Registry

The OpenShift container registry is Red Hat's inbuilt storage unit for Docker images. With the latest integrated version of OpenShift, it comes with a user interface to view images in OpenShift's internal storage. These registries hold images with specified tags, which are later used to build containers from them.

Frequently Used Terms

Image − Kubernetes (Docker) images are the key building blocks of Containerized Infrastructure. As of now, Kubernetes only supports Docker images. Each container in a pod has its Docker image running inside it. When configuring a pod, the image property in the configuration file has the same syntax as the Docker command.

Project − Projects are the renamed version of the domains that were present in the earlier OpenShift V2.

Container − Containers are created after an image is deployed on a Kubernetes cluster node.

Node − A node is a working machine in a Kubernetes cluster, also known as a minion of the master. Nodes are working units that can be a physical machine, a VM, or a cloud instance.

Pod − A pod is a collection of containers and their storage inside a node of a Kubernetes cluster. It is possible to create a pod with multiple containers inside it, for example keeping the database container and the web server container inside the same pod.
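A two-container pod like the database-plus-web-server example above can be sketched as the Python dict equivalent of its YAML manifest. The pod name, image tags, and ports below are illustrative choices, not taken from any real deployment; note that each container's image field uses the same repository:tag syntax as the Docker command line.

```python
# A pod definition with two containers, written as the Python dict
# equivalent of the YAML manifest. Names and image tags are illustrative.

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-with-db"},
    "spec": {
        "containers": [
            # Web server container; "image" uses Docker's repository:tag syntax.
            {"name": "web", "image": "nginx:1.25",
             "ports": [{"containerPort": 80}]},
            # Database container living in the same pod.
            {"name": "db", "image": "mysql:8.0",
             "ports": [{"containerPort": 3306}]},
        ]
    },
}

print([c["name"] for c in pod["spec"]["containers"]])  # ['web', 'db']
```

Both containers in such a pod share the node, the pod's network identity, and its storage, which is what makes this co-location pattern useful.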

What is OKD?

OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is a sibling Kubernetes distribution to Red Hat OpenShift.

OKD embeds Kubernetes and extends it with security and other integrated concepts. OKD is also referred to as Origin on GitHub and in the documentation.

OpenShift Container Platform

OpenShift Container Platform (formerly known as OpenShift Enterprise) is Red Hat’s on-premises private platform as a service product, built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS (RHCOS).

Case Study: Ford Motor Company adopts Kubernetes and Red Hat OpenShift

Ford Motor Company seeks to provide mobility solutions at accessible prices to its customers, including dealerships and parts distributors who sell to a variety of retail and commercial consumers. To speed delivery and simplify maintenance, the company sought to create a container-based application platform to modernize its legacy stateful applications and optimize its hardware use. With this platform, based on Red Hat OpenShift and supported by Red Hat and Sysdig technology, Ford has improved developer productivity, enhanced its security and compliance approach, and optimized its hardware use to improve operating costs. Now, the company can focus on exploring new ways to innovate, from big data to machine learning and artificial intelligence.


  • Improved productivity with standardized development environment and self-service provisioning
  • Enhanced security with enterprise technology from Red Hat and continuous monitoring provided by Sysdig
  • Significantly reduced hardware costs by running OpenShift on bare metal

Automotive innovation requires modern platform to enhance legacy applications

Ford Motor Company is a leader in creating reliable, technologically advanced vehicles worldwide. Its mission is to provide mobility solutions at accessible prices to its customers, including dealerships and parts distributors who sell to a variety of retail and commercial consumers.

“We’re a well-known brand. Everybody knows the Ford oval,” said Jason Presnell, CaaS [Containers-as-a-Service] Product Service Owner, at Ford Motor Company. “Our mission in becoming a mobility company is to not only find new ways to help people get from place to place, but also to get them the information and tools they need to support their travel, like mobile apps that let you start or unlock your car. We need to support and deliver these capabilities at a global scale.”

Each of Ford’s business units hosts a robust, engaged development community that is focused on building products and services that take advantage of the latest technological innovations, from machine learning for crash analysis and autonomous driving to high-performance computing (HPC) for prototype creation and testing. But this engagement across hundreds of thousands of employees and thousands of internal applications and sites created complexity that Ford’s traditional IT environment and development approaches could not accommodate. Even with hypervisors and virtual machines, the company struggled with inefficient resource use and high staffing costs to maintain this environment.

“We needed faster delivery for our stateful applications,” said Satish Puranam, Technical Specialist, Cloud Platforms, at Ford Motor Company. “Pivotal Cloud Foundry worked fine for newer, stateless applications that were built for portability, but we’re a hundred-year-old company with a lot of stateful, data-heavy, legacy applications. For things like inventory systems, dealer-facing applications, and CI/CD [continuous integration and delivery] that needed data persistence, getting the right infrastructure could take as long as 6 months.”

New container-based application platform uses enterprise and community open source technology.

After running tests and proofs of concept (POCs) of container technology, Ford began looking for an enterprise partner offering commercially supported open source solutions to help run containers in production and support innovative experimentation.

“We have several open source technologies in our IT environment and products. We want to move toward being able to use and contribute to open source more — to help somebody else in the community take what we’ve done and improve on it,” said Presnell. “But we needed a container platform that had an enterprise offering, one that was well-known in the industry and was well-engineered.”

Past experience with Kubernetes led Ford to adopt CoreOS Tectonic. When CoreOS was acquired by Red Hat, Ford migrated to Red Hat OpenShift Container Platform, a solution that enhanced the strengths of CoreOS’s offering with new automation and security capabilities. Based on Red Hat Enterprise Linux®, OpenShift Container Platform offers a scalable, centralized Kubernetes application platform to help teams quickly and more reliably develop, deploy, and manage container applications across cloud infrastructure.

Performance and security improvements help Ford deliver services and work with partners more efficiently

Significantly increased developer productivity

Using OpenShift Container Platform, Ford has accelerated time to market by centralizing and standardizing its application development environment and compliance analysis for a consistent multicloud experience. For example, OpenShift’s automation capabilities help Ford deploy new clusters more rapidly.

These improvements are enhanced by the company’s shift from a traditional, waterfall approach to iterative DevOps processes and a continuous integration and delivery (CI/CD) workflow.

Now, some of the same processes for stateful workloads take minutes instead of months, and developers no longer need to focus on underlying infrastructure with self-service provisioning. These improvements extend to Ford’s IT hosting, where the company has seen a significant productivity improvement for CaaS support. Dealers and plant operators gain access to new features, fixes, and updates faster through Ford’s multitenant OpenShift environment.

“With OpenShift, we have a common framework that can be reused for deploying applications or services within our datacenter or to any major cloud provider,” said Presnell. “We can now deliver features in a more secure, reliable manner.”

Successful adoption of OpenShift and DevOps creates foundation for new opportunities to innovate.

Ford is already experiencing significant growth in demand for its OpenShift-based applications and services. It aims to migrate most of its on-premises, legacy deployments within the next few years.

The company is also looking for ways to use its container platform environment to address opportunities like big data, mobility, machine learning, and AI to continue delivering high-quality, timely services to its customers worldwide.

“Kubernetes and OpenShift have really forced us to think differently about our problems, because we can’t solve new business challenges with traditional approaches. Innovation and constantly exploring and questioning are the only way we can move forward,” said Puranam. “It’s a journey, but one that we have a good start on. Thanks to having the right set of partners, with both Red Hat and Sysdig, we’re well-situated for future success.”

Thanks for reading my article and blogs.

Technical Enthusiast | MLOps (Machine Learning + Operations) | DevOps Assembly Line | Hybrid Multi Cloud