Arth-Task-16 (Study of Kubernetes)

Neeteesh Yadav
9 min read · Dec 26, 2020


In this task, we go through a basic introduction to Kubernetes and some basic use cases showing how Kubernetes is used in industry.

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. You tell Kubernetes where you want your software to run, and the platform takes care of almost everything else.

Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.)

Red Hat was one of the first companies to work with Google on Kubernetes, even prior to launch, and has become the 2nd leading contributor to the Kubernetes upstream project. Google donated the Kubernetes project to the newly formed Cloud Native Computing Foundation (CNCF) in 2015.

Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions.

Kubernetes helps organizations build applications and manage containers on site and across hybrid cloud environments. The key topics to understand include:

  • The origin, functions, and benefits of Kubernetes.
  • Basics of modern application development and container management and orchestration.
  • A look at basic Kubernetes architecture.
  • Factors for you to consider when adopting Kubernetes.
  • Information about how Red Hat® OpenShift® can help you simplify and scale Kubernetes applications.

What can you do with Kubernetes?

The primary advantage of using Kubernetes in your environment, especially if you are optimizing app dev for the cloud, is that it gives you the platform to schedule and run containers on clusters of physical or virtual machines (VMs).

More broadly, it helps you fully implement and rely on a container-based infrastructure in production environments. And because Kubernetes is all about automation of operational tasks, you can do many of the same things other application platforms or management systems let you do — but for your containers.

Developers can also create cloud-native apps with Kubernetes as a runtime platform by using Kubernetes patterns. Patterns are the tools a Kubernetes developer needs to build container-based applications and services.

With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware to maximize resources needed to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees the deployed applications are always running the way you intended them to run (see the sketch after this list).
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication, and autoscaling.
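
As a minimal sketch of that declarative, scale-on-the-fly style (the names and the nginx image below are illustrative assumptions, not something from the original article), a Deployment manifest describes the desired state and Kubernetes keeps the cluster matching it:

```yaml
# deployment.yaml — desired state for a simple web workload (illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Kubernetes keeps three copies of the pod running
  selector:
    matchLabels:
      app: web
  template:                    # pod template used for every replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21      # changing this field and re-applying triggers a rolling update
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml` creates the replicas, `kubectl scale deployment web --replicas=5` scales them on the fly, and re-applying the manifest with a new image tag rolls out an update.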

However, Kubernetes relies on other projects to fully provide these orchestrated services. With the addition of other open source projects, you can fully realize the power of Kubernetes. These necessary pieces include (among others):

  • Registry, through projects like Docker Registry.
  • Networking, through projects like OpenvSwitch and intelligent edge routing.
  • Telemetry, through projects such as Kibana, Hawkular, and Elastic.
  • Security, through projects like LDAP, SELinux, RBAC, and OAUTH with multitenancy layers.
  • Automation, with the addition of Ansible playbooks for installation and cluster life cycle management.
  • Services, through a rich catalog of popular app patterns.

As is the case with most technologies, language specific to Kubernetes can act as a barrier to entry. Let’s break down some of the more common terms to help you better understand Kubernetes.

Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.

Nodes: These machines perform the requested tasks assigned by the control plane.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources. Pods abstract network and storage from the underlying container. This lets you move containers around the cluster more easily.
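
As a minimal sketch, a pod is usually described in a YAML manifest like the one below (the pod name and the nginx image are illustrative assumptions):

```yaml
# pod.yaml — a single-container pod (illustrative names).
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80        # every container in this pod shares the pod's IP, IPC, and hostname
```

`kubectl apply -f pod.yaml` creates it, and `kubectl get pods` shows its status.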

Replication controller: This controls how many identical copies of a pod should be running somewhere on the cluster.
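
A classic ReplicationController manifest might look like the sketch below (modern clusters usually use Deployments and ReplicaSets instead, but the idea is the same; names and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3              # keep exactly three identical copies of the pod running
  selector:
    app: web               # pods counted toward those three copies
  template:                # template used to create replacement pods
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
```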

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically get service requests to the right pod — no matter where it moves in the cluster or even if it’s been replaced.
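
A minimal Service sketch (illustrative names) shows the decoupling: the service only knows a label selector, not which node the pods happen to be running on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web               # traffic is routed to whichever pods carry this label
  ports:
  - port: 80               # port exposed by the service
    targetPort: 80         # port the container actually listens on
```

Pods behind the selector can come and go; clients keep calling web-svc and the service proxy finds a healthy pod.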

Kubelet: This service runs on nodes, reads the container manifests, and ensures the defined containers are started and running.

kubectl: The command line configuration tool for Kubernetes.

Volumes: A Kubernetes Volume provides persistent storage that exists for the lifetime of the pod itself. This storage can also be used as shared disk space for containers within the pod. Volumes are mounted at specific mount points within the container, which are defined by the pod configuration, and cannot mount onto other volumes or link to other volumes.
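
Here is a sketch of a pod-scoped volume shared by two containers (emptyDir is the simplest volume type; names and images are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                 # exists for the lifetime of the pod
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data           # mount point defined by the pod configuration
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data           # the same data is visible to both containers
```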

Namespaces: Kubernetes provides a partitioning of the resources it manages into non-overlapping sets called namespaces. They are intended for use in environments with many users spread across multiple teams, or projects, or even separating environments like development, test, and production.
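
Creating a namespace is a one-object manifest (the name below is an illustrative assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development          # e.g. keep dev resources separate from test and production
```

Objects are then placed in it by setting metadata.namespace in their manifests or by passing `-n development` to kubectl.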

ConfigMaps and Secrets: A common application challenge is deciding where to store and manage configuration information, some of which may contain sensitive data. Configuration data can be anything as fine-grained as individual properties or coarse-grained information like entire configuration files or JSON / XML documents. Kubernetes provides two closely related mechanisms to deal with this need: “configmaps” and “secrets”, both of which allow for configuration changes to be made without requiring an application build.
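
A minimal sketch of both objects (all keys and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"                # a fine-grained individual property
  app.properties: |                # or a coarse-grained, whole configuration file
    feature.x=true
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=        # base64-encoded "password"; kept out of the image
```

A container can pull both in (for example with envFrom referencing the ConfigMap and the Secret), so changing a value only means updating these objects, not rebuilding the application image.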

StatefulSets: It is very easy to address the scaling of stateless applications: one simply adds more running pods — which is something that Kubernetes does very well. Stateful workloads are much harder, because the state needs to be preserved if a pod is restarted, and if the application is scaled up or down, then the state may need to be redistributed. Databases are an example of stateful workloads. When run in high-availability mode, many databases come with the notion of a primary instance and secondary instance(s).
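
A StatefulSet sketch for a small database (image, names, and sizes are illustrative assumptions): each replica gets a stable identity (db-0, db-1, ...) and its own persistent volume.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless         # headless service that gives each replica a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:14
        env:
        - name: POSTGRES_PASSWORD
          value: example           # illustrative only; use a Secret in practice
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # one persistent volume claim per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```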

DaemonSets: Normally, the locations where pods are run are determined by the algorithm implemented in the Kubernetes Scheduler. For some use cases, though, there could be a need to run a pod on every single node in the cluster. This is useful for use cases like log collection, ingress controllers, and storage services.
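
A DaemonSet sketch for node-level log collection (the busybox command below is a stand-in for a real log agent such as Fluentd; names are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: busybox:1.36
        command: ["sh", "-c", "tail -F /var/log/syslog"]   # stand-in for a real collector
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log           # one pod per node reads that node's own logs
```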

Labels and selectors: Kubernetes enables clients (users or internal components) to attach keys called “labels” to any API object in the system, such as pods and nodes.
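
A short sketch of labels and a matching selector (names and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-pod
  labels:                    # arbitrary key/value pairs attached to the object
    app: api
    environment: production
spec:
  containers:
  - name: api
    image: nginx:1.21
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  selector:                  # selectors match objects by their labels
    app: api
    environment: production
  ports:
  - port: 80
```

The same labels drive ad-hoc queries such as `kubectl get pods -l app=api,environment=production`.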

Kubernetes is commonly used as a way to host a microservice-based implementation, because it and its associated ecosystem of tools provide all the capabilities needed to address key concerns of any microservice architecture.

Kubernetes Use Cases

1. Tinder’s Move to Kubernetes

Due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability. What did they do?

The answer is, of course, Kubernetes.

Tinder’s engineering team solved interesting challenges to migrate 200 services and run a Kubernetes cluster at scale totaling 1,000 nodes, 15,000 pods, and 48,000 running containers.

Was that easy? No way. However, they had to do it to keep business operations running smoothly going forward. One of their engineering leaders said, “As we onboarded more and more services to Kubernetes, we found ourselves running a DNS service that was answering 250,000 requests per second.” Tinder’s entire engineering organization now has knowledge and experience on how to containerize and deploy their applications on Kubernetes.

2. Reddit’s Kubernetes Story

Credits: Reddit

Reddit is one of the busiest sites in the world. Kubernetes forms the core of Reddit’s internal infrastructure.

For many years, the Reddit infrastructure team followed traditional ways of provisioning and configuration. This only went so far: once they saw the serious drawbacks and failures that came with doing things the old way, they moved to Kubernetes.

3. The New York Times’s Journey to Kubernetes

Credits: The New York Times

Today the majority of the NYT’s customer-facing applications are running on Kubernetes. What an amazing story. The biggest impact has been an increase in the speed of deployment and productivity. Legacy deployments that took up to 45 minutes are now pushed in just a few. It’s also given developers more freedom and fewer bottlenecks. The New York Times has gone from a ticket-based system for requesting resources and weekly deploy schedules to allowing developers to push updates independently.

Check out the evolution and the fascinating story of The New York Times’s tech stack.

4. Airbnb’s Kubernetes Story

Credits: Slides by Melanie Cebula at QCon London 2019

Airbnb’s transition from a monolithic to a microservices architecture is pretty amazing. They needed to scale continuous delivery horizontally, and the goal was to make continuous delivery available to the company’s 1,000 or so engineers so they could add new services. Airbnb adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over 250 critical services to Kubernetes (at a frequency of about 500 deploys per day on average). I recommend watching the excellent presentation by Melanie Cebula, an infrastructure engineer at Airbnb.

5. Pinterest’s Kubernetes Story

Image credits: Pinterest

With over 250 million monthly active users and over 10 billion recommendations served every single day, the engineers at Pinterest knew these numbers would only keep growing, and they began to feel the pain of scalability and performance issues.

Their initial strategy was to move their workload from EC2 instances to Docker containers; they first moved their services to Docker to free up engineering time spent on Puppet and to have an immutable infrastructure.

The next strategy was to move to Kubernetes. Now they can take ideas from ideation to production in a matter of minutes, whereas earlier they used to take hours or even days. They have cut down so much overhead cost by utilizing Kubernetes and have removed a lot of manual work without making engineers worry about the underlying infrastructure.

Read their impressive story on the Kubernetes website: ‘Pinterest Case Study’.

6. Pokemon Go’s Kubernetes Story

Image credits: iThome

How was Pokemon Go able to scale so efficiently and become so successful? The answer is Kubernetes. Pokemon Go was developed and published by Niantic Inc. and grew to 500+ million downloads and 20+ million daily active users.

Pokemon Go’s engineers never expected their user base to grow exponentially and surpass expectations within such a short time. They were not ready for it, and the servers couldn’t handle that much traffic.

Pokemon Go also faced a severe challenge when it came to vertical and horizontal scaling because of the real-time activity by millions of users worldwide. Niantic was not prepared for this.

The solution was in the magic of containers. The application logic for the game ran on Google Container Engine (GKE) powered by the open source Kubernetes project. Niantic chose GKE for its ability to orchestrate their container cluster at planetary-scale, freeing its team to focus on deploying live changes for their players. In this way, Niantic used Google Cloud to turn Pokémon GO into a service for millions of players, continuously adapting and improving. This gave them more time to concentrate on building the game’s application logic and new features rather than worrying about the scaling part.

Summary:-

In this article, we discussed a basic introduction to Kubernetes, went over some key terms like pods and volumes, and looked at some great industry use cases showing how companies use Kubernetes.

Thanks to Vimal sir for giving us this task to research Kubernetes topics.


Neeteesh Yadav

Technical Enthusiast | MLOps (Machine Learning + Operations) | DevOps Assembly Line | Hybrid Multi-cloud