Kubernetes Features and Its Industrial Use Cases in the Modern World
First of all, let’s look at what Kubernetes is, and then talk about its use cases.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
Kubernetes features the ability to automate web server provisioning according to the level of web traffic in production. Web server hardware can be located in different data centers, on different hardware, or with different hosting providers. Kubernetes scales up web servers according to demand for the software applications, then scales web server instances back down during periods of low traffic. Kubernetes also has advanced load balancing capabilities for routing web traffic to the web servers in operation.
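As a concrete sketch of the scaling and load-balancing behavior described above, this behavior is typically expressed declaratively in a manifest. The names, image, and replica count below are illustrative, not taken from any specific deployment: the Deployment asks Kubernetes to keep three identical web server pods running, and the Service spreads incoming traffic across them.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # Kubernetes keeps three identical web server pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer     # distributes incoming traffic across the pods
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Raising or lowering `replicas` (or attaching an autoscaler) is all it takes to grow or shrink the fleet; the Service keeps routing traffic to whatever pods exist.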
Kubernetes architecture and how it works
Kubernetes evolved from the code that Google used to manage its data centers at scale with the “Borg” platform. AWS introduced elastic web server frameworks to the public with the launch of the EC2 platform. Kubernetes gives companies EC2-style elastic orchestration for containers, but with open-source code. Google, AWS, Azure, and the other major public cloud hosts all offer Kubernetes support for cloud web server orchestration. Customers can use Kubernetes for complete data center outsourcing, web/mobile applications, SaaS support, cloud web hosting, or high-performance computing.
Why are IT pros deploying more containers in the first place?
Deployment speed, workload portability, and a good fit with the DevOps way of working, for starters. Containers can greatly simplify the provisioning of resources to time-pressed developers. “Once organizations understand the benefits of containers and Kubernetes for DevOps, application development, and delivery, it opens up so many possibilities, from modernizing traditional applications to hybrid- and multi-cloud implementations and the development of new, cloud-native applications with speed and agility,” says Ashesh Badani, SVP and general manager for cloud platforms at Red Hat.
Here’s an analogy: You can think of a container orchestrator like Kubernetes as you would a conductor for an orchestra, says Dave Egts, chief technologist, North America Public Sector, Red Hat. “In the same way a conductor would say how many trumpets are needed, which ones play the first trumpet, and how loud each should play,” Egts explains, “a container orchestrator would say how many web server front end containers are needed, what they serve, and how many resources are to be dedicated to each one.”
Use cases solved by Kubernetes
Since its inception, Kubernetes has been a project that has enjoyed great recognition and has always had a large impact, and in recent months its influence has been consolidated by several factors.
The community has grown considerably. Google and Red Hat contribute the most, but there are also Meteor, CoreOS, Huawei, Mesosphere, and many more.
This growing interest is also visible in the number of questions asked on Stack Overflow and in the number of meetups held around this technology. In addition, it is no longer perceived as a toy to experiment with; it has matured enough that it is increasingly used in production, according to the CNCF survey.
Where can you use Kubernetes?
There are few restrictions on where you can use Kubernetes: almost any deployment option is possible, thanks to the many installation methods it offers and to the many solutions that integrate it into their architectures. We can therefore run K8s in whatever flavor we want.
- Bare Metal: we can deploy our cluster directly on physical machines using multiple operating systems: Fedora, CentOS, Ubuntu, etc.
- Virtualization On-Premise: if we want to mount our cluster on-premise, but with virtual machines, the possibilities grow. We can use Vagrant, CloudStack, VMware, OpenStack, CoreOS, oVirt, Fedora, etc.
- Cloud solutions: if we want to have all the advantages of Kubernetes, without taking care of managing everything below, we have all these alternatives in the cloud:
- Google Container Engine: a managed service offered by Google, which takes responsibility for managing the underlying Compute Engine instances. It also handles monitoring, logging, instance health, and updating Kubernetes to the latest available version.
- OpenShift: the leading PaaS integrates Kubernetes, so when using it in its different editions (enterprise, online, etc.), we will be using managed K8s clusters.
- CoreOS Tectonic: the product through which CoreOS provides Kubernetes. It facilitates portability across several public- and private-cloud providers, and its installation, updating, and maintenance require less operations work. It includes Prometheus for monitoring and alert management.
- CoreOS: will replace its fleet system with Kubernetes.
- Kops: creates and manages Kubernetes clusters (production-grade and highly available, if required) from the command line. So far it has been the unofficial way to install Kubernetes on AWS, and it plans to support Google Compute Engine and VMware vSphere.
- Deis: its open-source PaaS, now called Workflow, has been based on Kubernetes for years.
- Mesosphere: after their latest announcement, it seems they will move toward using Kubernetes as the orchestrator in place of Marathon.
- CloudFoundry offers Kubernetes in its Container Runtime.
- Others: Azure, IBM, Kube2Go, and GiantSwarm also offer managed Kubernetes services.
And if that wasn’t enough, Amazon Web Services has now joined the Cloud Native Computing Foundation, the home of the Kubernetes project, as a Platinum member.
Containers have meant a radical change in the way we build and deploy applications. As the density of containers increases, tools are needed to facilitate communication, administration, and planning.
It is in this environment that an orchestrator becomes necessary, and from its first appearance it was already clear that Kubernetes held an advantage over other orchestrators.
Although there have been many changes in recent months and there is strong competition, it seems we are witnessing the consolidation of Kubernetes as the main orchestration solution: it is found in every major PaaS and cloud service, it has the best features, and the community never stops improving it.
Without a doubt, it is the technology everybody talks about, the one everyone wants to contribute to, and the preferred platform for building applications based on containers.
What specifically can Kubernetes do for us?
There are five fundamental business capabilities that Kubernetes can drive in the enterprise, be it large or small. To add teeth to these use cases, we have identified some real-world examples that validate the value enterprises are getting from their Kubernetes deployments:
- Faster time to market
- IT cost optimization
- Improved scalability and availability
- Multi-cloud (and hybrid cloud) flexibility
- Effective migration to the cloud
1. Faster time to market (aka improved app development/deployment efficiencies)
Kubernetes enables a “microservices” approach to building apps. Now you can break up your development team into smaller teams, each focused on a single, smaller microservice. These teams are smaller and more agile because each has a focused function. APIs between these microservices minimize the amount of cross-team communication required to build and deploy. So, ultimately, you can scale multiple small teams of specialized experts who each help support a fleet of thousands of machines.
Kubernetes also allows your IT teams to manage large applications across many containers more efficiently by handling many of the nitty-gritty details of maintaining container-based apps. For example, Kubernetes handles service discovery, helps containers talk to each other, and arranges access to storage from various providers such as AWS and Microsoft Azure.
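The service discovery and storage plumbing mentioned above can be sketched in a couple of manifests. The `orders` service name and storage size here are hypothetical, purely for illustration: the Service gives a microservice a stable DNS name that other containers in the cluster can call, and the PersistentVolumeClaim requests storage that the cluster fulfills from whichever provider backs it (EBS on AWS, Azure Disk on Azure, and so on).

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders           # reachable from other pods as http://orders:8080
spec:
  selector:
    app: orders          # routes to pods carrying this label
  ports:
  - port: 8080
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi      # satisfied by the cluster's storage provider
```

The calling code never needs to know which cloud provides the disk or which node runs the pod; it just uses the service name and the mounted volume.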
Real-World Case Study
Airbnb’s transition from a monolithic to a microservices architecture is pretty amazing. They needed to scale continuous delivery horizontally, and the goal was to make continuous delivery available to the company’s 1,000 or so engineers so they could add new services. Airbnb adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over 250 critical services to Kubernetes. The net result is that Airbnb can now do over 500 deploys per day on average.
Tinder: One of the best examples of accelerating time to market comes from Tinder. Due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability, and they realized that the answer to their struggle was Kubernetes. Tinder’s engineering team migrated 200 services and ran a Kubernetes cluster of 1,000 nodes, 15,000 pods, and 48,000 running containers. While the migration process wasn’t easy, the Kubernetes solution proved critical to ensuring smooth business operations going forward.
2. IT cost optimization
Kubernetes can help your business cut infrastructure costs quite drastically if you’re operating at a massive scale. Kubernetes makes a container-based architecture feasible by packing together apps optimally using your cloud and hardware investments. Before Kubernetes, administrators often over-provisioned their infrastructure to conservatively handle unexpected spikes, or simply because it was difficult and time-consuming to manually scale containerized applications. Kubernetes intelligently schedules and tightly packs containers, taking into account the available resources. It also automatically scales your application to meet business needs, thus freeing up human resources to focus on other productive tasks. There are many examples of customers who have seen dramatic improvements in cost optimization using K8s.
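The “tight packing” described above is driven by the resource requests and limits declared on each container: the scheduler bin-packs pods onto nodes using the requests, and the limits cap what a container may consume. A minimal sketch, with illustrative (not recommended) values and a hypothetical image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: example/api:1.0   # hypothetical image, for illustration only
    resources:
      requests:              # the scheduler bin-packs pods onto nodes using these
        cpu: "250m"          # a quarter of one CPU core
        memory: "256Mi"
      limits:                # hard caps the container cannot exceed
        cpu: "500m"
        memory: "512Mi"
```

Because the scheduler knows every pod’s requests, it can fill each node close to capacity instead of leaving machines mostly idle, which is where the infrastructure savings come from.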
Real-World Case Study
Spotify is an early K8s adopter and has realized significant cost savings by adopting K8s. Leveraging the orchestration capabilities of K8s, Spotify has seen 2–3x better CPU utilization, resulting in better optimization of its IT spend.
Pinterest is another early K8s customer. Leveraging K8s, the Pinterest IT team reclaimed over 80 percent of capacity during non-peak hours. They now use 30 percent fewer instance-hours per day compared to their static cluster.
3. Improved scalability and availability
The success of today’s applications does not depend only on features, but also on how well the application scales. After all, if an application cannot scale, it will be highly non-performant at best and totally unavailable in the worst case. As an orchestration system, Kubernetes is a critical management layer for “auto-magically” scaling apps and improving their performance. Suppose we have a CPU-intensive service with a dynamic user load that changes based on business conditions (for example, an event ticketing app that sees a dramatic spike in users and load just before an event and low usage at other times). What we need is a solution that scales up the app and its infrastructure, so that new machines are automatically spun up as the load increases (more users are buying tickets), and scales it down when the load subsides. Kubernetes offers exactly that capability: it scales the application out as CPU usage goes above a defined threshold (for example, 90 percent on the current machines), and when the load reduces, it scales the application back in, optimizing infrastructure utilization. Kubernetes auto-scaling is not limited to infrastructure metrics: any type of metric, from resource utilization metrics to custom application metrics, can be used to trigger the scaling process.
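The ticketing-app scenario above maps directly onto a HorizontalPodAutoscaler. In this sketch (the name and replica bounds are illustrative), Kubernetes adds pods to a Deployment when average CPU utilization crosses a 90 percent threshold like the one mentioned in the text, and removes them when load subsides:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ticketing-app          # hypothetical app name
spec:
  scaleTargetRef:              # the Deployment being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: ticketing-app
  minReplicas: 2               # floor during quiet periods
  maxReplicas: 20              # ceiling during the pre-event rush
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 90 # scale out above 90% average CPU
```

The same `metrics` list can reference memory or, with the custom-metrics API, application-level signals such as queue depth or requests per second.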
Real-World Case Study
LendingTree: Here’s a great article from LendingTree. LendingTree has many microservices that make up its business apps. LendingTree uses Kubernetes and its horizontal scaling capability to deploy and run these services, and to ensure that its customers have access to services even during peak load. And to gain visibility into these containerized and virtual services and to monitor its Kubernetes deployment, LendingTree uses Sumo Logic.
4. Multi-cloud flexibility
One of the biggest benefits of Kubernetes and containers is that they help you realize the promise of hybrid and multi-cloud. Enterprises today already run multi-cloud environments and will continue to do so in the future. Kubernetes makes it much easier to run any app on any public cloud service, or on any combination of public and private clouds. This lets you put the right workloads on the right cloud and helps you avoid vendor lock-in. And getting the best fit, using the right features, and having the leverage to migrate when it makes sense all help you realize more ROI (short- and longer-term) from your IT investments.
Need more data to validate the multi-cloud-and-Kubernetes match-made-in-heaven story? This finding from the Sumo Logic Continuous Intelligence Report identifies a very interesting upward trend in K8s adoption based on the number of cloud platforms organizations use, with 86 percent of customers on all three using managed or native Kubernetes solutions. Should AWS be worried? Probably not. But it may be an early sign of a level playing field for Azure and GCP, because apps deployed on K8s can be easily ported across environments (on-premise to cloud, or across clouds).
Real-World Case Study
Gannett/USA Today is a great example of a customer who is using Kubernetes to operate multi-cloud environments across AWS and Google Cloud Platform. In the beginning, Gannett was an AWS shop. Gannett moved to Kubernetes to support their growing scale of customers (they did 160 deployments per day during the 2016 presidential news season!), but as their business and scaling needs changed, Gannett used the fact that they are deployed on Kubernetes in AWS to seamlessly run the apps in GCP.
5. Seamless migration to the cloud
Whether you are rehosting (lift and shift of the app), re-platforming (make some basic changes to the way it runs), or refactoring (the entire app and the services that support it are modified to better suit the new compartmentalized environment), Kubernetes has you covered.
Since K8s runs consistently across all environments (on-premise and clouds like AWS, Azure, and GCP), Kubernetes provides a more seamless and prescriptive path for porting your application from on-premise to cloud environments. Rather than dealing with all the variations and complexities of each cloud environment, enterprises can follow a more prescribed path:
- Migrate apps to Kubernetes on-premise. Here you are more focused on re-platforming your apps to containers and bringing them under Kubernetes orchestration.
- Move to a cloud-based Kubernetes instance. You have many options here — run Kubernetes natively or choose a managed Kubernetes environment from the cloud vendor.
- Now that the application is in the cloud, you can start to optimize your application to the cloud environment and its services.
Real-World Case Study
Shopify started as a data-center-based application and, over the last few years, has migrated its entire application to Google Cloud Platform. Shopify first started running containers (Docker); the next natural step was to use K8s as a dynamic container management and orchestration system.
Many enterprises adopting Kubernetes realize that Kubernetes is the first step to building scalable modern applications. To get good value from Kubernetes, enterprises need solutions that can monitor and secure Kubernetes applications. Sumo Logic provides the industry’s first Continuous Intelligence Solution for Kubernetes to help enterprises control and manage their Kubernetes deployments.
Thank you for reading 😊