All about Kubernetes and why you need more

Kubernetes is a platform for managing containerized workloads. Over recent years, it has pushed out alternative platforms and become a de facto standard.

All major cloud vendors now offer managed Kubernetes services, and there are no emerging competitors poised to unseat it. This article gives an overview of the rationale behind the emergence of Kubernetes and related technologies, and their place in an overall automation strategy.

To understand the value of Kubernetes, you need to understand what containers are and why they are important. Superficially, containers can be viewed as lightweight virtual machines. Whereas virtual machines provide the illusion of a physical host to workloads, containers provide an isolated environment in a host (whether virtual or physical). Put another way, virtual machines simulate physical machines, including hardware and operating systems, and containers carve out isolated environments in an operating system.

The reason containers have exploded in popularity is performance. Where a virtual machine has to bootstrap an entire operating system when it starts, a container only needs to start a process. In practical terms, a virtual machine often takes a minute or more to boot, while a container typically starts in milliseconds. Containers are also far leaner than virtual machines because they share rather than replicate an operating system. So why containers? Speed and density.

Container Engines

Containers have been part of the Linux operating system since 2006, building on a gradual introduction of kernel features that let users run processes in isolated environments. The first container manager, focused on the easy creation and management of containers, was LXC. But it wasn’t until the introduction of Docker that the popularity of containers began to grow exponentially. Container engines provide simple command-line utilities to define containers, including their storage, networking, and resource quotas.
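As a sketch of what that looks like in practice, the following Docker commands define a container with a named volume, a user-defined network, and CPU/memory quotas. The image, volume, and network names here are illustrative, not from the article:

```shell
# Create an isolated network and a persistent volume for the container.
docker network create demo-net
docker volume create demo-data

# Run an nginx container attached to that network and volume,
# capped at 256 MB of memory and half a CPU core.
docker run --detach \
  --name web \
  --network demo-net \
  --mount source=demo-data,target=/usr/share/nginx/html \
  --memory 256m \
  --cpus 0.5 \
  nginx:alpine
```

A handful of flags is all it takes to express what would require substantial configuration on a virtual machine, which is a large part of the appeal.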

Kubernetes

Kubernetes was open sourced in 2015 and was based on Google’s internal proprietary container management platform, Borg. Kubernetes was created to manage complex workloads running on large numbers of containers distributed across physical or virtual hosts, and it uses container engines as a building block. The Kubernetes philosophy advocates organizing software into small interconnected services, or microservices, packaged in containers and automated using declarative methods. This philosophy, along with development approaches like DevOps and continuous deployment, is an essential component of the “cloud native” movement. To promote it, Google and the Linux Foundation formed the Cloud Native Computing Foundation (CNCF) in 2015, which advocates cloud native approaches and defines an ecosystem to support them.

Kubernetes uses a master/worker paradigm to define a cluster, or group of servers (physical or virtual). Each worker in the cluster runs a container engine, and the Kubernetes master commands the workers to run and configure workloads. Generally, the user who deploys applications in Kubernetes views the cluster as an undifferentiated compute and storage resource: a single large computer. An application in Kubernetes typically consists of many containers, organized into services that consume each other. This architecture supports parallel software releases, enabling rapid and relatively low-risk deployment of bug fixes, upgrades, and new features.

Once an application is deployed into Kubernetes, the platform control plane actively manages it, using a scheduler to place workloads on the various workers. Application descriptors deployed into Kubernetes represent a declarative “desired state” of the application, which the control plane continuously works to achieve and then maintain, a kind of closed-loop automation.
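A minimal application descriptor might look like the following (the names and image are illustrative, not from the article). The `replicas` field declares desired state; the control plane works continuously to keep three copies running, restarting containers that fail:

```yaml
# Hypothetical Deployment: declares that three replicas of an nginx
# container should always be running. If a container (or the node
# hosting it) dies, the control plane starts a replacement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
```

Applying this file (for example with `kubectl apply -f deployment.yaml`) registers the desired state; re-applying an edited copy is how upgrades and scaling changes roll out.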

Beyond Kubernetes

One look at the CNCF landscape drives home the message that Kubernetes by itself is unlikely to be sufficient for most needs. Kubernetes is a foundational platform for developing solutions; it isn’t a solution itself. Generally, Kubernetes becomes part of a larger technology stack with many interconnections to internal and external services: legacy on-premises systems, conventional cloud-hosted applications, SaaS offerings, hybrid cloud architectures, and all the related security and networking configuration.

When looking at the big picture, where Kubernetes is one piece of a technology strategy, it’s clear that automation with a broader scope is needed. Tools that take a cloud-neutral, declarative approach to automation, like Cloudify and Terraform, can provision Kubernetes clusters as well as external services and their interconnections. Kubernetes by itself is a complex beast, and operating it in concert with external systems and networking is more complex still. Automation at every level is a must for overall operational success.
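As one hedged illustration of that broader, declarative approach, a Terraform configuration can describe a managed Kubernetes cluster the same way a Kubernetes manifest describes an application. The resource below provisions a minimal GKE cluster; the names and region are placeholders, and a real configuration would add node pools, networking, and credentials:

```hcl
# Hypothetical sketch: declare a managed Kubernetes cluster as
# infrastructure-as-code. "terraform apply" reconciles the cloud
# account toward this desired state, much as Kubernetes reconciles
# workloads inside the cluster.
resource "google_container_cluster" "primary" {
  name               = "demo-cluster"
  location           = "us-central1"
  initial_node_count = 3
}
```

The same configuration can also declare the databases, DNS records, and firewall rules the cluster depends on, which is precisely the scope Kubernetes alone does not cover.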

The Future of Kubernetes

Kubernetes is one of the most popular open source projects on the internet, and it continues to evolve rapidly. Initially focused primarily on foundational container scheduling, it soon added capabilities to address production concerns like security, stateful applications, cloud integration, and batch processing, to name a few.

As the platform has matured, the rate of fundamental change has slowed. While improvements in scalability and availability will continue indefinitely, the basic platform shape has become quite stable. The platform has become highly extensible, and most of the exciting work in the future will be based on Kubernetes, but not “inside” Kubernetes itself. This is a sign of success. For many, Kubernetes will essentially disappear, or be taken for granted as essential plumbing.

Exciting Kubernetes-based systems in networking, serverless computing, IoT, and edge computing are being explored and built now, exploiting the scalability, agility, and efficiency of container-based microservice architectures.

