December 2, 2014

Day 2 - Running Applications at Scale with Kubernetes

Written by: Kelsey Hightower (@kelseyhightower)
Edited by: Tom Purl (@tompurl)

In this post we’ll take a look at Linux containers, built using the Docker image format, as an application packaging and distribution mechanism. This means utilizing containers as an alternative to RPMs, Debs, and tarballs. Then we’ll turn our attention to the workflow provided by Kubernetes for managing applications.

Normally when we talk about scale we tend to think about providing more compute resources to run applications, but today I want to focus on the ability to truly separate the concerns of building applications from running them in production. In other words, I want to talk about scaling how we ship applications.

Containers for application packaging

The key to automation is to simplify things before you automate them. I’m a big fan of self-contained binaries like the ones produced by the Go compiler. It’s hard to argue against the convenience of distributing a single artifact vs the “deploy and pray” method offered by many of today’s packaging solutions.

Each language community has attempted to solve the packaging problem in various ways specific to their platform needs. As a result we have gems, bundler, pip, virtualenv, npm, and the list goes on and on. The drawback to those solutions? They don’t allow you to express the OS dependencies required by many of the applications we build and ship today. With Docker containers we effectively gain the ability to build self-contained binaries for any application stack.

To be clear, Docker’s image format does not solve the packaging problem; it just moves it around a bit. The end results are pretty good. We can now rely on a single artifact to universally represent any application. The Docker container has become the new lego brick for building larger systems.
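
As a rough sketch of that workflow, building and publishing the hello image used later in this post might look like the following (the Dockerfile in the current directory and push access to the quay.io repository are assumptions here, not something prescribed by Docker or Kubernetes):

# Build the image from the application's Dockerfile, then push it to a
# registry so any machine in the cluster can pull the exact same artifact.
docker build -t quay.io/kelseyhightower/hello:1.0.0 .
docker push quay.io/kelseyhightower/hello:1.0.0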

Now I can focus on more interesting problems.

The Datacenter is the Computer

With containers I can easily ship applications between machines and start to think of a cluster of machines as a single computer. Each machine acts like another CPU with the ability to execute applications. Each machine still runs an operating system, but the goal is not to interact with the local OS directly. Instead we want to treat the local OS as firmware for the underlying hardware resources.

The only thing missing now is a scheduler.

The Linux kernel does a fantastic job of scheduling applications on a single system. Chances are if we run multiple applications on a single system the kernel will attempt to use as many CPU cores as possible to ensure that our applications run in parallel.

When it comes to a cluster of machines, the job of scheduling applications becomes an exercise for the sysadmin. Today, for many organizations, scheduling is handled by the fine men and women running Ops. Unfortunately, human schedulers require humans to keep track of where applications can possibly run. Sometimes this means using spreadsheets or a configuration management tool. Either way, these tools don’t offer robust scheduling that can react to real-time events. This is where Kubernetes fits in.

If the datacenter is the computer, then Kubernetes would be its operating system.

Application scheduling with Kubernetes

Kubernetes is a declarative system for scheduling Docker containers. First we need to get on the same page regarding what I mean by declarative. In Kubernetes we don’t write infrastructure as code. I know this goes against what many have been taught with regard to thinking about infrastructure, but let me try to explain.

Kubernetes introduces the concept of a Pod, a collection of Linux containers, which represents a single application. Kubernetes also provides a specification for declaring to the cluster how Pods should be deployed. As a result we only need to express our applications like this:

id: helloController
apiVersion: v1beta1
kind: ReplicationController
desiredState:
  replicas: 2
  # replicaSelector identifies the set of Pods that this
  # replication controller is responsible for managing
  replicaSelector:
    name: hello
  # podTemplate defines the 'cookie cutter' used for creating
  # new pods when necessary
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: hello
        containers:
          - name: hello
            image: quay.io/kelseyhightower/hello:1.0.0
            ports:
              - containerPort: 80
    # Important: these labels need to match the selector above
    # The api server enforces this constraint.
    labels:
      name: hello

The above configuration describes the hello application as a Kubernetes pod composed of a single container, quay.io/kelseyhightower/hello:1.0.0, listening on port 80, and instructs the cluster to keep 2 replicas of that pod running.

Once we have written this configuration and fed it to the Kubernetes API (an example command follows the list below), the various components that make up Kubernetes will take the following steps:

  • Locate nodes capable of running the Pod.
  • Instruct the chosen nodes to download and run the quay.io/kelseyhightower/hello:1.0.0 container.
  • Monitor the state of the Pods to ensure 2 copies are running at all times.
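
Submitting the configuration is done with the kubecfg command line tool. The exact invocation depends on the Kubernetes release you are running, and hello-controller.yaml is just a placeholder for wherever you saved the config above, but it looks roughly like this:

# Create the replication controller described in the config file above.
kubecfg -c hello-controller.yaml create replicationControllers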

At this point we have declared to the system that we require 2 instances of the hello pod to be running at all times. If instead we required 1000 instances of the hello app, we could adjust the configuration, then add more machines to the cluster. Once the additional compute resources show up, the Kubernetes scheduler will deploy our pods to them until the desired number of pods is running.
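
A minimal sketch of that adjustment, assuming the resize helper available in kubecfg at the time (you could also edit the replicas count in the config and update the controller):

# Ask Kubernetes to maintain 1000 running copies of the hello pod.
kubecfg resize helloController 1000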

If one of the nodes running a pod goes offline, a new pod will be created and scheduled onto the next eligible node in the cluster. Kubernetes enforces our definition at all times. This is what makes Kubernetes a declarative system.
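
A quick way to watch that enforcement in action, assuming the list subcommand provided by kubecfg, is to list the pods and confirm that 2 copies of hello remain running even after a node disappears:

# List the pods the cluster knows about, along with where they are running.
kubecfg list pods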

It’s all about workflow

The dream workflow is that code gets checked in, some tests run, and things magically start running in production. For those who have tried pulling this off, you know that’s not an easy thing to do.

Kubernetes offers a step in the right direction. Code gets checked in and then containers are built and pushed to registries. Once this happens, the team or person in charge of that service can simply update the Kubernetes configuration to pick up the new container. That workflow looks like this:

kubecfg --image "quay.io/kelseyhightower/hello:2.0.0" rollingupdate helloController

So far I’ve only provided a high-level view of Kubernetes, but this is only the beginning. Kubernetes provides integration with DNS, load balancers, and even centralized logging. The ideas behind Kubernetes are solid, and anything missing can be easily added to build the right system for your needs.

What’s in it for Sysadmins?

Tools like Kubernetes feel like the system I’ve always tried to build. Some see Kubernetes as a PaaS like Heroku, but that’s far from the case. Kubernetes is a pluggable system that provides a platform for managing application containers at scale.

I just want to be clear: Kubernetes is not some magical system that paints rainbows and trains ponies. It’s a well-defined system that captures the experience of fellow sysadmins into an open source project we can all reuse and contribute to. The idea is that Ops would keep Kubernetes and the underlying machines up and running to provide a complete platform for running any application or job required by the team. I like to think of Kubernetes as Ops with an API.

If you would like to learn more about Kubernetes, check out the official website, or take it for a spin with the Intro to Kubernetes Tutorial.
