Kubernetes 101 – Controllers

This is the second in a series of blog posts that will explain the different components of Kubernetes. Primarily because if I can explain it here, I’ll have learned it quite well myself.

The first part is about Pods and can be found here.

Why Controllers

Before we answer the why, we ought to think about what a controller is. The atomic unit in Kubernetes is the pod, and you can create and manage pods manually. A controller is an automated component that creates and manages pods on your behalf.

So why controllers? The primary reason: if you create a pod manually and the node it runs on dies, the pod is gone with it. In the real world, you want something else to reschedule that pod somewhere else, and that something is a controller.

Labels

A quick note on how a controller takes a pod under its wing. When a controller is created, a label selector is defined. Any pod whose labels match that selector is considered managed by the controller. If you manually remove the label from a pod that was created by, say, a Deployment, the controller will detect that a pod is missing and create another.
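For illustration, a pod manifest carrying a label that a controller's selector can match might look like this (the name, label and image here are just placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: appointment-pod
  labels:
    app: appointment          # any controller selecting app=appointment will adopt this pod
spec:
  containers:
  - name: appointment
    image: example/appointment:1.0   # placeholder image
```

Remove that app label and a controller selecting app=appointment will no longer count the pod towards its desired total, so it will create a replacement.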

Replication Controllers

A replication controller was the first method of deploying a number of identical pods and making sure they stay running. If a pod disappears or fails for any reason, another will be created. It also enables really easy horizontal scaling of these pods. A replication controller has three essential parts:

- a label selector, which determines which pods the controller manages
- a replica count, the desired number of pods that should be running
- a pod template, used to create new pod replicas
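A minimal sketch showing how those three parts fit together (the name, labels and image are just placeholders):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: appointment-rc
spec:
  replicas: 3                  # replica count: desired number of pods
  selector:
    app: appointment           # label selector: which pods this controller owns
  template:                    # pod template: used to create new pods
    metadata:
      labels:
        app: appointment       # must match the selector above
    spec:
      containers:
      - name: appointment
        image: example/appointment:1.0   # placeholder image
```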

ReplicaSets

While a replication controller was at first the only means of managing pods this way, the ReplicaSet was introduced later and will eventually replace replication controllers entirely. You can technically create a ReplicaSet directly, but it is usually created for you by a Deployment (the Deployment construct will get its own blog post later).

Broadly speaking, a ReplicaSet is almost identical to a replication controller. The difference is that a ReplicaSet has a more expressive label selector: it can match multiple labels at the same time. For example, pods with both env=dev and app=appointment could be matched and treated as a single group. A ReplicaSet can also match pods based on the mere presence of a label key, rather than requiring a specific value.
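A sketch of that more expressive selector, using matchLabels for the env=dev and app=appointment pairing and matchExpressions to require only that a key exists (the release key and image are just examples):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: appointment-rs
spec:
  replicas: 3
  selector:
    matchLabels:               # both labels must be present on a pod
      app: appointment
      env: dev
    matchExpressions:          # presence-only match: any value for "release" will do
    - key: release
      operator: Exists
  template:
    metadata:
      labels:
        app: appointment
        env: dev
        release: canary        # satisfies the Exists expression above
    spec:
      containers:
      - name: appointment
        image: example/appointment:1.0   # placeholder image
```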

DaemonSets

A DaemonSet is a means of deploying exactly one pod on every node. This is typically used for system type pods such as log collectors & monitoring agents. Another example is kube-proxy.

If you weren’t running Kubernetes, these are the kind of services you would start at boot via an init system like SysV init or systemd. Obviously, using a DaemonSet instead gives you all the Kubernetes goodness!

If a node goes down, the pod that went down with it is not recreated elsewhere. Equally, if a node joins the cluster, the DaemonSet will create a pod on the new node.

The nodeSelector attribute can still be used on a DaemonSet to restrict which nodes the pods are deployed to. You might want to use this if you were running some GPU monitoring agents: by adding a GPU label to those nodes and referencing it in the DaemonSet's nodeSelector, the pods will only be scheduled onto those nodes.
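A minimal DaemonSet sketch along those lines, assuming the GPU nodes have already been labelled gpu=true (the label, name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-monitor
spec:
  selector:
    matchLabels:
      app: gpu-monitor
  template:
    metadata:
      labels:
        app: gpu-monitor
    spec:
      nodeSelector:
        gpu: "true"            # only nodes labelled gpu=true get a copy of this pod
      containers:
      - name: agent
        image: example/gpu-monitor:1.0   # placeholder image
```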

Jobs

A Job resource is, very simply, a pod that runs until the task it starts with is complete. Once the task finishes, the pod completes but isn't removed automatically, which helps with checking logs and so on at a later date (unless you remove the pod, of course)!

In addition, the pod is only restarted if it dies (through node failure or any other reason) before its task has completed.
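A minimal Job sketch, assuming a hypothetical batch-import image; note that a Job's pod template must use a restartPolicy of Never or OnFailure:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-import
spec:
  backoffLimit: 4              # give up after this many failed attempts
  template:
    spec:
      restartPolicy: OnFailure # only re-run the pod if it didn't complete
      containers:
      - name: import
        image: example/batch-import:1.0   # placeholder image
```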

CronJobs

Exactly the same as the Job resource, but it runs on a schedule. The schedule is defined using the familiar cron syntax, hence the name…
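A minimal CronJob sketch (batch/v1 on recent clusters; the name and image are placeholders) that runs a job at 02:00 every night:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"        # standard cron syntax: 02:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: example/report:1.0   # placeholder image
```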

Liveness Probes

A few times I’ve mentioned that a pod will be killed if the process within it fails. What if the process doesn’t fail but the pod is still not returning the correct response?

Enter Liveness Probes. These are user-configurable checks that run on a regular schedule to test whether a container is healthy. A probe can be a command run inside the container (checking a log file, for instance) or an HTTP GET request against an endpoint. What to check is completely application dependent, but probes are a critical component in making sure your pods respond to failure the way you want them to.
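A minimal sketch of an HTTP liveness probe, assuming a hypothetical /healthz endpoint on port 8080 (the names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: appointment-pod
spec:
  containers:
  - name: appointment
    image: example/appointment:1.0   # placeholder image
    livenessProbe:
      httpGet:                 # the kubelet polls this endpoint
        path: /healthz
        port: 8080
      initialDelaySeconds: 10  # give the application time to start
      periodSeconds: 5         # how often to probe
```

If the probe keeps failing, the container is restarted.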

Remember a large part of this is to give you the tools to enable your Kubernetes cluster, and in turn your applications, to be automated so you don’t have to answer a pager at 0200 or worse, have your application down most of the night!