Kubernetes 101 – Pods

This is the start of a series of blog posts that will explain the different components of Kubernetes. Primarily because if I can explain it here, I’ll have learned it quite well myself.

Primer on Containers

I think most people are at least aware of the existence of containers. Fundamentally, a container is a construct used to make an application component self-contained and portable. It holds all the libraries and binaries required to run that component. Think of it as a virtual machine sans Operating System. If you don't have an Operating System to run, how much less resource is required? How much faster will it start?

The answer to both of these is a lot.

It's a natural evolution from physical servers to virtual machines to containers. It helps developers work in isolation without worrying about trying to run a code merge against the rest of the team's work. It enables fast feedback when the code is committed and the unit tests are run. Because all the libraries and binaries are held within the container, it helps with the age-old problem – "It worked on my laptop."

Because of the faster start time, they're easier to scale up when you need to and scale down when you don't. Or even just start a container when you need a task performed and delete it when that task is finished.

Pods Vs Containers

Kubernetes doesn't deal in containers directly; it runs containers inside a pod. This makes the pod the smallest deployable unit, the atomic unit within Kubernetes.

A pod can hold multiple containers, which is the reason the pod exists rather than Kubernetes managing containers directly. It is a higher-level construct that enables multiple containers to be scheduled onto the same machine and share the same network namespace.
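
To illustrate that shared network namespace, here's a minimal sketch of a two-container pod (the container names and images are just examples, not from the original post):

apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod
spec:
  containers:
  - name: web
    image: nginx          # listens on port 80 inside the pod
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "while true; do wget -q -O- http://localhost:80; sleep 5; done"]

Because both containers share the pod's IP, the sidecar can reach the web container on localhost without any service discovery.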

That said, multiple single-container pods are usually better than multiple containers in the same pod. When a container starts, its main process runs as PID 1. If PID 1 dies, the container exits and Kubernetes restarts it according to the pod's restart policy. If the pod is managed by a higher-level controller, a replacement pod will be started, possibly on another node.

Just because you can run multiple containers in the same pod doesn't mean you should. Scaling, scheduling and resource management all become simpler when each container runs in its own pod.

Creating a pod

A pod can be created by one of two methods: imperatively or declaratively. Both (usually) involve the CLI, but with one the pod is created directly on the command line, while with the other it is declared in a YAML or JSON file that is then posted to the API server. The end result is broadly the same, in that both directly or indirectly post JSON to the API server.

Imperatively

kubectl run test-pod --image=busybox --restart=Never

As you can see, it's not rocket science to perform this task and your pod will be up and running in no time. If you want to make any changes to that pod, you edit it directly. Either through kubectl edit, which brings up your text editor so you can change the YAML before posting it back to the API server.

kubectl edit pod test-pod

Or by deleting and recreating the pod with your changes.

kubectl delete pod test-pod
kubectl run test-pod --image=busybox --restart=Never --command -- sleep 3600

Once you’ve done this, where is the record of your configuration and the changes you’ve posted?

In my view using the CLI directly is great for quickly spinning up a new pod for testing or development but I’d avoid it for production use.
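
One way to keep at least some record after the fact (a quick sketch using standard kubectl flags) is to export the live object's YAML and save it somewhere sensible:

kubectl get pod test-pod -o yaml > test-pod.yaml

This gives you something you can put into source control, although the output will include a lot of runtime fields (status, uid and so on) that you'd normally strip out first.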

Declaratively

This involves creating a YAML (or JSON, but YAML seems to be the de facto standard) file with all your configuration options in it and posting the whole file to the API server.

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: busybox
    name: c1
    command:
    - sleep
    - "3600"        # keep the container running for an hour

Once you’ve written and saved this, you post it to the API server using
kubectl apply -f test-pod.yaml

As this configuration is stored in a file, it can be kept in the source control repository of your choice, along with all the good stuff that brings. It also means that if you need to make a change, you simply edit the original file and post it again.
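
If you want to see what an edit will change before you post it, kubectl can diff the file against the live object (assuming a reasonably recent kubectl version):

kubectl diff -f test-pod.yaml
kubectl apply -f test-pod.yaml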

Stop or Delete pods

Now that we've talked about creating pods, it stands to reason that you'll need to delete them sometimes. The easiest way to achieve this is by name:
kubectl delete pod test-pod
It can also be done by selecting by label (labels covered next):
kubectl delete pod -l app=test-app
Or by deleting the whole namespace (namespaces covered later):
kubectl delete namespace test-ns
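
If the pod was created declaratively, you can also delete it using the same file you applied:

kubectl delete -f test-pod.yaml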

Pod Labels

A label is a key-value pair and is an important construct in Kubernetes. In summary, Kubernetes uses labels to select resources to be managed by any number of higher level controllers. This enables scaling, self-healing and load balancing to name but a few.

There are a number of ways to label a pod. The most common (and, you could argue, the best way) is to include the label(s) in the pod's YAML descriptor.

...
metadata: 
  name: test-pod
  labels:
    app: test-app
    env: test
...

This can also be done imperatively:
kubectl label pod test-pod app=test-app
Once these labels are created, you can list your pods along with their labels:
kubectl get pod --show-labels
Or you can filter by a specific label:
kubectl get pod -l app=test-app
Or list all pods that have the label set:
kubectl get pod -l app
Or not set:
kubectl get pod -l '!app'
You can also label nodes, which comes in pretty handy if you want to highlight features that certain pods need in order to run, like a GPU or SSD storage. To do this, label the node:
kubectl label node node1 gpu-installed=true
Then reference this label under the nodeSelector attribute within the pod's YAML:

...
spec:
  nodeSelector:
    gpu-installed: "true"
...
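
To sanity-check the result, you can list the nodes carrying the label and see which node the pod actually landed on (standard kubectl output options):

kubectl get node -l gpu-installed=true
kubectl get pod test-pod -o wide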

In summary, labels are an incredibly flexible way to organise your pods. As mentioned earlier, they're also very important to the rest of the Kubernetes platform.

Annotations

Following on from labels, annotations are another type of key-value pair that can form part of an object's metadata. The main difference between the two is that annotations can't be used to select multiple objects, and annotations can hold much larger (and therefore more descriptive) pieces of information.

I find the best use case for annotations is to add a description so that all users of a cluster understand what each component is. Sometimes annotations are also updated automatically by applications, or are used by alpha/beta features within Kubernetes.
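
As a quick sketch, an annotation can be added imperatively (the description text here is just an example):

kubectl annotate pod test-pod description="Test pod used for the Kubernetes 101 series"

Or in the pod's YAML descriptor:

...
metadata:
  name: test-pod
  annotations:
    description: "Test pod used for the Kubernetes 101 series"
...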

Namespaces

Namespaces are a means to group resources in a meaningful way. They're most commonly used to separate the same cluster into development, test and sandbox environments, or possibly to give each branch of code its own namespace.
Namespaces allow you to re-use the same name for pods, configmaps, secrets etc. This means that all objects can be created with the same YAML files but will be translated to the environment they’re in by the settings in the configmap within that namespace.
A number of other things can be applied to individual namespaces too: think permissions and resource quotas.
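
A minimal sketch of working with a namespace (using the same test-ns name as the delete example above):

kubectl create namespace test-ns
kubectl apply -f test-pod.yaml -n test-ns
kubectl get pod -n test-ns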

Change of Direction

Sometimes at work priorities change.

Right now for me that means that my vRA studying is going to need to go on the back burner while I step into some Kubernetes-sized shoes. This is both exciting and frustrating at the same time. On the one hand, learning Kubernetes is very exciting and full of diverse technologies. On the other, I've spent a great deal of time working with an old version of vRA just for the purposes of the VCAP-CMA exam.

With any luck by the time I come back to it (if I do) the exam version will have been uplifted to something more recent!

So expect some Kubernetes content in the near future. Very probably a CKA study guide!

VCAP-CMA Deploy – Objective 8.2

Disclaimer: These are my notes from studying for the 3V0-31.18 exam. If something doesn’t make sense, please feel free to reach out.

The main goal for this objective is the security of vRealize Automation.

Objective 8.2 – Secure a vRealize Automation deployment in accordance with the VMware hardening guide

References

This objective is very much about the appliance itself, so familiarity with Linux hardening, particularly around SSH, will be beneficial. Almost all of these changes are made on individual hosts, so they will need to be repeated on each host.

Very roughly, this can be split into:

  • Client Access
  • Data at rest
  • Data in transit
  • Misc

I’m just going to run through a brief overview of each section. For further detail have a read of the very comprehensive documentation.

Client Access

To secure access to the appliance, think along the lines of creating a separate user to log in to the appliance (VAMI, console & SSH) and disabling direct root access. Once logged in to a CLI, you can su to root. Only enable SSH when it's actually required. Also consider password policies and matching the local users' password settings to the corporate policy.
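
As a rough sketch of the SSH portion (generic sshd_config directives rather than anything quoted from the hardening guide, and the username is hypothetical):

# /etc/ssh/sshd_config: disable direct root login over SSH
PermitRootLogin no
# only allow the dedicated admin user (hypothetical name) to connect, then su to root
AllowUsers vra-admin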

You may also want to consider changing the default timeouts for vRA. The default is set to 30 minutes.

Data at Rest

This is about securing access to the data that is held on local disk: the database and the application files. If you need access to the database for anything outside of the application, you should create another user account for this purpose rather than using the default postgres user. There is also a list of commands in the hardening guide to ensure that the application files are secure; they are by default, but this should give you an idea if something has been tampered with.

Data in Transit

Securing the data while it is stored on disk is no good unless access to that data is also secure. You'll want to disable SSL v3.0, TLS v1.0 & v1.1 and configure the accepted cipher suites as per your corporate policies on all of the below services:

  • haproxy
  • lighttpd
  • vcac
  • vco
  • rabbitmq
  • IaaS Servers

You may also want to review the response headers for these services to ensure that no additional information is given away in this manner either.
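
A quick way to sanity check a service from a client is to attempt a handshake with one of the protocols you've just disabled using openssl (the hostname is hypothetical, and these connections should now fail):

openssl s_client -connect vra.example.local:443 -tls1
openssl s_client -connect vra.example.local:443 -tls1_1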


VCAP-CMA Deploy – Objective 8.1

Disclaimer: These are my notes from studying for the 3V0-31.18 exam. If something doesn’t make sense, please feel free to reach out.

The main goal for this objective is the security of vRealize Automation.

Objective 8.1 – Renew, and/or replace security certificates on distributed vRealize Automation components

References

This is about replacing the certificates on these components:

  • vRA appliance
  • IaaS Manager Service Server
  • Web Server

Other certificates that are in use manage themselves, using self-signed certificates to communicate. An external vRO must be updated separately, but if you're using the embedded one it will update automatically.

All of these can be updated from the VAMI page of the vRA appliance. The different certificates can be managed from two pages:

  • Host Settings page – vRealize Automation certificate
  • Certificates page – IaaS certificates

Both of these pages provide different options to complete the certificate replacement.

  • Generate – generate a self-signed certificate to replace the existing certificate in situ
  • Import – use an existing certificate
  • Provide thumbprint – use a certificate that has already been imported into the IaaS server's certificate store. This just acts as a pointer; no certificate is physically transmitted

When you update a certificate, trust is re-initiated with other components.

Side note – If you use certificate chains, specify the certificates in the following order:

  1. Client/server certificate signed by the intermediate CA certificate

  2. One or more intermediate certificates

  3. A root CA certificate
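
For example, a PEM bundle in that order might be assembled like this (the file names are hypothetical):

cat server.pem intermediate.pem root-ca.pem > chain.pem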

If you offload SSL on your load balancer, you will need to SSH to the appliance to export the certificate so that it can be uploaded to the load balancer.

While updating the certificate, a list of recent actions and their success/failure is shown near the bottom of the page.

That’s all for this one, fairly straightforward. Although it’s always worth remembering that exam questions are going to be scenario based so you’ll be asked to achieve an objective that may well touch multiple parts of vRA.


VCAP-CMA Deploy – Objective 7.2

Disclaimer: These are my notes from studying for the 3V0-31.18 exam. If something doesn’t make sense, please feel free to reach out.

The main goal for this objective is the initial installation & configuration of vRB in line with vRA.

Objective 7.2 – Integrate vRealize Business with vRealize Automation

References

Pretty simple objective this one.

Once you've deployed the vRB appliance, browse to the vRB VAMI page, complete the details of the vRA appliance on the Register tab and hit Register.

[Screenshot: vRB VAMI Register tab before registration]

Once you’ve registered the appliance successfully, you see the red text at the top & the SSO status change.

[Screenshot: vRB VAMI Register tab after registration]

Once this is done, log in to vRA. You'll notice that there are a few extra roles; once these have been granted, you'll see the Business Management tab and a Business Management section under the Administration tab.

The latter is the place to start, as this is where vRB data collection is configured. You'll need to configure vRB to point to the required endpoints. In my case, that's a vCenter and an NSX Manager.

All done!


VCAP-CMA Deploy – Objective 7.1

Disclaimer: These are my notes from studying for the 3V0-31.18 exam. If something doesn’t make sense, please feel free to reach out.

The main goal for this objective is scaling vRealize Automation.

Objective 7.1 – Scale vRealize Automation components to a highly-available configuration

References

There are a few ways to scale a vRA installation. The simplest involves installing the IaaS components on your Windows servers and using the VAMI to add another vRA node to the cluster.

The automatable alternatives are vra-command and the API. They’re detailed really well in a blog post series (part 2 & part 3) from the Cloud Management BU over at VMware.

For this post (based on my assumptions about the exam) we'll be using the manual method. Let's say we've got an environment set up like the below:

[Diagram: minimal vRA deployment]

Later down the line, we want to make this setup resilient, looking like the below:

[Diagram: minimal resilient vRA deployment]

Firstly we’ll need to deploy another vRA appliance and another two Windows servers.

vRA Appliance

Browse to the active appliance’s VAMI and open the cluster page to confirm the component parts of your existing cluster.

[Screenshot: cluster status before joining the new node]

You can see that the node is not currently in cluster mode, along with the three boxes that currently make up the environment.

On the new vRA appliance's VAMI page, log in and cancel the setup wizard. Browse to the cluster page and confirm the node is not part of a cluster. Fill in the details of the active node and hit the Join Cluster button. You'll be asked to verify the certificate if you're using self-signed certs.

Once complete, you can check that both nodes are visible on the cluster, messaging and database tabs.

IaaS Nodes

It's a little bit more complicated for the IaaS nodes, as you'll need to satisfy the prerequisites manually. I'm only covering this for one of the two boxes I'm adding, but the process is the same for each. Log in to the IaaS box and download the IaaS installer from one of the vRA appliances. Run it, connect to a vRA box, select the IaaS role and run the prerequisite checker. This will very quickly highlight if you've missed any of the prerequisites! Assuming it passes, continue through the install. Once it's finished, add the server to your load-balanced server pool and you're all done!

VCAP-CMA Deploy – Objective 6.2

Disclaimer: These are my notes from studying for the 3V0-31.18 exam. If something doesn’t make sense, please feel free to reach out.

The main goal for the whole of section 6 is to understand the tenant administration required.

Objective 6.2 – Add additional tenants and/or business groups to existing ones

References

This should be a fairly quick objective to cover off.

Add additional tenants

Creating additional tenants is done by logging in to the default tenant as a System Administrator. From the Tenants tab, hit the New button to be presented with a fairly self-explanatory form:

[Screenshot: New Tenant form]

Once you've completed this page and hit the Next button, you'll be asked to create any local users that you need. I typically create the bare minimum, to be used to configure Active Directory authentication. Once that is done, keep it as a break-glass account.

Then add the user(s) you’ve just created to the appropriate group(s) – IaaS Administrators or Tenant Administrators.

Add new business groups

Logged in as a tenant administrator, go to Administration, Users & Groups, Business Groups. Hit the new button (surprise!) and fill in the first page of the form. On the next page, you need to add users to each role. For a definition of what each group does, hover over the i button. You can also check out my post from the VCP for what permissions each group gets here. Once that is done, you can allocate a default machine prefix and an Active Directory container if required.