Bootstrapping Tenant Clusters
There are a couple of different methods to fully bootstrap a Kubernetes cluster using Cluster API (CAPI). One is to use the newly defined ClusterResourceSet, which lets you define a ConfigMap on the management cluster that is then applied to the tenant cluster as a set of resources. It’s a very effective method; the two blogs I’ve seen on this recently are Sam Perrin’s blog for the CAPV provider (vSphere) and Scott Lowe’s blog for the CAPA provider (AWS). In fact, it’s also the default way the CAPV provider installs the vSphere cloud controller.
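As a rough illustration of that first approach (I’m not using it in the rest of this post), a ClusterResourceSet pairs a label selector with a ConfigMap holding the manifests to apply; the resource names and label below are hypothetical:
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-crs            # hypothetical name
  namespace: clusters
spec:
  clusterSelector:
    matchLabels:
      cni: calico             # matches any Cluster carrying this label
  resources:
    - name: calico-addon      # ConfigMap in the same namespace containing the CNI manifests
      kind: ConfigMap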
The second method, which I’m going to cover today, is using Flux v2 and the GitOps Toolkit to provision resources to a remote cluster. I’m going to assume some familiarity with CAPI and Flux, as I won’t be covering the basics of either.
Flux v2 can be used to deploy to remote clusters. It does this by using a kubeconfig stored as a secret on the management cluster. This secret is generated as part of the CAPI cluster deployment process, but of course it can be created manually if needed.
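CAPI names the generated secret <cluster-name>-kubeconfig and stores the kubeconfig under the value key, which is a format Flux’s kubeConfig secretRef understands. If you did need to create one by hand, a minimal sketch might look like this:
apiVersion: v1
kind: Secret
metadata:
  name: tenant-cluster-kubeconfig   # follows the <cluster-name>-kubeconfig convention
  namespace: clusters
type: Opaque
stringData:
  value: |
    # paste the tenant cluster's kubeconfig here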
This is the repo layout I’m using for this example; obviously it can be structured however you decide.
|-- capi
|-- clusterconfig
| |-- tenant-cluster
| | |-- stage1
| | |-- stage2
|-- clusters
| |-- blogadmin
| | |-- flux-system
| |-- tenant-cluster
|-- infrastructure
Let’s have a quick review of what is contained in each folder.
capi
In here we’ve got the CAPI deployment manifests; these are consumed by the management cluster.
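The exact contents depend on your infrastructure provider, but as a trimmed-down sketch, a vSphere-backed cluster definition might look something like the following (versions and names are illustrative):
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: tenant-cluster
  namespace: clusters
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]   # must line up with the CNI configuration
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: tenant-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereCluster
    name: tenant-cluster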
clusterconfig
In here we’ve got the manifests that are to be deployed from the management cluster to the remote cluster. In this instance that’s just Calico and Flux, but you could add anything you require in here.
You’ll also notice each cluster is split into two ‘stage’ folders. This is so we can deploy the Custom Resource Definitions first, and then deploy the Custom Resources that depend on them separately.
clusters
This is the main cluster directory. Each cluster’s Flux deployment syncs with the relevant folder in this directory, and this is where we’ll define the Kustomization resources that deploy everything else.
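As a hypothetical example of what might sit in the tenant cluster’s folder, a Kustomization like the one below would tell that cluster’s own Flux installation to apply everything under ./infrastructure (the resource name is my own):
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
  prune: true
  validation: client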
infrastructure
I’ve got a collection of application manifests in here that I would deploy to any cluster. In this instance it’s just Prometheus and Grafana, but it could be anything.
To apply manifests via Flux, we create a Kustomization resource. Within this resource we can add a kubeConfig secretRef that points at the remote cluster’s kubeconfig. Below you can see the Kustomization I’m applying to the management cluster to deploy resources to the remote cluster for stage 1 of the process.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: tenant-cluster-1
  namespace: clusters
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  path: ./clusterconfig/tenant-cluster/stage1
  prune: true
  validation: client
  kubeConfig: # kubeconfig to use for authentication against the remote cluster
    secretRef:
      name: tenant-cluster-kubeconfig
  decryption: # configuration to enable secrets to be decrypted before deploying as a k8s secret
    provider: sops
    secretRef:
      name: sops-gpg
This will create a Flux v2 Kustomization resource that deploys the contents of the path element using the kubeConfig specified. As I want to deploy a Kubernetes secret to enable authentication with the remote git repository, I’ve added that secret within this repository, encrypted using Mozilla SOPS so it can be stored safely in git. To enable decryption prior to deployment, the corresponding private key is stored on the management cluster. The management cluster therefore decrypts the secret and applies it to the remote cluster as a standard Kubernetes secret.
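The SOPS setup itself will vary, but a .sops.yaml at the root of the repository along these lines (the path regex and key fingerprint are placeholders) is what tells SOPS which files and fields to encrypt with the GPG key whose private half lives on the management cluster:
creation_rules:
  - path_regex: clusterconfig/.*\.yaml
    encrypted_regex: ^(data|stringData)$
    pgp: "0000000000000000000000000000000000000000"   # replace with your GPG key fingerprint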
Once this is fully deployed, the stage 2 Kustomization will begin to reconcile, as the Custom Resources located within that directory can now be applied against the CRDs created in stage 1.
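I haven’t shown the stage 2 Kustomization, but it presumably mirrors stage 1 with the path changed, along the lines of the sketch below. The dependsOn field is my own addition, as an optional way to make the ordering explicit rather than relying purely on retries:
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: tenant-cluster-2        # hypothetical name for the stage 2 Kustomization
  namespace: clusters
spec:
  interval: 10m0s
  dependsOn:
    - name: tenant-cluster-1    # optional: wait for stage 1 before reconciling
  sourceRef:
    kind: GitRepository
    name: flux-system
    namespace: flux-system
  path: ./clusterconfig/tenant-cluster/stage2
  prune: true
  validation: client
  kubeConfig:
    secretRef:
      name: tenant-cluster-kubeconfig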
After stage 1 and stage 2 are fully deployed to the remote cluster, it should have a CNI installed, along with Flux. Flux should then be synchronising with the appropriate git directory and installing the remaining resources.
Using this method, clusters can become more ephemeral. Creating a new fully-functional cluster is as simple as a Pull Request, with all the goodness that brings.