Creating Your Cluster's Kube Config


Now that we have created and dynamically populated our rancher-cluster.yml file, we can use rke (the Rancher Kubernetes Engine CLI) to deploy the cluster with a single command.

You can opt to download and install rke on your local computer.

However, considering we're big on Docker, and that Rancher is all about managing lots of Docker containers, I find it really odd that installing rke involves putting 'stuff' directly onto a host machine.

Nevertheless, if we look hard enough, we can find a Dockerised version of rke.

A helpful chap by the name of Simon "raynigon" Schneider has created a Docker image that contains rke. It's a tiny image, coming in at around 10MB.

Here is the Dockerfile in full:

FROM alpine:latest

ENV RANCHER_REPOSITORY rancher/rke

WORKDIR /app

RUN echo "Installing Curl" && \
    apk --no-cache add curl > /dev/null && \
    echo "Using Repository: $RANCHER_REPOSITORY" && \
    RANCHER_VERSION=$(curl --silent "https://api.github.com/repos/$RANCHER_REPOSITORY/releases/latest" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/') && \
    echo "Using Rancher Version: $RANCHER_VERSION" && \
    curl --silent -L "https://github.com/$RANCHER_REPOSITORY/releases/download/$RANCHER_VERSION/rke_linux-amd64" --output rke_linux-amd64 && \
    chmod +x rke_linux-amd64 && \
    export PATH=$PATH:/app/ && \
    rke_linux-amd64 --version

CMD ["/app/rke_linux-amd64"]
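
As an aside, the grep / sed pipeline in that RUN step is doing the JSON parsing: it keeps the line containing "tag_name" from the GitHub releases API response, then captures whatever sits between the last pair of quotes on that line. A minimal sketch of the same extraction against a canned response (the JSON below is a made-up fragment, not a real API reply):

```shell
# A made-up fragment of what the GitHub releases API returns:
json='{
  "tag_name": "v0.1.8",
  "name": "v0.1.8"
}'

# Same trick as the Dockerfile: grep the tag_name line, then capture
# the contents of the last quoted string on it.
version=$(echo "$json" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
echo "$version"   # v0.1.8
```

This works because the real API response puts each field on its own line, so grep isolates the tag_name line before sed ever sees it.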

We don't need to build this ourselves. We can get access to this image via Docker Hub.

A potential improvement to this Dockerfile may be to use an ENTRYPOINT instead of the CMD. This would allow us to pass arguments such as --version to the base /app/rke_linux-amd64 command directly.
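
A hedged sketch of what that change might look like (these are hypothetical replacement lines for the end of the Dockerfile, not the image's actual contents):

```dockerfile
# Hypothetical tweak: ENTRYPOINT fixes the binary, so anything passed to
# `docker run` after the image name becomes that binary's arguments.
ENTRYPOINT ["/app/rke_linux-amd64"]

# Optional default argument, overridden by whatever the user passes:
CMD ["--version"]
```

With that in place, `docker run --rm raynigon/rke-docker up` would be enough, with no need to spell out the binary's path each time.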

As a result of using CMD, this image isn't the most intuitive to use, but we can see it works right enough:

docker run --rm raynigon/rke-docker /app/rke_linux-amd64 --version

rke version v0.1.8

I'll never remember this the next time I need to run it. Looks like a great candidate for the Makefile :)

rke:
    @docker run --rm \
        raynigon/rke-docker \
        /app/rke_linux-amd64 $(cmd)

And then we can:

make rke cmd=--version

rke version v0.1.8

Even though we have named our file rancher-cluster.yml, by default rke up expects a file called cluster.yml. We don't need to change the file name on our local disk (although we could).

Instead, we will mount the file inside the resulting Docker container with the right name.

It is possible to change the expected filename, if needed, by using the --config flag. You can find a bunch more flags with the --help flag. We cover that a little further in the video.

In order to allow the resulting Docker container to talk to our various Kubernetes cluster nodes, we will need to add in the SSH key volumes, as we have done previously in the run_playbook role.

And just like in the previous video, if we don't create / touch a kube_config_rancher-cluster.yml before running this command, we will get some unexpected volume bindings.
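
Those "unexpected volume bindings" come down to how Docker treats a missing bind-mount source: if the host-side path of a -v mount doesn't exist, Docker creates it as an empty directory, after which rke can't write its kubeconfig there as a file. Creating an empty file up front sidesteps this. A minimal sketch of the guard:

```shell
# Make sure the mount source exists as a regular *file* before
# `docker run -v` gets a chance to create a directory in its place.
touch kube_config_rancher-cluster.yml

# Confirm it's a regular file, which is what the bind mount needs:
test -f kube_config_rancher-cluster.yml && echo "regular file, safe to mount"
```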

I'm going to create another shell script to do the dirty work for us:

touch bin/rke.sh
chmod +x bin/rke.sh

And into that file:

#!/bin/sh

# Create the file if it doesn't exist
touch kube_config_rancher-cluster.yml

docker run --rm -it \
        -v ~/.ssh/id_rsa:/root/.ssh/id_rsa \
        -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
        -v $(pwd)/kube_config_rancher-cluster.yml:/app/kube_config_rancher-cluster.yml \
        -v $(pwd)/rancher-cluster.yml:/app/cluster.yml \
        raynigon/rke-docker \
        /app/rke_linux-amd64 $1

If at all uncertain, watch the video for further explanation on why this file looks the way it does.

Which means our Makefile ends up looking like this:

create_rancher_cluster_yaml: 
    bin/create_rancher_cluster_yaml.sh

rke:
    bin/rke.sh "up --config /app/rancher-cluster.yml"

And running make rke:

➜  rancher-2 make rke
bin/rke.sh "up --config /app/rancher-cluster.yml"
rm: kube_config_rancher-cluster.yml: No such file or directory
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [134.209.23.208]
INFO[0000] [dialer] Setup tunnel for host [134.209.23.209]
INFO[0001] [dialer] Setup tunnel for host [134.209.23.201]
INFO[0001] [dialer] Setup tunnel for host [134.209.23.202]
INFO[0002] [dialer] Setup tunnel for host [134.209.23.203]
INFO[0002] [dialer] Setup tunnel for host [134.209.20.61]
INFO[0003] [state] Found local kube config file, trying to get state from cluster
INFO[0003] [reconcile] Local config is not vaild, rebuilding admin config
INFO[0003] [reconcile] Rebuilding and updating local kube config
WARN[0003] Failed to initiate new Kubernetes Client: invalid configuration: no configuration has been provided
INFO[0003] [network] Deploying port listener containers
INFO[0003] [network] Successfully started [rke-etcd-port-listener] container on host [134.209.23.201]
INFO[0003] [network] Successfully started [rke-etcd-port-listener] container on host [134.209.23.202]
INFO[0004] [network] Successfully started [rke-cp-port-listener] container on host [134.209.20.61]
INFO[0004] [network] Successfully started [rke-cp-port-listener] container on host [134.209.23.203]
INFO[0005] [network] Successfully started [rke-worker-port-listener] container on host [134.209.23.208]
INFO[0005] [network] Successfully started [rke-worker-port-listener] container on host [134.209.23.209]
INFO[0005] [network] Port listener containers deployed successfully
INFO[0005] [network] Running etcd <-> etcd port checks
INFO[0006] [network] Successfully started [rke-port-checker] container on host [134.209.23.202]
INFO[0006] [network] Successfully started [rke-port-checker] container on host [134.209.23.201]
INFO[0006] [network] Running control plane -> etcd port checks
INFO[0006] [network] Successfully started [rke-port-checker] container on host [134.209.20.61]
INFO[0006] [network] Successfully started [rke-port-checker] container on host [134.209.23.203]
INFO[0006] [network] Running control plane -> worker port checks
INFO[0007] [network] Successfully started [rke-port-checker] container on host [134.209.20.61]
INFO[0007] [network] Successfully started [rke-port-checker] container on host [134.209.23.203]
INFO[0007] [network] Running workers -> control plane port checks
INFO[0008] [network] Successfully started [rke-port-checker] container on host [134.209.23.208]
INFO[0009] [network] Successfully started [rke-port-checker] container on host [134.209.23.209]
INFO[0009] [network] Checking KubeAPI port Control Plane hosts
INFO[0009] [network] Removing port listener containers
INFO[0009] [remove/rke-etcd-port-listener] Successfully removed container on host [134.209.23.202]
INFO[0009] [remove/rke-etcd-port-listener] Successfully removed container on host [134.209.23.201]
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [134.209.20.61]
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [134.209.23.203]
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [134.209.23.208]
INFO[0012] [remove/rke-worker-port-listener] Successfully removed container on host [134.209.23.209]
INFO[0012] [network] Port listener containers removed successfully
INFO[0012] [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts
INFO[0012] [certificates] Successfully started [cert-fetcher] container on host [134.209.23.201]
INFO[0023] [certificates] Certificate backup found on [etcd,controlPlane] hosts
INFO[0023] [reconcile] Rebuilding and updating local kube config
INFO[0023] Successfully Deployed local admin kubeconfig at [/app/kube_config_rancher-cluster.yml]
INFO[0023] Successfully Deployed local admin kubeconfig at [/app/kube_config_rancher-cluster.yml]
INFO[0023] [reconcile] Reconciling cluster state
INFO[0023] [reconcile] This is newly generated cluster
INFO[0023] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0034] Successfully Deployed local admin kubeconfig at [/app/kube_config_rancher-cluster.yml]
INFO[0034] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0034] Pre-pulling kubernetes images
INFO[0034] [pre-deploy] Pulling image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.203]
INFO[0034] [pre-deploy] Pulling image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.209]
INFO[0034] [pre-deploy] Pulling image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.20.61]
INFO[0034] [pre-deploy] Pulling image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.202]
INFO[0034] [pre-deploy] Pulling image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.201]
INFO[0034] [pre-deploy] Pulling image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.208]
INFO[0058] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.203]
INFO[0060] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.20.61]
INFO[0074] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.201]
INFO[0083] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.202]
INFO[0084] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.209]
INFO[0103] [pre-deploy] Successfully pulled image [rancher/hyperkube:v1.10.5-rancher1] on host [134.209.23.208]
INFO[0103] Kubernetes images pulled successfully
INFO[0103] [etcd] Building up etcd plane..
INFO[0103] [etcd] Pulling image [rancher/coreos-etcd:v3.1.12] on host [134.209.23.201]
INFO[0106] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.1.12] on host [134.209.23.201]
INFO[0106] [etcd] Successfully started [etcd] container on host [134.209.23.201]
INFO[0106] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [134.209.23.201]
INFO[0107] [etcd] Successfully started [etcd-rolling-snapshots] container on host [134.209.23.201]
INFO[0107] [certificates] Successfully started [rke-bundle-cert] container on host [134.209.23.201]
INFO[0108] [etcd] Successfully started [rke-log-linker] container on host [134.209.23.201]
INFO[0108] [remove/rke-log-linker] Successfully removed container on host [134.209.23.201]
INFO[0108] [etcd] Pulling image [rancher/coreos-etcd:v3.1.12] on host [134.209.23.202]
INFO[0111] [etcd] Successfully pulled image [rancher/coreos-etcd:v3.1.12] on host [134.209.23.202]
INFO[0111] [etcd] Successfully started [etcd] container on host [134.209.23.202]
INFO[0111] [etcd] Saving snapshot [etcd-rolling-snapshots] on host [134.209.23.202]
INFO[0111] [etcd] Successfully started [etcd-rolling-snapshots] container on host [134.209.23.202]
INFO[0112] [certificates] Successfully started [rke-bundle-cert] container on host [134.209.23.202]
INFO[0113] [etcd] Successfully started [rke-log-linker] container on host [134.209.23.202]
INFO[0113] [remove/rke-log-linker] Successfully removed container on host [134.209.23.202]
INFO[0113] [etcd] Successfully started etcd plane..
INFO[0113] [controlplane] Building up Controller Plane..
INFO[0114] [controlplane] Successfully started [kube-apiserver] container on host [134.209.23.203]
INFO[0114] [controlplane] Successfully started [kube-apiserver] container on host [134.209.20.61]
INFO[0114] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [134.209.20.61]
INFO[0114] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [134.209.23.203]
INFO[0127] [healthcheck] service [kube-apiserver] on host [134.209.20.61] is healthy
INFO[0127] [healthcheck] service [kube-apiserver] on host [134.209.23.203] is healthy
INFO[0127] [controlplane] Successfully started [rke-log-linker] container on host [134.209.23.203]
INFO[0127] [controlplane] Successfully started [rke-log-linker] container on host [134.209.20.61]
INFO[0127] [remove/rke-log-linker] Successfully removed container on host [134.209.23.203]
INFO[0127] [remove/rke-log-linker] Successfully removed container on host [134.209.20.61]
INFO[0128] [controlplane] Successfully started [kube-controller-manager] container on host [134.209.23.203]
INFO[0128] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [134.209.23.203]
INFO[0128] [controlplane] Successfully started [kube-controller-manager] container on host [134.209.20.61]
INFO[0128] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [134.209.20.61]
INFO[0128] [healthcheck] service [kube-controller-manager] on host [134.209.23.203] is healthy
INFO[0128] [healthcheck] service [kube-controller-manager] on host [134.209.20.61] is healthy
INFO[0129] [controlplane] Successfully started [rke-log-linker] container on host [134.209.23.203]
INFO[0129] [controlplane] Successfully started [rke-log-linker] container on host [134.209.20.61]
INFO[0129] [remove/rke-log-linker] Successfully removed container on host [134.209.23.203]
INFO[0129] [remove/rke-log-linker] Successfully removed container on host [134.209.20.61]
INFO[0129] [controlplane] Successfully started [kube-scheduler] container on host [134.209.23.203]
INFO[0129] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [134.209.23.203]
INFO[0129] [controlplane] Successfully started [kube-scheduler] container on host [134.209.20.61]
INFO[0129] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [134.209.20.61]
INFO[0130] [healthcheck] service [kube-scheduler] on host [134.209.23.203] is healthy
INFO[0130] [healthcheck] service [kube-scheduler] on host [134.209.20.61] is healthy
INFO[0130] [controlplane] Successfully started [rke-log-linker] container on host [134.209.23.203]
INFO[0130] [controlplane] Successfully started [rke-log-linker] container on host [134.209.20.61]
INFO[0130] [remove/rke-log-linker] Successfully removed container on host [134.209.23.203]
INFO[0130] [remove/rke-log-linker] Successfully removed container on host [134.209.20.61]
INFO[0130] [controlplane] Successfully started Controller Plane..
INFO[0130] [authz] Creating rke-job-deployer ServiceAccount
INFO[0131] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0131] [authz] Creating system:node ClusterRoleBinding
INFO[0131] [authz] system:node ClusterRoleBinding created successfully
INFO[0131] [certificates] Save kubernetes certificates as secrets
INFO[0131] [certificates] Successfully saved certificates as kubernetes secret [k8s-certs]
INFO[0131] [state] Saving cluster state to Kubernetes
INFO[0131] [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
INFO[0131] [worker] Building up Worker Plane..
INFO[0131] [sidekick] Sidekick container already created on host [134.209.23.203]
INFO[0131] [sidekick] Sidekick container already created on host [134.209.20.61]
INFO[0131] [worker] Successfully started [nginx-proxy] container on host [134.209.23.202]
INFO[0131] [worker] Successfully started [kubelet] container on host [134.209.23.203]
INFO[0131] [healthcheck] Start Healthcheck on service [kubelet] on host [134.209.23.203]
INFO[0131] [worker] Successfully started [kubelet] container on host [134.209.20.61]
INFO[0131] [healthcheck] Start Healthcheck on service [kubelet] on host [134.209.20.61]
INFO[0131] [worker] Successfully started [nginx-proxy] container on host [134.209.23.201]
INFO[0132] [worker] Successfully started [rke-log-linker] container on host [134.209.23.202]
INFO[0132] [worker] Successfully started [nginx-proxy] container on host [134.209.23.208]
INFO[0132] [worker] Successfully started [rke-log-linker] container on host [134.209.23.201]
INFO[0132] [remove/rke-log-linker] Successfully removed container on host [134.209.23.202]
INFO[0132] [remove/rke-log-linker] Successfully removed container on host [134.209.23.201]
INFO[0132] [worker] Successfully started [kubelet] container on host [134.209.23.202]
INFO[0132] [healthcheck] Start Healthcheck on service [kubelet] on host [134.209.23.202]
INFO[0133] [worker] Successfully started [rke-log-linker] container on host [134.209.23.208]
INFO[0134] [remove/rke-log-linker] Successfully removed container on host [134.209.23.208]
INFO[0135] [worker] Successfully started [kubelet] container on host [134.209.23.208]
INFO[0135] [healthcheck] Start Healthcheck on service [kubelet] on host [134.209.23.208]
INFO[0137] [healthcheck] service [kubelet] on host [134.209.23.203] is healthy
INFO[0137] [healthcheck] service [kubelet] on host [134.209.20.61] is healthy
INFO[0138] [worker] Successfully started [rke-log-linker] container on host [134.209.23.203]
INFO[0138] [worker] Successfully started [rke-log-linker] container on host [134.209.20.61]
INFO[0138] [remove/rke-log-linker] Successfully removed container on host [134.209.23.203]
INFO[0138] [remove/rke-log-linker] Successfully removed container on host [134.209.20.61]
INFO[0138] [worker] Successfully started [kube-proxy] container on host [134.209.23.203]
INFO[0138] [healthcheck] Start Healthcheck on service [kube-proxy] on host [134.209.23.203]
INFO[0138] [worker] Successfully started [kube-proxy] container on host [134.209.20.61]
INFO[0138] [healthcheck] Start Healthcheck on service [kube-proxy] on host [134.209.20.61]
INFO[0139] [healthcheck] service [kubelet] on host [134.209.23.202] is healthy
INFO[0139] [worker] Successfully started [nginx-proxy] container on host [134.209.23.209]
INFO[0139] [worker] Successfully started [rke-log-linker] container on host [134.209.23.202]
INFO[0139] [remove/rke-log-linker] Successfully removed container on host [134.209.23.202]
INFO[0140] [worker] Successfully started [rke-log-linker] container on host [134.209.23.209]
INFO[0140] [worker] Successfully started [kube-proxy] container on host [134.209.23.202]
INFO[0140] [healthcheck] Start Healthcheck on service [kube-proxy] on host [134.209.23.202]
INFO[0140] [remove/rke-log-linker] Successfully removed container on host [134.209.23.209]
INFO[0140] [worker] Successfully started [kubelet] container on host [134.209.23.201]
INFO[0140] [healthcheck] Start Healthcheck on service [kubelet] on host [134.209.23.201]
INFO[0141] [worker] Successfully started [kubelet] container on host [134.209.23.209]
INFO[0141] [healthcheck] Start Healthcheck on service [kubelet] on host [134.209.23.209]
INFO[0142] [healthcheck] service [kubelet] on host [134.209.23.208] is healthy
INFO[0143] [worker] Successfully started [rke-log-linker] container on host [134.209.23.208]
INFO[0144] [remove/rke-log-linker] Successfully removed container on host [134.209.23.208]
INFO[0144] [worker] Successfully started [kube-proxy] container on host [134.209.23.208]
INFO[0144] [healthcheck] Start Healthcheck on service [kube-proxy] on host [134.209.23.208]
INFO[0146] [healthcheck] service [kube-proxy] on host [134.209.23.202] is healthy
INFO[0146] [worker] Successfully started [rke-log-linker] container on host [134.209.23.202]
INFO[0146] [healthcheck] service [kubelet] on host [134.209.23.201] is healthy
INFO[0147] [remove/rke-log-linker] Successfully removed container on host [134.209.23.202]
INFO[0147] [healthcheck] service [kubelet] on host [134.209.23.209] is healthy
INFO[0147] [worker] Successfully started [rke-log-linker] container on host [134.209.23.201]
INFO[0147] [remove/rke-log-linker] Successfully removed container on host [134.209.23.201]
INFO[0147] [worker] Successfully started [kube-proxy] container on host [134.209.23.201]
INFO[0147] [healthcheck] Start Healthcheck on service [kube-proxy] on host [134.209.23.201]
INFO[0149] [healthcheck] service [kube-proxy] on host [134.209.23.203] is healthy
INFO[0150] [healthcheck] service [kube-proxy] on host [134.209.20.61] is healthy
INFO[0150] [worker] Successfully started [rke-log-linker] container on host [134.209.23.203]
INFO[0150] [worker] Successfully started [rke-log-linker] container on host [134.209.20.61]
INFO[0151] [remove/rke-log-linker] Successfully removed container on host [134.209.23.203]
INFO[0151] [worker] Successfully started [rke-log-linker] container on host [134.209.23.209]
INFO[0151] [remove/rke-log-linker] Successfully removed container on host [134.209.20.61]
INFO[0151] [remove/rke-log-linker] Successfully removed container on host [134.209.23.209]
INFO[0151] [healthcheck] service [kube-proxy] on host [134.209.23.208] is healthy
INFO[0152] [worker] Successfully started [kube-proxy] container on host [134.209.23.209]
INFO[0152] [healthcheck] Start Healthcheck on service [kube-proxy] on host [134.209.23.209]
INFO[0152] [worker] Successfully started [rke-log-linker] container on host [134.209.23.208]
INFO[0152] [healthcheck] service [kube-proxy] on host [134.209.23.209] is healthy
INFO[0153] [remove/rke-log-linker] Successfully removed container on host [134.209.23.208]
INFO[0153] [worker] Successfully started [rke-log-linker] container on host [134.209.23.209]
INFO[0153] [remove/rke-log-linker] Successfully removed container on host [134.209.23.209]
INFO[0153] [healthcheck] service [kube-proxy] on host [134.209.23.201] is healthy
INFO[0154] [worker] Successfully started [rke-log-linker] container on host [134.209.23.201]
INFO[0154] [remove/rke-log-linker] Successfully removed container on host [134.209.23.201]
INFO[0154] [worker] Successfully started Worker Plane..
INFO[0154] [sync] Syncing nodes Labels and Taints
INFO[0163] [sync] Successfully synced nodes Labels and Taints
INFO[0163] [network] Setting up network plugin: canal
INFO[0163] [addons] Saving addon ConfigMap to Kubernetes
INFO[0163] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
INFO[0163] [addons] Executing deploy job..
INFO[0182] [addons] Setting up KubeDNS
INFO[0182] [addons] Saving addon ConfigMap to Kubernetes
INFO[0182] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
INFO[0182] [addons] Executing deploy job..
INFO[0187] [addons] KubeDNS deployed successfully..
INFO[0187] [ingress] Setting up nginx ingress controller
INFO[0187] [addons] Saving addon ConfigMap to Kubernetes
INFO[0187] [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller
INFO[0187] [addons] Executing deploy job..
INFO[0193] [ingress] ingress controller nginx is successfully deployed
INFO[0193] [addons] Setting up user addons
INFO[0193] [addons] no user addons defined
INFO[0193] Finished building Kubernetes cluster successfully

Once the process finishes successfully, we should be left with a populated kube_config_rancher-cluster.yml file.

apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: ...
    server: "https://134.209.20.61:6443"
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-admin"
  name: "Default"
current-context: "Default"
users:
- name: "kube-admin"
  user:
    client-certificate-data: ...
    client-key-data: ...

Which is good, because we'll need that file in the very next video.
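
As a quick sanity check before then, the API server endpoint can be pulled straight out of the generated file with the same kind of grep / sed one-liner we saw in the Dockerfile (the IP should match your first controlplane node; this is just a convenience, not something the series requires):

```shell
# Print the cluster's API server endpoint from the generated kubeconfig:
grep 'server:' kube_config_rancher-cluster.yml | sed -E 's/.*"([^"]+)".*/\1/'
```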
