Quick and easy: A multi-node Kubernetes cluster on CentOS 7 + QEMU/KVM (libvirt)

Kubernetes is a popular container orchestration platform. As a developer, it is important to understand the environment in which your application is going to run, since this helps you use the services the platform provides and fix issues.

There are several options to run Kubernetes locally to get some experience with it as a developer, for example Minikube, MicroK8s and MiniShift. These options are however not representative of a real environment: they usually do not have separate master and worker nodes, for example. Running everything locally also requires quite a different configuration compared to running multiple nodes on different machines; think for example about how to deal with storage and a container registry that you want to share between nodes. Installing a full-blown environment requires a lot of work and resources. Using a cloud service is usually not free, and you usually have little to no control over the environment Kubernetes is running in.

In this blog I’ll describe a ‘middle way’: a small, easy to manage multi-node Kubernetes environment running in separate VMs. You can use this environment for example to learn what the challenges of clusters are and how to deal with them efficiently.

It uses the work done here (the k8s-vagrant-multi-node project) with some minor additions to get a dashboard ready.

Getting the host ready

As host OS I used CentOS 7 (on bare metal). CentOS 8 introduces some major changes, such as Podman instead of Docker, so I did not want to take any risks and decided to stick with this commonly used open source OS compiled from Red Hat sources. I recommend sticking to a single partition for a local development environment to make things easier for yourself. I also used a minimal desktop environment with administrative tools.

Also create a user that can use sudo; you will use this user to execute various commands in the following steps.
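
For example (the username developer is just an example; on CentOS 7, members of the wheel group are allowed to use sudo):

sudo useradd -m developer
sudo passwd developer
sudo usermod -aG wheel developer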

Install QEMU/KVM + libvirt

We are going to use QEMU/KVM and access it through libvirt. Why? Because I want to approach bare metal performance as much as I can, and QEMU/KVM does a good job at that. See for example this performance comparison of bare metal vs KVM vs VirtualBox: KVM greatly outperforms VirtualBox and approaches bare metal speeds in quite a few tests. I do like the VirtualBox GUI, but I can live with the Virtual Machine Manager.

The following will do the trick on CentOS 7:

sudo yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils gcc git make
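
Depending on your installation you might still need to enable and start the libvirt daemon and add your user to the libvirt group (standard systemctl and usermod commands; log out and in again for the group change to take effect):

sudo systemctl enable libvirtd
sudo systemctl start libvirtd
sudo usermod -aG libvirt $(whoami)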

Install Vagrant and required plugins

Vagrant is used to create the virtual machines for the master and the nodes. Vagrant can easily be installed from here; it even has a CentOS-specific RPM, which is nice.
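
For example, yum can install the RPM directly from the downloads page (replace <version> with the version you want; the URL pattern is an assumption based on the HashiCorp releases site):

sudo yum install -y https://releases.hashicorp.com/vagrant/<version>/vagrant_<version>_x86_64.rpm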

With Vagrant I’m going to use two plugins: vagrant-libvirt and vagrant-sshfs. The first plugin allows Vagrant to manage QEMU/KVM VMs through libvirt. The second plugin is used for shared folders. Why sshfs? Mainly because libvirt shared folder alternatives such as NFS and 9p were more difficult to set up, and I wanted to be able to provide the same shared storage to all VMs.

vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-sshfs
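
You can verify that both plugins were registered with:

vagrant plugin list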

Install Kubernetes

Install kubectl

First install kubectl on the host. This is described in detail here.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubectl
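
A quick sanity check that the client is installed (the exact output differs per version):

kubectl version --client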

Create the VMs

Execute the following as a normal user (the same user under which you installed the Vagrant plugins).

git clone https://github.com/MaartenSmeets/k8s-vagrant-multi-node.git
cd k8s-vagrant-multi-node
mkdir data/shared

make up -j 3 BOX_OS=centos VAGRANT_DEFAULT_PROVIDER=libvirt NODE_CPUS=1 NODE_COUNT=2 MASTER_CPUS=2

This process will take a while. You can follow the progress by looking in the Virtual Machine Manager and in the console.
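
If you prefer the command line over the Virtual Machine Manager GUI, you can also list the libvirt domains directly (assuming the default system connection):

sudo virsh list --all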

The command will also ask a couple of times for the user password. Be ready to enter it: if you wait too long, the prompt will time out and the build will fail. If it fails, clean up using:

make clean VAGRANT_DEFAULT_PROVIDER=libvirt

If at the end you only see a single node, you can do the following to create a second node:

make start-node-2 NODE_CPUS=1 VAGRANT_DEFAULT_PROVIDER=libvirt BOX_OS=centos

The Makefile has a lot of other easy-to-use commands and parameters. Read the documentation here.
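
If the Makefile follows the common self-documenting pattern (an assumption; check the linked documentation to be sure), you can list the available targets with:

make help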

You can check if the nodes are up and kubectl is configured correctly by:

kubectl get nodes

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   15h   v1.18.2
node1    Ready    <none>   15h   v1.18.2
node2    Ready    <none>   15h   v1.18.2
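
For a bit more detail per node (internal IP, OS image, container runtime), you can also run:

kubectl get nodes -o wide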

Configure Kubernetes

We now have a master and two nodes running.

The environment does not have an out-of-the-box dashboard like OKD (open source OpenShift). Even though the make scripts allow you to add the dashboard during the creation of the nodes, I prefer to do this afterwards so I know what I’m doing.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
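
Before continuing you can check that the dashboard pods came up in the kubernetes-dashboard namespace:

kubectl get pods -n kubernetes-dashboard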

Now you have a dashboard but no user who can browse resources. In order to give an admin user the required privileges, I did the following (based on this).

cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

kubectl apply -f dashboard-adminuser.yaml

cat > admin-role-binding.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl apply -f admin-role-binding.yaml

Now you can obtain a token by:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Start a local proxy:

kubectl proxy

Now you can use the token to log in to:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

and open the dashboard.

Additional notes

This environment is not done yet.

  • Every node uses its own local registry. The Makefile does provide ways to load the same image into the different registries, but what I actually want is for all the nodes to use the same registry.
  • There is no shared PersistentVolume (shared storage) available within the Kubernetes environment yet. Preparations for that have been made though, since /shared in every VM is mounted to the same host folder; a rough sketch of such a PersistentVolume follows below this list.
  • We have not installed anything in the environment yet other than a small dashboard. I want to have Jenkins running inside and be able to deploy applications.
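
To illustrate the shared storage idea, below is a rough sketch of what such a PersistentVolume could look like, assuming /shared is the sshfs-backed mount inside every VM. The file name shared-pv.yaml and the 5Gi size are illustrative; this is not yet part of the setup above. Note that Kubernetes does not enforce access modes for hostPath volumes; ReadWriteMany only makes sense here because /shared is the same shared folder on every node.

cat > shared-pv.yaml << EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /shared
EOF

kubectl apply -f shared-pv.yaml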

Still some work to be done so stay tuned!