There are various options to install a production-like Kubernetes distribution on your laptop. Previously I tried the Canonical stack (Juju, MAAS, Charmed Kubernetes) for this. It worked nicely, but it felt rather Canonical-specific and at times heavy on resources. I decided to look at another way to install Kubernetes that would approximate a production environment while being more independent of a specific provider and more lightweight. Of course, I first needed to get my virtual infrastructure (KVM hosts) ready before I could deploy Kubernetes (using Kubespray).
My main inspirations for this were two blog posts here and here. As with Charmed Kubernetes, the installed distribution is bare. It does not contain things like a private registry, distributed storage (read here) or a load balancer (read here). You can find my scripts here; they are suitable for Ubuntu 20.04 and will probably work on other Linux distros with minor changes.
Provisioning infrastructure
This time I’m not depending on external tools such as Vagrant or MAAS to provide me with machines; I’m doing it ‘manually’ with some simple scripts. The idea is relatively simple: use virt-install to create KVM VMs and install them using a Kickstart file which creates an ansible user and registers a public key so you can log in as that user. Kubespray can then log in and use Ansible to install Kubernetes inside the VMs.
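To give an idea of what the create-vm script wraps, a minimal virt-install invocation along these lines creates a VM and starts an unattended Kickstart install (names, sizes and the network are illustrative; the script in the repository may use different values):

virt-install \
  --name node1 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=20 \
  --os-variant ubuntu20.04 \
  --location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/' \
  --initrd-inject ubuntu.ks \
  --extra-args 'ks=file:/ubuntu.ks console=ttyS0,115200n8' \
  --network network=default \
  --graphics none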
As indicated before, I mainly used the scripts provided and described here, but created my own versions to fix some challenges I encountered. The scripts are thin wrappers around KVM-related commands, so there is not much to worry about in terms of maintenance. You can execute the virt-install command multiple times to create more hosts.
What did I change in the scripts I used as base?
- I used Ubuntu 20.04 as the base OS instead of the 18.04 the scripts and blog post were based on. Specifying the ISO file in the virt-install command in the create-vm script did not work for me. Installing from a remote URL did the trick: ‘http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/’.
- The Kickstart file contained a public key for which I did not have the private key and an encrypted password of which I did not have the unencrypted version, so I inserted my own public key (of course generated specifically for this purpose) and encrypted my own password.
- It appeared virt-manager (the KVM/QEMU GUI) and the virt-install command use LIBVIRT_DEFAULT_URI="qemu:///system" while virsh commands use "qemu:///session". This caused some of the scripts to fail and VMs not to be visible. I added setting the parameter to qemu:///system in the scripts to avoid this (see the snippet after this list).
- I’ve added some additional thin wrapper scripts (like start-vm.sh, call_create_vm.sh, call_delete_vm.sh) to start the machines, create multiple machines with a single command and remove them again, just to make life a little easier.
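The URI fix from the list above boils down to a single line near the top of the affected scripts:

export LIBVIRT_DEFAULT_URI=qemu:///system
# virsh now talks to the same libvirt instance as virt-manager and virt-install
virsh list --all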
Creating KVM machines
First install the required packages on the host. The below commands work on Ubuntu 18.04 and 20.04. Other OSs require different commands/packages to install KVM/QEMU and related tooling.
Install some packages
sudo apt-get update
sudo apt-get -y install bridge-utils qemu-kvm qemu virt-manager net-tools openssh-server mlocate libvirt-clients libvirt-daemon libvirt-daemon-driver-storage-zfs python3-libvirt virtinst
Make sure the user you want to use to create the VMs is in the libvirt group, allowing the user to create and manage VMs.
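If it is not, add it and start a new session (or use newgrp) so the group membership takes effect:

sudo usermod -aG libvirt $USER
newgrp libvirt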
Clone the scripts
git clone https://github.com/MaartenSmeets/k8s-prov.git
cd k8s-prov
Create a public and private key pair
ssh-keygen -t rsa -C ansible@host -f id_rsa
Create an encrypted password for the user ansible
python encrypt-pw.py
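The script prompts for a password and prints a crypt hash. If you would rather not use the helper, a standard-library one-liner can produce a comparable SHA-512 crypt hash (shown as an alternative; not necessarily what encrypt-pw.py does internally):

python3 -c 'import crypt,getpass; print(crypt.crypt(getpass.getpass(), crypt.mksalt(crypt.METHOD_SHA512)))'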
Update ubuntu.ks
Update the ubuntu.ks file with the encrypted password and the generated public key. The Kickstart file already contains a public key for which the private key is provided and an encrypted password of which the plaintext is Welcome01. As indicated, these are for example purposes only. Do not use them for production!
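For reference, the parts of ubuntu.ks you are changing look roughly like the sketch below (an assumption about the file layout; the actual Kickstart file in the repository may be structured differently):

# crypted password hash as generated by encrypt-pw.py
user ansible --fullname "ansible" --iscrypted --password REPLACE_WITH_YOUR_HASH
%post
# authorize the generated public key for the ansible user
mkdir -m 700 -p /home/ansible/.ssh
echo "ssh-rsa REPLACE_WITH_YOUR_PUBLIC_KEY ansible@host" > /home/ansible/.ssh/authorized_keys
chown -R ansible:ansible /home/ansible/.ssh
%end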
Start creating VMs!
Evaluate call_create_vm.sh for the number of VMs to create and the resources per VM. By default it creates 4 VMs, each with 2 cores and 4 GB of memory. If you change the number of VMs, also update the call_delete_vm.sh and start-vm.sh scripts to reflect the change.
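Conceptually such a wrapper is little more than a loop over the create-vm script; a sketch of what it can look like (the actual script in the repository may pass different arguments):

# create node1..node4 one after the other
for n in $(seq 1 4); do
  ./create-vm.sh node$n
done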
Next execute it.
./call_create_vm.sh
You can monitor progress by opening virt-manager and the consoles of the specific machines. After the script completes, the machines will be shut down.
You can start them with
./start-vm.sh
Installing Kubernetes using Kubespray
Now your infrastructure is ready, but how do you get Kubernetes on it? Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. It can be run from various Linux distributions and can install Kubernetes on various other distributions. Terraform scripts for various cloud environments are also provided, should you want to use those instead of providing your own KVM infra. Kubespray has quite a lot of GitHub stars and contributors and has been around for quite a while. It is part of the CNCF (here). I’ve also seen large customers using it to deploy and maintain their Kubernetes environments.
In order to use Kubespray, you need a couple of things: some Python packages, a way to access your infrastructure, and Ansible. By no coincidence, you already took care of those in the previous step.
Clone the Kubespray repository inside the k8s-prov directory you created earlier (so the commands can access the keys and scripts)
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
Install requirements
pip install -r requirements.txt
Create an inventory
rm -Rf inventory/mycluster/
cp -rfp inventory/sample inventory/mycluster
Use a script to obtain the KVM host IP addresses. These will be used to generate a hosts.yml file indicating what should be installed where.
declare -a IPS=($(for n in $(seq 1 4); do ../get-vm-ip.sh node$n; done))
echo ${IPS[@]}
CONFIG_FILE=inventory/mycluster/hosts.yml \
python3 contrib/inventory_builder/inventory.py ${IPS[@]}
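The generated hosts.yml will look something like the excerpt below (group and node names as the inventory builder produced them at the time of writing; the IP address is an example):

all:
  hosts:
    node1:
      ansible_host: 192.168.122.101
      ip: 192.168.122.101
      access_ip: 192.168.122.101
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
        node4:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node: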
Make life easy by letting Kubespray copy an admin.conf to the host, which can be used as ~/.kube/config. The two lines below append a vars section to the all group in hosts.yml (indentation matters in YAML):
echo '  vars:' >> inventory/mycluster/hosts.yml
echo '    kubeconfig_localhost: true' >> inventory/mycluster/hosts.yml
Execute Ansible to provision the machines using the previously generated key.
export ANSIBLE_REMOTE_USER=ansible
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml --private-key=../id_rsa
Create your config file so kubectl can do its thing
mkdir -p ~/.kube/
cp -rip inventory/mycluster/artifacts/admin.conf ~/.kube/config
Install kubectl (for Kubernetes)
sudo snap install kubectl --classic
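With the config in place, a quick sanity check shows whether all nodes joined the cluster:

kubectl get nodes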
Next, the dashboard. First run kubectl proxy so the dashboard becomes accessible at localhost:8001, then open:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login
Allow the kube-system:clusterrole-aggregation-controller to access the dashboard
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=kube-system:clusterrole-aggregation-controller
Get a token to access the dashboard
kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Login and enjoy!
If the dashboard is not deployed in your cluster, you can install it first with
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.1/aio/deploy/alternative.yaml
In that case the URL to be opened is
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/#/overview?namespace=default
(mind the http in the middle, instead of https, since the alternative deployment serves plain HTTP)
Doesn’t spawning VMs defeat the purpose of containers?
Kubernetes is more than just running containers. You can test an application in a container, but that says little about how it will work together with other containers in a Kubernetes environment or how it will react to Kubernetes container management. This is a setup to get Kubernetes running similar to a production environment: distributed over several hosts. The only way to achieve that on a single machine is by using virtual machines. Also, I’ve seen several customers running their Kubernetes production environments on virtualized hosts.