
Rapidly spinning up a VM with Ubuntu and k3s (with the Kubernetes Dashboard) on my Windows laptop using Vagrant and Oracle VirtualBox

In November of last year, my colleague Lucas Jellema wrote an article with the title “Ultra fast, ultra small Kubernetes on Linux – K3S beating minikube”.
[https://technology.amis.nl/2019/11/12/ultra-fast-ultra-small-kubernetes-on-linux-k3s-beating-minikube/]

For training and demo purposes, I already had an environment on my Windows laptop with a guest operating system, Docker and Minikube available within an Oracle VirtualBox appliance. This demo environment uses a Vagrantfile, scripts and Kubernetes manifest (yaml) files. But now I also wanted to try out k3s.
[https://technology.amis.nl/2019/02/12/rapidly-spinning-up-a-vm-with-ubuntu-docker-and-minikube-using-the-vm-drivernone-option-on-my-windows-laptop-using-vagrant-and-oracle-virtualbox/]

In this article, I will share with you the steps I took, to get k3s installed (with the Kubernetes Dashboard) on top of an Ubuntu guest Operating System within an Oracle VirtualBox appliance, with the help of Vagrant.

k3s

Lightweight Kubernetes. Easy to install, half the memory, all in a binary less than 40mb.

K3s is a fully compliant Kubernetes distribution with the following enhancements:

  • An embedded SQLite database has replaced etcd as the default datastore. External datastores such as PostgreSQL, MySQL, and etcd are also supported.
  • Simple but powerful “batteries-included” features have been added, such as: a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller.
  • Operation of all Kubernetes control plane components is encapsulated in a single binary and process. This allows K3s to automate and manage complex cluster operations like distributing certificates.
  • In-tree cloud providers and storage plugins have been removed.
  • External dependencies have been minimized (just a modern kernel and cgroup mounts needed). K3s packages required dependencies, including:
    • containerd
    • Flannel
    • CoreDNS
    • Host utilities (iptables, socat, etc)

[https://rancher.com/docs/k3s/latest/en/]
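
As a small illustration of the “single binary” point (assuming k3s has already been installed as described later in this article), you can see that the bundled tools are just subcommands of one executable:

ls -lh /usr/local/bin/k3s   # a single self-contained binary
k3s --help                  # embedded subcommands: server, agent, kubectl, crictl, ctr, ...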

Installing k3s

According to the website, installing k3s won’t take long.

  curl -sfL https://get.k3s.io | sh -
  # Check for Ready node, takes maybe 30 seconds
  k3s kubectl get node

[Screenshot: quick-start instructions on k3s.io]
[https://k3s.io/]

I had a look at the documentation and used the following command (with the environment variable INSTALL_K3S_VERSION) in order to specify a particular version of k3s to download from GitHub:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.1 sh -

[https://rancher.com/docs/k3s/latest/en/installation/install-options/]
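
To double-check afterwards which version actually got installed, the k3s binary itself can be asked (a simple check, not part of the original instructions):

k3s --version   # prints the installed k3s release, e.g. v1.0.1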

Before setting up my demo environment I had a look at the k3s requirements.

  • Operating Systems
    k3s should run on just about any flavor of Linux. However, k3s is tested on the following operating systems and their subsequent non-major releases.

    • Ubuntu 16.04 (amd64)
    • Ubuntu 18.04 (amd64)
    • Raspbian Buster (armhf)
  • Hardware
    Hardware requirements scale based on the size of your deployments. Minimum recommendations are outlined here.

    • RAM: 512MB Minimum
    • CPU: 1 Minimum

[https://rancher.com/docs/k3s/latest/en/installation/node-requirements/]
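
Inside the VM, you can quickly verify that the guest meets these minimums with a few standard Linux commands:

lsb_release -d   # Description: Ubuntu 18.04.x LTS
nproc            # number of CPUs available
free -m          # total and available memory in MB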

For the version of k3s, I looked at https://github.com/rancher/k3s/releases and chose the latest release: v1.0.1.

[Screenshot: k3s releases on GitHub]
[https://github.com/rancher/k3s/releases]

Vagrantfile

Based on the k3s operating system requirements I used the Vagrant Box search page to search for an Ubuntu 18.04 Vagrant Box (for VirtualBox).
[https://app.vagrantup.com/boxes/search]

[Screenshot: Vagrant Cloud box search results]

I chose: ubuntu/bionic64
[https://app.vagrantup.com/ubuntu/boxes/bionic64]
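
As an aside, if you were starting without an existing Vagrantfile, the box could be downloaded and a skeleton Vagrantfile generated with the standard Vagrant commands:

vagrant box add ubuntu/bionic64 --provider virtualbox
vagrant init ubuntu/bionic64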

In my existing demo environment, I changed the content of the Vagrantfile to:
[https://technology.amis.nl/2019/02/12/rapidly-spinning-up-a-vm-with-ubuntu-docker-and-minikube-using-the-vm-drivernone-option-on-my-windows-laptop-using-vagrant-and-oracle-virtualbox/]

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  
  config.vm.define "ubuntu_k3s" do |ubuntu_k3s|
  
    config.vm.network "forwarded_port",
      guest: 8001,
      host:  8001,
      auto_correct: true
     
    config.vm.network "forwarded_port",
      guest: 9110,
      host:  9110,
      auto_correct: true
      
    config.vm.provider "virtualbox" do |vb|
        vb.name = "Ubuntu k3s"
        vb.memory = "8192"
        vb.cpus = "1"
        
      args = []
      config.vm.provision "shell",
          path: "scripts/k3s.sh",
          args: args
    end
    
  end

end

In the scripts directory I created a file k3s.sh with the following content:

#!/bin/bash
echo "**** Begin installing k3s"

#Install
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.1 sh -

echo "**** End installing k3s"

From the subdirectory named env on my Windows laptop, I opened a Windows Command Prompt (cmd) and typed: vagrant up

This command creates and configures guest machines according to your Vagrantfile.
[https://www.vagrantup.com/docs/cli/up.html]

With the following output (only showing the part about k3s):

ubuntu_k3s: **** Begin installing k3s

ubuntu_k3s: **** End installing k3s
ubuntu_k3s: **** Begin installing k3s

ubuntu_k3s: **** End installing k3s
ubuntu_k3s: **** Begin installing k3s

ubuntu_k3s: **** End installing k3s

I noticed that the provisioning shell script was running multiple times!

I had recently upgraded Vagrant to version 2.2.6, so the problem could be related to that upgrade.
After some searching on the Internet, I found a solution that worked for me:

Provisioning scripts always run twice?
The bug itself is due to your provision block not having a name. If you don’t want them running twice, you can fix it by giving it a name like this:

`config.vm.provision "my shell script", type: "shell", ….`
[https://groups.google.com/forum/#!topic/vagrant-up/Ue11v3BmBN4]
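
As an aside, while experimenting with provisioners like this you do not always have to destroy and recreate the VM; provisioning can also be re-run on an existing machine with the standard Vagrant commands (the last one assumes the provisioner name used in the Vagrantfile below):

vagrant provision                                        # re-run all provisioners
vagrant up --provision                                   # bring the VM up and force provisioning
vagrant provision --provision-with "k3s shell script"    # re-run only the named provisioner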

So, I changed the content of Vagrantfile to:
[in bold, I highlighted the changes]

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  
  config.vm.define "ubuntu_k3s" do |ubuntu_k3s|
  
    config.vm.network "forwarded_port",
      guest: 8001,
      host:  8001,
      auto_correct: true
      
    config.vm.provider "virtualbox" do |vb|
        vb.name = "Ubuntu k3s"
        vb.memory = "8192"
        vb.cpus = "1"
        
      args = []
      config.vm.provision "k3s shell script", type: "shell",
          path: "scripts/k3s.sh",
          args: args
    end
    
  end

end

In order to stop the running machine and destroy its resources, I used the following command on the Windows Command Prompt: vagrant destroy

With the following output:

    ubuntu_k3s: Are you sure you want to destroy the ‘ubuntu_k3s’ VM? [y/N] y
==> ubuntu_k3s: Forcing shutdown of VM…
==> ubuntu_k3s: Destroying VM and associated drives…

This command stops the running machine Vagrant is managing and destroys all resources that were created during the machine creation process. After running this command, your computer should be left at a clean state, as if you never created the guest machine in the first place.
[https://www.vagrantup.com/docs/cli/destroy.html]
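
To skip the interactive confirmation, vagrant destroy also accepts a force flag:

vagrant destroy -f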

From the subdirectory named env on my Windows laptop, again, I opened a Windows Command Prompt (cmd) and typed: vagrant up

With the following output with regard to the version of ubuntu/bionic64 (of course related to the moment I wrote this article):

==> ubuntu_k3s: Checking if box ‘ubuntu/bionic64’ version ‘20191218.0.0’ is up to date…
==> ubuntu_k3s: A newer version of the box ‘ubuntu/bionic64’ for provider ‘virtualbox’ is
==> ubuntu_k3s: available! You currently have version ‘20191218.0.0’. The latest is version
==> ubuntu_k3s: ‘20200107.0.0’. Run `vagrant box update` to update.

With the following output (only showing the part about k3s):

    ubuntu_k3s: **** Begin installing k3s
    ubuntu_k3s: [INFO]  Using v1.0.1 as release
    ubuntu_k3s: [INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.1/sha256sum-amd64.txt
    ubuntu_k3s: [INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.1/k3s
    ubuntu_k3s: [INFO]  Verifying binary download
    ubuntu_k3s: [INFO]  Installing k3s to /usr/local/bin/k3s
    ubuntu_k3s: [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
    ubuntu_k3s: [INFO]  Creating /usr/local/bin/crictl symlink to k3s
    ubuntu_k3s: [INFO]  Creating /usr/local/bin/ctr symlink to k3s
    ubuntu_k3s: [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
    ubuntu_k3s: [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    ubuntu_k3s: [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
    ubuntu_k3s: [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
    ubuntu_k3s: [INFO]  systemd: Enabling k3s unit
    ubuntu_k3s: Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    ubuntu_k3s: [INFO]  systemd: Starting k3s
    ubuntu_k3s: **** End installing k3s

Because of the message about a newer version of ubuntu/bionic64, I ran the suggested command in a Windows Command Prompt: vagrant box update

With the following output:

==> ubuntu_k3s: Checking for updates to ‘ubuntu/bionic64’
    ubuntu_k3s: Latest installed version: 20191218.0.0
    ubuntu_k3s: Version constraints:
    ubuntu_k3s: Provider: virtualbox
==> ubuntu_k3s: Updating ‘ubuntu/bionic64’ with provider ‘virtualbox’ from version
==> ubuntu_k3s: ‘20191218.0.0’ to ‘20200107.0.0’…
==> ubuntu_k3s: Loading metadata for box ‘https://vagrantcloud.com/ubuntu/bionic64’
==> ubuntu_k3s: Adding box ‘ubuntu/bionic64’ (v20200107.0.0) for provider: virtualbox
    ubuntu_k3s: Downloading: https://vagrantcloud.com/ubuntu/boxes/bionic64/versions/20200107.0.0/providers/virtualbox.box
    ubuntu_k3s: Download redirected to host: cloud-images.ubuntu.com
    ubuntu_k3s:
==> ubuntu_k3s: Successfully added box ‘ubuntu/bionic64’ (v20200107.0.0) for ‘virtualbox’!
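
After such an update, the previous box version remains on disk; if you want to clean it up, Vagrant provides:

vagrant box list               # show all downloaded boxes and versions
vagrant box prune --dry-run    # show which outdated box versions would be removed
vagrant box prune              # actually remove outdated box versions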

I used vagrant ssh to connect to the running VM and start doing stuff.

Next, I used the following command on the Linux Command Prompt:

kubectl get nodes

With the following output:

WARN[2020-01-12T13:36:33.705394309Z] Unable to read /etc/rancher/k3s/k3s.yaml,
please start server with --write-kubeconfig-mode to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied

Remark:
The command mentioned on the start page of k3s (k3s kubectl get node) leads to the same error message.

This is because the current user (as shown by the whoami command) is, in my case, vagrant, which has no permission to read that file.

Once k3s was installed, I used the following command (as can also be found in the documentation):
[https://github.com/rancher/k3s/blob/master/README.md]

sudo kubectl get nodes

With the following output:

NAME            STATUS   ROLES    AGE   VERSION
ubuntu-bionic   Ready    master   10m   v1.16.3-k3s.2

According to the documentation:

A kubeconfig file is written to /etc/rancher/k3s/k3s.yaml and the service is automatically started or restarted. The install script will install k3s and additional utilities, such as kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh.
[https://github.com/rancher/k3s/blob/master/README.md]
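
On the VM you can indeed see these utilities as symlinks to the k3s binary, and the systemd service that the install script created and started (a quick check, not from the original article):

ls -l /usr/local/bin/kubectl /usr/local/bin/crictl /usr/local/bin/ctr   # symlinks pointing to /usr/local/bin/k3s
systemctl status k3s --no-pager                                         # the k3s.service unit, should be active (running)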

Next, I used the following command:

cd /etc/rancher/k3s

ls -latr

With the following output:

total 12
-rw------- 1 root root 1052 Jan 12 10:16 k3s.yaml
drwxr-xr-x 2 root root 4096 Jan 12 10:16 .
drwxr-xr-x 4 root root 4096 Jan 12 10:16 ..

Next, I used the following command to see the content of file k3s.yaml:

sudo cat k3s.yaml

With the following output:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFUzT0RneU5ERTVNekFlRncweU1EQXhNVEl4TURFMk16TmFGdzB6TURBeE1Ea3hNREUyTXpOYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFUzT0RneU5ERTVNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQk12b3V1YjZTR3N6UVl2LzVyb0lpSE5xbXZ0aUxub2gyQTZzR1hIQyt2OWQKSzkwTVlmV2J2bkozVFhyeEg2Mm5LTDhEU05wcmN4eC9rRXNXM2FpZTV3Q2pJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSUJSUmlrd0FPcjFVCmJtTlhOcEw3Y1cxaDhRSGg4QnZJQmJKc2RqdGU3Myt4QWlFQXROUG9MTjliVFZpYmxlYW5SNFpKcStKNUxDMmsKeUUwN2daWlk1NURlc25RPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 1f0b266cfdd8e11a9af1a6e262b09746
    username: admin

Kubectl configuration

Let’s focus on the configuration.

By default, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.
[https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/]

With regard to the k3s kubectl command, the following applies:
Run an embedded kubectl CLI. If the KUBECONFIG environment variable is not set, this will automatically attempt to use the config file that is created at /etc/rancher/k3s/k3s.yaml when launching a K3s server node.
[https://rancher.com/docs/k3s/latest/en/installation/install-options/]
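
As an aside, instead of relaxing the permissions on /etc/rancher/k3s/k3s.yaml (the approach I take below), a non-root user could also point kubectl at a private, readable copy of that kubeconfig, for example:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes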

In order for a non-root user to use kubectl with a certain configuration, according to the warning we got earlier:


Unable to read /etc/rancher/k3s/k3s.yaml,
please start server with --write-kubeconfig-mode to modify kube config permissions

we have to start the k3s server with a particular kubeconfig mode.

We can use the K3s server option --write-kubeconfig-mode: “(client) Write kubeconfig with this mode [$K3S_KUBECONFIG_MODE]”.
[https://rancher.com/docs/k3s/latest/en/installation/install-options/]

I had a look at the documentation about using environment variable K3S_KUBECONFIG_MODE, and came across the following example:


curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -

[https://github.com/rancher/k3s/issues/389]

Remark about chmod 644:
Chmod 644 (chmod a+rwx,u-x,g-wx,o-wx) sets permissions so that the (U)ser/owner can read and write but cannot execute, the (G)roup can read but cannot write or execute, and (O)thers can read but cannot write or execute.

[Screenshot: chmod 644 permission breakdown]
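
Once k3s is installed with this kubeconfig mode (as done below), the resulting permissions on the file can be checked with standard commands:

stat -c '%a %U:%G %n' /etc/rancher/k3s/k3s.yaml   # expected: 644 root:root /etc/rancher/k3s/k3s.yaml
ls -l /etc/rancher/k3s/k3s.yaml                   # expected: -rw-r--r-- 1 root root ...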

Remark:
In a previous article I described a similar approach for a non-root user (vagrant) to use the kubectl command and configuration.
[https://technology.amis.nl/2019/02/12/rapidly-spinning-up-a-vm-with-ubuntu-docker-and-minikube-using-the-vm-drivernone-option-on-my-windows-laptop-using-vagrant-and-oracle-virtualbox/]

In the scripts directory I changed file k3s.sh to the following content:
[in bold, I highlighted the changes]

#!/bin/bash
echo "**** Begin installing k3s"

#Install
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.1 K3S_KUBECONFIG_MODE="644" sh -
echo "**** End installing k3s"

From here on in this blog, for simplicity, I will no longer mention the vagrant destroy command preceding the vagrant up command.

From the subdirectory named env on my Windows laptop, I opened a Windows Command Prompt (cmd) and typed: vagrant up

With the following output (only showing the part about k3s):

    ubuntu_k3s: **** Begin installing k3s
    ubuntu_k3s: [INFO]  Using v1.0.1 as release
    ubuntu_k3s: [INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.1/sha256sum-amd64.txt
    ubuntu_k3s: [INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.1/k3s
    ubuntu_k3s: [INFO]  Verifying binary download
    ubuntu_k3s: [INFO]  Installing k3s to /usr/local/bin/k3s
    ubuntu_k3s: [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
    ubuntu_k3s: [INFO]  Creating /usr/local/bin/crictl symlink to k3s
    ubuntu_k3s: [INFO]  Creating /usr/local/bin/ctr symlink to k3s
    ubuntu_k3s: [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
    ubuntu_k3s: [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    ubuntu_k3s: [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
    ubuntu_k3s: [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
    ubuntu_k3s: [INFO]  systemd: Enabling k3s unit
    ubuntu_k3s: Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    ubuntu_k3s: [INFO]  systemd: Starting k3s
    ubuntu_k3s: **** End installing k3s

So once k3s was installed, I used vagrant ssh to open a Linux Command Prompt where I used the following command:

kubectl get nodes

With the following output:

NAME            STATUS   ROLES    AGE   VERSION
ubuntu-bionic   Ready    master   49s   v1.16.3-k3s.2

Next, I used the following command:

kubectl get pods --all-namespaces

With the following output:

NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-58fb86bdfd-g68v5   1/1     Running     0          76s
kube-system   metrics-server-6d684c7b5-4zrgx            1/1     Running     0          75s
kube-system   coredns-d798c9dd-szfg7                    1/1     Running     0          76s
kube-system   helm-install-traefik-xg2zd                0/1     Completed   0          76s
kube-system   svclb-traefik-frjb9                       3/3     Running     0          32s
kube-system   traefik-65bccdc4bd-rxlv4                  1/1     Running     0          32s

Then, I used the following command:

cd /etc/rancher/k3s

ls -latr

With the following output:

total 12
-rw-r--r-- 1 root root 1052 Jan 12 14:40 k3s.yaml
drwxr-xr-x 2 root root 4096 Jan 12 14:40 .
drwxr-xr-x 4 root root 4096 Jan 12 14:40 ..

Here we can see the changed permissions for file k3s.yaml.

Kubernetes Web UI (Dashboard)

Now let’s try to interact with the Kubernetes Cluster via the Dashboard.

The Dashboard UI is not deployed by default. To deploy it, run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

You can access Dashboard using the kubectl command-line tool by running the following command:

kubectl proxy

Kubectl will make Dashboard available at:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

The UI can only be accessed from the machine where the command is executed. See kubectl proxy --help for more options.
[https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/]

Because of the setup of my demo environment, simply using kubectl proxy wouldn’t work, so again I used:
[https://technology.amis.nl/2019/02/12/rapidly-spinning-up-a-vm-with-ubuntu-docker-and-minikube-using-the-vm-drivernone-option-on-my-windows-laptop-using-vagrant-and-oracle-virtualbox/]

kubectl proxy --address='0.0.0.0' </dev/null &>/dev/null &
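
With the proxy running inside the VM and port 8001 forwarded by Vagrant, a quick sanity check (from the guest, or from the Windows host via the forwarded port) is to request the version endpoint through the proxy:

curl -s http://localhost:8001/version   # should return the Kubernetes version info as JSON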

In the scripts directory I created a file dashboard.sh with the following content:

#!/bin/bash

echo "**** Begin preparing dashboard"

echo "**** Install Kubernetes Dashboard"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
kubectl proxy --address='0.0.0.0' </dev/null &>/dev/null &

echo "**** End preparing dashboard"

I changed the content of Vagrantfile to:
[in bold, I highlighted the changes]

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  
  config.vm.define "ubuntu_k3s" do |ubuntu_k3s|
  
    config.vm.network "forwarded_port",
      guest: 8001,
      host:  8001,
      auto_correct: true
      
    config.vm.provider "virtualbox" do |vb|
        vb.name = "Ubuntu k3s"
        vb.memory = "8192"
        vb.cpus = "1"
        
      args = []
      config.vm.provision "k3s shell script", type: "shell",
          path: "scripts/k3s.sh",
          args: args
        
      args = []
      config.vm.provision "dashboard shell script", type: "shell",
          path: "scripts/dashboard.sh",
          args: args
    end
    
  end

end

In the Linux Command Prompt, I typed: exit

Then, I opened a Windows Command Prompt (cmd) and typed: vagrant up

With the following output (only showing the part about dashboard):

    ubuntu_k3s: **** Begin preparing dashboard
    ubuntu_k3s: **** Install Kubernetes Dashboard
    ubuntu_k3s: namespace/kubernetes-dashboard created
    ubuntu_k3s: serviceaccount/kubernetes-dashboard created
    ubuntu_k3s: service/kubernetes-dashboard created
    ubuntu_k3s: secret/kubernetes-dashboard-certs created
    ubuntu_k3s: secret/kubernetes-dashboard-csrf created
    ubuntu_k3s: secret/kubernetes-dashboard-key-holder created
    ubuntu_k3s: configmap/kubernetes-dashboard-settings created
    ubuntu_k3s: role.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: deployment.apps/kubernetes-dashboard created
    ubuntu_k3s: service/dashboard-metrics-scraper created
    ubuntu_k3s: deployment.apps/dashboard-metrics-scraper created
    ubuntu_k3s: **** End preparing dashboard

On the Linux Command Prompt, I used the following command:

kubectl get pods --all-namespaces

With the following output:

NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
kube-system            local-path-provisioner-58fb86bdfd-g68v5      1/1     Running     0          13m
kube-system            metrics-server-6d684c7b5-4zrgx               1/1     Running     0          13m
kube-system            coredns-d798c9dd-szfg7                       1/1     Running     0          13m
kube-system            helm-install-traefik-xg2zd                   0/1     Completed   0          13m
kube-system            svclb-traefik-frjb9                          3/3     Running     0          12m
kube-system            traefik-65bccdc4bd-rxlv4                     1/1     Running     0          12m
kubernetes-dashboard   dashboard-metrics-scraper-566cddb686-5wvcx   1/1     Running     0          9m38s
kubernetes-dashboard   kubernetes-dashboard-7b5bf5d559-tn4rh        1/1     Running     0          9m38s

In a Web Browser I entered the URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

And I got the result I was looking for:

[Screenshot: Kubernetes Dashboard sign-in page]

Here, I wanted to use a token, so first I had a look at the documentation:
[https://kubernetes.io/docs/reference/access-authn-authz/authentication/]

Users in Kubernetes
All Kubernetes clusters have two categories of users: service accounts managed by Kubernetes, and normal users.

Normal users are assumed to be managed by an outside, independent service.

In contrast, service accounts are users managed by the Kubernetes API. They are bound to specific namespaces, and created automatically by the API server or manually through API calls. Service accounts are tied to a set of credentials stored as Secrets, which are mounted into pods allowing in-cluster processes to talk to the Kubernetes API.

API requests are tied to either a normal user or a service account, or are treated as anonymous requests. This means every process inside or outside the cluster, from a human user typing kubectl on a workstation, to kubelets on nodes, to members of the control plane, must authenticate when making requests to the API server, or be treated as an anonymous user.
[https://kubernetes.io/docs/reference/access-authn-authz/authentication/]

I found an example (with regard to the Dashboard) of a ServiceAccount and ClusterRoleBinding manifest file. A service account is created and a role binding is made to the role cluster-admin, which does not exist by default in k3s.
[https://github.com/rancher/k3s/issues/233]

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

The example also provided information about how to get the token that allowed me to log in to the dashboard:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
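
If you only need the bare token value instead of the full describe output, the same lookup can also be written with jsonpath (shown here for the kube-system namespace used in the example above; later in this article I use the kubernetes-dashboard namespace):

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode; echo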

Based on the examples above, I added to the yaml directory a file serviceaccount-k3s.yaml with the following content:
[in bold, I highlighted the changes in namespace I made]

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

I added to the yaml directory a file clusterrolebinding-k3s.yaml with the following content:
[in bold, I highlighted the changes in namespace I made]

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Remark about the namespace:
The command kubectl -n kube-system get secret returns a long list of secrets. So, I wanted to use another namespace, in order to make it easier to determine the token that allows me to log in to the dashboard. I chose the namespace kubernetes-dashboard, because that namespace was created when the Kubernetes Dashboard was installed (see the output further above).

In the scripts directory I changed file dashboard.sh to the following content:
[in bold, I highlighted the changes]

#!/bin/bash

echo "**** Begin preparing dashboard"

echo "**** Install Kubernetes Dashboard"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

#Create Helm chart
echo "**** Create Helm chart"
cd /vagrant
cd helmcharts
rm -rf /vagrant/helmcharts/k3s-chart/*
helm create k3s-chart

rm -rf /vagrant/helmcharts/k3s-chart/templates/*
cp /vagrant/yaml/*k3s.yaml /vagrant/helmcharts/k3s-chart/templates

# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart k3s-chart"
helm install k3s-release ./k3s-chart

# Wait 30 seconds
echo "**** Waiting 30 seconds ..."
sleep 30

#List helm releases
echo "**** List helm releases"
helm list -d

#List secrets
echo "**** List secrets with namespace kubernetes-dashboard"
kubectl get secrets --namespace kubernetes-dashboard

echo "**** Describe secret with namespace kubernetes-dashboard"
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

kubectl proxy --address='0.0.0.0' </dev/null &>/dev/null &

echo "**** End preparing dashboard"

Remark about Helm:
In a previous article I already described how I used Helm.
[https://technology.amis.nl/2019/03/12/using-helm-the-package-manager-for-kubernetes-to-install-two-versions-of-a-restful-web-service-spring-boot-application-within-minikube/]

I had to make some changes, however, because Helm version 3.0.2 was now being used. To determine the version, I used the following command:

helm version

With the following output:

version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

  • Notable changes since Helm v2:
    The helm init command has been removed. It performed two primary functions. First, it installed Tiller. This is no longer needed. Second, it set up directories and repositories where Helm configuration lived. This is now automated. If the directory is not present, it will be created.
    [https://helm.sh/blog/helm-v3-beta/]
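
For this demo, the practical consequence is that no helm init or Tiller step is needed before creating and installing a chart, as the dashboard.sh script above already assumes. A minimal standalone illustration of the same Helm 3 flow (the chart and release names here are hypothetical):

helm create demo-chart                  # scaffold a new chart (hypothetical name)
helm install demo-release ./demo-chart  # install it as a release
helm list                               # list releases in the current namespace, no Tiller involved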

Because I wanted to use Helm, I changed the content of Vagrantfile to:
[in bold, I highlighted the changes]

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  
  config.vm.define "ubuntu_k3s" do |ubuntu_k3s|
  
    config.vm.network "forwarded_port",
      guest: 8001,
      host:  8001,
      auto_correct: true
      
    config.vm.provider "virtualbox" do |vb|
        vb.name = "Ubuntu k3s"
        vb.memory = "8192"
        vb.cpus = "1"
        
      args = []
      config.vm.provision "k3s shell script", type: "shell",
          path: "scripts/k3s.sh",
          args: args
        
      args = []
      config.vm.provision "helm shell script", type: "shell",
          path: "scripts/helm.sh",
          args: args
        
      args = []
      config.vm.provision "dashboard shell script", type: "shell",
          path: "scripts/dashboard.sh",
          args: args
    end
    
  end

end

Remark about helm.sh:
In a previous article I already described how I used Helm and the script file helm.sh.
[https://technology.amis.nl/2019/04/23/using-vagrant-and-shell-scripts-to-further-automate-setting-up-my-demo-environment-from-scratch-including-elasticsearch-fluentd-and-kibana-efk-within-minikube/]

Again, I opened a Windows Command Prompt (cmd) and typed: vagrant up

With the following output (only showing the part about dashboard):

    ubuntu_k3s: **** Begin preparing dashboard
    ubuntu_k3s: **** Install Kubernetes Dashboard
    ubuntu_k3s: namespace/kubernetes-dashboard created
    ubuntu_k3s: serviceaccount/kubernetes-dashboard created
    ubuntu_k3s: service/kubernetes-dashboard created
    ubuntu_k3s: secret/kubernetes-dashboard-certs created
    ubuntu_k3s: secret/kubernetes-dashboard-csrf created
    ubuntu_k3s: secret/kubernetes-dashboard-key-holder created
    ubuntu_k3s: configmap/kubernetes-dashboard-settings created
    ubuntu_k3s: role.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    ubuntu_k3s: deployment.apps/kubernetes-dashboard created
    ubuntu_k3s: service/dashboard-metrics-scraper created
    ubuntu_k3s: deployment.apps/dashboard-metrics-scraper created
    ubuntu_k3s: **** Create Helm chart
    ubuntu_k3s: Creating k3s-chart
    ubuntu_k3s: **** Install Helm chart k3s-chart
    ubuntu_k3s: NAME: k3s-release
    ubuntu_k3s: LAST DEPLOYED: Tue Jan 14 19:53:24 2020
    ubuntu_k3s: NAMESPACE: default
    ubuntu_k3s: STATUS: deployed
    ubuntu_k3s: REVISION: 1
    ubuntu_k3s: TEST SUITE: None
    ubuntu_k3s: **** Waiting 30 seconds …
    ubuntu_k3s: **** List helm releases
    ubuntu_k3s: NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
    ubuntu_k3s: k3s-release     default         1               2020-01-14 19:53:24.329429114 +0000 UTC deployed        k3s-chart-0.1.0 1.16.0
    ubuntu_k3s: **** List secrets with namespace kubernetes-dashboard
    ubuntu_k3s: NAME                               TYPE                                  DATA   AGE
    ubuntu_k3s: default-token-l2nr4                kubernetes.io/service-account-token   3      34s
    ubuntu_k3s: kubernetes-dashboard-token-54p9k   kubernetes.io/service-account-token   3      34s
    ubuntu_k3s: kubernetes-dashboard-certs         Opaque                                0      34s
    ubuntu_k3s: admin-user-token-trfdn             kubernetes.io/service-account-token   3      31s
    ubuntu_k3s: kubernetes-dashboard-csrf          Opaque                                1      34s
    ubuntu_k3s: kubernetes-dashboard-key-holder    Opaque                                2      34s
    ubuntu_k3s: **** Describe secret with namespace kubernetes-dashboard
    ubuntu_k3s: Name:         admin-user-token-trfdn
    ubuntu_k3s: Namespace:    kubernetes-dashboard
    ubuntu_k3s: Labels:       <none>
    ubuntu_k3s: Annotations:  kubernetes.io/service-account.name: admin-user
    ubuntu_k3s:               kubernetes.io/service-account.uid: b65dc46c-0833-4fcf-b833-cfec45139764
    ubuntu_k3s:
    ubuntu_k3s: Type:  kubernetes.io/service-account-token
    ubuntu_k3s:
    ubuntu_k3s: Data
    ubuntu_k3s: ====
    ubuntu_k3s: token:
      eyJhbGciOiJSUzI1NiIsImtpZCI6IlhyREtIa21HdlhBQVd2Nm9kTGtJU3RUTnlWWTNJaHI2blNPb3J5eWRwR2cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXRyZmRuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiNjVkYzQ2Yy0wODMzLTRmY2YtYjgzMy1jZmVjNDUxMzk3NjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.bJBCZmV7oIUuljz9-I1oO71js-mAOZHc4wLaUwPayYAqAzx_kTM_oFwSEBtieFxmwYP2CTP2QJZM6G8OBGvLyUiQyRumaTavFo51Rh-eW9wSXO24p6Sf7BdQRaJsjS4lnInDGd1Ksrv-Az6LI10rrIJXHgI7jz1wNmSdSqk3OHGXgioKZL0qjlrwgS6UviTe-0geMFxvdGUogUWvShmQkR-sGRSfACYX8-RZdFSc3wRWsoIVo_4NME-q8uNm79BaP5RbPAC-z-2amVHJQUUtgs_88pY-Qu-iiDqUpC823pHYkjB65w5RICjjqlKIrWqAptT35fBFSOfrUKf_Oy483A
    ubuntu_k3s: ca.crt:     526 bytes
    ubuntu_k3s: namespace:  20 bytes
    ubuntu_k3s: **** End preparing dashboard

In the Web Browser on my Windows laptop, I entered the value of the token (seen above) and clicked the “Sign in” button.

[Screenshot: entering the token on the Kubernetes Dashboard sign-in page]

The Kubernetes Dashboard was opened with the default namespace selected.

[Screenshot: Kubernetes Dashboard overview with the default namespace selected]

Next, I navigated to Nodes. Here you can see that the Kubernetes Cluster consists of one Node.

[Screenshot: Kubernetes Dashboard, Nodes overview]

Finally, I changed the namespace to kube-system and navigated to the Pods, with the following result, similar to the list we saw earlier.

[Screenshot: Kubernetes Dashboard, Pods in the kube-system namespace]

So now it’s time to conclude. In this article, I described how I used Vagrant and shell scripts to automate setting up my demo environment from scratch, including k3s, Helm and the Kubernetes Dashboard, on top of an Ubuntu guest operating system within an Oracle VirtualBox appliance. k3s is indeed relatively easy to install. For me, the next step is to actually start using it, to find out how it compares to, for example, Minikube.