For a demo, I needed an environment including Elasticsearch and Kibana (Elastic Stack).
Luckily, I already had the configuration for such an environment, using Vagrant and Oracle VirtualBox: in the past, I had set up such a demo environment, available within an Oracle VirtualBox appliance, as I described in a series of articles.
[https://technology.amis.nl/2019/09/15/using-elastic-stack-filebeat-for-log-aggregation/]
This demo environment was built from quite a few files and included:
- guest Operating System (Ubuntu)
- Docker
- Minikube
- Kubectl
- Helm
- Elasticsearch
- Filebeat
- Logstash
- Kibana
- MySQL
After starting everything up, I ran into quite a few errors.
In this article, I will share the steps I took to get my demo environment working again.
Vagrantfile and shell scripts
My starting point was the Vagrantfile (and shell scripts) I used before for setting up such a demo environment.
Vagrant.configure("2") do |config| config.vm.box = "ubuntu/xenial64" config.vm.define "ubuntu_minikube_helm_elastic" do |ubuntu_minikube_helm_elastic| config.vm.network "forwarded_port", guest: 8001, host: 8001, auto_correct: true config.vm.network "forwarded_port", guest: 5601, host: 5601, auto_correct: true config.vm.network "forwarded_port", guest: 9200, host: 9200, auto_correct: true config.vm.network "forwarded_port", guest: 9010, host: 9010, auto_correct: true config.vm.network "forwarded_port", guest: 9020, host: 9020, auto_correct: true config.vm.network "forwarded_port", guest: 9110, host: 9110, auto_correct: true config.vm.provider "virtualbox" do |vb| vb.name = "Ubuntu Minikube Helm Elastic Stack" vb.memory = "8192" vb.cpus = "1" args = [] config.vm.provision "shell", path: "scripts/docker.sh", args: args args = [] config.vm.provision "shell", path: "scripts/minikube.sh", args: args args = [] config.vm.provision "shell", path: "scripts/kubectl.sh", args: args args = [] config.vm.provision "shell", path: "scripts/helm.sh", args: args args = [] config.vm.provision "shell", path: "scripts/namespaces.sh", args: args args = [] config.vm.provision "shell", path: "scripts/elasticsearch.sh", args: args args = [] config.vm.provision "shell", path: "scripts/kibana.sh", args: args args = [] config.vm.provision "shell", path: "scripts/logstash.sh", args: args args = [] config.vm.provision "shell", path: "scripts/filebeat.sh", args: args args = [] config.vm.provision "shell", path: "scripts/mysql.sh", args: args args = [] config.vm.provision "shell", path: "scripts/booksservices.sh", args: args end end end
I just started everything up and looked at the output from vagrant up.
I found some errors I had seen before and knew how to fix.
Remark:
In order to stop the running machine and destroy its resources, I used the following command on the Windows Command Prompt: vagrant destroy
For readability, I won’t mention this step each time, but I used it after each problem fix.
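The destroy-and-retry cycle thus looked like this (a minimal sketch; the -f flag merely skips Vagrant’s confirmation prompt):

vagrant destroy -f
vagrant up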
Installing Helm, Error: unknown command “init” for “helm”
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin installing Helm
…
ubuntu_minikube_helm_elastic: helm 3.7.0 from Snapcrafters installed
ubuntu_minikube_helm_elastic: Error: unknown command "init" for "helm"
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: Did you mean this?
ubuntu_minikube_helm_elastic:         lint
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: Run 'helm --help' for usage.
…
ubuntu_minikube_helm_elastic: **** End installing Helm
In order to fix this, I removed the helm init command from my helm.sh script:
#Install Tiller (Helm server-side)
helm init
For my convenience, I added the following to my helm.sh script:
#Show version
helm version
Notable changes since Helm v2:
The helm init command has been removed. It performed two primary functions. First, it installed Tiller. This is no longer needed. Second, it set up directories and repositories where Helm configuration lived. This is now automated. If the directory is not present it will be created.
[https://helm.sh/blog/helm-v3-beta/]
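Putting these changes together, a minimal sketch of the relevant part of helm.sh then looks as follows (the snap-based install command is my assumption, based on the “from Snapcrafters” line in the output above):

#!/bin/bash
echo "**** Begin installing Helm"
#Install Helm 3 via snap (assumption: this matches the "from Snapcrafters" output above)
sudo snap install helm --classic
#Show version (helm init no longer exists in Helm v3)
helm version
echo "**** End installing Helm"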
Installing a Helm chart, Error: unknown flag: --name
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin installing namespaces
ubuntu_minikube_helm_elastic: **** Create Helm chart
ubuntu_minikube_helm_elastic: Creating namespace-chart
ubuntu_minikube_helm_elastic: WARNING: File "/vagrant/helmcharts/namespace-chart/.helmignore" already exists. Overwriting.
ubuntu_minikube_helm_elastic: **** Install Helm chart namespace-chart
ubuntu_minikube_helm_elastic: Error: unknown flag: --name
ubuntu_minikube_helm_elastic: **** Waiting 30 seconds ...
ubuntu_minikube_helm_elastic: **** List helm releases
ubuntu_minikube_helm_elastic: NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION
ubuntu_minikube_helm_elastic: **** List namespaces
ubuntu_minikube_helm_elastic: NAME          STATUS   AGE
ubuntu_minikube_helm_elastic: default       Active   4m3s
ubuntu_minikube_helm_elastic: kube-public   Active   3m58s
ubuntu_minikube_helm_elastic: kube-system   Active   4m2s
ubuntu_minikube_helm_elastic: **** End installing namespaces
In order to fix this, I had to change all my shell scripts that used the helm install command.
For example, the helm install command in my namespaces.sh script looked like:
# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart namespace-chart"
helm install ./namespace-chart --name namespace-release
And I changed it to:
# Install Helm chart
cd /vagrant
cd helmcharts
echo "**** Install Helm chart namespace-chart"
helm install namespace-release ./namespace-chart
In Helm v3, the release name is mandatory as part of the command; see helm install --help:
helm install [NAME] [CHART] [flags]
[https://helm.sh/docs/helm/helm_install/]
Provisioning shell script was running multiple times!
I noticed that the provisioning shell scripts were running multiple times!
This, too, was an error I had seen before and knew how to fix.
Provisioning scripts always run twice?
The bug itself is due to your provision block not having a name. If you don’t want them running twice, you can fix it by giving it a name like this:
`config.vm.provision "my shell script", type: "shell", ….`
[https://groups.google.com/forum/#!topic/vagrant-up/Ue11v3BmBN4]
So, I changed the content of the Vagrantfile for all my calls to a shell script, for example from:
args = []
config.vm.provision "shell", path: "scripts/namespaces.sh", args: args
To:
args = []
config.vm.provision "namespaces shell script", type: "shell", path: "scripts/namespaces.sh", args: args
Using the latest version of Ubuntu
I wanted to use the latest LTS version of Ubuntu (20.04, “Focal Fossa”). So, I changed the content of the Vagrantfile from:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
To:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
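The new base box is downloaded automatically during the next vagrant up, but it can also be fetched up front:

#Download the ubuntu/focal64 base box from Vagrant Cloud
vagrant box add ubuntu/focal64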
Using the latest version of Docker
I also wanted to use the latest version of Docker Engine, especially since Ubuntu 16.04 is past its end-of-life and Docker no longer releases packages for it:
Ubuntu 16.04 LTS “Xenial Xerus” end-of-life
Ubuntu Linux 16.04 LTS reached the end of its five-year LTS window on April 30th 2021 and is no longer supported. Docker no longer releases packages for this distribution (including patch- and security releases). Users running Docker on Ubuntu 16.04 are recommended to update their system to a currently supported LTS version of Ubuntu.
[https://docs.docker.com/engine/install/ubuntu/]
In order to do so, I followed the “Install Docker Engine on Ubuntu” instructions from:
https://docs.docker.com/engine/install/ubuntu/
I changed my docker.sh script to:
sudo apt-get remove docker docker-engine docker.io containerd runc

#Set up the repository
##Update the apt package index
sudo apt-get update
##Install packages to allow apt to use a repository over HTTPS
sudo apt-get install ca-certificates
sudo apt-get install curl
sudo apt-get install gnupg
sudo apt-get install lsb-release
##Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
##Set up the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

#Install Docker Engine
##Update the apt package index
sudo apt-get update -qq
#Install a specific version of Docker Engine
sudo apt-get install -yqq docker-ce=5:20.10.10~3-0~ubuntu-focal docker-ce-cli=5:20.10.10~3-0~ubuntu-focal containerd.io
#Verify that Docker Engine is installed correctly by running the hello-world image
sudo docker run hello-world
#Use Docker as a non-root user
sudo usermod -aG docker vagrant

echo "**** End installing Docker Engine"
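As an aside, to see which exact version strings are available for pinning docker-ce like this, the repository can be queried first; a sketch:

#List the Docker Engine versions available in the configured repository
apt-cache madison docker-ce
#The version string to pin (for example 5:20.10.10~3-0~ubuntu-focal) is in the second column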
Installing minikube, [ERROR SystemVerification]: unsupported docker version: 20.10.10
This was the minikube.sh script I used:
#!/bin/bash
echo "**** Begin downloading minikube"
#Download a static binary
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.32.0/minikube-linux-amd64
chmod +x minikube
#Add the Minikube executable to your path
sudo cp minikube /usr/local/bin/
rm minikube
echo "**** End downloading minikube"

echo "**** Begin starting a Cluster"
#Start a Cluster
minikube start --vm-driver=none
echo "**** End starting a Cluster"
With the following output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin downloading minikube
ubuntu_minikube_helm_elastic:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
ubuntu_minikube_helm_elastic:                                  Dload  Upload   Total   Spent    Left  Speed
100 37.3M  100 37.3M    0     0  7635k      0  0:00:05  0:00:05 --:--:-- 8602k
ubuntu_minikube_helm_elastic: **** End downloading minikube
ubuntu_minikube_helm_elastic: **** Begin starting a Cluster
ubuntu_minikube_helm_elastic: There is a newer version of minikube available (v1.24.0). Download it here:
ubuntu_minikube_helm_elastic: https://github.com/kubernetes/minikube/releases/tag/v1.24.0
ubuntu_minikube_helm_elastic: …
ubuntu_minikube_helm_elastic: [WARNING Hostname]: hostname "minikube" could not be reached
ubuntu_minikube_helm_elastic: [WARNING Hostname]: hostname "minikube" lookup minikube on 127.0.0.53:53: server misbehaving
ubuntu_minikube_helm_elastic: [preflight] Some fatal errors occurred:
ubuntu_minikube_helm_elastic: [ERROR SystemVerification]: unsupported docker version: 20.10.10
…
ubuntu_minikube_helm_elastic: **** End starting a Cluster
So, this version of minikube did not support Docker version 20.10.10.
So, I changed the minikube version in my minikube.sh script from:
#Download a static binary
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.32.0/minikube-linux-amd64
chmod +x minikube
To:
#Download a static binary
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
Installing minikube, syntax error near unexpected token `newline'
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin downloading minikube
ubuntu_minikube_helm_elastic:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
ubuntu_minikube_helm_elastic:                                  Dload  Upload   Total   Spent    Left  Speed
100  187k    0  187k    0     0   328k      0 --:--:-- --:--:-- --:--:--  327k
ubuntu_minikube_helm_elastic: **** End downloading minikube
ubuntu_minikube_helm_elastic: **** Begin starting a Cluster
ubuntu_minikube_helm_elastic: /usr/local/bin/minikube: line 7: syntax error near unexpected token `newline'
ubuntu_minikube_helm_elastic: /usr/local/bin/minikube: line 7: `<!DOCTYPE html>'
ubuntu_minikube_helm_elastic: **** End starting a Cluster
It looked like something went wrong with the download. Using the old version, the downloaded file was about 37.3M in size; this time only 187k was downloaded, and apparently it contained an HTML page instead of the binary.
-L, --location
(HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place.
[https://curl.se/docs/manpage.html]
So again, I changed the content of my minikube.sh script from:
#Download a static binary
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
To:
#Download a static binary
curl -o minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
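As an extra safeguard (not part of my script), the downloaded file can be sanity-checked before it is installed, so an HTML error page is not copied to /usr/local/bin again:

#Check what was actually downloaded
file minikube        #should report an ELF 64-bit executable, not HTML or ASCII text
head -c 15 minikube  #a real binary starts with ELF magic bytes, not '<!DOCTYPE html>'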
Installing minikube, X Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 2 is greater than the available cpus of 1
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin downloading minikube
ubuntu_minikube_helm_elastic:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
ubuntu_minikube_helm_elastic:                                  Dload  Upload   Total   Spent    Left  Speed
100 66.3M  100 66.3M    0     0  8660k      0  0:00:07  0:00:07 --:--:-- 8774k
ubuntu_minikube_helm_elastic: **** End downloading minikube
ubuntu_minikube_helm_elastic: **** Begin starting a Cluster
ubuntu_minikube_helm_elastic: * minikube v1.24.0 on Ubuntu 20.04 (vbox/amd64)
ubuntu_minikube_helm_elastic: * Using the none driver based on user configuration
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: X Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 2 is greater than the available cpus of 1
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: **** End starting a Cluster
In order to give the VM the 2 CPUs that minikube requires, I changed the content of the Vagrantfile from:
config.vm.provider "virtualbox" do |vb|
  vb.name = "Ubuntu Minikube Helm Elastic Stack"
  vb.memory = "8192"
  vb.cpus = "1"
To:
config.vm.provider "virtualbox" do |vb|
  vb.name = "Ubuntu Minikube Helm Elastic Stack"
  vb.memory = "8192"
  vb.cpus = "2"
Installing minikube, X Exiting due to GUEST_MISSING_CONNTRACK: Sorry, Kubernetes 1.22.3 requires conntrack to be installed in root’s path
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin downloading minikube
ubuntu_minikube_helm_elastic:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
ubuntu_minikube_helm_elastic:                                  Dload  Upload   Total   Spent    Left  Speed
100 66.3M  100 66.3M    0     0  8430k      0  0:00:08  0:00:08 --:--:-- 8683k
ubuntu_minikube_helm_elastic: **** End downloading minikube
ubuntu_minikube_helm_elastic: **** Begin starting a Cluster
ubuntu_minikube_helm_elastic: * minikube v1.24.0 on Ubuntu 20.04 (vbox/amd64)
ubuntu_minikube_helm_elastic: * Using the none driver based on user configuration
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: X Exiting due to GUEST_MISSING_CONNTRACK: Sorry, Kubernetes 1.22.3 requires conntrack to be installed in root's path
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: **** End starting a Cluster
In order to install conntrack (needed by Kubernetes), I changed the content of my minikube.sh script to:
#!/bin/bash
echo "**** Begin downloading minikube"
#Kubernetes 1.22.3 requires conntrack to be installed in root's path
sudo apt install -y conntrack
#Download a static binary
curl -o minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
#Add the Minikube executable to your path
sudo cp minikube /usr/local/bin/
rm minikube
echo "**** End downloading minikube"

echo "**** Begin starting a Cluster"
#Start a Cluster
minikube start --vm-driver=none
echo "**** End starting a Cluster"
The conntrack utility provides a full featured userspace interface to the Netfilter connection tracking system that is intended to replace the old /proc/net/ip_conntrack interface. This tool can be used to search, list, inspect and maintain the connection tracking subsystem of the Linux kernel.
Using conntrack, you can dump a list of all (or a filtered selection of) currently tracked connections, delete connections from the state table, and even add new ones.
In addition, you can also monitor connection tracking events, e.g. show an event message (one line) per newly established connection.
[http://manpages.ubuntu.com/manpages/focal/man8/conntrack.8.html]
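For illustration (not part of my scripts), once installed, conntrack can for example be used like this (it requires root):

#List all currently tracked connections
sudo conntrack -L
#Show connection tracking events in real time
sudo conntrack -E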
Installing minikube, remark about how to use kubectl or minikube commands as your own user
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin downloading minikube
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: Reading package lists...
ubuntu_minikube_helm_elastic: Building dependency tree...
ubuntu_minikube_helm_elastic: Reading state information...
ubuntu_minikube_helm_elastic: Suggested packages:
ubuntu_minikube_helm_elastic:   nftables
ubuntu_minikube_helm_elastic: The following NEW packages will be installed:
ubuntu_minikube_helm_elastic:   conntrack
ubuntu_minikube_helm_elastic: 0 upgraded, 1 newly installed, 0 to remove and 52 not upgraded.
ubuntu_minikube_helm_elastic: Need to get 30.3 kB of archives.
ubuntu_minikube_helm_elastic: After this operation, 104 kB of additional disk space will be used.
ubuntu_minikube_helm_elastic: Get:1 http://archive.ubuntu.com/ubuntu focal/main amd64 conntrack amd64 1:1.4.5-2 [30.3 kB]
ubuntu_minikube_helm_elastic: dpkg-preconfigure: unable to re-open stdin: No such file or directory
ubuntu_minikube_helm_elastic: Fetched 30.3 kB in 0s (323 kB/s)
ubuntu_minikube_helm_elastic: Selecting previously unselected package conntrack. (Reading database ... 69169 files and directories currently installed.)
ubuntu_minikube_helm_elastic: Preparing to unpack .../conntrack_1%3a1.4.5-2_amd64.deb ...
ubuntu_minikube_helm_elastic: Unpacking conntrack (1:1.4.5-2) ...
ubuntu_minikube_helm_elastic: Setting up conntrack (1:1.4.5-2) ...
ubuntu_minikube_helm_elastic: Processing triggers for man-db (2.9.1-1) ...
ubuntu_minikube_helm_elastic:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
ubuntu_minikube_helm_elastic:                                  Dload  Upload   Total   Spent    Left  Speed
100 66.3M  100 66.3M    0     0  8752k      0  0:00:07  0:00:07 --:--:-- 8935k
ubuntu_minikube_helm_elastic: **** End downloading minikube
ubuntu_minikube_helm_elastic: **** Begin starting a Cluster
ubuntu_minikube_helm_elastic: * minikube v1.24.0 on Ubuntu 20.04 (vbox/amd64)
ubuntu_minikube_helm_elastic: * Using the none driver based on user configuration
ubuntu_minikube_helm_elastic: * Starting control plane node minikube in cluster minikube
ubuntu_minikube_helm_elastic: * Running on localhost (CPUs=2, Memory=7962MB, Disk=39642MB) ...
ubuntu_minikube_helm_elastic: * OS release is Ubuntu 20.04.3 LTS
ubuntu_minikube_helm_elastic: * Preparing Kubernetes v1.22.3 on Docker 20.10.10 ...
ubuntu_minikube_helm_elastic:   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
ubuntu_minikube_helm_elastic:   - Generating certificates and keys ...
ubuntu_minikube_helm_elastic:   - Booting up control plane ...
ubuntu_minikube_helm_elastic:   - Configuring RBAC rules ...
ubuntu_minikube_helm_elastic: * Configuring local host environment ...
ubuntu_minikube_helm_elastic: *
ubuntu_minikube_helm_elastic: ! The 'none' driver is designed for experts who need to integrate with an existing VM
ubuntu_minikube_helm_elastic: * Most users should use the newer 'docker' driver instead, which does not require root!
ubuntu_minikube_helm_elastic: * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
ubuntu_minikube_helm_elastic: *
ubuntu_minikube_helm_elastic: ! kubectl and minikube configuration will be stored in /root
ubuntu_minikube_helm_elastic: ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
ubuntu_minikube_helm_elastic: *
ubuntu_minikube_helm_elastic:   - sudo mv /root/.kube /root/.minikube $HOME
ubuntu_minikube_helm_elastic:   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
ubuntu_minikube_helm_elastic: *
ubuntu_minikube_helm_elastic: * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
ubuntu_minikube_helm_elastic: * Verifying Kubernetes components...
ubuntu_minikube_helm_elastic:   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
ubuntu_minikube_helm_elastic: * Enabled addons: default-storageclass, storage-provisioner
ubuntu_minikube_helm_elastic: * kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
ubuntu_minikube_helm_elastic: * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
ubuntu_minikube_helm_elastic: **** End starting a Cluster
In order to use kubectl or minikube commands as my own user, and to relocate them, I changed the content of my minikube.sh script (replacing the $USER and $HOME variables mentioned in the output above) to:
#!/bin/bash
echo "**** Begin downloading minikube"
#Kubernetes 1.22.3 requires conntrack to be installed in root's path
sudo apt install -y conntrack
#Download a static binary
curl -o minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
#Add the Minikube executable to your path
sudo cp minikube /usr/local/bin/
rm minikube
echo "**** End downloading minikube"

echo "**** Begin starting a Cluster"
#Start a Cluster
minikube start --vm-driver=none
#To use kubectl or minikube commands as your own user, you may need to relocate them.
sudo mv /root/.kube /root/.minikube /home/vagrant
sudo chown -R vagrant /home/vagrant/.kube /home/vagrant/.minikube
minikube kubectl -- get pods -A
echo "**** End starting a Cluster"
Remark:
Before running this script, I temporarily added some commands to get some extra information. I also used vagrant ssh to open a Linux Command Prompt and executed the same commands there, to compare the output.
Installing minikube, Error caching kubectl: failed to acquire lock “/root/.minikube/cache/linux/v1.22.3/kubectl.lock”
Output from vagrant up command:
…
ubuntu_minikube_helm_elastic: * Verifying Kubernetes components...
ubuntu_minikube_helm_elastic:   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
ubuntu_minikube_helm_elastic: * Enabled addons: storage-provisioner, default-storageclass
ubuntu_minikube_helm_elastic: * kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
ubuntu_minikube_helm_elastic: * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
ubuntu_minikube_helm_elastic: Error caching kubectl: failed to acquire lock "/root/.minikube/cache/linux/v1.22.3/kubectl.lock": {Name:mk8d58bec4adbae20d75e048f90b0be65470e900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}: unable to open /tmp/juju-mk8d58bec4adbae20d75e048f90b0be65470e900: permission denied
ubuntu_minikube_helm_elastic: **** End starting a Cluster
It looked like minikube was holding a lock on the file kubectl.lock. So, I tried stopping minikube before the mv command.
I changed the content of my minikube.sh script to:
#!/bin/bash
echo "**** Begin downloading minikube"
#Kubernetes 1.22.3 requires conntrack to be installed in root's path
sudo apt install -y conntrack
#Download a static binary
curl -o minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
#Add the Minikube executable to your path
sudo cp minikube /usr/local/bin/
rm minikube
echo "**** End downloading minikube"

echo "**** Begin starting a Cluster"
#Start a Cluster
minikube start --vm-driver=none
#Stops a running local Kubernetes cluster
minikube stop
#Gets the status of a local Kubernetes cluster
minikube status
#To use kubectl or minikube commands as your own user, you may need to relocate them.
sudo mv /root/.kube /root/.minikube /home/vagrant
sudo chown -R vagrant /home/vagrant/.kube /home/vagrant/.minikube
#Start a Cluster
minikube start --vm-driver=none
#Gets the status of a local Kubernetes cluster
minikube status
minikube kubectl -- get pods -A
echo "**** End starting a Cluster"
Installing minikube, X Exiting due to HOST_JUJU_LOCK_PERMISSION: writing kubeconfig: Error writing file /root/.kube/config: failed to acquire lock for /root/.kube/config
Output from vagrant up command:
…
ubuntu_minikube_helm_elastic: * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
ubuntu_minikube_helm_elastic: * Stopping node "minikube" ...
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: X Exiting due to HOST_JUJU_LOCK_PERMISSION: writing kubeconfig: Error writing file /root/.kube/config: failed to acquire lock for /root/.kube/config: {Name:mk72a1487fd2da23da9e8181e16f352a6105bd56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}: unable to open /tmp/juju-mk72a1487fd2da23da9e8181e16f352a6105bd56: permission denied
ubuntu_minikube_helm_elastic: * Suggestion: Run 'sudo sysctl fs.protected_regular=0', or try a driver which does not require root, such as '--driver=docker'
ubuntu_minikube_helm_elastic: * Related issue: https://github.com/kubernetes/minikube/issues/6391
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: minikube
ubuntu_minikube_helm_elastic: type: Control Plane
ubuntu_minikube_helm_elastic: host: Stopped
ubuntu_minikube_helm_elastic: kubelet: Stopped
ubuntu_minikube_helm_elastic: apiserver: Stopped
ubuntu_minikube_helm_elastic: kubeconfig: Stopped
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: * minikube v1.24.0 on Ubuntu 20.04 (vbox/amd64)
ubuntu_minikube_helm_elastic: * Using the none driver based on user configuration
ubuntu_minikube_helm_elastic: * Starting control plane node minikube in cluster minikube
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: X Exiting due to HOST_JUJU_LOCK_PERMISSION: Failed to save config: failed to acquire lock for /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}: unable to open /tmp/juju-mk270d1b5db5965f2dc9e9e25770a63417031943: permission denied
ubuntu_minikube_helm_elastic: * Suggestion: Run 'sudo sysctl fs.protected_regular=0', or try a driver which does not require root, such as '--driver=docker'
ubuntu_minikube_helm_elastic: * Related issue: https://github.com/kubernetes/minikube/issues/6391
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: * Profile "minikube" not found. Run "minikube profile list" to view all profiles.
ubuntu_minikube_helm_elastic: To start a cluster, run: "minikube start"
ubuntu_minikube_helm_elastic: Error caching kubectl: failed to acquire lock "/root/.minikube/cache/linux/v1.22.3/kubectl.lock": {Name:mk8d58bec4adbae20d75e048f90b0be65470e900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}: unable to open /tmp/juju-mk8d58bec4adbae20d75e048f90b0be65470e900: permission denied
ubuntu_minikube_helm_elastic: **** End starting a Cluster
So, I added the suggestion (mentioned in the output above) to fix the problem.
I changed the content of my minikube.sh script to:
#!/bin/bash
echo "**** Begin downloading minikube"
#Kubernetes 1.22.3 requires conntrack to be installed in root's path
sudo apt install -y conntrack
#Download a static binary
curl -o minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
#Add the Minikube executable to your path
sudo cp minikube /usr/local/bin/
rm minikube
echo "**** End downloading minikube"

echo "**** Begin starting a Cluster"
sudo sysctl fs.protected_regular=0
#Start a Cluster
minikube start --vm-driver=none
#Stops a running local Kubernetes cluster
minikube stop
#Gets the status of a local Kubernetes cluster
minikube status
#To use kubectl or minikube commands as your own user, you may need to relocate them.
sudo mv /root/.kube /root/.minikube /home/vagrant
sudo chown -R vagrant /home/vagrant/.kube /home/vagrant/.minikube
#Start a Cluster
minikube start --vm-driver=none
#Gets the status of a local Kubernetes cluster
minikube status
minikube kubectl -- get pods -A
echo "**** End starting a Cluster"
User root can’t write to file in /tmp owned by someone else in 20.04, but can in 18.04
[https://askubuntu.com/questions/1250974/user-root-cant-write-to-file-in-tmp-owned-by-someone-else-in-20-04-but-can-in]
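For reference, the current value of this kernel setting can be inspected first; note that a change made via sysctl on the command line does not survive a reboot (persisting it would require an entry under /etc/sysctl.d):

#Inspect the current value (non-zero means the protection is active)
sysctl fs.protected_regular
#Disable the protection for the running system
sudo sysctl fs.protected_regular=0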
Installing minikube, The connection to the server 10.0.2.15:8443 was refused - did you specify the right host or port?
Instead of showing the output from the vagrant up command all at once, for convenience I list the commands one at a time, each with its related output:
minikube stop
With the following output:
ubuntu_minikube_helm_elastic: * Stopping node "minikube" ...
ubuntu_minikube_helm_elastic: * 1 node stopped.
Command:
minikube status
With the following output:
ubuntu_minikube_helm_elastic: minikube
ubuntu_minikube_helm_elastic: type: Control Plane
ubuntu_minikube_helm_elastic: host: Stopped
ubuntu_minikube_helm_elastic: kubelet: Stopped
ubuntu_minikube_helm_elastic: apiserver: Stopped
ubuntu_minikube_helm_elastic: kubeconfig: Stopped
Command:
#Start a Cluster
minikube start --vm-driver=none
With the following output:
ubuntu_minikube_helm_elastic: * minikube v1.24.0 on Ubuntu 20.04 (vbox/amd64)
ubuntu_minikube_helm_elastic: * Using the none driver based on user configuration
ubuntu_minikube_helm_elastic: * Starting control plane node minikube in cluster minikube
ubuntu_minikube_helm_elastic: * Running on localhost (CPUs=2, Memory=7962MB, Disk=39642MB) ...
ubuntu_minikube_helm_elastic: * OS release is Ubuntu 20.04.3 LTS
ubuntu_minikube_helm_elastic: * Preparing Kubernetes v1.22.3 on Docker 20.10.10 ...
ubuntu_minikube_helm_elastic:   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
ubuntu_minikube_helm_elastic: E0112 08:29:32.671368   22260 kubeadm.go:680] sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml failed - will try once more: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
ubuntu_minikube_helm_elastic: stdout:
ubuntu_minikube_helm_elastic: [certs] Using certificateDir folder "/var/lib/minikube/certs"
ubuntu_minikube_helm_elastic: [certs] Using existing ca certificate authority
ubuntu_minikube_helm_elastic: [certs] Using existing apiserver certificate and key on disk
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: stderr:
ubuntu_minikube_helm_elastic: error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
ubuntu_minikube_helm_elastic: To see the stack trace of this error execute with --v=5 or higher
ubuntu_minikube_helm_elastic: ! Unable to restart cluster, will reset it: run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
ubuntu_minikube_helm_elastic: stdout:
ubuntu_minikube_helm_elastic: [certs] Using certificateDir folder "/var/lib/minikube/certs"
ubuntu_minikube_helm_elastic: [certs] Using existing ca certificate authority
ubuntu_minikube_helm_elastic: [certs] Using existing apiserver certificate and key on disk
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: stderr:
ubuntu_minikube_helm_elastic: error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
ubuntu_minikube_helm_elastic: To see the stack trace of this error execute with --v=5 or higher
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic:   - Generating certificates and keys ...
ubuntu_minikube_helm_elastic: ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
ubuntu_minikube_helm_elastic: stdout:
ubuntu_minikube_helm_elastic: [init] Using Kubernetes version: v1.22.3
ubuntu_minikube_helm_elastic: [preflight] Running pre-flight checks
ubuntu_minikube_helm_elastic: [preflight] Pulling images required for setting up a Kubernetes cluster
ubuntu_minikube_helm_elastic: [preflight] This might take a minute or two, depending on the speed of your internet connection
ubuntu_minikube_helm_elastic: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
ubuntu_minikube_helm_elastic: [certs] Using certificateDir folder "/var/lib/minikube/certs"
ubuntu_minikube_helm_elastic: [certs] Using existing ca certificate authority
ubuntu_minikube_helm_elastic: [certs] Using existing apiserver certificate and key on disk
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: stderr:
ubuntu_minikube_helm_elastic: [WARNING FileExisting-socat]: socat not found in system path
ubuntu_minikube_helm_elastic: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
ubuntu_minikube_helm_elastic: error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
ubuntu_minikube_helm_elastic: To see the stack trace of this error execute with --v=5 or higher
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic:   - Generating certificates and keys ...
ubuntu_minikube_helm_elastic: *
ubuntu_minikube_helm_elastic: X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
ubuntu_minikube_helm_elastic: stdout:
ubuntu_minikube_helm_elastic: [init] Using Kubernetes version: v1.22.3
ubuntu_minikube_helm_elastic: [preflight] Running pre-flight checks
ubuntu_minikube_helm_elastic: [preflight] Pulling images required for setting up a Kubernetes cluster
ubuntu_minikube_helm_elastic: [preflight] This might take a minute or two, depending on the speed of your internet connection
ubuntu_minikube_helm_elastic: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
ubuntu_minikube_helm_elastic: [certs] Using certificateDir folder "/var/lib/minikube/certs"
ubuntu_minikube_helm_elastic: [certs] Using existing ca certificate authority
ubuntu_minikube_helm_elastic: [certs] Using existing apiserver certificate and key on disk
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: stderr:
ubuntu_minikube_helm_elastic: [WARNING FileExisting-socat]: socat not found in system path
ubuntu_minikube_helm_elastic: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
ubuntu_minikube_helm_elastic: error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
ubuntu_minikube_helm_elastic: To see the stack trace of this error execute with --v=5 or higher
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: *
ubuntu_minikube_helm_elastic: ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
ubuntu_minikube_helm_elastic: │                                                                                             │
ubuntu_minikube_helm_elastic: │    * If the above advice does not help, please let us know:                                 │
ubuntu_minikube_helm_elastic: │      https://github.com/kubernetes/minikube/issues/new/choose                               │
ubuntu_minikube_helm_elastic: │                                                                                             │
ubuntu_minikube_helm_elastic: │    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
ubuntu_minikube_helm_elastic: │                                                                                             │
ubuntu_minikube_helm_elastic: ╰─────────────────────────────────────────────────────────────────────────────────────────────╯
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
ubuntu_minikube_helm_elastic: stdout:
ubuntu_minikube_helm_elastic: [init] Using Kubernetes version: v1.22.3
ubuntu_minikube_helm_elastic: [preflight] Running pre-flight checks
ubuntu_minikube_helm_elastic: [preflight] Pulling images required for setting up a Kubernetes cluster
ubuntu_minikube_helm_elastic: [preflight] This might take a minute or two, depending on the speed of your internet connection
ubuntu_minikube_helm_elastic: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
ubuntu_minikube_helm_elastic: [certs] Using certificateDir folder "/var/lib/minikube/certs"
ubuntu_minikube_helm_elastic: [certs] Using existing ca certificate authority
ubuntu_minikube_helm_elastic: [certs] Using existing apiserver certificate and key on disk
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: stderr:
ubuntu_minikube_helm_elastic: [WARNING FileExisting-socat]: socat not found in system path
ubuntu_minikube_helm_elastic: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
ubuntu_minikube_helm_elastic: error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "minikubeCA")
ubuntu_minikube_helm_elastic: To see the stack trace of this error execute with --v=5 or higher
ubuntu_minikube_helm_elastic:
ubuntu_minikube_helm_elastic: *
ubuntu_minikube_helm_elastic: ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
ubuntu_minikube_helm_elastic: │                                                                                             │
ubuntu_minikube_helm_elastic: │    * If the above advice does not help, please let us know:                                 │
ubuntu_minikube_helm_elastic: │      https://github.com/kubernetes/minikube/issues/new/choose                               │
ubuntu_minikube_helm_elastic: │                                                                                             │
ubuntu_minikube_helm_elastic: │    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
ubuntu_minikube_helm_elastic: │                                                                                             │
ubuntu_minikube_helm_elastic: ╰─────────────────────────────────────────────────────────────────────────────────────────────╯
The command:
#Gets the status of a local Kubernetes cluster
minikube status
With the following output:
ubuntu_minikube_helm_elastic: minikube
ubuntu_minikube_helm_elastic: type: Control Plane
ubuntu_minikube_helm_elastic: host: Stopped
ubuntu_minikube_helm_elastic: kubelet: Stopped
ubuntu_minikube_helm_elastic: apiserver: Stopped
ubuntu_minikube_helm_elastic: kubeconfig: Stopped
And the remainder of the commands:
minikube kubectl -- get pods -A
echo "**** End starting a Cluster"
ubuntu_minikube_helm_elastic: The connection to the server 10.0.2.15:8443 was refused - did you specify the right host or port?
ubuntu_minikube_helm_elastic: **** End starting a Cluster
This wasn’t going very well. But I recognized the error and knew how to fix it:
If you see a message similar to the following, kubectl is not configured correctly or is not able to connect to a Kubernetes cluster.
[https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/]
So, in order to get kubectl to connect to the minikube cluster, I needed the kubeconfig file.
A file that is used to configure access to clusters is called a kubeconfig file. This is a generic way of referring to configuration files. It does not mean that there is a file named kubeconfig.
By default, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.
[https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/]
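For example, instead of relocating files, kubectl can also be pointed at an explicit kubeconfig file; a sketch:

#Via the KUBECONFIG environment variable
export KUBECONFIG=/home/vagrant/.kube/config
kubectl cluster-info
#Or per command, via the --kubeconfig flag
kubectl --kubeconfig /home/vagrant/.kube/config cluster-info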
Remember, my minikube.sh script contains the following commands:
#To use kubectl or minikube commands as your own user, you may need to relocate them.
sudo mv /root/.kube /root/.minikube /home/vagrant
sudo chown -R vagrant /home/vagrant/.kube /home/vagrant/.minikube
So, the commands above make the config file available in the $HOME/.kube directory of user vagrant. Therefore, kubectl should work when executed as user vagrant.
Before I tried this out, I first wanted to get some extra information.
In order to list the relevant directories, I changed the content of my minikube.sh script to (only showing a part):
echo "**** Begin starting a Cluster" sudo sysctl fs.protected_regular=0 sudo ls -latr /root/.kube sudo ls -latr /root/.minikube #Start a Cluster minikube start --vm-driver=none #To use kubectl or minikube commands as your own user, you may need to relocate them. sudo mv /root/.kube /root/.minikube /home/vagrant sudo chown -R vagrant /home/vagrant/.kube /home/vagrant/.minikube minikube kubectl -- get pods -A echo "**** End starting a Cluster"
With the following output:
ubuntu_minikube_helm_elastic: ls: cannot access '/root/.kube': No such file or directory
ubuntu_minikube_helm_elastic: ls: cannot access '/root/.minikube': No such file or directory
As I expected, I had to start minikube first in order to get the kubeconfig file. I also added some extra commands to list the content of the kubeconfig file and some directories of user vagrant.
I changed the content of my minikube.sh script to (only showing a part):
echo "**** Begin starting a Cluster" sudo sysctl fs.protected_regular=0 #Start a Cluster minikube start --vm-driver=none #To use kubectl or minikube commands as your own user, you may need to relocate them. sudo cp -R /root/.kube /root/.minikube /home/vagrant sudo chown -R vagrant /home/vagrant/.kube /home/vagrant/.minikube sudo ls -latr /root/.kube sudo cat /root/.kube/config sudo ls -latr /root/.minikube sudo ls -latr /home/vagrant/.kube sudo ls -latr /home/vagrant/.minikube minikube kubectl -- get pods -A echo "**** End starting a Cluster"
By the way, I opted for cp -R instead of mv, to keep the original files (which will still be used as user root from the shell scripts).
Again, instead of showing the output from the vagrant up command all at once, I list the commands one at a time, each with its related output:
sudo ls -latr /root/.kube
With the following output:
ubuntu_minikube_helm_elastic: total 16
ubuntu_minikube_helm_elastic: drwx------ 6 root root 4096 Jan 12 09:24 ..
ubuntu_minikube_helm_elastic: drwxr-x--- 4 root root 4096 Jan 12 09:24 cache
ubuntu_minikube_helm_elastic: -rw------- 1 root root  803 Jan 12 09:24 config
ubuntu_minikube_helm_elastic: drwxr-x--- 3 root root 4096 Jan 12 09:24 .
The command:
sudo cat /root/.kube/config
With the following output:
ubuntu_minikube_helm_elastic: apiVersion: v1
ubuntu_minikube_helm_elastic: clusters:
ubuntu_minikube_helm_elastic: - cluster:
ubuntu_minikube_helm_elastic:     certificate-authority: /root/.minikube/ca.crt
ubuntu_minikube_helm_elastic:     extensions:
ubuntu_minikube_helm_elastic:     - extension:
ubuntu_minikube_helm_elastic:         last-update: Wed, 12 Jan 2022 09:24:30 UTC
ubuntu_minikube_helm_elastic:         provider: minikube.sigs.k8s.io
ubuntu_minikube_helm_elastic:         version: v1.24.0
ubuntu_minikube_helm_elastic:       name: cluster_info
ubuntu_minikube_helm_elastic:     server: https://10.0.2.15:8443
ubuntu_minikube_helm_elastic:   name: minikube
ubuntu_minikube_helm_elastic: contexts:
ubuntu_minikube_helm_elastic: - context:
ubuntu_minikube_helm_elastic:     cluster: minikube
ubuntu_minikube_helm_elastic:     extensions:
ubuntu_minikube_helm_elastic:     - extension:
ubuntu_minikube_helm_elastic:         last-update: Wed, 12 Jan 2022 09:24:30 UTC
ubuntu_minikube_helm_elastic:         provider: minikube.sigs.k8s.io
ubuntu_minikube_helm_elastic:         version: v1.24.0
ubuntu_minikube_helm_elastic:       name: context_info
ubuntu_minikube_helm_elastic:     namespace: default
ubuntu_minikube_helm_elastic:     user: minikube
ubuntu_minikube_helm_elastic:   name: minikube
ubuntu_minikube_helm_elastic: current-context: minikube
ubuntu_minikube_helm_elastic: kind: Config
ubuntu_minikube_helm_elastic: preferences: {}
ubuntu_minikube_helm_elastic: users:
ubuntu_minikube_helm_elastic: - name: minikube
ubuntu_minikube_helm_elastic:   user:
ubuntu_minikube_helm_elastic:     client-certificate: /root/.minikube/profiles/minikube/client.crt
ubuntu_minikube_helm_elastic:     client-key: /root/.minikube/profiles/minikube/client.key
The command:
sudo ls -latr /root/.minikube
With the following output:
ubuntu_minikube_helm_elastic: total 56
ubuntu_minikube_helm_elastic: drwxr-xr-x 3 root root 4096 Jan 12 09:22 profiles
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 root root 4096 Jan 12 09:22 files
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 root root 4096 Jan 12 09:22 config
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 root root 4096 Jan 12 09:22 addons
ubuntu_minikube_helm_elastic: -rw------- 1 root root    0 Jan 12 09:22 machine_client.lock
ubuntu_minikube_helm_elastic: drwxr-xr-x 3 root root 4096 Jan 12 09:22 machines
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 root root 4096 Jan 12 09:22 certs
ubuntu_minikube_helm_elastic: drwxr-xr-x 3 root root 4096 Jan 12 09:23 cache
ubuntu_minikube_helm_elastic: -rw------- 1 root root 1675 Jan 12 09:23 ca.key
ubuntu_minikube_helm_elastic: -rw-r--r-- 1 root root 1111 Jan 12 09:23 ca.crt
ubuntu_minikube_helm_elastic: -rw------- 1 root root 1675 Jan 12 09:23 proxy-client-ca.key
ubuntu_minikube_helm_elastic: -rw-r--r-- 1 root root 1119 Jan 12 09:23 proxy-client-ca.crt
ubuntu_minikube_helm_elastic: drwxr-xr-x 10 root root 4096 Jan 12 09:23 .
ubuntu_minikube_helm_elastic: drwx------ 6 root root 4096 Jan 12 09:24 ..
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 root root 4096 Jan 12 09:24 logs
The command:
sudo ls -latr /home/vagrant/.kube
With the following output:
ubuntu_minikube_helm_elastic: total 16
ubuntu_minikube_helm_elastic: drwxr-x--- 4 vagrant root    4096 Jan 12 09:24 cache
ubuntu_minikube_helm_elastic: -rw------- 1 vagrant root     803 Jan 12 09:24 config
ubuntu_minikube_helm_elastic: drwxr-xr-x 6 vagrant vagrant 4096 Jan 12 09:24 ..
ubuntu_minikube_helm_elastic: drwxr-x--- 3 vagrant root    4096 Jan 12 09:24 .
Here you can see that user vagrant has become the owner of the directories and files.
The command:
sudo ls -latr /home/vagrant/.minikube
With the following output:
ubuntu_minikube_helm_elastic: total 56
ubuntu_minikube_helm_elastic: drwxr-xr-x 3 vagrant root    4096 Jan 12 09:24 machines
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 vagrant root    4096 Jan 12 09:24 logs
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 vagrant root    4096 Jan 12 09:24 certs
ubuntu_minikube_helm_elastic: drwxr-xr-x 3 vagrant root    4096 Jan 12 09:24 cache
ubuntu_minikube_helm_elastic: drwxr-xr-x 6 vagrant vagrant 4096 Jan 12 09:24 ..
ubuntu_minikube_helm_elastic: -rw------- 1 vagrant root    1675 Jan 12 09:24 proxy-client-ca.key
ubuntu_minikube_helm_elastic: -rw-r--r-- 1 vagrant root    1119 Jan 12 09:24 proxy-client-ca.crt
ubuntu_minikube_helm_elastic: drwxr-xr-x 3 vagrant root    4096 Jan 12 09:24 profiles
ubuntu_minikube_helm_elastic: -rw------- 1 vagrant root       0 Jan 12 09:24 machine_client.lock
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 vagrant root    4096 Jan 12 09:24 files
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 vagrant root    4096 Jan 12 09:24 config
ubuntu_minikube_helm_elastic: -rw------- 1 vagrant root    1675 Jan 12 09:24 ca.key
ubuntu_minikube_helm_elastic: -rw-r--r-- 1 vagrant root    1111 Jan 12 09:24 ca.crt
ubuntu_minikube_helm_elastic: drwxr-xr-x 2 vagrant root    4096 Jan 12 09:24 addons
ubuntu_minikube_helm_elastic: drwxr-xr-x 10 vagrant root   4096 Jan 12 09:24 .
Here, too, user vagrant has become the owner of the directories and files.
I used vagrant ssh to open a Linux Command Prompt where I used the following command:
kubectl cluster-info
With the following output:
Error in configuration:
* unable to read client-cert /root/.minikube/profiles/minikube/client.crt for minikube due to open /root/.minikube/profiles/minikube/client.crt: permission denied
* unable to read client-key /root/.minikube/profiles/minikube/client.key for minikube due to open /root/.minikube/profiles/minikube/client.key: permission denied
* unable to read certificate-authority /root/.minikube/ca.crt for minikube due to open /root/.minikube/ca.crt: permission denied
vagrant@ubuntu-focal:~$
Here we can see that the /root/.minikube directory is still used. So, the content of the kubeconfig file still contained references to some files in the /root directory, for which user vagrant does not have the right permissions.
In order to list that directory's contents, I used the following command on the Linux Command Prompt:
sudo ls -latr /root/.minikube/profiles/minikube
With the following output:
total 52
drwxr-xr-x 3 root root 4096 Jan 12 18:46 ..
-rw------- 1 root root 1675 Jan 12 18:46 client.key
-rw-r--r-- 1 root root 1147 Jan 12 18:46 client.crt
-rw------- 1 root root 1675 Jan 12 18:46 apiserver.key.49504c3e
-rw------- 1 root root 1675 Jan 12 18:46 apiserver.key
-rw-r--r-- 1 root root 1399 Jan 12 18:46 apiserver.crt.49504c3e
-rw-r--r-- 1 root root 1399 Jan 12 18:46 apiserver.crt
-rw------- 1 root root 1675 Jan 12 18:46 proxy-client.key
-rw-r--r-- 1 root root 1147 Jan 12 18:46 proxy-client.crt
drwxr-xr-x 2 root root 4096 Jan 12 18:46 .
-rw-r--r-- 1 root root 7165 Jan 12 18:47 events.json
-rw------- 1 root root 2376 Jan 12 18:47 config.json
What happened so far is that I copied the kubeconfig file from /root/.kube/config to /home/vagrant/.kube/config via the command:

sudo cp -R /root/.kube /root/.minikube /home/vagrant
The content of the kubeconfig file /home/vagrant/.kube/config still contained references to some files in the root directory:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 12 Jan 2022 18:47:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: cluster_info
    server: https://10.0.2.15:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Wed, 12 Jan 2022 18:47:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /root/.minikube/profiles/minikube/client.crt
    client-key: /root/.minikube/profiles/minikube/client.key
For now, the easiest way was to change the content of the kubeconfig file.
So, I used a stream editor (sed) for this:
[https://www.gnu.org/software/sed/manual/sed.html]
sed -i 's/root/home\/vagrant/g' /home/vagrant/.kube/config
cat /home/vagrant/.kube/config
With the following output:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/vagrant/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 13 Jan 2022 07:01:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: cluster_info
    server: https://10.0.2.15:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Thu, 13 Jan 2022 07:01:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.24.0
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/vagrant/.minikube/profiles/minikube/client.crt
    client-key: /home/vagrant/.minikube/profiles/minikube/client.key
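As an aside: the pattern s/root/home\/vagrant/g replaces every occurrence of the string root in the file, which happens to be safe here but could over-match in other kubeconfig files. A slightly more targeted variant (a sketch, not what I used) would anchor on the path separators:

sed -i 's#/root/#/home/vagrant/#g' /home/vagrant/.kube/config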
In order to check if it was working now, I used the following command on the Linux Command Prompt:
kubectl cluster-info
With the following output:
Kubernetes control plane is running at https://10.0.2.15:8443
CoreDNS is running at https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
So, this problem was fixed: user vagrant can now use kubectl and minikube commands.
I changed the content of my minikube.sh script to:
#!/bin/bash
echo "**** Begin downloading minikube"
#Kubernetes 1.22.3 requires conntrack to be installed in root's path
sudo apt install -y conntrack
#Download a static binary
curl -o minikube https://storage.googleapis.com/minikube/releases/v1.24.0/minikube-linux-amd64
chmod +x minikube
#Add the Minikube executable to your path
sudo cp minikube /usr/local/bin/
rm minikube
echo "**** End downloading minikube"

echo "**** Begin starting a Cluster"
sudo sysctl fs.protected_regular=0
#Start a Cluster
minikube start --vm-driver=none
#To use kubectl or minikube commands as your own user, you may need to relocate them.
sudo cp -R /root/.kube /root/.minikube /home/vagrant
sudo chown -R vagrant /home/vagrant/.kube /home/vagrant/.minikube
sed -i 's/root/home\/vagrant/g' /home/vagrant/.kube/config
minikube kubectl -- get pods -A
echo "**** End starting a Cluster"
But as it turned out, my problems weren’t over yet.
Installing namespaces, The connection to the server localhost:8080 was refused - did you specify the right host or port?
Output from vagrant up command:
ubuntu_minikube_helm_elastic: cp: overwrite '/home/vagrant/.kube/config'?
Based on the Vagrantfile and the output above, these are the scripts that were executed so far:
- docker shell script
- minikube shell script
- kubectl shell script
This was the kubectl.sh script I used:
#!/bin/bash
echo "**** Begin installing kubectl"
#Install kubectl binary
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

#Check the kubectl configuration
kubectl cluster-info

#Make kubectl work for your non-root user named vagrant
mkdir -p /home/vagrant/.kube
sudo cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
sudo chown vagrant:vagrant /home/vagrant/.kube/config
echo "**** End installing kubectl"

echo "**** Begin preparing dashboard"
kubectl proxy --address='0.0.0.0' </dev/null &>/dev/null &
echo "**** End preparing dashboard"
I remembered that in the past, I had also put some commands in this file to make kubectl work for the non-root user vagrant. Because this was now already being taken care of in the minikube.sh script, these lines of code could be removed from kubectl.sh, to avoid the prompt:
cp: overwrite '/home/vagrant/.kube/config'?
So, I removed the following commands from my kubectl.sh script:
#Make kubectl work for your non-root user named vagrant
mkdir -p /home/vagrant/.kube
sudo cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
sudo chown vagrant:vagrant /home/vagrant/.kube/config
cp - copy files and directories
-i, --interactive
prompt before overwrite (overrides a previous -n option)
[http://manpages.ubuntu.com/manpages/trusty/man1/cp.1.html]
Installing Elasticsearch, Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: unable to recognize “”: no matches for kind “Deployment” in version “extensions/v1beta1”
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Begin installing Elasticsearch
ubuntu_minikube_helm_elastic: **** Create Helm chart
ubuntu_minikube_helm_elastic: Creating elasticsearch-chart
ubuntu_minikube_helm_elastic: WARNING: File "/vagrant/helmcharts/elasticsearch-chart/.helmignore" already exists. Overwriting.
ubuntu_minikube_helm_elastic: **** Install Helm chart elasticsearch-chart
ubuntu_minikube_helm_elastic: Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
ubuntu_minikube_helm_elastic: **** Waiting 2,5 minute ...
As of Kubernetes v1.16 extensions/v1beta1 is a deprecated API version.
[https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/]
So, I changed the content of all my helm yaml files with apiVersion: extensions/v1beta1, for example from:
[in bold, I highlighted the changes]
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: nl-amis-logging
To:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: nl-amis-logging
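I made these changes by hand; a batch update along these lines could do the same (a sketch, assuming GNU sed and that all the yaml files live under /vagrant/yaml):
#Sketch: replace the deprecated apiVersion in all yaml files in one go
#(xargs -r skips the sed call when grep finds no matching files)
grep -rl "apiVersion: extensions/v1beta1" /vagrant/yaml \
| xargs -r sed -i "s|apiVersion: extensions/v1beta1|apiVersion: apps/v1|"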
Installing Elasticsearch, Error from server (NotFound): nodes “minikube” not found
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Determine the IP of the minikube node
ubuntu_minikube_helm_elastic: Error from server (NotFound): nodes "minikube" not found
ubuntu_minikube_helm_elastic: ------
ubuntu_minikube_helm_elastic: **** Via socat forward local port 9200 to port 30200 on the minikube node ()
ubuntu_minikube_helm_elastic: **** Send a request to Elasticsearch
ubuntu_minikube_helm_elastic:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
ubuntu_minikube_helm_elastic:                                  Dload  Upload   Total   Spent    Left  Speed
ubuntu_minikube_helm_elastic: **** End installing Elasticsearch
Based on the Vagrantfile and the output above, these are the scripts that were executed so far:
- docker shell script
- minikube shell script
- kubectl shell script
- namespaces shell script
- elasticsearch shell script
This was the elasticsearch.sh script I used:
#!/bin/bash echo "**** Begin installing Elasticsearch" #Create Helm chart echo "**** Create Helm chart" cd /vagrant cd helmcharts rm -rf /vagrant/helmcharts/elasticsearch-chart/* helm create elasticsearch-chart rm -rf /vagrant/helmcharts/elasticsearch-chart/templates/* cp /vagrant/yaml/*elasticsearch.yaml /vagrant/helmcharts/elasticsearch-chart/templates # Install Helm chart cd /vagrant cd helmcharts echo "**** Install Helm chart elasticsearch-chart" helm install elasticsearch-release ./elasticsearch-chart # Wait 2,5 minute echo "**** Waiting 2,5 minute ..." sleep 150 #List helm releases echo "**** List helm releases" helm list -d #List pods echo "**** List pods with namespace nl-amis-logging" kubectl get pods --namespace nl-amis-logging #List services echo "**** List services with namespace nl-amis-logging" kubectl get service --namespace nl-amis-logging echo "**** Determine the IP of the minikube node" nodeIP=$(kubectl get node minikube -o yaml | grep address: | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}") echo "---$nodeIP---" echo "**** Via socat forward local port 9200 to port 30200 on the minikube node ($nodeIP)" socat tcp-listen:9200,fork tcp:$nodeIP:30200 & echo "**** Send a request to Elasticsearch" curl -XGET http://localhost:9200/_count?pretty #http://localhost:9200/_count?pretty echo "**** End installing Elasticsearch"
The error was:
Error from server (NotFound): nodes "minikube" not found
I used vagrant ssh to open a Linux Command Prompt, where I ran the following command:
kubectl get nodes
With the following output:
NAME           STATUS   ROLES                  AGE   VERSION
ubuntu-focal   Ready    control-plane,master   17m   v1.22.3
Apparently, the node name was no longer minikube, as before, but ubuntu-focal.
This seems related to the name of the minikube VM being used.
[https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#setting-the-node-name]
I searched on the Internet and found a solution that worked for me.
[https://github.com/kubernetes/minikube/issues/4063]
I changed the content of my minikube.sh script from:
[in bold, I highlighted the changes]
#Start a Cluster
minikube start --vm-driver=none
To:
#Start a Cluster
minikube start \
--vm-driver=none \
--extra-config=kubeadm.node-name=minikube \
--extra-config=kubelet.hostname-override=minikube
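After destroying and re-provisioning the environment, the node name can be verified with a quick check like this (a sketch; the expected output follows from the extra-config settings above):
#Sketch: verify the node is registered under the expected name
kubectl get nodes -o custom-columns=NAME:.metadata.name
#Expected output:
#NAME
#minikube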
Installing logstash, Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: unable to recognize “”: no matches for kind “Deployment” in version “apps/v1beta1”
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Create Helm chart
ubuntu_minikube_helm_elastic: Creating logstash-chart
ubuntu_minikube_helm_elastic: WARNING: File "/vagrant/helmcharts/logstash-chart/.helmignore" already exists. Overwriting.
ubuntu_minikube_helm_elastic: **** Install Helm chart logstash-chart
ubuntu_minikube_helm_elastic: Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1"
ubuntu_minikube_helm_elastic: **** Waiting 2,5 minute ...
As of Kubernetes v1.16 apps/v1beta1 is a deprecated API version.
[https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/]
So, I changed the content of all my helm yaml files with apiVersion: apps/v1beta1, for example from:
[in bold, I highlighted the changes]
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: logstash
  namespace: nl-amis-logging
To:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: nl-amis-logging
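A quick way to see which API versions a cluster still serves for the apps group is something like this (a sketch):
#Sketch: list the served API versions for the apps group
kubectl api-versions | grep "^apps/"
#On Kubernetes v1.22 this only lists apps/v1; apps/v1beta1 is no longer served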
Installing logstash, Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating “”: error validating data: ValidationError(Deployment.spec): missing required field “selector” in io.k8s.api.apps.v1.DeploymentSpec
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Create Helm chart
ubuntu_minikube_helm_elastic: Creating logstash-chart
ubuntu_minikube_helm_elastic: WARNING: File "/vagrant/helmcharts/logstash-chart/.helmignore" already exists. Overwriting.
ubuntu_minikube_helm_elastic: **** Install Helm chart logstash-chart
ubuntu_minikube_helm_elastic: Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
ubuntu_minikube_helm_elastic: **** Waiting 2,5 minute ...
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment.
[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#selector]
So, I changed the content of all my helm yaml files of kind Deployment, for example from:
[in bold, I highlighted the changes]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: nl-amis-logging
  labels:
    app: logstash
    version: "1.0"
    environment: logging
spec:
  replicas: 1
To:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: nl-amis-logging
  labels:
    app: logstash
    version: "1.0"
    environment: logging
spec:
  selector:
    matchLabels:
      app: logstash
      version: "1.0"
      environment: logging
  replicas: 1
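To catch this kind of validation error without running the whole vagrant up again, the chart can be rendered and validated with a dry run, along these lines (a sketch; I am assuming the release is called logstash-release, following the naming pattern from elasticsearch.sh above):
#Sketch: render the chart and validate the manifests without installing anything
cd /vagrant/helmcharts
helm template logstash-release ./logstash-chart | kubectl apply --dry-run=server -f -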
Installing filebeat, Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating “”: error validating data: ValidationError(DaemonSet.spec): missing required field “selector” in io.k8s.api.apps.v1.DaemonSetSpec
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Create Helm chart
ubuntu_minikube_helm_elastic: Creating filebeat-chart
ubuntu_minikube_helm_elastic: WARNING: File "/vagrant/helmcharts/filebeat-chart/.helmignore" already exists. Overwriting.
ubuntu_minikube_helm_elastic: **** Install Helm chart filebeat-chart
ubuntu_minikube_helm_elastic: Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec
ubuntu_minikube_helm_elastic: **** Waiting 1 minute ...
The .spec.selector field is a pod selector. It works the same as the .spec.selector of a Job.
As of Kubernetes 1.8, you must specify a pod selector that matches the labels of the .spec.template. The pod selector will no longer be defaulted when left empty.
[https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#pod-selector]
So, I changed the content of all my helm yaml files of kind DaemonSet, for example from:
[in bold, I highlighted the changes]
apiVersion: apps/v1 kind: DaemonSet metadata: name: filebeat-daemonset namespace: nl-amis-logging labels: app: filebeat version: "1.0" environment: logging spec: template: metadata: labels: app: filebeat version: "1.0" environment: logging
To:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-daemonset
  namespace: nl-amis-logging
  labels:
    app: filebeat
    version: "1.0"
    environment: logging
spec:
  selector:
    matchLabels:
      app: filebeat
      version: "1.0"
      environment: logging
  template:
    metadata:
      labels:
        app: filebeat
        version: "1.0"
        environment: logging
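Once the fixed chart is installed, it can be confirmed that the selector really matches the pods (a sketch, using the names from the yaml above):
#Sketch: show the DaemonSet's selector and the pods it matches
kubectl get daemonset filebeat-daemonset --namespace nl-amis-logging -o jsonpath='{.spec.selector.matchLabels}'
kubectl get pods --namespace nl-amis-logging -l app=filebeat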
Installing filebeat, Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [unable to recognize “”: no matches for kind “ClusterRole” in version “rbac.authorization.k8s.io/v1beta1”, error validating “”: error validating data: ValidationError(DaemonSet.spec): missing required field “selector” in io.k8s.api.apps.v1.DaemonSetSpec]
Output from vagrant up command:
ubuntu_minikube_helm_elastic: **** Create Helm chart
ubuntu_minikube_helm_elastic: Creating filebeat-chart
ubuntu_minikube_helm_elastic: WARNING: File "/vagrant/helmcharts/filebeat-chart/.helmignore" already exists. Overwriting.
ubuntu_minikube_helm_elastic: **** Install Helm chart filebeat-chart
ubuntu_minikube_helm_elastic: Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1", error validating "": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec]
ubuntu_minikube_helm_elastic: **** Waiting 1 minute ...
The rbac.authorization.k8s.io/v1beta1 API version of ClusterRole, ClusterRoleBinding, Role, and RoleBinding is no longer served as of v1.22.
[https://kubernetes.io/docs/reference/using-api/deprecation-guide/#rbac-resources-v122]
Remark:
I couldn't quite explain the part of the error message about the missing field "selector" in io.k8s.api.apps.v1.DaemonSetSpec, because I had already fixed that in the DaemonSet yaml files (see further above).
So, I changed the content of all my helm yaml files with apiVersion: rbac.authorization.k8s.io/v1beta1, for example from:
[in bold, I highlighted the changes]
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat-clusterrole
  namespace: nl-amis-logging
To:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat-clusterrole
  namespace: nl-amis-logging
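As a final check, all yaml files can be scanned for API versions that are no longer served, along these lines (a sketch covering only the versions fixed in this article):
#Sketch: find yaml files that still reference a removed API version
grep -rnE "apiVersion: (extensions/v1beta1|apps/v1beta1|rbac.authorization.k8s.io/v1beta1)" /vagrant/yaml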
So, it's been quite a journey to get my demo environment working again, but in the end I succeeded. The demo, with its focus on Elasticsearch and Kibana, was a success and made the time and effort I spent on getting the demo environment working again worthwhile.