In my last article I described how two versions of a RESTful Web Service Spring Boot application were used in Minikube, together with an external “Dockerized” MySQL database.
[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]
In this article I will describe how you can use Helm, the package manager for Kubernetes, to install two versions of a RESTful Web Service Spring Boot application, together with an external MySQL database, within Minikube.
Minikube
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster for users looking to try out Kubernetes or develop with it day-to-day.
Helm
Helm is the package manager for Kubernetes.
With Helm you can find, share, and use software built for Kubernetes.
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
The latest version of Helm is maintained by the CNCF – in collaboration with Microsoft, Google, Bitnami and the Helm contributor community.
With Helm a team can:
- Manage Complexity
- Easy Updates
- Simple Sharing
- Rollbacks
Charts describe even the most complex apps, provide repeatable application installation, and serve as a single point of authority.
Take the pain out of updates with in-place upgrades and custom hooks.
Charts are easy to version, share, and host on public or private servers.
Use helm rollback to roll back to an older version of a release with ease.
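As a minimal sketch of that rollback workflow (assuming a cluster with Tiller installed, and using a hypothetical release named my-release):

```shell
# List releases and note the current revision number
helm list
# Inspect the revision history of the (hypothetical) release
helm history my-release
# Roll back to revision 1; this creates a new revision rather than erasing history
helm rollback my-release 1
```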
Using Helm
The following prerequisites are required for a successful and properly secured use of Helm.
- A Kubernetes cluster
- Deciding what security configurations to apply to your installation, if any.
- Installing and configuring Helm and Tiller, the cluster-side service.
You must have Kubernetes installed or have access to a cluster.
You should also have a local configured copy of kubectl.
In my case I am using Minikube.
If you’re using Helm on a cluster that you completely control, like Minikube or a cluster on a private network in which sharing is not a concern, the default installation – which applies no security configuration – is fine, and it’s definitely the easiest. To install Helm without additional security steps, install Helm and then initialize Helm.
Again, in my case I am using Minikube.
There are two parts to Helm: The Helm client (helm) and the Helm server (Tiller).
[https://helm.sh/docs/using_helm/#installing-helm]
For more information about Helm, I refer you to: https://helm.sh/docs/
Because on my Windows laptop, Minikube runs within an Oracle VirtualBox appliance, I will be using a Linux Command Prompt via ssh.
As described in a previous article, I created a subdirectory named env on my Windows laptop.
[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]
I went to the env directory and opened a Windows Command Prompt (cmd) to access Linux (within the VirtualBox appliance) via ssh: vagrant ssh
Installing Helm
Linux Command Prompt: sudo snap install helm --classic
[https://helm.sh/docs/install/]
This command returned the following output:
helm 2.13.0 from 'snapcrafters' installed
Installing Tiller
Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.
The easiest way to install Tiller into the cluster is simply to run helm init. This will validate that Helm’s local environment is set up correctly (and set it up if necessary). Then it will connect to whatever cluster kubectl connects to by default (kubectl config view). Once it connects, it will install Tiller into the kube-system namespace.
After helm init, you should be able to run kubectl get pods --namespace kube-system and see Tiller running.
[https://helm.sh/docs/install/]
To find out which cluster Tiller would install to, you can run kubectl config current-context or kubectl cluster-info.
[https://helm.sh/docs/using_helm/#quickstart]
Linux Command Prompt: kubectl config view
This command returned the following output:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://localhost:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Linux Command Prompt: kubectl config current-context
This command returned the following output:
kubernetes-admin@kubernetes
Linux Command Prompt: kubectl cluster-info
This command returned the following output:
Kubernetes master is running at https://localhost:8443
KubeDNS is running at https://localhost:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use ‘kubectl cluster-info dump’.
Initialize Helm on both client and server:
helm init
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Linux Command Prompt: kubectl get pods --namespace kube-system
I confirmed via the Kubernetes dashboard that Tiller had been installed (see the kube-system namespace).
Via the Kubernetes Web UI (Dashboard):
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/deployment?namespace=kube-system
Navigate to Workloads | Deployments:
Helm charts
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.
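As a sketch of that packaging step (not used further in this article; it assumes Helm is installed and the chart directory exists):

```shell
# Package a chart directory into a versioned archive; the archive name
# is derived from the name and version fields in Chart.yaml
helm package ./booksservice-chart
# The expected result is an archive such as booksservice-chart-0.1.0.tgz
ls booksservice-chart-*.tgz
```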
The Chart File Structure
A chart is organized as a collection of files inside of a directory. The directory name is the name of the chart (without versioning information). Thus, a chart describing WordPress would be stored in the wordpress/ directory.
Inside of this directory, Helm will expect a structure that matches this:
wordpress/
Chart.yaml # A YAML file containing information about the chart
LICENSE # OPTIONAL: A plain text file containing the license for the chart
README.md # OPTIONAL: A human-readable README file
requirements.yaml # OPTIONAL: A YAML file listing dependencies for the chart
values.yaml # The default configuration values for this chart
charts/ # A directory containing any charts upon which this chart depends.
templates/ # A directory of templates that, when combined with values,
# will generate valid Kubernetes manifest files.
templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
Helm reserves use of the charts/ and templates/ directories, and of the listed file names. Other files will be left as they are.
The Chart.yaml file is required for a chart. It contains, among others, the following required fields:
- apiVersion: The chart API version, always “v1” (required)
- name: The name of the chart (required)
- version: A SemVer 2 version (required)
[https://helm.sh/docs/developing_charts/]
For more information about the chart format, and basic guidance for building charts with Helm, please see: https://helm.sh/docs/developing_charts/
As described in a previous article, I created a subdirectory named env on my Windows laptop.
This directory is connected to the Shared Folder named vagrant in my Oracle VirtualBox appliance.
[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]
In the env directory I created a helmcharts subdirectory.
Linux Command Prompt: cd /vagrant
Linux Command Prompt: cd helmcharts
Create a new namespace-chart helm chart:
helm create namespace-chart
Creating namespace-chart
This command created the following directory structure:
The file Chart.yaml has the following content:
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: namespace-chart
version: 0.1.0
The templates subdirectory has the following directory structure:
Create a new mysql-chart helm chart:
helm create mysql-chart
Creating mysql-chart
This command created the following directory structure:
The file Chart.yaml has the following content:
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: mysql-chart
version: 0.1.0
The templates subdirectory has the following directory structure:
Create a new booksservice-chart helm chart:
helm create booksservice-chart
Creating booksservice-chart
This command created the following directory structure:
The file Chart.yaml has the following content:
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: booksservice-chart
version: 0.1.0
The templates subdirectory has the following directory structure:
Updating the helm template folder
As described in my previous article, I created yaml files in the yaml subdirectory.
[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]
Copy the namespace yaml files to the helm template folder:
- Linux Command Prompt: rm -rf /vagrant/helmcharts/namespace-chart/templates/*
- Linux Command Prompt: cp /vagrant/yaml/namespace*.yaml /vagrant/helmcharts/namespace-chart/templates
The templates subdirectory now has the following directory structure:
Copy the mysql yaml files to the helm template folder:
- Linux Command Prompt: rm -rf /vagrant/helmcharts/mysql-chart/templates/*
- Linux Command Prompt: cp /vagrant/yaml/*mysql.yaml /vagrant/helmcharts/mysql-chart/templates
The templates subdirectory now has the following directory structure:
Copy the booksservice yaml files to the helm template folder:
- Linux Command Prompt: rm -rf /vagrant/helmcharts/booksservice-chart/templates/*
- Linux Command Prompt: cp /vagrant/yaml/*booksservice*.yaml /vagrant/helmcharts/booksservice-chart/templates
The templates subdirectory now has the following directory structure:
In my previous article, I described the “application landscape” I want to create:
Environment | Database | Booksservice version |
DEV | H2 in-memory | 1.0 and 2.0 |
TST | MySQL | 1.0 |
In the DEV environment the applications (version 1.0 and 2.0) will be using an H2 in-memory database.
In the TST environment the application (version 1.0) will be using an external MySQL database.
With the booksservice service you can add, update, delete and retrieve books from a catalog.
[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]
Validating a chart
The helm lint command examines a chart for possible issues.
This command takes a path to a chart and runs a series of tests to verify that the chart is well-formed.
If the linter encounters things that will cause the chart to fail installation, it will emit [ERROR] messages. If it encounters issues that break with convention or recommendation, it will emit [WARNING] messages.
[https://helm.sh/docs/helm/#helm-lint]
Validate the namespace-chart helm chart:
helm lint ./namespace-chart
==> Linting ./namespace-chart
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Validate the mysql-chart helm chart:
helm lint ./mysql-chart
==> Linting ./mysql-chart
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Validate the booksservice-chart helm chart:
helm lint ./booksservice-chart
==> Linting ./booksservice-chart
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
Simulate installing a helm chart
The helm install command installs a chart archive.
The install argument must be a chart reference, a path to a packaged chart, a path to an unpacked chart directory or a URL.
To check the generated manifests of a release without installing the chart, the '--debug' and '--dry-run' flags can be combined. This will still require a round-trip to the Tiller server.
Options:
--debug enable verbose output
--dry-run simulate an install
[https://helm.sh/docs/helm/#helm-install]
Simulate installing the namespace-chart helm chart:
helm install --debug --dry-run ./namespace-chart
With the following output:
[debug] Created tunnel using local port: '39704'
[debug] SERVER: "127.0.0.1:39704"
[debug] Original chart version: “”
[debug] CHART PATH: /vagrant/helmcharts/namespace-chart
E0308 13:02:00.421284 13185 portforward.go:391] an error occurred forwarding 39704 -> 44134: error forwarding port 44134 to pod 1726b146172f05f1e12d1f094a19c39688ed65e1a02318d821aa3eefdb043ccc, uid : unable to do port forwarding: socat not found.
Apparently socat was needed, and I hadn’t installed it yet.
Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.
[http://www.dest-unreach.org/socat/doc/socat.html]
Linux Command Prompt: sudo apt-get install socat
So, let’s try again.
Simulate installing the namespace-chart helm chart:
helm install --debug --dry-run ./namespace-chart
With the following output:
[debug] Created tunnel using local port: '34737'
[debug] SERVER: "127.0.0.1:34737"
[debug] Original chart version: “”
[debug] CHART PATH: /vagrant/helmcharts/namespace-chart
NAME: newbie-dingo
REVISION: 1
RELEASED: Fri Mar 8 13:04:10 2019
CHART: namespace-chart-0.1.0
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: stable
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  port: 80
  type: ClusterIP
tolerations: []
HOOKS:
MANIFEST:
Simulate installing the mysql-chart helm chart:
helm install --debug --dry-run ./mysql-chart
With the following output:
[debug] Created tunnel using local port: '40385'
[debug] SERVER: "127.0.0.1:40385"
[debug] Original chart version: “”
[debug] CHART PATH: /vagrant/helmcharts/mysql-chart
NAME: lunging-cheetah
REVISION: 1
RELEASED: Fri Mar 8 13:27:49 2019
CHART: mysql-chart-0.1.0
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: stable
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  port: 80
  type: ClusterIP
tolerations: []
HOOKS:
MANIFEST:
Simulate installing the booksservice-chart helm chart:
helm install --debug --dry-run ./booksservice-chart
With the following output:
[debug] Created tunnel using local port: '44953'
[debug] SERVER: "127.0.0.1:44953"
[debug] Original chart version: “”
[debug] CHART PATH: /vagrant/helmcharts/booksservice-chart
NAME: wrapping-quail
REVISION: 1
RELEASED: Fri Mar 8 13:30:41 2019
CHART: booksservice-chart-0.1.0
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
affinity: {}
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: stable
ingress:
  annotations: {}
  enabled: false
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
nameOverride: ""
nodeSelector: {}
replicaCount: 1
resources: {}
service:
  port: 80
  type: ClusterIP
tolerations: []
HOOKS:
MANIFEST:
Looking at the output of these simulated installs, we can see that each release gets a generated name.
For example, the dry run of the namespace-chart helm chart produced a release named: newbie-dingo.
Such a generated name is not very meaningful for identifying a release.
Luckily, we can use the --name flag to give a release a name of our own.
[https://helm.sh/docs/helm/#helm-install]
Creating Docker images
Before installing the helm charts, we must make sure the required Docker images are present. See my previous article.
[https://technology.amis.nl/2019/03/05/using-a-restful-web-service-spring-boot-application-in-minikube-together-with-an-external-dockerized-mysql-database/]
Linux Command Prompt: cd /vagrant
Linux Command Prompt: cd applications
Linux Command Prompt: cd books_service_1.0
Linux Command Prompt: docker build -t booksservice:v1.0 .
This command returned the following output:
Sending build context to Docker daemon 37.84MB
…
Successfully built 64d9ee081fa9
Successfully tagged booksservice:v1.0
Linux Command Prompt: cd ..
Linux Command Prompt: cd books_service_2.0
Linux Command Prompt: docker build -t booksservice:v2.0 .
This command returned the following output:
Sending build context to Docker daemon 37.84MB
…
Successfully built 070939e221c2
Successfully tagged booksservice:v2.0
Installing a helm chart (namespace-chart)
Install the namespace-chart helm chart:
helm install ./namespace-chart --name namespace-release
With the following output:
NAME: namespace-release
LAST DEPLOYED: Sun Mar 10 15:06:46 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Namespace
NAME STATUS AGE
nl-amis-development Active 0s
nl-amis-testing Active 0s
Check via the Kubernetes Web UI (Dashboard) that the namespaces are created:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/deployment?namespace=default
Navigate to Cluster | Namespaces:
Installing a helm chart (mysql-chart)
Install the mysql-chart helm chart:
helm install ./mysql-chart --name mysql-release
With the following output:
NAME: mysql-release
LAST DEPLOYED: Sun Mar 10 15:07:41 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
mysql 0/1 1 0 0s
==> v1/PersistentVolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysql-persistent-volume 1Gi RWO Retain Bound nl-amis-testing/mysql-pv-claim manual 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound mysql-persistent-volume 1Gi RWO manual 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
mysql-64846c7974-gt7x4 0/1 ContainerCreating 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql-service ClusterIP None 3306/TCP 0s
Check via the Kubernetes Web UI (Dashboard) that the deployment is created:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-testing
Navigate to Workloads | Deployments:
Check via the Kubernetes Web UI (Dashboard) that the service is created:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-testing
Navigate to Discovery and Load Balancing | Services:
Mysql-client
In my previous article, in the application-testing.properties you can see that a MySQL database named test is used.
[https://technology.amis.nl/2019/02/26/building-a-restful-web-service-with-spring-boot-using-an-h2-in-memory-database-and-also-an-external-mysql-database/]
So, I first created that database. For this I used a mysql-client.
kubectl --namespace=nl-amis-testing run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql-service.nl-amis-testing -ppassword
If you don't see a command prompt, try pressing enter.
I had to press the Enter key.
mysql> show databases;
With the following output:
mysql> create database test;
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
With the following output:
mysql> exit
Bye
pod "mysql-client" deleted
Installing a helm chart (booksservice-chart)
Install the booksservice-chart helm chart:
helm install ./booksservice-chart --name booksservice-release
With the following output:
NAME: booksservice-release
LAST DEPLOYED: Mon Mar 11 08:03:56 2019
NAMESPACE: default
STATUS: DEPLOYED
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
booksservice-v1.0-5bcd5fddbd-dj7pw 0/1 Pending 0 0s
booksservice-v1.0-5bcd5fddbd-q2v6d 0/1 Pending 0 0s
booksservice-v1.0-68785bc6ff-dbm5r 0/1 ContainerCreating 0 0s
booksservice-v1.0-68785bc6ff-vq7p2 0/1 ContainerCreating 0 0s
booksservice-v2.0-869c5bb47d-hgsgk 0/1 ContainerCreating 0 0s
booksservice-v2.0-869c5bb47d-ztb2m 0/1 ContainerCreating 0 0s
Check via the Kubernetes Web UI (Dashboard) that the deployments are created (in the nl-amis-development namespace):
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-development
Navigate to Workloads | Deployments:
Check via the Kubernetes Web UI (Dashboard) that the services are created (in the nl-amis-development namespace):
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-development
Navigate to Discovery and Load Balancing | Services:
Check via the Kubernetes Web UI (Dashboard) that the deployment is created (in the nl-amis-testing namespace):
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-testing
Navigate to Workloads | Deployments:
Check via the Kubernetes Web UI (Dashboard) that the service is created (in the nl-amis-testing namespace):
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-testing
Navigate to Discovery and Load Balancing | Services:
Click on the booksservice-v1-0-service service:
Via the Details you can see the following:
- The Cluster IP address of this service is: 10.105.241.75
- The Internal endpoints of this service are:
- booksservice-v1-0-service.nl-amis-testing:9191 TCP
- booksservice-v1-0-service.nl-amis-testing:30110 TCP
- The Endpoints of the Pods/Containers are:
- 172.17.0.11:9091 TCP
- 172.17.0.12:9091 TCP
Releases
List releases:
helm list
By default, it lists only releases that are deployed or failed. Flags like '--deleted' and '--all' will alter this behavior. Such flags can be combined: '--deleted --failed'.
By default, items are sorted alphabetically. Use the '-d' flag to sort by release date.
[https://helm.sh/docs/helm/#helm-list]
List releases:
helm list -d
Calling the booksservice application
Alright, so now the application should be running two Docker containers. Let’s test how the application responds by firing some requests.
First, I added some books with POST-requests to the booksservice application:
curl --header "Content-Type: application/json" --request POST --data '{"id": 1, "title": "The Threat: How the FBI Protects America in the Age of Terror and Trump", "author": "Andrew G. McCabe", "type": "Hardcover", "price": 17.99, "numOfPages": 288, "language": "English", "isbn13": "978-1250207579"}' http://10.105.241.75:9191/books
curl --header "Content-Type: application/json" --request POST --data '{"id": 2, "title": "Becoming", "publishDate": "2018-11-13", "author": "Michelle Obama", "type": "Hardcover", "price": 17.88, "numOfPages": 448, "publisher": "Crown Publishing Group; First Edition edition", "language": "English", "isbn13": "978-1524763138"}' http://10.105.241.75:9191/books
As you can see, I used the Cluster IP address of the service.
Next, we should be able to see our updated list at the /books path with a GET request. First I tried the service IP address:
curl http://10.105.241.75:9191/books
[{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"language":"English","isbn13":"978-1524763138"}]
I also tried the first Pod IP address:
curl http://172.17.0.11:9091/books
[{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"language":"English","isbn13":"978-1524763138"}]
And then I tried the second Pod IP address:
curl http://172.17.0.12:9091/books
[{"id":"1","title":"The Threat: How the FBI Protects America in the Age of Terror and Trump","author":"Andrew G. McCabe","type":"Hardcover","price":17.99,"numOfPages":288,"language":"English","isbn13":"978-1250207579"},{"id":"2","title":"Becoming","author":"Michelle Obama","type":"Hardcover","price":17.88,"numOfPages":448,"language":"English","isbn13":"978-1524763138"}]
Remember, this deployment uses the booksservice (RESTful Web Service Spring Boot application) with an external MySQL database, running in a separate Docker container. So, the first and second Pod both use the same external MySQL database. Therefore, it doesn't matter to which Pod the service sent the POST requests: each of the above GET requests returns the same answer (both books).
As a final step I checked the contents of the book table. Again I used a mysql-client for this.
kubectl --namespace=nl-amis-testing run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql-service.nl-amis-testing -ppassword
If you don't see a command prompt, try pressing enter.
I had to press the Enter key.
mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from book;
With the following output:
Again, both books that I added, were returned.
mysql> exit
Bye
pod "mysql-client" deleted
Upgrading a release (booksservice-release)
Next, I made some changes. For version 2.0 of the BooksServiceApplication, the replicas are changed from 2 to 5, so the ReplicaSet ensures that 5 pod replicas are running at any given time.
I changed the content of file deployment-booksservice-dev-v2.0.yaml in the template directory to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: booksservice-v2.0
  namespace: nl-amis-development
  labels:
    app: booksservice
    version: "2.0"
    environment: development
spec:
  replicas: 5
  selector:
    matchLabels:
      app: booksservice
      version: "2.0"
      environment: development
  template:
    metadata:
      labels:
        app: booksservice
        version: "2.0"
        environment: development
    spec:
      containers:
      - name: booksservice-v2-0-container
        image: booksservice:v2.0
        env:
        - name: spring.profiles.active
          value: "development"
        ports:
        - containerPort: 9090
Remark:
This deployment uses the booksservice (RESTful Web Service Spring Boot application) with H2 as an embedded in-memory database.
The replicas are set to 5, so the ReplicaSet ensures that 5 pod replicas are running at any given time.
The ReplicaSet manages all the pods with labels that match the selector. In my case these labels are:
Label key | Label value |
app | booksservice |
version | 2.0 |
environment | development |
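As a sketch, these same labels can be used with kubectl to list exactly the pods the ReplicaSet manages (assuming access to the cluster):

```shell
# List the pods in the nl-amis-development namespace that carry
# the labels matched by the ReplicaSet's selector
kubectl get pods --namespace nl-amis-development \
  -l app=booksservice,version=2.0,environment=development
```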
Via the Environment variables, the active Spring profile has to be set up.
Then, I updated the version of the release in the Chart.yaml file of the booksservice-chart helm chart to 0.2.0
The file Chart.yaml then has the following content:
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: booksservice-chart
version: 0.2.0
Upgrade the booksservice-release release:
helm upgrade booksservice-release ./booksservice-chart
This command upgrades a release to a specified version of a chart and/or updates chart values.
[https://helm.sh/docs/helm/#helm-upgrade]
With the following output:
Release "booksservice-release" has been upgraded. Happy Helming!
LAST DEPLOYED: Mon Mar 11 13:36:57 2019
NAMESPACE: default
STATUS: DEPLOYED
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
booksservice-v1.0-5bcd5fddbd-mx642 1/1 Running 0 19m
booksservice-v1.0-5bcd5fddbd-xbqkw 1/1 Running 0 19m
booksservice-v1.0-68785bc6ff-49cjf 1/1 Running 0 19m
booksservice-v1.0-68785bc6ff-57hn8 1/1 Running 0 19m
booksservice-v2.0-869c5bb47d-fntgd 1/1 Running 0 19m
booksservice-v2.0-869c5bb47d-gqmhh 0/1 ContainerCreating 0 0s
booksservice-v2.0-869c5bb47d-jbgrr 0/1 ContainerCreating 0 0s
booksservice-v2.0-869c5bb47d-mt6wj 1/1 Running 0 19m
booksservice-v2.0-869c5bb47d-zzjrq 0/1 ContainerCreating 0 0s
List releases:
helm list -d
We can see, that the new version number is being used in the chart name.
Check via the Kubernetes Web UI (Dashboard) that the correct number of pods are created (in the nl-amis-development namespace):
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-development
Navigate to Workloads | Replica Sets:
Click on the booksservice-v2.0-869c5bb47d replica set:
Here we can see that the ReplicaSet ensures that 5 pod replicas are running at any given time.
Finally, let’s try to delete and re-install a release.
Delete the booksservice-release release:
helm delete booksservice-release
release "booksservice-release" deleted
This command takes a release name, and then deletes the release from Kubernetes. It removes all of the resources associated with the last release of the chart.
[https://helm.sh/docs/helm/#helm-delete]
Install the booksservice-chart helm chart:
helm install ./booksservice-chart --name booksservice-release
With the following output:
Error: a release named booksservice-release already exists.
Run: helm ls --all booksservice-release; to check the status of the release
Or run: helm del --purge booksservice-release; to delete it
Apparently, the release still existed.
List releases:
helm ls --all booksservice-release
By default, it lists only releases that are deployed or failed. Flags like '--deleted' and '--all' will alter this behavior. Such flags can be combined: '--deleted --failed'.
Flag: --all show all releases, not just the ones marked DEPLOYED
[https://helm.sh/docs/helm/#helm-list]
With the following output:
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
booksservice-release 2 Mon Mar 11 13:36:57 2019 DELETED booksservice-chart-0.2.0 1.0 default
So, the release still existed, with status DELETED.
Delete the release:
helm del --purge booksservice-release
release "booksservice-release" deleted
Install the booksservice-chart helm chart:
helm install ./booksservice-chart --name booksservice-release
With the following output:
NAME: booksservice-release
LAST DEPLOYED: Mon Mar 11 13:49:25 2019
NAMESPACE: default
STATUS: DEPLOYED
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
booksservice-v1.0-5bcd5fddbd-kds47 0/1 ContainerCreating 0 0s
booksservice-v1.0-5bcd5fddbd-q474x 0/1 Pending 0 0s
booksservice-v1.0-68785bc6ff-pgvjj 0/1 ContainerCreating 0 0s
booksservice-v1.0-68785bc6ff-rlhjm 0/1 ContainerCreating 0 0s
booksservice-v2.0-869c5bb47d-5dmjg 0/1 ContainerCreating 0 0s
booksservice-v2.0-869c5bb47d-cfljj 0/1 ContainerCreating 0 0s
booksservice-v2.0-869c5bb47d-jnftt 0/1 Pending 0 0s
booksservice-v2.0-869c5bb47d-lckc2 0/1 Pending 0 0s
booksservice-v2.0-869c5bb47d-r9p4j 0/1 Pending 0 0s
List releases:
helm list -d
Check via the Kubernetes Web UI (Dashboard) that the correct number of pods are created (in the nl-amis-development namespace):
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/#!/node?namespace=nl-amis-development
Navigate to Workloads | Replica Sets:
Here we can see that the correct number of pods are created.
With this final check I conclude this article.
I tried out some of the functionality of Helm and found it easy and helpful to use this package manager for Kubernetes.