
Jenkins: Building Java and deploying to Kubernetes

Kubernetes is a popular platform to run and manage containerized applications. A CI/CD solution is often needed but not always provided. You might need to set this up for yourself. In this blog post I’ll provide a minimal end-to-end solution for Java applications. This starts with a commit in source control and ends with deployment to Kubernetes.

Tools used:

  • Jenkins as CI/CD platform
  • Kubernetes deployed with Kubespray on KVM (here)
  • MetalLB as Kubernetes load balancer
  • GitHub as version control system
  • Smee to forward Webhook calls from GitHub to Jenkins
  • Maven to build the Java code
  • Google Jib to wrap my compiled Java in a container
  • DockerHub as container registry

The Java application

I’ve created a simple Spring Boot service. You can find the code here. I hosted it on GitHub since it was easy to use as source for Jenkins.

I needed something to wrap my Java application inside a container. There are various plug-ins available, such as the Spotify dockerfile-maven plug-in (here) and the fabric8 docker-maven-plugin (here). Both require access to a Docker daemon though. This can be complicated, especially when running Jenkins slaves within Kubernetes. There are workarounds, but I did not find any that seemed both easy and secure. I decided to go with Google's Jib to build my containers since it does not have that requirement.

Docker build flow:

[Diagram: Docker build flow]

Jib build flow:

[Diagram: Jib build flow]

The benefits of reducing dependencies for the build process are obvious. In addition, Jib does some smart things by splitting the Java application into different container layers (see here). This reduces the amount of storage required for building and deploying new versions, since some of the layers, such as the dependencies layer, often don't change and can be cached. This can also reduce build time. Jib does not use a Dockerfile, so the logic usually found in the Dockerfile lives in the plug-in configuration inside the pom.xml file.

Since I did not have a private registry available at the time of writing, I decided to use DockerHub for this. You can find the configuration for using DockerHub inside the pom.xml. It uses environment variables set by the Jenkins build for the credentials (and only in the Jenkins slave which is destroyed after usage). This seemed more secure than passing them in the Maven command-line.
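As a sketch, the Jib configuration in the pom.xml looks roughly like the snippet below. The plug-in version is an example; the image name matches the one used in the deployment commands later in this post, and the credentials come from the environment variables set by the Jenkins build.

```xml
<!-- Sketch of a jib-maven-plugin configuration; the version is an example,
     check the actual pom.xml in the repository for the real values. -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.5.2</version>
  <configuration>
    <to>
      <image>docker.io/maartensmeets/spring-boot-demo</image>
      <auth>
        <!-- Credentials are read from environment variables set by Jenkins,
             so they never appear on the Maven command-line -->
        <username>${env.DOCKER_USERNAME}</username>
        <password>${env.DOCKER_PASSWORD}</password>
      </auth>
    </to>
  </configuration>
</plugin>
```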

Note that Spring buildpacks could provide similar functionality. I have not looked into them yet though. 

Installing Jenkins

For my Kubernetes environment I have used the setup described here. You also need a persistent storage solution as prerequisite for Jenkins. In a PaaS environment, this is usually provided, but if it is not or you are running your own installation, you can consider using OpenEBS. How to install OpenEBS is described here. kubectl (+ Kubernetes configuration .kube/config) and helm need to be installed on the machine from which you are going to perform the Jenkins deployment.

After you have a storage class available, you can continue with the installation of Jenkins.

First create a PersistentVolumeClaim to store the Jenkins master persistent data. Again, this is based on the storage class solution described above.

 kubectl create ns jenkins

 kubectl create -n jenkins -f - <<END
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: jenkins-pv-claim
 spec:
   storageClassName: openebs-sc-statefulset
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 8Gi
 END

Next install Jenkins. Mind that the Jenkins repository for the most up-to-date Helm charts has recently moved.

 cat << EOF > jenkins-config.yaml  
 persistence:  
   enabled: true  
   size: 5Gi  
   accessMode: ReadWriteOnce  
   existingClaim: jenkins-pv-claim  
   storageClass: "openebs-sc-statefulset"  
 EOF  
   
 helm repo add jenkinsci https://charts.jenkins.io  
 helm install my-jenkins-release -f jenkins-config.yaml jenkinsci/jenkins --namespace jenkins  

Get your ‘admin’ user password by running:

 printf $(kubectl get secret --namespace jenkins my-jenkins-release -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo

Make Jenkins available on localhost:8080 by doing the following. The POD_NAME export below is based on the labels the Helm chart applies; check the output of the helm install command for the exact command for your chart version.

 export POD_NAME=$(kubectl get pods --namespace jenkins -l "app.kubernetes.io/instance=my-jenkins-release" -o jsonpath="{.items[0].metadata.name}")
 kubectl --namespace jenkins port-forward $POD_NAME 8080:8080

Now visit http://localhost:8080 in your browser and log in using user admin and the previously obtained password.

[Screenshot: Jenkins login page]

Configuring Jenkins

The pipeline

Since I’m doing ‘configuration as code’ I created a declarative Jenkins pipeline and also put it in GitHub next to the service I wanted to deploy. You can find it here. As you can see, the pipeline has several dependencies.

  • presence of tool configuration in Jenkins
    • Maven
    • JDK
  • the Kubernetes CLI plugin (withKubeConfig)
    This plugin makes Kubernetes configuration available within the Jenkins slaves during the build process
  • the Pipeline Maven Integration plugin (withMaven)
    This plugin archives Maven build artifacts such as test reports and JAR files
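Put together, a minimal declarative pipeline using these plugins could look like the sketch below. The tool names and credential IDs are the ones configured later in this post; the individual snippets are discussed in the following sections.

```groovy
// Minimal sketch of the declarative pipeline; tool names and credential IDs
// must match the Jenkins configuration described in this post.
pipeline {
    agent any
    tools {
        jdk 'jdk-11'
        maven 'mvn-3.6.3'
    }
    stages {
        stage('Build') {
            steps {
                withMaven(maven: 'mvn-3.6.3') {
                    sh 'mvn package'
                }
            }
        }
        stage('Push container') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'docker-credentials',
                        usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWORD')]) {
                    sh 'mvn jib:build'
                }
            }
        }
        stage('Deploy') {
            steps {
                withKubeConfig([credentialsId: 'kubernetes-config']) {
                    sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"'
                    sh 'chmod u+x ./kubectl'
                    sh './kubectl apply -f k8s.yaml'
                }
            }
        }
    }
}
```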

Tool configuration

JDK

The default JDK plugin can only download old Java versions from Oracle. JDK 11, for example, is not available this way, so I needed to add a new JDK. I specified a download location for the JDK. There are various distributions available, such as AdoptOpenJDK, or those provided by Red Hat or Azul Systems. Inside the archive I checked which sub-directory the JDK was put in and specified this sub-directory in the tool configuration.

[Screenshot: JDK tool configuration]

Please note that downloading the JDK during each build can be slow and prone to errors (suppose the download URL changes). A better way is to make it available as a mount inside the Jenkins slave container. For this minimal setup I didn’t do that though.

You also need to define a JAVA_HOME variable pointing to a location as indicated below. Why? Because you also want Maven to use the same JDK.

[Screenshot: JAVA_HOME environment variable configuration]

Maven

Making Maven available is easy and can be done without the need to configure specific files to download and environment variables.

[Screenshot: Maven tool configuration]

The name of the Maven installation is referenced in the Jenkins pipeline like:

  tools {
    jdk 'jdk-11'
    maven 'mvn-3.6.3'
  }

  stages {
    stage('Build') {
      steps {
        withMaven(maven: 'mvn-3.6.3') {
          sh "mvn package"
        }
      }
    }
  }

Kubectl

For kubectl there is no tool definition available in the Jenkins configuration, so I did the following:

 sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"'  
 sh 'chmod u+x ./kubectl'  
 sh './kubectl apply -f k8s.yaml'

Please mind this does not make the build reproducible, since the latest stable kubectl can change remotely; it is not a fixed version.
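Pinning a specific version is straightforward. The sketch below uses v1.19.2 purely as an example; pick the version matching your cluster.

```shell
# Build the download URL for a pinned kubectl version instead of "latest stable".
# v1.19.2 is an example version, not necessarily the one you should use.
KUBECTL_VERSION="v1.19.2"
KUBECTL_URL="https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
echo "${KUBECTL_URL}"
# In the pipeline this replaces the 'stable.txt' lookup:
#  sh "curl -LO ${KUBECTL_URL}"
#  sh 'chmod u+x ./kubectl'
```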

As you can see, a k8s.yaml file is required. This file can be partially generated with the commands below. I’ve added the Ingress myself though.

 kubectl create deployment spring-boot-demo --image=docker.io/maartensmeets/spring-boot-demo --dry-run=client -o=yaml > k8s.yaml
 echo --- >> k8s.yaml
 kubectl create service loadbalancer spring-boot-demo --tcp=8080:8080 --dry-run=client -o=yaml >> k8s.yaml
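The Ingress is not generated by these commands. A hand-written one could look roughly like the sketch below; the hostname is an example, while the service name and port match the generated service.

```yaml
# Sketch of a hand-written Ingress appended to k8s.yaml. The hostname is an
# example. On clusters older than Kubernetes 1.19, use networking.k8s.io/v1beta1.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-boot-demo
spec:
  rules:
  - host: spring-boot-demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: spring-boot-demo
            port:
              number: 8080
```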

The k8s.yaml also depends on a load balancer being present. In on-premises installations you might not have one. I can recommend MetalLB: it is easy to install and use. As indicated on its site though, it is a young product (read here).

In order to install the load balancer, first create some configuration. Mind the IPs; they are specific to my environment. You might need to provide your own.

 kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
 kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
 # On first install only
 kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

 kubectl apply -f - <<END 
 apiVersion: v1
 kind: ConfigMap
 metadata:
   namespace: metallb-system
   name: config
 data:
   config: |
     address-pools:
     - name: default
       protocol: layer2
       addresses:
       - 192.168.122.150-192.168.122.255
 END

Credential configuration

Kubernetes

In order for Jenkins to deploy to Kubernetes, Jenkins needs credentials. An easy way to achieve this is by storing a config file (named ‘config’) in Jenkins. This file is usually used by kubectl and found in .kube/config. It allows Jenkins to apply yaml configuration to a Kubernetes instance.

[Screenshot: Kubernetes config file credential]

The file can then be referenced from a Jenkins pipeline with the Kubernetes CLI plugin like in the snippet below.

 withKubeConfig([credentialsId: 'kubernetes-config']) {
   sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"'
   sh 'chmod u+x ./kubectl'
   sh './kubectl apply -f k8s.yaml'
 }

DockerHub

I used DockerHub as my container registry. The pom.xml file references the environment variables DOCKER_USERNAME and DOCKER_PASSWORD but how do we set them from the Jenkins configuration? By storing them as credentials of course! 

[Screenshot: DockerHub credentials]

In the pipeline you can access them as follows:

 withCredentials([usernamePassword(credentialsId: 'docker-credentials', usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWORD')]) {
   sh "mvn jib:build"
 }

This sample stores credentials directly in Jenkins. You can also use the Jenkins Kubernetes Credentials Provider to store credentials in Kubernetes as secrets. This provides some benefits in managing the credentials; for example, with kubectl it is easy to script changes. A challenge is giving the Jenkins user sufficient, but not too many, privileges on Kubernetes.
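As a sketch of such a restricted setup, you could give Jenkins a dedicated service account bound to a Role that is limited to a single namespace. All names below are examples, and the verbs and resources should be narrowed to what your deployment step actually needs.

```yaml
# Sketch: a namespaced Role granting only what the deployment step needs.
# All names are examples; tighten resources and verbs for your situation.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: jenkins-deployer
rules:
- apiGroups: ["", "apps"]
  resources: ["deployments", "services", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: jenkins-deployer-binding
subjects:
- kind: ServiceAccount
  name: jenkins-deployer
  namespace: default
roleRef:
  kind: Role
  name: jenkins-deployer
  apiGroup: rbac.authorization.k8s.io
```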

GitHub

In order to access GitHub, some credentials are also required:

[Screenshot: GitHub credentials]

Jenkins job configuration

The configuration of the Jenkins job is actually the least exciting part. The pipeline is defined outside of Jenkins; the only thing Jenkins needs to know is where to find the sources and the pipeline.

[Screenshot: creating the Jenkins job]

Create a Multibranch Pipeline job. Multibranch is quite powerful since it allows you to build multiple branches with the same job configuration.

Open the job configuration and specify the Git source. 

[Screenshot: Git source configuration]

The build is based on the Jenkinsfile which contains the pipeline definition.

[Screenshot: build configuration based on the Jenkinsfile]

After you have saved the job it will start building immediately. 

Building and running

Webhook configuration

What is lacking here is Webhook configuration (see for example here). This causes Jenkins builds to be triggered when branches are created, pull requests are merged, commits happen, etc. Since I'm running Kubernetes locally, I do not have a publicly exposed endpoint as target for the GitHub webhook. You can use a simple service like smee.io to get a public URL and forward calls to your local Jenkins. An added benefit is that it generally does not care about things like firewalls (similar to ngrok, but Webhook-specific and without requiring an account).

[Screenshot: smee.io]

After you have installed the smee CLI and have the Jenkins port-forward running (the one that makes Jenkins available on port 8080), you can run the following (your URL will of course differ):

 smee -u https://smee.io/z8AyLYwJaUBBDA5V -t http://localhost:8080/github-webhook/

This starts the Webhook proxy and forwards requests to the Jenkins webhook URL. In GitHub you can add a Webhook to call the created proxy and make it trigger Jenkins builds.

[Screenshot: GitHub webhook configuration]

Next, you can confirm it works from both Smee and GitHub.

[Screenshot: Smee delivery log]
[Screenshot: GitHub webhook deliveries]

If I now create a new branch in GitHub

[Screenshot: creating a new branch in GitHub]

It will appear in Jenkins and start building

[Screenshot: the branch appearing in Jenkins]
[Screenshot: the build running]

If you prefer a nicer web interface, I can recommend Blue Ocean for Jenkins. It can easily be added by installing the Blue Ocean plugin.

[Screenshot: Blue Ocean interface]

Finally

After you've done all of the above, a GitHub commit fires off a webhook, which Smee forwards to Jenkins, which in turn triggers a Multibranch Pipeline build. The build compiles the Java service, wraps it in a container using Jib, and deploys the container to Kubernetes as a deployment and a service. The service can then be accessed via the MetalLB load balancer.

[Screenshot: the deployed service]
