Oracle announced a managed Kubernetes Cloud service during Oracle OpenWorld 2017. This week, I had an opportunity to work with this new container native cloud offering. It is quite straightforward:
Through the Wercker console, a new Cluster can be created on an Oracle BareMetal Cloud (aka Oracle Cloud Infrastructure) environment. The cloud credentials are provided:
Name and K8S version are specified:
The Cluster Size is configured:
And the node configuration is indicated:
Subsequently, Oracle will roll out a Kubernetes cluster to the designated Cloud Infrastructure – according to these specifications.
The Cluster’s Address is highlighted in this screenshot. This endpoint will be required later on to configure the automated deployment pipeline.
This cluster can be managed through the Kubernetes Dashboard. Deployments to the cluster can be done using the normal means – such as the kubectl command line tool. Oracle recommends automating all deployments, using the Wercker pipelines. I will illustrate how that is done in this article.
The source code can be found on GitHub: https://github.com/lucasjellema/the-simple-app. Be warned – the code is extremely simple.
The steps are (assuming one already has a GitHub account as well as a Wercker account and a local kubectl installation):
- generate a personal token in the Wercker account (to be used for Wercker’s direct interactions with the Kubernetes cluster)
- prepare (local) Kubernetes configuration file – in order to work against the cluster using local kubectl commandline
- implement the application that is to be deployed onto the Kubernetes cluster – for example a simple Node application
- create the wercker.yml file (along with templates for Kubernetes deployment files) that describes the build steps that apply to the application and its deployment to Kubernetes
- push the application to a GitHub repository
- create a release in the Wercker console – associated with the GitHub Repository
- define the Wercker Pipelines for the application – using the Pipelines from the wercker.yml file
- define the automation pipeline – a chain of the pipelines defined in the previous step, triggered by an event such as a commit in the GitHub repo
- define environment variables – specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline
- trigger the automation pipeline – for example through a commit to GitHub
- verify in Kubernetes – dashboard or command line – that the application is deployed and determine the public endpoint
- access the application
- iterate through the last three steps – trigger, verify, access – while evolving the application
Generate Wercker Token
Prepare local Kubernetes Configuration file
Create a config file in the users/<current user>/.kube directory which contains the server address for the Kubernetes cluster and the token generated in the Wercker user settings. The file looks something like this screenshot:
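To give an impression, a minimal sketch of such a config file could look as follows – the cluster name, server address and token shown here are placeholders that must be replaced with the values for your own cluster and Wercker account:

apiVersion: v1
kind: Config
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://<cluster address>
  name: oke-cluster
contexts:
- context:
    cluster: oke-cluster
    user: wercker-user
  name: oke-context
current-context: oke-context
users:
- name: wercker-user
  user:
    token: <personal token generated in the Wercker account settings>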
Verify the correctness of the config file by running for example:
kubectl version
Or any other kubectl command.
Implement the application that is to be deployed onto the Kubernetes cluster
In this example the application is a very simple Node/Express application that handles two types of HTTP requests: a GET request to the url path /about and a POST request to /simple-app. There is nothing special about the application – in fact it is thoroughly underwhelming. The functionality consists of returning a result that proves that the application has been invoked successfully – and not much more.
The application source is found in https://github.com/lucasjellema/the-simple-app – mainly in the file app.js.
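To give an impression of what such an application looks like – this is a minimal sketch and not necessarily identical to the actual app.js in the repository – the two handlers could be implemented with Express like this:

// minimal sketch of the-simple-app; assumes express and body-parser are installed via npm
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

// GET /about – return a message that proves the application was invoked
app.get('/about', function (req, res) {
  res.send('the-simple-app is alive');
});

// POST /simple-app – echo the request body back to the caller
app.post('/simple-app', function (req, res) {
  res.json({ message: 'simple-app received your request', payload: req.body });
});

// listen on port 3000 – the port exposed later on by the Kubernetes service
app.listen(3000, function () {
  console.log('the-simple-app listening on port 3000');
});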
After implementing app.js, I can run and invoke the application locally:
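From the command line that could look something like this – assuming the application listens on port 3000:

npm install
node app.js

curl http://localhost:3000/about
curl -X POST -H "Content-Type: application/json" -d '{"name":"test"}' http://localhost:3000/simple-app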
Create the wercker.yml file for the application
The wercker.yml file provides instructions to the Wercker engine on how to execute the build and deploy steps. These steps make use of parameters whose values are provided by the Wercker build engine at run time – partially from the environment variables defined at organization, application or pipeline level.
Here three pipelines are shown:
The build pipeline uses the node:6.10 base Docker container image as its starting point. It adds the source code, executes npm install and generates a TLS key and certificate. The push-to-releases pipeline stores the build outcome (the container image) in the configured container registry. The deploy-to-oke (oke == Oracle Kubernetes Engine) pipeline takes the container image and deploys it to the Kubernetes cluster – using the Kubernetes template files, as indicated in this screenshot.
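To give a rough idea of the structure of the file, here is an indicative sketch – not necessarily the exact content of the wercker.yml in the repository. The step names (npm-install, internal/docker-push, bash-template, kubectl) come from Wercker's standard step registry; the $DOCKER_* variables are placeholders for the registry credentials and repository:

box: node:6.10
build:
  steps:
    - npm-install
    - script:
        name: generate TLS key and certificate
        code: |
          openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
            -subj "/CN=the-simple-app" -keyout privatekey.pem -out certificate.pem
push-to-releases:
  steps:
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: $DOCKER_REPOSITORY
        tag: $WERCKER_GIT_COMMIT
deploy-to-oke:
  steps:
    # replace the variables in the .template files with their runtime values
    - bash-template
    # apply the generated Kubernetes files to the cluster
    - kubectl:
        server: $KUBERNETES_MASTER
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f kubernetes-deployment.yml
    - kubectl:
        server: $KUBERNETES_MASTER
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f ingress.yml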
Along with the wercker.yml file, we provide templates for the Kubernetes files that describe the deployment to the cluster.
The kubernetes-deployment.yml.template defines the Deployment (based on the container image with a single replica) and the service – exposing port 3000 from the container.
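An indicative sketch of what this template could contain – the resource names are chosen for illustration and the image reference is a placeholder that the pipeline substitutes at deploy time:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: the-simple-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: the-simple-app
    spec:
      containers:
      - name: the-simple-app
        image: ${DOCKER_REPOSITORY}:${WERCKER_GIT_COMMIT}
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: the-simple-app
spec:
  selector:
    app: the-simple-app
  ports:
  - port: 3000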
The ingress.yml.template file defines how the service is to be exposed through the cluster's nginx ingress.
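Again an indicative sketch – assuming an nginx ingress controller and the /lucasjellema path that is used later on in this article:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: the-simple-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /lucasjellema
        backend:
          serviceName: the-simple-app
          servicePort: 3000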
Push the application – including the yml files for Wercker and Kubernetes – to a GitHub repository
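From the local clone of the repository, that boils down to something like:

git add .
git commit -m "simple app plus wercker.yml and Kubernetes templates"
git push origin master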
Create a release in the Wercker console – associated with the GitHub Repository
Define the Wercker Pipelines for the application – using the Pipelines from the wercker.yml file
Click on New Pipeline for each of the pipelines in the wercker.yml file. Note: the build pipeline is predefined.
Define the automation pipeline – a chain of the pipelines defined in the previous step, triggered by an event such as a commit in the GitHub repo
Define environment variables – specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline
Trigger the automation pipeline – for example through a commit to GitHub
When the changes are pushed to GitHub, the web hook fires and the build pipeline in Wercker is triggered.
This first run of the pipeline failed, however. I even received an email from Wercker, alerting me about the issue:
It turns out I forgot to set the values for the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN. In this article, setting those variables is described as the previous step, preceding this one; in reality I forgot to do it and ran into this error as a result.
After setting the correct values, I triggered the pipeline once more, with better luck this time.
Verify in Kubernetes – dashboard or command line – that the application is deployed
The deployment from Wercker to the Kubernetes Cluster was successful. Unfortunately, the Node application itself did not start as desired. I was informed about this on the overview page for the relevant namespace – lucasjellema – in the Kubernetes dashboard, which I accessed by running
kubectl proxy
on my laptop and opening my browser at http://127.0.0.1:8001/ui.
The logging for the pod made clear that there was a problem with the port mapping.
I fixed the code, committed and pushed to GitHub. The build pipeline was triggered and the application was built into a container that was successfully deployed on the Kubernetes cluster:
I now need to find out the endpoint at which I can access the application. For that, I check out the Ingress created for the deployment – to find the value for the path: /lucasjellema
Next, I check the ingress service in the oracle-bmc namespace – as that is in my case the cluster-wide ingress for all public calls into the cluster:
This provides me with the public IP address.
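Both lookups – the ingress path and the public IP address – can also be done from the command line with kubectl, using the namespaces as they apply in my environment:

# find the path under which the application is exposed
kubectl get ingress --namespace lucasjellema

# find the public IP address of the cluster-wide ingress (the EXTERNAL-IP column)
kubectl get service --namespace oracle-bmc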
Access the Application
Calls to the simple-app application can now be made at: http://<public ip>/lucasjellema/simple-app (and http://<public ip>/lucasjellema/about):
and
Note: because of a certificate issue, the call from Postman to the POST endpoint only succeeds after disabling certificate verification in the general settings:
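The same calls can be made with curl – where <public ip> is the external IP address found earlier; when calling an https endpoint, the -k flag is curl's equivalent of disabling certificate verification:

curl http://<public ip>/lucasjellema/about

curl -X POST -H "Content-Type: application/json" \
     -d '{"name":"test"}' http://<public ip>/lucasjellema/simple-app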
Evolve the Application
From this point on it is very simple to further evolve the application. Modify the code, test locally, commit and push to Git – and the changed application is automatically built and deployed to the managed Kubernetes cluster.
A quick example:
I add support for /stuff to the REST API supported by simple-app:
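In app.js this is a matter of adding one more handler, along these lines (the exact response is of course up to the implementation):

// GET /stuff – newly added resource in the simple-app REST API
app.get('/stuff', function (req, res) {
  res.json({ stuff: ['some stuff', 'more stuff'] });
});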
The code is committed and pushed:
The Wercker pipeline is triggered
At this point, the application does not yet support requests to /stuff:
After a little less than 3 minutes, the full pipeline chain – build, store and deploy to the Kubernetes cluster – is done:
And the new functionality is live from the publicly exposed Kubernetes environment:
Resources
Wercker Tutorial – Getting Started with Wercker Clusters: http://devcenter.wercker.com/docs/getting-started-with-wercker-clusters#exampleend2end