
Oracle Managed Kubernetes Cloud – First Steps with Automated Deployment using Wercker Pipelines

Oracle announced a managed Kubernetes Cloud service during Oracle OpenWorld 2017. This week, I had an opportunity to work with this new container-native cloud offering. Getting started is quite straightforward.

Through the Wercker console, a new cluster can be created on an Oracle Bare Metal Cloud (aka Oracle Cloud Infrastructure) environment. The cloud credentials are provided, the name and Kubernetes version are specified, the cluster size is configured and the node configuration is indicated.

Subsequently, Oracle will roll out a Kubernetes cluster to the designated cloud infrastructure – according to these specifications.

The cluster's address is shown in the console overview. This endpoint will be required later on to configure the automated deployment pipeline.

This cluster can be managed through the Kubernetes Dashboard. Deployments to the cluster can be done using the normal means – such as the kubectl command line tool. Oracle recommends automating all deployments using Wercker pipelines. I will illustrate how that is done in this article.

The source code can be found on GitHub: https://github.com/lucasjellema/the-simple-app. Be warned – the code is extremely simple.

The steps are (assuming one already has a GitHub account, a Wercker account and a local kubectl installation):

  1. generate a personal token in the Wercker account (to be used for Wercker’s direct interactions with the Kubernetes cluster)
  2. prepare a (local) Kubernetes configuration file – in order to work against the cluster using the local kubectl command line
  3. implement the application that is to be deployed onto the Kubernetes cluster – for example a simple Node application
  4. create the wercker.yml file (along with templates for Kubernetes deployment files) that describes the build steps that apply to the application and its deployment to Kubernetes
  5. push the application to a GitHub repository
  6. create a release in the Wercker console – associated with the GitHub Repository
  7. define the Wercker Pipelines for the application – using the Pipelines from the wercker.yml file
  8. define the automation pipeline – a chain of the pipelines defined in the previous step, triggered by an event such as a commit in the GitHub repo
  9. define environment variables – specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline
  10. trigger the automation pipeline – for example through a commit to GitHub
  11. verify in Kubernetes – dashboard or command line – that the application is deployed and determine the public endpoint
  12. access the application
  13. iterate through steps 10..12 while evolving the application

 

Generate Wercker Token

In the Wercker account settings, generate a personal token; it will be used for Wercker's direct interactions with the Kubernetes cluster.

 

Prepare local Kubernetes Configuration file

Create a config file in the users/<current user>/.kube directory which contains the server address for the Kubernetes cluster and the token generated in the Wercker user settings. The file looks something like this:

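A minimal sketch of such a kubeconfig file – the cluster, user and context names are illustrative, and the server address and token are placeholders:

apiVersion: v1
kind: Config
clusters:
- name: oke-cluster
  cluster:
    server: https://<cluster address>:443
    # assumption: skip TLS verification if the cluster certificate is not trusted locally
    insecure-skip-tls-verify: true
users:
- name: wercker-user
  user:
    token: <personal token generated in Wercker>
contexts:
- name: oke-context
  context:
    cluster: oke-cluster
    user: wercker-user
current-context: oke-context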

 

Verify the correctness of the config file by running, for example:

kubectl version


Or any other kubectl command.

 

Implement the application that is to be deployed onto the Kubernetes cluster

In this example the application is a very simple Node/Express application that handles two types of HTTP requests: a GET request to the URL path /about and a POST request to /simple-app. There is nothing special about the application – in fact, it is thoroughly underwhelming. The functionality consists of returning a result that proves the application has been invoked successfully – and not much more.

The application source is found at https://github.com/lucasjellema/the-simple-app – mainly in the file app.js.
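The actual code in the repository may differ; a minimal sketch of a Node/Express application that handles these two requests could look like this:

// app.js – minimal sketch, not necessarily the exact code in the repository
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

// GET /about – returns a simple identification string
app.get('/about', (req, res) => {
  res.send('the-simple-app – a thoroughly underwhelming Node application');
});

// POST /simple-app – echoes the request body to prove the invocation succeeded
app.post('/simple-app', (req, res) => {
  res.json({ result: 'simple-app was invoked successfully', input: req.body });
});

// port 3000 is the port exposed later through the Kubernetes service
app.listen(3000, () => console.log('simple-app listening on port 3000'));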

After implementing app.js, I can run and invoke the application locally.

     

Create the wercker.yml file for the application

The wercker.yml file provides instructions to the Wercker engine on how to execute the build and deploy steps. These steps make use of parameters whose values are provided by the Wercker build engine at run time – partially from the environment variables defined at organization, application or pipeline level.

Three pipelines are defined:

The build pipeline uses the node:6.10 base Docker container image as its starting point. It adds the source code, executes npm install and generates a TLS key and certificate. The push-to-releases pipeline stores the build outcome (the container image) in the configured container registry. The deploy-to-oke (oke == Oracle Kubernetes Engine) pipeline takes the container image and deploys it to the Kubernetes cluster – using the Kubernetes template files.
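A sketch of the shape such a wercker.yml could take – npm-install, internal/docker-push, bash-template and kubectl are standard Wercker steps, but the exact file in the repository may differ, and DOCKER_USERNAME, DOCKER_PASSWORD and DOCKER_REPO are illustrative variable names:

box: node:6.10

build:
  steps:
    - npm-install
    - script:
        name: generate TLS key and certificate
        code: |
          openssl req -x509 -newkey rsa:2048 -nodes \
            -keyout privatekey.pem -out certificate.pem \
            -days 365 -subj "/CN=the-simple-app"

push-to-releases:
  steps:
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: $DOCKER_REPO
        tag: $WERCKER_GIT_COMMIT
        cmd: node app.js

deploy-to-oke:
  steps:
    # expand the .template files, substituting environment variables
    - bash-template
    - kubectl:
        server: $KUBERNETES_MASTER
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f kubernetes-deployment.yml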

Along with the wercker.yml file, we provide templates for the Kubernetes deployment files that describe the deployment to Kubernetes.

The kubernetes-deployment.yml.template defines the Deployment (based on the container image, with a single replica) and the Service – exposing port 3000 from the container.

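A sketch of such a template, under the same assumptions as above – the ${...} placeholders are expanded by the bash-template step, and the names and labels are illustrative:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: the-simple-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: the-simple-app
    spec:
      containers:
      - name: the-simple-app
        image: ${DOCKER_REPO}:${WERCKER_GIT_COMMIT}
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: the-simple-app
spec:
  ports:
  - port: 3000
  selector:
    app: the-simple-app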

The ingress.yml.template file defines how the service is to be exposed through the cluster's nginx ingress.
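Again a sketch – assuming the path-based routing (/lucasjellema) that shows up later in this article; the rewrite annotation is an assumption as well:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: the-simple-app-ingress
  annotations:
    # assumption: strip the path prefix before forwarding to the service
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /lucasjellema
        backend:
          serviceName: the-simple-app
          servicePort: 3000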

Push the application – including the yml files for Wercker and Kubernetes – to a GitHub repository

Create a release in the Wercker console – associated with the GitHub repository

     


     

Define the Wercker Pipelines for the application – using the pipelines from the wercker.yml file


Click on New Pipeline for each of the pipelines in the wercker.yml file. Note: the build pipeline is predefined.


Define the automation pipeline – a chain of the pipelines defined in the previous step, triggered by an event such as a commit in the GitHub repo


     

Define environment variables – specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline

In this case, these are the environment variables KUBERNETES_MASTER (the cluster endpoint) and KUBERNETES_TOKEN (the personal token generated in Wercker).

     

Trigger the automation pipeline – for example through a commit to GitHub

     


     

When the changes are pushed to GitHub, the webhook fires and the build pipeline in Wercker is triggered.

In my case, the deploy pipeline initially failed. I even received an email from Wercker, alerting me about the issue.

It turns out I had forgotten to set the values for the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN. In this article, defining them is the previous step – in reality, I forgot to do it and ran into this error as a result.

After setting the correct values, I triggered the pipeline once more, with better luck this time.


     

Verify in Kubernetes – dashboard or command line – that the application is deployed

The deployment from Wercker to the Kubernetes cluster was successful. Unfortunately, the Node application itself did not start as desired. I was informed about this on the overview page for the relevant namespace – lucasjellema – in the Kubernetes dashboard, which I accessed by running

kubectl proxy

on my laptop and opening my browser at http://127.0.0.1:8001/ui.

     


     

The logging for the pod made clear that there was a problem with the port mapping.

I fixed the code, committed and pushed to GitHub. The build pipeline was triggered and the application was built into a container that was successfully deployed on the Kubernetes cluster.

I now need to find out the endpoint at which I can access the application. For that, I check out the Ingress created for the deployment – to find the value for the path: /lucasjellema.

Next, I check the ingress service in the oracle-bmc namespace – as that is, in my case, the cluster-wide ingress for all public calls into the cluster.

This provides me with the public IP address.
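Both lookups can also be done from the command line, using the namespaces as they appear in this article:

kubectl get ingress --namespace lucasjellema
kubectl get service --namespace oracle-bmc

The public IP address shows up in the EXTERNAL-IP column of the ingress service (assuming it is exposed as a LoadBalancer).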

Access the Application

Calls to the simple-app application can now be made at http://<public ip>/lucasjellema/simple-app (and http://<public ip>/lucasjellema/about).
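For example, with curl (keeping <public ip> as a placeholder):

curl http://<public ip>/lucasjellema/about
curl -X POST -H "Content-Type: application/json" -d '{"name":"test"}' http://<public ip>/lucasjellema/simple-app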

Note: because of a certificate issue, the call from Postman to the POST endpoint only succeeds after disabling certificate verification in Postman's general settings.

     

     

Evolve the Application

From this point on, it is very simple to further evolve the application. Modify the code, test locally, commit and push to Git – and the changed application is automatically built and deployed to the managed Kubernetes cluster.

A quick example:

I add support for /stuff to the REST API supported by simple-app.
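In app.js, the addition might look something like this (a sketch; the actual change in the repository may differ):

// GET /stuff – new route; its response proves the new deployment is live
app.get('/stuff', (req, res) => {
  res.send('stuff is now supported as well');
});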

The code is committed and pushed.

The Wercker pipeline is triggered.

At this point, the application does not yet support requests to /stuff.

     

After a little less than three minutes, the full build, store and deploy-to-Kubernetes pipeline is done.

And the new functionality is live on the publicly exposed Kubernetes environment.

Resources

Wercker tutorial on getting started with Wercker Clusters – http://devcenter.wercker.com/docs/getting-started-with-wercker-clusters#exampleend2end