
Running Istio on Oracle Kubernetes Engine–the managed Kubernetes Cloud Service

In a recent post, I introduced the managed Oracle Cloud Service for Kubernetes, the Oracle Kubernetes Engine (OKE). A logical next step when working with Kubernetes in somewhat challenging situations, for example with microservice style architectures and deployments, is the use of Istio – to configure, monitor and manage the so-called service mesh. Istio is brand new – not even in Beta yet, although a first production release is foreseen for Q3 2018. It offers very attractive features, including:

  • intelligent routing of requests, including load balancing, A/B testing, content/condition based routing, blue/green release, canary release
  • resilience – for example through circuit breaking and throttling
  • policy enforcement and access control
  • telemetry, monitoring, reporting

In this article, I will describe how I got started with Istio on the OKE cluster that I provisioned in the previous article. Note: there is really nothing very special about OKE for Istio: it is just another Kubernetes cluster, and Istio will do its thing. More interesting perhaps is the fact that I work on a Windows laptop and use a Vagrant/VirtualBox powered Ubuntu VM to do some of the OKE interaction, especially when commands and scripts are Linux only.

The steps I will describe:

  • install Istio client in the Linux VM
  • deploy Istio to the OKE Kubernetes Cluster
  • deploy the Bookinfo sample application with Sidecar Injection (the Envoy Sidecar is the proxy that is added to every Pod to handle all traffic into and out of the Pod; this is the magic that makes Istio work)
  • try out some typical Istio things – like traffic management and monitoring

The conclusion is that leveraging Istio on OKE is quite straightforward.


Install Istio Client in Linux VM

The first step with Istio, prior to deploying Istio to the K8S cluster, is the installation on your client machine of the istioctl client application and associated sources, including the Kubernetes yaml files required for the actual deployment. Note: I tried deploying Istio using a Helm chart, but that did not work; it seems that Istio 0.7.x is not suitable for Helm (release 0.8 is supposed to be ready for Helm).

Following the instructions in the quick start guide, and working in the Ubuntu VM that I have spun up with Vagrant and VirtualBox, I go through these steps:

Ensure that the current OCI and OKE user kubie is allowed to do cluster administration tasks:

kubectl create clusterrolebinding k8s_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaavorp3sgemd6bh5wjr3krnssvcvlzlgcxcnbrkwyodplvkbnea2dq


Download and install istioctl:

curl -L | sh -

Then add the bin directory in the Istio release directory structure to the PATH variable, to make istioctl accessible from anywhere.
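As a sketch – the directory name istio-0.7.1 is an assumption here; adjust it to the release that the download script actually unpacked:

```shell
# make istioctl available from anywhere in this shell session;
# istio-0.7.1 is an assumed directory name - use the unpacked release dir
export PATH="$PWD/istio-0.7.1/bin:$PATH"
```

To make this permanent, the same export line can be added to ~/.bashrc in the VM.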


Deploy Istio to the OKE Kubernetes Cluster

The resources that were created during the installation of the Istio client include the yaml files that can be used to deploy Istio to the Kubernetes cluster. The command to perform that installation is very straightforward:

kubectl apply -f install/kubernetes/istio.yaml

The screenshot shows some of the steps executed when this command is kicked off:


The namespace istio-system is created, the logical container for all Istio related resources.


The last two commands:

kubectl get svc -n istio-system

kubectl get pods -n istio-system

are used to verify what has been installed and is now running successfully in the Kubernetes cluster.
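Instead of eyeballing the pod list, the verification can be scripted. A small sketch of my own (not part of the Istio tooling) that polls until every pod in istio-system reports Running:

```shell
# helper that reads "kubectl get pods" style output on stdin and
# succeeds only when every non-header line shows STATUS (column 3) Running
all_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

# poll the istio-system namespace until all pods are up, or give up
# after 30 attempts (roughly 5 minutes)
wait_for_istio() {
  for attempt in $(seq 1 30); do
    kubectl get pods -n istio-system | all_running && return 0
    sleep 10
  done
  return 1
}
```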

The Dashboard provides a similar overview:


Deploy supporting facilities

Istio is prepared for interaction with a number of facilities that will help with monitoring and tracing – such as Zipkin, Prometheus, Jaeger and Grafana. The core installation of Istio does not include these tools. Using the following kubectl commands, we can extend the istio-system namespace with these tools:

kubectl apply -f install/kubernetes/addons/prometheus.yaml

kubectl apply -f install/kubernetes/addons/zipkin.yaml

kubectl apply -n istio-system -f

kubectl apply -f install/kubernetes/addons/grafana.yaml

kubectl apply -f install/kubernetes/addons/servicegraph.yaml


Istio-enabled applications can be configured to collect trace spans using Zipkin or Jaeger. The Grafana add-on is a pre-configured instance of Grafana. The base image (grafana/grafana:4.1.2) has been modified to start with both a Prometheus data source and the Istio Dashboard installed. The base install files for Istio, and Mixer in particular, ship with a default configuration of global (used for every service) metrics. The Istio Dashboard is built to be used in conjunction with the default Istio metrics configuration and a Prometheus backend.

To view a graphical representation of your service mesh, use the Service Graph add-on.
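The add-ons do not get an external endpoint by default; kubectl port-forward is a simple way to reach their dashboards from the client machine. A sketch, assuming the default dashboard ports (3000 for Grafana, 9090 for Prometheus, 9411 for Zipkin, 8088 for the service graph) and the app labels the add-on yaml files set on their pods:

```shell
# forward local ports to the add-on dashboards in the istio-system namespace;
# ports and app= labels are the defaults for the bundled add-ons
kubectl -n istio-system port-forward \
  "$(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}')" 3000:3000 &
kubectl -n istio-system port-forward \
  "$(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}')" 9090:9090 &
kubectl -n istio-system port-forward \
  "$(kubectl -n istio-system get pod -l app=zipkin -o jsonpath='{.items[0].metadata.name}')" 9411:9411 &
kubectl -n istio-system port-forward \
  "$(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}')" 8088:8088 &
```

Grafana is then available on http://localhost:3000, Prometheus on http://localhost:9090, and so on.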

Log gathering with fluentd, writing the logs to the Elastic Stack, is also supported.




Deploy the Bookinfo sample application with Sidecar Injection


The Bookinfo sample application is shipped as part of the Istio client installation. This application is composed of several (versions of) microservices that interact. These services and their interactions can be used to investigate the functionality of Istio.


To install the Bookinfo application, all you need to do is run:

kubectl apply -f <(istioctl kube-inject --debug -f samples/bookinfo/kube/bookinfo.yaml)

The istioctl kube-inject instruction performs a preprocessing of the bookinfo.yaml file – injecting the specs for the Envoy Sidecar. Note: automatic injection of the sidecar into all Pods that get deployed is supported in Kubernetes 1.9 and higher. I did not yet get that to work, so I am using manual or explicit injection.
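A quick way (my own check, not from the quick start guide) to see what kube-inject actually adds is to count how often the sidecar container shows up in the generated specs – in the raw bookinfo.yaml this count is zero:

```shell
# count occurrences of the injected sidecar in the preprocessed yaml;
# running the same grep on the raw samples/bookinfo/kube/bookinfo.yaml returns 0
istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml | grep -c istio-proxy
```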


We can list the pods and inspect one of them:


The product page pod was defined with a single container – with a Python web application. However, because of the injection that Istio performed prior to creation of the Pod on the cluster, the Pod actually contains more than a single container: the istio-proxy was added to the pod. The same thing happened in the other pods in this bookinfo application.
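The container names inside a pod can be listed directly with a jsonpath expression; the pod name below is a placeholder to be taken from the output of kubectl get pods:

```shell
# list the names of all containers inside the productpage pod;
# <productpage-pod-name> is illustrative - copy the real name from kubectl get pods
kubectl get pod <productpage-pod-name> \
  -o jsonpath='{.spec.containers[*].name}'
# expected to show both the application container and istio-proxy
```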



This is what the Bookinfo application looks like:


(note: using kubectl port-forward I can make the application accessible from my laptop, without having to expose the service on the Kubernetes cluster)
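For example – Bookinfo's productpage service listens on port 9080, so a port-forward along these lines makes it reachable locally:

```shell
# forward local port 9080 to the productpage pod (found via its app label);
# the application is then reachable on http://localhost:9080/productpage
kubectl port-forward \
  "$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')" 9080:9080
```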

Try out some typical Istio things – like traffic management and monitoring

Just by accessing the application, metrics will be gathered by the sidecar and shared with Prometheus. The Prometheus dashboard visualizes these metrics:


Zipkin helps to visualize the end to end trace of requests through the service mesh. Here is the request to the productpage dissected:


A drilldown reveals:


Reviews apparently is called sequentially, after the call to Details is complete. This may be correct, but perhaps we can improve performance by performing these calls in parallel. The call to reviews takes much longer than the one to details. Both are still quite fast – no more than 35 ms.

The Grafana dashboard plugin for Istio provides an out of the box dashboard on the HTTP requests that happen inside the service mesh. We can see the number of requests and the success rate (percentage of 200 and 300 response codes vs 400 and 500 responses).
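The traffic management side of Istio can be explored with route rules. As an illustration (not part of this walkthrough), a rule that sends all reviews traffic to version v1 looks roughly like this in the RouteRule format used by Istio 0.7, similar to the rules shipped in samples/bookinfo/kube/route-rule-all-v1.yaml:

```yaml
# route all traffic for the reviews service to the v1 pods
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
```

Such a rule is applied with istioctl create -f, and can later be replaced to shift traffic gradually to another version – the basis for canary and blue/green releases mentioned at the start of this article.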


Here are some more details presented in the dashboard:



At this point I am ready to start using Istio in anger – for my own microservices.

Resources

Istio on Kubernetes – Quickstart Guide

Working with the Istio Sample Application Bookinfo

YouTube: Module 1: Istio – Kubernetes – Getting Started – Installation and Sample Application Review by Bruno Terkaly

Istioctl reference