Getting started (again) with Kubernetes on Oracle Cloud

Lucas Jellema

For many of my recent activities, I have not worked with or even on (knowingly at least) Kubernetes. So for many months I have not touched my OKE cluster on Oracle Cloud. However, in the last weeks, I have run into so many interesting things to dive into – that all seem to have Kubernetes as a common theme – that I feel it is high time for me to once again start dabbling in Kubernetes.

After all, Kubernetes is the modern application server – distributed, dynamically scalable, universal and suitable for almost any workload. Workloads are REST API implementations, web applications, background jobs, serverless functions, complex workflows and operational tasks. In addition, workloads can be message brokers, ML models, databases, caches and blockchain nodes. Who runs any server activity on bare metal or a Compute VM if they do not have to – containers on the K8S cluster are the way to go, are they not? They are if you want to standardize and automate the workflow and benefit from the scalability, availability and many other mechanisms K8S offers – as well as the multitude of tools around K8S.

The trend of custom resource definitions and operators that now turns the Kubernetes Control Plane into a universal control plane – supposed to be used for deploying/provisioning and monitoring/reconciling any type of resource almost anywhere, including Oracle Autonomous Database on OCI – is mind boggling. And to me it still feels like a 'we have a hammer, so let's consider every challenge a nail' way of thinking and doing. The convergence of so many tools for CI and especially CD, provisioning and monitoring into a common way of working and a common technology is attractive though. And the least I have to do is get back on top of Kubernetes again, build up some Go skills, create my own operator and start seriously playing with tools such as KubeVela, Crossplane, ArgoCD and many more.

First I need my OKE Cluster – and fortunately there is a wizard in OCI that is very simple to use and prepares my cluster instance (one node pool, three nodes) in a matter of minutes.

As a reminder – to myself on future occasions – here are the steps for the wizard to create the cluster, and onwards, for interacting with it.

Quick Cluster Creation

In the OCI Console, go to Developer Services | Kubernetes Clusters (OKE)


Click on the button Create Cluster.

A popup is shown with two options; I go for the easy one: Quick Create.


Press Launch Workflow. A wizard is started. It needs to know just a few things: a name for the cluster, the target compartment, the desired Control Plane version for Kubernetes, whether or not the cluster should have a public endpoint, whether or not the nodes should themselves be publicly accessible, the number of nodes, the compute shape to be used for the nodes and – in this case, with a flexible shape – the desired number of OCPUs and the memory for each node. I have accepted all defaults. Many things – about the nodes, for example – can still be changed later on.
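As an aside, the same can in principle be scripted with the OCI CLI instead of the wizard – although the wizard also creates the network resources for you. A rough sketch; all OCIDs are placeholders and the exact parameter names should be checked against `oci ce cluster create --help` for your CLI version:

```shell
# Hedged sketch of a CLI-based cluster creation (assumes an existing VCN);
# OCIDs, names and the Kubernetes version are placeholders
oci ce cluster create \
  --compartment-id ocid1.compartment.oc1..example \
  --name my-oke-cluster \
  --kubernetes-version v1.21.5 \
  --vcn-id ocid1.vcn.oc1..example

# The node pool is a separate resource; parameter names may differ
# per CLI version (older versions used --quantity-per-subnet, not --size)
oci ce node-pool create \
  --cluster-id ocid1.cluster.oc1..example \
  --compartment-id ocid1.compartment.oc1..example \
  --name pool1 \
  --node-shape VM.Standard.E3.Flex \
  --kubernetes-version v1.21.5 \
  --size 3
```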


Press Next. A review of the settings is shown.


Then press Create Cluster to have the OCI resources – network, compute instances, control plane and cluster – created.


An overview is shown of all the actions that have been kicked off. I close the page.

The Cluster Details are shown next – with the cluster currently in the creating state.


After no more than five minutes, the cluster status is reported as Active.


So I have my Kubernetes Cluster instance on OCI – in addition to my local Minikube and K3S instances. Note that I am burning cloud credits at this point. Not very many – but somewhere in a data center, compute resources are assigned to me and doing work on my behalf. That means energy is used, CO2 is produced and cloud credits are spent. If I have no meaningful work for this cluster to perform, I should temporarily shut it down. I have not found an option to suspend or stop the cluster or the node pool. I can stop individual compute nodes though. So I may want to create a script to be run as a job (perhaps an OCI DevOps Build Pipeline) to suspend a cluster by stopping all nodes in all node pools, as well as a script to restart the cluster by starting up those same nodes again. See for example the work done by Javier Mugueta.
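Such a suspend script could look roughly like this – a sketch only; the node pool OCID is a placeholder and the `--query` path should be verified against the actual output of `oci ce node-pool get` for your CLI version:

```shell
# Sketch of a 'suspend the cluster' script: stop every compute node
# in a node pool. The node pool OCID is a placeholder; verify the
# JMESPath query against your oci CLI version's output.
NODE_POOL_ID=ocid1.nodepool.oc1..example

for NODE_ID in $(oci ce node-pool get --node-pool-id "$NODE_POOL_ID" \
                   --query 'data.nodes[*].id' | jq -r '.[]'); do
  oci compute instance action --instance-id "$NODE_ID" --action STOP
done

# To 'resume' the cluster, run the same loop with --action START
```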

Start Interacting with the OKE Cluster Instance

Interaction with a Kubernetes Cluster is typically done through the kubectl command line tool. We can use the kubectl installation included in Cloud Shell, or a local installation of kubectl. Before I can use kubectl to access a cluster, I have to specify the cluster on which to perform operations by setting up the cluster's kubeconfig file. Note: at the time of writing, to access a cluster using kubectl in Cloud Shell, the Kubernetes API endpoint must have a public IP address – which my cluster does, thanks to the wizard that created my cluster.

In OCI Cloud Shell – started inside the OCI Console – the kubectl tool comes preinstalled.


It needs to be configured to use my new OKE instance – using the kubeconfig file.

In the Cluster Details window, click the Access Cluster button to display the Access Your Cluster dialog box.


In the dialog box, click Cloud Shell Access. Click Copy to copy the shell statement under (2) to the clipboard. Then click Launch Cloud Shell to display the Cloud Shell window – if it is not yet open – and paste the shell statement into the Cloud Shell.


Now press enter to execute the statement.


The config file in $HOME/.kube is written (or updated if it already existed) with details for my new OKE cluster. Note that the OCI CLI understands the string PUBLIC_ENDPOINT and writes the proper Public IP Address for the cluster into the configuration file.
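For reference, the statement has this general shape – the cluster OCID and region shown here are placeholders for the values the dialog fills in:

```shell
# Write (or merge) the OKE cluster's connection details into the kubeconfig;
# PUBLIC_ENDPOINT is resolved by the OCI CLI to the cluster's public IP
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..example \
  --file $HOME/.kube/config \
  --region us-ashburn-1 \
  --token-version 2.0.0 \
  --kube-endpoint PUBLIC_ENDPOINT
```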

As long as I do not change the name or location of the config file from its default, I do not have to explicitly set the KUBECONFIG environment variable to refer to it, and I can now run kubectl to interact with my new cluster.

Running kubectl get nodes, I meet the three nodes that make up my fresh OKE Cluster.

Access the OKE Cluster from my Local Machine using kubectl

In addition to cluster access in Cloud Shell, I want to have local interaction from my laptop. I run Windows 10 and use WSL 2 with an Ubuntu 20.04 distro. It has kubectl installed already (although perhaps I should do an upgrade).
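If I do upgrade, the approach from the Kubernetes docs for Linux is to download the latest stable binary:

```shell
# Download the latest stable kubectl for Linux amd64 and install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify the client version
kubectl version --client
```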

In the Cluster Details window, click the Access Cluster button to display the Access Your Cluster dialog box and click on Local Access. I already have my OCI CLI set up locally, including a configuration that provides authentication details for my OCI tenancy.


I can skip step 1 in the dialog box. I copy the first statement (for the public IP endpoint) in step 2 and execute it locally.

Note: at first this failed for me; my OCI CLI version was 2.12 and the minimum required is 2.24.
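Upgrading was a matter of re-running the documented installer, which upgrades an existing installation in place:

```shell
# Re-run the OCI CLI installer script to upgrade an existing installation
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

# Confirm the new version meets the minimum (2.24 at the time of writing)
oci --version
```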

At this point, kubectl should be set up and ready for action. The file $HOME/.kube/config is extended in my case with an extra context – in addition to the preexisting context for local docker-desktop. The current-context is set to this newly created one for the OKE cluster.
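Switching between the contexts – for example from the OKE cluster back to docker-desktop – is done with kubectl config:

```shell
# List all contexts in $HOME/.kube/config; the current one is marked with *
kubectl config get-contexts

# Switch back to the local cluster; the OKE context has a generated name
# (a short cluster identifier), so look it up with the command above first
kubectl config use-context docker-desktop
```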


My list of cluster nodes can now also be fetched locally on my laptop, using this new context.

The Kubernetes Dashboard

The Kubernetes Dashboard is a well-known UI for monitoring and managing a Kubernetes Cluster. Many other tools are available to make life easier. However, the dashboard is a sort of anchor or beacon to fall back on. So let's get it running on my cluster.

The Dashboard itself is a Kubernetes application – a collection of resources that needs to be created on the cluster – for which a predefined collection of YAML files is available that can be applied with a single command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

Running that command creates all the Dashboard resources, including a new namespace kubernetes-dashboard.

Next, I check on the pods that were created in the newly created namespace kubernetes-dashboard.
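The quick sanity check, for reference:

```shell
# Inspect the Dashboard's pods and services in their dedicated namespace
kubectl get pods -n kubernetes-dashboard
kubectl get svc -n kubernetes-dashboard
```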

This looks good. So let's try to access the dashboard in a browser.

First, I run a proxy.
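The proxy command itself:

```shell
# kubectl proxy serves the Kubernetes API on localhost:8001,
# which is how the Dashboard UI is reached from the browser
kubectl proxy
```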

Now I can access the dashboard in the browser, using:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=default

When the GUI appears in the browser, I am prompted for a kubeconfig file or a token.


Let's create a token, using a YAML file to create a service account (as described in the docs).

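A minimal version of that YAML – creating the oke-admin service account from the OCI docs and binding it to the cluster-admin role – can be applied straight from a heredoc:

```shell
# Create the oke-admin service account in kube-system and grant it
# cluster-admin rights via a ClusterRoleBinding (as in the OCI docs)
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oke-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oke-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: oke-admin
  namespace: kube-system
EOF
```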

I can now generate a token to use for accessing the dashboard using this command:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep oke-admin | awk '{print $1}')


I copy the token value, paste it into the UI and press Sign In.

The Dashboard is now presented – with, once again, my three nodes showing up.

Application Deployment – Hello World

Let's finally deploy a simple application and expose a public endpoint.

Using a YAML file, I have applied resource definitions – a deployment and a service – resulting in a running and publicly accessible application.

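A sketch of such a manifest – the names, image and ports here are illustrative, not my exact file:

```shell
# Deploy a simple app and expose it through an OCI load balancer;
# names, labels, the container image and ports are illustrative
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: nginx:latest   # illustrative image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer   # triggers provisioning of an OCI load balancer
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
EOF
```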

Checking in the Dashboard, I quickly get an overview of the resources now running for this application.

Because I set the Service Type to LoadBalancer, a public IP address is assigned after a little while and the application I have deployed is accessible over the public internet.
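The assignment of the external IP can be followed like this (the service name is illustrative):

```shell
# Watch the service until the OCI load balancer assigns an external IP;
# the EXTERNAL-IP column shows <pending> until then
kubectl get service hello-world --watch
```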

At this point I have confidence that my OKE cluster is running and can be accessed for operational activity as well as for application requests. Time to start doing more interesting things.


Resources

OCI Docs – Create OKE Cluster through wizard – https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke_topic-Using_the_Console_to_create_a_Quick_Cluster_with_Default_Settings.htm

OCI Docs – Accessing the OKE Cluster with kubectl – https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengaccessingclusterkubectl.htm

OCI Docs – Access OKE Cluster using Dashboard – https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengstartingk8sdashboard.htm

Kubernetes Documentation on Dashboard – https://github.com/kubernetes/dashboard
