Oracle recently (May 2018) launched its Managed Kubernetes Cloud Service (OKE – Oracle Kubernetes Engine) – see for example this announcement: https://blogs.oracle.com/developers/kubecon-europe-2018-oracle-open-serverless-standards-fn-project-and-kubernetes. Yesterday I got myself a new free cloud trial on the Oracle Public Cloud (https://cloud.oracle.com/tryit). Subsequently, I created a Kubernetes cluster and deployed my first pod on that cluster. In this article, I will describe the steps that I went through:
- create Oracle Cloud Trial account
- configure OCI (Oracle Cloud Infrastructure) tenancy
- create service policy
- create OCI user
- create virtual network
- create security lists
- create compute instance
- configure Kubernetes Cluster & Node Pool; have the cluster deployed
- install and configure OCI CLI tool
- generate kubeconfig file
- connect to Kubernetes cluster using Kubectl – inspect and roll out a Pod
The resources section at the end of this article references all relevant documentation.
Configure OCI (Oracle Cloud Infrastructure) tenancy
Within your tenancy, a suitably pre-configured compartment must already exist in each region in which you want to create and deploy clusters. The compartment must contain the necessary network resources already configured (VCN, subnets, internet gateway, route table, security lists). For example, to create a highly available cluster spanning three availability domains, the VCN must include three subnets in different availability domains for node pools, and two further subnets for load balancers.
Within the root compartment of your tenancy, a policy statement (allow service OKE to manage all-resources in tenancy) must be defined to give Container Engine for Kubernetes access to resources in the tenancy.
You have to define a policy to enable Container Engine for Kubernetes to perform operations on the compartment.
Click on Identity | Policies:
Ensure you are in the Root Compartment. Click on Create Policy. Define a new policy. The statement must be:
“allow service OKE to manage all-resources in tenancy”
Click on Create. The new policy is added to the list.
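For those who prefer the command line, the same policy can also be created with the OCI CLI – a sketch only, assuming the CLI is already configured; the tenancy OCID below is a placeholder:

```shell
# Placeholder -- replace with the root (tenancy) OCID from your own account.
TENANCY_OCID="ocid1.tenancy.oc1..aaaaaaaaexample"

# Create the root-level policy that grants OKE access to the tenancy.
# Guarded so the command only runs when the OCI CLI is actually installed.
if command -v oci >/dev/null 2>&1; then
  oci iam policy create \
    --compartment-id "$TENANCY_OCID" \
    --name "oke-service-policy" \
    --description "Allow Container Engine for Kubernetes to manage tenancy resources" \
    --statements '["allow service OKE to manage all-resources in tenancy"]'
else
  echo "oci CLI not installed - the required statement is:"
  echo 'allow service OKE to manage all-resources in tenancy'
fi
```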
Create the required network resources
For instructions, see: https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengnetworkconfig.htm
Create a Virtual Cloud Network.
The VCN in which you want to create and deploy clusters must be configured as follows:
- The VCN must have a CIDR block defined that is large enough for at least five subnets, in order to support the number of hosts and load balancers a cluster will have. A /16 CIDR block would be large enough for almost all use cases (10.0.0.0/16 for example). The CIDR block you specify for the VCN must not overlap with the CIDR block you specify for pods and for the Kubernetes services (see CIDR Blocks and Container Engine for Kubernetes).
- The VCN must have an internet gateway defined.
- The VCN must have a route table defined that has a route rule specifying the internet gateway as the target for the destination CIDR block.
The VCN must have five subnets defined:
- Three subnets in which to deploy worker nodes. Each worker node subnet must be in a different availability domain. The worker node subnets must have different security lists to the load balancer subnets.
- Two subnets to host load balancers. Each load balancer subnet must be in a different availability domain. The load balancer subnets must have different security lists to the worker node subnets.
The VCN must have security lists defined for the worker node subnets and the load balancer subnets. The security list for the worker node subnets must have:
- Stateless ingress and egress rules that allow all traffic between the different worker node subnets.
- Stateless ingress and egress rules that allow all traffic between worker node subnets and load balancer subnets.
Optionally, you can include ingress rules for worker node subnets to:
explicitly allow SSH access to worker nodes on port 22 (see Connecting to Worker Nodes Using SSH)
explicitly allow access to worker nodes on the default NodePort range (see the Kubernetes documentation)
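The VCN and internet gateway described above can also be provisioned from the OCI CLI. The sketch below is illustrative only – the OCIDs are placeholders, and the commands only run when the CLI is installed:

```shell
# Placeholder OCIDs -- substitute the values from your own tenancy.
COMPARTMENT_OCID="ocid1.compartment.oc1..aaaaaaaaexample"

if command -v oci >/dev/null 2>&1; then
  # VCN with a /16 CIDR block, large enough for the five subnets
  oci network vcn create \
    --compartment-id "$COMPARTMENT_OCID" \
    --cidr-block "10.0.0.0/16" \
    --display-name "oke-vcn"

  # Internet gateway (use the VCN OCID from the previous command's output)
  oci network internet-gateway create \
    --compartment-id "$COMPARTMENT_OCID" \
    --vcn-id "ocid1.vcn.oc1..aaaaaaaaexample" \
    --is-enabled true \
    --display-name "oke-igw"
else
  echo "oci CLI not installed - commands shown for illustration only"
fi
```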
Create Internet Gateway
Create Route Table
The VCN in which you want to create and deploy clusters must have a route table. The route table must have a route rule that specifies an internet gateway as the target for the destination CIDR block 0.0.0.0/0.
Set DHCP options
The VCN in which you want to create and deploy clusters must have DHCP Options configured. The default value for DNS Type of Internet and VCN Resolver is acceptable.
Create Security Lists
The VCN in which you want to create and deploy clusters must have security lists defined for the worker node subnets and the load balancer subnets. The security lists for the worker node subnets and the load balancer subnets must be different.
Create list called workers
Worker Node Seclist Configuration
Create Security List loadbalancers
Create Subnet in VCN
The VCN in which you want to create and deploy clusters must usually have (five) subnets defined as follows:
- (Three) subnets in which to deploy worker nodes. Each worker node subnet must be in a different availability domain. The worker node subnets must have different security lists to the load balancer subnets.
- (Two) subnets to host load balancers. Each load balancer subnet must be in a different availability domain. The load balancer subnets must have different security lists to the worker node subnets.
In addition, all the subnets must have the following properties set as shown:
- Route Table: The name of the route table that has a route rule specifying an internet gateway as the target for the destination CIDR block 0.0.0.0/0
- Subnet access: Public
- DHCP options: Default
Subnet called workers-1
Associate Security List workers with this subnet:
Create a second subnet called loadbalancers1:
Press the Create button. Now we have all subnets we need.
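Subnet creation can likewise be scripted with the OCI CLI. Again a sketch only – the OCIDs and the availability domain name are placeholders:

```shell
# Placeholder values -- replace with OCIDs and AD name from your tenancy.
COMPARTMENT_OCID="ocid1.compartment.oc1..aaaaaaaaexample"
VCN_OCID="ocid1.vcn.oc1..aaaaaaaaexample"

if command -v oci >/dev/null 2>&1; then
  # One of the three worker subnets; repeat per availability domain
  oci network subnet create \
    --compartment-id "$COMPARTMENT_OCID" \
    --vcn-id "$VCN_OCID" \
    --cidr-block "10.0.10.0/24" \
    --availability-domain "Uocm:PHX-AD-1" \
    --display-name "workers-1"
else
  echo "oci CLI not installed - command shown for illustration only"
fi
```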
Create Compute Instance
Click on Home. Then click on Create Compute Instance.
Set the attributes of the VM as shown in the two figures. I am sure other settings are fine too – but at least these work for me.
Click on Create Instance. The VM will now be spun up.
Create a non-federated identity – new user kubie in the OCI tenancy
Note: initially I tried to just create the cluster as the initial user that was created when I initiated the OCI tenancy in my new trial account. However, the Create Cluster button was not enabled.
Oracle Product Management suggested that my account probably was a Federated Identity, which OKE does not support at this time. In order to use OKE in one of these accounts, you need to create a native OCI Identity User.
Click on Create/Reset password. You will be presented with a generated password. Copy it to the clipboard or in some text file. You will not see it again.
Add user kubie to group Administrators:
After creating the OCI user kubie we can now login as that user, using the generated password that you had saved in the clipboard or in a text file:
Configure Kubernetes Cluster & Node Pool and have the cluster deployed
After preparing the OCI tenancy to make it comply with the prerequisites for creating the K8S cluster, we can now proceed and provision the cluster.
Click on Containers in the main menu, select Clusters as the submenu option. Click on Create Cluster.
Provide details for the Kubernetes cluster; see the instructions here: https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm. Set the name to k8s-1, select 1.9.7 as the version. Select the VCN that was created earlier on. Select the two subnets that were defined for the load balancers.
Set the Kubernetes Service CIDR Block: for example to 10.96.0.0/16.
Set the Pods CIDR Block: for example to 10.244.0.0/16.
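The VCN CIDR block, the Kubernetes Service CIDR block and the Pods CIDR block must not overlap each other. A quick shell check – a helper I wrote for this article, not part of any Oracle tooling:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. ; set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed (exit 0) when two CIDR blocks overlap, fail (exit 1) otherwise:
# two blocks overlap exactly when they agree on the shorter prefix.
cidrs_overlap() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local shorter=$(( len1 < len2 ? len1 : len2 ))
  local mask=$(( shorter == 0 ? 0 : (0xFFFFFFFF << (32 - shorter)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$net1") & mask )) -eq $(( $(ip_to_int "$net2") & mask )) ]
}

# The three blocks used in this walkthrough must be mutually disjoint.
for pair in "10.0.0.0/16 10.96.0.0/16" "10.0.0.0/16 10.244.0.0/16" "10.96.0.0/16 10.244.0.0/16"; do
  set -- $pair
  if cidrs_overlap "$1" "$2"; then echo "OVERLAP: $1 $2"; else echo "ok: $1 $2"; fi
done
```

All three pairs should report "ok"; if any pair reports OVERLAP, pick different Service or Pods CIDR blocks.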
Enable Dashboard and Tiller:
Click on Add Node Pool.
The details for the Node Pool:
Then press Create to start the creation of the cluster.
The k8s-1 cluster is added to the list:
At this point, the cluster creation request is being processed:
After a little while, the cluster has been created:
Install and configure OCI CLI tool
To interact with the Oracle Cloud Infrastructure, we can make use of the OCI Command Line Interface – a Python based tool. We need to use this tool to generate the kubeconfig file that we need to interact with the Kubernetes cluster.
My laptop is a Windows machine. I have used vagrant to spin up a VM with Ubuntu Xenial (16.04 LTS), to have an isolated and Linux based environment to work with the OCI CLI.
In that environment, I download and install the OCI CLI:
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
Some more output from the installation process:
And the final steps:
Next, I need to configure the OCI for my specific environment:
oci setup config
The setup action generates a public and private key pair – the former in a file called oci_api_key_public.pem. The contents of this file should be added as a new public key to the OCI user – in my case the user called kubie.
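For reference, the configuration file that oci setup config writes to ~/.oci/config looks roughly like this – all values below are placeholders:

```ini
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaaexample
fingerprint=12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaaexample
region=us-ashburn-1
```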
At this point, OCI CLI is installed and configured for the right OCI Tenancy. The public key is added to the user account. We should now be able to use OCI CLI to access the OCI Tenancy.
Try out the OCI CLI with simple calls like:
oci compute instance list
oci compute instance list -c ocid1.tenancy.oc1..aaa
Note: the compartment identifier parameter takes the value of the Tenancy OCID.
Generate kubeconfig file
After installing and configuring the OCI CLI tool, we can continue to generate the kubeconfig file. The OCI Console contains the page with details on the k8s-1 cluster. Press the Access Kubeconfig button. A popup opens, with the instructions to generate the kubeconfig file – unfortunately not yet to simply download the kubeconfig file.
Download the get-kubeconfig.sh script to the Ubuntu VM.
Make this script executable and run it using the instructions copied from the popup shown above.
Using the commands provided from the OCI Console, I can generate the kubeconfig file:
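Note: newer versions of the OCI CLI can generate the kubeconfig file directly, without the helper script. A sketch with a placeholder cluster OCID:

```shell
# Placeholder -- replace with the OCID of the k8s-1 cluster.
CLUSTER_OCID="ocid1.cluster.oc1..aaaaaaaaexample"

# Guarded so the command only runs when the OCI CLI is actually installed.
if command -v oci >/dev/null 2>&1; then
  mkdir -p "$HOME/.kube"
  oci ce cluster create-kubeconfig \
    --cluster-id "$CLUSTER_OCID" \
    --file "$HOME/.kube/config"
else
  echo "oci CLI not installed - command shown for illustration only"
fi
```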
Connect to Kubernetes cluster using Kubectl – inspect and roll out a Pod
After generating the kubeconfig file, I have downloaded and installed kubectl to my Ubuntu VM, using the Git Gist:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
Make it executable and move it to a directory on the PATH:
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Then verify that kubectl works:
kubectl get nodes
On a Windows client that has kubectl installed and access to the kubeconfig file created in the previous section, set the KUBECONFIG environment variable to reference that file. Then use kubectl to create a deployment from a test deployment.yaml:
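The article does not list the deployment.yaml itself; a minimal NGINX deployment along these lines would work (the names and image tag are my own choices, not from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```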
Expose the nginx deployment. Get the list of services. Then change the type of the service from ClusterIP to NodePort. Then get the list of services again – to retrieve the port at which the service is exposed on the cluster node (31907).
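The expose-and-patch sequence can be sketched as follows – guarded so it only talks to a cluster when kubectl is actually available; the service name assumes the nginx deployment from above:

```shell
SERVICE_NAME="nginx"  # assumption: the deployment was named nginx

if command -v kubectl >/dev/null 2>&1; then
  # Expose the deployment as a service (type ClusterIP by default)
  kubectl expose deployment "$SERVICE_NAME" --port=80 --target-port=80

  # Switch the service type from ClusterIP to NodePort
  kubectl patch service "$SERVICE_NAME" -p '{"spec":{"type":"NodePort"}}'

  # Retrieve the node port that was assigned (e.g. 31907)
  kubectl get service "$SERVICE_NAME"
else
  echo "kubectl not installed - commands shown for illustration only"
fi
```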
The Kubernetes dashboard is now made available on the client using:
kubectl proxy --kubeconfig="C:\data\lucasjellemag-oci\kubeconfig"
Now we can check the deployment in the K8S Dashboard:
Here the new Type of Service in the Kubernetes Dashboard:
Access NGINX from any client anywhere:
Documentation on Preparing for an OKE Cluster and installing the Cluster – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm
Docs on how to get hold of kubeconfig file – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm
Installing and Configuring the OCI Command Line Interface – https://docs.us-phoenix-1.oraclecloud.com/Content/API/SDKDocs/cliinstall.htm
Kubectl Reference – https://kubernetes-v1-4.github.io/docs/user-guide/kubectl-overview/
Git Gist for installing kubectl on Ubuntu –
Deploying a Sample App to the K8S Cluster – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengdeployingsamplenginx.htm
Articles on the availability of the Oracle Kubernetes Engine cloud service:
Comments on "First steps with Oracle Kubernetes Engine – the managed Kubernetes Cloud Service"
Awesome article Lucas. I've followed all the steps successfully. But for some reason I'm not able to see the nginx page… I was able to successfully expose it as a NodePort as well. Did you find any similar issues? Thank you. Fernando
I am not sure what you are running into. At which IP address do you try to access the NGINX container? You could consider using a Service of type LoadBalancer or perhaps an IngressController? Note: I have been struggling a bit with access on the public IP address(es) and I have not quite figured out what is happening and why.