From docker run to kubectl apply – quick Kubernetes cheat sheet for Docker users


Primarily for my personal reference: this article provides instructions on running a Docker container image as a Kubernetes Deployment plus Service. It compares the Docker run command with the approach for running on Kubernetes using a Yaml file and introduces some additional kubectl commands in the process.

To work with a Kubernetes cluster – either a local Minikube or a multi-node cluster, locally or in some cloud – I am assuming a local installation of kubectl and a kubeconfig file that provides the configuration details for the cluster you will be accessing.

To get going with kubectl, point it at that kubeconfig file – on Windows:

set KUBECONFIG=<reference to kubeconfig file>

and on Linux or macOS:

export KUBECONFIG=<reference to kubeconfig file>
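With KUBECONFIG set, a quick sanity check confirms that kubectl can actually reach the cluster – a minimal sketch (these commands require a live cluster and a valid kubeconfig):

```shell
# which context (cluster/user combination) is kubectl talking to?
kubectl config current-context

# show the address of the cluster's API server
kubectl cluster-info

# list the worker nodes - if this works, connectivity and credentials are fine
kubectl get nodes
```
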

The main Docker command we will replace in this article with Kubernetes Yaml files is this one:

docker run \
--name=soaring-portal \
-p=3020:3000/tcp \
-p=4600:4500/tcp \
-e="GITHUB_URL=<github-url>" \
lucasjellema/ojet-run-live-reload:0.1

This command runs a container called soaring-portal based on the ojet-run-live-reload image. It maps ports 3000 and 4500 in the container to host ports 3020 and 4600, and it passes in values for two environment variables, including GITHUB_URL.

Once this command completes and the container has started up, we expect to access the application at host:3020 (where host is the server running the Docker engine).
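To verify the port mapping on the Docker host, a quick check (assuming curl is available and the container is running):

```shell
# list the published ports of the soaring-portal container
docker port soaring-portal

# check that the web UI responds on the mapped host port
curl -I http://localhost:3020
```
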

We can use Docker commands to inspect the container:

docker ps
docker logs soaring-portal --follow

And execute a command inside the container:

docker exec -it soaring-portal /bin/bash

Through kubectl and with Kubernetes we can achieve the same things – in a slightly different manner and against a potentially vastly different and more powerful container runtime.

The main step in getting a container to run on Kubernetes is to apply a configuration file that describes the desired state (a running container exposed externally on two ports) using kubectl apply. The unit we want to run is called a Pod, which contains one or more containers. To expose certain ports from the [containers in the] Pod, we define a Service for Kubernetes. The Deployment artifact describes the desired state as a combination of the Pod specification and deployment parameters such as scaling (the desired number of Pod instances). We also use the Deployment object to roll out changes to the Pod definition in a controlled fashion.
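If you are unsure which fields a Deployment or Service accepts, kubectl can print the documented schema of any resource – handy while writing the Yaml file:

```shell
# show the documented fields of a Deployment's spec
kubectl explain deployment.spec

# drill down into the container definitions inside the Pod template
kubectl explain deployment.spec.template.spec.containers
```
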

Ideally, we organize our applications on Kubernetes in namespaces. Most management operations can be executed per namespace and authorization can be defined on namespaces. Our first step is to create a namespace:

kubectl create namespace soaring-clouds
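After creating the namespace, you can verify it exists and – on reasonably recent kubectl versions – make it the default for the current context, so the --namespace flag can be omitted from subsequent commands:

```shell
# confirm the namespace was created
kubectl get namespace soaring-clouds

# make soaring-clouds the default namespace for the current context
# (the --current flag requires kubectl 1.12 or later)
kubectl config set-context --current --namespace=soaring-clouds
```
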

The following Yaml file (portal-deployment.yaml) provides the rough equivalent of the parameters of the docker run command:

apiVersion: v1
kind: Service
metadata:
  name: soaring-webshop-portal
  namespace: soaring-clouds
  labels:
    k8s-app: soaring-webshop-portal
spec:
  ports:
  - name: webui
    port: 3020
    protocol: TCP
    targetPort: ui
  - name: reloader
    port: 4600
    protocol: TCP
    targetPort: reload
  type: NodePort
  selector:
    k8s-app: soaring-webshop-portal
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: soaring-webshop-portal-deployment
  namespace: soaring-clouds
  labels:
    k8s-app: soaring-webshop-portal
    version: v2
spec:
  template:
    metadata:
      labels:
        k8s-app: soaring-webshop-portal
        version: v2
    spec:
      containers:
      - name: soaring-portal
        image: lucasjellema/ojet-run-live-reload:0.1
        imagePullPolicy: Always
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: GITHUB_URL
          value: ""
        ports:
        - containerPort: 3000
          name: ui
          protocol: TCP
        - containerPort: 4500
          name: reload
          protocol: TCP

To have the desired state described by this Yaml file put in place, we have to instruct kubectl to apply this file:

kubectl apply -f C:\oow2018\soaring-cloud-environment\webportal\portal-deployment.yaml
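Before applying for real, the file can be validated against the API server without creating anything – a sketch, assuming a reasonably recent kubectl (kubectl diff requires 1.12 or later):

```shell
# validate the file server-side without creating any objects
kubectl apply -f portal-deployment.yaml --dry-run

# show what apply would change compared to the live cluster state
kubectl diff -f portal-deployment.yaml
```
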

To check on the status of the rollout, we can use

kubectl rollout status deployment/soaring-webshop-portal-deployment --namespace=soaring-clouds
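The Deployment also gives us controlled rollbacks and scaling – something plain docker run does not offer. A brief sketch:

```shell
# inspect the revision history of the deployment
kubectl rollout history deployment/soaring-webshop-portal-deployment --namespace=soaring-clouds

# roll back to the previous revision if a rollout misbehaves
kubectl rollout undo deployment/soaring-webshop-portal-deployment --namespace=soaring-clouds

# scale out to three Pod instances
kubectl scale deployment/soaring-webshop-portal-deployment --replicas=3 --namespace=soaring-clouds
```
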

To learn about the pod(s) running in the namespace:

kubectl get pods --namespace=soaring-clouds

And to check on the logs from any specific pod:

kubectl logs -f <soaring-webshop-portal-unique-identifier-for-POD> --namespace=soaring-clouds

To execute a command in this Pod:

kubectl exec -it <soaring-webshop-portal-unique-identifier-for-POD> --namespace=soaring-clouds -- /bin/bash
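To try the application without going through the NodePort Service, kubectl can also forward a local port straight to the Pod – a quick sketch:

```shell
# forward local port 3020 to container port 3000 of the Pod; Ctrl-C to stop
kubectl port-forward <soaring-webshop-portal-unique-identifier-for-POD> 3020:3000 --namespace=soaring-clouds
```

While the forward is active, the application is reachable at localhost:3020, just like with the docker run port mapping.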

The Kubernetes Dashboard gives a nice overview of all artifacts created.


Finally: everything created on the Kubernetes cluster through the kubectl apply of the Yaml file can be removed again, as simply as this:

kubectl delete -f C:\oow2018\soaring-cloud-environment\webportal\portal-deployment.yaml
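Alternatively, because everything was created inside the soaring-clouds namespace, deleting the namespace itself removes the Service, the Deployment and its Pods in one go:

```shell
# removes the namespace and every object created inside it
kubectl delete namespace soaring-clouds
```
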







Resources

Kubernetes documentation: kubectl for Docker users

A friendly introduction to Kubernetes

About Author

Lucas Jellema, active in IT (and with Oracle) since 1994. Oracle ACE Director and Oracle Developer Champion. Solution architect and developer on diverse areas including SQL, JavaScript, Kubernetes & Docker, Machine Learning, Java, SOA and microservices, events in various shapes and forms and many other things. Author of the Oracle Press book Oracle SOA Suite 12c Handbook. Frequent presenter on user groups and community events and conferences such as JavaOne, Oracle Code, CodeOne, NLJUG JFall and Oracle OpenWorld.
