At the Oracle Partner PaaS Summer Camp IX 2019 in Lisbon, held at the end of August, I followed a 5-day workshop called “Modern Application Development with Oracle Cloud”. In this workshop, on day 4, the topic was “WebLogic on Kubernetes”.
[https://paascommunity.com/2019/09/02/oracle-paas-summer-camp-2019-results-become-a-trained-certified-oracle-cloud-platform-expert/]
At the Summer Camp we used a free Oracle Cloud trial account.
On day 4, I did a hands-on lab in which an Oracle WebLogic Domain was deployed on an Oracle Container Engine for Kubernetes (OKE) cluster using Oracle WebLogic Server Kubernetes Operator.
In a previous article I described how I made several changes to the configuration of a WebLogic domain:
- Scaling up the number of Managed Servers
- Overriding the WebLogic domain configuration
- Application lifecycle management (ALM), using a new WebLogic Docker image
In this article I will describe how I made the following changes to the configuration of the WebLogic domain:
- Assigning WebLogic Pods to particular nodes
- Assigning WebLogic Pods to a licensed node
- Scaling up the number of Managed Servers (via the replicas property) to a number greater than the Dynamic Cluster Size
- Scaling up the number of Managed Servers by using the Oracle WebLogic Server Administration Console (not recommended by Oracle)
Using Oracle WebLogic Server Kubernetes Operator for deploying a WebLogic domain on Kubernetes
In a previous article I described how I used the Oracle WebLogic Server Kubernetes Operator (the “operator”) to simplify the management and operation of WebLogic domains and deployments.
[https://technology.amis.nl/2019/09/28/deploying-an-oracle-weblogic-domain-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator/]
For deploying a WebLogic domain on Kubernetes, I downloaded a domain resource definition (the file domain.yaml) which contains the necessary parameters for the “operator” to start the WebLogic domain properly.
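To give an impression of its structure, below is a stripped-down sketch of such a domain resource definition. The apiVersion shown is the one I recall for operator version 2.x and the values are illustrative; the [...] parts stand for settings I leave out here:
apiVersion: "weblogic.oracle/v2"
kind: Domain
metadata:
  name: sample-domain1
  namespace: sample-domain1-ns
spec:
  [...]
  adminServer:
    serverStartState: "RUNNING"
    [...]
  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 2
  [...]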
Opening the Oracle WebLogic Server Administration Console
As you may remember from my previous article, I opened a browser and logged in to the Oracle WebLogic Server Administration Console and on the left, in the Domain Structure, I clicked on “Environment”.
There you can see that the Domain (named: sample-domain1) has 1 running Administration Server (named: admin-server) and 2 running Managed Servers (named: managed-server1 and managed-server2). The Managed Servers are configured to be part of a WebLogic Server cluster (named: cluster-1).
Assigning WebLogic Pods to particular nodes
When you create a Managed Server (Pod), the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
[https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled]
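For illustration, resource requests are declared per container in the Pod spec; a minimal sketch (with arbitrary example values, not taken from the lab material) looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        # the scheduler only considers nodes with at least this much unreserved capacity
        memory: "512Mi"
        cpu: "250m"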
You can constrain a Pod to only be able to run on particular Node(s), or to prefer to run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors to make the selection. Generally, such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.), but there are some circumstances where you may want more control over the node a pod lands on, e.g.:
- to ensure that a pod ends up on a machine with an SSD attached to it
- to co-locate pods from two different services that communicate a lot in the same availability zone
- to ensure pods end up in different availability zones for better high availability
- to move away (drain) all pods from a given node for maintenance reasons
- to ensure that pods which run certain software end up on a licensed environment
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/]
nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector]
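A minimal sketch of a Pod spec using nodeSelector (with an example label disktype: ssd, similar to the Kubernetes documentation, not taken from the lab material) looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    # the Pod is only eligible for nodes labelled with disktype=ssd
    disktype: ssd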
Remark:
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express.
For more information about this, please see:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
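For example, a hard node affinity rule for the same example label (a sketch based on the Kubernetes documentation; requiredDuringSchedulingIgnoredDuringExecution means the rule must be met for the Pod to be scheduled at all) would look like this:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx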
Now, I will describe how to assign an individual Managed Server (Pod) and/or the whole WebLogic Domain to particular node(s). But first I will scale up the WebLogic Domain to 4 Managed Servers, in the way I also described in my previous article.
[https://technology.amis.nl/2019/10/14/changing-the-configuration-of-an-oracle-weblogic-domain-deployed-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator-part-1/]
Scaling up the number of Managed Servers to 4
I changed the clusters part of the file domain.yaml to the following content, in order to scale up the WebLogic cluster in Kubernetes to 4 Managed Servers (via the property replicas):
clusters:
- clusterName: cluster-1
  serverStartState: "RUNNING"
  replicas: 4
I used the following command, to apply the changes:
kubectl apply -f /u01/domain.yaml
With the following output:
domain “sample-domain1” configured
I used the following command, to list the Nodes:
kubectl get nodes
With the following output:
NAME STATUS ROLES AGE VERSION
10.0.10.2 Ready node 23d v1.13.5
10.0.11.2 Ready node 23d v1.13.5
10.0.12.2 Ready node 23d v1.13.5
In the case of OKE, the node name (a unique string which identifies the node) can be the public IP address of the node or the first IP address of the subnet’s CIDR block.
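As a side note, the labels that are already present on the nodes can be listed with, for example:
kubectl get nodes --show-labels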
I used the following command, to list the Pods:
kubectl get pods -n sample-domain1-ns -o wide
With the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 1d 10.244.1.8 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 1d 10.244.1.10 10.0.12.2
sample-domain1-managed-server2 1/1 Running 0 1d 10.244.0.11 10.0.11.2
sample-domain1-managed-server3 1/1 Running 0 1d 10.244.0.10 10.0.11.2
sample-domain1-managed-server4 1/1 Running 0 1d 10.244.1.9 10.0.12.2
Here’s an overview of the current situation:
Server Name (Administration Console) | Pod NAME (kubectl get pods) | NODE (kubectl get pods) | nodeSelector (domain.yaml) |
admin-server | sample-domain1-admin-server | 10.0.12.2 | n.a. |
managed-server1 | sample-domain1-managed-server1 | 10.0.12.2 | n.a. |
managed-server2 | sample-domain1-managed-server2 | 10.0.11.2 | n.a. |
managed-server3 | sample-domain1-managed-server3 | 10.0.11.2 | n.a. |
managed-server4 | sample-domain1-managed-server4 | 10.0.12.2 | n.a. |
So, the “operator” made sure that 4 Managed Servers were running.
Also, in the Kubernetes Web UI (Dashboard), you can see that there is 1 Administration Server and 4 Managed Servers.
Labelling
As mentioned before, you can use a nodeSelector to constrain a Pod to only be able to run on particular Node(s). To assign Pod(s) to Node(s) you need to label the desired Node with a custom key-value pair.
I used the following command, to label the first Node:
kubectl label nodes 10.0.10.2 node=1
The following labels are used:
Label key | Label Value |
node | 1 |
With the following output:
node “10.0.10.2” labeled
I used the following command, to label the third Node:
kubectl label nodes 10.0.12.2 node=3
The following labels are used:
Label key | Label Value |
node | 3 |
With the following output:
node “10.0.12.2” labeled
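To verify the labelling, the nodes carrying a particular label can be listed with a label selector, for example:
kubectl get nodes -l node=3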
Modifying the domain resource definition
I changed the adminServer part of the file domain.yaml to the following content (by adding the property nodeSelector), in order to define the placement of the Administration Server:
adminServer:
  [...]
  serverPod:
    nodeSelector:
      node: 3
I changed the managedServers part of the file domain.yaml to the following content (by adding the property nodeSelector), in order to define the placement of each Managed Server:
spec:
  [...]
  managedServers:
  - serverName: managed-server1
    serverPod:
      nodeSelector:
        node: 1
  - serverName: managed-server2
    serverPod:
      nodeSelector:
        node: 1
  - serverName: managed-server3
    serverPod:
      nodeSelector:
        node: 3
  - serverName: managed-server4
    serverPod:
      nodeSelector:
        node: 1
  [...]
I used the following command, to apply the changes:
kubectl apply -f /u01/domain.yaml
With the following output:
domain “sample-domain1” configured
I used the following command, to list the Pods:
kubectl get pods -n sample-domain1-ns -o wide
According to the changes, the “operator” started to relocate the servers.
After a while, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 5m 10.244.1.11 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 1d 10.244.1.10 10.0.12.2
sample-domain1-managed-server2 1/1 Terminating 0 1d 10.244.0.11 10.0.11.2
sample-domain1-managed-server3 1/1 Running 0 1m 10.244.1.12 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 3m 10.244.2.11 10.0.10.2
And in the end, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 9m 10.244.1.11 10.0.12.2
sample-domain1-managed-server1 0/1 Running 0 21s 10.244.2.13 10.0.10.2
sample-domain1-managed-server2 1/1 Running 0 2m 10.244.2.12 10.0.10.2
sample-domain1-managed-server3 1/1 Running 0 4m 10.244.1.12 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 7m 10.244.2.11 10.0.10.2
Here’s an overview of the current situation:
Server Name (Administration Console) | Pod NAME (kubectl get pods) | NODE (kubectl get pods) | nodeSelector (domain.yaml) |
admin-server | sample-domain1-admin-server | 10.0.12.2 | node: 3 |
managed-server1 | sample-domain1-managed-server1 | 10.0.10.2 | node: 1 |
managed-server2 | sample-domain1-managed-server2 | 10.0.10.2 | node: 1 |
managed-server3 | sample-domain1-managed-server3 | 10.0.12.2 | node: 3 |
managed-server4 | sample-domain1-managed-server4 | 10.0.10.2 | node: 1 |
Then I tried another configuration, also using the second Node.
I used the following command, to label the second Node:
kubectl label nodes 10.0.11.2 node=2
The following labels are used:
Label key | Label Value |
node | 2 |
With the following output:
node “10.0.11.2” labeled
I changed the file domain.yaml according to the table below:
Server Name (Administration Console) | Pod NAME (kubectl get pods) | NODE (kubectl get pods) | nodeSelector (domain.yaml) |
admin-server | sample-domain1-admin-server | 10.0.11.2 | node: 2 |
managed-server1 | sample-domain1-managed-server1 | 10.0.12.2 | node: 3 |
managed-server2 | sample-domain1-managed-server2 | 10.0.11.2 | node: 2 |
managed-server3 | sample-domain1-managed-server3 | 10.0.10.2 | node: 1 |
managed-server4 | sample-domain1-managed-server4 | 10.0.11.2 | node: 2 |
In the way I described before, I applied the changes and listed the Pods. In the end, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 10m 10.244.0.12 10.0.11.2
sample-domain1-managed-server1 1/1 Running 0 2m 10.244.1.13 10.0.12.2
sample-domain1-managed-server2 1/1 Running 0 4m 10.244.0.14 10.0.11.2
sample-domain1-managed-server3 1/1 Running 0 6m 10.244.2.14 10.0.10.2
sample-domain1-managed-server4 1/1 Running 0 8m 10.244.0.13 10.0.11.2
Deleting a label and commenting out the nodeSelector entries in the file domain.yaml
To remove the node assignment, delete the node’s label.
I used the following command, to delete the label of the first Node:
kubectl label nodes 10.0.10.2 node-
With the following output:
node “10.0.10.2” labeled
In the same way, I deleted the labels of nodes 10.0.11.2 and 10.0.12.2.
I commented out the entries I added for node assignment in the file domain.yaml. In the way I described before, I applied the changes and listed the Pods. After a while, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 59s 10.244.1.14 10.0.12.2
sample-domain1-managed-server1 1/1 Running 0 44m 10.244.1.13 10.0.12.2
sample-domain1-managed-server2 1/1 Running 0 46m 10.244.0.14 10.0.11.2
sample-domain1-managed-server3 1/1 Running 0 48m 10.244.2.14 10.0.10.2
sample-domain1-managed-server4 1/1 Terminating 0 50m 10.244.0.13 10.0.11.2
And in the end, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 10m 10.244.1.14 10.0.12.2
sample-domain1-managed-server1 1/1 Running 0 2m 10.244.1.15 10.0.12.2
sample-domain1-managed-server2 1/1 Running 0 4m 10.244.2.16 10.0.10.2
sample-domain1-managed-server3 1/1 Running 0 6m 10.244.2.15 10.0.10.2
sample-domain1-managed-server4 1/1 Running 0 8m 10.244.0.15 10.0.11.2
Here’s an overview of the current situation:
Server Name (Administration Console) | Pod NAME (kubectl get pods) | NODE (kubectl get pods) | nodeSelector (domain.yaml) |
admin-server | sample-domain1-admin-server | 10.0.12.2 | n.a. |
managed-server1 | sample-domain1-managed-server1 | 10.0.12.2 | n.a. |
managed-server2 | sample-domain1-managed-server2 | 10.0.10.2 | n.a. |
managed-server3 | sample-domain1-managed-server3 | 10.0.10.2 | n.a. |
managed-server4 | sample-domain1-managed-server4 | 10.0.11.2 | n.a. |
So, the Pods were relocated/restarted again, this time based on the scheduler’s own placement decision.
Assigning WebLogic Pods to a licensed node
This use case is similar to the previous one, whereby individual servers/Pods were assigned to particular node(s). However, the focus in this use case is on license coverage.
As of v1.13, Kubernetes supports clusters with up to 5000(!) nodes. However, certain software, like WebLogic, requires a license. Using the nodeSelector feature, Kubernetes ensures that the WebLogic Pods end up on licensed worker node(s) only.
Now, I will describe how to assign all WebLogic pods (WebLogic domain) to a particular node.
Labelling
I used the following command, to label the second Node:
kubectl label nodes 10.0.11.2 licensed-for-weblogic=true
The following labels are used:
Label key | Label Value |
licensed-for-weblogic | true |
With the following output:
node “10.0.11.2” labeled
Modifying the domain resource definition
I changed the serverPod part of the file domain.yaml to the following content (by adding the property nodeSelector), in order to define the placement of all the WebLogic Pods:
serverPod:
  env:
    [...]
  nodeSelector:
    licensed-for-weblogic: true
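Note that Kubernetes label values are strings; depending on how strictly the YAML is parsed, it may be necessary to quote the value, for example:
nodeSelector:
  licensed-for-weblogic: "true"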
In the way I described before, I applied the changes and listed the Pods. After a while, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Terminating 0 3h 10.244.1.14 10.0.12.2
sample-domain1-managed-server1 1/1 Running 0 3h 10.244.1.15 10.0.12.2
sample-domain1-managed-server2 1/1 Running 0 3h 10.244.2.16 10.0.10.2
sample-domain1-managed-server3 1/1 Running 0 3h 10.244.2.15 10.0.10.2
sample-domain1-managed-server4 1/1 Running 0 3h 10.244.0.15 10.0.11.2
And in the end, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 8m 10.244.0.16 10.0.11.2
sample-domain1-managed-server1 1/1 Running 0 5s 10.244.0.20 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 1m 10.244.0.19 10.0.11.2
sample-domain1-managed-server3 1/1 Running 0 3m 10.244.0.18 10.0.11.2
sample-domain1-managed-server4 1/1 Running 0 6m 10.244.0.17 10.0.11.2
Here’s an overview of the current situation:
Server Name (Administration Console) | Pod NAME (kubectl get pods) | NODE (kubectl get pods) | nodeSelector (domain.yaml) |
admin-server | sample-domain1-admin-server | 10.0.11.2 | licensed-for-weblogic: true |
managed-server1 | sample-domain1-managed-server1 | 10.0.11.2 | licensed-for-weblogic: true |
managed-server2 | sample-domain1-managed-server2 | 10.0.11.2 | licensed-for-weblogic: true |
managed-server3 | sample-domain1-managed-server3 | 10.0.11.2 | licensed-for-weblogic: true |
managed-server4 | sample-domain1-managed-server4 | 10.0.11.2 | licensed-for-weblogic: true |
Deleting a label and commenting out the nodeSelector entries in the file domain.yaml
To remove the node assignment, delete the node’s label.
I used the following command, to delete the label of the second Node:
kubectl label nodes 10.0.11.2 licensed-for-weblogic-
With the following output:
node “10.0.11.2” labeled
I commented out the entries I added for node assignment in the file domain.yaml. In the way I described before, I applied the changes and listed the Pods. After a while, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 59s 10.244.1.16 10.0.12.2
sample-domain1-managed-server1 1/1 Running 0 4m 10.244.0.20 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 6m 10.244.0.19 10.0.11.2
sample-domain1-managed-server3 1/1 Running 0 8m 10.244.0.18 10.0.11.2
sample-domain1-managed-server4 1/1 Terminating 0 10m 10.244.0.17 10.0.11.2
And in the end, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 0 12m 10.244.1.16 10.0.12.2
sample-domain1-managed-server1 1/1 Running 0 3m 10.244.0.21 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 5m 10.244.2.18 10.0.10.2
sample-domain1-managed-server3 1/1 Running 0 7m 10.244.1.17 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 10m 10.244.2.17 10.0.10.2
Here’s an overview of the current situation:
Server Name (Administration Console) | Pod NAME (kubectl get pods) | NODE (kubectl get pods) | nodeSelector (domain.yaml) |
admin-server | sample-domain1-admin-server | 10.0.12.2 | n.a. |
managed-server1 | sample-domain1-managed-server1 | 10.0.11.2 | n.a. |
managed-server2 | sample-domain1-managed-server2 | 10.0.10.2 | n.a. |
managed-server3 | sample-domain1-managed-server3 | 10.0.12.2 | n.a. |
managed-server4 | sample-domain1-managed-server4 | 10.0.10.2 | n.a. |
So, once again, the Pods were relocated/restarted based on the scheduler’s decision.
Scaling up the number of Managed Servers to 7
Now, I will describe what happens when you scale up the number of Managed Servers (via the replicas property) to a number greater than the Dynamic Cluster Size (being 5 in my case).
I changed the clusters part of the file domain.yaml to the following content, in order to scale up the WebLogic cluster in Kubernetes with extra Managed Servers:
clusters:
- clusterName: cluster-1
  serverStartState: "RUNNING"
  replicas: 7
In the way I described before, I applied the changes and listed the Pods. After a while, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 29d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 8d 10.244.2.21 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 0/1 Running 0 10s 10.244.1.28 10.0.12.2
And in the end, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 29d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 8d 10.244.2.21 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 1/1 Running 0 1m 10.244.1.28 10.0.12.2
Here’s an overview of the current situation:
Server Name (Administration Console) | Pod NAME (kubectl get pods) | NODE (kubectl get pods) | nodeSelector (domain.yaml) |
admin-server | sample-domain1-admin-server | 10.0.12.2 | n.a. |
managed-server1 | sample-domain1-managed-server1 | 10.0.11.2 | n.a. |
managed-server2 | sample-domain1-managed-server2 | 10.0.10.2 | n.a. |
managed-server3 | sample-domain1-managed-server3 | 10.0.12.2 | n.a. |
managed-server4 | sample-domain1-managed-server4 | 10.0.10.2 | n.a. |
managed-server5 | sample-domain1-managed-server5 | 10.0.12.2 | n.a. |
So, the placement of the new Pod (managed-server5) was again decided by the Kubernetes scheduler.
In the Oracle WebLogic Server Administration Console, you can see that only 5 Managed Servers are running (the maximum number defined for the Dynamic Cluster Size property) and not 7.
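The cluster and server state can also be inspected on the Kubernetes side by looking at the domain resource itself, for example (assuming the operator version in use populates the status section of the resource):
kubectl get domain sample-domain1 -n sample-domain1-ns -o yaml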
Scaling up the number of Managed Servers by using the Oracle WebLogic Server Administration Console (not recommended by Oracle)
Remember the remark in my previous article:
Do not use the console to scale the cluster. The “operator” controls this operation. Use the operator’s options to scale your cluster deployed on Kubernetes.
[https://technology.amis.nl/2019/10/14/changing-the-configuration-of-an-oracle-weblogic-domain-deployed-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator-part-1/]
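Besides editing and re-applying domain.yaml, the cluster can, as far as I know, also be scaled with a JSON patch on the domain resource, for example (the index 0 assumes cluster-1 is the first entry in the clusters list):
kubectl patch domain sample-domain1 -n sample-domain1-ns --type=json -p='[{"op": "replace", "path": "/spec/clusters/0/replicas", "value": 5}]'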
And another related remark:
Do not use the WebLogic Server Administration Console to start or stop servers.
[https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/domain-lifecycle/startup/#starting-and-stopping-servers]
Of course, because I worked in a lab environment, using a free Oracle Cloud trial account, I was curious what would happen if I did use the Oracle WebLogic Server Administration Console to stop a Managed Server.
Shutting down a Managed Server via the Oracle WebLogic Server Administration Console
In the Oracle WebLogic Server Administration Console, via Summary of Servers | tab Control, I selected the Managed Server named managed-server2 and chose Shutdown | Force shutdown now.
In the pop-up I clicked on button “Yes”.
After some time, the shutdown task for Managed Server named managed-server2 was completed.
Next, I opened a browser and started the Kubernetes Web UI (Dashboard). I changed the namespace to sample-domain1-ns and clicked on Workloads | Pods:
Here you can see that the Pod with name sample-domain1-managed-server2 shows the following error message:
Readiness probe failed: Get http://10.244.2.21:8001/weblogic: dial tcp 10.244.2.21:8001: connect: connection refused
On the right I clicked on the Logs of the Pod:
In the (Kubernetes) Logs you can see that I remotely issued a Managed Server shutdown (from the Oracle WebLogic Server Administration Console).
I used the following command, to list the Pods:
kubectl get pods -n sample-domain1-ns -o wide
Starting a Managed Server via the Oracle WebLogic Server Administration Console
In the Oracle WebLogic Server Administration Console, via Summary of Servers | tab Control, I selected the Managed Server named managed-server2 and chose Start.
But this restart didn’t work.
The following messages are shown:
- The server managed-server2 does not have a machine associated with it.
- Message icon – Warning All of the servers selected are currently in a state which is incompatible with this operation or are not associated with a running Node Manager or you are not authorized to perform the action requested. No action will be performed.
Starting a Managed Server via the file domain.yaml
Also, a restart via the file domain.yaml didn’t work. I listed the Pods, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 30d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 0/1 Running 0 8d 10.244.2.21 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 1/1 Running 0 17m 10.244.1.28 10.0.12.2
As you can see, the Pod named sample-domain1-managed-server2 is not ready (READY 0/1).
Deleting the Pod named sample-domain1-managed-server2 via the Kubernetes Web UI (Dashboard)
From the Kubernetes Web UI (Dashboard), I deleted the Pod named sample-domain1-managed-server2.
In the pop-up “Delete a Pod”, I chose DELETE.
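The same can also be done from the command line, for example:
kubectl delete pod sample-domain1-managed-server2 -n sample-domain1-ns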
In the way I described before, I listed the Pods. After a while, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 30d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 0/1 Running 0 36s 10.244.2.22 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 1/1 Running 0 21m 10.244.1.28 10.0.12.2
After a short while I checked the Kubernetes Web UI (Dashboard), where I could see that the Pod named sample-domain1-managed-server2 was running again.
In the way I described before, I listed the Pods, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 30d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 1m 10.244.2.22 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 1/1 Running 0 22m 10.244.1.28 10.0.12.2
So, apparently the “operator” made sure that all the managed servers (5 in my case) were running normally again.
As you can see, I had to do a lot of steps to get the Managed Server running again.
So, Oracle was right in its remark about not using the WebLogic Server Administration Console to start or stop servers (deployed on OKE).
Deleting the Pod named sample-domain1-managed-server5 via the Kubernetes Web UI (Dashboard)
From the Kubernetes Web UI (Dashboard), I deleted the Pod with name sample-domain1-managed-server5.
In the pop-up “Delete a Pod”, I chose DELETE.
In the Oracle WebLogic Server Administration Console, you can see the shutdown of Managed Server named managed-server5 was completed.
In the way I described before, I listed the Pods. After a while, the output was as follows:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 30d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 12m 10.244.2.22 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 1/1 Terminating 0 33m 10.244.1.28 10.0.12.2
And again, after a while, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 30d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 13m 10.244.2.22 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 0/1 Running 0 46s 10.244.1.29 10.0.12.2
Apparently the “operator” was restarting the fifth managed server.
So, I checked this in the Oracle WebLogic Server Administration Console, where I could see that the Managed Server named managed-server5 was running again.
Once again, I listed the Pods, with the following output:
NAME READY STATUS RESTARTS AGE IP NODE
sample-domain1-admin-server 1/1 Running 4 30d 10.244.1.21 10.0.12.2
sample-domain1-managed-server1 1/1 Running 1 30d 10.244.0.28 10.0.11.2
sample-domain1-managed-server2 1/1 Running 0 15m 10.244.2.22 10.0.10.2
sample-domain1-managed-server3 1/1 Running 1 8d 10.244.1.27 10.0.12.2
sample-domain1-managed-server4 1/1 Running 0 8d 10.244.2.19 10.0.10.2
sample-domain1-managed-server5 1/1 Running 0 2m 10.244.1.29 10.0.12.2
So, again the “operator” made sure that all the Managed Servers (5 in my case) were running normally again.
Finally, I checked the Kubernetes Web UI (Dashboard), where I could see that the Pod named sample-domain1-managed-server5 was automatically restarted and running again.
On the right I clicked on the Logs of the Pod:
In the (Kubernetes) Logs you can see that after I deleted the Pod with name sample-domain1-managed-server5 (from the Kubernetes Web UI (Dashboard)), the Pod was automatically restarted and running again.
So now it’s time to conclude this article. In this article I described how I made several changes to the configuration of a WebLogic domain:
- Assigning WebLogic Pods to particular nodes
- Assigning WebLogic Pods to a licensed node
- Scaling up the number of Managed Servers (via the replicas property) to a number greater than the Dynamic Cluster Size
- Scaling up the number of Managed Servers by using the Oracle WebLogic Server Administration Console (not recommended by Oracle)
For changing the configuration of a WebLogic domain on Kubernetes, I used a domain resource definition (domain.yaml) which contains the necessary parameters for the “operator” to start the WebLogic domain properly.
In this article (and the previous ones in the series) about using the Oracle WebLogic Server Kubernetes Operator, I of course used a lot of material provided to us at the Oracle Partner PaaS Summer Camp IX 2019 in Lisbon, held at the end of August.
I can highly recommend going to an Oracle Partner Camp to learn about new products, while the specialists are there on site to help you with the labs (using a free Oracle Cloud trial account) and answer your questions. So, thank you Oracle (in particular: Jürgen Kress) for organizing this for your partners.