
Changing the configuration of an Oracle WebLogic Domain, deployed on a Kubernetes cluster using Oracle WebLogic Server Kubernetes Operator (part 2)

At the Oracle Partner PaaS Summer Camp IX 2019 in Lisbon, held at the end of August, I attended a 5-day workshop called “Modern Application Development with Oracle Cloud”. In this workshop, on day 4, the topic was “WebLogic on Kubernetes”.
[https://paascommunity.com/2019/09/02/oracle-paas-summer-camp-2019-results-become-a-trained-certified-oracle-cloud-platform-expert/]

At the Summer Camp we used a free Oracle Cloud trial account.

On day 4, I did a hands-on lab in which an Oracle WebLogic Domain was deployed on an Oracle Container Engine for Kubernetes (OKE) cluster using Oracle WebLogic Server Kubernetes Operator.

In a previous article I described how I made several changes to the configuration of a WebLogic domain:

  • Scaling up the number of Managed Servers
  • Overriding the WebLogic domain configuration
  • Application lifecycle management (ALM), using a new WebLogic Docker image

[https://technology.amis.nl/2019/10/14/changing-the-configuration-of-an-oracle-weblogic-domain-deployed-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator-part-1/]

In this article I will describe how I made the following changes to the configuration of the WebLogic domain:

  • Assigning WebLogic Pods to particular nodes
  • Assigning WebLogic Pods to a licensed node
  • Scaling up the number of Managed Servers (via the replicas property) to a number greater than the Dynamic Cluster Size
  • Scaling up the number of Managed Servers by using the Oracle WebLogic Server Administration Console (not recommended by Oracle)

Using Oracle WebLogic Server Kubernetes Operator for deploying a WebLogic domain on Kubernetes

In a previous article I described how I used the Oracle WebLogic Server Kubernetes Operator (the “operator”) to simplify the management and operation of WebLogic domains and deployments.
[https://technology.amis.nl/2019/09/28/deploying-an-oracle-weblogic-domain-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator/]

For deploying a WebLogic domain on Kubernetes, I downloaded a domain resource definition (the file domain.yaml) which contains the necessary parameters for the “operator” to start the WebLogic domain properly.

Opening the Oracle WebLogic Server Administration Console

As you may remember from my previous article, I opened a browser, logged in to the Oracle WebLogic Server Administration Console and, on the left, in the Domain Structure, clicked on “Environment”.


There you can see that the Domain (named: sample-domain1) has 1 running Administration Server (named: admin-server) and 2 running Managed Servers (named: managed-server1 and managed-server2). The Managed Servers are configured to be part of a WebLogic Server cluster (named: cluster-1).

Assigning WebLogic Pods to particular nodes

When you create a Managed Server (Pod), the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. This protects against a resource shortage on a node when resource usage later increases, for example, during a daily peak in request rate.
[https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled]
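As an illustration (a minimal sketch, not part of the lab; the Pod name and the values are hypothetical), resource requests are set per container in the Pod spec, and the scheduler only places the Pod on a node that still has at least this much unreserved capacity:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"     # reserve half a CPU core for scheduling purposes
        memory: "512Mi" # reserve 512 MiB of memory for scheduling purposes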

You can constrain a Pod to only be able to run on particular Node(s), or to prefer to run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors to make the selection. Generally, such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.), but there are some circumstances where you may want more control over the node where a pod lands, e.g.:

  • to ensure that a pod ends up on a machine with an SSD attached to it
  • to co-locate pods from two different services that communicate a lot in the same availability zone
  • to ensure pods end up in different availability zones for better high availability
  • to move away (drain) all pods from a given node because of maintenance reasons
  • to ensure that pods which run certain software end up on a licensed environment

[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/]

nodeSelector is the simplest recommended form of node selection constraint. nodeSelector is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.
[https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector]
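For example (a minimal sketch in the spirit of the Kubernetes documentation, not specific to WebLogic), a Pod that may only be scheduled on nodes carrying the label disktype=ssd looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd   # only nodes labeled disktype=ssd are eligible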

Remark:
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express.
For more information about this, please see:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
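As a sketch of what node affinity looks like (I did not use it in this lab), the same disktype constraint expressed as a required node affinity rule in a Pod spec:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd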

Now, I will describe how to assign an individual Managed Server (Pod) and/or the whole WebLogic Domain to particular node(s). But first, I will scale up the WebLogic Domain to 4 Managed Servers, in a way I also described in my previous article.
[https://technology.amis.nl/2019/10/14/changing-the-configuration-of-an-oracle-weblogic-domain-deployed-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator-part-1/]

Scaling up the number of Managed Servers to 4
I changed the clusters part of the file domain.yaml to the following content, in order to scale up the WebLogic cluster in Kubernetes with an extra Managed Server (via the property replicas):

  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 4

I used the following command to apply the changes:

kubectl apply -f /u01/domain.yaml

With the following output:

domain "sample-domain1" configured

I used the following command to list the Nodes:

kubectl get nodes

With the following output:

NAME        STATUS    ROLES     AGE       VERSION
10.0.10.2   Ready     node      23d       v1.13.5
10.0.11.2   Ready     node      23d       v1.13.5
10.0.12.2   Ready     node      23d       v1.13.5

In the case of OKE, the node name (a unique string which identifies the node) can be the public IP address of the node or the first IP address of the subnet’s CIDR block.
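As a side note, kubectl can show more details per node, for example the internal and external IP addresses, which helps to map these node names to the actual machines:

kubectl get nodes -o wide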

I used the following command to list the Pods:

kubectl get pods -n sample-domain1-ns -o wide

With the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   0          1d        10.244.1.8    10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          1d        10.244.1.10   10.0.12.2
sample-domain1-managed-server2   1/1       Running   0          1d        10.244.0.11   10.0.11.2
sample-domain1-managed-server3   1/1       Running   0          1d        10.244.0.10   10.0.11.2
sample-domain1-managed-server4   1/1       Running   0          1d        10.244.1.9    10.0.12.2

Here’s an overview of the current situation:

Server Name (Console)    NAME (kubectl)                    NODE        nodeSelector (domain.yaml)
admin-server             sample-domain1-admin-server       10.0.12.2   n.a.
managed-server1          sample-domain1-managed-server1    10.0.12.2   n.a.
managed-server2          sample-domain1-managed-server2    10.0.11.2   n.a.
managed-server3          sample-domain1-managed-server3    10.0.11.2   n.a.
managed-server4          sample-domain1-managed-server4    10.0.12.2   n.a.

So, the “operator” made sure that 4 Managed Servers were running.


Also, in the Kubernetes Web UI (Dashboard), you can see that there is 1 Administration Server and 4 Managed Servers.

Labelling
As mentioned before, you can use a nodeSelector to constrain a Pod to only be able to run on particular Node(s). To assign Pod(s) to Node(s) you need to label the desired Node with a custom key-value pair.

I used the following command to label the first Node:

kubectl label nodes 10.0.10.2 node=1

The following label is used:

Label key   Label value
node        1

With the following output:

node "10.0.10.2" labeled
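To verify that the label was applied, you can list the nodes together with their labels, or filter on the new label (standard kubectl, not specific to this lab):

kubectl get nodes --show-labels
kubectl get nodes -l node=1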

I used the following command to label the third Node:

kubectl label nodes 10.0.12.2 node=3

The following label is used:

Label key   Label value
node        3

With the following output:

node "10.0.12.2" labeled

Modifying the domain resource definition
I changed the adminServer part of the file domain.yaml to the following content (adding the property nodeSelector), in order to define the placement of the Administration Server; note that label values are strings, hence the quotes:

adminServer:
  [...]
  serverPod:
    nodeSelector:
      node: "3"

I changed the managedServers part of the file domain.yaml to the following content (adding the property nodeSelector), in order to define the placement of each Managed Server:

spec:
  [...]
  managedServers:
  - serverName: managed-server1
    serverPod:
      nodeSelector:
        node: "1"
  - serverName: managed-server2
    serverPod:
      nodeSelector:
        node: "1"
  - serverName: managed-server3
    serverPod:
      nodeSelector:
        node: "3"
  - serverName: managed-server4
    serverPod:
      nodeSelector:
        node: "1"
  [...]

I used the following command to apply the changes:

kubectl apply -f /u01/domain.yaml

With the following output:

domain "sample-domain1" configured

I used the following command to list the Pods:

kubectl get pods -n sample-domain1-ns -o wide

The “operator”, following these changes, started to relocate the servers.
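A convenient way to follow the relocation as it happens (instead of repeatedly listing the Pods) is the watch flag of kubectl:

kubectl get pods -n sample-domain1-ns -o wide --watch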

After a while, this was the output:

NAME                             READY     STATUS        RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running       0          5m        10.244.1.11   10.0.12.2
sample-domain1-managed-server1   1/1       Running       1          1d        10.244.1.10   10.0.12.2
sample-domain1-managed-server2   1/1       Terminating   0          1d        10.244.0.11   10.0.11.2
sample-domain1-managed-server3   1/1       Running       0          1m        10.244.1.12   10.0.12.2
sample-domain1-managed-server4   1/1       Running       0          3m        10.244.2.11   10.0.10.2

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   0          9m        10.244.1.11   10.0.12.2
sample-domain1-managed-server1   0/1       Running   0          21s       10.244.2.13   10.0.10.2
sample-domain1-managed-server2   1/1       Running   0          2m        10.244.2.12   10.0.10.2
sample-domain1-managed-server3   1/1       Running   0          4m        10.244.1.12   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          7m        10.244.2.11   10.0.10.2

Here’s an overview of the current situation:

Server Name (Console)    NAME (kubectl)                    NODE        nodeSelector (domain.yaml)
admin-server             sample-domain1-admin-server       10.0.12.2   node: 3
managed-server1          sample-domain1-managed-server1    10.0.10.2   node: 1
managed-server2          sample-domain1-managed-server2    10.0.10.2   node: 1
managed-server3          sample-domain1-managed-server3    10.0.12.2   node: 3
managed-server4          sample-domain1-managed-server4    10.0.10.2   node: 1

Then I tried another configuration, also using the second Node.

I used the following command to label the second Node:

kubectl label nodes 10.0.11.2 node=2

The following label is used:

Label key   Label value
node        2

With the following output:

node "10.0.11.2" labeled

I changed the file domain.yaml according to the table below:

Oracle WebLogic Server Administration Console kubectl get pods -n sample-domain1-ns -o wide domain.yaml
Server Name NAME NODE nodeSelector
admin-server sample-domain1-admin-server 10.0.11.2 node: 2
managed-server1 sample-domain1-managed-server1 10.0.12.2 node: 3
managed-server2 sample-domain1-managed-server2 10.0.11.2 node: 2
managed-server3 sample-domain1-managed-server3 10.0.10.2 node: 1
managed-server4 sample-domain1-managed-server4 10.0.11.2 node: 2

In the way I described before, I applied the changes and listed the Pods. In the end, this was the output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   0          10m       10.244.0.12   10.0.11.2
sample-domain1-managed-server1   1/1       Running   0          2m        10.244.1.13   10.0.12.2
sample-domain1-managed-server2   1/1       Running   0          4m        10.244.0.14   10.0.11.2
sample-domain1-managed-server3   1/1       Running   0          6m        10.244.2.14   10.0.10.2
sample-domain1-managed-server4   1/1       Running   0          8m        10.244.0.13   10.0.11.2


Deleting a label and commenting out the nodeSelector entries in the file domain.yaml
To remove a node assignment, delete the corresponding label from the node.

I used the following command to delete the label of the first Node:

kubectl label nodes 10.0.10.2 node-

With the following output:

node "10.0.10.2" labeled

In the same way, I deleted the labels of nodes 10.0.11.2 and 10.0.12.2.
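To confirm that the labels are gone, the label can be shown as an extra column (via the -L flag); after the deletion, the column stays empty:

kubectl get nodes -L node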

I commented out the entries I had added for node assignment in the file domain.yaml, as sketched below.
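As an illustration (a sketch, shown here only for managed-server1; the other entries looked the same), this is roughly what the commented-out part of the file domain.yaml looked like:

managedServers:
- serverName: managed-server1
#    serverPod:
#      nodeSelector:
#        node: "1"

In the way I described before, I applied the changes and listed the Pods. After a while, this was the output: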

NAME                             READY     STATUS        RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running       0          59s       10.244.1.14   10.0.12.2
sample-domain1-managed-server1   1/1       Running       0          44m       10.244.1.13   10.0.12.2
sample-domain1-managed-server2   1/1       Running       0          46m       10.244.0.14   10.0.11.2
sample-domain1-managed-server3   1/1       Running       0          48m       10.244.2.14   10.0.10.2
sample-domain1-managed-server4   1/1       Terminating   0          50m       10.244.0.13   10.0.11.2

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   0          10m       10.244.1.14   10.0.12.2
sample-domain1-managed-server1   1/1       Running   0          2m        10.244.1.15   10.0.12.2
sample-domain1-managed-server2   1/1       Running   0          4m        10.244.2.16   10.0.10.2
sample-domain1-managed-server3   1/1       Running   0          6m        10.244.2.15   10.0.10.2
sample-domain1-managed-server4   1/1       Running   0          8m        10.244.0.15   10.0.11.2

Here’s an overview of the current situation:

Server Name (Console)    NAME (kubectl)                    NODE        nodeSelector (domain.yaml)
admin-server             sample-domain1-admin-server       10.0.12.2   n.a.
managed-server1          sample-domain1-managed-server1    10.0.12.2   n.a.
managed-server2          sample-domain1-managed-server2    10.0.10.2   n.a.
managed-server3          sample-domain1-managed-server3    10.0.10.2   n.a.
managed-server4          sample-domain1-managed-server4    10.0.11.2   n.a.

So, the Pods were relocated/restarted again, based on the scheduler’s decisions.

Assigning WebLogic Pods to a licensed node

This use case is similar to the previous one, whereby individual servers/Pods were assigned to particular node(s). However, the focus in this use case is on license coverage.
At v1.13, Kubernetes supports clusters with up to 5000(!) nodes. However, certain software, like WebLogic, requires a license. Using the nodeSelector feature, Kubernetes ensures that the WebLogic Pods end up on licensed worker node(s) only.

Now, I will describe how to assign all WebLogic pods (WebLogic domain) to a particular node.

Labelling
I used the following command to label the second Node:

kubectl label nodes 10.0.11.2 licensed-for-weblogic=true

The following label is used:

Label key               Label value
licensed-for-weblogic   true

With the following output:

node "10.0.11.2" labeled
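To check which worker nodes are covered by the license (that is, carry the label), the node list can be filtered on it:

kubectl get nodes -l licensed-for-weblogic=true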

Modifying the domain resource definition
I changed the serverPod part of the file domain.yaml to the following content (adding the property nodeSelector), in order to define the placement of all the WebLogic Pods:

serverPod:
  env:
  [...]
  nodeSelector:
    licensed-for-weblogic: "true"

In the way I described before, I applied the changes and listed the Pods. After a while, this was the output:

NAME                             READY     STATUS        RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Terminating   0          3h        10.244.1.14   10.0.12.2
sample-domain1-managed-server1   1/1       Running       0          3h        10.244.1.15   10.0.12.2
sample-domain1-managed-server2   1/1       Running       0          3h        10.244.2.16   10.0.10.2
sample-domain1-managed-server3   1/1       Running       0          3h        10.244.2.15   10.0.10.2
sample-domain1-managed-server4   1/1       Running       0          3h        10.244.0.15   10.0.11.2

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   0          8m        10.244.0.16   10.0.11.2
sample-domain1-managed-server1   1/1       Running   0          5s        10.244.0.20   10.0.11.2
sample-domain1-managed-server2   1/1       Running   0          1m        10.244.0.19   10.0.11.2
sample-domain1-managed-server3   1/1       Running   0          3m        10.244.0.18   10.0.11.2
sample-domain1-managed-server4   1/1       Running   0          6m        10.244.0.17   10.0.11.2

Here’s an overview of the current situation:

Server Name (Console)    NAME (kubectl)                    NODE        nodeSelector (domain.yaml)
admin-server             sample-domain1-admin-server       10.0.11.2   licensed-for-weblogic: true
managed-server1          sample-domain1-managed-server1    10.0.11.2   licensed-for-weblogic: true
managed-server2          sample-domain1-managed-server2    10.0.11.2   licensed-for-weblogic: true
managed-server3          sample-domain1-managed-server3    10.0.11.2   licensed-for-weblogic: true
managed-server4          sample-domain1-managed-server4    10.0.11.2   licensed-for-weblogic: true

Deleting a label and commenting out the nodeSelector entries in the file domain.yaml
To remove the node assignment, delete the corresponding label from the node.

I used the following command to delete the label of the second Node:

kubectl label nodes 10.0.11.2 licensed-for-weblogic-

With the following output:

node "10.0.11.2" labeled

I commented out the entries I had added for node assignment in the file domain.yaml. In the way I described before, I applied the changes and listed the Pods. After a while, this was the output:

NAME                             READY     STATUS        RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running       0          59s       10.244.1.16   10.0.12.2
sample-domain1-managed-server1   1/1       Running       0          4m        10.244.0.20   10.0.11.2
sample-domain1-managed-server2   1/1       Running       0          6m        10.244.0.19   10.0.11.2
sample-domain1-managed-server3   1/1       Running       0          8m        10.244.0.18   10.0.11.2
sample-domain1-managed-server4   1/1       Terminating   0          10m       10.244.0.17   10.0.11.2

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   0          12m       10.244.1.16   10.0.12.2
sample-domain1-managed-server1   1/1       Running   0          3m        10.244.0.21   10.0.11.2
sample-domain1-managed-server2   1/1       Running   0          5m        10.244.2.18   10.0.10.2
sample-domain1-managed-server3   1/1       Running   0          7m        10.244.1.17   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          10m       10.244.2.17   10.0.10.2

Here’s an overview of the current situation:

Server Name (Console)    NAME (kubectl)                    NODE        nodeSelector (domain.yaml)
admin-server             sample-domain1-admin-server       10.0.12.2   n.a.
managed-server1          sample-domain1-managed-server1    10.0.11.2   n.a.
managed-server2          sample-domain1-managed-server2    10.0.10.2   n.a.
managed-server3          sample-domain1-managed-server3    10.0.12.2   n.a.
managed-server4          sample-domain1-managed-server4    10.0.10.2   n.a.

Again, the Pods were relocated/restarted, based on the scheduler’s decisions.

Scaling up the number of Managed Servers to 7

Now, I will describe what happens when you scale up the number of Managed Servers (via the replicas property) to a number greater than the Dynamic Cluster Size (being 5 in my case).

I changed the clusters part of the file domain.yaml to the following content, in order to scale up the WebLogic cluster in Kubernetes with extra Managed Servers:

  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 7

In the way I described before, I applied the changes and listed the Pods. After a while, this was the output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          29d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   1/1       Running   0          8d        10.244.2.21   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   0/1       Running   0          10s       10.244.1.28   10.0.12.2

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          29d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   1/1       Running   0          8d        10.244.2.21   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   1/1       Running   0          1m        10.244.1.28   10.0.12.2

Here’s an overview of the current situation:

Server Name (Console)    NAME (kubectl)                    NODE        nodeSelector (domain.yaml)
admin-server             sample-domain1-admin-server       10.0.12.2   n.a.
managed-server1          sample-domain1-managed-server1    10.0.11.2   n.a.
managed-server2          sample-domain1-managed-server2    10.0.10.2   n.a.
managed-server3          sample-domain1-managed-server3    10.0.12.2   n.a.
managed-server4          sample-domain1-managed-server4    10.0.10.2   n.a.
managed-server5          sample-domain1-managed-server5    10.0.12.2   n.a.

So, the “operator” started an extra Managed Server (managed-server5) on a node chosen by the scheduler; the existing Pods stayed where they were.


In the Oracle WebLogic Server Administration Console, you can see that only 5 Managed Servers are running (the maximum defined by the Dynamic Cluster Size property) and not 7.
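Since the Domain is a Kubernetes custom resource, you can also inspect the Domain resource (including the applied replicas value) directly with standard kubectl, for example:

kubectl describe domain sample-domain1 -n sample-domain1-ns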

Scaling up the number of Managed Servers by using the Oracle WebLogic Server Administration Console (not recommended by Oracle)

Remember the remark in my previous article:
Do not use the console to scale the cluster. The “operator” controls this operation. Use the operator’s options to scale your cluster deployed on Kubernetes.
[https://technology.amis.nl/2019/10/14/changing-the-configuration-of-an-oracle-weblogic-domain-deployed-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator-part-1/]

And another related remark:
Do not use the WebLogic Server Administration Console to start or stop servers.
[https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/domain-lifecycle/startup/#starting-and-stopping-servers]
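As a side note, besides editing and re-applying the file domain.yaml, the replicas value can also be patched directly on the Domain resource, which is one of the operator-friendly ways to scale; a sketch (assuming cluster-1 is the first entry under spec.clusters):

kubectl patch domain sample-domain1 -n sample-domain1-ns --type=json -p='[{"op": "replace", "path": "/spec/clusters/0/replicas", "value": 5}]'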

Of course, because I worked in a lab environment, using a free Oracle Cloud trial account, I was curious what would happen if I did use the Oracle WebLogic Server Administration Console to stop a Managed Server.

Shutting down a Managed Server via the Oracle WebLogic Server Administration Console
In the Oracle WebLogic Server Administration Console, via Summary of Servers | tab Control, I selected the Managed Server named managed-server2 and chose Shutdown | Force shutdown now.


In the pop-up, I clicked the “Yes” button.


After some time, the shutdown task for Managed Server named managed-server2 was completed.

Next, I opened a browser and started the Kubernetes Web UI (Dashboard). I changed the namespace to sample-domain1-ns and clicked on Workloads | Pods:


Here you can see that the Pod with name sample-domain1-managed-server2 shows the following error message:
Readiness probe failed: Get http://10.244.2.21:8001/weblogic: dial tcp 10.244.2.21:8001: connect: connection refused

On the right I clicked on the Logs of the Pod:


Here you see part of the Logs:


<Oct 21, 2019 6:39:36,323 PM GMT> <Notice> <Server> <BEA-002638> <Force shutdown was issued remotely from 10.244.1.21:7001.> 
<Oct 21, 2019 6:39:36,323 PM GMT> <Notice> <WebLogicServer> <BEA-000396> <Server shutdown has been requested by weblogic.> 
<Oct 21, 2019 6:39:36,344 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SUSPENDING.> 
<Oct 21, 2019 6:39:36,351 PM GMT> <Notice> <Cluster> <BEA-000163> <Stopping "async" replication service> 
<Oct 21, 2019 6:39:36,437 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.> 
<Oct 21, 2019 6:39:36,441 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN.> 
Oct 21, 2019 6:39:36 PM weblogic.wsee.WseeCoreMessages logWseeServiceHalting
INFO: The Wsee Service is halting
<Oct 21, 2019 6:39:36,457 PM GMT> <Notice> <Log Management> <BEA-170037> <The log monitoring service timer has been stopped.> 
<Oct 21, 2019 6:39:42,185 PM GMT> <Warning> <JMX> <BEA-149513> <JMX Connector Server stopped at service:jmx:iiop://sample-domain1-managed-server2:8001/jndi/weblogic.management.mbeanservers.runtime.> 
<Oct 21, 2019 6:39:42 PM GMT> <INFO> <NodeManager> <The server 'managed-server2' with process id 1102 is no longer alive; waiting for the process to die.>
<Oct 21, 2019 6:39:42 PM GMT> <FINEST> <NodeManager> <Process died.>
<Oct 21, 2019 6:39:42 PM GMT> <INFO> <NodeManager> <Server was shut down normally>

In the (Kubernetes) Logs, you can see that I remotely issued a Managed Server shutdown (from the Oracle WebLogic Server Administration Console).

I used the following command to list the Pods:

kubectl get pods -n sample-domain1-ns -o wide

With the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          30d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   0/1       Running   0          8d        10.244.2.21   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   1/1       Running   0          16m       10.244.1.28   10.0.12.2

Starting a Managed Server via the Oracle WebLogic Server Administration Console
In the Oracle WebLogic Server Administration Console, via Summary of Servers | tab Control, I selected the Managed Server named managed-server2 and chose Start.

But this restart didn’t work.


The following messages are shown:

  • The server managed-server2 does not have a machine associated with it.
  • Warning: All of the servers selected are currently in a state which is incompatible with this operation or are not associated with a running Node Manager or you are not authorized to perform the action requested. No action will be performed.

Starting a Managed Server via the file domain.yaml
Also, a restart via the file domain.yaml didn’t work. I listed the Pods, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          30d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   0/1       Running   0          8d        10.244.2.21   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   1/1       Running   0          17m       10.244.1.28   10.0.12.2

As you can see, the Pod named sample-domain1-managed-server2 is not ready (READY 0/1).

Deleting the Pod named sample-domain1-managed-server2 via the Kubernetes Web UI (Dashboard)

From the Kubernetes Web UI (Dashboard), I deleted the Pod named sample-domain1-managed-server2.


In the pop-up “Delete a Pod”, I chose DELETE.
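As a side note, the same deletion can be done from the command line; the equivalent kubectl command would be:

kubectl delete pod sample-domain1-managed-server2 -n sample-domain1-ns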

In the way I described before, I listed the Pods. After a while, this was the output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          30d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   0/1       Running   0          36s       10.244.2.22   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   1/1       Running   0          21m       10.244.1.28   10.0.12.2


After a short while, I checked the Kubernetes Web UI (Dashboard), where I could see that the Pod named sample-domain1-managed-server2 was running again.

In the way I described before, I listed the Pods, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          30d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   1/1       Running   0          1m        10.244.2.22   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   1/1       Running   0          22m       10.244.1.28   10.0.12.2

So, apparently the “operator” made sure that all the Managed Servers (5 in my case) were running normally again.

As you can see, I had to do a lot of steps to get the Managed Server running again.

So, Oracle was right in its remark about not using the WebLogic Server Administration Console to start or stop servers (deployed on OKE).

Deleting the Pod named sample-domain1-managed-server5 via the Kubernetes Web UI (Dashboard)

From the Kubernetes Web UI (Dashboard), I deleted the Pod with name sample-domain1-managed-server5.


In the pop-up “Delete a Pod”, I chose DELETE.


In the Oracle WebLogic Server Administration Console, you can see that the shutdown of the Managed Server named managed-server5 was completed.

In the way I described before, I listed the Pods. After a while, this was the output:

NAME                             READY     STATUS        RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running       4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running       1          30d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   1/1       Running       0          12m       10.244.2.22   10.0.10.2
sample-domain1-managed-server3   1/1       Running       1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running       0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   1/1       Terminating   0          33m       10.244.1.28   10.0.12.2

And again, after a while, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          30d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   1/1       Running   0          13m       10.244.2.22   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   0/1       Running   0          46s       10.244.1.29   10.0.12.2

Apparently, the “operator” was restarting the fifth Managed Server.


So, I checked this in the Oracle WebLogic Server Administration Console, where I could see that the Managed Server named managed-server5 was running again.

Once again, I listed the Pods, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
sample-domain1-admin-server      1/1       Running   4          30d       10.244.1.21   10.0.12.2
sample-domain1-managed-server1   1/1       Running   1          30d       10.244.0.28   10.0.11.2
sample-domain1-managed-server2   1/1       Running   0          15m       10.244.2.22   10.0.10.2
sample-domain1-managed-server3   1/1       Running   1          8d        10.244.1.27   10.0.12.2
sample-domain1-managed-server4   1/1       Running   0          8d        10.244.2.19   10.0.10.2
sample-domain1-managed-server5   1/1       Running   0          2m        10.244.1.29   10.0.12.2

So, again the “operator” made sure that all the Managed Servers (5 in my case) were running normally again.


Finally, I checked the Kubernetes Web UI (Dashboard), where I could see that the Pod named sample-domain1-managed-server5 was automatically restarted and running again.

On the right I clicked on the Logs of the Pod:


Here you see part of the Logs:


<Oct 21, 2019 7:08:14 PM GMT> <Info> <WebLogicServer> <BEA-000377> <Starting WebLogic Server with Java HotSpot(TM) 64-Bit Server VM Version 25.211-b12 from Oracle Corporation.> 
<Oct 21, 2019 7:08:14 PM GMT> <Info> <RCM> <BEA-2165021> <"ResourceManagement" is not enabled in this JVM. Enable "ResourceManagement" to use the WebLogic Server "Resource Consumption Management" feature. To enable "ResourceManagement", you must specify the following JVM options in the WebLogic Server instance in which the JVM runs: -XX:+UnlockCommercialFeatures -XX:+ResourceManagement.> 
<Oct 21, 2019 7:08:15 PM GMT> <Info> <Management> <BEA-141107> <Version: WebLogic Server 12.2.1.3.0 Thu Aug 17 13:39:49 PDT 2017 1882952> 
<Oct 21, 2019 7:08:19 PM GMT> <Info> <Management> <BEA-141330> <Loading situational config file: /u01/oracle/user_projects/domains/sample-domain1/optconfig/introspector-situational-config.xml> 
<Oct 21, 2019 7:08:20 PM GMT> <Info> <Management> <BEA-141330> <Loading situational config file: /u01/oracle/user_projects/domains/sample-domain1/optconfig/jdbc/testDatasource-3399-jdbc-situational-config.xml> 
<Oct 21, 2019 7:08:21 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.> 
<Oct 21, 2019 7:08:21 PM GMT> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool.> 
<Oct 21, 2019 7:08:21 PM GMT> <Info> <WorkManager> <BEA-002942> <CMM memory level becomes 0. Setting standby thread pool size to 256.> 
<Oct 21, 2019 7:08:23,946 PM GMT> <Notice> <Log Management> <BEA-170019> <The server log file weblogic.logging.FileStreamHandler instance=893924277
Current log file=/u01/oracle/user_projects/domains/sample-domain1/servers/managed-server5/logs/managed-server5.log
Rotation dir=/u01/oracle/user_projects/domains/sample-domain1/servers/managed-server5/logs
 is opened. All server side log events will be written to this file.> 
<Oct 21, 2019 7:08:29,391 PM GMT> <Notice> <Security> <BEA-090946> <Security pre-initializing using security realm: myrealm> 
<Oct 21, 2019 7:08:31,246 PM GMT> <Notice> <Security> <BEA-090947> <Security post-initializing using security realm: myrealm> 
<Oct 21, 2019 7:08:39,125 PM GMT> <Notice> <Security> <BEA-090082> <Security initialized using administrative security realm: myrealm> 
<Oct 21, 2019 7:08:40,242 PM GMT> <Notice> <JMX> <BEA-149512> <JMX Connector Server started at service:jmx:iiop://sample-domain1-managed-server5:8001/jndi/weblogic.management.mbeanservers.runtime.> 
<Oct 21, 2019 7:08:45,911 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STANDBY.> 
<Oct 21, 2019 7:08:45,914 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING.> 
<Oct 21, 2019 7:08:46,066 PM GMT> <Notice> <Log Management> <BEA-170036> <The Logging monitoring service timer has started to check for logged message counts every 30 seconds.> 
<Oct 21, 2019 7:08:51,621 PM GMT> <Notice> <Cluster> <BEA-000197> <Listening for announcements from cluster using unicast cluster messaging> 
<Oct 21, 2019 7:08:51,725 PM GMT> <Notice> <Log Management> <BEA-170027> <The server has successfully established a connection with the Domain level Diagnostic Service.> 
<Oct 21, 2019 7:08:53,839 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to ADMIN.> 
<Oct 21, 2019 7:08:53,979 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RESUMING.> 
<Oct 21, 2019 7:08:54,190 PM GMT> <Notice> <Cluster> <BEA-000162> <Starting "async" replication service with remote cluster address "null"> 
<Oct 21, 2019 7:08:54,261 PM GMT> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 10.244.1.29:8001 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.> 
<Oct 21, 2019 7:08:54,262 PM GMT> <Notice> <Server> <BEA-002613> <Channel "Default" is now listening on 10.244.1.29:8001 for protocols iiop, t3, CLUSTER-BROADCAST, ldap, snmp, http.> 
<Oct 21, 2019 7:08:54,261 PM GMT> <Notice> <WebLogicServer> <BEA-000330> <Started the WebLogic Server Managed Server "managed-server5" for domain "sample-domain1" running in production mode.> 
<Oct 21, 2019 7:08:54,418 PM GMT> <Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.> 
<Oct 21, 2019 7:08:54,470 PM GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to RUNNING.>

In the (Kubernetes) Logs, you can see that after I deleted the Pod named sample-domain1-managed-server5 (from the Kubernetes Web UI (Dashboard)), the Pod was automatically restarted and running again.

So now it’s time to conclude this article. In this article I described how I made several changes to the configuration of a WebLogic domain:

  • Assigning WebLogic Pods to particular nodes
  • Assigning WebLogic Pods to a licensed node
  • Scaling up the number of Managed Servers (via the replicas property) to a number greater than the Dynamic Cluster Size
  • Scaling up the number of Managed Servers by using the Oracle WebLogic Server Administration Console (not recommended by Oracle)

For changing the configuration of a WebLogic domain on Kubernetes, I used a domain resource definition (domain.yaml) which contains the necessary parameters for the “operator” to start the WebLogic domain properly.

In this article (and the previous ones in the series) about using the Oracle WebLogic Server Kubernetes Operator, I of course used a lot of material provided to us at the Oracle Partner PaaS Summer Camp IX 2019 in Lisbon, held at the end of August.

I can highly recommend going to an Oracle Partner Camp to learn about new products, while the specialists are there on site to help you with the labs (using a free Oracle Cloud trial account) and answer your questions. So, thank you Oracle (in particular: Jürgen Kress) for organizing this for your partners.