Changing the configuration of an Oracle WebLogic Domain, deployed on a Kubernetes cluster using Oracle WebLogic Server Kubernetes Operator (part 1)


At the Oracle Partner PaaS Summer Camp IX 2019 in Lisbon, held at the end of August, I attended a five-day workshop called “Modern Application Development with Oracle Cloud”. On day 4 of this workshop, the topic was “WebLogic on Kubernetes”.
[https://paascommunity.com/2019/09/02/oracle-paas-summer-camp-2019-results-become-a-trained-certified-oracle-cloud-platform-expert/]

At the Summer Camp we used a free Oracle Cloud trial account.

On day 4, I did a hands-on lab in which an Oracle WebLogic Domain was deployed on an Oracle Container Engine for Kubernetes (OKE) cluster using Oracle WebLogic Server Kubernetes Operator.

In a previous article I described the steps that I went through to get an Oracle WebLogic Domain running on a three-node Kubernetes cluster instance (provisioned by Oracle Container Engine for Kubernetes (OKE)) on Oracle Cloud Infrastructure (OCI) in an existing OCI tenancy. The Oracle WebLogic Server Kubernetes Operator (the “operator”), which is an application-specific controller that extends Kubernetes, was used because it simplifies the management and operation of WebLogic domains and deployments.
[https://technology.amis.nl/2019/09/28/deploying-an-oracle-weblogic-domain-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator/]

In this article, I will describe how I made several changes to the configuration of a WebLogic domain:

  • Scaling up the number of managed servers
  • Overriding the WebLogic domain configuration
  • Application lifecycle management (ALM), using a new WebLogic Docker image

In a next article I will describe (among other things) how I made several other changes to the configuration of the WebLogic domain, for example:

  • Assigning WebLogic Pods to particular nodes
  • Assigning WebLogic Pods to a licensed node

In order to get an Oracle WebLogic Domain running on a Kubernetes cluster instance (provisioned by OKE) on Oracle Cloud Infrastructure (OCI) in an existing OCI tenancy, a number of steps had to be taken.

Also, some tools (like kubectl) had to be used. At the Summer Camp our instructors provided us with a VirtualBox appliance for this.

For OKE cluster creation, I used the Quick Create feature, which uses default settings to create a quick cluster with new network resources as required.

Using Oracle WebLogic Server Kubernetes Operator for deploying a WebLogic domain on Kubernetes

In my previous article I described how I used the Oracle WebLogic Server Kubernetes Operator (the “operator”) to simplify the management and operation of WebLogic domains and deployments.
[https://technology.amis.nl/2019/09/28/deploying-an-oracle-weblogic-domain-on-a-kubernetes-cluster-using-oracle-weblogic-server-kubernetes-operator/]

For deploying a WebLogic domain on Kubernetes, I downloaded a domain resource definition which contains the necessary parameters for the “operator” to start the WebLogic domain properly.

I used the following command to download the domain resource YAML file and save it as /u01/domain.yaml:
[https://raw.githubusercontent.com/nagypeter/weblogic-operator-tutorial/master/k8s/domain_short.yaml]

curl -LSs https://raw.githubusercontent.com/nagypeter/weblogic-operator-tutorial/master/k8s/domain_short.yaml >/u01/domain.yaml

The file domain.yaml has the following content:

# Copyright 2017, 2019, Oracle Corporation and/or its affiliates. All rights reserved.

# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
# This is an example of how to define a Domain resource.  Please read through the comments which explain
# what updates are needed.
#
apiVersion: "weblogic.oracle/v2"
kind: Domain
metadata:
  # Update this with the `domainUID` of your domain:
  name: sample-domain1
  # Update this with the namespace your domain will run in:
  namespace: sample-domain1-ns
  labels:
    weblogic.resourceVersion: domain-v2
    # Update this with the `domainUID` of your domain:
    weblogic.domainUID: sample-domain1

spec:
  # This parameter provides the location of the WebLogic Domain Home (from the container's point of view).
  # Note that this might be in the image itself or in a mounted volume or network storage.
  domainHome: /u01/oracle/user_projects/domains/sample-domain1

  # If the domain home is inside the Docker image, set this to `true`, otherwise set `false`:
  domainHomeInImage: true

  # Update this with the name of the Docker image that will be used to run your domain:
  #image: "YOUR_OCI_REGION_CODE.ocir.io/YOUR_TENANCY_NAME/weblogic-operator-tutorial:latest"
  #image: "fra.ocir.io/johnpsmith/weblogic-operator-tutorial:latest"
  image: "iad.ocir.io/weblogick8s/weblogic-operator-tutorial-store:1.0"

  # imagePullPolicy defaults to "Always" if image version is :latest
  imagePullPolicy: "Always"

  # If credentials are needed to pull the image, uncomment this section and identify which
  # Secret contains the credentials for pulling an image:
  #imagePullSecrets:
  #- name: ocirsecret

  # Identify which Secret contains the WebLogic Admin credentials (note that there is an example of
  # how to create that Secret at the end of this file)
  webLogicCredentialsSecret:
    # Update this with the name of the secret containing your WebLogic server boot credentials:
    name: sample-domain1-weblogic-credentials

  # If you want to include the server out file into the pod's stdout, set this to `true`:
  includeServerOutInPodLog: true

  # If you want to use a mounted volume as the log home, i.e. to persist logs outside the container, then
  # uncomment this and set it to `true`:
  # logHomeEnabled: false
  # The in-pod name of the directory to store the domain, node manager, server logs, and server .out
  # files in.
  # If not specified or empty, domain log file, server logs, server out, and node manager log files
  # will be stored in the default logHome location of /shared/logs/<domainUID>/.
  # logHome: /shared/logs/domain1

  # serverStartPolicy legal values are "NEVER", "IF_NEEDED", or "ADMIN_ONLY"
  # This determines which WebLogic Servers the Operator will start up when it discovers this Domain
  # - "NEVER" will not start any server in the domain
  # - "ADMIN_ONLY" will start up only the administration server (no managed servers will be started)
  # - "IF_NEEDED" will start all non-clustered servers, including the administration server and clustered servers up to the replica count
  serverStartPolicy: "IF_NEEDED"
#  restartVersion: "applicationV2"
  serverPod:
    # an (optional) list of environment variable to be set on the servers
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Xms64m -Xmx256m "
#    nodeSelector:
#      licensed-for-weblogic: true

    # If you are storing your domain on a persistent volume (as opposed to inside the Docker image),
    # then uncomment this section and provide the PVC details and mount path here (standard images
    # from Oracle assume the mount path is `/shared`):
    # volumes:
    # - name: weblogic-domain-storage-volume
    #   persistentVolumeClaim:
    #     claimName: domain1-weblogic-sample-pvc
    # volumeMounts:
    # - mountPath: /shared
    #   name: weblogic-domain-storage-volume

  # adminServer is used to configure the desired behavior for starting the administration server.
  adminServer:
    # serverStartState legal values are "RUNNING" or "ADMIN"
    # "RUNNING" means the listed server will be started up to "RUNNING" mode
    # "ADMIN" means the listed server will be start up to "ADMIN" mode
    serverStartState: "RUNNING"
    adminService:
      channels:
       # Update this to set the NodePort to use for the Admin Server's default channel (where the
       # admin console will be available):
       - channelName: default
         nodePort: 30701
       # Uncomment to export the T3Channel as a service
       #- channelName: T3Channel
#    serverPod:
#      nodeSelector:
#        wlservers2: true
#  managedServers:
#  - serverName: managed-server1
#    serverPod:
#      nodeSelector:
#        wlservers1: true
#  - serverName: managed-server2
#    serverPod:
#      nodeSelector:
#        wlservers1: true
#  - serverName: managed-server3
#    serverPod:
#      nodeSelector:
#        wlservers2: true
  # clusters is used to configure the desired behavior for starting member servers of a cluster.
  # If you use this entry, then the rules will be applied to ALL servers that are members of the named clusters.
  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 2
  # The number of managed servers to start for any unlisted clusters
  # replicas: 1
  #
#  configOverrides: jdbccm
#  configOverrideSecrets: [dbsecret]

I used the following command to create the Domain:

kubectl apply -f /u01/domain.yaml

With the following output:

domain "sample-domain1" created
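The result can be verified by querying the custom resource itself. This is a sketch, assuming the default names used in this tutorial, and guarded so it is a no-op where kubectl is not available:

```shell
# Optional verification (sketch; domain and namespace names assumed from this tutorial).
NS=sample-domain1-ns
if command -v kubectl >/dev/null 2>&1; then
  # Shows the Domain custom resource the "operator" is acting on.
  kubectl get domain sample-domain1 -n "$NS" || true
fi
```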

Let’s take a closer look at the domain resource yaml file.

Part: spec | domainHome
Value: /u01/oracle/user_projects/domains/sample-domain1
Description: This parameter provides the location of the WebLogic Domain Home (from the container’s point of view). Note that this might be in the image itself or in a mounted volume or network storage.

Part: spec | domainHomeInImage
Value: true
Description: If the domain home is inside the Docker image, set this to `true`, otherwise set `false`.

Part: spec | image
Value: "iad.ocir.io/weblogick8s/weblogic-operator-tutorial-store:1.0"
Description: The name of the Docker image that will be used to run the domain.

Part: spec | adminServer
Description: Used to configure the desired behavior for starting the administration server.

Part: spec | adminServer | serverStartState
Value: "RUNNING"
Description: Legal values are "RUNNING" or "ADMIN". "RUNNING" means the listed server will be started up to "RUNNING" mode; "ADMIN" means the listed server will be started up to "ADMIN" mode.

Part: spec | clusters
Description: Used to configure the desired behavior for starting member servers of a cluster. If you use this entry, then the rules will be applied to ALL servers that are members of the named clusters.

Part: spec | clusters | clusterName
Value: cluster-1
Description: The name of the cluster these rules apply to.

Part: spec | clusters | serverStartState
Value: "RUNNING"
Description: Legal values are "RUNNING" or "ADMIN" (see above).

Part: spec | clusters | replicas
Value: 2
Description: The number of managed servers to start for this cluster.

Opening the Oracle WebLogic Server Administration Console

As you may remember from my previous article, I opened a browser and logged in to the Oracle WebLogic Server Administration Console and on the left, in the Domain Structure, I clicked on “Environment”.

There you can see that the Domain (named: sample-domain1) has 1 running Administration Server (named: admin-server) and 2 running Managed Servers (named: managed-server1 and managed-server2). The Managed Servers are configured to be part of a WebLogic Server cluster (named: cluster-1).

Scaling a WebLogic cluster

WebLogic Server supports two types of clustering configurations, configured and dynamic. Configured clusters are created by manually configuring each individual Managed Server instance. In dynamic clusters, the Managed Server configurations are generated from a single, shared template. With dynamic clusters, when additional server capacity is needed, new server instances can be added to the cluster without having to manually configure them individually. Also, unlike configured clusters, the size of a dynamic cluster is not restricted to the set of servers defined in the cluster, but can be increased based on runtime demands.

When you create a dynamic cluster, the dynamic servers are preconfigured and automatically generated for you, enabling you to easily scale up the number of server instances in your dynamic cluster when you need additional server capacity. You can simply start the dynamic servers without having to first manually configure and add them to the cluster.

If you need additional server instances on top of the number you originally specified, you can increase the maximum number of server instances (dynamic) in the dynamic cluster configuration or manually add configured server instances to the dynamic cluster.
[https://docs.oracle.com/middleware/1221/wls/CLUST/dynamic_clusters.htm#CLUST705]

The Oracle WebLogic Server Kubernetes Operator provides several ways to initiate scaling of WebLogic clusters, including:

  • On-demand, updating the domain resource directly (using kubectl).
  • Calling the operator’s REST scale API, for example, from curl.
  • Using a WLDF policy rule and script action to call the operator’s REST scale API.
  • Using a Prometheus alert action to call the operator’s REST scale API.
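As an illustration of the REST option, here is a hedged sketch. In version 2.x of the “operator”, the scaling endpoint has the form /operator/latest/domains/<domainUID>/clusters/<clusterName>/scale; the host, port and bearer token below are assumptions, not values taken from this tutorial:

```shell
# Sketch: scale cluster-1 via the operator's REST API (v2.x path format).
# OPERATOR_URL and TOKEN are assumptions; in a real cluster, OPERATOR_URL is the
# operator's external REST endpoint and TOKEN a service account bearer token.
OPERATOR_URL="${OPERATOR_URL:-https://localhost:31001}"
DOMAIN_UID=sample-domain1
CLUSTER=cluster-1
SCALE_PATH="/operator/latest/domains/${DOMAIN_UID}/clusters/${CLUSTER}/scale"
if [ -n "${TOKEN:-}" ]; then
  # managedServerCount is the desired number of running managed servers.
  curl -k -X POST "${OPERATOR_URL}${SCALE_PATH}" \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "X-Requested-By: MyClient" \
    -H "Accept: application/json" \
    -H "Content-Type: application/json" \
    -d '{ "managedServerCount": 3 }'
else
  # Without a token, just show the request that would be sent.
  echo "would POST to ${OPERATOR_URL}${SCALE_PATH}"
fi
```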

Scaling a WebLogic cluster using kubectl
The easiest way to scale a WebLogic cluster in Kubernetes is to simply edit the replicas property within the domain resource. To retain the changes, edit domain.yaml and apply the changes using kubectl.

I changed the clusters part of the file domain.yaml to the following content:

  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 3

I used the following command to apply the changes:

kubectl apply -f /u01/domain.yaml

With the following output:

domain "sample-domain1" configured

I used the following command to list the Pods:

kubectl get pods -n sample-domain1-ns

After a short while, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE
sample-domain1-admin-server      1/1       Running   0          21d
sample-domain1-managed-server1   1/1       Running   1          21d
sample-domain1-managed-server2   1/1       Running   1          21d
sample-domain1-managed-server3   0/1       Running   0          1m

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE
sample-domain1-admin-server      1/1       Running   0          21d
sample-domain1-managed-server1   1/1       Running   1          21d
sample-domain1-managed-server2   1/1       Running   1          21d
sample-domain1-managed-server3   1/1       Running   0          4m

In the Oracle WebLogic Server Administration Console, you can see that a third Managed Server is running (named: managed-server3).

Remark:
You can edit the running domain resource directly by using the kubectl edit command. In that case, your local domain.yaml will no longer reflect the configuration of the running domain.

kubectl edit domain DOMAIN_UID -n DOMAIN_NAMESPACE

If you use the default settings, the syntax is:

kubectl edit domain sample-domain1 -n sample-domain1-ns

This opens the resource in a vi-like editor.
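Another option, sketched below under the assumption of the default names from this tutorial, is to change only the replica count with kubectl patch, which avoids opening an editor at all:

```shell
# Sketch: merge-patch only cluster-1's entry (names assumed from this tutorial).
# Note: a JSON merge patch replaces the whole clusters list, so repeat any
# sibling fields (like serverStartState) that should be kept.
PATCH='{"spec":{"clusters":[{"clusterName":"cluster-1","serverStartState":"RUNNING","replicas":3}]}}'
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch domain sample-domain1 -n sample-domain1-ns \
    --type merge -p "$PATCH" || true
fi
```

As with kubectl edit, a patch applied this way is not reflected in your local domain.yaml.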

Remark about using the console:
Do not use the console to scale the cluster. The “operator” controls this operation. Use the operator’s options to scale your cluster deployed on Kubernetes.

Overriding the WebLogic domain configuration

You can modify the WebLogic domain configuration for both the “domain in persistent volume” and the “domain in image” options before deploying a domain resource:

  • When the domain is in a persistent volume, you can use WebLogic Scripting Tool (WLST) or WebLogic Deploy Tooling (WDT) to change the configuration.
  • In either case, you can use configuration overrides.

Use configuration overrides (also called situational configuration) to customize a WebLogic domain home configuration without modifying the domain’s actual config.xml or system resource files. For example, you may want to override a JDBC datasource XML module user name, password, and URL so that it references a different database.

You can use overrides to customize domains as they are moved from QA to production, are deployed to different sites, or are even deployed multiple times at the same site.
[https://github.com/oracle/weblogic-kubernetes-operator/blob/2.0/site/config-overrides.md]

Situational configuration consists of XML formatted files that closely resemble the structure of WebLogic config.xml and system resource module XML files. In addition, the attribute fields in these files can embed add, replace, and delete verbs to specify the desired override action for the field.

For more details see the Configuration overrides documentation.

Situational configuration files end with the suffix “situational-config.xml” and are domain configuration files only, which reside in a new optconfig directory. Administrators create, update, and delete situational-config.xml files in the optconfig directory.
[https://docs.oracle.com/middleware/12213/wls/DOMCF/changes.htm#DOMCF-GUID-8EBBC8A0-5CF9-47AB-987D-0B3560CAB8C0]

Preparing the JDBC module override
The “operator” requires a different file name format for override templates than WebLogic’s built-in situational configuration feature. It converts the names to the format required by situational configuration when it moves the templates to the domain home optconfig directory.

The following table describes the format:

Original Configuration → Required Override Name
config.xml → config.xml
JMS module (Java Message Service (JMS)) → jms-<MODULENAME>.xml
[https://docs.oracle.com/middleware/1221/wls/JMSAD/overview.htm#JMSAD124]
JDBC module (Java Database Connectivity (JDBC)) → jdbc-<MODULENAME>.xml
[https://docs.oracle.com/middleware/12213/wls/JDBCA/jdbc_intro.htm#JDBCA108]
WLDF module (WebLogic Diagnostic Framework (WLDF)) → wldf-<MODULENAME>.xml
[https://docs.oracle.com/middleware/12213/wls/WLDFC/intro.htm#WLDFC107]

A <MODULENAME> must correspond to the MBean name of a system resource defined in your original config.xml file.

So, for JDBC, it has to be jdbc-<MODULENAME>.xml.

The custom WebLogic image I used has a JDBC Datasource called testDatasource.

So, I had to create a template with the name jdbc-testDatasource.xml. But first I created a directory which will contain only the situational JDBC configuration template and a version.txt file.

I used the following commands to create the template file jdbc-testDatasource.xml:

mkdir -p /u01/override
cat > /u01/override/jdbc-testDatasource.xml <<'EOF'
<?xml version='1.0' encoding='UTF-8'?>
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source"
                  xmlns:f="http://xmlns.oracle.com/weblogic/jdbc-data-source-fragment"
                  xmlns:s="http://xmlns.oracle.com/weblogic/situational-config">
  <name>testDatasource</name>
  <jdbc-driver-params>
    <url f:combine-mode="replace">${secret:dbsecret.url}</url>
    <properties>
       <property>
          <name>user</name>
          <value f:combine-mode="replace">${secret:dbsecret.username}</value>
       </property>
    </properties>
  </jdbc-driver-params>
</jdbc-data-source>
EOF

Remark about the template:
This template contains a macro to override the JDBC user name and URL parameters. The values are taken from a Kubernetes Secret.

I used the following command to create the file version.txt (which reflects the version of the “operator”):

cat > /u01/override/version.txt <<EOF
2.0
EOF

I used the following command to create a ConfigMap from the directory containing the template and version file:

kubectl -n sample-domain1-ns create cm jdbccm --from-file /u01/override

With the following output:

configmap "jdbccm" created

I used the following command to label the ConfigMap:

kubectl -n sample-domain1-ns label cm jdbccm weblogic.domainUID=sample-domain1

The following label is used:

Label key: weblogic.domainUID
Label value: sample-domain1

With the following output:

configmap "jdbccm" labeled

I used the following command to describe the ConfigMap:

kubectl describe cm jdbccm -n sample-domain1-ns

With the following output:

Name:         jdbccm
Namespace:    sample-domain1-ns
Labels:       weblogic.domainUID=sample-domain1
Annotations:  <none>

Data
====
version.txt:
----
2.0

jdbc-testDatasource.xml:
----
<?xml version='1.0' encoding='UTF-8'?>
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source"
                  xmlns:f="http://xmlns.oracle.com/weblogic/jdbc-data-source-fragment"
                  xmlns:s="http://xmlns.oracle.com/weblogic/situational-config">
  <name>testDatasource</name>
  <jdbc-driver-params>
    <url f:combine-mode="replace">${secret:dbsecret.url}</url>
    <properties>
       <property>
          <name>user</name>
          <value f:combine-mode="replace">${secret:dbsecret.username}</value>
       </property>
    </properties>
  </jdbc-driver-params>
</jdbc-data-source>

Events: <none>

I used the following command to create a Secret which contains the values of the JDBC user name and URL parameters:

kubectl -n sample-domain1-ns create secret generic dbsecret --from-literal=username=scott2 --from-literal=url=jdbc:oracle:thin:@test.db.example.com:1521/ORCLCDB

With the following output:

secret "dbsecret" created

I used the following command to label the Secret:

kubectl -n sample-domain1-ns label secret dbsecret weblogic.domainUID=sample-domain1

The following label is used:

Label key: weblogic.domainUID
Label value: sample-domain1

With the following output:

secret "dbsecret" labeled
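To double-check what ended up in the Secret, its data fields can be decoded. This is a sketch, assuming the names from this tutorial and that kubectl and base64 are available:

```shell
# Decode the stored JDBC values from the Secret (names from this tutorial).
NS=sample-domain1-ns
if command -v kubectl >/dev/null 2>&1; then
  # Secret data is base64-encoded; decode the username and url fields.
  kubectl -n "$NS" get secret dbsecret -o jsonpath='{.data.username}' | base64 -d || true
  echo
  kubectl -n "$NS" get secret dbsecret -o jsonpath='{.data.url}' | base64 -d || true
  echo
fi
```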

Before applying these changes, I checked the current JDBC parameters using:

  • the Oracle WebLogic Server Administration Console

  • a demo web application.

I opened a browser and started the demo web application (according to the URL pattern: http://EXTERNAL-IP/opdemo/?dsname=testDatasource), via URL:

http://111.11.11.1/opdemo/?dsname=testDatasource

In the table below you can see the Datasource properties:

Datasource name: testDatasource
Database User: scott
Database URL: jdbc:oracle:thin:@//xxx.xxx.x.xxx:1521/ORCLCDB

The final step is to modify the domain resource definition (domain.yaml) to include the override ConfigMap and Secret.

I changed the file domain.yaml, adding the following content at the end of the spec part:

spec:
  [ ... ]
  configOverrides: jdbccm
  configOverrideSecrets: [dbsecret]

Restarting the WebLogic domain
Any override change requires stopping all WebLogic pods, applying the changed domain resource, and restarting the WebLogic pods before it can take effect.
So the procedure is: stop all running WebLogic Server pods in the domain, apply the changed resource, and then start the domain again.

I changed the file domain.yaml (the property serverStartPolicy) to the following content:

#  serverStartPolicy: "IF_NEEDED"
  serverStartPolicy: "NEVER"

Remark about property serverStartPolicy:
This property determines which WebLogic Servers the Operator will start up when it discovers this Domain. The serverStartPolicy legal values are:

  • “NEVER” will not start any server in the domain
  • “ADMIN_ONLY” will start up only the administration server (no managed servers will be started)
  • “IF_NEEDED” will start all non-clustered servers, including the administration server and clustered servers up to the replica count

I used the following command to apply the changes, including the one that stops all running WebLogic Server pods in the domain:

kubectl apply -f /u01/domain.yaml

With the following output:

domain "sample-domain1" configured

I used the following command to list the Pods:

kubectl get pods -n sample-domain1-ns

In the end, with the following output:

No resources found.

I waited until all pods were terminated and no resources were found.
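Instead of polling by hand, this wait can be scripted. Below is a sketch, assuming the tutorial's namespace; it exits as soon as no pods remain:

```shell
# Wait until no pods remain in the domain namespace (namespace name assumed).
NS=sample-domain1-ns
if command -v kubectl >/dev/null 2>&1; then
  # --no-headers makes the pod count equal to the number of output lines.
  while [ "$(kubectl get pods -n "$NS" --no-headers 2>/dev/null | wc -l)" -gt 0 ]; do
    echo "pods still terminating..."
    sleep 10
  done
fi
```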

Next, I changed the file domain.yaml (the property serverStartPolicy) to the following content:

  serverStartPolicy: "IF_NEEDED"
#  serverStartPolicy: "NEVER"

I used the following command to apply the change, which starts the WebLogic Server pods in the domain again:

kubectl apply -f /u01/domain.yaml

With the following output:

domain "sample-domain1" configured

I used the following command to list the Pods:

kubectl get pods -n sample-domain1-ns

After a short while, with the following output:

NAME                          READY     STATUS    RESTARTS   AGE
sample-domain1-admin-server   0/1       Running   0          11s

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE
sample-domain1-admin-server      1/1       Running   0          3m
sample-domain1-managed-server1   1/1       Running   0          1m
sample-domain1-managed-server2   1/1       Running   0          1m
sample-domain1-managed-server3   1/1       Running   0          1m

I checked the new JDBC parameters using the demo web application. I opened a browser and started the demo web application (according to the URL pattern: http://EXTERNAL-IP/opdemo/?dsname=testDatasource), via URL:

http://111.11.11.1/opdemo/?dsname=testDatasource

In the table below you can see the Datasource properties:

Datasource name: testDatasource
Database User: scott2
Database URL: jdbc:oracle:thin:@test.db.example.com:1521/ORCLCDB

So here we can see the expected result of the JDBC module override: the JDBC user name and URL parameters have been changed.
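As an extra check, one could also look inside a pod for the situational configuration files that the “operator” copied into the domain's optconfig directory. This is a sketch; the pod name and path follow from the domainHome used in this tutorial:

```shell
# List situational configuration files inside the admin server pod
# (pod name and optconfig path assumed from this tutorial's domainHome).
NS=sample-domain1-ns
POD=sample-domain1-admin-server
OPTCONFIG=/u01/oracle/user_projects/domains/sample-domain1/optconfig
if command -v kubectl >/dev/null 2>&1; then
  kubectl exec -n "$NS" "$POD" -- find "$OPTCONFIG" -name '*situational-config*' || true
fi
```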

Application Lifecycle Management

As could be seen before, a Docker image (with a WebLogic domain inside) is used to run a domain. This means that all the artefacts, including the deployed applications (such as the demo web application mentioned before) and domain-related files, are stored within the image. As a result, a new WebLogic Docker image is built every time one or more of the applications are modified. In this widely adopted approach, the image is the packaging unit instead of the Web/Enterprise Application Archive (WAR, EAR).
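The build-and-push side of this approach can be sketched as follows. Note that in this tutorial the 2.0 image was prebuilt; the Dockerfile and the exact registry coordinates below are assumptions for illustration:

```shell
# Sketch: produce and publish a new image version after an application change.
# The Dockerfile (which bakes the domain and applications into the image) and
# the OCIR coordinates are assumptions, not part of this tutorial.
IMAGE="iad.ocir.io/weblogick8s/weblogic-operator-tutorial-store:2.0"
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t "$IMAGE" .
  docker push "$IMAGE"    # requires a prior 'docker login' to the OCIR registry
fi
echo "new image: $IMAGE"
```

Changing the image property in domain.yaml (shown below) then rolls the new version out to the domain.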

I changed the file domain.yaml (the property image) to the following content:

#  image: "iad.ocir.io/weblogick8s/weblogic-operator-tutorial-store:1.0"
  image: "iad.ocir.io/weblogick8s/weblogic-operator-tutorial-store:2.0"

Remark about the new image:
The new image contains a domain and an updated version of the demo web application (with a green title on the main page).

I used the following command to apply the changes:

kubectl apply -f /u01/domain.yaml

With the following output:

domain "sample-domain1" configured

I used the following command to list the Pods:

kubectl get pods -n sample-domain1-ns

The “operator” now performs a rolling restart of the servers, one by one: first the Admin Server, then the Managed Servers.

A rolling restart is a coordinated and controlled shutdown of all of the servers in a domain or cluster, while ensuring that service to the end user is not interrupted.
[https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/domain-lifecycle/restarting/]

After a short while, with the following output:

NAME                             READY     STATUS        RESTARTS   AGE
sample-domain1-admin-server      1/1       Running       0          55s
sample-domain1-managed-server1   1/1       Running       0          14m
sample-domain1-managed-server2   1/1       Running       0          14m
sample-domain1-managed-server3   1/1       Terminating   0          14m

And in the end, with the following output:

NAME                             READY     STATUS    RESTARTS   AGE
sample-domain1-admin-server      1/1       Running   0          7m
sample-domain1-managed-server1   1/1       Running   0          1m
sample-domain1-managed-server2   1/1       Running   0          3m
sample-domain1-managed-server3   1/1       Running   0          5m

During the rolling restart of servers, I checked the demo web application periodically.

For this, I opened a browser and started the demo web application (according to the URL pattern: http://EXTERNAL-IP/opdemo/?dsname=testDatasource), via URL:

http://111.11.11.1/opdemo/?dsname=testDatasource
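Such periodic checking can also be scripted. Below is a sketch: DEMO_URL must be set to the application's external URL, and the grep pattern assumes the demo page contains the responding server's name (which this demo application displays):

```shell
# Poll the demo application during the rolling restart and print which
# managed server responded (URL and page format are assumptions).
URL="${DEMO_URL:-}"
if [ -n "$URL" ]; then
  for i in 1 2 3 4 5 6; do
    curl -s --max-time 5 "$URL" | grep -o 'managed-server[0-9]*' | head -n 1
    sleep 10
  done
fi
```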

Here you can see that the responding server (sample-domain1-managed-server3) has already been restarted, because the change (green fonts) made in the demo web application is visible.

At another moment, the responding server (sample-domain1-managed-server1) had not yet been restarted, because it still served the old version of the demo web application.

In the end, the admin server and all three managed servers are restarted.

So now it’s time to conclude this article. In this article I described how I made several changes to the configuration of a WebLogic domain:

  • Scaling up the number of managed servers
  • Overriding the WebLogic domain configuration
  • Application lifecycle management (ALM), using a new WebLogic Docker image

For changing the configuration of a WebLogic domain on Kubernetes, I used a domain resource definition (domain.yaml) which contains the necessary parameters for the “operator” to start the WebLogic domain properly.

In a next article I will describe (among other things) how I made several other changes to the configuration of the WebLogic domain.

About Author

Marc, active in IT (and with Oracle) since 1995, is a Principal Oracle SOA Consultant with a focus on Oracle Cloud, Oracle Service Bus, Oracle SOA Suite, Oracle Database (SQL & PL/SQL), Java, Docker, Kubernetes, Minikube and Helm. He is an Oracle SOA Suite 12c Certified Implementation Specialist. Over the past 20 years he has worked for several customers in the Netherlands. Marc likes to share his knowledge through publications, blogs and presentations.
