Creating and scaling Dynamic Clusters using WLST


In my previous article, Creating and scaling Dynamic Clusters in Weblogic 12c, I described the creation and scaling of Dynamic Clusters. I used the Weblogic Console to create the Dynamic Clusters and change the number of servers.

Most of the time you will use some WLST scripting to create and manage your Weblogic environments.
In this article I will show you how to create Dynamic Clusters and how you can scale them.

The example scripts from the Oracle documentation were used as a base for the following script.
It is just a simple create script to show you how easy it is to create a Dynamic Cluster via WLST. So no fancy functions and exception handling in there. Yet …

createDynamicCluster.py

import sys

print '--- Set properties for dynamic Cluster creation'
clusterName='dyna-cluster'
serverTemplate='dyna-server-Template'
serverNamePrefix='dyna-server-'
listenAddress='192.168.100.4${id}'
listenPort=8000
listenPortSSL=9000
maxServerCount=2

print '--- Connect to the AdminServer'
try:
    connect('weblogic','Welcome01','t3://wls01.domain.local:7001')
except WLSTException, err:
    print "--- Can't connect to AdminServer: " + str(err)
    sys.exit(2)

print '--- Start an edit session'
edit()
startEdit()

print '--- Creating the server template '+serverTemplate+' for the dynamic servers and set the attributes'
dynamicServerTemplate=cmo.createServerTemplate(serverTemplate)
dynamicServerTemplate.setListenAddress(listenAddress)
dynamicServerTemplate.setListenPort(listenPort)
dynamicServerTemplateSSL=dynamicServerTemplate.getSSL()
dynamicServerTemplateSSL.setListenPort(listenPortSSL)

print '--- Creating the dynamic cluster '+clusterName+', set the number of dynamic servers and designate the server template to it.'
dynamicCluster=cmo.createCluster(clusterName)
dynamicServers=dynamicCluster.getDynamicServers()
dynamicServers.setMaximumDynamicServerCount(maxServerCount)
dynamicServers.setServerTemplate(dynamicServerTemplate)

print '--- Designating the Cluster to the ServerTemplate'
dynamicServerTemplate.setCluster(dynamicCluster)

print '--- Set the servername prefix to '+serverNamePrefix
dynamicServers.setServerNamePrefix(serverNamePrefix)

print '--- Set Calculate Listen Port and Machinename based on server template'
dynamicServers.setCalculatedMachineNames(true)
dynamicServers.setCalculatedListenPorts(true)

print '--- Save and activate the changes'
save()
activate()
serverConfig()

Running the script with WLST will produce the following output and will create a Dynamic Cluster with two Dynamic Servers.

[oracle@wls01 ~]$ ${WL_HOME}/common/bin/wlst.sh createDynamicCluster.py
Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

--- Set properties for dynamic Cluster creation
--- Connect to the AdminServer
Connecting to t3://wls01.domain.local:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "demo_domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

--- Start an edit session
Location changed to edit tree. This is a writable tree with
DomainMBean as the root. To make changes you will need to start
an edit session via startEdit().

For more help, use help('edit')

Starting an edit session ...
Started edit session, please be sure to save and activate your
changes once you are done.
--- Creating the server template dyna-server-Template for the dynamic servers and set the attributes
--- Creating the dynamic cluster dyna-cluster, set the number of dynamic servers and designate the server template to it.
--- Designating the Cluster to the ServerTemplate
--- Set the servername prefix to dyna-server-
--- Set Calculate Listen Port and Machinename based on server template
--- Save and activate the changes
Saving all your changes ...
Saved all your changes successfully.
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released
once the activation is completed.
Activation completed

As you might expect, it is way faster than clicking through the Weblogic Console.
The next step is to scale the Dynamic Cluster up to four Dynamic Servers.

scaleDynamicCluster.py

import sys

print '--- Set properties for dynamic Cluster creation'
clusterName='dyna-cluster'
maxServerCount=4

print '--- Connect to the AdminServer'
try:
    connect('weblogic','Welcome01','t3://wls01.domain.local:7001')
except WLSTException, err:
    print "Can't connect to AdminServer: " + str(err)
    sys.exit(2)

print '--- Start an edit session'
edit()
startEdit()

print '--- Change the maximum number of dynamic servers'
cd('/Clusters/%s' % clusterName )
dynamicServers=cmo.getDynamicServers()
dynamicServers.setMaximumDynamicServerCount(maxServerCount)

print '--- Save and activate the changes'
save()
activate()
serverConfig()

Running the script with WLST will produce the following output and will scale the cluster up to four Dynamic Servers.

[oracle@wls01 ~]$ ${WL_HOME}/common/bin/wlst.sh scaleDynamicCluster.py

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

--- Set properties for dynamic Cluster creation
--- Connect to the AdminServer
Connecting to t3://wls01.domain.local:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "demo_domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

--- Start an edit session
Location changed to edit tree. This is a writable tree with
DomainMBean as the root. To make changes you will need to start
an edit session via startEdit().

For more help, use help('edit')

Starting an edit session ...
Started edit session, please be sure to save and activate your
changes once you are done.
--- Change the maximum number of dynamic servers
--- Save and activate the changes
Saving all your changes ...
Saved all your changes successfully.
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released
once the activation is completed.
Activation completed

As mentioned before, the scripts are very limited and just show you how easy it is to create Dynamic Clusters using WLST. The scripts can be made as comprehensive as you need (or want) them to be.
I will create some more examples and post them as I get them ready.

Imagine the possibilities when you create scripts you can connect to your monitoring system. Capacity on demand!
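The scale script above still takes a hard-coded maxServerCount; when driven by a monitoring system, that number would come from a metric. Here is a minimal sketch (plain Python, not WLST; the function name and thresholds are made up for illustration) of the decision logic that could feed setMaximumDynamicServerCount():

```python
def target_server_count(current_count, cpu_load, scale_up_at=0.8,
                        scale_down_at=0.3, min_servers=1, max_servers=9):
    """Return the desired number of dynamic servers for a given load.

    cpu_load is the average cluster CPU utilization (0.0 - 1.0).
    The result is clamped to max_servers, since in this demo the
    ${id} token as last digit of the listen address allows at most nine.
    """
    if cpu_load >= scale_up_at:
        desired = current_count + 1
    elif cpu_load <= scale_down_at:
        desired = current_count - 1
    else:
        desired = current_count
    return max(min_servers, min(max_servers, desired))

print(target_server_count(2, 0.9))  # -> 3
```

The returned value would then replace the hard-coded maxServerCount in scaleDynamicCluster.py.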


Creating and scaling Dynamic Clusters in Weblogic 12c


Introduced in Weblogic 12.1.2, dynamic clusters are a great feature for scaling your private cloud.
Dynamic clusters provide easy scaling of Weblogic clusters by adding and removing managed server instances on demand. They contain one or more dynamic servers, all based on a single server template that guarantees that every member of the cluster is exactly the same.

Creating Dynamic Clusters

Let’s take a look at some of the possibilities as we create a dynamic cluster.

I have created a VirtualBox environment.
This environment consists of four VMs with the following specs:

  • 2 vCPU’s
  • 4 Gb memory
  • 50 Gb disk
  • Oracle Linux 6.6
  • Java 1.7.0_75
  • Weblogic 12.1.3.0.2

I created a simple domain called demo_domain with only an AdminServer and four machines.
After unpacking the domain to the four servers, the node managers were started and are reachable by the AdminServer.

Domain-pic1

Now let's go through the process of creating a dynamic cluster.

Open the Weblogic Console and navigate to Environment -> Clusters.
Lock and Edit the domain in the Change Center.
Note: I make it a good practice to always create domains in production mode, even in Development and Test.

Create a new dynamic cluster

Domain-cap1

New -> Dynamic Cluster

Provide the Clustername

Domain-cap2
Cluster name: dyna-cluster
Click Next

We will start off with a cluster containing two dynamic servers.

Domain-cap3
Number of Dynamic Servers: 2
Server Name Prefix: dyna-server-
Click Next

For this demo all machines will take part.

Domain-cap4
Select ‘Use any machine configured in this domain’
Click Next

Assign each dynamic server unique listen ports

Domain-cap5
Listen Port for First Server: 8000
SSL Listen Port for First Server: 9000
Click Next

Summary screen

Domain-cap6
Click Finish

Along with the creation of the Dynamic Cluster, a Server Template is created for it.

Server templates

A single server template provides the basis for the creation of the dynamic servers. Using a single template ensures that every member is created with exactly the same attributes, while server-specific attributes like server name, listen ports and machines can be calculated based upon tokens.
You can pre-create server templates and let Weblogic clone one when a Dynamic Cluster is created.
When none is available, a server template is created together with the Dynamic Cluster. The name and the listen ports are the only server template attributes that you provide during Dynamic Cluster creation.

Before we activate the changes to the domain, we are going to make a change to the server template.
As an example we are going to demonstrate the use of tokens for server-specific configuration.

Navigate to Environment -> Clusters -> Server Templates

Domain-cap8
Click on the name: dyna-server-Template

We are going to use the ${ID} token in the Listen Address

Domain-cap10
Listen Address: 192.168.100.4${ID}
Click Save

The last digit of the listen address is used to make the listen address dynamic.
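To preview what the token expansion produces, here is a small plain-Python sketch (not WLST; the helper is hypothetical) that mimics the ${ID} substitution for each dynamic server, where the server index starts at 1:

```python
def expand_template(listen_address, server_name_prefix, count):
    """Mimic WebLogic's ${ID} substitution for dynamic servers 1..count."""
    servers = []
    for server_id in range(1, count + 1):
        servers.append({
            'name': server_name_prefix + str(server_id),
            'listen_address': listen_address.replace('${ID}', str(server_id)),
        })
    return servers

for s in expand_template('192.168.100.4${ID}', 'dyna-server-', 2):
    print(s['name'], s['listen_address'])
# -> dyna-server-1 192.168.100.41
# -> dyna-server-2 192.168.100.42
```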

Activate changes in the Change Center of the Weblogic Console.
After activation the cluster and two managed servers are created.

Domain-cap12Domain-cap11

We can now start the two servers.

In the previous steps we have added a dynamic cluster with two dynamic servers, based on a single server template, to the domain.

Domain-pic2

Scaling a Dynamic Cluster

When the capacity is insufficient and you need to scale-up, you can add dynamic servers on demand.
It requires only a few clicks.

Navigate to Environment -> Clusters

Domain-cap12
Click dyna-cluster

On the Configuration tab go to the Servers tab

Domain-cap13
Change the Maximum Number of Dynamic Servers to: 4
Click save

Activate changes in the Change Center of the Weblogic Console.
After activation two Dynamic Servers are added to the Dynamic Cluster.

Start the two new Dynamic Servers and you have doubled your capacity.

Domain-cap14

Domain-pic3
Scaling down works exactly the same way.
Just lower the Maximum Number of Dynamic Servers and activate.

A few points to keep in mind when scaling up or down:

Up

  • New dynamic servers are not started upon creation
  • Think before you act when using tokens.
    For example: in our demo, the number of Dynamic Servers can't grow beyond nine, since we use ${ID} as the last digit of the listen address.

Down

  • Dynamic Servers above the new maximum have to be shut down before the change can be activated.
  • Dynamic Servers are removed in order, last -> first
    (in our demo dyna-server-4 gets removed first, then dyna-server-3, etc.)
  • You cannot remove a Dynamic Server directly from the Environment -> Servers page
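The scale-down rules above can be sketched in a few lines of plain Python (a hypothetical helper, not a WebLogic API): given the current server count and the new maximum, it lists which Dynamic Servers must be shut down, last to first:

```python
def servers_to_shutdown(prefix, current_count, new_max):
    """Dynamic servers above the new maximum, in removal order (last -> first)."""
    if new_max >= current_count:
        return []
    return [prefix + str(i) for i in range(current_count, new_max, -1)]

print(servers_to_shutdown('dyna-server-', 4, 2))
# -> ['dyna-server-4', 'dyna-server-3']
```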

Next step with Docker – Create Image for Oracle Database 12c using Puppet


In a previous article – https://technology.amis.nl/2015/03/15/docker-take-two-starting-from-windows-with-linux-vm-as-docker-host/ – I discussed my first steps with Docker. With Windows as my host environment I used Vagrant to create a VirtualBox VM with Ubuntu. In that VM I installed Docker and played around creating some containers, images and eventually an image for Oracle WebLogic 12.1.3. I leveraged the excellent work by Mark Nelson (especially his article https://redstack.wordpress.com/2014/11/14/gettingn-to-know-docker-a-better-way-to-do-virtualization/).

In this article I am taking things one step further by creating a Docker container – and from that container an image – with the latest Oracle Database release 12.1.0.2 (Enterprise Edition). Again, the Mark Nelson article is my guide and Edwin Biemond – champion of all things automated – provided the Docker file and Puppet scripts that get the job done. Edwin was also kind enough to help me out when a library dependency caused problems.

I ran into the (default) size limitation on Docker containers (10 GB) while installing the Oracle Database. I resolved this challenge by mapping a host folder to the container (with the original database software) and by sharing a volume from a second container that was used as temporary (staging) area. Thus I virtually expanded the file system of my container considerably beyond the 10 GB mark.

The steps I went through are basically:

0. preparation: (as discussed in the previous article) Get an Ubuntu based Virtual Machine (VirtualBox) running on my Windows host laptop. Install Docker into this VM. Download the Oracle Database software (12.1.0.2 Enterprise Edition) from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index-092322.html. Two files are downloaded to a folder on the host with a total of some 2.6 GB.

image

Note: the Puppet scripts expect two files with names linuxamd64_12c_database_1of2.zip and linuxamd64_12c_database_2of2.zip. I renamed the downloaded files to match these expectations.
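The renaming is easy to script; here is a small Python sketch (the helper and folder layout are illustrative) that renames whatever was downloaded to the names the Puppet scripts expect:

```python
import os

# File names the Puppet scripts expect (see the note above)
EXPECTED = ['linuxamd64_12c_database_1of2.zip',
            'linuxamd64_12c_database_2of2.zip']

def rename_downloads(folder, downloaded_names):
    """Rename the two downloaded zips to the expected names, matched in sorted order."""
    for src, dst in zip(sorted(downloaded_names), EXPECTED):
        if src != dst:
            os.rename(os.path.join(folder, src), os.path.join(folder, dst))
    return EXPECTED
```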

1. create Docker image – db12c102_centos6 –  based on CentOS with Puppet installed as well as the Puppet modules required for the database installation

2. create Docker container – softwarecontainer – to act as the ‘staging container’ – a container that shares a volume that can be used as expanded file storage from other containers

3. run Docker container based on image db12c102_centos6 with host folder mapped into it and with volume shared from softwarecontainer

4. edit Puppet files to match the 12.1.0.2 Enterprise Edition installation

5. run the Puppet site.pp manifest – this will install the database software and initialize an instance

6. test whether the container is actually running the database; then create an image from the container

 

At the end of the article, I have both a container and image with the Oracle Database 12c (12.1.0.2) running – based on the oradb Puppet Module by Edwin Biemond. The container exposes port 1521 where the database can be accessed from the host as well as from other containers – as we will see in subsequent articles.

1. Create base Docker image

With a simple Dockerfile – derived from Docker Database Puppet – I create a base image that provides the starting point for the container that will hold the Oracle Database installation.

The Dockerfile looks like this:

# CentOS 6
FROM centos:centos6

RUN yum -y install hostname.x86_64 rubygems ruby-devel gcc git unzip
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc

RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && \
    rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

# configure & install puppet
RUN yum install -y puppet tar
RUN gem install highline -v 1.6.21
RUN gem install librarian-puppet -v 1.0.3

RUN yum clean all

ADD puppet/Puppetfile /etc/puppet/
ADD puppet/manifests/site.pp /etc/puppet/

WORKDIR /etc/puppet/
RUN librarian-puppet install

EXPOSE 1521

ADD startup.sh /
RUN chmod 0755 /startup.sh

WORKDIR /

CMD bash -C '/startup.sh';'bash'

The image is created from this Dockerfile using this command:

docker build -t db12c102_centos6 .

A screenshot from somewhere midway in the creation process:

image

and the completion:

image

Note: Initially I used the exact same Dockerfile Edwin published – with an edited site.pp file in order to install the 12.1.0.2 Enterprise Edition instead of the 12.1.0.1 Standard Edition. I then ran into disk space limitations. Apparently, copying the software zip files (2.6 GB) to the container and extracting their contents inside the container occupied so much space that the installation was aborted.

image

I needed a trick to create the container and install the Oracle Database without burdening the container with these zip files and their temporarily extracted contents. The trick consists of three things:

  1. create the image in multiple steps (instead of a single one with a single Dockerfile that auto-runs the complete Puppet script), starting with a simple base image
  2. run a container from this base image with a host folder mapping from which it can access the software zip-files without actually copying them to the container’s file system
  3. have the container import a volume from another container; this volume is used as temporary staging area (for the extracted files needed only during installation); finally create an image from this last container

Also note that in the original script, Edwin did not have the line "RUN gem install highline -v 1.6.21"; he advised me to add this line because the original Dockerfile resulted in a dependency error:

image

Adding this line makes sure highline gets installed before a version of highline with a more demanding Ruby requirement is brought along by librarian-puppet.

2. Create Docker container – softwarecontainer – to act as the ‘staging container’

A very simple Docker image is created using this Dockerfile:

FROM busybox

RUN mkdir /var/tmp/install

RUN chmod 777 /var/tmp/install

VOLUME /var/tmp/install

VOLUME /stage

CMD /bin/sh

and this command:

docker build -t softwarestore .

This results in an image called softwarestore, which exposes its folder /var/tmp/install as a volume that other containers can use as expanded file storage.

Start the container softwarecontainer based on this image:

docker run -i -t -v /host_temp/shared:/stage/shared --name softwarecontainer softwarestore /bin/sh

The container softwarecontainer is now available along with its /var/tmp/install volume that will be used during database installation as staging area.

 

3. Run Docker container based on base image db12c102_centos6

Run a new container based on the base image created in step 1:

docker run -ti -v /host_temp/shared/downloads:/software --volumes-from softwarecontainer db12c102_centos6 /bin/bash

with host folder mapped into it and with volume shared from softwarecontainer.

The host folder with the database software is accessible from within the container at /software, as is the /var/tmp volume in the softwarecontainer:

image

 

 

4. Edit Puppet files to match the 12.1.0.2 Enterprise Edition installation

Inside the container: Open the site.pp file at /etc/puppet in a text editor. Note: this directory and this file were created along with the base image in step 1.

image

Edit the lines that refer to SE (Standard Edition) and 12.1.0.1:

image

Note that only a few changes are required to process EE instead of SE and 12.1.0.2 instead of some other version.

 

5. Run the Puppet site.pp manifest to install the database software and initialize an instance

The heavy lifting regarding the installation of the Oracle Database and the creation of an instance (service orcl) is done by Puppet. The Puppet script is started (still inside the container) using this command:

puppet apply /etc/puppet/site.pp --verbose --detailed-exitcodes || [ $? -eq 2 ]

The first steps are shown here:

image

And the last ones:

image

When Puppet is done, we have a running database. All temporary files have been cleaned up.

6. Test whether the container is actually running the database – then create an image from the container

With these commands (inside the container) we can run SQL*Plus and connect to the running database instance:

export ORACLE_SID=orcl

export ORACLE_HOME=/oracle/product/12.1/db

cd $ORACLE_HOME/bin

./sqlplus "sys/Welcome01 as sysdba"

SQL*Plus is started and we can for example select from dual.

image

Note: The database sid = orcl. Password for SYS and SYSTEM are Welcome01.

Using exit twice (once to leave SQL*Plus and once to exit the container) we return to the host. The container is shown (agitated_bohr).

The next step – which takes some time, due to the size of the container and the images created from it – is to create an image that captures the current state of the container:

docker ps -a (to find the container id)

docker commit <container id>

docker images

assign a nice name and version to the image:

docker tag c5d3effcbdd6 oracle-db-12-1-0-2:1.0

Look at the result of this:

image

A sizable image, one that could be reduced in size through export and import, although that would sever the link with the base centos image.

The situation at this point can be visualized using this picture:

image

 

7. Run Container based on the Image

Run a container from that image, with the local folder /software mapped to the host folder that contains the software:

docker run -ti -p 0.0.0.0:1521:1521 -v /host_temp/shared/downloads:/software --volumes-from softwarecontainer oracle-db-12-1-0-2:1.0  /bin/bash

Note: the -v and --volumes-from are not really required, because the two folders were only needed while installing the database (during the creation of the image). Running the container with:

docker run --privileged=true -ti -p 1521:1521 oracle-db-12-1-0-2:1.0 /bin/bash

will do the job just as well. Note: I have added --privileged=true here because I ran into a problem with not being able to switch users in the container. This discussion led me to use this additional parameter.

Once the container is fired up, the database can be started – using the /startup.sh script or using the statements listed in step 6. That is: I went through these steps (which seem a little bit more than should be required):

su oracle

/startup.sh

[provide oracle as the password]; this will start the listener

export ORACLE_SID=orcl

export ORACLE_HOME=/oracle/product/12.1/db

cd $ORACLE_HOME/bin

./sqlplus "sys/Welcome01 as sysdba"

SQL*Plus starts and connects us to an idle instance. Then type startup, and the database is started.

After exiting SQL*Plus, I can check the listener status:

./lsnrctl

then type status.

The service orcl.example.com and the instance orcl are both ready.

Tidbits

These are really notes to myself – little things I needed or came across while going through the steps described in this article.

Docker

Docker stores container data in the directory /var/lib/docker/containers in Ubuntu.

Remove a single image: docker rmi IMAGE_ID (note: an image can only be removed when no containers are based on it)

Trick for removing multiple images in one go: http://stackoverflow.com/questions/17665283/how-does-one-remove-an-image-in-docker

Remove Docker Containers (http://stackoverflow.com/questions/17236796/how-to-remove-old-docker-containers):

docker rm $(docker ps --before <Container ID> -q)

Volumes are outside of the Union File System by definition, so any data in them will not count towards the devicemapper 10GB limit (http://fqa.io/questions/29083406/docker-disk-quotas). By adding a VOLUME in the Dockerfile, I am hoping to be able to leave the big software install packages outside the 10GB limit.

Learn the IP address assigned to a container:

$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_id>

When a Docker container is restarted, its IP address changes. Applications and other servers that were communicating with the container before the restart will be unable to communicate with it. Configuring a DNS server on Docker and having consumers use DNS names is a solution to the IP address change after a container restart.
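If you script against `docker inspect`, parsing its JSON output is an alternative to the --format template. A sketch in Python (the sample JSON below is trimmed down for illustration; real inspect output contains many more fields):

```python
import json

def container_ip(inspect_output):
    """Extract the IP address from `docker inspect <container>` JSON output."""
    data = json.loads(inspect_output)  # inspect returns a JSON array
    return data[0]['NetworkSettings']['IPAddress']

sample = '[{"NetworkSettings": {"IPAddress": "172.17.0.5"}}]'
print(container_ip(sample))  # -> 172.17.0.5
```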

Interesting observations in this white paper from Oracle on WebLogic and Docker

Linux

Determine size on disk of directories and their contents: du -sh (http://www.codecoffee.com/tipsforlinux/articles/22.html)

Remove entire directory: rm -r directory

Library Dependency

While creating the Docker image using Edwin's Dockerfile, I ran into a dependency issue that Edwin helped me fix (well, he fixed it for me).

As the screenshot shows, the highline component that apparently gets installed as part of librarian-puppet requires a higher Ruby version than is available.

image

This was resolved by adding a single line to the Docker file:

RUN gem install highline -v 1.6.21

just prior to the line that installs librarian-puppet. This makes sure that a version of highline without the more demanding Ruby dependency is already around when librarian-puppet is installed; it will therefore not try to install the latest version of highline, which causes the problem.


Quick Introduction to Oracle Stream Explorer Part Two– Business User friendly processing of real time events (enrichment, calculation)


Very recently, Oracle released the Oracle Stream Explorer product, available from OTN. With Oracle Stream Explorer, business users and citizen developers as well as hard core developers can create explorations on top of live streaming data to filter, enrich, aggregate, inspect them and detect patterns. Stream Explorer provides a business user friendly browser based user interface in which streams are configured, explorations are composed, enrichment is set up and publication to external consumers is defined. Note that Stream Explorer is built on top of Oracle Event Processor – any Stream Explorer exploration is in fact an OEP application. It can be exported as such and refined in JDeveloper. As such, Stream Explorer is also a great way to get going with OEP.

In a previous article, I introduced the very first steps with Stream Explorer. In that article, I set up a stream of events that report people entering or leaving the rooms of a small conference event. The use case is: we are organizing a small conference. In three rooms, sessions take place simultaneously. Our attendees are free to decide which session to attend. We would like to know at virtually any moment how many people are in each room. We have set up simple detectors at the doors of the rooms that produce a signal whenever someone enters or leaves a room. This signal consists of the room identifier (1, 2 or 3) and an IN/OUT flag (values +1 or -1).

In that first article, I used Stream Explorer to process these events and produce, per room, an aggregate of the net number of people that entered or left it. In the article you are reading right now, we will continue from where we left off. We will add enrichment: a database table with room details is correlated with the room events, and the room capacity is used to explore the room occupancy rate and detect Standing Room Only rooms (next article). We will see that not only can we use a stream as the source for an exploration, we can also use one exploration as the source for the next, and correlate explorations when creating a new exploration.
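For reference, the aggregation built in that first article boils down to this logic, sketched here in plain Python (the exploration itself is of course defined in the Stream Explorer UI, not in code):

```python
def net_room_flow(events):
    """Aggregate (room_id, flag) events into a net head count per room.

    Each event is a tuple of the room identifier and an IN/OUT flag (+1 or -1),
    exactly as produced by the door detectors described above.
    """
    counts = {}
    for room_id, flag in events:
        counts[room_id] = counts.get(room_id, 0) + flag
    return counts

events = [(1, 1), (1, 1), (2, 1), (1, -1), (3, 1), (2, 1)]
print(net_room_flow(events))  # -> {1: 1, 2: 2, 3: 1}
```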

Preparation

I am assuming Stream Explorer is installed on top of the OEP server and that the server is running. Stream Explorer can be accessed at http://host:port/sx. The same file with room events is used as in the previous article. Additionally, a table is created in a database schema. It holds details about the rooms used for our conference.

create table rooms
( id number(3,0)
, name varchar2(50)
, maximum_capacity number(3,0)
)

And the data stored in the table:

image

In order for Stream Explorer to access database tables, a Data Source has to be configured. This can be done through the Visualizer browser application for OEP, accessible at http://host:port/wlevs.

image

Log in. Click on the node defaultserver and on the tab Data Sources.

image

Click on the Add button to create a new Data Source.

Provide the name and the jdbc name for the data source. Set the Global Transaction Protocol to One Phase Commit.

image

Open the Global Transaction Protocol tab. In my case, the database is of type Oracle, hence the following settings:

image

Open the tab Connection Pool. You can specify a query to test the connection:

image

When done, press Save to create the data source. You can now exit the Visualizer again.

 

And Action

In order to be able to use the data from the ROOMS table for enrichment, we have to create a Reference object in Stream Explorer.

In SX, on the Catalog page, create a new item of type Reference.

image

Provide the details for the new reference:

image

Select the data source

image

Select the database table (ROOMS) and click Create:

image

The new Reference object is created:

image

 

Enrich Exploration

With the Rooms reference at our disposal, we can now create a new exploration that uses the findings from NetRoomFlow and enriches them with room details, including the data needed to calculate the room occupancy percentage. NOTE: I have not yet figured out how to add a property based on an expression or formula. I have high hopes that this will indeed prove possible.
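What the enriched exploration is after can be sketched in plain Python (hypothetical structures: ROOMS rows are represented as dicts keyed like the table columns, and the occupancy percentage is the calculation I would like the exploration to perform):

```python
def enrich(net_flow, rooms):
    """Join net head counts per room with ROOMS rows and compute an occupancy rate.

    net_flow maps room id -> net head count; rooms is a list of dicts with
    the columns of the ROOMS table (id, name, maximum_capacity).
    """
    by_id = {room['id']: room for room in rooms}
    result = []
    for room_id, head_count in net_flow.items():
        room = by_id[room_id]  # the correlation: RoomId matches Rooms.Id
        result.append({
            'room': room['name'],
            'head_count': head_count,
            'occupancy_pct': round(100.0 * head_count / room['maximum_capacity'], 1),
        })
    return result

rooms = [{'id': 1, 'name': 'Room A', 'maximum_capacity': 50}]
print(enrich({1: 20}, rooms))
# -> [{'room': 'Room A', 'head_count': 20, 'occupancy_pct': 40.0}]
```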

Create a new Exploration:

image

Enter details and select exploration NetRoomFlow as the source for this new exploration:

image

Press Create.

The Exploration page appears. Click on the sources field and select the Rooms reference to add as a source.

image

Configure the correlation condition:

image

RoomId in the NetRoomFlow source should match with Id in the Rooms source.

Next, you can organize the properties reported from the exploration – change their names and the order they are listed in or whether they are listed at all:

image

These settings influence the appearance of the exploration to external targets as well as to downstream explorations using this one as their source.

 

Behind the scenes

When you log in to the Visualizer tool, you will find that each exploration (as well as each stream and reference) corresponds with an actual OEP application deployed to the OEP server. In fact, every version that is published from SX results in a separate OEP application:

image

We can inspect these OEP applications to see how our actions as a business user or citizen developer have been translated by Stream Explorer into OEP configuration and CQL statements. This can be very instructive when we want to learn how to develop OEP applications of our own.



Cloud Control Agent 12c managing many objects


An Oracle Enterprise Management Agent 12c that has to manage hundreds of objects needs extra tweaking. If not, it will not start, or it will die soon, leaving you with an unmanaged infrastructure.

Situation

The customer runs over 200 databases on one host: an Intel-based machine with 384GB RAM and over 4TB of storage, running Linux. Works like a charm.
Enterprise Manager 12c Release 4, Agent Version 12.1.0.4.0.

The databases should be managed with Oracle Cloud Control 12c. Installing an agent on the database host was done as a routine job. The next step would be to (auto)discover the databases and add them to the repository. Based on some previous experience I decided to add them in random chunks of about 20 at a time, wait until all databases were managed (the green arrow pointing upward) and then proceed with the next chunk.

This went very well for the first 100 or so databases, but then suddenly the newly added databases did not appear as managed in the main databases screen. Even worse: all databases eventually became unmanaged.

Investigation

It looked like the agent was not running, so I logged on to the database host to investigate further. The agent was indeed down, so I tried to start it again. After waiting and waiting and seeing the number of dotted lines grow to five or six (!), the startup eventually failed.

Time for deeper investigation. I’ll save you the failed attempts (and my frustration about them) and skip to the point: I completely removed the agent installation from the host and from the Enterprise Manager repository and redid it. That went as smoothly as ever. I started adding databases to the repository and in the meantime frantically checked the status of the agent.

[oracle@s-xxxx-db-11 oracle]$ /u01/app/oracle/Agent12c/agent_inst/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 12.1.0.4.0
OMS Version : 12.1.0.4.0
Protocol Version : 12.1.0.1.0
Agent Home : /u01/app/oracle/Agent12c/agent_inst
Agent Log Directory : /u01/app/oracle/Agent12c/agent_inst/sysman/log
Agent Binaries : /u01/app/oracle/Agent12c/core/12.1.0.4.0
Agent Process ID : 41118
Parent Process ID : 41039
Agent URL : https://s-xxxx-db-11:3872/emd/main/
Local Agent URL in NAT : https://s-xxxx-db-11:3872/emd/main/
Repository URL : https://s-xxxx-em-01.xxx.com:4903/empbs/upload
Started at : 2014-11-03 16:07:51
Started by user : oracle
Operating System : Linux version 2.6.32-504.el6.x86_64 (amd64)
Last Reload : (none)
Last successful upload : 2014-11-03 16:49:59
Last attempted upload : 2014-11-03 16:49:59
Total Megabytes of XML files uploaded so far : 10
Number of XML files pending upload : 280
Size of XML files pending upload(MB) : 0.25
Available disk space on upload filesystem : 32.81%
Collection Status : Collections enabled
Heartbeat Status : Ok
Last attempted heartbeat to OMS : 2014-11-03 16:48:58
Last successful heartbeat to OMS : 2014-11-03 16:48:58
Next scheduled heartbeat to OMS : 2014-11-03 16:49:59
Receivelet Interaction Manager Current Activity: Outstanding receivelet event tasks
----------------------------------
TargetID = oracle_database.VALIDD.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:49 CET
TargetID = oracle_database.PSP.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:54 CET
TargetID = oracle_database.VACQD.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:45 CET
TargetID = oracle_database.TSM14SP.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:37 CET
TargetID = oracle_database.ZOND.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:54 CET
TargetID = oracle_database.TSM13RT.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:34 CET
TargetID = oracle_database.skbc.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:51 CET
TargetID = oracle_database.TSM17RT.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:48:50 CET
TargetID = oracle_database.MGRMP.xxx.com - EventType - TARGET_EVENT for operation SAVE_TARGET submitted at 2014-11-03 16:46:10 CET

Target Manager Current Activity : Compute Dynamic Properties (total operations: 37, active: 9, finished: 28)

Current target operations in progress
-------------------------------------
oracle_database.ZOND.xxx.com - ADD_TARGET running for 261 seconds
oracle_database.TSM14SP.xxx.com - ADD_TARGET running for 261 seconds
oracle_database.PSP.xxx.com - ADD_TARGET running for 261 seconds
oracle_database.skbc.xxx.com - ADD_TARGET running for 261 seconds
oracle_database.VALIDD.xxx.com - ADD_TARGET running for 262 seconds
oracle_database.MGRMP.xxx.com - ADD_TARGET running for 262 seconds
oracle_database.TSM13RT.xxx.com - ADD_TARGET running for 262 seconds
oracle_database.VACQD.xxx.com - ADD_TARGET running for 262 seconds
oracle_database.TSM17RT.xxx.com - ADD_TARGET running for 262 seconds

Dynamic property executor tasks running
------------------------------
---------------------------------------------------------------
Agent is Running and Ready

This was new to me. And although the agent claimed to be running and ready, in reality it was useless.
Eventually we raised an SR at Oracle. Their response was to the point but offered no solution; I’ll quote the important part here:

Agent will be configured to start with minimum memory allocation.
Whenever the memory configured is not enough for the agent operations, the agent will restart and during restart auto-tune and increase the default memory allocated to a higher value so as to fulfil the requirement.
The memory allocated will not be sufficient when the number of targets monitored is high, and hence you need to set more memory for the agent.

Also when starting the Agent, the agent will collect and load metadata of all its targets and only then report the status as RUNNING and READY. When the number of targets is high, this may take a few minutes, and hence if the status is checked immediately after startup, it reports ‘Agent is Running but not ready’. It will report the status as Ready after the collection is completed and this is not an issue.

Perform the steps below to increase the Agent memory settings:

1. Stop the Agent

2. Take a backup of, and then edit, the /sysman/config/emd.properties file (under the agent instance home, here /u01/app/oracle/Agent12c/agent_inst)
Change
agentJavaDefines=-Xmx673M -XX:MaxPermSize=96M
to
agentJavaDefines=-Xmx1024M -XX:MaxPermSize=96M

Save the file

3. Start the Agent and monitor its status
/bin>./emctl start agent
after 5-10 minutes
/bin>./emctl status agent
/bin>./emctl upload

This can be found in the manual and I had already tried it. I even went as far as Xmx10240M, in small steps, but there was no noticeable difference.
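If you have to apply the -Xmx edit from step 2 repeatedly while experimenting, it is easy to script. A minimal sketch, assuming you pass the full path to emd.properties yourself (the file name and property are from the SR text; the helper and its defaults are my own):

```python
import re
import shutil

def bump_agent_heap(emd_properties, new_xmx="1024M"):
    """Back up emd.properties, then raise the -Xmx value on the
    agentJavaDefines line, leaving all other settings untouched."""
    shutil.copy(emd_properties, emd_properties + ".bak")  # take a backup first
    with open(emd_properties) as f:
        text = f.read()
    # Replace only the -Xmx flag that follows agentJavaDefines=
    text = re.sub(r"(agentJavaDefines=.*?-Xmx)\d+M", rf"\g<1>{new_xmx}", text)
    with open(emd_properties, "w") as f:
        f.write(text)
```

Stop the agent before editing and start it again afterward, exactly as in the quoted steps.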
Apart from the SR I managed to contact Kellyn Pot’Vin, the author of an interesting Oracle blog and very knowledgeable when it comes to Enterprise Manager. Her first suggestion was to check a new option in Enterprise Manager that graphs the collection data, and to use that performance analysis to see what might be backlogging collections and impacting performance. Sadly, I just got an empty graph.

Solution

Upon studying my log files she came up with a simple question:

Could you send me the results of:

ulimit -Su
ulimit -Hu

Okay:

[oracle@s-xxxx-db-11 bin]$ ulimit -Su
8192
[oracle@s-xxxx-db-11 bin]$ ulimit -Hu
3100271

Her next request was da bomb:

Could you set both of these to unlimited and restart the agent?

Bingo! The agent started in a jiffy, not even half a line of dots. Apart from that, it ran smoothly and I could easily add another 100 databases to the repository. Problem solved!
Of course, afterwards you need to have your system administrator find the right setting for those ulimits, which in our case turned out to be only slightly higher than the values published above.
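To make such limits survive reboots and new login sessions, the usual place on Linux is /etc/security/limits.conf (or a drop-in file under /etc/security/limits.d/). The values below are purely illustrative; have your system administrator size them to your own target count:

```
# max user processes for the agent owner - example values only
oracle   soft   nproc   131072
oracle   hard   nproc   131072
```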

Not only did the agent start fast, it has kept running as it’s supposed to, for weeks now.
This information was not in the manual. Maybe it will be eventually. Many thanks to Kellyn!
