AMIS Technology Blog, Friends of Oracle and Java (http://technology.amis.nl)

SQL> Select * From Alert_XML_Errors;
http://technology.amis.nl/2014/08/29/sql-select-alert_xml_errors/
Fri, 29 Aug 2014 18:58:45 +0000

Once you are able to show the XML version of the alert log as data in database table Alert_XML, it would be nice to check out the errors with their accompanying timestamps through view Alert_XML_Errors. Like this, with the help of two types and a pipelined function.

su - oracle
. oraenv [ orcl ]
[oracle@localhost ~]$ sqlplus harry/*****
....
SQL> desc alert_xml
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
TEXT                                               VARCHAR2(400 CHAR)

SQL> CREATE OR REPLACE TYPE v2_row AS OBJECT ( text varchar2(400));
/

Type created.

SQL> CREATE OR REPLACE TYPE v2_table AS TABLE OF v2_row;
/

Type created.

SQL> CREATE OR REPLACE FUNCTION Get_Errors
       ( p sys_refcursor )
     RETURN v2_table PIPELINED
     IS
       out_rec           v2_row := v2_row(NULL);
       this_rec          alert_xml%ROWTYPE;
       currdate          VARCHAR2(400) := 'NA';
       last_printed_date VARCHAR2(400) := currdate;
       testday           VARCHAR2(3);
       testerr           VARCHAR2(4);
       firstdate         BOOLEAN := TRUE;
     BEGIN
       currdate := 'NA';
       last_printed_date := currdate;
       LOOP
         FETCH p INTO this_rec;
         EXIT WHEN p%NOTFOUND;

         this_rec.text := LTRIM(this_rec.text);

         -- check if this line contains a date stamp
         testday := SUBSTR(this_rec.text,1,3);
         IF testday = '201'
         THEN
           -- show dates as in the text version of the alert log
           currdate := to_char(to_date(substr(this_rec.text,1,19),'YYYY-MM-DD HH24:MI:SS'),'Dy Mon DD hh24:mi:ss yyyy','NLS_DATE_LANGUAGE = AMERICAN');
         ELSIF testday IN ('Sat','Sun','Mon','Tue','Wed','Thu','Fri')
         THEN
           currdate := this_rec.text;
         END IF;

         testerr := SUBSTR(this_rec.text,1,4);
         IF testerr = 'ORA-'
         OR testerr = 'TNS-'
         THEN
           IF last_printed_date != currdate
           OR ( currdate != 'NA' AND firstdate )
           THEN
             last_printed_date := currdate;
             firstdate := FALSE;
             out_rec.text := '****';
             PIPE ROW(out_rec);
             out_rec.text := currdate;
             PIPE ROW(out_rec);
             out_rec.text := '****';
             PIPE ROW(out_rec);
           END IF;
           out_rec.text := this_rec.text;
           PIPE ROW(out_rec);
         END IF;
       END LOOP;

       CLOSE p;
       RETURN;
     END Get_Errors;
/

Function created.

SQL> CREATE OR REPLACE FORCE VIEW ALERT_XML_ERRORS
AS
SELECT "TEXT"
FROM TABLE (get_errors (CURSOR (SELECT * FROM alert_xml)));

View created.

And check out the errors now:

SQL> set pagesize 0
SQL> select * from alert_xml_errors;
****
Tue Aug 26 11:01:08 2014
****
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-27037: unable to obtain file status
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-1157 signalled during: ALTER DATABASE OPEN...
****
Tue Aug 26 11:12:51 2014
****
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-27037: unable to obtain file status
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-1157 signalled during: ALTER DATABASE OPEN...
****
Tue Aug 26 13:39:36 2014
****
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-27037: unable to obtain file status
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-1157 signalled during: ALTER DATABASE OPEN...
****

< snip >

SQL>

And yes, the pipelined function will only work until 2020 on the XML version of the alert log – see if you can find the code line! – and yes, it should also work on the text version of the alert log, provided the external table describes like alert_xml. A sketch of such a table follows below.
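
As a sketch of that last remark, an external table on the text version of the alert log that describes like alert_xml could look roughly as follows. The directory path and file name are assumptions for a default 11gR2 single-instance setup (ADR home as shown further on in this post); adjust them to your environment.

SQL> create directory alert_text_dir as '/u01/app/oracle/diag/rdbms/orcl/orcl/trace';
SQL> CREATE TABLE ALERT_TEXT ( TEXT VARCHAR2(400 CHAR) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY ALERT_TEXT_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
nobadfile
nodiscardfile
nologfile
)
LOCATION ('alert_orcl.log')
)
REJECT LIMIT UNLIMITED;
SQL> CREATE OR REPLACE VIEW ALERT_TEXT_ERRORS
AS
SELECT "TEXT"
FROM TABLE (get_errors (CURSOR (SELECT * FROM alert_text)));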

SQL> Select * From Alert_XML;
http://technology.amis.nl/2014/08/28/sql-select-alert_xml-preprocessing-adrci/
Thu, 28 Aug 2014 20:46:55 +0000

By mapping an external table to some text file, you can view the file's contents as if they were data in a database table. External tables have been available since Oracle 9i Database, and from Oracle Database 11gR2 onwards it is even possible to do some inline preprocessing on the file.

The following example of this feature picks up the standard output of shell script "get_alert_xml.sh". It does not reference any actual file, but note that an empty "dummyfile" must still be present and readable by oracle. By pre-executing some ADRCI commands and redirecting their output to the screen, external table Alert_XML will show the last 7 days of entries of the XML version of the alert log.

su - oracle
. oraenv [ orcl ]

$ cd /u01/app/oracle/admin/scripts
$ touch dummyfile
$ echo '#!/bin/sh'                                                                     > get_alert_xml.sh
$ echo 'ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1'                          >> get_alert_xml.sh
$ echo 'DIAG_HOME=diag/rdbms/orcl/orcl'                                               >> get_alert_xml.sh
$ echo 'DAYS=\\"originating_timestamp > systimestamp-7\\"'                            >> get_alert_xml.sh
$ echo '$ORACLE_HOME/bin/adrci exec="set home $DIAG_HOME;show alert -p $DAYS -term;"' >> get_alert_xml.sh
$ chmod 744 get_alert_xml.sh
$ sqlplus / as sysdba
SQL> create directory exec_dir as '/u01/app/oracle/admin/scripts';
SQL> grant read,execute on directory exec_dir to harry;
SQL> connect harry/****
SQL> CREATE TABLE ALERT_XML ( TEXT VARCHAR2(400 CHAR) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY EXEC_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
PREPROCESSOR EXEC_DIR:'get_alert_xml.sh'
nobadfile
nodiscardfile
nologfile
)
LOCATION ('dummyfile')
)
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;
SQL> select * from alert_xml;

TEXT
--------------------------------------------------------------------------------

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
2014-08-26 10:21:19.018000 +02:00
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
2014-08-26 10:21:20.066000 +02:00

> snip <

SQL>
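
For reference, and assuming bash's builtin echo (which does not interpret backslash escapes), the generated get_alert_xml.sh ends up containing the lines below; the DAYS variable carries the ADRCI predicate that restricts the output to the last 7 days:

#!/bin/sh
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
DIAG_HOME=diag/rdbms/orcl/orcl
DAYS=\\"originating_timestamp > systimestamp-7\\"
$ORACLE_HOME/bin/adrci exec="set home $DIAG_HOME;show alert -p $DAYS -term;"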

Check out Alert_XML_Errors here.

Sqlnet tracing during nightly hours…
http://technology.amis.nl/2014/08/26/sqlnet-tracing-nightly-hours/
Tue, 26 Aug 2014 13:47:25 +0000

A TNS error at night…

Some time ago my data warehouse colleague came to me with a TNS error. At night he runs batch jobs to update his data warehouse. That night one of his jobs did not run properly and generated an ORA-12592 error. He had to restart the job during the daytime.

It turned out this was not the only occurrence of this TNS error. A couple of days later he came to me again with similar TNS errors, generated at a similar time. I looked in the alert.log and in the listener.log but nothing could be found. Therefore I decided to switch on sqlnet tracing in order to find out what was happening. However, sqlnet tracing generates a lot of data, and the TNS errors were generated at night. It is not a good idea to switch on sqlnet tracing during the day and only come back the next day to switch it off: you will probably run into disk full problems!

Therefore I decided to make some scripts. Using crontab or the Windows Task Scheduler I switch on sqlnet and listener tracing some time before the TNS error normally occurs and switch it off some time after. I would like to share with you the way I did it.

My configuration to trace.

We run an Oracle 11.2.0.4 database on an Oracle Linux 6 server. Our client computer is a Windows Server machine. Some data warehouse tools are installed on and run from this client, and Oracle 11.2 client software is installed on it as well.

How to switch on sqlnet tracing

I set sqlnet tracing on three levels: client level, server level and listener level (also on the server). Sqlnet tracing on the client level can be switched on by setting parameters in the sqlnet.ora file on the client computer. On the server level you have to set parameters in the sqlnet.ora on the server. Setting parameters in the listener.ora file switches on listener tracing. These files can be found in the $ORACLE_HOME/network/admin directory.

Setting sqlnet tracing on the server:

On the server I copied the sqlnet.ora file to a file with the name sqlnet.ora.off. I made another copy of sqlnet.ora and gave it the name sqlnet.ora.on. Both files were put in the $ORACLE_HOME/network/admin directory, the same directory as for the original sqlnet.ora. I edited the sqlnet.ora.on file and added the following parameters to this file:

sqlnet.ora.on on the server:

TRACE_LEVEL_SERVER = 16
TRACE_FILE_SERVER = sqlnet_server.trc
TRACE_DIRECTORY_SERVER = /u03/network/trace
TRACE_UNIQUE_SERVER = ON
TRACE_TIMESTAMP_SERVER = ON

LOG_DIRECTORY_SERVER = /u03/network/log
LOG_FILE_SERVER = sqlnet_server.log

DIAG_ADR_ENABLED = OFF
ADR_BASE = /u01/app/oracle

This is not the place to explain the meaning of these parameters. For more information take a look at note id 219968.1 which can be found on the Oracle Support site or read the Oracle documentation for example: Oracle Database Net Services Administrator’s Guide, chapter 16.8: http://docs.oracle.com/cd/E11882_01/network.112/e41945/trouble.htm#r2c1-t57

However I would like to make some remarks:

TRACE_LEVEL_SERVER = 16
You can set the level of tracing with this parameter. I used the highest level. But it could be a good idea to start with a lower level for example 4 or 6. Higher levels produce more data and therefore more gigabytes.

TRACE_DIRECTORY_SERVER = /u03/network/trace
LOG_DIRECTORY_SERVER = /u03/network/log
I decided to use another mountpoint than the default in order to prevent disk full errors. There was more disk space on the /u03 mountpoint.

TRACE_UNIQUE_SERVER = ON
This causes Oracle to generate a unique trace file for every connection.

TRACE_TIMESTAMP_SERVER = ON
If you set this parameter then a timestamp in the form of [DD-MON-YY 24HH:MI:SS] will be recorded for each operation traced by the trace file.

DIAG_ADR_ENABLED = OFF
ADR_BASE = /u01/app/oracle
You should set these two parameters if you are using version 11g or higher. If you use version 10g or lower then you should not add these parameters.

In my first version of the sqlnet.ora.on I also set the parameters:
# TRACE_FILELEN_SERVER = ….
# TRACE_FILENO_SERVER = ….
But it turned out that this was not a very good idea: huge numbers of files were generated. So I decided to throw them out.
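
Note that the non-default trace and log directories used above (/u03/network/trace and /u03/network/log) must exist and be writable by the oracle software owner before tracing is switched on. Something along these lines should do; the oinstall group is an assumption, adjust it to your installation:

mkdir -p /u03/network/trace /u03/network/log
chown -R oracle:oinstall /u03/network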

Setting tracing on the listener:

I also made a copy of the listener.ora and named it listener.ora.off. I made another copy of this file and named it listener.ora.on. Also these files were put in the $ORACLE_HOME/network/admin directory. I edited the listener.ora.on and added the following parameters:

listener.ora.on on the server:

TRACE_LEVEL_LISTENER = 16
TRACE_FILE_LISTENER = listener.trc
TRACE_DIRECTORY_LISTENER = /u03/network/trace
TRACE_UNIQUE_LISTENER = ON
TRACE_TIMESTAMP_LISTENER = ON

LOG_DIRECTORY_LISTENER = /u03/network/log
LOGGING_LISTENER = ON
LOG_FILE_LISTENER = listener.log

DIAG_ADR_ENABLED_LISTENER = OFF
ADR_BASE_LISTENER = /u01/app/oracle

A remark:
If your listener has a name other than the default LISTENER, for example LSTNR, then you should use parameters such as TRACE_LEVEL_LSTNR, and so on.
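
For such a listener named LSTNR, the tracing-related entries in listener.ora.on would, as a sketch mirroring the block above, look like this:

TRACE_LEVEL_LSTNR = 16
TRACE_FILE_LSTNR = listener.trc
TRACE_DIRECTORY_LSTNR = /u03/network/trace
TRACE_UNIQUE_LSTNR = ON
TRACE_TIMESTAMP_LSTNR = ON

LOG_DIRECTORY_LSTNR = /u03/network/log
LOGGING_LSTNR = ON
LOG_FILE_LSTNR = listener.log

DIAG_ADR_ENABLED_LSTNR = OFF
ADR_BASE_LSTNR = /u01/app/oracle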

Setting sqlnet tracing on the client:

Also on the client computer I made two copies of sqlnet.ora: sqlnet.ora.off and sqlnet.ora.on. I added the following parameters to the sqlnet.ora.on file:

sqlnet.ora.on on the client:

TRACE_LEVEL_CLIENT = 16
TRACE_FILE_CLIENT = sqlnet_client.trc
TRACE_DIRECTORY_CLIENT = C:\app\herman\product\11.2.0\client_1\network\trace
TRACE_UNIQUE_CLIENT = ON
TRACE_TIMESTAMP_CLIENT = ON

LOG_DIRECTORY_CLIENT = C:\app\herman\product\11.2.0\client_1\network\log
LOG_FILE_CLIENT = sqlnet_client.log

TNSPING.TRACE_DIRECTORY = C:\app\herman\product\11.2.0\client_1\network\trace
TNSPING.TRACE_LEVEL = ADMIN

DIAG_ADR_ENABLED = OFF
ADR_BASE = c:\app\herman

Scripts for switching on sqlnet tracing

Scripts on the server:

On the server I created the following two scripts: sqlnet_trace_on.sh and sqlnet_trace_off.sh

sqlnet_trace_on.sh:

#!/bin/bash
# ******************************************************************************
# Script Name : sqlnet_trace_on.sh
# Purpose : To switch on sqlnet tracing and listener tracing
# Created by : AMIS Services, Nieuwegein, The Netherlands
#
# Remarks : a set of sqlnet.ora.on, sqlnet.ora.off, listener.ora.on and
# listener.ora.off must be available in the
# OH/network/admin-directory
#
#——————————————————————————-
# Revision record
# Date Version Author Modification
# ———- —— —————– ———————————-
# 07-11-2013 1.0 Karin Kriebisch Created, listener tracing
# 06-05-2014 1.1 Herman Buitenhuis sqlnet tracing added
#
#******************************************************************************
#
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LISTENER_ORA_LOC=$ORACLE_HOME/network/admin
export LISTENER_TRACE_LOC=$ORACLE_HOME/network/log
export LOG=$LISTENER_TRACE_LOC/Listener_Trace_ON.log
#
echo — Initializing Logfile – Switching sqlnet/listener tracing ON — > $LOG
echo `date` >>$LOG
echo ================================================================ >>$LOG
echo >>$LOG
#
echo Copy listener.ora.on to listener.ora >>$LOG
#
cp $LISTENER_ORA_LOC/listener.ora.on $LISTENER_ORA_LOC/listener.ora >>$LOG
#
echo Copy sqlnet.ora.on to sqlnet.ora >>$LOG
#
cp $LISTENER_ORA_LOC/sqlnet.ora.on $LISTENER_ORA_LOC/sqlnet.ora >>$LOG
#
#
echo Restart LISTENER >>$LOG
$ORACLE_HOME/bin/lsnrctl stop >>$LOG
$ORACLE_HOME/bin/lsnrctl start >>$LOG
echo `date` >>$LOG
#
echo Check LISTENER status after 30 seconds >>$LOG
sleep 30
$ORACLE_HOME/bin/lsnrctl status >>$LOG
#
echo `date` >>$LOG
echo === sqlnet and listener Tracing switched ON === >>$LOG

sqlnet_trace_off.sh:

#!/bin/bash
# ******************************************************************************
# Script Name : sqlnet_trace_off.sh
# Purpose : To switch off sqlnet tracing and listener tracing
# Created by : AMIS Services, Nieuwegein, The Netherlands
#
# Remarks : a set of sqlnet.ora.on, sqlnet.ora.off, listener.ora.on and
# listener.ora.off must be available in the
# OH/network/admin-directory
#
#——————————————————————————-
# Revision record
# Date Version Author Modification
# ———- —— —————– ———————————-
# 07-11-2013 1.0 Karin Kriebisch Created, listener tracing
# 06-05-2014 1.1 Herman Buitenhuis sqlnet tracing added
#
#******************************************************************************
#
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LISTENER_ORA_LOC=$ORACLE_HOME/network/admin
export LISTENER_TRACE_LOC=$ORACLE_HOME/network/log
export LOG=$LISTENER_TRACE_LOC/Listener_Trace_OFF.log
#
echo — Initializing Logfile – Switching sqlnet/listener tracing OFF — > $LOG
echo `date` >>$LOG
echo ================================================================ >>$LOG
echo >>$LOG
#
echo Copy listener.ora.off to listener.ora >>$LOG
#
cp $LISTENER_ORA_LOC/listener.ora.off $LISTENER_ORA_LOC/listener.ora >>$LOG
#
echo Copy sqlnet.ora.off to sqlnet.ora >>$LOG
#
cp $LISTENER_ORA_LOC/sqlnet.ora.off $LISTENER_ORA_LOC/sqlnet.ora >>$LOG
#
#
echo Restart LISTENER >>$LOG
$ORACLE_HOME/bin/lsnrctl stop >>$LOG
$ORACLE_HOME/bin/lsnrctl start >>$LOG
echo `date` >>$LOG
#
echo Check LISTENER status after 30 seconds >>$LOG
sleep 30
$ORACLE_HOME/bin/lsnrctl status >>$LOG
#
echo `date` >>$LOG
echo === Switched sqlnet/listener Tracing OFF === >>$LOG

Scripts on the windows client:

On the windows client I made the following two scripts: sqlnet_trace_on.cmd and sqlnet_trace_off.cmd.

sqlnet_trace_on.cmd:

REM Script Name: sqlnet_trace_on.cmd
REM Purpose : to switch on sqlnet tracing on the windows client
REM Created by : AMIS Services, Nieuwegein, The Netherlands
REM
REM Remarks : sqlnet.ora.on, sqlnet.ora.off must be available in the
REM OH/network/admin-directory
REM
REM Revision record
REM Date Version Author Modification
REM ———- —— —————– ———————————-
REM 06-05-2014 1.0 Herman Buitenhuis Creation, sqlnet tracing
REM

set ORACLE_HOME=C:\app\herman\product\11.2.0\client_1\
set SQLNET_ORA_LOC=%ORACLE_HOME%/network/admin
set SQLNET_TRACE_LOC=%ORACLE_HOME%/network/log
set LOG=%SQLNET_TRACE_LOC%/sqlnet_trace_on.log

echo — Initializing Logfile – Switching sqlnet tracing ON — > %LOG%

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%

@echo on

echo ================================================================ >>%LOG%
echo >>%LOG%
echo Copy sqlnet.ora.on to sqlnet.ora >>%LOG%

cd %SQLNET_ORA_LOC%
copy sqlnet.ora.on sqlnet.ora

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%
@echo on

echo === Switched sqlnet Tracing ON === >>%LOG%

sqlnet_trace_off.cmd:

REM Script Name: sqlnet_trace_off.cmd
REM Purpose : to switch off sqlnet tracing on the windows client
REM Created by : AMIS Services, Nieuwegein, The Netherlands
REM
REM Remarks : sqlnet.ora.on, sqlnet.ora.off must be available in the
REM OH/network/admin-directory
REM
REM Revision record
REM Date Version Author Modification
REM ———- —— —————– ———————————-
REM 06-05-2014 1.0 Herman Buitenhuis Creation, sqlnet tracing
REM

set ORACLE_HOME=C:\app\herman\product\11.2.0\client_1\
set SQLNET_ORA_LOC=%ORACLE_HOME%/network/admin
set SQLNET_TRACE_LOC=%ORACLE_HOME%/network/log
set LOG=%SQLNET_TRACE_LOC%/sqlnet_Trace_OFF.log

echo — Initializing Logfile – Switching sqlnet tracing OFF — > %LOG%

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%

@echo on
echo ================================================================ >>%LOG%
echo >>%LOG%
echo Copy sqlnet.ora.off to sqlnet.ora >>%LOG%

cd %SQLNET_ORA_LOC%
copy sqlnet.ora.off sqlnet.ora

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%
@echo on

echo === Switched sqlnet Tracing OFF === >>%LOG%

Switching on sqlnet tracing manually…

Using the scripts you can switch on and switch off sqlnet tracing.

On the server you switch on sqlnet and listener tracing by the following command:

./sqlnet_trace_on.sh

You can switch off tracing by:

./sqlnet_trace_off.sh

On the client you can run the scripts sqlnet_trace_on.cmd and sqlnet_trace_off.cmd. However, there is one important thing to note: because of Windows security, you should run these scripts in a cmd box started with "Run as administrator"! If you don't, you will get "Access is denied" errors.

Switching on sqlnet tracing automatically

Using crontab you can automatically switch sqlnet tracing on and off on the server. For example, if you want to switch on sqlnet tracing daily at 02:00 and switch it off at 03:00, you add (with "crontab -e") the following lines to the crontab file:

# switch on/off sqlnet/listener tracing
00 02 * * * /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet_trace_on.sh
00 03 * * * /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet_trace_off.sh
#

On the Windows client you can use the Windows Task Scheduler to switch sqlnet tracing on and off. However, because of Windows security, you can get access denied errors. In order to solve this I had to contact the Windows system administrator. He changed the security settings of the %ORACLE_HOME%/network/admin directory, and then it worked without any problems.

I switch on tracing on the client just before the listener is restarted on the server, so I scheduled the script sqlnet_trace_on.cmd at 01:55 and sqlnet_trace_off.cmd at 02:55.
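
As a sketch, such scheduled tasks can also be created from an elevated command prompt with schtasks; the script location used here is an assumption, adjust it to where you placed the .cmd files:

schtasks /Create /SC DAILY /TN "SqlnetTraceOn"  /TR "C:\app\herman\scripts\sqlnet_trace_on.cmd"  /ST 01:55 /RL HIGHEST
schtasks /Create /SC DAILY /TN "SqlnetTraceOff" /TR "C:\app\herman\scripts\sqlnet_trace_off.cmd" /ST 02:55 /RL HIGHEST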

Using the above scripts and method I was able to do my sqlnet and listener tracing at night, and also sleep very well! :-)

I would like to thank my colleague Karin Kriebisch. She made the initial version of the script.

SOA Suite 12c: Using Enterprise Scheduler Service to schedule deactivation and activation of inbound adapter bindings
http://technology.amis.nl/2014/08/23/soa-suite-12c-using-enterprise-scheduler-service-to-schedule-deactivation-and-activation-of-inbound-adapter-bindings/
Sat, 23 Aug 2014 20:46:10 +0000

The Enterprise Scheduler Service that is available in Fusion Middleware 12.1.3 supports a number of administration activities around the SOA Suite. We will look at one particular use case regarding environment management using the ESS. Suppose we have an inbound database adapter. Suppose we have created the PortalSlotRequestProcessor SOA composite that uses a database poller looking for new records in a certain table PORTAL_SLOT_ALLOCATIONS (this example comes from the Oracle SOA Suite 12c Handbook, Oracle Press). The polling frequency was set to once every 20 seconds. And that polling goes on and on for as long as the SOA composite remains deployed and active.

Imagine the situation where every day during a certain period, there is a substantial load on the SOA Suite, and we would prefer to reduce the resource usage from non-crucial processes. Further suppose that the slot allocation requests arriving from the portal are considered not urgent, for example because the business service level agreed with our account managers is that these requests have to be processed within 24 hours – rather than once every 20 seconds. We do not want to create a big batch, and whenever we can, we strive to implement straight through processing. But between 1 and 2 AM on every day, we would like to pause the inbound database adapter.

In this section, we will use the Enterprise Scheduler Service to achieve this. We will create the schedules that trigger at 1 AM every day, used for deactivating the adapter, and 2 AM, used for activating the adapter. In fact, in order to make testing more fun, we will use schedules that trigger at 10 past the hour and 30 past the hour. These schedules are then associated in the Enterprise Manager Fusion Middleware Control with the inbound database adapter binding PortalSlotRequestPoller.

Create Schedules

An ESS Schedule is used to describe either one moment or a series of moments in time. A schedule can be associated with one or many Job definitions to describe when those jobs should be executed. A recurring schedule has a frequency that describes how the moments in time are distributed over time. A recurring schedule can have a start time and an end time to specify the period during which the recurrence should take place.

To create the schedules that will govern the inbound database adapter, open the EM FMW Control and select the node Scheduling Services | ESSAPP. From the dropdown list at the top of the page, select Job Requests | Define Schedules, as is shown in this figure.

 

image

Click on the icon to create a new schedule. Specify the name of the schedule as At10minPastTheHour. Set the display name to “10 minutes past each hour”. The schedule has to be created in the package [/oracle/apps/ess/custom/]soa. This is a requirement for schedules used for adapter activation.

Select the frequency as Hourly/Minute, Every 1 Hour(s) 0 Minute(s) and the start date as any date not too far in the future (or even in the past) with a time set to 10 minutes past any hour.

image

Note that using the button Customize Times, we can have a long list of moments in time generated and subsequently manually modify them if we have a need for some exceptions to the pattern.

Click on OK to save this schedule.

Create a second schedule called At30minPastTheHour. The definition is very similar to the previous one, except for the start time, which should be 30 minutes past some hour.

image

Click OK to save this schedule definition.

Note that more sophisticated recurrence schedules can be created through the Java API exposed by ESS as well as through the IDE support in JDeveloper. These options, which allow specific week days or months to be included or excluded, can currently not be set through the EM FMW Control.

Apply Schedules for Activation and Deactivation of Inbound Database Adapter

Select node SOA | soa-infra | default | PortalSlotRequestProcessor – the composite we created in the previous chapter. Under Services and References, click on the PortalSlotRequestPoller, the inbound database adapter binding.

clip_image002

The PortalSlotRequestProcessor appears. Click on the icon for adapter schedules.

image

In the Adapter Schedules popup that appears, we can select the schedule that is to be used for deactivating and for activating the adapter binding. Use the At10minPastTheHour schedule for deactivation and At30minPastTheHour for activation. Press Apply Schedules to confirm the new configuration.

clip_image003

From this moment on, the inbound database adapter binding that polls table PORTAL_SLOT_ALLOCATIONS is active only for 40 minutes during every hour, starting at 30 minutes past the hour.

For example, at 22:14, the binding is clearly not active.

image

 

Test switching off and on of Database Adapter binding

When the schedules for activation and deactivation have been applied, they are immediately in effect. You can test this in the Dashboard page for the inbound database adapter binding, as is shown here.

clip_image002[5]

Here we see how a single record was processed by the adapter binding, inserted at 10:09 PM. Four more records were inserted into table PORTAL_SLOT_ALLOCATIONS at 10:13 and 10:14. However, because the adapter binding is currently not active, these records have not yet been processed.

image

image

At 30 minutes past the hour – 10:30 in this case – the adapter becomes active again and starts processing the records it will then find in the table. Because the adapter was configured to pass just a single record to a SOA composite and not process more than two records in a single transaction, it will take two polling cycles to process the four records that were inserted between 10:10 and 10:30. These figures illustrate this.

image

image

clip_image004

 

The SOA composite instances that are created for these four records retrieved in two poll cycles:

image

and the flow trace for the instance at 10:30:09 looks like this – processing two separate database records:

image

image

When you check in the ESS UI in EM FMW Control, you will find two new Job Definitions, generic Jobs for executing SOA Suite management stuff:

ess_adapteractivation1

In the Job Requests overview, instances of these jobs appear, one of each every hour. And the details of these job requests specify which adapter binding in which composite is the target of the SOA administrative action performed by the job.

ess_adapteractivation2

SOA Suite 12c: Invoke Enterprise Scheduler Service from a BPEL process to submit a job request
http://technology.amis.nl/2014/08/23/soa-suite-12c-invoke-enterprise-scheduler-service-from-a-bpel-process-to-submit-a-job-request/
Sat, 23 Aug 2014 10:50:19 +0000

The Fusion Middleware 12.1.3 platform contains the ESS or Enterprise Scheduler Service. This service can be used as an asynchronous, schedule based job orchestrator. It can execute jobs that are Operating System jobs, Java calls (local Java or EJB), PL/SQL calls, and Web Service calls (synchronous, asynchronous and one-way) including SOA composite, Service Bus and ADF BC web services.

Jobs and schedules can be defined from client applications through a Java API or through the Enterprise Manager FMW Control user interface. Additionally, ESS exposes a web service through which (predefined) jobs can be scheduled. This web service can be invoked from BPEL processes in SOA composites. In this article I will briefly demonstrate how to do the latter: submit a request to the Enterprise Scheduler Service to execute a job according to a specified schedule.

Because the job cannot be executed anonymously, the ESS Scheduler Service has an attached WSM policy to enforce credentials to be passed in. As a consequence, the SOA composite that invokes the service needs to have a WSM policy attached to the reference binding for the ESS Service in order to provide those required credentials. This article explains how to do that.

Steps:

  • Preparation: create an ESS Job Definition and a Schedule – in my example these are SendFlightUpdateNotification (which invokes a SOA composite to send an email) and Every5Minutes
  • Ensure that the ESS Scheduler Web Service has a WSM security policy attached to enforce authentication details to be provided (see description in this article: FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI)
  • Create a SOA composite application with a one way BPEL process exposed as a SOAP Web Service
  • Add a Schedule Job activity to the BPEL process and configure it to request the SendFlightUpdateNotification according to the Every5Minutes schedule; pass the input to the BPEL process as the application property for the job
  • Set a WSDL URL for a concrete WSDL – instead of the abstract one that is configured by default for the ESS Service
  • Attach a WSM security policy to the Reference Binding for the ESS Scheduler Web Service
  • Configure username and password as properties in composite.xml file – to provide the authentication details used by the policy and passed in security headers
  • Deploy and Test

 

Preparation: create an ESS Job Definition and a Schedule

in my example these are SendFlightUpdateNotification (which invokes a SOA composite to send an email)

image

and Every5Minutes

image

 

Ensure that the ESS Scheduler Web Service has a WSM security policy attached

to enforce authentication details to be provided (see description in this article: FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI)

image

Create a SOA composite application

with a one way BPEL process exposed as a SOAP Web Service

image

Add a Schedule Job activity to the BPEL process

image

and configure it to request the SendFlightUpdateNotification according to the Every5Minutes schedule;

image

image

Leave open the start time and end time (these are inherited now from the schedule)

SNAGHTML62b8333

Open the tab application properties.

SNAGHTML62bc65a
Here we can override the default values for Job application properties with values taken for example from the BPEL process instance variables:

image

SNAGHTML62ce36c

 

Note: in order to select the Job and Schedule, you need to create a database MDS connection to the MDS partition with the ESS User Meta Data.

SNAGHTML62abfb6

 

When you close the Schedule Job definition, you will probably see this warning:

image

Click OK to acknowledge the message. We will soon replace the WSDL URL on the reference binding to correct this problem.

The BPEL process now looks like this:

image

Set a concrete WSDL URL on the Reference Binding for the ESS Service

Get hold of the URL for the WSDL for the live ESS Web Service.

image

image

image

image

Then right click the ESS Service Reference Binding and select Edit from the menu. Set the WSDL URL in the field in the Update Reference dialog.

 

image

Attach a WSM security policy to the Reference Binding for the ESS Scheduler Web Service

Because the ESS Scheduler Web Service is protected by a WSM Security Policy, it requires callers to pass the appropriate WS Security Header. We can simply attach a WSM policy [of our own] to achieve that effect. We can even do so through EM FMW Control, in the run time environment, rather than right here at design time. But this time we will go for the design time, developer route.

Right click the EssService reference binding. Select Configure SOA WS Policies | For Request from the menu.

image

The dialog for configuring SOA WS Policies appears. Click on the plus icon for the Security category. From the list of security policies, select oracle/wss_username_token_client_policy. Then press OK.

image

The policy is attached to the reference binding.

SNAGHTML66e5071

Press OK again.

What we have configured at this point will cause the OWSM framework to intercept the call from our SOA composite to the EssService and inject WS Security headers into it. Or at least, that is what it would like to do. But the policy framework needs access to credentials to put in the WS Security header. The normal approach is for the policy framework to inspect the configured credential store for the username and password to use. The default credential store is called basic.credentials, but you can specify on the policy that it should use a different credential store. See this article for more details: http://biemond.blogspot.nl/2010/08/http-basic-authentication-with-soa.html.

There is a shortcut, however, that we will use here. Instead of using a credential store, our security policy can also simply use a username and password that are configured as properties on the reference binding to which the policy is attached. For the purpose of this article, that is far more convenient.

Click on the reference binding once more. Locate the section Composite Properties | Binding Properties in the properties palette, as shown here.

image

Click on the green plus icon to add a new property. Its name is oracle.webservices.auth.username and the value is for example weblogic. Then add a second property, called oracle.webservices.auth.password and set its value:

SNAGHTML6760e82

You will notice that these two properties are not displayed in the property palette. However annoying that is, it is not a problem: the properties are added to the composite.xml file all the same:

image
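
As a rough sketch (the property values are illustrative and the port and location attributes are left elided, not taken from the actual composite), the relevant fragment in composite.xml looks something like this:

<reference name="EssService"> <!-- other attributes omitted -->
  <binding.ws port="..." location="..."> <!-- port and location as generated for the concrete WSDL -->
    <property name="oracle.webservices.auth.username">weblogic</property>
    <property name="oracle.webservices.auth.password">weblogic1</property>
  </binding.ws>
</reference>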

Deploy and Test

The work is done. Time to deploy the SOA composite to the run time.

Then invoke the service it exposes:

image

Wait for the response

image

and inspect the audit trail:

image

When we drill down into the flow trace and inspect the BPEL audit details, we will find the response from the ESS service – that contains the request identifier:

image

At this point apparently a successful job request submission has taken place with ESS. Let’s check in the ESS console:

image

Job request 605 has spawned 606, which is currently waiting:

image

A little later, the job request 606 is executed:

image

We can inspect the flow trace that was the result of this job execution:

image

Note that there is no link with the original SOA composite that invoked the scheduler service to start the job that now results in this second SOA composite instance.

After making two calls to the SOA composite that invokes the scheduler, and waiting a little, the effects of a job that executes every five minutes (and that has been started twice) become visible:

image

FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI
http://technology.amis.nl/2014/08/23/fmw-12-1-3-invoking-enterprise-scheduler-service-web-services-from-soapui/
Sat, 23 Aug 2014 07:03:01 +0000

The Fusion Middleware 12.1.3 platform contains the ESS or Enterprise Scheduler Service. This service can be used as an asynchronous, schedule-based job orchestrator. It can execute jobs that are Operating System jobs, Java calls (local Java or EJB), PL/SQL calls, and Web Service calls (synchronous, asynchronous and one-way), including SOA composite, Service Bus and ADF BC web services.

Jobs and schedules can be defined from client applications through a Java API or through the Enterprise Manager FMW Control user interface. Additionally, ESS exposes a web service through which (predefined) jobs can be scheduled. This web service can be invoked from BPEL processes in SOA composites – or from any component that knows how to invoke a SOAP Web Service.

In this article I will briefly demonstrate how to invoke the ESS Web Service from SoapUI. I will not describe how to create the Job Definitions – I will assume two pre-existing Job Definitions: HelloWorld (of type PL/SQL job) and SendFlightUpdateNotification (of type one-way Web Service, based on a SOA composite). Both Job Definitions contain application properties – parameters that can be set for every job instance and that are used in the job execution. When invoking the ESS Web Service to schedule a job, values for these properties can be passed in.

There is one tricky aspect with ESS: jobs cannot be run as anonymous users. So if ESS does not know who makes the request for scheduling a job, it will not perform the request. It returns an error such as

oracle.as.scheduler.RuntimeServiceAccessControlException: ESS-02002 User anonymous does not have sufficient privilege to perform operation submitRequest JobDefinition://oracle/apps/ess/custom/saibot/SendFlightUpdateNotification.

To ensure we do not run into this problem, we have to attach a WSM security policy to the ESS Web Service and pass a WS Security Header with valid username and password in our request. Then the job request is made in the context of a validated user and this problem goes away.

The steps to go through:

  • preparation: create Job definitions in ESS that subsequently can be requested for scheduled execution (not described in this article)
  • attach the WSM policy oracle/wss_username_token_service_policy to the ESS Web Service
  • retrieve the WSDL (address) for the ESS Web Service
  • create a new SoapUI project based on the WSDL
  • create a request for the submitRequest operation
    • add WS Addressing headers to request
    • add WS Security header to request
  • run request and check the results – the response and the newly scheduled/executed job

Attach the WSM policy to the ESS Web Service

In EM FMW Control, click on the node for the Scheduling Service | ESSAPP on the relevant managed server. From the dropdown menu on the right side of the page, select the option Web Services.

image

You will be taken to the Web Service overview page. Click on the link for the SchedulerServiceImplPort.

image

This brings you to another overview page for the SchedulerServiceImplPort. Open the tab labeled WSM Policies:

image

Click on the icon labeled Attach/Detach. Now you find yourself on the page where policies can be attached to this Web Service (port binding). Find the security policy oracle/wss_username_token_service_policy in the list of available policies. Click on the Attach button to attach this policy to the ESS Web Service.

image

 

Click on OK to confirm this new policy attachment.

image

At this point, the ESS Scheduler Service can only be invoked by parties that provide a valid username and password. As a result, the Web Service's operations are executed in the context of a real user – just like job-related operations performed through the EM FMW Control UI for ESS, or actions from a client application through the Java API.

Retrieve the WSDL (address) for the ESS Web Service

Click on the link for the WSDL Document SchedulerServiceImplPort:

image

The WSDL opens. We can see from the WSDL that the WS Security policy has been added. We will need the URL for this WSDL document to create the SoapUI project.

image

 

Create a new SoapUI project

Open SoapUI and create a new project. Set the address of the WSDL document that you retrieved in the previous step as the initial WSDL in this new project:

SNAGHTML59dbff1

Edit the request for the submitRequest operation

The request to the submitRequest operation is the request that will cause a new Job Request to be created (and therefore a job to be executed, one or potentially many times). Open the request that was generated by SoapUI.

image

You need to provide the details for the predefined job that already exists in ESS, so ESS will know what to do in processing this request. In this example, I want to run the HelloWorld job from package /oracle/apps/ess/custom through the (out of the box installed) EssNativeHostingApp application. I also provide a value for the application property mytestIntProp:

image

All details have been provided in the request message itself. However, trying to submit this request will fail for two reasons: no security details (a WS Security header) are passed and no WS Addressing details are provided – and the ESS Web Service requires those as well.

image

Let’s add the security side of things.

In the request properties palette, provide the username and password for a valid user account; it is easiest to try this out with the administrator account, probably something like weblogic/weblogic1

image

Then, right click on the request message and click on the option Add WSS Username Token

image

Specify Password Text as the password type

SNAGHTML5abd94b

SoapUI will add the header to the message:

image
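
The header that SoapUI generates is a standard WS-Security UsernameToken; as a sketch (credential values illustrative, and SoapUI may add further elements such as a Nonce), it looks roughly like this inside the SOAP envelope:

<soapenv:Header>
  <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <wsse:UsernameToken>
      <wsse:Username>weblogic</wsse:Username>
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">weblogic1</wsse:Password>
    </wsse:UsernameToken>
  </wsse:Security>
</soapenv:Header>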

When you now try again to submit the request, you will receive a fault regarding a WS Addressing header:

image

This is easily remedied. Click on the WS-A tab at the bottom of the request pane:

image

The WS Addressing header properties palette is shown. Ensure that the checkbox for enabling WS-A addressing is checked and also that the checkbox Randomly generate MessageId is checked:

 

image

 

Now you can submit the request once more. And this time it will succeed. The response message indicates a successful submission of the job request, and it provides an identifier for that request:

image

In the EM FMW Control pages for ESS, we can inspect all job requests and locate our number 409:

SNAGHTML5b0bcc7

We can drill down to find out more details about this job request and its execution:

image

Note the application property value that was passed in from SoapUI to override the default value specified in the Job definition.

Whatever the PL/SQL procedure is supposed to do has been done by now.

Resources

Documentation on ESS Web Service: http://docs.oracle.com/middleware/1213/ess/ESSDG/webservice.htm.

SOA Suite 12c: Configuring GMail as the inbound email provider for UMS (IMAP, SSL)
http://technology.amis.nl/2014/08/17/soa-suite-12c-configuring-gmail-as-the-inbound-email-provider-for-ums-imap-ssl/
Sun, 17 Aug 2014 15:06:48 +0000

In a recent article, I discussed how to configure the SOA Suite 12c for sending emails using GMail: http://technology.amis.nl/2014/08/05/setup-gmail-as-mail-provider-for-soa-suite-12c-configure-smtp-certificate-in-trust-store/. An interesting aspect of that configuration is the loading of the GMail SSL certificate into the Keystore used by WebLogic, in order for the SSL based interaction with GMail to successfully be performed. The configuration of GMail for inbound interactions requires a similar procedure for the certificate for the imap.gmail.com server.

This article quickly presents the steps required for getting this inbound interaction going, from the expected error:

image

<Aug 17, 2014 3:50:22 PM CEST> <Error> <oracle.sdpinternal.messaging.driver.email.inbound.ImapEmailStore> <SDP-26123>
Could not initialize Email Store for: user saibot.airport@gmail.com, server imap.gmail.com, folder INBOX, sslEnabled true
javax.mail.MessagingException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target;
  nested exception is:
        javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at com.sun.mail.imap.IMAPStore.protocolConnect(IMAPStore.java:665)
        at javax.mail.Service.connect(Service.java:295)
        at javax.mail.Service.connect(Service.java:176)
        at oracle.sdpinternal.messaging.driver.email.inbound.ImapEmailStore.initStore(ImapEmailStore.java:159)
        at oracle.sdpinternal.messaging.driver.email.inbound.ImapEmailStore.initStore(ImapEmailStore.java:106)

to the final successful reception of an email:

image

 

Load Certificate into Keystore

The interaction between the UMS server and GMail's IMAP API takes place over SSL. That means that the WebLogic managed server on which the UMS service runs has to have the SSL certificate for the IMAP server loaded in its local keystore – in the exact same way that we needed to load the SMTP server's certificate in order to be able to send emails via GMail (http://technology.amis.nl/2014/08/05/setup-gmail-as-mail-provider-for-soa-suite-12c-configure-smtp-certificate-in-trust-store/).

The steps are a little familiar by now, at least to me.

Download the certificate from Google and store it in a file. Depending on your operating system, this can be done in various ways. On Linux, here is a possible command:

openssl s_client -connect imap.gmail.com:993 > gmail-imap-cert.pem

image

The file gmail-imap-cert.pem should be created now. Note: this openssl action can take a long time or not even finish at all. You can end it after a few seconds (CTRL+C for example) because the important part is done very quickly and right at the beginning.

image

Open the file you retrieved with OpenSSL – gmail-imap-cert.pem in my case – in an editor (such as vi).

Remove all the lines before the line that says -----BEGIN CERTIFICATE----- – but leave this line itself! Also remove all lines after the line with -----END CERTIFICATE----- but again, leave this line itself. Save the resulting file, for example as gmail-imap-certificate.txt (but you can pick any name you like).

SNAGHTML20f4d3f

image
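
As an alternative to editing the file by hand, you can let openssl extract just the certificate block; a possible one-liner (using the same file name as above) is:

openssl s_client -connect imap.gmail.com:993 </dev/null 2>/dev/null | openssl x509 -outform PEM > gmail-imap-certificate.txt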

WebLogic (on which the SOA Suite is running) uses, out of the default installation, a special keystore. It does not use the cacerts store that is installed with the JDK or JRE, but instead a file called DemoTrust.jks, typically located at $WL_HOME/server/lib/DemoTrust.jks. This trust store is "injected" into the JVM when the WebLogic domain is started: "-Djavax.net.ssl.trustStore=/opt/oracle/middleware12c/wlserver/server/lib/DemoTrust.jks". We have the option of removing this startup parameter ("-Djavax.net.ssl.trustStore=$WL_HOME/server/lib/DemoTrust.jks") from setDomainEnv and then adding the certificates to the default Java keystore (cacerts), or, the easier option, we can add the certificate to the DemoTrust keystore that WebLogic uses.

The command for doing this looks as follows, in my environment at least:

/usr/java/latest/jre/bin/./keytool -import -alias imap.gmail.com -keystore /opt/oracle/middleware12c/wlserver/server/lib/DemoTrust.jks -file /var/log/weblogic/gmail-imap-certificate.txt

image

The default password for the keystore is DemoTrustKeyStorePassPhrase.

You will be asked explicitly whether you trust this certificate [and are certain about adding it to the keystore]. Obviously you will have to type y in order to confirm the addition to the keystore:

image

When done, we can check the contents of the keystore using this command:

/usr/java/latest/jre/bin/./keytool -list -keystore  /opt/oracle/middleware12c/wlserver/server/lib/DemoTrust.jks

SNAGHTML20d47e3

Next, you have to restart the WebLogic Managed Server – and perhaps the AdminServer as well (I am not entirely sure about that, but I did it anyway)

 

Email Driver Properties

In EM FMW Control, open the User Messaging Service node in the navigator and select the usermessagingdriver-email for the relevant managed server. From the context menu, select Email Driver Properties. When there is no configuration yet, you create a new one. If you already configured the SOA Suite for outbound mail traffic, you can edit that configuration for the inbound direction.

 

image

In the property overview, there are some properties to set:

image

The Email Receiving protocol for GMail is IMAP. The Incoming Mail Server is imap.gmail.com. The port should be set to 993 and GMail wants to communicate over SSL, so the checkbox should be checked. The Incoming MailIDs are the email addresses that correspond to the names listed under Incoming User IDs. For GMail, both can be the full GMail email address, such as saibot.airport@gmail.com, an account created for the Oracle SOA Suite 12c Handbook that I am currently writing. There are several ways to configure the password. The least safe one is by selecting Use Cleartext Password and simply typing the password for the GMail account in the password field. The password is then stored somewhere on the WebLogic server in readable form.
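
Summarizing the values described above (field labels approximated, account as used in this example):

Email Receiving Protocol  : IMAP
Incoming Mail Server      : imap.gmail.com
Incoming Mail Server Port : 993
Incoming Mail Server SSL  : enabled (checked)
Incoming MailIDs          : saibot.airport@gmail.com
Incoming User IDs         : saibot.airport@gmail.com
Password                  : Use Cleartext Password (least safe option)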

Press the OK button at the top of the page to apply all configuration changes.

image

 

SOA composite application with Inbound UMS Adapter binding

I have created a very simple composite. The (really the only) interesting aspect is the Inbound UMS Adapter on the left. When deployed, this adapter binding negotiates with the UMS services on the WebLogic platform to have the configured mailbox polled and to have an instance of this composite created for every mail that is received. Note that we could have configured message filters to only trigger this composite for specific senders or subjects.

image

The inbound UMS adapter is configured largely with default settings – apart from the name (ReceiveEmail), the steps through the wizard are these:

SNAGHTML1f3fc90

SNAGHTML1f4916f

SNAGHTML1f4a58b

Specify which of the accounts that are configured on the UMS email-driver is associated with this particular adapter binding (note: this means that the value provided here for the end-point has to be included in the Incoming MailIDs property set on the email driver)

SNAGHTML1f4badf

Let’s process the mail content as a string – no attempt to natively transform. Note that many associated properties are available inside the SOA composite from the jca header properties.

SNAGHTML1f69ae1

We do not need Message Filters for this simple test:

SNAGHTML1f78a9c

Nor any custom Java to determine whether to process it. See for example this article for details on this custom Java callout: http://technology.amis.nl/2013/04/07/soa-suite-definitive-guide-to-the-ums-adapter-11-1-1-7/ 

image

Press Finish to complete the adapter configuration.

Deploy the SOA composite to the SOA Suite run time. Send an email to the address that is being polled by the inbound UMS adapter:

image

Wait for a little while (about 15 seconds on average, with our current settings). Then check the EM FMW Control for new instances of our composite:

image

And check the contents of the message processed by the Mediator:

image

and scroll:

image

Yes! We did it.

The post SOA Suite 12c: Configuring GMail as the inbound email provider for UMS (IMAP, SSL) appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/17/soa-suite-12c-configuring-gmail-as-the-inbound-email-provider-for-ums-imap-ssl/feed/ 0
ADF DVT: Editor for easily creating custom base map definition files (hotspot editor) http://technology.amis.nl/2014/08/17/adf-dvt-editor-for-easily-creating-custom-base-map-definition-files-hotspot-editor/?utm_source=rss&utm_medium=rss&utm_campaign=adf-dvt-editor-for-easily-creating-custom-base-map-definition-files-hotspot-editor http://technology.amis.nl/2014/08/17/adf-dvt-editor-for-easily-creating-custom-base-map-definition-files-hotspot-editor/#comments Sun, 17 Aug 2014 12:46:16 +0000 http://technology.amis.nl/?p=31722 Using a custom image as the base map for the ADF DVT Thematic Map component, such as is supported as of release 12.1.3, is very interesting. Visualization is extremely powerful for conveying complex aggregated information. Using maps to associate information  with particular locations – using shape, color, size as well – is very valuable. Being [...]

The post ADF DVT: Editor for easily creating custom base map definition files (hotspot editor) appeared first on AMIS Technology Blog.

]]>
Using a custom image as the base map for the ADF DVT Thematic Map component, as supported as of release 12.1.3, is very interesting. Visualization is extremely powerful for conveying complex aggregated information. Using maps to associate information with particular locations – using shape, color and size as well – is very valuable. Being [...]

Creating the custom base map with the Thematic Map component is quite easy. See for example this article for a demonstration: http://technology.amis.nl/2014/08/17/adf-dvt-creating-a-thematic-map-using-a-custom-base-map-with-hotspots/. There really is only one inconvenience along the way: the creation of an XML file that describes the custom map (image) and the hotspots for associating markers with. That is not necessarily very hard to do, but it takes some time and effort and is error prone.

To overcome that (small) obstacle, I have  created a simple tool – a custom base map file editor. It runs as an ADF Web application. An image file is uploaded to it. The image is displayed and the user can click on all the hotspots on the image. Meanwhile, the XML file is composed.

Here is a visual example of the use of the tool:

Download an image that you want to use as a custom base map:

image

Run the custom base map editor tool. Upload the image to be used:

image

click on the button Process Image.

image

The image is now displayed in the browser.

image

The user can click on the relevant locations on the image. The tool identifies the hotspots from the mouse clicks and creates the custom XML file in the code editor component.

image

You can edit the contents of the code editor, for example to provide the values for the longLabel attribute.

The contents of the code editor can be copied and pasted into the custom XML file. You will only have to change the file reference to point to the correct local directory.

Resources

You will find the sources for this tool in GitHub at: https://github.com/lucasjellema/adf_dvt_12_1_3_custom_basemap_hotspot-editor. The sources constitute a JDeveloper application. Two Java classes, a JSF file and JavaScript library make up the custom base map editor.

image

Steps to get going:

  • Clone this repository or download the zip-file and expand locally
  • Open the CustomBaseMapEditor.jws file in JDeveloper 12.1.3 (or higher)
  • Identify a local directory that you will use for holding the image files
  • Configure that local image directory in class FileHandler (public final static String imageDirectory)
  • Run custom-basemap-editor.jsf
  • When the page opens, upload an image, press the button Process File. The image is shown in the browser. Click on it to define hotspots; in the code editor you will find the custom base map xml required for the Thematic Map component

The post ADF DVT: Editor for easily creating custom base map definition files (hotspot editor) appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/17/adf-dvt-editor-for-easily-creating-custom-base-map-definition-files-hotspot-editor/feed/ 0
ADF DVT: Creating a Thematic Map using a Custom Base Map with hotspots http://technology.amis.nl/2014/08/17/adf-dvt-creating-a-thematic-map-using-a-custom-base-map-with-hotspots/?utm_source=rss&utm_medium=rss&utm_campaign=adf-dvt-creating-a-thematic-map-using-a-custom-base-map-with-hotspots http://technology.amis.nl/2014/08/17/adf-dvt-creating-a-thematic-map-using-a-custom-base-map-with-hotspots/#comments Sun, 17 Aug 2014 11:47:57 +0000 http://technology.amis.nl/?p=31708 One of the interesting new features in ADF DVT 12.1.3 is the option to use a custom image as the base map for the Thematic Map component. This enables us to visualize information and support interaction in a wide variety of visual contexts. The custom image we use can represent a geographical layout, but it [...]

The post ADF DVT: Creating a Thematic Map using a Custom Base Map with hotspots appeared first on AMIS Technology Blog.

]]>
One of the interesting new features in ADF DVT 12.1.3 is the option to use a custom image as the base map for the Thematic Map component. This enables us to visualize information and support interaction in a wide variety of visual contexts. The custom image we use can represent a geographical layout, but it can really be anything we like. A map of the shopping mall, a picture of a mannequin, a map of the galaxy, a chess board: any image on which we can meaningfully present information will do. We can define the hotspots in the image where the thematic map component may render markers. The markers visualize information associated with the hotspot (position) through their shape, color, orientation and size.

In this article a very simple example is described of using the Thematic Map with a custom base map.

The steps are (assuming an ADF application is already created):

  • find an image to use for the base map and add it to the application (for example public_html/images)
  • describe the custom base map in an XML file that references the image and defines all hotspots through x,y coordinates and a logical identifier
  • add the Thematic Map to a page, referencing the custom basemap – through a reference to the map and to the XML file that contains the custom base map’s definition
  • configure the Thematic Map’s pointDataLayer and marker – associating the data set with the hotspots set up for the custom base map

In this example, I will take an artist’s impression of a playground and plot the incident rate on it for each of various contraptions and areas. On top of the image, I will define hotspots for all areas and equipment for which incidents were registered. The final result looks a little like this:

image

The magenta bars indicate the number of occurrences of incidents on that particular spot in the playground. It turns out that the step near the very entrance into the playground is by far the most dangerous part.

Step 1 – find and download image

Just a simple Google search:

image

Download and save – then move to project folder images:

image

Step 2 – Create Custom Base Map XML file

The custom XML file has to contain the specifications for the image to be used and for each of the hotspots. I have used good old Paint to determine the x,y coordinates for my hotspots (simply move your mouse to the hotspot and read x and y on the bottom of the Paint window).

image

I have created the file in the Web Content (root) directory. Note the URL to the image source.

The name attribute for each of the points represents the logical identifier. The data set used to stamp out the markers will provide a matching value.
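As an illustration, the file looks roughly like this – the element and attribute names follow the 12.1.3 custom base map format as far as I know it, and the coordinates, point names and image reference below are made up for this example:

<basemap id="playgroundBaseMap">
  <layer id="playground">
    <image source="/images/playground.png" width="1000" height="750"/>
  </layer>
  <points>
    <point name="entranceStep" x="120" y="640" longLabel="Step near the entrance"/>
    <point name="slide"        x="430" y="280" longLabel="Slide"/>
    <point name="swings"       x="760" y="410" longLabel="Swing set"/>
  </points>
</basemap>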

Step 3 – Add Thematic Map to a page

Assuming we already have a JSF page, simply drag the Thematic Map component to the right location.

image

Choose whatever shipped base map you like – we are going to overwrite that selection anyways:

image

In the page source, configure the thematicMap with the basemap set to playgroundBaseMap and the source referencing the XML file we created earlier on: /playground-map.xml
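In source that boils down to something like this (a sketch – the id and inline style are arbitrary; the basemap and source attributes are the ones that matter):

<dvt:thematicMap id="tm1" basemap="playgroundBaseMap" source="/playground-map.xml"
                 inlineStyle="width:800px;height:600px;">
  <!-- the pointDataLayer is added in step 5 -->
</dvt:thematicMap>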

image

Step 4 – Create the data set providing the data [on the playground incidents] for the hotspots

In this case, I have set up classes PlaygroundIncident – with the details for the occurrences for a single location in the playground – and PlayGroundIncidentStatistics, that returns a collection of all locations and their incidents.
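A minimal sketch of what these two classes could look like (two separate source files) – the property names (location, incidentCount, comment) are my assumptions, chosen to match the expressions used further down:

public class PlaygroundIncident {
    private String location;      // matches the name of a point in the base map XML
    private int incidentCount;    // number of registered incidents at this spot
    private String comment;       // free text, used in the marker tooltip

    public PlaygroundIncident(String location, int incidentCount, String comment) {
        this.location = location;
        this.incidentCount = incidentCount;
        this.comment = comment;
    }
    public String getLocation() { return location; }
    public int getIncidentCount() { return incidentCount; }
    public String getComment() { return comment; }
}

public class PlayGroundIncidentStatistics {
    // returns the collection that the pointDataLayer will be bound to
    public java.util.List<PlaygroundIncident> getIncidents() {
        return java.util.Arrays.asList(
            new PlaygroundIncident("entranceStep", 14, "loose tile on the step"),
            new PlaygroundIncident("slide", 3, "surface gets hot in summer"));
    }
}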

image

Configure a managed bean for the latter class:

image

Step 5 – Configure the pointDataLayer and the markers

With the managed bean at our disposal to provide the data on which we can base the markers in the thematic map for the playground, we can now configure the pointDataLayer, the pointLocation and the marker to be stamped at each location. Note how the pointDataLayer is associated with the collection (List) returned by the managed bean. This could also have been a tree-data binding to an ADF Data Control.

The pointLocation’s pointName attribute associates the location value in each incident retrieved from the collection with the logical names of the points in the custom base map (the XML file). When there is a match, the Thematic Map knows where on the image to position the marker. The marker finally has a shape, a vertical scale factor based on the number of incidents and a shortDesc – or tooltip – derived from the number of occurrences, the comment in the bean and the long label set in the custom base map XML file.
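Putting it together, the relevant page source looks roughly like this – a sketch: the attribute names follow the 12.1.3 DVT tag documentation as far as I recall, and the bean name and properties are the assumed ones from the classes above:

<dvt:thematicMap id="tm1" basemap="playgroundBaseMap" source="/playground-map.xml">
  <dvt:pointDataLayer id="pdl1"
                      value="#{playgroundIncidentStatistics.incidents}"
                      var="incident">
    <dvt:pointLocation type="pointName" pointName="#{incident.location}">
      <dvt:marker shape="rectangle" fillColor="#FF00FF"
                  scaleY="#{incident.incidentCount}"
                  shortDesc="#{incident.incidentCount} incidents - #{incident.comment}"/>
    </dvt:pointLocation>
  </dvt:pointDataLayer>
</dvt:thematicMap>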

image

With all this in place, we can run the application, see the end result and perhaps reconsider the color used for the “bars” in the custom map.

image

Here is an alternative, with triangleUp as shape and blue as the fill color:

image

 

Resources

Find the sources for this simple sample in GitHub: https://github.com/lucasjellema/ADF_DVT_CustomBaseMapSample_PlaygroundIncidents.

The post ADF DVT: Creating a Thematic Map using a Custom Base Map with hotspots appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/17/adf-dvt-creating-a-thematic-map-using-a-custom-base-map-with-hotspots/feed/ 0
ADF DVT – Past, present and future of ADF Data Visualization with Katarina Obradovic-Sarkic http://technology.amis.nl/2014/08/16/adf-dvt-past-present-and-future-of-adf-data-visualization-with-katarina-obradovic-sarkic/?utm_source=rss&utm_medium=rss&utm_campaign=adf-dvt-past-present-and-future-of-adf-data-visualization-with-katarina-obradovic-sarkic http://technology.amis.nl/2014/08/16/adf-dvt-past-present-and-future-of-adf-data-visualization-with-katarina-obradovic-sarkic/#comments Sat, 16 Aug 2014 08:26:36 +0000 http://technology.amis.nl/?p=31667 On Thursday 14th of August, the AMIS office was the venue for a session for the ADF community. Oracle Product Manager Katarina Obradovic-Sarkic was the key presenter at an event dedicated to ADF DVT, the data visualization components in ADF. In this event, Katarina went over the many use cases for data visualizations. She told [...]

The post ADF DVT – Past, present and future of ADF Data Visualization with Katarina Obradovic-Sarkic appeared first on AMIS Technology Blog.

]]>
image

On Thursday 14th of August, the AMIS office was the venue for a session for the ADF community. Oracle Product Manager Katarina Obradovic-Sarkic was the key presenter at an event dedicated to ADF DVT, the data visualization components in ADF. In this event, Katarina went over the many use cases for data visualizations. She told and showed how various Oracle products make use of the visualizations – and how functional requirements from internal product development teams frequently are the driving force behind new visualizations.

She then discussed the many new features in DVT in the recent 12.1.3 release of ADF – as well as their counterparts in the Mobile Application Framework (MAF) that was launched last month. Many DVT components are shared between ADF and MAF, although some are not (yet). For example, the new Diagram component is not yet part of MAF. The Timeline component in MAF has some new features – such as support for time duration – that ADF currently does not have (but will have in the next release, probably 12.1.4). MAF also contains the N-Box, which is not yet part of ADF but will be.

Here is the timeline component in MAF, with the time durations:

image

DVT in 12.1.3 is considerably refreshed – with new components, new features and a partially new architecture. Some examples are discussed in this blog article.

An important evolution is the introduction of the new Chart components that replace the pre-12.1.3 Graph components. The Chart components can use simple Java Collections as their data set. They can also work with the same Tree binding that is used for ADF Rich Table and List View. The rendering of the Charts takes place on the client – rather than the server. The render modes flash, png and svg are no longer supported: all rendering is done using HTML5. The client side rendering makes the user experience smoother. Additionally, client side operations such as scroll and zoom are supported and operate very smoothly:

image

Currently there is no client side API that allows direct manipulation of the data set on which the chart is based. All data refresh has to come in from the server. This might change in the near future – so as to allow pure client side refresh and interaction between charts. The next release will also bring support for ADS (Active Data Service) for the client side chart components.

Not new in 12.1.3 but still relatively recent is the introduction of the Time Axis. This axis is date and time aware. It can handle irregular data sets (with “missing” periods) and can now be used for the y-axis. Time axes also support mixed frequency time data, where the time stamps vary by series.

image

 

The Thematic Map component supports several new features. These include the option to set the orientation of the marker – allowing us to convey additional meaning such as direction – the ability to hide the map itself, the option to isolate a single area – a form of drilldown – and the ability to use a custom base map. This latter option means that we can take any image, define hotspots on that image and assign logical names or identifiers to them. Subsequently, we can take a data set that references those same logical names and have markers displayed on the custom image, allowing interaction such as popup and drilldown.

image

A brand new component is the Diagram component. This component is very versatile and a little abstract. It is good at showing nodes and dependencies between nodes. That basic premise allows for a wide range of applications, including visual editors, network visualizations, visual bill of materials and many more.

image

 

image

 

Miscellaneous

Some observations: with the (client) chart components, styling has become a whole new and much simpler ballgame. Attributes such as color can be data bound, just like value and label.

A new component that is on the drawing board is the picture chart.

Another component that we can expect in the near future is the so called N-Box. It was originally used in HCM Cloud to classify employees along two dimensions (potential and performance). It could be used for classification in any two (discrete) dimensions. The component has some nice aggregation facilities – collapsing individuals into groups based on selected criteria.

image

A poor or impatient man’s N-Box can be created using a Bubble Chart with reference lines, as was done in Fusion Applications:

image

Hands-on

The event concluded – apart from the drinks at the bar – with a hands-on lab. Participants received a Virtual Machine with Linux, JDeveloper, Oracle XE and several demo applications pre-installed. The demos – for which the sources will be made available shortly – showed several new features, including thematic map with custom base map, rating gauge and other new gauges, animation in various DVTs, diagram, client side charting with overview and scroll and a non-DVT component: a 3D tagcloud.

image

 

Resources

The presentation slides for this event: ADF DVT 12.1.3 – New Features and the Background Story.

Hosted Demo for the Mobile Application Framework – http://jdevadf.oracle.com/amx/ (Chrome or Safari only)

image

The ADF Rich Components Demo (live) – including ADF DVT: hosted at OTN.

image

This demo can be downloaded as a WAR file that can be imported into JDeveloper to review all sources and run the demos locally; go to OTN ADF Downloads and near the bottom of the page, click on the download button for Oracle ADF Faces Components Demo

image

The Recorded videos (ADF):

DVT New Features Overview

Diagram Layout Tutorial

Chart Formatting

The Oracle Mobile Platform YouTube channel: http://bit.ly/oramobilesub

New DVT Blog

The post ADF DVT – Past, present and future of ADF Data Visualization with Katarina Obradovic-Sarkic appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/16/adf-dvt-past-present-and-future-of-adf-data-visualization-with-katarina-obradovic-sarkic/feed/ 1
Materialized views: fast refresh, complete refresh or recreate? http://technology.amis.nl/2014/08/15/materialized-views-fast-refresh-complete-refresh-recreate/?utm_source=rss&utm_medium=rss&utm_campaign=materialized-views-fast-refresh-complete-refresh-recreate http://technology.amis.nl/2014/08/15/materialized-views-fast-refresh-complete-refresh-recreate/#comments Fri, 15 Aug 2014 15:24:07 +0000 http://technology.amis.nl/?p=31618 Have you ever wondered why it takes a century to completely refresh your materialized view? I did, so I did some testing. Recently I was asked to support a customer whose database was extremely slow. As it turned out, some indexes had been created on a materialized view and that view was being refreshed. Soon [...]

The post Materialized views: fast refresh, complete refresh or recreate? appeared first on AMIS Technology Blog.

]]>
Have you ever wondered why it takes a century to completely refresh your materialized view? I did, so I did some testing.

Recently I was asked to support a customer whose database was extremely slow. As it turned out, some indexes had been created on a materialized view and that view was being refreshed. Soon I found that a large ‘delete from’ job was running, which turned out to be part of the complete refresh.

Materialized views can be refreshed in two ways: fast or complete. A fast refresh requires having a materialized view log on the source tables that keeps track of all changes since the last refresh, so any new refresh only has changed (updated, new, deleted) data applied to the MV.
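To give an idea of what that takes, a minimal fast refresh setup could look like the sketch below (using the same table and MV as further down in this post; depending on the exact MV query, the log may need to record additional columns before the MV is actually fast refreshable):

CREATE MATERIALIZED VIEW LOG ON imo_shipment_actors
  WITH PRIMARY KEY, ROWID (code_role) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mat_view1
  REFRESH FAST ON DEMAND
  AS SELECT * FROM imo_shipment_actors WHERE code_role='HD';

-- a fast refresh is then requested with 'F' instead of 'C'
exec DBMS_SNAPSHOT.REFRESH('"OWNER1"."MAT_VIEW1"','F')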

A complete refresh does what it says: it completely refreshes all data in the MV. No materialized view logs are needed. And it takes a little longer.

A little? Hold your horses. MVs are read consistent like any other table. So a refresh consists of a consistent delete and a consistent insert. Meaning? All the time that the MV is refreshing, other sessions must be able to read the MV as it was before you started your refresh. So you have undo data, and redo/archivelog for all the deletes. And not only for the MV itself but also for all indexes on that view.

I kept track of the timing and the number of archivelogs during some MV manipulation and the results are even more dramatic than I anticipated.
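Counting archive logs is a fairly coarse measure, by the way; if you want to see how much redo a single session generates, you can also compare the session statistics before and after each step, along these lines:

SELECT n.name, s.value
FROM   v$mystat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name IN ('redo size', 'undo change vector size');

For this post I simply stuck to counting the archivelogs.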


SQL> create materialized view mat_view1 as select * from IMO_SHIPMENT_ACTORS where code_role='HD';

Materialized view created.

Elapsed: 00:00:47.17
SQL>

In the meantime, 15 archive log files were created, solely due to this transaction.

Next, I did a complete refresh.


SQL>  exec DBMS_SNAPSHOT.REFRESH( '"OWNER1"."MAT_VIEW1"','C');

PL/SQL procedure successfully completed.

Elapsed: 00:07:47.63
SQL>

A staggering 104 archivelogs were created, again solely due to this transaction! So there must have been a lot of redo generation too. And redolog writing is utterly important for database performance. If it can be limited somehow, do it.

Maybe I could just drop the MV and create it again?


SQL> @drop_and_create.sql

Materialized view dropped.

Elapsed: 00:00:01.85

Materialized view created.

Elapsed: 00:00:54.08
SQL>

The number of logfiles this time was 17. When we create the MV with the NOLOGGING option there won’t even be any logfiles. Remember that, according to the documentation, you should make a backup immediately afterward, but in this specific case that is just silly.
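For completeness, the NOLOGGING variant of the create statement is simply this (a sketch, same query as before):

CREATE MATERIALIZED VIEW mat_view1 NOLOGGING
  AS SELECT * FROM imo_shipment_actors WHERE code_role='HD';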

Of course this is not always possible, some MV’s must always be available.

Another solution is using fast refresh. It’s quite useless to demonstrate here, since how much work it is depends on the number of changes on the table and on the frequency of refreshing.

 

So there it is: try to avoid complete refreshes, and use fast refresh or drop-and-create whenever possible.

The post Materialized views: fast refresh, complete refresh or recreate? appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/15/materialized-views-fast-refresh-complete-refresh-recreate/feed/ 2
AMIS Whitepaper User Experience Frameworks “Future of optimal UI development” http://technology.amis.nl/2014/08/15/whitepaper-user-experience-future-optimal-interface-development/?utm_source=rss&utm_medium=rss&utm_campaign=whitepaper-user-experience-future-optimal-interface-development http://technology.amis.nl/2014/08/15/whitepaper-user-experience-future-optimal-interface-development/#comments Fri, 15 Aug 2014 09:01:17 +0000 http://technology.amis.nl/?p=31595 “ There’s A Lot More Behind This Pretty Face “ The whitepaper “User Experience Frameworks – Future of optimal UI development -” starts with an overview of user experience guidelines. These guidelines translate to additional UX requirements when designing and building a new user interface on modern systems. We will also discuss the two major architectural [...]

The post AMIS Whitepaper User Experience Frameworks “Future of optimal UI development” appeared first on AMIS Technology Blog.

]]>
“ There’s A Lot More Behind This Pretty Face “

The whitepaper “User Experience Frameworks – Future of optimal UI development -” starts with an overview of user experience guidelines. These guidelines translate to additional UX requirements when designing and building a new user interface on modern systems. We will also discuss the two major architectural paradigms for user interface development, followed by an overview of the major frameworks and technologies used for implementing this architecture. In this whitepaper we give you insight into the major differences between Thin Server and Thin Client development. This is the most important choice when considering a new user interface (or refactoring an existing one). Finally we will give a number of business examples and the preferred technology for implementing the requirements. Download your copy of the AMIS whitepaper-future-of-optimal-ui-development and share your remarks below.

We need to shift from straightforward User Interface development towards User Experience development.

image

Modern business web applications are faced with rapidly changing requirements. Users can choose from a wide variety of systems and have a distinct preference when it comes to usability. The forced or required use of one single system is becoming unacceptable. So are systems with poor user experience, even if the business logic behind it is implemented well. Business users demand apps that are effective, intuitive and efficient. They must have fast performance and 24/7 availability. And they have to look sexy…..

User Experience (UX) has become the major reason for rejecting a system during end user tests or even worse: after go-live. Users have high expectations, based on the frequent use of social media applications, and expect the same standard for their own business systems. Users expect an easy to use interface, fast interface response time, usage on a variety of different devices, easy login and offline availability.

To be able to meet these expectations, software developers require short development cycles and full test coverage to support agile development cycles, seamless support for multiple platforms and devices, secure transactions and easy decoupling from backend systems. And during operations, systems managers need to be prepared for the unpredictable timing and growth of the visitors of business applications. In some cases the system and hosting platforms need to be able to support a burst in demand or the exponential growth of the user community without drastic changes to the application architecture.

This also requires a productive development environment with massive scalability for both the number of developers and eventually the number of concurrent end users: frameworks with an intrinsic agile capability to modify and expand the functionality with a very short time to market. We feel there is no one-size-fits-all solution for UX requirements. We see a shift from technology derived designs towards user centric designs facilitating every end user with a personalized, timely, effective interface. This kind of approach will lead to more effective, easy to use and enjoyable applications.

I hope you enjoy reading this whitepaper and please share your remarks and feedback below in the comments section.

The post AMIS Whitepaper User Experience Frameworks “Future of optimal UI development” appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/15/whitepaper-user-experience-future-optimal-interface-development/feed/ 0
Oracle InMemory compared to indexing http://technology.amis.nl/2014/08/14/oracle-inmemory-first-practical-tries/?utm_source=rss&utm_medium=rss&utm_campaign=oracle-inmemory-first-practical-tries http://technology.amis.nl/2014/08/14/oracle-inmemory-first-practical-tries/#comments Thu, 14 Aug 2014 12:00:02 +0000 http://technology.amis.nl/?p=31565 In August 2014 Oracle released its RDBM 12.1.0.2 with a potentially useful and exiting new option: Database InMemory. Upon reading about it it became clear to me that this is a powerful option, worth examining deeper. This blog will briefly describe what InMemory is and what it isn’t. The emphasis however is on practical examples. [...]

The post Oracle InMemory compared to indexing appeared first on AMIS Technology Blog.

]]>
In August 2014 Oracle released its RDBMS 12.1.0.2 with a potentially useful and exciting new option: Database InMemory. Upon reading about it, it became clear to me that this is a powerful option, worth examining deeper.

This blog will briefly describe what InMemory is and what it isn’t. The emphasis however is on practical examples. I didn’t have a real database at hand for some ultimate real life experience, but a local database on VirtualBox proved to be enough to show some interesting details.

 

First of all, what is Database InMemory?

I’ll tell you what it’s not: it does not mean that the database is completely loaded into RAM, thereby avoiding a lot of physical disk access. That doesn’t even come close to describing it.

Then what is it?

Using the InMemory option, specific segments of the database, like tables, materialized views, tablespaces or partitions, can be loaded in a separate part of the SGA. And they are stored in a special way known as InMemory column format.

This format is particularly well designed for column operations like the sum of all values in one column. It eliminates the need to go through all the other information in columns that you don’t need. It is therefore good for BI-like operations – operations that are often taken away from the OLTP database so as not to disturb its daily use. InMemory should make it possible to have both OLTP and BI on the same database, thus reducing the number of databases and eliminating complex maintenance like ETL processes. As an added bonus, all queries can access all data up to the latest commit, and not just up to last-night-when-the-ETL-job-ran.

Note that InMemory is no more than an extra representation of already existing data. It is an extra and doesn’t change anything about storage or the traditional presence in the SGA.
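To give an idea of the syntax before we continue: these are some variants of the INMEMORY clause as I understand them from the 12.1.0.2 documentation (the table and tablespace names are just examples):

ALTER TABLE owner1.table1 INMEMORY;                           -- populate on first scan (priority NONE)
ALTER TABLE owner1.table1 INMEMORY MEMCOMPRESS FOR QUERY LOW; -- pick a compression level
ALTER TABLE owner1.table1 INMEMORY PRIORITY CRITICAL;         -- populate without waiting for a first scan
ALTER TABLE owner1.table1 NO INMEMORY;                        -- remove the segment from the column store again
ALTER TABLESPACE users DEFAULT INMEMORY;                      -- set a default for all segments in a tablespace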

Here’s a picture that helps understanding.

 

image

 

The optimizer automatically takes the presence of InMemory tables into account when making a plan. That is very important because it makes the use of InMemory transparent: plans are automatically recalculated when a table is made InMemory.

Now for the real thing.

I’ve got a table with twelve columns and over 62 million rows. I made sure it’s in the buffer cache already.

My inmemory settings are:


SQL> show parameter inmemory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     1
inmemory_query                       string      ENABLE
inmemory_size                        big integer 3G
inmemory_trickle_repopulate_servers_ integer     1
percent
optimizer_inmemory_aware             boolean     TRUE
SQL>

What tables are inmemory?


SQL> select owner,segment_name,populate_status from v$im_segments;

no rows selected

Elapsed: 00:00:00.00
SQL>

Nothing there.

Now we’ll do a query on one column in particular:


SQL> select sum(iata_code) from owner1.table1;

SUM(IATA_CODE)
--------------
1.2232E+13

Elapsed: 00:00:13.04
SQL>

How is this code handled?


SQL> explain plan for select sum(iata_code) from owner1.table1;

Explained.

Elapsed: 00:00:00.01
SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3207558960

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     1 |     2 |   140K  (1)| 00:00:06 |
|   1 |  SORT AGGREGATE    |        |     1 |     2 |            |          |
|   2 |   TABLE ACCESS FULL| TABLE1 |    62M|   119M|   140K  (1)| 00:00:06 |
-----------------------------------------------------------------------------

9 rows selected.

Elapsed: 00:00:00.02
SQL>

Now put that table inmemory and do the same queries. Mind the extra parameter ‘priority critical’. It ensures that the table will be put in memory immediately. The default behaviour is that the InMemory population only takes place the first time the table is scanned.


SQL> alter table owner1.table1 inmemory priority critical;

Table altered.

Elapsed: 00:00:00.06
SQL> select owner,segment_name,populate_status from v$im_segments;

OWNER   SEGMENT_NAME   POPULATE_STATUS
------- ------------- -----------------
OWNER1  TABLE1         COMPLETED

Elapsed: 00:00:00.02
SQL> select sum(iata_code) from owner1.table1;

SUM(IATA_CODE)
--------------
1.2232E+13

Elapsed: 00:00:00.29
SQL> explain plan for select sum(iata_code) from owner1.table1;

Explained.

Elapsed: 00:00:00.02
SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3207558960

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |     2 |  6140  (11)| 00:00:01 |
|   1 |  SORT AGGREGATE             |        |     1 |     2 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| TABLE1 |    62M|   119M|  6140  (11)| 00:00:01 |
--------------------------------------------------------------------------------------

9 rows selected.

Elapsed: 00:00:00.05
SQL>

This is a spectacular improvement. Execution time dropped from 13.04 seconds to 0.29 seconds – roughly 45 times faster. But wait: wouldn’t we achieve a similar result using an index on that column? Let’s see.


SQL> create index owner1.iata_code on owner1.table1(iata_code);

Index created.

Elapsed: 00:00:28.17
SQL> select sum(iata_code) from owner1.table1;

SUM(IATA_CODE)
--------------
1.2232E+13

Elapsed: 00:00:00.23
SQL> explain plan for select sum(iata_code) from owner1.table1;

Explained.

Elapsed: 00:00:00.00
SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2487845580

-----------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |     1 |     2 |  1260   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE       |           |     1 |     2 |            |          |
|   2 |   INDEX FAST FULL SCAN| IATA_CODE |    62M|   119M|  1260   (1)| 00:00:01 |
-----------------------------------------------------------------------------------

9 rows selected.

Elapsed: 00:00:00.01
SQL>

Hmm, pity. The good old index beat the brand new inmemory. But wait. Maybe this was too simple. So, let’s make it a bit more complicated.


SQL> explain plan for select sum(iata_code) from owner1.table1 where code_role='HD';

Explained.

Elapsed: 00:00:00.01
SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3207558960

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |     6 |  6153  (12)| 00:00:01 |
|   1 |  SORT AGGREGATE             |        |     1 |     6 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| TABLE1 |  4177K|    23M|  6153  (12)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - inmemory("CODE_ROLE"='HD')
filter("CODE_ROLE"='HD')

15 rows selected.

Elapsed: 00:00:00.01
SQL>

So we went back to the InMemory table access. But that’s not fair, there’s no index on code_role. So let’s create one and see what happens.


SQL> create index owner1.code_role on owner1.table1(code_role);

Index created.

Elapsed: 00:01:04.81
SQL> explain plan for select sum(iata_code) from owner1.table1 where code_role='HD';

Explained.

Elapsed: 00:00:00.01
SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3207558960

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |     6 |  6153  (12)| 00:00:01 |
|   1 |  SORT AGGREGATE             |        |     1 |     6 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| TABLE1 |  4177K|    23M|  6153  (12)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - inmemory("CODE_ROLE"='HD')
filter("CODE_ROLE"='HD')

15 rows selected.

Elapsed: 00:00:00.02
SQL>

Good. Now it uses InMemory. And that seems logical: compare it to the plan that would have been followed had we not had InMemory:


SQL> alter table owner1.table1 no inmemory;

Table altered.

Elapsed: 00:00:00.27
SQL> explain plan for select sum(iata_code) from owner1.table1 where code_role='HD';

Explained.

Elapsed: 00:00:00.02
SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1157679052

--------------------------------------------------------------------------------------------
| Id  | Operation               | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                  |     1 |     6 | 89823   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE         |                  |     1 |     6 |            |          |
|*  2 |   VIEW                  | index$_join$_001 |  4177K|    23M| 89823   (1)| 00:00:04 |
|*  3 |    HASH JOIN            |                  |       |       |            |          |
|*  4 |     INDEX RANGE SCAN    | CODE_ROLE        |  4177K|    23M|  8561   (1)| 00:00:01 |
|   5 |     INDEX FAST FULL SCAN| IATA_CODE        |  4177K|    23M|  5785   (1)| 00:00:01 |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("CODE_ROLE"='HD')
3 - access(ROWID=ROWID)
4 - access("CODE_ROLE"='HD')

19 rows selected.

SQL>

But it doesn’t end here: what if we had one index on both columns?


SQL> alter table owner1.table1 no inmemory;

Table altered.

SQL> drop index owner1.code_role;

Index dropped.

SQL> drop index owner1.iata_code;

Index dropped.

SQL> create index owner1.iata_code_role on owner1.table1(code_role,iata_code);

Index created.

SQL> explain plan for select sum(iata_code) from owner1.table1 where code_role='HD';

Explained.

SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 4094297780

------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                |     1 |     6 |  9214   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE   |                |     1 |     6 |            |          |
|*  2 |   INDEX RANGE SCAN| IATA_CODE_ROLE |  4177K|    23M|  9214   (1)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - access("CODE_ROLE"='HD')

14 rows selected.

SQL>

Now put the table InMemory again. I expect the optimizer to choose the InMemory option then, based on the cost above.


SQL> alter table owner1.table1 inmemory priority critical;

Table altered.

SQL> explain plan for select sum(iata_code) from owner1.table1 where code_role='HD';

Explained.

SQL> @utlxpls

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3207558960

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |     6 |  6153  (12)| 00:00:01 |
|   1 |  SORT AGGREGATE             |        |     1 |     6 |            |          |
|*  2 |   TABLE ACCESS INMEMORY FULL| TABLE1 |  4177K|    23M|  6153  (12)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------

2 - inmemory("CODE_ROLE"='HD')
filter("CODE_ROLE"='HD')

15 rows selected.

SQL>

We’ve seen that InMemory can indeed speed up selects, albeit not orders of magnitude faster than with indexes. I wonder what the impact will be on OLTP. After all, this columnar representation needs to be maintained with every OLTP operation on the table. However, indexes also need to be maintained and that involves disk IO, for the index itself as well as for undo, redo and archiving.

I tried to make some more examples but it turned out that my database and environment are too limited. I even got contradictory results.

And of course, a table has to be in memory only once to serve many purposes. For indexes, it’s not uncommon to have many of them on one table, serving different purposes. I’ve seen large databases where the indexes actually took up more space than the data.
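As a quick sanity check of that footprint argument, the on-disk size of a populated segment can be compared with its in-memory size (these columns exist in v$im_segments in 12.1.0.2):

SELECT segment_name,
       ROUND(bytes/1024/1024)          AS on_disk_mb,
       ROUND(inmemory_size/1024/1024)  AS in_memory_mb,
       populate_status
FROM   v$im_segments;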

 

I might make a part 2 of this blog some day. For now I leave it to you, readers, to gather more results. I think this option really has a lot of potential and I’m eager to read some results from the real world.

The post Oracle InMemory compared to indexing appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/14/oracle-inmemory-first-practical-tries/feed/ 0
Cloud Control authorization with Active Directory http://technology.amis.nl/2014/08/14/cloud-control-authorization-active-directory/?utm_source=rss&utm_medium=rss&utm_campaign=cloud-control-authorization-active-directory http://technology.amis.nl/2014/08/14/cloud-control-authorization-active-directory/#comments Thu, 14 Aug 2014 09:45:41 +0000 http://technology.amis.nl/?p=31522 About 2 months ago I wrote an article about setting up user authentication in Cloud control, based on their account in the Active Directory. As promised, here is the second part describing Cloud Control authorization with Active Directory. A small recap about why this could be useful: If your company is preferring Microsoft Active Directory [...]

The post Cloud Control authorization with Active Directory appeared first on AMIS Technology Blog.

]]>
About 2 months ago I wrote an article about setting up user authentication in Cloud Control, based on users’ accounts in Active Directory. As promised, here is the second part describing Cloud Control authorization with Active Directory.

A small recap about why this could be useful:

If your company prefers Microsoft Active Directory (further named AD) as a source of truth (or at least you’re trying to), you should be using the AD as the source for the user accounts in Cloud Control. The advantages are obvious: control the validity of accounts in one single place, no different logins and passwords for the user, etc.

Keep in mind that access control is made up of 2 parts:

  • Authentication
    Validate whether the person trying to log in is who he claims to be (e.g. by a password check, see previous post)
  • Authorization
    Give access to whatever the authenticated user is allowed to (e.g. by roles)

In this article I will give a step-by-step to setup the authorization part, thus allowing the user to do what he/she has to do.

As a starting point for this I will take the end situation of my previous post, which is: all users in the Active Directory can log in to your Cloud Control, but have no rights granted automatically. That is still a manual job.

The first point to attack is the fact that all users will be able to log in to Cloud Control. Although they will not be able to break anything, it isn’t a very desirable situation.

Speak to your AD-administrator again and ask him/her to create a security group. Only members of this group will be able to log in to Cloud Control. In this example we will use “AMIS_OEM_Login”. When the security group is created you’ll need the following:

  • Exact name of the security group (i.e. AMIS_OEM_login)
  • Group Base DN (i.e. OU=CloudControl,OU=Global,OU=Security Groups,OU=xxxxxxx,DC=xxxxxxx,DC=local)
  • At least 1 test account which is a member of this group, to be able to test

The steps to perform:

  1. Perform a backup of your current setup to be able to go back to a stable situation if something fails
  2. Login into the WebLogic console as administrator (weblogic)
  3. Click the <Lock & Edit> button on the left to be able to make changes
  4. In the “Domain Structure” click on “Security Realms”, click on “myrealm” and select the tab “Providers”
  5. Click on “EM_AD_Provider”
  6. Select the tab “Provider Specific”
  7. Scroll down to the section “Groups”
  8. Put the Group Base DN received from your AD-administrator in the field “Group Base DN:”
  9. Keep “All Groups Filter:” empty
  10. Scroll down and click save
  11. Click on the <Activate Changes> button on the left of the screen. You should receive a message that the changes have been activated, but a restart is required to take effect
  12. Restart your Cloud Control environment

  1. Log into the host as user oracle and navigate to the ./em/oms/bin directory
  2. Execute the following statement to ensure that only user accounts that are members of this group will be auto-provisioned in OEM when a user logs in for the first time.
./emctl set property -name "oracle.sysman.core.security.auth.autoprovisioning_minimum_role" -value "AMIS_OEM_Login"

Now you should be able to use your test account (which is a member of the specified group) to log in to Cloud Control. A valid AD account which is not a member of the designated group will not be able to log in. Remember, no rights and permissions will be assigned to the newly created account. This should still be done manually by an administrator, or… keep reading…

When the setup above is working, we can bring it to a higher level by also putting the rights and permissions into AD-groups, bringing most of Cloud Control security into a single place (Active Directory).

Let’s say we want to differentiate access rights between 2 groups:

  • Read only users
  • Administrators

First, go back to your AD-administrator and ask him for 2 more security groups. I will use AMIS_OEM_RO and AMIS_OEM_Admin for this example. The naming could be anything, as long as you (and your admins) understand it. Please ensure these groups are created in the same DN as the previous (AMIS_OEM_Login) group. Also make sure you have 2 test accounts, each a member of 1 of these groups.

The steps to perform:

  1. Log in into Cloud Control as user sysman
  2. Navigate to <Setup>, <Security>, <Roles>
  3. Click on <Create>
  4. The role name should be exactly the same as the name of the security group in your AD. Add a proper description and make sure the box “External Role” is ticked.
  5. Click <Next>
  6. On the next screens you can grant any right and permission you want to this role.
  7. Do not grant the role to any administrator.
  8. On the last screen review the settings and click <Finish>
  9. Perform step 3-7 again for the second role.

Next step is to provide WebLogic with the appropriate security group filter so the security groups can be found.

  1. Login into the WebLogic console as administrator (weblogic)
  2. Click the <Lock & Edit> button on the left to be able to make changes
  3. In the “Domain Structure” click on “Security Realms”, click on “myrealm” and select the tab “Providers”
  4. Click on “EM_AD_Provider”
  5. Select the tab “Provider Specific”
  6. Scroll down to the section “Groups”
  7. In the field “All Groups Filter” enter the filter expression which gives the appropriate groups as created in your Active directory. I use (cn=AMIS_OEM*). Note the asterisk at the end so it will give all AMIS_OEM_xxxxx groups.
  8. Scroll down and click save
  9. Click on the <Activate Changes> button on the left of the screen. You should receive a message that the changes have been activated, but a restart is required to take effect
  10. Restart your Cloud Control environment

In most companies the actual login is not equal to the display name of a user (e.g. jgouma vs. Jeroen Gouma). If this is the case, we need to activate an extra setting in Cloud Control to deal properly with this situation.

  1. Log into the host as user oracle and navigate to the ./em/oms/bin directory
  2. Execute the following statement to enable username mapping:
./emctl set property -name "oracle.sysman.core.security.auth.enable_username_mapping" -value "true"

Another advantage of using the AD-data is that no data entry on user-specific attributes has to be performed. Information regarding phone number, email address etc. can be retrieved from the AD when the account is created (= first login). This can be achieved by mapping AD-fields to specific attributes of the user account. It is possible to use a single field, a concatenation of fields, or to combine fields with literal strings.

The following attributes can be used:

  • USERNAME
  • EMAIL
  • CONTACT
  • LOCATION
  • DEPARTMENT
  • COSTCENTER
  • LINEOFBUSINESS
  • DESCRIPTION

The steps to perform:

  1. Log into the host as user oracle and navigate to the ./em/oms/bin directory
  2. Execute the following statement to enable the ldap userattributes mapping:

Note: everything needs to be on a single line; I only split the lines for readability…

./emctl set property 
 -name "oracle.sysman.core.security.auth.ldapuserattributes_emuserattributes_mappings" 
 -value "USERNAME={%displayname%},EMAIL={%mail%}"

This example puts the displayname from the active directory into the field username, and the email address will be filled with the mail address from the same source.

Some more examples:
To use a literal string in combination with AD-fields: the result of this would be “Jeroen Gouma AMIS consultant” as username.

./emctl set property 
 -name "oracle.sysman.core.security.auth.ldapuserattributes_emuserattributes_mappings" 
 -value "USERNAME={%firstname% %lastname% AMIS consultant}"


When you need to use a comma (which is the field separator) it needs to be escaped with a \. The example below would result in having the text “Gouma, Jeroen , +31306016000” in the description attribute.

./emctl set property 
 -name "oracle.sysman.core.security.auth.ldapuserattributes_emuserattributes_mappings" 
 -value "DESCRIPTION={%lastname%\, %firstname% \, %phone%}"


Combining a few examples together could lead to the following statement:

./emctl set property 
 -name "oracle.sysman.core.security.auth.ldapuserattributes_emuserattributes_mappings" 
 -value "DESCRIPTION={%lastname%\, %firstname% \, %phone%},USERNAME={%uid%},EMAIL={%mail%},
 CONTACT={%telephone%},DEPARTMENT={%department%},DESCRIPTION={%description%},LOCATION={%postalcode%}"



When all is done, it’s time to verify the setup. Log in to the Cloud Control console using your test accounts.

When the login was successful, log out and log in again using the sysman account. Navigate to <Setup>, <Security>, <Administrators>. You will see the new account has been created. Also notice the authentication type has been set to SSO (Single Sign On), indicating this is an external account.

Select the new account and click the <View> button. Verify that the properties have been filled as expected, i.e. email address, location. If fields are not filled properly, check the ldap userattributes mapping executed earlier. You can re-execute this statement at will. The setting will be effective immediately for all accounts created afterwards.

In the <Roles> section you should see the (applicable) Active Directory groups the user is a member of.

Sources:

  1. Enterprise Manager Cloud Control Documentation
  2. Cloud Control Security guide

The post Cloud Control authorization with Active Directory appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/14/cloud-control-authorization-active-directory/feed/ 0
Oracle Restart to autostart your oracle database, listener and services on linux. http://technology.amis.nl/2014/08/11/oracle-restart-to-autostart-your-oracle-database-listener-and-services-on-linux/?utm_source=rss&utm_medium=rss&utm_campaign=oracle-restart-to-autostart-your-oracle-database-listener-and-services-on-linux http://technology.amis.nl/2014/08/11/oracle-restart-to-autostart-your-oracle-database-listener-and-services-on-linux/#comments Mon, 11 Aug 2014 12:09:00 +0000 http://technology.amis.nl/?p=31419 Half a year ago, my colleague Remco wrote an article on auto starting the listener and the databases after a host reboot. As usual with Oracle, there are several solutions. In a previous job, I learned to appreciate Oracle Grid infrastructure to do the same. And then some more. Oracle Grid Infrastructure can be downloaded [...]

The post Oracle Restart to autostart your oracle database, listener and services on linux. appeared first on AMIS Technology Blog.

]]>
Half a year ago, my colleague Remco wrote an article on auto starting the listener and the databases after a host reboot. As usual with Oracle, there are several solutions. In a previous job, I learned to appreciate Oracle Grid infrastructure to do the same. And then some more.

Oracle Grid Infrastructure can be downloaded and used for free. It serves many purposes, especially for ASM and RAC, but as it turns out, it can be installed as ‘software only’ and still serve a purpose known as Oracle Restart.

So why not use the old familiar dbstart and dbstop scripts?

Here’s why: Data Guard. Many applications, including Weblogic Connection Pools, use a long connection string that contains host name and service name of both (or more) instances of a Data Guard installation. Suppose host A with instance Prim represents the primary database. Now suppose the application wants to connect to the host A and SID Prim but it can’t get a connection immediately through the listener (this actually might happen more often than you think). The application will behave as expected, which is to look for host B with SID Prim. That might very well exist, but since that instance is a standby instance, it will answer with a connection refused. And that often is worse than not finding an instance at all.

What we want is to connect to a service rather than to a SID. Services are completely customizable per instance. So why not have a service that only exists if the database is a primary database?
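Such a connect descriptor typically looks like this – host names and service name are made up here; the point is that both hosts are listed and the CONNECT_DATA uses a SERVICE_NAME instead of a SID:

APP_PRIM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = hosta.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = hostb.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = app_prim)
    )
  )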

This mechanism has existed for years already and used to be taken care of by triggers, activating the service upon opening the database. Remember, a standby database doesn’t open, so that particular service won’t be started.

Doesn’t it? Not in the past, but now we have active standby. Oops, there’s an unwanted service.
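For those who never used that trick, it looks roughly like the sketch below (the service name app_prim is just an example). The startup trigger only fires when the database is opened, which used to mean: only on the primary – until the standby gets opened read-only and the service pops up where you don’t want it.

-- create the role-specific service once
exec DBMS_SERVICE.CREATE_SERVICE(service_name => 'app_prim', network_name => 'app_prim')

CREATE OR REPLACE TRIGGER start_app_prim_service
AFTER STARTUP ON DATABASE
BEGIN
  DBMS_SERVICE.START_SERVICE('app_prim');
END;
/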

Grid Infrastructure allows you to start services depending on the role of the database: Primary, Physical Standby, Logical Standby or Snapshot Standby.
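With Grid Infrastructure the role simply becomes part of the service registration. A sketch, using the orcl database that is created later in this post and the example service name from above (check srvctl add service -h for the exact options of your version):

srvctl add service -d orcl -s app_prim -l PRIMARY -y AUTOMATIC
srvctl start service -d orcl -s app_prim
srvctl config service -d orcl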

It also enables you to automatically start the database and listener upon host startup.

It even restarts processes immediately and automatically when they crash. Try killing the pmon process, for instance: that is basically killing your instance. Normally you’d have to discover this happened and then restart your database manually using sqlplus.

With this new software, the failure is detected automatically and your database is restarted before you even noticed it was down.
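A rough sketch of how you could check that yourself once Oracle Restart is configured (further down in this post); the PID is simply whatever ps reports on your system:

[oracle@linux63 ~]$ ps -ef | grep ora_pmon_orcl | grep -v grep
[oracle@linux63 ~]$ kill -9 <pid reported above>
[oracle@linux63 ~]$ # give the agent a moment, then:
[oracle@linux63 ~]$ srvctl status database -d orcl
Database is running.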

Time to show how it’s done.

 

Download the Oracle Grid software and have it available at your host.

Log in as user oracle and go to the directory where you unzipped the software, called ‘grid’. Type:

 

[oracle@oraclelinux6 grid]$ ./runInstaller

 

[screenshot: installation option selection]

The options are pretty clear. Choose the last one and click Next.

 

[screenshot]

No comment needed.

 

[screenshot: operating system group selection]

I still prefer the familiar oinstall group and for this example it’s not important anyway since we won’t be using ASM.

Click Next.

Depending on your choices you might get one or more warnings. You can safely ignore them.

 

[screenshot: installation location]

This is the Oracle default. Make sure you at least have /u01/app, owned by oracle:oinstall.

I highly recommend using these Oracle default directories; it makes life easier on so many levels.

Click Next.

 

[screenshot: prerequisite checks]

Issues will occur. Investigate and fix them, or check Ignore All if you are sure of what you are doing. Personally I don’t need swap on a database server, so that is one warning I ignore. And in this particular case I didn’t solve the resolv.conf issue either.

Click Install.

 

[screenshot: installation summary]

An overview of the choices you made. Check them and click Install.

Now the software will be installed, which might take anything between 1 and 20 minutes. When it’s finished you’ll see the next screen:

 

[screenshot: execute configuration scripts]

Open an extra terminal as user root and execute the script /u01/app/11.2.0/grid/root.sh

Look at the output. It should contain the lines

To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:

/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

This is the trickiest part: even if you think you’ve met all requirements, you might very well run into errors. Luckily it’s quite well documented on the interweb.
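Once roothas.pl has completed successfully, a quick sanity check (assuming the default Grid home used in this walkthrough) is to ask the High Availability Services stack for its status:

[oracle@oraclelinux6 grid]$ /u01/app/11.2.0/grid/bin/crsctl check has
CRS-4638: Oracle High Availability Services is online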

Once done, click Finish in the main screen and the screen will close. You are done installing.

Next, we create a database with dbca. You can also use Grid Control to do so. Once you have filled out all parameters and started creating the database, notice the line in the progress window stating:

Registering database with Oracle Restart

 

[screenshot: dbca progress window showing ‘Registering database with Oracle Restart’]
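By the way, dbca only performs this registration because Grid Infrastructure is already present. If you have an existing database that predates the Grid install, you can register it by hand. A sketch, using the Oracle home from this walkthrough (adjust the name and path to your own database):

[oracle@linux63 ~]$ srvctl add database -d orcl -o /u01/app/oracle/product/11.2.0/dbhome_1
[oracle@linux63 ~]$ srvctl enable database -d orcl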

 

And now for the real thing.

The main reason we installed Grid infrastructure is a feature called Oracle Restart. It can be checked and configured using the command ‘srvctl’. Let’s explore its possibilities.

First, check which databases are controlled by Oracle Restart

 

[oracle@linux63 ~]$ . oraenv

ORACLE_SID = [oracle]? orcl

The Oracle base has been set to /u01/app/oracle

[oracle@linux63 ~]$ srvctl config

orcl

[oracle@linux63 ~]$

 

There’s one database registered with Oracle Restart: the orcl database we just created.

Let’s check out this database:

 

[oracle@linux63 ~]$ srvctl config database -d orcl

Database unique name: orcl

Database name: orcl

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile:

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Database instance: orcl

Disk Groups:

Services:

[oracle@linux63 ~]$ 

 

The original goal of installing Grid Infrastructure was to automatically start the database upon starting or rebooting the host.
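That behaviour is governed by the ‘Management policy: AUTOMATIC’ line in the configuration above. If it were ever set to MANUAL, you could switch it back with something like this (sketch only):

[oracle@linux63 ~]$ srvctl modify database -d orcl -y AUTOMATIC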

So I rebooted (not visible here) and checked:

 

[oracle@linux63 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Aug 6 12:05:55 2014

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select status from v$instance;

STATUS

------------

OPEN

SQL> 

 

Good, that worked.

I also claimed that it would start the listener automatically. Has it done so?

 

[oracle@linux63 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 06-AUG-2014 12:12:46

Copyright (c) 1991, 2011, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

TNS-12541: TNS:no listener

TNS-12560: TNS:protocol adapter error

TNS-00511: No listener

Linux Error: 111: Connection refused

[oracle@linux63 ~]$ 

 

That’s disappointing. Or is it? Maybe we should add the listener (default name LISTENER) to the srvctl configuration and start it:

 

[oracle@linux63 ~]$ srvctl add listener

[oracle@linux63 ~]$ srvctl config

orcl

[oracle@linux63 ~]$ 

 

Pity, it doesn’t show the listener. But it might be there:

 

[oracle@linux63 ~]$ srvctl config listener

Name: LISTENER

Home: /u01/app/11.2.0/grid

End points: TCP:1521

[oracle@linux63 ~]$ srvctl start listener

[oracle@linux63 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 06-AUG-2014 12:15:48

Copyright (c) 1991, 2011, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

STATUS of the LISTENER

------------------------

Alias LISTENER

Version TNSLSNR for Linux: Version 11.2.0.3.0 - Production

Start Date 06-AUG-2014 12:15:38

Uptime 0 days 0 hr. 0 min. 9 sec

Trace Level off

Security ON: Local OS Authentication

SNMP OFF

Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora

Listener Log File /u01/app/11.2.0/grid/log/diag/tnslsnr/linux63/listener/alert/log.xml

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.73)(PORT=1521)))

Services Summary...

Service "orcl" has 1 instance(s).

Instance "orcl", status READY, has 1 handler(s) for this service...

Service "orclXDB" has 1 instance(s).

Instance "orcl", status READY, has 1 handler(s) for this service...

The command completed successfully

[oracle@linux63 ~]$ 

 

I rebooted the host at this point and sure enough, both the listener and the database started automatically.

So, we’ve met our target: both the listener and the database started without any human intervention. But it would be a shame to stop now since there are many more options in Oracle Restart. One in particular I mentioned in my introduction.

As we all know, or should know, one should connect to a database using a service, not to a SID. Let’s take another look at the Oracle Restart configuration of this specific database:

 

[oracle@linux63 ~]$ srvctl config database -d orcl

Database unique name: orcl

Database name: orcl

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile:

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Database instance: orcl

Disk Groups:

Services:

[oracle@linux63 ~] $

 

The last line, Services, is empty. Now we’ll add a service with the name ‘production’. As you can see, we need to specify which database this service belongs to:

 

[oracle@linux63 ~]$ srvctl add service -s production -d orcl -l PRIMARY

[oracle@linux63 ~]$ 

 

Look at the last parameter, -l PRIMARY. It tells Oracle Restart that this service should only be started if the database role is PRIMARY. That is redundant for a standalone database, but in a Data Guard configuration it is critical: the service ‘production’ will only be available when the database is the primary database. A standby database will never be reachable through this service name.

On the standby host you add this same service to the standby database’s configuration. There the service will only be started if that standby database becomes the primary database, and that’s exactly what we want.
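For example, assuming the standby is registered under the (made-up) database unique name orcl_stb, the command on the standby host would look like this:

[oracle@standbyhost ~]$ srvctl add service -s production -d orcl_stb -l PRIMARY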

Okay, start the service and check:

 

[oracle@linux63 ~]$ srvctl start service -s production -d orcl

[oracle@linux63 ~]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 06-AUG-2014 12:00:59

Copyright (c) 1991, 2011, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

STATUS of the LISTENER

------------------------

Alias LISTENER

Version TNSLSNR for Linux: Version 11.2.0.3.0 - Production

Start Date 06-AUG-2014 11:59:32

Uptime 0 days 0 hr. 1 min. 26 sec

Trace Level off

Security ON: Local OS Authentication

SNMP OFF

Listener Log File /u01/app/oracle/diag/tnslsnr/linux63/listener/alert/log.xml

Listening Endpoints Summary...

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=linux63.local)(PORT=1521)))

Services Summary...

Service "orcl" has 1 instance(s).

Instance "orcl", status READY, has 1 handler(s) for this service...

Service "orclXDB" has 1 instance(s).

Instance "orcl", status READY, has 1 handler(s) for this service...

Service "production" has 1 instance(s).

Instance "orcl", status READY, has 1 handler(s) for this service...

The command completed successfully

[oracle@linux63 ~]$ 

 

Again, I rebooted the host and the service was there afterward without human intervention.
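If you want to double-check, the service should now also show up in the Oracle Restart configuration of the database (output trimmed to the relevant line here):

[oracle@linux63 ~]$ srvctl config database -d orcl | grep -i services
Services: production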

And that’s it for this blog. There are many more options, and I advise you to go and play with them to get a feel for all the possibilities.

One last command here, to get you started:

 

[oracle@linux63 bin]$ srvctl -h
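And, going beyond that help screen, a few srvctl subcommands you will probably use most often. Treat this as a sketch of a typical session rather than complete documentation:

[oracle@linux63 bin]$ srvctl status database -d orcl
Database is running.
[oracle@linux63 bin]$ srvctl stop database -d orcl -o immediate
[oracle@linux63 bin]$ srvctl start database -d orcl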

The post Oracle Restart to autostart your oracle database, listener and services on linux. appeared first on AMIS Technology Blog.
