AMIS Technology Blog - Friends of Oracle and Java (http://technology.amis.nl)

Oracle SOA Suite 11g and 12c: Determining composite dependencies to the level of operations
http://technology.amis.nl/2014/09/17/oracle-soa-suite-11g-12c-determining-composite-dependencies-level-operations/
Wed, 17 Sep 2014

In large companies there are often many services and many dependencies between them. Tracking these dependencies is important, for example to estimate the impact of changes. Design documents or architecture views can be used for this, but as everybody knows, there is often a gap between theory and practice (design and implementation).

In this blog post I provide code to determine dependencies between composites down to the level of operation calls. To achieve this, the script parses the composite.xml files, the JCA files (used by adapters) and also the BPEL and BPMN files in order to determine the operations. The script can be used for SOA Suite 11g and 12c composites.

[Figure: the parts of a composite and how they are linked]

The picture above shows the different parts a composite is composed of and how they are linked. The script first determines the references. The references specify which external services are called. Then, by using the wires, the relevant components are determined. Based on the component type, specific logic is used to extract the operation. Not shown in this picture is that the script can also determine database dependencies by parsing the JCA files specified in the reference. If you’re in a hurry, you can go to the ‘Executing the script’ part directly and skip the explanation.
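To give an impression of what this parsing looks like, below is a minimal sketch (not the author's soaparser script) that lists the references and wires of a single composite.xml using xml.etree. Namespaces are deliberately ignored, and the source.uri/target.uri child element names are assumptions based on typical SCA composite files.

```python
import xml.etree.ElementTree as ET

def strip_ns(tag):
    # '{http://some/namespace}reference' -> 'reference'
    return tag.split('}')[-1]

def parse_composite(path):
    """Return the reference names and the source.uri -> target.uri wires."""
    root = ET.parse(path).getroot()
    references, wires = [], {}
    for elem in root:
        name = strip_ns(elem.tag)
        if name == 'reference':
            references.append(elem.get('name'))
        elif name == 'wire':
            parts = {strip_ns(child.tag): (child.text or '').strip() for child in elem}
            wires[parts.get('source.uri')] = parts.get('target.uri')
    return references, wires

if __name__ == '__main__':
    # Hypothetical path; point this at one of your own composites
    refs, wires = parse_composite('HelloWorldCaller/composite.xml')
    print(refs)
    print(wires)
```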

Composites

This blog will focus on composites (which can contain components like BPEL and BPM) that use the shared SOA infrastructure, and not, for example, on the Service Bus. Composites use the Service Component Architecture (SCA, http://www.oasis-opencsa.org/sca) to wire different components together in a consistent way. Oracle uses XML files to describe composites, components and references. These files can easily be parsed and correlated.

Introduction composite.xml

The main file describing a composite is the composite.xml file. Below is a small sample of a HelloWorld composite containing a single component, a BPEL process.
composite.xml
As can be seen, the composite contains properties, services, component definitions and wires which link components to either services or references. This composite has one service: helloworld_client_ep and no references.

Web service references

When I create a new composite that calls this one (HelloWorldCaller), the composite.xml of the newly created process shows an example of a reference.

A reference

This reference contains information on the development time WSDL to be used (ui:wsdlLocation) and the concrete binding at runtime (binding.ws). The ui:wsdlLocation should of course be an MDS path in a production environment (see https://blogs.oracle.com/aia/entry/aia_11g_best_practices_for_dec).

The name of the service to be called can be extracted in various ways, for example from the WSDL path if the service is hosted on the SOA infrastructure. For other components the URL can be set manually. Since the method to determine the service name differs per customer and technology (service virtualization is a best practice), I will not elaborate on this further. In most cases, however, you can use the composite.xml file to determine which service is called. This is required to link services together, which in turn is necessary to visualize dependencies. In my script I’ve used a method that determines the service name from the service WSDL: wsdl_to_servicename.
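As an illustration, a hypothetical wsdl_to_servicename could look like the sketch below. It assumes the standard SOA infrastructure endpoint pattern http://host:port/soa-infra/services/partition/composite/service?WSDL; the real function in the script may use different rules, so adapt the parsing to your own naming conventions.

```python
try:
    from urlparse import urlparse        # Python 2
except ImportError:
    from urllib.parse import urlparse    # Python 3

def wsdl_to_servicename(wsdl_url):
    # Take the composite name from a soa-infra WSDL URL
    segments = urlparse(wsdl_url).path.split('/')
    if 'soa-infra' in segments:
        idx = segments.index('soa-infra')
        # .../soa-infra/services/<partition>/<composite>/<service>
        if len(segments) > idx + 3:
            return segments[idx + 3]
    return None

print(wsdl_to_servicename(
    'http://localhost:8001/soa-infra/services/default/HelloWorld/helloworld_client_ep?WSDL'))
# prints: HelloWorld
```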

Determining the operation for BPEL processes

The operation is specified in the invoke activity inside a BPEL process. The invoke activity's partnerLink attribute contains the name of the partner link. This partner link name corresponds to the name of the reference in the composite.xml and can be used to select the relevant operations.
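A minimal sketch of this step is shown below; it simply collects the operation attribute of every invoke activity per partner link, ignoring namespaces. The file name is hypothetical.

```python
import xml.etree.ElementTree as ET

def bpel_invokes(bpel_file):
    """Map partnerLink name -> set of invoked operations."""
    operations = {}
    for elem in ET.parse(bpel_file).iter():
        if elem.tag.split('}')[-1] == 'invoke':
            operations.setdefault(elem.get('partnerLink'), set()).add(elem.get('operation'))
    return operations

# For the HelloWorldCaller example this would yield something like {'HelloWorld': set(['process'])}
print(bpel_invokes('HelloWorldCaller/HelloWorldCallerProcess.bpel'))
```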

Determining the operation for BPMN processes

BPMN processes are structured differently than BPEL processes. Here, Conversations are defined in the BPMN XML file. The Conversations are referred to by Conversationals, which have a ServiceCallConversationalDefinition. This definition contains the operation that is called.
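Because the exact attribute names can differ between BPM versions, a safe first step is simply to dump the attributes of every ServiceCallConversationalDefinition element and see which one holds the operation; the sketch below (with a hypothetical file path) does just that.

```python
import xml.etree.ElementTree as ET

def bpmn_service_calls(bpmn_file):
    # Print the attributes of each ServiceCallConversationalDefinition so the
    # attribute holding the operation can be identified for your BPM version
    for elem in ET.parse(bpmn_file).iter():
        if elem.tag.split('}')[-1] == 'ServiceCallConversationalDefinition':
            print(elem.attrib)

bpmn_service_calls('MyProject/processes/MyProcess.bpmn')
```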

JCA references

JCA references are references that call JCA adapters such as the JMSAdapter, the DbAdapter or other adapters. In this case the references are not web services and the binding in the composite looks a bit different.

CaptureReferenceDb

In this case, the important information is contained in the file referenced by the binding.jca element. The JCA file specified contains the following information in my case.

CaptureReferenceDbJCA

From this JCA file I can determine the connection factory, the package and the procedure name. Do keep in mind that you should check that adapter="db" and that the interaction-spec className attribute is oracle.tip.adapter.db.DBStoredProcedureInteractionSpec before reading the PackageName property. Other JCA adapters can be parsed in a similar way.
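A sketch of this check, under the assumption that the JCA file looks like the sample above (adapter="db", a connection-factory with a location attribute, and property elements inside the interaction-spec), could be:

```python
import xml.etree.ElementTree as ET

DB_SPEC = 'oracle.tip.adapter.db.DBStoredProcedureInteractionSpec'

def parse_db_jca(jca_file):
    """Return connection factory and stored-procedure properties, or None if not a db adapter."""
    root = ET.parse(jca_file).getroot()
    if root.get('adapter') != 'db':
        return None
    result = {}
    for elem in root.iter():
        tag = elem.tag.split('}')[-1]
        if tag == 'connection-factory':
            result['connection_factory'] = elem.get('location')
        elif tag == 'interaction-spec' and elem.get('className') == DB_SPEC:
            for prop in elem:
                result[prop.get('name')] = prop.get('value')   # e.g. PackageName, ProcedureName
    return result

print(parse_db_jca('HelloWorldDb/getSysdate_db.jca'))   # hypothetical file name
```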

Executing the script

The script can be downloaded here: https://dl.dropboxusercontent.com/u/6693935/blog/soaparser.zip

In order to execute it, you need Python 2.7 (I have not checked whether it works with higher or lower versions). I have used the PyDev plugin in Eclipse Luna to develop it. I’m no expert Python programmer, so do not use this script as an example of how to write correct Python scripts. You need to change the path to the root directory under which composites are searched for recursively.
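Finding the composites under that root boils down to a recursive walk of the file system, something like the sketch below (the root path is just an example).

```python
import os

def find_composites(root_dir):
    """Yield the path of every composite.xml found under root_dir."""
    for dirpath, _dirs, filenames in os.walk(root_dir):
        if 'composite.xml' in filenames:
            yield os.path.join(dirpath, 'composite.xml')

for composite in find_composites('C:/projects/composites'):
    print(composite)
```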

As you can see in the bottom right part of the screenshot, my test case contained three services: HelloWorld, HelloWorldCaller (which calls the process operation on HelloWorld) and HelloWorldDb (which calls the database procedure TESTUSER.UTILS.GET_SYSDATE).

[Screenshot: the SOAParser script and its output in Eclipse (PyDev)]

Conclusion

In this blog I have supplied a script to analyse composite dependencies. Some reservations are in order though.

Custom code
Of course, when specific custom logic is used to determine calls dynamically at runtime, the code that determines dependencies also needs to be written specifically for that implementation. In such cases this script will not suffice.

No runtime dependencies
The script does not analyse a runtime situation but code present in a certain directory. If services have their own independent lifecycle, there might not be a specific directory which contains the state of a complete environment.

No BPEL 10g
For BPEL 10g, the bpel.xml file and the adapter WSDL files can be analyzed in a similar way. At the time of writing, I hope not many customers are still using 10g though.

Further

The method of parsing code to determine dependencies can also be extended to other technologies, such as front-end applications. These can be linked to the services, providing a more complete picture of the application landscape.

With a simple visualization tool such as Graphviz (http://www.graphviz.org/) you can visualize the dependencies. Graphviz requires the dependencies to be provided in a specific format, the DOT language, which is relatively easy to generate from a script. The resulting image can be very interesting to developers, architects and designers who want to determine who uses a certain service.
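As a simple illustration (not part of the downloadable script), the dependency pairs found by the parser can be written to a DOT file like this and then rendered with, for example, dot -Tpng dependencies.dot -o dependencies.png.

```python
def write_dot(dependencies, path='dependencies.dot'):
    """Write (caller, called) pairs as a directed Graphviz graph."""
    with open(path, 'w') as f:
        f.write('digraph dependencies {\n')
        f.write('  rankdir=LR;\n')
        for caller, called in dependencies:
            f.write('  "%s" -> "%s";\n' % (caller, called))
        f.write('}\n')

write_dot([('HelloWorldCaller', 'HelloWorld.process'),
           ('HelloWorldDb', 'TESTUSER.UTILS.GET_SYSDATE')])
```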

The Making of the World Cup Football 2014 Match Center ADF & AngularJS/HTML5 Application
http://technology.amis.nl/2014/09/13/the-making-of-the-world-cup-football-2014-match-center-adf-angularjshtml5-application/
Sat, 13 Sep 2014

This blog-post accompanies the OTN Article Marrying the Worlds of ADF and HTML 5 – to provide some details about setting up the environment for the ADF & HTML5/AngularJS World Cup Football 2014 Match Center application and about some implementation details in this application.

Development Environment

In order to inspect the applications, you will need to install JDeveloper 11g (11.1.1.7). This comes with its integrated WebLogic Server, and for running all but two of the ADF sample applications, that set up suffices. To also run the World Cup Football 2014 Match Center application itself, you will need to have access to an Oracle Database (XE or otherwise).

The easiest way to get hold of a complete environment – a VM with JDeveloper, the Oracle XE database and all sample applications up and almost running – is to follow the instructions in this article (that leverages Vagrant and Puppet to generate the VM for you and downloads the source code from GitHub): http://technology.amis.nl/2014/07/29/creating-automated-jdeveloper-12-1-3-oracle-xe-11gr2-environment-provisioning-using-vagrant-and-puppet/.

If you fancy a more manual set up procedure – which I can understand but would advise against – you can find plenty of instructions for installing JDeveloper 11g (or 12c) and an Oracle Database. Additionally, you would have to clone the GitHub source repository that accompanies this article (https://github.com/pavadeli/adf-html5) or download the source as a zip-file (https://codeload.github.com/pavadeli/adf-html5/zip/master) and extract the zip-file to your local file system.

The file system will then look as shown in the next figure:

image

All ADF sample applications described in the article are in the sub folders adf and adf-step1..6. Sub folder database contains the SQL scripts required for the creation of the database schema with the tables and types used by the World Cup 2014 application. Note: only the ADF applications in sub folders adf and adf-step1 actually need the database set up. All other sample applications run without database interaction.

We assume a database user schema (for example WC) has been created in the database and has been granted several system privileges, including connect, create session, create table, create or replace type and some quota on the default tablespace. Use SQL*Plus, SQLDeveloper or your favorite tool to create the database objects using the ddl.sql script. Then use the dml.sql script to load the data into the tables. The file queries.sql contains a number of sample queries against the database schema; these queries have been used in the ADF BC ViewObjects in the World Cup 2014 application.

After starting JDeveloper and opening either one of the two applications that require database access, look under application resources and set the properties for the database connection to the values appropriate for your environment. To test whether the settings are correct, you can run the ADF BC Application Module WorldCupService.

image

 

 

Special Features

This section is work in progress and will contain references to detailed explanation about certain features in the ADF World Cup Football 2014 Match Center application. By inspecting the code for the application you will find out the details on your own – it is all there. Below are some references to resources that were helpful in the creation of this sample application.

Some features that deserve special attention:

* Creation of the Language Selector and the Skin Switcher

* Using contextual events to communicate the current set of tags to the OTNBridge and eventually the tagcloud and to communicate the selected tags from the tagcloud via the OTNBridge to the host ADF Taskflow

* Refreshing the ADF BC ViewObject when a tag has been selected (and this has been published through a contextual event) (and ensuring that the contents of the table is refreshed as well)

* use of client side ADF JavaScript to communicate manual tag creation and deletion

* Opening the popup with taskflow inside from the context info element and passing the proper (match) context to the taskflow:

image

* Visualizing the match progress using panelGridLayout:

image

Resources

GitHub Repository with Vagrant (and Puppet) configuration files for the generation of the Virtual Box VM with the complete development environment including the sample applications: https://github.com/lucasjellema/adf-12-1-3-vm .

Some blog articles discussing the SQL queries behind the tag cloud and the group standings:

SQL Challenge: Find World Cup Football matches with a comeback

SQL Challenge: Drilling down into World Cup Football Tag Cloud

SQL Challenge: Dynamically producing a tag cloud for World Cup Football matches

SQL Challenge – World Cup Football 2014 – Retrieving matches and Calculating Group standings

Use REGEXP_SUBSTR for string tokenizing https://blogs.oracle.com/aramamoo/entry/how_to_split_comma_separated_string_and_pass_to_in_clause_of_select_statement

Add logging in ADF applications: http://blog.iadvise.eu/2013/01/02/starting-with-adf-11g-logging/

Note: popup and setpropertylistener do not go together well (https://community.oracle.com/thread/677678?start=0&tstart=0, https://community.oracle.com/thread/2235404?start=0&tstart=0, http://amit-adf-work.blogspot.nl/2012/12/adf-issue-with-popup-and.html )

http://www.jobinesh.com/2010/10/how-to-set-bind-variable-values-at.html

http://dailydevfixes.blogspot.nl/2011/07/setting-bind-parameters-on-hierarchy-of.html

http://jdeveloper-adf.googlecode.com/svn/trunk/TGPrototype2/ViewController/src/com/tgslc/defaultManagement/utils/ADFUtils.java

Set width for panelCollection: http://cbhavsar.blogspot.nl/2008/10/using-panelcollection-with-master.html

Help on PanelGridLayout: https://formattc.wordpress.com/tag/panelgridlayout/

Stretching in various ADF Faces components: http://www.adftips.com/2010/11/adf-ui-tips-to-stretch-different-adf.html

Introducing ADF Faces Contextual Events:  http://technology.amis.nl/2013/03/14/adf-re-introducing-contextual-events-in-several-simple-steps/

Add componentUI reference in TagCloudBean; www.jobinesh.com/2011/06/safely-storing-uicomponent-component.html

Add resource bundle and language switcher http://technology.amis.nl/2012/08/11/supporting-multiple-languages-in-adf-applications-backed-by-resource-bundles-and-programmatically-controlling-the-jsf-locale/ and

http://technology.amis.nl/2012/08/09/introduction-to-resource-bundles-in-adf-applications-for-centralizing-management-of-boilerplate-text/

Introduction to skinning: http://technology.amis.nl/2009/07/01/using-adf-faces-11g-skinning-for-setting-the-styles-of-specific-component-instances-or-groups-of-instances/ and

http://docs.oracle.com/cd/E28280_01/web.1111/b31973/af_skin.htm and Skin (Switcher): http://docs.oracle.com/cd/E18941_01/tutorials/jdtut_11r2_83/jdtut_11r2_83.html

Presentation on slideshare with some visualizations of the mechanisms in the World Cup Football 2014 Match Center application: http://www.slideshare.net/lucasjellema/marrying-html5-and-angular-to-adf-oracle-openworld-2014-preview.

Developing the AngularJS Tagcloud application – appendix for Marrying the Worlds of ADF and HTML5
http://technology.amis.nl/2014/09/13/developing-the-angularjs-tagcloud-application-appendix-for-marrying-the-worlds-of-adf-and-html5/
Sat, 13 Sep 2014

This blog-post accompanies the OTN Article Marrying the Worlds of ADF and HTML 5 – to provide some details about setting up the HTML5/AngularJS development environment and about the development of the AngularJS application.

Get hold of the sources

All sources associated with the article are available from GitHub: https://github.com/pavadeli/adf-html5. You can clone this GitHub repository or download the sources in a zip-file that you can subsequently extract locally. The directory structure that you will end up with looks like this:

image

 

The top folders contain the ADF application with the embedded AngularJS component; the lower (html-…) directories contain the pure HTML5 & AngularJS applications.

Set up the Development Environment

JDeveloper is the IDE for ADF application development: it includes facilities for library management, for building and deploying the application, and the integrated WebLogic Server as the runtime engine. In the same way, we need an environment to develop the rich client HTML5 & AngularJS application. We use a different set of tools in this environment, and while you have some choice, we suggest the following combination of tools:

– IDE: Sublime Text Editor
– Package Manager: Node.js – NPM
– Build (Ant-like): Gulp
– Dependency Management (Maven-style): Bower
– Run Time: Google Chrome browser

Install Node.js (http://nodejs.org/). This will give you access to the Node.js Package Manager (NPM). With NPM you can install Gulp (a build tool, super hip, which does for JS what ANT does for Java) and Bower (dependency management; it is to JS what Maven is to Java).

Open the command line and type:

> npm install -g gulp bower

This will install both gulp and bower on your local file system.

To build the pure HTML5 sample applications in the folders html-step1, html-step2 and html, you have to change the current directory to each of these folders and for each perform these steps:

> npm install

to fetch all design-time dependencies (the tools used via gulp). The file package.json tells NPM what to install:

image

Next, to fetch the run time dependencies (which means: AngularJS):

> bower install

This instruction results in the creation of the bower_components subdirectory with various JavaScript components downloaded from GitHub or some other git endpoint. The packages that Bower should install are configured in the file bower.json:

image

After running bower install you can run bower list to see the list of installed packages:

image

Note that the angular component as well as the bower-tagcanvas component are known to Bower (as you can check in the Bower directory of components at http://bower.io/search/). Details about a Bower package can be retrieved with bower info [package name]:

image

The next command starts the build server (Gulp):

> gulp serve

This will not take long (maybe two seconds), and will open a browser window. In the command line window, you will see an indication of the host and port at which you can access the server:

image

The actions performed by Gulp are defined in the gulpfile.js file. This file defines for example that gulp serve will publish all files in the current directory (./) and will watch for changes to all files with extension .html and inside the components directory:

image

Run the sample applications

When you started the build server with gulp serve, the run time server was started – this server provides resources to the associated run time presentation engine: the browser. Real AngularJS developers seem to only use Google Chrome, so open that browser, and point it at the URL suggested in the command line window by the Gulp Server – with /tagcloud-html.html appended (for the folder html-step1):

image

You can open the main source file for this application, in for example Sublime Text editor:

image

Add a new value for a link and save the text file. You will notice that without refreshing, the browser updates the page (thanks to a Web Socket channel to the Gulp Server):

image

Stop Gulp and change to the html-step2 directory

image

Open the tagcloud-bridge.html document in the browser window:

image

This application contains the bridge module into which the guest-tag-cloud component has been injected. Both tagclouds shown in the page are completely independent instances of the same AngularJS module.

Note: instead of checking out the html-step1 and html-step2 directories, you could also immediately go to the html directory, start the Gulp server and open the index.html document that gives access to both pages:

image

The AngularJS modules – bridge and tagcloud – can be packaged into a couple of JS files that can easily be integrated into other applications, such as an ADF application. You achieve this by running the command gulp. This triggers the default task in the gulpfile.js file, which states that both integration and tagcloud should be built:

image

The result of this action should be written to directory ../adf/ViewController/public_html/scripts, which is part of the ADF application that embeds the tagcloud component.

image

Note that a typical gulp build pipeline may also contain steps for

  • Minification (reducing the size of the JS file, often by up to 70 to 80%)
  • JSLint (static code checking, similar to Sonar in Java)
  • Various preprocessing / checking tasks

Exploring the AngularJS application

Without diving too deeply into AngularJS there are some things to point out in the AngularJS applications.

A good place to start is in the file tagcloud-html.html:

image

At (1) we see a custom HTML tag, tag-cloud. We can use this tag because HTML5 allows custom tags: we engage AngularJS, and in the tagcloud.js file (which is imported and defines the included tagcloud module) we have defined an AngularJS directive that instructs Angular to relate this tag to the tagcloud module.

At (2) we see an Angular data binding expression: {{log}} will result in the contents of the log variable in the Angular scope to be returned. At (7) (see below), we see how we put a function on the scope as tagClicked. This function will be invoked whenever a tag is clicked in the tagcloud. The function will create a string and put that in the scope as log. Because of Angular’s two-way databinding, the update of log in the scope by this function will immediately result in an update of the contents of the <pre> tag in the page.

At (3) we see the import of the JavaScript libraries – two were created by bower and one is our hand-crafted Angular module tagcloud, to be discussed below.  At (4) the Angular App myApp is initialized with dependencies on the tagcloud module. A controller is set up for myApp and defined with an anonymous function. This function puts a collection called tags on the scope with values that represent a simulated initialization phase. After a timeout of 2 seconds (2000 ms), a new value for tags is put on the scope (6). Again, through two way data binding (and the watch function defined for tags in the tagcloud.js file), the consumer of the tags variable will be updated on both occasions when the tags variable is set.

image

 

Let’s take a look at tagcloud.js – the file that defines the Angular tagcloud module. At (1) we see the definition of the module tagcloud, which does not have dependencies on any additional modules. At (2) is the derivation of the unique id for each instance – every occurrence of the tag-cloud custom HTML tag. At (3) the directive specifies that an HTML element of tag-cloud (which is the automatic conversion of camel case tagCloud) is to be associated with this module. At (4), the private (data) scope for this module is defined, containing the elements tags and tagClicked. This is crucial in order to be able to have multiple instances of the tagcloud in a single page that are isolated from each other. Each instance has its own tags and its own tagClicked function.

At (5) we indicate the external HTML file that provides the ‘view’ content for our tagcloud module. The tagcloud.html file is shown below.

Directives that want to modify the DOM typically use the link option, shown at (6). One very important element set up in this link() function is the watch function that we associate with the tags collection (on the scope). Whenever the value of tags changes, this function will be executed. In most situations, the result will be that the Start function on the TagCanvas object is invoked – to redraw the tagcloud [with the latest set of tag values].

image

The HTML file that is imported at (5) is actually very simple:

image

A new canvas is rendered with an id value based on the canvasId value in the Angular scope – where we have taken care to make this value unique. The ng-repeat attribute is interpreted by Angular, creating a for each loop that stamps out the <li> element for every element in the tags collection (on the scope). For each tag, an <a> element is rendered with attributes data-weight and click derived from properties of the tag element in the tags collection and the tagClicked() function that was put on the scope at (7) in tagcloud-html.html.

A visualization of the application structure is shown next:

image

Resources

What’s so great about Angular by Ben Lesh – http://www.benlesh.com/2014/04/embular-part-2-whats-great-about-angular.html

Getting Started With Gulp by Travis Maynard: http://travismaynard.com/writing/getting-started-with-gulp

Gulp as a Development Web Server by Johannes Schickling: http://code.tutsplus.com/tutorials/gulp-as-a-development-web-server–cms-20903

Getting Started with Bower – blog article with introduction tutorial to Bower for managing JavaScript dependencies

AngularJS – documentation on the Directive, watches

Sometimes the cause of a TNS error is ….
http://technology.amis.nl/2014/09/03/sometimes-cause-tns-error/
Wed, 03 Sep 2014

A couple of months ago one of my customers had a failed data warehouse report. An ORA-12592 (TNS) error message was generated.
It turned out not to be the only TNS error. Over a couple of weeks similar TNS errors were generated: not only the ORA-12592 error, but also ORA-12514 and ORA-12571 errors.

We did some extensive sqlnet tracing but we didn’t find the cause there.

Finally the cause turned out to be a very simple and stupid one: the maximum number of processes had been reached…. But strangely, no ORA-00020 error was generated and found in the alert.log. I would expect such an error in the alert.log when this happens, but it seems that Oracle does not always write it.

Oracle has a parameter called processes. By default it is set to 150 during creation of the database, but you can of course choose another value at that point. If you set this parameter to 200 and more than 200 processes are needed, a new connection to the database cannot be made and the user gets some sort of TNS error like the ones mentioned above.

How can you find out whether this is the case? Use the following query:

SQL> select * from v$resource_limit where resource_name = 'processes';

If you get results like these:

RESOURCE_NAME   CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_AL LIMIT_VALU
--------------- ------------------- --------------- ---------- ----------
processes                       194             200 200        200

then you will probably get TNS errors.

You can see that the max_utilization is 200, which is the same as the limit_value and the processes parameter. This means the maximum number of processes has been reached at least once. If the current_utilization is also equal to (or close to) the limit_value, new users will get TNS errors during login. To solve this you should set the processes parameter to a higher value.

You can do that with the following command:

alter system set processes=300 scope=spfile;
and then restart the database.

The processes parameter is not a dynamic parameter so you are not able to change this parameter without a restart of the database.

So next time when I get TNS errors I will take a look in the v$resource_limit view first, before starting with extensive sqlnet tracing…

How to make a time consistent export dump using the expdp datapump utility
http://technology.amis.nl/2014/09/03/make-time-consistent-export-dump-using-expdp-datapump-utility/
Wed, 03 Sep 2014

In the old days, when there was only the exp utility, we made a time consistent export dump by using the consistent=y parameter.
But today, and in fact for a couple of years already, we mostly use the expdp Data Pump utility. How do we make a time consistent export using Data Pump?
For that we use the flashback_time or flashback_scn parameter. In this post I show you how to set the flashback_time parameter.

The flashback_time parameter takes a date-time in timestamp format as input. If you want a time consistent export dump as of the present time, you can therefore set this parameter as follows:

flashback_time=systimestamp

If you want to use a parameter file, create a file named for example scott.par with content like this:

schemas=scott
dumpfile=exp_scott.dmp
logfile=exp_scott.log
directory=DATA_PUMP_DIR
flashback_time=systimestamp
..
You can then execute the export using:

expdp system/password parfile=scott.par

If you want a time consistent export as of another timestamp, let's say September 3rd 2014 at 14:41:00, then you should set the flashback_time parameter as follows:

flashback_time="to_timestamp('03-09-2014 14:41:00', 'DD-MM-YYYY HH24:MI:SS')"

From version 11.2 onwards it is also possible to use the so-called legacy mode: you can use the parameters from the old exp utility again! For example, you can use the consistent=y parameter to make a time consistent export:

$ expdp schemas=scott consistent=y dumpfile=exp_scott.dmp logfile=exp_scott.log directory=DATA_PUMP_DIR

This is the output you will get:

Export: Release 11.2.0.4.0 - Production on Wed Sep 3 15:32:03 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "consistent=TRUE" Location: Command Line, Replaced with: "flashback_time=TO_TIMESTAMP('2014-09-03 15:32:03', 'YYYY-MM-DD HH24:MI:SS')"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=RPCR_DEV flashback_time=TO_TIMESTAMP('2014-09-03 15:32:03', 'YYYY-MM-DD HH24:MI:SS') dumpfile=rpcr_dev2.dmp logfile=rpcr_dev.log directory=EXP_DIR reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
....

So you can see that expdp translates consistent=y into a flashback_time parameter.
As I said, this only works in version 11.2 and higher.

THE HAPPY OLD DAYS ARE BACK AGAIN in version 11.2!! :-)

Monitor Dell PERC when running on Oracle Virtual Server 3.x
http://technology.amis.nl/2014/09/03/monitor-dell-perc-running-oracle-virtual-server-3-x/
Wed, 03 Sep 2014

In this article I will try to explain how to monitor your Dell RAID controller with Cloud Control when running Oracle Virtual Server 3.x (OVS 3.x).

Some time ago a physical drive broke down on one of our Dell PowerEdge servers. This did not cause any problem, because the drive was part of a RAID set managed by the Dell built-in PowerEdge RAID Controller (hereafter PERC). However, we did not notice this breakdown until a second physical disk also broke down, making the whole RAID set unavailable.

The technical part of replacing the disks and recovering has been done, but we were left with the issue of not having been notified of the first failure at all. So I started a quest on the web to find out how to prevent this from happening again.

The first solution found was (of course) to install Dell OpenManage. It looks promising, but unfortunately it is not certified (and also not working) on a server running Oracle Virtual Server 3.x, which we use on almost all our hardware. I tried, but after installing the software the server refused to start after a reboot.

Next try: SNMP… Unfortunately, all SNMP and PERC related information that can be found is based on the Dell OpenManage software, which we could not use, as described above…

Crawling the web I finally bumped into some blog articles stating that the PERC is a rebranded MegaRAID adapter. One of these articles mentioned something called MegaCli (the MegaRAID command line interface). Aha, a new hook into a possible solution?

Yep! With this information as a new starting point I was able to retrieve enough information to use the command line to query the PERC. And if it’s command line, we can script, and if we can script, we can monitor with Oracle Cloud Control (or any other monitoring tool).

OK, enough blabla, let’s walk through the steps required to get this thing moving.

First we need to download two small RPMs to be installed on the server:
– Lib_Utils-1.00-09.noarch.rpm, which can be found here
– MegaCli-8.02.16-1.i386.rpm, which can be found on the support site of LSI. Download this zipfile, which contains the RPM (and other tools for different OS's).

    1. Log in on your host as root and navigate to /tmp
    2. Install both RPMs:
[root@host tmp]# yum localinstall Lib_Utils-1.00-09.noarch.rpm --nogpgcheck
[root@host tmp]# yum localinstall MegaCli-8.02.16-1.i386.rpm --nogpgcheck
    3. Create a softlink in /usr/sbin to the MegaCli executable using one of these statements:
[root@host tmp]# ln /opt/MegaRAID/MegaCli/MegaCli /usr/sbin/MegaCli
[root@host tmp]# ln /opt/MegaRAID/MegaCli/MegaCli64 /usr/sbin/MegaCli
    4. Test the functionality by executing the following command, which should show the number of RAID controllers in the system:
[root@host tmp]# /usr/sbin/MegaCli -adpCount
    5. Execute the following command, which will give the number of physical disks. The output will be used in a later stage when configuring Cloud Control:
[root@host tmp]# /usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number
    6. If the command above executes without error you can execute /usr/sbin/MegaCli -h to get a lot of help information about the huge load of options.

When the part above is executed successfully we can proceed with the next part: Getting Oracle Cloud Control to monitor the PERC.

At this point I assume you already have a Cloud Control agent running correctly on the specific host. If not, you have to install and configure one before you continue.

In Cloud Control 12c we have a beautiful feature called Metric Extensions.

Quote from the documentation:

Metric Extensions enhance Enterprise Manager’s monitoring capability by allowing you to create new metrics to monitor conditions specific to your environment. These Metric Extensions can be added to any target monitored by Enterprise Manager. Once developed and deployed to your targets, the metrics integrate seamlessly with the Oracle-provided metrics.

This means that almost anything you can execute on a command line interface (CLI) that gives a formatted result can be used as a metric. This can be at the OS prompt, SQL, RMAN, ODI, Dell OpenManage, Microsoft SQL etc.

The development cycle for a metric extension looks like this:

Lifecycle of Metric Extension

Metric Extension Lifecycle

Since all commands to the PERC have to be executed as root, I decided (for simplicity) to use the Monitoring Credential facility in Cloud Control with the root account. You could also set it up to use sudo and a specific user account, but that is beyond the scope of this blog post.

In the next steps we will create a metric which will alert us when the number of available disks changes (i.e. a disk fails or is removed). Based on this example you should be able to create your own variants depending on the requirements.

  1. Log in in Cloud Control
  2. Navigate to <Setup><Security><Monitoring Credentials>
  3. Select the <Host> target type and click on <Manage Monitoring Credentials>.

Next you will see a list of all hosts in Cloud Control with 3 Credential Sets. We will use the set called “Host Credentials For Real-time Configuration Change Monitoring”.

  1. Select the required line (hostname-credentialset) and click <Set Credentials>.
  2. Fill in the username (root) and the corresponding password.
  3. Click on<Test and Save> to store the password.

When the security has been setup, we can start creating the Metric extension.

  1. Navigate to <Enterprise><Monitoring><Metric Extensions>.
  2. Click on <Actions><Create> to start the wizard which will assist you in creating the Metric extension.

On the first screen we set the general setting regarding this metric.

  1. Select the Target Type <Host>
  2. Give the Metric extension a name, I used ME$Raid_PD_Count
  3. Give the metric a useful display name, e.g. Raid Physical Disk count
  4. Set the adapter type to "OS Command – Multiple Columns"
  5. Add a description if desired and leave the Collection schedule on default settings
  6. Click <Next> to proceed to the Adapter screen

The adapter screen defines how a specific query is executed. A proper description of the options is on the right side of the screen.

    1. Since we have to execute a (very small) script, the command we will use is '/bin/bash'
    2. Click on the small pencil behind the script box
      • As Filename we use "RaidPhDiskCount"
      • In the File Contents box paste the following line:
/usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number
      • Click <OK>

You will notice that the "Script" textbox has been filled with "%scriptsDir%/RaidPhDiskCount". The script has also been added to the "Custom Files" list at the bottom left of the screen.
If you take a closer look at the output generated earlier by "/usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number" you will see it contains some text and a number, separated by a : (colon).

Number of Physical Drives on Adapter 0: 6
  3. Based on the above we put a : (colon) in the Delimiter field.
  4. Click <OK> to proceed to the Columns page.

On the Columns page we have to define each column that will exist in the output of our command. As we saw above, the output contains 2 columns, separated by a colon.

For each column in the output we have to define whether it is a key column or a data column (containing the measurement data). For a data column we can also specify the default thresholds for the warning and critical levels. Note: the suggested values for warning and critical below are not typos; we will correct them in a later stage.

  1. Click <Add><new metric column> to add the first column
  2. In the Name box write Description, and do the same in the Display Name box
  3. The Column Type should be Key Column and the Value Type is String
  4. Click <OK> to save
  5. Click <Add><new metric column> to add a second column
  6. In the Name box write PhysicalDiskcount, the Display Name will be "Physical Disks"
  7. The Column Type will be Data Column with Number as Value Type
  8. The Comparison Operator should be set to <, the warning level to 1 and critical to 0.7
  9. Change the Alert Message to "Number available disks on raid controller is degraded to %value%"
  10. Click <OK> to save, and <Next> to proceed to the Credentials page

On the Credentials page we select which credential set should be used to measure this specific metric. Earlier we did prepare the “Host Credentials For Real-time Configuration Change Monitoring” set for this.

  1. Select the "Specify Credential Set" radio button and select the correct credential set if this is not done automatically.
  2. Click <Next> to go to the Test page

The Test page offers the possibility to test the metric and check the output. The metric can be tested against all targets of the correct type if required.

  1. Click <Add> and select 1 (or more) targets on which you want to test the metric. Click <Select>
  2. Select the target you want to test against and click <Run test>.
  3. Cloud Control will execute the test and present the results in the bottom half of the screen. If an error message is thrown, you can go back in the wizard to correct it. After correcting, return to this page and retry.
  4. If you're happy with the test results click <Next> to go to the review page.

As could be expected based on the name, you can once more review all settings for this metric on this page and click <Finish> to save and close.

When the Metric Extension has been tested and saved, the next step is to save it as a "Deployable Draft". From this point on, it cannot be modified anymore.

  1. Select the Metric Extension
  2. Click on <Actions><Save as Deployable Draft>

Once a Metric Extension has reached the Deployable Draft status, it can be deployed to 1 (or more) servers to test it in real life.

  1. Select the Metric Extension
  2. Click on <Actions><Deploy to Targets…>to open the Deployment screen.
  3. Click on <Add>
  4. Select the target(s) where you want to deploy on and click <Select>
  5. Click <Submit> to start deployment

At this stage the metric is deployed to our server, which means that every 15 minutes it is executed, the results are stored in the database and alerts can be generated. However, the metric needs some small tweaks to work properly. Remember we set the warning level to 1 and critical to 0.7?

    1. In Cloud Control navigate to the homepage of the host involved.
    2. Click on <Host><Monitoring><Metric and collection settings>
    3. On this page you see an overview of the active Metrics on this host. Locate the Metric we just created.
Metric Extension setting

Metric Extension setting

    4. Find out how many physical disks this particular host contains by executing the following command on the specific host (as root):
/usr/sbin/MegaCli -PdGetNum -a0 -NoLog |grep Number

 

  • I want to be notified as soon as the number of disks is lower than it should be (this means a disk is broken or removed). In my opinion this is always a critical situation. However, Cloud Control requires that the warning and critical values are filled in and different. For this reason the warning threshold should be equal to the number of physical disks, and the critical threshold 0.5 (half a disk :-)) lower. So, if you have 6 physical disks, the warning threshold is 6 and the critical threshold 5.5.

 

The result of this is that, as soon as 1 disk is gone, the value drops below the critical threshold, which should generate a critical alert.

  1. Click <OK> to continue
  2. Click <OK>
  3. Done!

From this point on, Cloud Control will monitor the PERC in your host every 15 minutes and generate an incident as soon as something is wrong. Of course, you still need to configure notifications to send alerts to your mailbox, pager or ticketing system, but I assume (and hope) that this has already been done if you are already using Cloud Control.

How to hide login data of sql-scripts on Windows
http://technology.amis.nl/2014/09/03/how-to-hide-login-data-of-sql-scripts-on-windows/
Wed, 03 Sep 2014

One of the most frequently given pieces of advice on hardening a database is to run scripts without broadcasting your login data at the same time. According to Arup Nanda in his famous articles on "Project Lockdown", you have three options to run your scripts without letting everybody in on your password secrets:

  1. Start your scripts under /nolog and add your login to the SQL-script you're running
  2. Start SQL*Plus under /nolog and add the login at the beginning of your shell script
  3. Don’t use a login in your command and sql-scripts but a central, highly secured password file

The last option is often described for *nix systems, but examples for Windows are quite hard to find. And that is what this blog is about: an implementation of Arup Nanda's third option for Windows, in the form of a simple Windows command script that gets passwords from a highly secured file and uses them to run SQL-scripts.

First of all, choose a directory in which to "hide" your password file on the database server. It should preferably not be located under an Oracle Home, because intruders would search there first. Apart from that, it really does not matter where you put it, as long as you a) make it readable for the Oracle software owner, b) do not let the directory inherit the access rights from C: and c) if possible, don't call the owning user "oracle". And maybe you should not call the file "passwords.txt" or anything else that directly indicates its content.

My chosen location is an old AMIS favorite, a directory structure called “oradmin”. It is created under ORACLE_BASE and contains several subdirectories like \bin, \sql, \par, \log and/or \exp.

Oradmin

Figure 1 – The oradmin-directory tree

Here, all administrative (ad-hoc) and batch scripts can be stored.

I called my password file “input.txt” and placed it in $OB\oradmin\par. The “run_script_as.cmd” which I will describe in this blog is located in the \bin-directory and the sql-scripts to be executed are situated in the \sql-directory. Below you find an example of the (fictional) content of my “input.txt” :

# SID:USERNAME:password:"path to Oracle Home (planned future use)";
EMMA:SYS:pitfall1:C:\oracle\db\11.2.0_SE;
ANNA:MOODLE:MoOd13:C:\oracle\db\11.2.0_EE;
ANNA:SYS:pitfall2:C:\oracle\db\11.2.0_EE;

security_on_input.txt

Figure 2 – Security settings of the input.txt-file

In Figure 2 you can see that the security on this file is very, very strict. The Windows group, which has also been granted access (read/execute access only!) is the renamed “ora_dba”-group and user “moodle” is the alternative “oracle”-user on this Windows machine. Here, even the administrators are “officially” excluded from all access to everything under ORACLE_BASE. This means that the only user able to manage this file and its content is the local user “moodle”.

But now comes the somewhat tricky part: creating a Windows command script which uses this "input.txt" and runs SQL-scripts (for batches) with the correct username-password combination on the correct database (my machine for example has 3 different Oracle Homes and 4 databases).

At first I wanted the script to loop through each row in order to find the correct password, but this is not necessary, because you know the SID of the database you want the script to be run on, AND you know the username which should run the script (no real secrets per se). You only have to be a little more specific and also tell the command script which SQL-script should be run. These three chunks of information identify which row in "input.txt" is needed and what your command has to look for. Finally I came up with a script which I called "run_script_as.cmd":

REM #/******************************************************************************
REM Script Name : run_script_as.cmd
REM Creator : K. Kriebisch, AMIS Services b.v., Nieuwegein (NL)
REM Creation date : 30-07-2014
REM Purpose : used in conjunction with an input-file for passwords
REM to run SQL-scripts on local databases of Windows servers
REM without providing a (visible) password
REM Usage : run_script_as <SID> <USERNAME> <scriptname>
REM Remarks : used to obfuscate the used logins for the scripts
REM The input.txt must be highly secured and is only
REM accessible to the Oracle software owner
REM SQL-script should be located in $OB\oradmin\sql
REM Versions : usable on all Windows versions like XP and higher
REM
REM -------------------------------------------------------------------------------
REM
REM Revision record
REM Date Version Author Modification
REM ---------- ------ ------ -------------------------------------------------
REM 30-07-2014 1.0 KK Created
REM
REM ******************************************************************************/

setlocal
set echo off
REM Here should come some tests whether the usage is correct or not ...

REM ... and the logging is also not yet implemented

REM Setting Defaults, so all variables are filled with a default value
set ORACLE_SID=%1
set USERNAME=%2
set SQL_SCRIPT=%3
set PWD=geen
set SCRIPT_BASE=C:\oracle\oradmin
set SEARCHSTRING=%1:%2

REM Does the search string exist in the input file? If not, stop the script execution
findstr /L/B /C:%SEARCHSTRING% %SCRIPT_BASE%\par\input.txt >nul
IF errorlevel 1 (
echo %SEARCHSTRING% not found!
goto end
)

REM If the search string exists, find the token which represents the password in the matching line
FOR /F "eol=; tokens=3 delims=:" %%i in ('findstr /L/B /C:%SEARCHSTRING% %SCRIPT_BASE%\par\input.txt') DO (set PWD=%%i)
REM ... and get the string for the Oracle_home path
FOR /F "eol=; tokens=4 delims=:" %%j in ('findstr /L/B /C:%SEARCHSTRING% %SCRIPT_BASE%\par\input.txt') DO (set ORACLE_HOME=%%j)

REM Combine all information to create the connectstring
set CONNECTSTRING=%USERNAME%/%PWD%@%ORACLE_SID%

REM In order to connect as user SYS don't forget to add as sysdba
if "%2" == "SYS" (set CONNECTSTRING="%USERNAME%/%PWD%@%ORACLE_SID% as sysdba")

REM ... and now we run our desired script!

sqlplus %CONNECTSTRING% @%SCRIPT_BASE%\sql\%SQL_SCRIPT%

REM Goto lable to stop the script if search string can not be found
:end

endlocal

What does this script do?
The center of this script is the check:

findstr /L/B /C:%SEARCHSTRING% %SCRIPT_BASE%\par\input.txt >nul
IF errorlevel 1 goto end

which should find a specific row in the input file or stop the command script when that is not the case.

'findstr /L/B /C:%SEARCHSTRING% %SCRIPT_BASE%\par\input.txt' means "search for the given string in each row of the input file (=> /C:[search string] [path and file name]) and try to find a literal match (=> /L)", with /B meaning "start looking at the beginning of each line".

"SEARCHSTRING" is defined as the combination of the ORACLE_SID ("%1"), a colon and the given username ("%2"), which forms the beginning of each row in the input file. The errorlevel check looks at the result of findstr, which searches the whole file from head to bottom. If findstr finds a matching entry before the end of the file, the script continues processing with the following FOR-loop; otherwise it warns the user that this combination does not exist in the input file and then stops the execution of this command file.

When the script reaches the first "FOR /F" command, we already know that the 'findstr' command finds (at least) one matching entry, which we can use as input in the "FOR /F" loop to find the password in that same row. What we are looking for is the third token (= entry) of that row, which is the password string; we put it in the variable called PWD so the rest of the script can use it. And that is what the next row of the script does:

FOR /F "eol=; tokens=3 delims=:" %%i in ('findstr /L/B /C:%SEARCHSTRING% %SCRIPT_BASE%\par\input.txt') DO (set PWD=%%i)

The /F switch of a FOR-loop command tells the Windows command engine to loop through a given file item (= line) by item (/D would loop through a directory listing, finding file and subdirectory names etc.). Here the items are, or rather the item is, the result of the 'findstr' command executed on our "input.txt" file: a specific, single row.

  1. eol=; means that a semicolon marks the end of a line: everything from a semicolon onwards is ignored, and lines beginning with one are skipped
  2. tokens=3 means: take only the 3rd token (and put it in variable %%i)
  3. delims=: means the tokens are separated by colons

The next line does the same for token no.4, which is the string of Oracle_Home variable.

The password and the Oracle_home variable could be filled in one go, but I'm still fighting with the colon in the path string, which I did not want to lose because of the similarity between this input file and the oratab file. But maybe I should use a different delimiter…

BTW, the loop command would then be

FOR /F "eol=; tokens=3,4 delims=:" %%i in ('findstr /L/B /C:%SEARCHSTRING% %SCRIPT_BASE%\par\input.txt') DO (set PWD=%%i& set ORACLE_HOME=%%j)

Now we have all the information to run our sql-scripts.

The command script now sets the Oracle parameters (to make sure the correct SQL*Plus version and database will be used) and forms the complete connect string. The following command is an exception: it is only executed if the username is "SYS", so it will not override the already created connect string if the condition is not met.

The default location of the script is under the script base (C:\oracle\oradmin), preferably in \sql for an SQL script. And remember, the value of the variable "SQL_SCRIPT" has been defined as the third input argument ("%3").

When you put "C:\oracle\oradmin\bin" in the PATH variable of your machine, the script can finally call SQL*Plus(.exe) and the desired SQL script with:

sqlplus %CONNECTSTRING% @%SCRIPT_BASE%\sql\%SQL_SCRIPT%

To use this script, all the calling user has to do is call it with its three parameters, from wherever he/she stands in the directory tree of the database server, as the script header indicates:

run_script_as EMMA SYS query1

That’s it! No passwords in sql-scripts anymore on this machine!

I know, there’s still a lot which can be beautified in this script, but it works for me ;-)).

So, have fun adopting the script for yourself!

The post How to hide login data of sql-scripts on Windows appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/09/03/how-to-hide-login-data-of-sql-scripts-on-windows/feed/ 2
SQL> Select * From Alert_XML_Errors; http://technology.amis.nl/2014/08/29/sql-select-alert_xml_errors/ http://technology.amis.nl/2014/08/29/sql-select-alert_xml_errors/#comments Fri, 29 Aug 2014 18:58:45 +0000 http://technology.amis.nl/?p=32087 Once you are able to show the xml version of the alert log as data in database table Alert_XML, it would be nice to checkout the errors with accompanying timestamps from within view Alert_XML_Errors. Like this, with the help of 2 types and a pipelined function. su - oracle . oraenv [ orcl ] [oracle@localhost [...]

The post SQL> Select * From Alert_XML_Errors; appeared first on AMIS Technology Blog.

]]>
Once you are able to show the xml version of the alert log as data in database table Alert_XML, it would be nice to check out the errors with accompanying timestamps from within view Alert_XML_Errors. Like this, with the help of 2 types and a pipelined function.

su - oracle
. oraenv [ orcl ]
[oracle@localhost ~]$ sqlplus harry/*****
....
SQL> desc alert_xml
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
TEXT                                               VARCHAR2(400 CHAR)

SQL> CREATE OR REPLACE TYPE v2_row AS OBJECT ( text varchar2(400));
/

Type created.

SQL> CREATE OR REPLACE TYPE v2_table AS TABLE OF v2_row;
/

Type created.

SQL> CREATE OR REPLACE FUNCTION Get_Errors
( P sys_refcursor )
RETURN v2_table PIPELINED
IS
out_rec  v2_row := v2_row(NULL);
this_rec alert_xml%ROWTYPE;
currdate VARCHAR2(400) := 'NA';
last_printed_date VARCHAR2(400) := currdate;
testday VARCHAR2(3);
testerr VARCHAR2(4);
firstdate BOOLEAN := TRUE;
BEGIN
currdate := 'NA';
last_printed_date := currdate;
LOOP
FETCH p INTO this_rec;
EXIT WHEN p%NOTFOUND;

this_rec.text := LTRIM(this_rec.text);

-- check if this line contains a date stamp
testday := SUBSTR(this_rec.text,1,3);
IF testday = '201'
THEN
-- show dates as in the text version of the alert log
currdate := to_char(to_date(substr(this_rec.text,1,19),'YYYY-MM-DD HH24:MI:SS'),'Dy Mon DD hh24:mi:ss yyyy','NLS_DATE_LANGUAGE = AMERICAN');
ELSIF
testday = 'Sat'
OR testday = 'Sun'
OR testday = 'Mon'
OR testday = 'Tue'
OR testday = 'Wed'
OR testday = 'Thu'
OR testday = 'Fri'
THEN
currdate := this_rec.text;
END IF;

testerr := SUBSTR(this_rec.text,1,4);
IF testerr = 'ORA-'
OR testerr = 'TNS-'
THEN
IF last_printed_date != currdate
OR ( currdate != 'NA' AND firstdate )
THEN
last_printed_date := currdate;
firstdate := FALSE;
out_rec.text := '****';
PIPE ROW(out_rec);
out_rec.text := currdate;
PIPE ROW(out_rec);
out_rec.text := '****';
PIPE ROW(out_rec);
END IF;
out_rec.text := this_rec.text;
pipe ROW(out_rec);

END IF;
END LOOP;

CLOSE P;
RETURN;
END Get_Errors;
/

Function created.

SQL> CREATE OR REPLACE FORCE VIEW ALERT_XML_ERRORS
AS
SELECT "TEXT"
FROM TABLE (get_errors (CURSOR (SELECT * FROM alert_xml)));

View created.

And check out the errors now:

SQL> set pagesize 0
SQL> select * from alert_xml_errors;
****
Tue Aug 26 11:01:08 2014
****
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-27037: unable to obtain file status
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-1157 signalled during: ALTER DATABASE OPEN...
****
Tue Aug 26 11:12:51 2014
****
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-27037: unable to obtain file status
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-1157 signalled during: ALTER DATABASE OPEN...
****
Tue Aug 26 13:39:36 2014
****
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-27037: unable to obtain file status
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/u02/oradata/orcl/users01.dbf'
ORA-1157 signalled during: ALTER DATABASE OPEN...
****

< snip >

SQL>

And yes, the pipelined function will only work until 2020 on the xml version of the alert log (see if you can find the code line!), and yes, it should also work on the text version of the alert log, provided the external table describes the same way as alert_xml.

The post SQL> Select * From Alert_XML_Errors; appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/29/sql-select-alert_xml_errors/feed/ 0
SQL> Select * From Alert_XML; http://technology.amis.nl/2014/08/28/sql-select-alert_xml-preprocessing-adrci/ http://technology.amis.nl/2014/08/28/sql-select-alert_xml-preprocessing-adrci/#comments Thu, 28 Aug 2014 20:46:55 +0000 http://technology.amis.nl/?p=32069 By mapping an external table to some text file, you can view the file contents as if it were data in a database table. External tables are available since Oracle 9i Database, and from Oracle 11gR2 Database on, it is even possible to do some inline preprocessing on the file. The following example of this [...]

The post SQL> Select * From Alert_XML; appeared first on AMIS Technology Blog.

]]>
By mapping an external table to some text file, you can view the file contents as if it were data in a database table. External tables are available since Oracle 9i Database, and from Oracle 11gR2 Database on, it is even possible to do some inline preprocessing on the file.

The following example of this feature picks up the standard output of shell script "get_alert_xml.sh". It does not reference a real data file, but take notice of the fact that an empty "dummyfile" must still be present and readable by oracle. By pre-executing some ADRCI commands and redirecting their output to standard output, external table Alert_XML will show the last 7 days of entries of the xml version of the alert log.

su - oracle
. oraenv [ orcl ]

$ cd /u01/app/oracle/admin/scripts
$ touch dummyfile
$ echo '#!/bin/sh'                                                                     > get_alert_xml.sh
$ echo 'ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1'                          >> get_alert_xml.sh
$ echo 'DIAG_HOME=diag/rdbms/orcl/orcl'                                               >> get_alert_xml.sh
$ echo 'DAYS=\\"originating_timestamp > systimestamp-7\\"'                            >> get_alert_xml.sh
$ echo '$ORACLE_HOME/bin/adrci exec="set home $DIAG_HOME;show alert -p $DAYS -term;"' >> get_alert_xml.sh
$ chmod 744 get_alert_xml.sh
$ sqlplus / as sysdba
SQL> create directory exec_dir as '/u01/app/oracle/admin/scripts';
SQL> grant read,execute on directory exec_dir to harry;
SQL> connect harry/****
SQL> CREATE TABLE ALERT_XML ( TEXT VARCHAR2(400 CHAR) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY EXEC_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
PREPROCESSOR EXEC_DIR:'get_alert_xml.sh'
nobadfile
nodiscardfile
nologfile
)
LOCATION ('dummyfile')
)
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;
SQL> select * from alert_xml;

TEXT
--------------------------------------------------------------------------------

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
2014-08-26 10:21:19.018000 +02:00
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
2014-08-26 10:21:20.066000 +02:00

> snip <

SQL>

Check out Alert_XML_Errors here.

—–Add-on 5 September 2014——
If confronted with more than one database instance on the database server, you can either use one dummyfile for all instances and a different shell script per instance in order to set the correct DIAG_HOME, or use a different LOCATION file for each instance and reference it in a single shell script. I opt for the latter, because the LOCATION file only has to contain the instance name, so it is less code in total.

For instance, with LOCATION (‘orcl.txt’), and file orcl.txt just containing the instance name orcl, the following shell script code:

 
ORACLE_SID=`/bin/cat $1`         
ORACLE_DBS=`/usr/bin/expr $ORACLE_SID | /usr/bin/tr '[:lower:]' '[:upper:]' `
DIAG_HOME=diag/rdbms/$ORACLE_DBS/$ORACLE_SID

generates this DIAG_HOME: diag/rdbms/ORCL/orcl
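Creating such a LOCATION file is a one-liner (a sketch; the directory is the exec_dir used earlier, and the file name must of course match the LOCATION clause):

echo orcl > /u01/app/oracle/admin/scripts/orcl.txt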

The post SQL> Select * From Alert_XML; appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/28/sql-select-alert_xml-preprocessing-adrci/feed/ 3
Sqlnet tracing during nightly hours… http://technology.amis.nl/2014/08/26/sqlnet-tracing-nightly-hours/ http://technology.amis.nl/2014/08/26/sqlnet-tracing-nightly-hours/#comments Tue, 26 Aug 2014 13:47:25 +0000 http://technology.amis.nl/?p=32038 A TNS error at night… Sometime ago my data warehouse colleague came to me with a TNS error. At night times he runs his batch jobs in order to update his data warehouse. That night one of his jobs did not run properly and generated an ORA-12592 error. He had to restart this job during [...]

The post Sqlnet tracing during nightly hours… appeared first on AMIS Technology Blog.

]]>
A TNS error at night…

Some time ago my data warehouse colleague came to me with a TNS error. At night he runs batch jobs in order to update his data warehouse. That night one of his jobs did not run properly and generated an ORA-12592 error. He had to restart the job during the day.

It turned out this was not the only occurrence of the TNS error. A couple of days later he came to me again with similar TNS errors, generated at around the same time. I looked in the alert.log and in the listener.log but nothing could be found. Therefore I decided to switch on sqlnet tracing in order to find out what was happening. However, sqlnet tracing generates a lot of data and the TNS errors were generated at night. It is not a good idea to switch on sqlnet tracing during the day and only come back the next day to switch it off: you will probably run into disk full problems!

Therefore I decided to make some scripts. Using crontab or the Windows Task Scheduler I switch on sqlnet and listener tracing some time before the TNS error normally occurs and switch it off some time after. I would like to share with you the way I did it.

My configuration to trace.

We run an Oracle 11.2.0.4 database on an Oracle Linux 6 server. The client computer is a Windows server on which the data warehouse tools are installed and run; Oracle 11.2 client software is installed on that client as well.

How to switch on sqlnet tracing

I set sqlnet tracing at three levels: client level, server level and listener (also on the server) level. Sqlnet tracing on the client level can be switched on by setting parameters in the sqlnet.ora file on the client computer. On the server level you have to set parameters in the sqlnet.ora on the server. Setting parameters in the listener.ora file switches on listener tracing. These files can be found in the $ORACLE_HOME/network/admin directory.

Setting sqlnet tracing on the server:

On the server I copied the sqlnet.ora file to a file with the name sqlnet.ora.off. I made another copy of sqlnet.ora and gave it the name sqlnet.ora.on. Both files were put in the $ORACLE_HOME/network/admin directory, the same directory as for the original sqlnet.ora. I edited the sqlnet.ora.on file and added the following parameters to this file:

sqlnet.ora.on on the server:

TRACE_LEVEL_SERVER = 16
TRACE_FILE_SERVER = sqlnet_server.trc
TRACE_DIRECTORY_SERVER = /u03/network/trace
TRACE_UNIQUE_SERVER = ON
TRACE_TIMESTAMP_SERVER = ON

LOG_DIRECTORY_SERVER = /u03/network/log
LOG_FILE_SERVER = sqlnet_server.log

DIAG_ADR_ENABLED = OFF
ADR_BASE = /u01/app/oracle

This is not the place to explain the meaning of these parameters. For more information take a look at note id 219968.1 which can be found on the Oracle Support site or read the Oracle documentation for example: Oracle Database Net Services Administrator’s Guide, chapter 16.8: http://docs.oracle.com/cd/E11882_01/network.112/e41945/trouble.htm#r2c1-t57

However I would like to make some remarks:

TRACE_LEVEL_SERVER = 16
You can set the level of tracing with this parameter. I used the highest level. But it could be a good idea to start with a lower level for example 4 or 6. Higher levels produce more data and therefore more gigabytes.

TRACE_DIRECTORY_SERVER = /u03/network/trace
LOG_DIRECTORY_SERVER = /u03/network/log
I decided to use another mountpoint than the default in order to prevent disk full errors. There was more disk space on the /u03 mountpoint.

TRACE_UNIQUE_SERVER = ON
This causes Oracle to generate a unique trace file for every connection.

TRACE_TIMESTAMP_SERVER = ON
If you set this parameter then a timestamp in the form of [DD-MON-YY HH24:MI:SS] will be recorded for each operation traced by the trace file.

DIAG_ADR_ENABLED = OFF
ADR_BASE = /u01/app/oracle
You should set these two parameters if you are using version 11g or higher. If you use version 10g or lower then you should not add these parameters.

In my first version of the sqlnet.ora.on I also set the parameters:
# TRACE_FILELEN_SERVER = ….
# TRACE_FILENO_SERVER = ….
But it turned out that this was not a very good idea: huge amounts of files were generated. So I decided to throw them out.
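If disk space is still a concern, a small housekeeping command can be scheduled next to the tracing window. This is just a sketch: the trace directory comes from the configuration above, and the retention of three days is an assumption.

# remove sqlnet/listener trace files older than 3 days
find /u03/network/trace -type f -name '*.trc' -mtime +3 -delete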

Setting tracing on the listener:

I also made a copy of the listener.ora and named it listener.ora.off. I made another copy of this file and named it listener.ora.on. Also these files were put in the $ORACLE_HOME/network/admin directory. I edited the listener.ora.on and added the following parameters:

listener.ora.on on the server:

TRACE_LEVEL_LISTENER = 16
TRACE_FILE_LISTENER = listener.trc
TRACE_DIRECTORY_LISTENER = /u03/network/trace
TRACE_UNIQUE_LISTENER = ON
TRACE_TIMESTAMP_LISTENER = ON

LOG_DIRECTORY_LISTENER = /u03/network/log
LOGGING_LISTENER = ON
LOG_FILE_LISTENER = listener.log

DIAG_ADR_ENABLED_LISTENER = OFF
ADR_BASE_LISTENER = /u01/app/oracle

A remark:
If your listener has a name other than the default LISTENER, for example LSTNR, then you should use parameters such as TRACE_LEVEL_LSTNR, and so on.

Setting sqlnet tracing on the client:

Also on the client computer I made two copies of sqlnet.ora: sqlnet.ora.off and sqlnet.ora.on. I added the following parameters to the sqlnet.ora.on file:

sqlnet.ora.on on the client:

TRACE_LEVEL_CLIENT = 16
TRACE_FILE_CLIENT = sqlnet_client.trc
TRACE_DIRECTORY_CLIENT = C:\app\herman\product\11.2.0\client_1\network\trace
TRACE_UNIQUE_CLIENT = ON
TRACE_TIMESTAMP_CLIENT = ON

LOG_DIRECTORY_CLIENT = C:\app\herman\product\11.2.0\client_1\network\log
LOG_FILE_CLIENT = sqlnet_client.log

TNSPING.TRACE_DIRECTORY = C:\app\herman\product\11.2.0\client_1\network\trace
TNSPING.TRACE_LEVEL = ADMIN

DIAG_ADR_ENABLED = OFF
ADR_BASE = c:\app\herman

Scripts for switching on sqlnet tracing

Scripts on the server:

On the server I created the following two scripts: sqlnet_trace_on.sh and sqlnet_trace_off.sh

sqlnet_trace_on.sh:

#!/bin/bash
# ******************************************************************************
# Script Name : sqlnet_trace_on.sh
# Purpose : To switch on sqlnet tracing and listener tracing
# Created by : AMIS Services, Nieuwegein, The Netherlands
#
# Remarks : a set of sqlnet.ora.on, sqlnet.ora.off, listener.ora.on and
# listener.ora.off must be available in the
# OH/network/admin-directory
#
#——————————————————————————-
# Revision record
# Date Version Author Modification
# ———- —— —————– ———————————-
# 07-11-2013 1.0 Karin Kriebisch Created, listener tracing
# 06-05-2014 1.1 Herman Buitenhuis sqlnet tracing added
#
#******************************************************************************
#
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LISTENER_ORA_LOC=$ORACLE_HOME/network/admin
export LISTENER_TRACE_LOC=$ORACLE_HOME/network/log
export LOG=$LISTENER_TRACE_LOC/Listener_Trace_ON.log
#
echo — Initializing Logfile – Switching sqlnet/listener tracing ON — > $LOG
echo `date` >>$LOG
echo ================================================================ >>$LOG
echo >>$LOG
#
echo Copy listener.ora.on to listener.ora >>$LOG
#
cp $LISTENER_ORA_LOC/listener.ora.on $LISTENER_ORA_LOC/listener.ora >>$LOG
#
echo Copy sqlnet.ora.on to sqlnet.ora >>$LOG
#
cp $LISTENER_ORA_LOC/sqlnet.ora.on $LISTENER_ORA_LOC/sqlnet.ora >>$LOG
#
#
echo Restart LISTENER >>$LOG
$ORACLE_HOME/bin/lsnrctl stop >>$LOG
$ORACLE_HOME/bin/lsnrctl start >>$LOG
echo `date` >>$LOG
#
echo Check LISTENER status after 30 seconds >>$LOG
sleep 30
$ORACLE_HOME/bin/lsnrctl status >>$LOG
#
echo `date` >>$LOG
echo === sqlnet and listener Tracing switched ON === >>$LOG

sqlnet_trace_off.sh:

#!/bin/bash
# ******************************************************************************
# Script Name : sqlnet_trace_off.sh
# Purpose : To switch off sqlnet tracing and listener tracing
# Created by : AMIS Services, Nieuwegein, The Netherlands
#
# Remarks : a set of sqlnet.ora.on, sqlnet.ora.off, listener.ora.on and
# listener.ora.off must be available in the
# OH/network/admin-directory
#
#——————————————————————————-
# Revision record
# Date Version Author Modification
# ———- —— —————– ———————————-
# 07-11-2013 1.0 Karin Kriebisch Created, listener tracing
# 06-05-2014 1.1 Herman Buitenhuis sqlnet tracing added
#
#******************************************************************************
#
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export LISTENER_ORA_LOC=$ORACLE_HOME/network/admin
export LISTENER_TRACE_LOC=$ORACLE_HOME/network/log
export LOG=$LISTENER_TRACE_LOC/Listener_Trace_OFF.log
#
echo — Initializing Logfile – Switching sqlnet/listener tracing OFF — > $LOG
echo `date` >>$LOG
echo ================================================================ >>$LOG
echo >>$LOG
#
echo Copy listener.ora.off to listener.ora >>$LOG
#
cp $LISTENER_ORA_LOC/listener.ora.off $LISTENER_ORA_LOC/listener.ora >>$LOG
#
echo Copy sqlnet.ora.off to sqlnet.ora >>$LOG
#
cp $LISTENER_ORA_LOC/sqlnet.ora.off $LISTENER_ORA_LOC/sqlnet.ora >>$LOG
#
#
echo Restart LISTENER >>$LOG
$ORACLE_HOME/bin/lsnrctl stop >>$LOG
$ORACLE_HOME/bin/lsnrctl start >>$LOG
echo `date` >>$LOG
#
echo Check LISTENER status after 30 seconds >>$LOG
sleep 30
$ORACLE_HOME/bin/lsnrctl status >>$LOG
#
echo `date` >>$LOG
echo === Switched sqlnet/listener Tracing OFF === >>$LOG

Scripts on the windows client:

On the windows client I made the following two scripts: sqlnet_trace_on.cmd and sqlnet_trace_off.cmd.

sqlnet_trace_on.cmd:

REM Script Name: sqlnet_trace_on.cmd
REM Purpose : to switch on sqlnet tracing on the windows client
REM Created by : AMIS Services, Nieuwegein, The Netherlands
REM
REM Remarks : sqlnet.ora.on, sqlnet.ora.off must be available in the
REM OH/network/admin-directory
REM
REM Revision record
REM Date Version Author Modification
REM ———- —— —————– ———————————-
REM 06-05-2014 1.0 Herman Buitenhuis Creation, sqlnet tracing
REM

set ORACLE_HOME=C:\app\herman\product\11.2.0\client_1\
set SQLNET_ORA_LOC=%ORACLE_HOME%/network/admin
set SQLNET_TRACE_LOC=%ORACLE_HOME%/network/log
set LOG=%SQLNET_TRACE_LOC%/sqlnet_trace_on.log

echo — Initializing Logfile – Switching sqlnet tracing ON — > %LOG%

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%

@echo on

echo ================================================================ >>%LOG%
echo >>%LOG%
echo Copy sqlnet.ora.on to sqlnet.ora >>%LOG%

cd %SQLNET_ORA_LOC%
copy sqlnet.ora.on sqlnet.ora

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%
@echo on

echo === Switched sqlnet Tracing ON === >>%LOG%

sqlnet_trace_off.cmd:

REM Script Name: sqlnet_trace_off.cmd
REM Purpose : to switch off sqlnet tracing on the windows client
REM Created by : AMIS Services, Nieuwegein, The Netherlands
REM
REM Remarks : sqlnet.ora.on, sqlnet.ora.off must be available in the
REM OH/network/admin-directory
REM
REM Revision record
REM Date Version Author Modification
REM ———- —— —————– ———————————-
REM 06-05-2014 1.0 Herman Buitenhuis Creation, sqlnet tracing
REM

set ORACLE_HOME=C:\app\herman\product\11.2.0\client_1\
set SQLNET_ORA_LOC=%ORACLE_HOME%/network/admin
set SQLNET_TRACE_LOC=%ORACLE_HOME%/network/log
set LOG=%SQLNET_TRACE_LOC%/sqlnet_Trace_OFF.log

echo — Initializing Logfile – Switching sqlnet tracing OFF — > %LOG%

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%

@echo on
echo ================================================================ >>%LOG%
echo >>%LOG%
echo Copy sqlnet.ora.off to sqlnet.ora >>%LOG%

cd %SQLNET_ORA_LOC%
copy sqlnet.ora.off sqlnet.ora

@echo off
For /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
For /f "tokens=1-2 delims=/:" %%a in ('time /t') do (set mytime=%%a%%b)
echo %mydate%_%mytime% >>%LOG%
@echo on

echo === Switched sqlnet Tracing OFF === >>%LOG%

Switching on sqlnet tracing manually…

Using the scripts you can switch on and switch off sqlnet tracing.

On the server you switch on sqlnet and listener tracing by the following command:

./sqlnet_trace_on.sh

You can switch off tracing by:

./sqlnet_trace_off.sh

On the client you can run the scripts sqlnet_trace_on.cmd and sqlnet_trace_off.cmd. However there is an important thing to say: Because of windows security, you should run these scripts in a cmd box with “run as administrator”! If you don’t do that you get “Access is denied” errors.

Switching on sqlnet tracing automatically

Using crontab you can automatically switch sqlnet tracing on and off on the server. For example, if you want to switch on sqlnet tracing daily at 02:00 and switch it off at 03:00, you add (with "crontab -e") the following lines to the crontab file:

# switch on/off sqlnet/listener tracing
00 02 * * * /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet_trace_on.sh
00 03 * * * /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/sqlnet_trace_off.sh
#

On the Windows client you can use the Windows Task Scheduler to switch sqlnet tracing on and off. However, because of Windows security you can get access denied errors. In order to solve this I had to contact the Windows system administrator. He changed the security settings of the %ORACLE_HOME%/network/admin directory, and then it worked without any problems.

I switch on tracing on the client just before the listener is restarted on the server. So I scheduled the script sqlnet_trace_on.cmd at 01:55 and sqlnet_trace_off.cmd at 02:55.
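For reference, the same scheduling can also be scripted with schtasks instead of clicking through the Task Scheduler UI. This is only a sketch: the path to the .cmd files is an assumption, and the tasks must still run under an account with sufficient rights on the %ORACLE_HOME%\network\admin directory.

REM create the two daily tasks on the Windows client (script location is assumed)
schtasks /Create /TN "sqlnet_trace_on" /TR "C:\scripts\sqlnet_trace_on.cmd" /SC DAILY /ST 01:55
schtasks /Create /TN "sqlnet_trace_off" /TR "C:\scripts\sqlnet_trace_off.cmd" /SC DAILY /ST 02:55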

Using the above script and method I was able to do my sqlnet and listener tracing at night. And also sleep very well! :-)

I would like to thank my colleague Karin Kriebisch. She made the initial version of the script.

The post Sqlnet tracing during nightly hours… appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/26/sqlnet-tracing-nightly-hours/feed/ 1
SOA Suite 12c: Using Enterprise Scheduler Service to schedule deactivation and activation of inbound adapter bindings http://technology.amis.nl/2014/08/23/soa-suite-12c-using-enterprise-scheduler-service-to-schedule-deactivation-and-activation-of-inbound-adapter-bindings/ http://technology.amis.nl/2014/08/23/soa-suite-12c-using-enterprise-scheduler-service-to-schedule-deactivation-and-activation-of-inbound-adapter-bindings/#comments Sat, 23 Aug 2014 20:46:10 +0000 http://technology.amis.nl/?p=32027 The Enterprise Scheduler Service that is available in Fusion Middleware 12.1.3 supports a number of administration activities around the SOA Suite. We will look at one particular use case regarding environment management using the ESS. Suppose we have an inbound database adapter. Suppose we have created the PortalSlotRequestProcessor SOA composite that uses a database poller [...]

The post SOA Suite 12c: Using Enterprise Scheduler Service to schedule deactivation and activation of inbound adapter bindings appeared first on AMIS Technology Blog.

]]>
The Enterprise Scheduler Service that is available in Fusion Middleware 12.1.3 supports a number of administration activities around the SOA Suite. We will look at one particular use case regarding environment management using the ESS. Suppose we have an inbound database adapter. Suppose we have created the PortalSlotRequestProcessor SOA composite that uses a database poller looking for new records in a certain table PORTAL_SLOT_ALLOCATIONS (this example comes from the Oracle SOA Suite 12c Handbook, Oracle Press). The polling frequency was set to once every 20 seconds. And that polling goes on and on for as long as the SOA composite remains deployed and active.

Imagine the situation where every day during a certain period, there is a substantial load on the SOA Suite, and we would prefer to reduce the resource usage from non-crucial processes. Further suppose that the slot allocation requests arriving from the portal are considered not urgent, for example because the business service level agreed with our account managers is that these requests have to be processed within 24 hours – rather than once every 20 seconds. We do not want to create a big batch, and whenever we can, we strive to implement straight through processing. But between 1 and 2 AM on every day, we would like to pause the inbound database adapter.

In this section, we will use the Enterprise Scheduler Service to achieve this. We will create the schedules that trigger at 1 AM every day, used for deactivating the adapter, and 2 AM, used for activating the adapter. In fact, in order to make testing more fun, we will use schedules that trigger at 10 past the hour and 30 past the hour. These schedules are then associated in the Enterprise Manager Fusion Middleware Control with the inbound database adapter binding PortalSlotRequestPoller.

Create Schedules

An ESS Schedule is used to describe either one or a series of moments in time. A schedule can be associated with one or many Job definitions to describe when those jobs should be executed. A recurring schedule has a frequency that describes how the moments in time are distributed over time. A recurring schedule can have a start time and an end time to specify the period during which the recurrence should take place.

To create the schedules that will govern the inbound database adapter, open the EM FMW Control and select the node Scheduling Services | ESSAPP. From the dropdown list at the top of the page, select Job Requests | Define Schedules, as is shown in this figure.

 

image

Click on the icon to create a new schedule. Specify the name of the schedule as At10minPastTheHour. Set the display name to “10 minutes past each hour”. The schedule has to be created in the package [/oracle/apps/ess/custom/]soa. This is a requirement for schedules used for adapter activation.

Select the frequency as Hourly/Minute, Every 1 Hour(s) 0 Minute(s) and the start date as any date not too far in the future (or even in the past) with a time set to 10 minutes past any hour.

image

Note that using the button Customize Times, we can have a long list of moments in time generated and subsequently manually modify them if we have a need for some exceptions to the pattern.

Click on OK to save this schedule.

Create a second schedule called At30minPastTheHour. The definition is very similar to the previous one, except for the start time, which should be set to 30 minutes past the hour.

image

Click OK to save this schedule definition.

Note that more sophisticated recurrence schedules can be created through the Java API exposed by ESS as well as through the IDE support in JDeveloper. The options that allow specific week days or months to be included or excluded can currently not be set through the EM FMW Control.

Apply Schedules for Activation and Deactivation of Inbound Database Adapter

Select node SOA | soa-infra | default | PortalSlotRequestProcessor – the composite we created in the previous chapter. Under Services and References, click on the PortalSlotRequestPoller, the inbound database adapter binding.

clip_image002

The PortalSlotRequestProcessor appears. Click on the icon for adapter schedules.

image

In the Adapter Schedules popup that appears, we can select the schedule that is to be used for deactivating and for activating the adapter binding. Use the At10minPastTheHour schedule for deactivation and At30minPastTheHour for activation. Press Apply Schedules to confirm the new configuration.

clip_image003

From this moment on, the inbound database adapter binding that polls table PORTAL_SLOT_ALLOCATIONS is active only for 40 minutes during every hour, starting at 30 minutes past the hour.

For example, at 22:14, the binding is clearly not active.

image

 

Test switching off and on of Database Adapter binding

When the schedules for activation and deactivation have been applied, they are immediately in effect. You can test this in the Dashboard page for the inbound database adapter binding, as is shown here

clip_image002[5]

Here we see how a single record, inserted at 10:09 PM, was processed by the adapter binding. Four more records were inserted into table PORTAL_SLOT_ALLOCATIONS at 10:13 and 10:14. However, because the adapter binding is currently not active, these records have not yet been processed.

image

image

At 30 minutes past the hour – 10:30 in this case – the adapter becomes active again and starts processing the records it will then find in the table. Because the adapter was configured to pass just a single record to a SOA composite and not process more than two records in a single transaction, it will take two polling cycles to process the four records that were inserted between 10:10 and 10:30. These figures illustrate this.

image

image

clip_image004

 

The SOA composite instances that are created for these four records retrieved in two poll cycles:

image

and the flow trace for the instance at 10:30:09 looks like this – processing two separate database records:

image

image

When you check the ESS UI in EM FMW Control, you will find two new Job Definitions, generic Jobs for executing SOA Suite management actions:

ess_adapteractivation1

In the Job Requests overview, instances of these jobs appear, one of each every hour. The details of these job requests specify which adapter binding in which composite is the target of the SOA administrative action performed by the job.

ess_adapteractivation2

The post SOA Suite 12c: Using Enterprise Scheduler Service to schedule deactivation and activation of inbound adapter bindings appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/23/soa-suite-12c-using-enterprise-scheduler-service-to-schedule-deactivation-and-activation-of-inbound-adapter-bindings/feed/ 0
SOA Suite 12c: Invoke Enterprise Scheduler Service from a BPEL process to submit a job request http://technology.amis.nl/2014/08/23/soa-suite-12c-invoke-enterprise-scheduler-service-from-a-bpel-process-to-submit-a-job-request/ http://technology.amis.nl/2014/08/23/soa-suite-12c-invoke-enterprise-scheduler-service-from-a-bpel-process-to-submit-a-job-request/#comments Sat, 23 Aug 2014 10:50:19 +0000 http://technology.amis.nl/?p=31992 The Fusion Middleware 12.1.3 platform contains the ESS or Enterprise Scheduler Service. This service can be used as an asynchronous, schedule based job orchestrator. It can execute jobs that are Operating System jobs, Java calls (local Java or EJB), PL/SQL calls, and Web Service calls (synchronous, asynchronous and one-way) including SOA composite, Service Bus and [...]

The post SOA Suite 12c: Invoke Enterprise Scheduler Service from a BPEL process to submit a job request appeared first on AMIS Technology Blog.

]]>
The Fusion Middleware 12.1.3 platform contains the ESS or Enterprise Scheduler Service. This service can be used as an asynchronous, schedule based job orchestrator. It can execute jobs that are Operating System jobs, Java calls (local Java or EJB), PL/SQL calls, and Web Service calls (synchronous, asynchronous and one-way) including SOA composite, Service Bus and ADF BC web services.

Jobs and schedules can be defined from client applications through a  Java API or through the Enterprise Manager FMW Control user interface. Additionally, ESS exposes a web service through which (pre defined) jobs can be scheduled. This web service can be invoked from BPEL processes in SOA composites. In this article I will briefly demonstrate how to do the latter: submit a request to the Enterprise Scheduler Service to execute a job according to a specified schedule.

Because the job cannot be executed anonymously, the ESS Scheduler Service has an attached WSM policy to enforce credentials to be passed in. As a consequence, the SOA composite that invokes the service needs to have a WSM policy attached to the reference binding for the ESS Service in order to provide those required credentials. This article explains how to do that.

Steps:

  • Preparation: create an ESS Job Definition and a Schedule – in my example these are SendFlightUpdateNotification (which invokes a SOA composite to send an email) and Every5Minutes
  • Ensure that the ESS Scheduler Web Service has a WSM security policy attached to enforce authentication details to be provided (see description in this article: FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI)
  • Create a SOA composite application with a one way BPEL process exposed as a SOAP Web Service
  • Add a Schedule Job activity to the BPEL process and configure it to request the SendFlightUpdateNotification according to the Every5Minutes schedule; pass the input to the BPEL process as the application property for the job
  • Set a WSDL URL for a concrete WSDL – instead of the abstract one that is configured by default for the ESS Service
  • Attach a WSM security policy to the Reference Binding for the ESS Scheduler Web Service
  • Configure username and password as properties in composite.xml file – to provide the authentication details used by the policy and passed in security headers
  • Deploy and Test

 

Preparation: create an ESS Job Definition and a Schedule

in my example these are SendFlightUpdateNotification (which invokes a SOA composite to send an email)

image

and Every5Minutes

image

 

Ensure that the ESS Scheduler Web Service has a WSM security policy attached

to enforce authentication details to be provided (see description in this article: FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI)

image

Create a SOA composite application

with a one way BPEL process exposed as a SOAP Web Service

image

Add a Schedule Job activity to the BPEL process

image

and configure it to request the SendFlightUpdateNotification according to the Every5Minutes schedule;

image

image

Leave open the start time and end time (these are inherited now from the schedule)

SNAGHTML62b8333

Open the tab application properties.

SNAGHTML62bc65a
Here we can override the default values for Job application properties with values taken for example from the BPEL process instance variables:

image

SNAGHTML62ce36c

 

note: in order to select the Job and Schedule, you need to create a database MDS connection to the MDS partition with the ESS User Meta Data

SNAGHTML62abfb6

 

When you close the Schedule Job definition, you will probably see this warning:

image

Click OK to acknowledge the message. We will soon replace the WSDL URL on the reference binding to correct this problem.

The BPEL process now looks like this:

image

Set a concrete WSDL URL on the Reference Binding for the ESS Service

Get hold of the URL for the WSDL for the live ESS Web Service.

image

image

image

image

Then right click the ESS Service Reference Binding and select Edit from the menu. Set the WSDL URL in the field in the Update Reference dialog.

 

image

Attach a WSM security policy to the Reference Binding for the ESS Scheduler Web Service

Because the ESS Scheduler Web Service is protected by a WSM Security Policy, it requires callers to pass the appropriate WS Security Header. We can simply attach a WSM policy [of our own] to achieve that effect. We can even do so through EM FMW Control, in the run time environment, rather than right here at design time. But this time we will go for the design time, developer route.

Right click the EssService reference binding. Select Configure SOA WS Policies | For Request from the menu.

image

The dialog for configuring SOA WS Policies appears. Click on the plus icon for the Security category. From the list of security policies, select oracle/wss_username_token_client_policy. Then press OK.

image

The policy is attached to the reference binding.

SNAGHTML66e5071

Press OK again.

What we have configured at this point will cause the OWSM framework to intercept the call from our SOA composite to the EssService and inject WS Security headers into it. Or at least, that is what it would like to do. But the policy framework needs access to credentials to put in the WS Security header. The normal approach is for the policy framework to inspect the configured credential store for the username and password to use. The default credential store is called basic.credentials, but you can specify on the policy that it should use a different credential store. See this article for more details: http://biemond.blogspot.nl/2010/08/http-basic-authentication-with-soa.html .

There is a shortcut, however, that we will use here. Instead of using a credential store, our security policy can also simply use a username and password that are configured as properties on the reference binding to which the policy is attached. For the purpose of this article, that is far more convenient.

Click on the reference binding once more. Locate the section Composite Properties | Binding Properties in the properties palette, as shown here.

image

Click on the green plus icon to add a new property. Its name is oracle.webservices.auth.username and the value is for example weblogic. Then add a second property, called oracle.webservices.auth.password and set its value:

SNAGHTML6760e82

You will notice that these two properties are not displayed in the property palette. However annoying that is, it is not a problem: the properties are added to the composite.xml file all the same:

image
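Roughly, the added entries in composite.xml look like the snippet below. This is only a sketch: the reference name, port and location are placeholders, and the exact attributes that JDeveloper generates on the property elements may differ.

<reference name="EssService" ui:wsdlLocation="...">
  <binding.ws port="..." location="...">
    <wsp:PolicyReference URI="oracle/wss_username_token_client_policy" orawsp:category="security"/>
    <property name="oracle.webservices.auth.username" type="xs:string" many="false">weblogic</property>
    <property name="oracle.webservices.auth.password" type="xs:string" many="false">*****</property>
  </binding.ws>
</reference>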

Deploy and Test

The work is done. Time to deploy the SOA composite to the run time.

Then invoke the service it exposes:

image

Wait for the response

image

and inspect the audit trail:

image

When we drill down into the flow trace and inspect the BPEL audit details, we will find the response from the ESS service – that contains the request identifier:

image

At this point apparently a successful job request submission has taken place with ESS. Let’s check in the ESS console:

image

Job request 605 has spawned 606 that is currently waiting:

image

A little later, the job request 606 is executed:

image

We can inspect the flow trace that was the result of this job execution:

image

Note that there is no link with the original SOA composite that invoked the scheduler service to start the job that now results in this second SOA composite instance.

After making two calls to the SOA composite that calls the scheduler and waiting a little, the effects of a job that executes every five minutes (and that has been started twice) become visible:

image

The post SOA Suite 12c: Invoke Enterprise Scheduler Service from a BPEL process to submit a job request appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/23/soa-suite-12c-invoke-enterprise-scheduler-service-from-a-bpel-process-to-submit-a-job-request/feed/ 1
FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI http://technology.amis.nl/2014/08/23/fmw-12-1-3-invoking-enterprise-scheduler-service-web-services-from-soapui/ http://technology.amis.nl/2014/08/23/fmw-12-1-3-invoking-enterprise-scheduler-service-web-services-from-soapui/#comments Sat, 23 Aug 2014 07:03:01 +0000 http://technology.amis.nl/?p=31922 The Fusion Middleware 12.1.3 platform contains the ESS or Enterprise Scheduler Service. This service can be used as asynchronous, schedule based job orchestrator. It can execute jobs that are Operating System jobs, java calls (local Java or EJB), PL/SQL calls, and Web Service calls (synchronous, asynchronous and one-way) including SOA composite, Service Bus and ADF [...]

The post FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI appeared first on AMIS Technology Blog.

]]>
The Fusion Middleware 12.1.3 platform contains the ESS or Enterprise Scheduler Service. This service can be used as an asynchronous, schedule based job orchestrator. It can execute jobs that are Operating System jobs, Java calls (local Java or EJB), PL/SQL calls, and Web Service calls (synchronous, asynchronous and one-way) including SOA composite, Service Bus and ADF BC web services.

Jobs and schedules can be defined from client applications through a  Java API or through the Enterprise Manager FMW Control user interface. Additionally, ESS exposes a web service through which (pre defined) jobs can be scheduled. This web service can be invoked from BPEL processes in SOA composites – or from any component that knows how to invoke a SOAP Web Service.

In this article I will briefly demonstrate how to invoke the ESS Web Service from SoapUI. I will not describe how to create the Job Definition – I will assume two pre-existing Job Definitions: HelloWorld (of type PL/SQL job) and SendFlightUpdateNotification (of type SOA composite based one-way Web Service). Both Job Definitions contain application properties – parameters that can be set for every job instance and that are used in the job execution. When invoking the ESS Web Service to schedule a job, values for these properties can be passed in.

There is one tricky aspect with ESS: jobs cannot be run as anonymous users. So if ESS does not know who makes the request for scheduling a job, it will not perform the request. It returns an error such as

oracle.as.scheduler.RuntimeServiceAccessControlException: ESS-02002 User anonymous does not have sufficient privilege to perform operation submitRequest JobDefinition://oracle/apps/ess/custom/saibot/SendFlightUpdateNotification.

To ensure we do not run into this problem, we have to attach a WSM security policy to the ESS Web Service and pass a WS Security Header with valid username and password in our request. Then the job request is made in the context of a validated user and this problem goes away.

The steps to go through:

  • preparation: create Job definitions in ESS that subsequently can be requested for scheduled execution (not described in this article)
  • attach the WSM policy oracle/wss_username_token_service_policy to the ESS Web Service
  • retrieve the WSDL (address) for the ESS Web Service
  • create a new SoapUI project based on the WSDL
  • create a request for the submitRequest operation
    • add WS Addressing headers to request
    • add WS Security header to request
  • run request and check the results – the response and the newly scheduled/executed job

Attach the WSM policy to the ESS Web Service

In EM FMW Control, click on the node for the Scheduling Service | ESSAPP on the relevant managed server. From the dropdown menu on the right side of the page, select the option Web Services

image

You will be taken to the Web Service overview page. Click on the link for the SchedulerServiceImplPort.

image

This brings you to another overview page for the SchedulerServiceImplPort. Open the tab labeled WSM Policies:

image

Click on the icon labeled Attach/Detach. Now you find yourself on the page where policies can be attached to this Web Service (port binding). Find the security policy oracle/wss_username_token_service_policy in the list of available policies. Click on the Attach button to attach this policy to the ESS Web Service.

image

 

Click on OK to confirm this new policy attachment.

image

At this point, the ESS Scheduler Service can only be invoked by parties that provide a valid username and password. As a result, the Web Service's operations are executed in the context of a real user – just like job-related operations performed through the EM FMW Control UI for ESS, or actions from a client application through the Java API.

Retrieve the WSDL (address) for the ESS Web Service

Click on the link for the WSDL Document SchedulerServiceImplPort:

image

The WSDL opens. We can see from the WSDL that the WS Security policy has been added. We will need the URL for this WSDL document to create the SoapUI project.

image

 

Create a new SoapUI project

Open SoapUI and create a new project. Set the address of the WSDL document that you retrieved in the previous step as the initial WSDL in this new project:

SNAGHTML59dbff1

Edit the request for the submitRequest operation

The request to the submitRequest operation is the request that will cause a new Job Request to be created (and therefore a job to be executed, one or potentially many times). Open the request that was generated by SoapUI.

image

You need to provide the details for the predefined job that already exists in ESS, so ESS will know what to do in processing this request. In this example, I want to run the HelloWorld job from package /oracle/apps/ess/custom through the (out of the box installed) EssNativeHostingApp application. I also provide a value for the application property mytestIntProp:

image

All details have been provided in the request message itself. However, trying to submit this request will fail for two reasons: no security details (a WS Security header) are passed and no WS Addressing details are provided – and the ESS Web Service requires those as well.

image

Let’s add the security side of things.

In the request properties palette, provide the username and password for a valid user account; it is easiest to try this out with the administrator account, probably something like weblogic/weblogic1

image

Then, right click on the request message and click on the option Add WSS Username Token

image

Specify Password Text as the password type

SNAGHTML5abd94b

SoapUI will add the header to the message:

image

When you now try again to submit the request, you will receive a fault regarding a WS Addressing header:

image

This is easily remedied. Click on the WS A tab at the bottom of the request pane:

image

The WS Addressing header properties palette is shown. Ensure that the checkbox for enabling WS-A addressing is checked and also that the checkbox Randomly generate MessageId is checked:

 

image

 

Now you can submit the request once more. And this time it will succeed. The response message indicates a successful submission of the job request, and it provides an identifier for that request:

image

In the EM FMW Control pages for ESS, we can inspect all job requests and locate our number 409:

SNAGHTML5b0bcc7

We can drill down to find out more details about this job request and its execution:

image

Note the application property value that was passed in from SoapUI to override the default value specified in the Job definition.

Whatever the PL/SQL procedure is supposed to do, has been done by now.

Resources

Documentation on ESS Web Service: http://docs.oracle.com/middleware/1213/ess/ESSDG/webservice.htm.

The post FMW 12.1.3 – Invoking Enterprise Scheduler Service Web Services from SoapUI appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2014/08/23/fmw-12-1-3-invoking-enterprise-scheduler-service-web-services-from-soapui/feed/ 0
SOA Suite 12c: Configuring GMail as the inbound email provider for UMS (IMAP, SSL) http://technology.amis.nl/2014/08/17/soa-suite-12c-configuring-gmail-as-the-inbound-email-provider-for-ums-imap-ssl/ http://technology.amis.nl/2014/08/17/soa-suite-12c-configuring-gmail-as-the-inbound-email-provider-for-ums-imap-ssl/#comments Sun, 17 Aug 2014 15:06:48 +0000 http://technology.amis.nl/?p=31773 In a recent article, I discussed how to configure the SOA Suite 12c for sending emails using GMail: http://technology.amis.nl/2014/08/05/setup-gmail-as-mail-provider-for-soa-suite-12c-configure-smtp-certificate-in-trust-store/. An interesting aspect of that configuration is the loading of the GMail SSL certificate into the Keystore used by WebLogic, in order for the SSL based interaction with GMail to successfully be performed. The configuration of [...]

The post SOA Suite 12c: Configuring GMail as the inbound email provider for UMS (IMAP, SSL) appeared first on AMIS Technology Blog.

]]>
In a recent article, I discussed how to configure the SOA Suite 12c for sending emails using GMail: http://technology.amis.nl/2014/08/05/setup-gmail-as-mail-provider-for-soa-suite-12c-configure-smtp-certificate-in-trust-store/. An interesting aspect of that configuration is the loading of the GMail SSL certificate into the Keystore used by WebLogic, in order for the SSL based interaction with GMail to successfully be performed. The configuration of GMail for inbound interactions requires a similar procedure for the certificate for the imap.gmail.com server.

This article quickly presents the steps required for getting this inbound interaction going, from the expected error:

image

<Aug 17, 2014 3:50:22 PM CEST> <Error> <oracle.sdpinternal.messaging.driver.email.inbound.ImapEmailStore> <SDP-26123>
Could not initialize Email Store for: user saibot.airport@gmail.com, server imap.gmail.com, folder INBOX, sslEnabled true
javax.mail.MessagingException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target;
  nested exception is:
        javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
        at com.sun.mail.imap.IMAPStore.protocolConnect(IMAPStore.java:665)
        at javax.mail.Service.connect(Service.java:295)
        at javax.mail.Service.connect(Service.java:176)
        at oracle.sdpinternal.messaging.driver.email.inbound.ImapEmailStore.initStore(ImapEmailStore.java:159)
        at oracle.sdpinternal.messaging.driver.email.inbound.ImapEmailStore.initStore(ImapEmailStore.java:106)

to the final successful reception of an email:

image

 

Load Certificate into Keystore

The interaction between the UMS server and GMail's IMAP API takes place over SSL. That means that the WebLogic managed server on which the UMS service runs has to have the SSL certificate for the IMAP server loaded in its local keystore – in the exact same way that we needed to load the SMTP server's certificate in order to be able to send emails via GMail (http://technology.amis.nl/2014/08/05/setup-gmail-as-mail-provider-for-soa-suite-12c-configure-smtp-certificate-in-trust-store/).

The steps are a little familiar by now,  at least to me.

Download the certificate from Google and store it in a file. Depending on your operating system, this can be done in various ways. On Linux, here is a possible command:

openssl s_client -connect imap.gmail.com:993 > gmail-imap-cert.pem

image

The file gmail-imap-cert.pem should be created now. Note: this openssl action can take a long time or not even finish at all. You can end it after a few seconds (CTRL+C for example) because the important part is done very quickly and right at the beginning.

image

Open the file you retrieved with OpenSSL – gmail-imap-cert.pem in my case – in an editor (such as vi).

Remove all the lines before the line that says -----BEGIN CERTIFICATE----- (but leave this line itself!). Also remove all lines after the line with -----END CERTIFICATE----- but again, leave this line itself. Save the resulting file, for example as gmail-imap-certificate.txt (but you can pick any name you like).

SNAGHTML20f4d3f

image
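As an alternative to editing the file by hand, the certificate block can also be extracted in one go. A sketch, assuming the same imap.gmail.com endpoint as above (the /dev/null redirect also prevents the openssl session from lingering):

openssl s_client -connect imap.gmail.com:993 </dev/null 2>/dev/null | openssl x509 -outform PEM > gmail-imap-certificate.txt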

In a default installation, WebLogic (on which the SOA Suite is running) uses a special keystore. It does not use the cacerts store that is installed with the JDK or JRE but instead uses a file called DemoTrust.jks, typically located at $WL_HOME/server/lib/DemoTrust.jks. This trust store is "injected" into the JVM when the WebLogic domain is started: "-Djavax.net.ssl.trustStore=/opt/oracle/middleware12c/wlserver/server/lib/DemoTrust.jks". We have the option of removing this startup parameter (remove "-Djavax.net.ssl.trustStore=%WL_HOME%\server\lib\DemoTrust.jks" in setDomainEnv) and then adding the certificates to the default Java keystore (cacerts), or, the easier option, we can add the certificate to the DemoTrust keystore that WebLogic uses.
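To double-check which trust store your domain actually injects, you can grep the domain start scripts. A sketch, assuming a Linux domain home in $DOMAIN_HOME:

grep -rn "javax.net.ssl.trustStore" $DOMAIN_HOME/bin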

The command for doing this, looks as follows, in my environment at least:

/usr/java/latest/jre/bin/./keytool -import -alias imap.gmail.com -keystore /opt/oracle/middleware12c/wlserver/server/lib/DemoTrust.jks -file /var/log/weblogic/gmail-imap-certificate.txt

image

The default password for the keystore is DemoTrustKeyStorePassPhrase.

You will be asked explicitly whether you trust this certificate [and are certain about adding it to the keystore]. Obviously you will have to type y in order to confirm the addition to the keystore:

image

When done, we can check the contents of the keystore using this command:

/usr/java/latest/jre/bin/./keytool -list -keystore  /opt/oracle/middleware12c/wlserver/server/lib/DemoTrust.jks

SNAGHTML20d47e3

Next, you have to restart the WebLogic Managed Server – and perhaps the AdminServer as well (I am not entirely sure about that, but I did it anyway)

 

Email Driver Properties

In EM FMW Control, open the User Messaging Service node in the navigator and select the usermessagingdriver-email for the relevant managed server. From the context menu, select Email Driver Properties. When there is no configuration yet, you will create a new one. If you already configured the SOA Suite for outbound mail traffic, you can edit that configuration for the inbound direction.

 

image

In the property overview, there are some properties to set:

image

The Email Receiving protocol for GMail is IMAP. The Incoming Mail Server is imap.gmail.com. The port should be set to 993, and GMail wants to communicate over SSL, so the checkbox should be checked. The Incoming MailIDs are the email addresses that correspond to the names listed under Incoming User IDs. For GMail these can both be the full GMail email addresses, such as saibot.airport@gmail.com, an account created for the Oracle SOA Suite 12c Handbook that I am currently writing. There are several ways to configure the password. The least safe one is by selecting Use Cleartext Password and simply typing the password for the GMail account in the password field. The password is then stored somewhere on the WebLogic server in readable form.

Press the OK button at the top of the page to apply all configuration changes.

image

 

SOA composite application with Inbound UMS Adapter binding

I have created a very simple composite. The (really the only) interesting aspect is the Inbound UMS Adapter on the left. This adapter binding, when deployed, negotiates with the UMS services on the WebLogic platform to have the configured mailbox polled and to have an instance of this composite created for every mail that is received. Note that we could have configured message filters to only trigger this composite for specific senders or subjects.

image

The inbound UMS adapter is configured largely with default settings. Apart from the name (ReceiveEmail), the steps through the wizard are these:

SNAGHTML1f3fc90

SNAGHTML1f4916f

SNAGHTML1f4a58b

Specify which of the accounts configured on the UMS email driver is associated with this particular adapter binding (note: this means that the value provided here for the end-point has to be included in the Incoming MailIDs property set on the email driver).

SNAGHTML1f4badf

Let's process the mail content as a string – no attempt at native transformation. Note that many associated properties are available inside the SOA composite through the JCA header properties.
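
As far as I recall – treat the exact names as an assumption and check the article referenced a little further down for the complete list – these JCA header properties include entries such as:

jca.ums.from      (sender address)
jca.ums.to        (recipient addresses)
jca.ums.subject   (subject line)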

SNAGHTML1f69ae1

We do not need Message Filters for this simple test:

SNAGHTML1f78a9c

Nor do we need any custom Java callout to determine whether to process a message. See for example this article for details on the custom Java callout: http://technology.amis.nl/2013/04/07/soa-suite-definitive-guide-to-the-ums-adapter-11-1-1-7/

SNAGHTML1f7d246

Press Finish to complete the adapter configuration.
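
After finishing the wizard, the composite.xml of the project contains an exposed service with a JCA binding that points at the generated UMS JCA file. The snippet below is only an illustration of that structure – the interface namespace, file locations and port type name are made up here and will be generated differently per project:

<service name="ReceiveEmail" ui:wsdlLocation="WSDLs/ReceiveEmail.wsdl">
  <interface.wsdl interface="http://xmlns.oracle.com/pcbpel/adapter/ums/MailDemo/ReceiveEmail#wsdl.interface(Receive_ptt)"/>
  <binding.jca config="Adapters/ReceiveEmail_ums.jca"/>
</service>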

Deploy the SOA composite to the SOA Suite run time. Send an email to the address that is being polled by the inbound UMS adapter:

image

Wait for a little while (about 15 seconds on average, with our current settings). Then check the EM FMW Control for new instances of our composite:

image

And check the contents of the message processed by the Mediator:

image

and scroll:

image

Yes! We did it.

The post SOA Suite 12c: Configuring GMail as the inbound email provider for UMS (IMAP, SSL) appeared first on AMIS Technology Blog.

ADF DVT: Editor for easily creating custom base map definition files (hotspot editor) http://technology.amis.nl/2014/08/17/adf-dvt-editor-for-easily-creating-custom-base-map-definition-files-hotspot-editor/ http://technology.amis.nl/2014/08/17/adf-dvt-editor-for-easily-creating-custom-base-map-definition-files-hotspot-editor/#comments Sun, 17 Aug 2014 12:46:16 +0000 http://technology.amis.nl/?p=31722 Using a custom image as the base map for the ADF DVT Thematic Map component, such as is supported as of release 12.1.3, is very interesting. Visualization is extremely powerful for conveying complex aggregated information. Using maps to associate information  with particular locations – using shape, color, size as well – is very valuable. Being [...]

The post ADF DVT: Editor for easily creating custom base map definition files (hotspot editor) appeared first on AMIS Technology Blog.

Using a custom image as the base map for the ADF DVT Thematic Map component, as is supported as of release 12.1.3, is very interesting. Visualization is extremely powerful for conveying complex aggregated information. Using maps to associate information with particular locations – leveraging shape, color and size as well – is very valuable. Being able to use not only a geographical map but any image (with sensibly identifiable locations) is even better.

Creating the custom base map with the Thematic Map component is quite easy. See for example this article for a demonstration: http://technology.amis.nl/2014/08/17/adf-dvt-creating-a-thematic-map-using-a-custom-base-map-with-hotspots/. There really is only one inconvenience along the way: the creation of an XML file that describes the custom map (image) and the hotspots to associate markers with. That is not necessarily very hard to do, but it takes some time and effort and is error prone.

To overcome that (small) obstacle, I have created a simple tool – a custom base map file editor. It runs as an ADF Web application. An image file is uploaded to it, the image is displayed, and the user can click on all the hotspots on the image. Meanwhile, the XML file is composed.

Here is a visual example of the use of the tool:

Download an image that you want to use as a custom base map:

image

Run the custom base map editor tool. Upload the image to be used:

image

Click on the button Process Image.

image

The image is now displayed in the browser.

image

The user can click on the relevant locations on the image. The tool identifies the hotspots from the mouse clicks and creates the custom XML file in the code editor component.

image

You can edit the contents of the code editor, for example to provide the values for the longLabel attribute.

The contents of the code editor can be copied and pasted into the custom XML file. You will only have to change the file reference to point to the correct local directory.
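
For reference, the generated document is roughly of the following shape. Treat this purely as an illustration written down from memory – the element and attribute names to use are exactly the ones the tool produces in the code editor, and the Oracle documentation on custom base maps is the authoritative source:

<basemap id="myBaseMap">
  <layer id="myLayer">
    <image source="/path/to/your/image.png" width="1000" height="750"/>
  </layer>
  <points>
    <point name="hotspot1" x="120" y="245" longLabel="First hotspot"/>
  </points>
</basemap>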

Resources

You will find the sources for this tool on GitHub at: https://github.com/lucasjellema/adf_dvt_12_1_3_custom_basemap_hotspot-editor. The sources constitute a JDeveloper application. Two Java classes, a JSF file and a JavaScript library make up the custom base map editor.

image

Steps to get going:

  • Clone this repository or download the zip-file and expand locally
  • Open the CustomBaseMapEditor.jws file in JDeveloper 12.1.3 (or higher)
  • Identify a local directory that you will use for holding the image files
  • Configure that local image directory in class FileHandler (public final static String imageDirectory); see the code snippet after this list
  • Run custom-basemap-editor.jsf
  • When the page opens, upload an image and press the button Process File. The image is shown in the browser. Click on it to define hotspots; in the code editor you will find the custom base map XML required for the Thematic Map component
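
As an illustration of the configuration step in the list above, the change in class FileHandler boils down to something like this – the directory value is of course just an example for a local setup:

public class FileHandler {
    // local directory where uploaded image files are stored; point this at an
    // existing, writable directory on your own machine (example value only)
    public final static String imageDirectory = "/home/developer/custombasemap/images/";

    // ... rest of the class unchanged
}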

The post ADF DVT: Editor for easily creating custom base map definition files (hotspot editor) appeared first on AMIS Technology Blog.
