AMIS Oracle and Java Blog – Friends of Oracle and Java – https://technology.amis.nl

Searching Oracle Service Bus Pipeline Alert contents (18 April 2015)

There are several ways to monitor messages passing through the Service Bus; using pipeline alerts is one of them. Pipeline alerts can be searched in the Enterprise Manager based on several parameters, such as their summary or when they occurred. Usually an important part of the message payload is saved in the content of the alert. This content cannot be searched from the Enterprise Manager. In this post I provide an example of logging Service Bus request and response messages using pipeline alerts, and a means to search alert contents for a specific occurrence. The example was created in SOA Suite 12.1.3, but the script also works in SOA Suite 11.1.1.6.

Service Bus Pipeline Alerts

The Oracle Service Bus provides several monitoring mechanisms. These can be tweaked in the Enterprise Manager.

[screenshot: different ways to monitor]

In this example I'm going to use pipeline alerts. Where to find them in the Enterprise Manager is described at: https://technology.amis.nl/2014/06/27/soa-suite-12c-where-to-find-service-bus-pipeline-alerts-in-enterprise-manager-fusion-middleware-control/. I've created a small sample process called HelloWorld. This process can be called with a name and returns 'Hello name' as a response. The process has a single AlertDestination and two pipeline alerts: one for the request and one for the response. These pipeline alerts write the content of the header and body variables to the content field of the alert.

[screenshot: alert content configuration]

When I call this service with ‘Maarten’ and with ‘John’, I can see the created pipeline alerts in the Enterprise Manager.

[screenshot: pipeline alerts in the Enterprise Manager]

Next I want to find the requests done by 'Maarten'; I'm not interested in 'John'. I can search on the summary, but that only indicates the location in the pipeline where the alert occurred. I want to search the contents – or description, as it is called in the Enterprise Manager. Since clicking on every entry is not very time efficient, I want to use a script for that.

[screenshot: alert detail]

Search for pipeline alerts using WLST

At first I thought I could use a method like the one described at http://docs.oracle.com/cd/E21764_01/web.1111/e13701/store.htm#CNFGD275, in combination with the location of the file store which is used for the alerts: servers/[servername]/data/store/diagnostics. However, the dump of this file store was not readable enough for me, and this method required access to the file system of the application server. I decided to walk the WLST path.

The WLST script below lists the pipeline alerts that have 'Maarten' in the contents/description. The script works on Service Bus 11.1.1.6 and 12.1.3. You should of course replace the obvious variables such as username, password, url, servername and searchfor.

import datetime

#Conditionally import wlstModule only when script is executed with jython
if __name__ == '__main__':
    from wlstModule import *#@UnusedWildImport

print 'starting the script ....'
username = 'weblogic'
password = 'Welcome01'
url='t3://localhost:7101'
servername='DefaultServer'
searchfor='Maarten'

connect(username,password,url)

def get_children():
    return ls(returnMap='true')

domainRuntime()
cd('ServerRuntimes')
servers=get_children()

for server in servers:
    #print server
    cd(server)
    if server == servername:
        cd('WLDFRuntime/WLDFRuntime/WLDFAccessRuntime/Accessor/DataAccessRuntimes/CUSTOM/com.bea.wli.monitoring.pipeline.alert')
        end = cmo.getLatestAvailableTimestamp()
        start = cmo.getEarliestAvailableTimestamp()
        cursorname = cmo.openCursor(start,end,"")
        if cmo.hasMoreData(cursorname):
            records=cmo.fetch(cursorname)
            for record in records:
                #print record
                if searchfor in record[9]:
                    print datetime.datetime.fromtimestamp(record[1]/1000).strftime('%Y-%m-%d %H:%M:%S')+' : '+record[3]+' : '+record[13]
        cmo.closeCursor(cursorname)
    cd('..')

The output in my case looks like:

2015-04-18 12:59:21 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineRequest  
2015-04-18 12:59:21 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineResponse  
2015-04-18 13:18:39 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineRequest  
2015-04-18 13:18:39 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineResponse  

Now you can extend the script to provide more information, or to look up the relevant requests in the Enterprise Manager.
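For example – a minimal sketch, reusing the record layout from the script above (record[1] holds the timestamp, record[3] and record[13] the summary fields, record[9] the alert content) – the inner loop could be extended to count the matches and print a snippet of the matching alert content:

matchcount = 0
for record in records:
    if searchfor in record[9]:
        matchcount = matchcount + 1
        timestamp = datetime.datetime.fromtimestamp(record[1]/1000).strftime('%Y-%m-%d %H:%M:%S')
        print timestamp+' : '+record[3]+' : '+record[13]
        # also show the first 100 characters of the alert content itself
        print '    content snippet: '+record[9][:100]
print 'Total number of matching alerts: '+str(matchcount)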


Quality in service delivery is a matter of perception (17 April 2015)

Quality in service delivery is a matter of perception. It is about meeting expectations. The image of an organization, stories from references, personal exchanges of information with account managers and with the people involved in the actual work all play a role. These elements, combined with personal experiences, form a pattern of expectations – a pattern that is different for every stakeholder, just like the way the delivery of the service is experienced. On top of that, expectations are not always stated explicitly or made concrete. Some expectations are realistic, others are not. That makes the appreciation of service delivery a tricky phenomenon. It is not easy to do it 'right'.

What is good?

A tricky question. When, for example, is it pleasant to sit on a terrace? Hard indicators for that are difficult to give. When do you put up a parasol? The sun, the temperature, the time of year, the wind, the company, the clothes people are wearing and the time of day are all factors that play a part. It requires a feel for the situation and subtle attunement with the guests. And when that is done well, and the atmosphere is good, and the tables and the glasses are clean, and, and, and… then it is pleasant to sit on the terrace.

Results

As an IT service provider it is no longer enough these days to "simply" deploy skilled professionals. A customer wants to be helped in achieving his goals, his intended result. That requires more than just craftsmanship and commitment. To achieve results it is important to work effectively. And that is not only about your own work: it is about the result that is achieved in a chain. And for the chain to function optimally it is essential that every link not only functions well, but also gives feedback to the whole. What can be adjusted in other parts of the delivery process so that your own task can be performed more efficiently? That requires awareness of your surroundings, healthy curiosity and assertiveness.

Top-class sport

Creating a satisfied customer can be compared to top-class sport. A lot is expected of external specialists, and even more is expected of a leading service provider. It is no longer just about individual qualities. It is about team spirit, about knowing each other's limitations and strengths and anticipating, adapting and responding to them. Insight into 'the opponent', both before and during the match, and into the environmental factors at play is also crucial to really 'be in the game'.

Team sport

Meeting expectations together that have not always been made explicit – that demands a lot from a team. How long do I continue my individual action? When do I involve my teammates? How do I keep an overview? Who can help me? How do I put someone else in a better position? Team sport is learning, taking hits, analyzing, processing, collaborating, supporting each other, asking for help, showing understanding. But also: emotion. Everyone is in the game with his or her own ideas, expectations, worries, wishes, prejudices and frustrations. Factors that strongly influence the game but are often invisible. Sometimes understandable and explainable, sometimes incomprehensible and inexplicable. Sometimes you feel something coming, sometimes not at all. In our own team, but also in that of the opponent. This is where we can make the difference: do we act as individuals, as a group, or as a team?

A common goal

A good team has a common goal. Achieving the common goal should leave room for individual wishes: enjoyable work, challenging projects, personal development, knowledge sharing. There is something in it for everyone. But team members must accept mutual dependency in order to reach the goal, and be prepared to let the team's interest prevail over their own ideas. They must realize that the combination of qualities, knowledge and skills makes them stronger. That is the recipe for results. And it remains difficult, because to achieve it you sometimes have to leave your own comfort zone, step back from your own task and consider what, at that moment, your maximum contribution to the goal is.

The right intention

If everyone, besides their primary task, stays sharp and alert to the playing field and the surroundings, we see more and we see it sooner. We see reactions to actions. If we then also act on that, we can do more, we achieve more and we better meet the unspoken expectations. That requires continuous alignment with the environment and the stakeholders, explicit and implicit. That takes effort. That requires mutual understanding. That requires seeing the good intention in every action of your teammates. Sometimes actions may turn out unfortunate, sometimes you do not manage to get something done. But assume that all players in your team have the right intention, aimed at the common goal. Ask yourself why someone says something, what is behind it. Ask about it, ask follow-up questions, hold each other accountable, offer help, give tips and compliments.

Added value

If everyone does a little extra, I am certain that IT service providers can fulfil unspoken expectations. That our relations get what they need, not just what they ask for. I am certain that we create more when we do not only perform our own task, but also keep an eye on, and act on, what is happening around us. That is when you get synergy. It is not concrete, but it is observable. And that is what gives our customers a sense of quality. Difficult to describe and define, but very recognizable. And it is wonderful when we manage to realize that, for everyone involved.

My first experiences with ThreadLogic (16 April 2015)

A while ago I came into contact with ThreadLogic. Most of the people I talked to about it did not know the tool. This unfamiliarity with ThreadLogic made me decide to write this blog. I think that every WebLogic administrator should know ThreadLogic, and that it is also a very interesting tool for Fusion Middleware developers.

But let me start at the beginning. A while ago Michael Sahadat, a SOA/Integration Architect at Oracle, came over to help me solve a performance issue. He was using ThreadLogic and explained to me how it helped us, in the end, to detect the performance bottleneck. More on that later; first I will tell you about ThreadLogic.


What is ThreadLogic?

ThreadLogic is a Thread Dump Analysis (TDA) tool. Thread dump analysis is a key technique for performance tuning and troubleshooting of Java-based applications.

Most TDA tools don't classify the type of activity within a thread: should it be treated as normal, or does it deserve a closer look? Can a pattern or anti-pattern be applied to it? Are there possible optimizations? Are there any hot spots? Can threads be classified based on their execution cycles? ThreadLogic was created by the Oracle Fusion Middleware Architect Team (A-Team) to address these deficiencies.

Once a thread dump is parsed and the thread details are populated, each thread is analyzed against matching advisories and tagged appropriately. The threads are also associated with specific thread groups, based on functionality or thread group name.

The current version of ThreadLogic is V2.0.217. Since version 2.0.215 it contains support for SOA 12c.

How did ThreadLogic help us?

After creating two thread dumps in WebLogic, we loaded them into ThreadLogic. ThreadLogic immediately gave a warning: a bottleneck among threads.
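As a side note: thread dumps can be captured in many ways (kill -3, jstack, the WebLogic console), but WLST is convenient when you are already scripting. A minimal sketch, assuming the threadDump command signature threadDump(writeToFile, fileName, serverName) and the usual demo credentials and URL (adjust for your own environment):

import time

connect('weblogic', 'Welcome01', 't3://localhost:7001')
# capture two dumps a few seconds apart, so thread states can be compared
threadDump(true, 'threaddump1.txt', 'soa_server1')
time.sleep(10)
threadDump(true, 'threaddump2.txt', 'soa_server1')
disconnect()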

Opening the dump tree and selecting the Advisory Map shows a map with information about the health of the system under investigation. Each advisory has a health level indicating the severity of the issue found, a pattern, a name, a keyword and related advice. As you can see in the picture below, our system has a number of FATAL and WARNING issues.

[screenshot: Advisory Map]
When we selected Monitors in the tree, we saw one thread marked red; 49 other threads were waiting for this one. Something seemed to be wrong with the SchemaManager.

After searching on Oracle Support we came to the following known issue: SOA Server Hangs Waiting for WSDLManager or SchemaManager (Doc ID 1546325.1).

The solution for our performance issue turned out to be installing a missing patch.
Without going into all the details of ThreadLogic, I hope I have done my bit to increase the knowledge of ThreadLogic.

References:

ThreadLogic — Project Kenai – Java.net
Introducing ThreadLogic
ThreadLogic documentation page

Thursday, April 16th – Speedy Joe's – Using asynchronous interaction in Java EE to turn the world's slowest restaurant into a super performant place (16 April 2015)

On Thursday, April 16th, the Java SIG (Special Interest Group) of AMIS organizes a session (open to the public) about asynchronous interactions in Java EE (web) applications. Synchronous interactions, in the real world and in IT applications, can seriously hold things up. Synchronous means waiting – and holding on to resources. That can be a problem for scalability and performance. In this session you will learn to tackle this problem.

This SIG is interesting for developers with Java knowledge and experience (JavaScript and PL/SQL). What will you know and be able to do after this SIG? After the session you will have insight into the positive effects of asynchronous interaction patterns; knowledge of, and some hands-on experience with, implementing asynchronous interactions between browser and middle tier (AJAX, WebSockets), within the middle tier (WebSockets, JMS, CDI events, Timer EJB) and between middle tier and database (background jobs, DB QRCN, HTTP calls); and possibly a new element in your toolbox for application design and implementation.

We will get to work with modern mechanisms – in the client (browser), in the middle tier (Java), in the database and in the links between the tiers – to arrive at asynchronous interactions. Think of AJAX, WebSockets, Web Workers, Java EE technology such as JMS, EJB (MDB, asynchronous EJB, Timer EJB), CDI (events) and JDBC, and database options (jobs). The Speedy Joe's web application is used as an example of a traditionally synchronous approach that is transformed into an application that is asynchronous in all layers. You will learn how the mechanisms mentioned can be used and how they work together. In a short time a large number of fundamental facilities of browsers, Java EE and the Oracle Database will pass in review – first in a demo, and then you will get to work yourself.

The restaurant Speedy Joe's and the accompanying Java EE web application run more or less in parallel: both have multiple tiers, both hand elements over between the tiers, and both have to use their resources efficiently.


This session is inspired by the earlier OTN article (http://www.oracle.com/technetwork/articles/soa/jellema-async-processing-2164889.html) and the presentation at JFall 2014 (http://www.nljug.org/jfall/session/speedy-perception-trumps-speedy-reception-smart-as/137/). The sources that are discussed are also available on GitHub: https://github.com/lucasjellema/DinnerAtSpeedyJoes.

Bring your laptop

We will provide a VirtualBox image. Make sure there is enough free disk space on your laptop (about 20 GB).

 

You can register for this session at: http://www.amis.nl/nl-NL/evenementen/java-sig

Some tips on creating, editing, manipulating and merging MP4 video files (13 April 2015)

Over the last few days, I have spent quite some time creating several videos demonstrating the use of technology. Creating these videos took a lot of time – especially the final step: merging five MP4 files together. This article provides some very brief pointers.

First of all, I have used SnagIt to create the initial screen-cams. Recent releases of SnagIt (note: this is not a free tool, although it is good value for its money) support not just the creation of screenshots (its initial purpose) but shooting screencam-videos as well. The videos can be saved in MP4 format – video and audio.

Next, I have used Microsoft MovieMaker – a free tool from the Windows Essentials tool set (http://windows.microsoft.com/en-us/windows-live/movie-maker) – for editing the video files. Especially trimming and splitting (to remove sections) have proven quite useful. I saved the edited video files as MP4 from MovieMaker.


At this point, it seemed that I ran into a little degradation of video quality: when I next tried to merge the five MP4 files that I had created using MovieMaker, the video quality was no longer acceptable.

At this point, the third and final (free) tool enters the picture: My MP4Box GUI, a graphical user interface for the command-line tool MP4Box. It is simple, and it can join MP4 video files quite smoothly – fast, no frills. Make sure that all MP4 files have the same aspect ratio and the same frame rate. Download My MP4Box GUI from http://www.videohelp.com/software/My-MP4Box-GUI. (A scripted alternative is sketched below.)
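If you prefer scripting the final merge over using the GUI, the same join can be driven from a few lines of Python – a minimal sketch, assuming MP4Box is on the path and using hypothetical file names (the GUI drives MP4Box underneath; its -cat option concatenates input files):

import subprocess

# hypothetical input files; all should share the same aspect ratio and frame rate
parts = ['part1.mp4', 'part2.mp4', 'part3.mp4', 'part4.mp4', 'part5.mp4']

command = ['MP4Box']
for part in parts:
    command += ['-cat', part]
command.append('merged.mp4')

subprocess.check_call(command)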

[screenshot of My MP4Box GUI]

Combine version control (SVN) and issue management (JIRA) to improve traceability (12 April 2015)

Version control and bug tracking systems are found in almost every software development project. Both contain information on release content. In version control, it is usual (and a best practice) to supply an issue number when code is checked in. Version control also allows identifying the code that is in a release (by looking at release branches). Issue management allows adding metadata to issues, such as the fix release and test status. The latter is usually what release management thinks is in a release.

In this article I will provide a simple example of how you can quickly add value to your software project by improving traceability from code to release. This is done by combining the information from version control (SVN) and issue management (JIRA) to generate release notes, and by enforcing some version control rules.

To allow this to work, certain rules need to be adhered to:

  • code is committed using a commit message or tag which allows linking of code to issue or change
  • it should be possible to identify the code which is part of a release from version control
  • the bug tracking system should allow a selection of issues per release

[screenshot: generated release notes]

Version control: link code to function

In this example I'll talk about Subversion. Git supports a similar mechanism of commit hooks (a sketch is shown below). SVN can easily be installed and a repository created as described at: http://www.civicactions.com/blog/2010/may/25/how_set_svn_repository_7_simple_steps
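For readers on Git – an aside, not part of the setup described here – the equivalent is a commit-msg hook: Git passes the path of a file holding the commit message as the first argument, and a non-zero exit code aborts the commit. A hedged sketch in Python, saved as .git/hooks/commit-msg and made executable:

#!/usr/bin/env python
import re
import sys

# Git passes the commit message in a temporary file as the first argument
message = open(sys.argv[1]).read()

# same convention as the SVN hook shown below: an issue key such as [ABC-123]
if not re.search(r'\[([A-Z_0-9]+-[0-9]+)\]', message):
    sys.stderr.write('No JIRA issue specified. Commit aborted!\n')
    sys.exit(1)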

First you need to make sure you can link your code to your functionality. This is easily done with commit messages. In a small team you can quickly agree on a standard and use that. When the team grows larger and more distributed, enforcing standards becomes more of a challenge. SVN provides pre-commit hooks which can require a certain format in the commit message. This avoids deviations from the agreed standard and makes it easier to extract (reliable) information from version control commit messages.

After creation of this repository, there will be a 'hooks' folder underneath the specified directory. Templates for hooks are provided there. Those are shell scripts, however, and I prefer Perl for this. Mind that the pre-commit hook script (even if it is a Perl file) must be executable!

In the script below I check for the format of a JIRA issue key. You can also look at: http://stackoverflow.com/questions/10499098/restricting-subversion-commits-if-the-jira-issue-key-is-not-in-the-commit-messag, which allows commits to be prevented by checking JIRA directly. If you want to allow check-ins that specify a JIRA ID without checking JIRA itself, you can use the example below. It also checks the directory (myproject, directly under the repository root); usually multiple projects share the same repository and you don't want to bother everyone with your beautiful commit standards.

#!/usr/bin/perl -w  
 use strict;  
 my $repos  = $ARGV[0];  
 my $txn   = $ARGV[1];  
 my $svnlook = '/usr/bin/svnlook';  
 my $require = '\[([A-Z_0-9]+-[0-9]+)\]';
 my $checklog = "N";  
 foreach my $line (`$svnlook changed -t "$txn" "$repos"`)  
 {  
     chomp($line);  
     if ($line !~ /^\s*(A|D|U|UU|_U)\s*(.+)$/)  
     {  
         die "!!Script Error!! Can't parse line: $line\n";  
     } else {  
         if ($2 =~ /^myproject.*$/)  
         {  
             $checklog = "Y";  
         }  
     }  
 }  
 if ($checklog ne "N")  
 {    my $log = `$svnlook log -t $txn $repos`;  
     if ($log =~ /$require/) {  
         exit 0;  
     } else {  
         die "No JIRA issue specified. Commit aborted!\n";  
     }  
 }

Testing the hook:

[maarten@localhost trunk]$ svn commit -m'Please kick me'
Adding trunk/test.txt
Transmitting file data .
svn: Commit failed (details follow):
svn: Commit blocked by pre-commit hook (exit code 255) with output:
No JIRA issue specified. Commit aborted!

[maarten@localhost trunk]$ svn commit -m'[ABC-1]: Nice commit message'
Adding trunk/test.txt
Transmitting file data .
Committed revision 386327.

Extract issue numbers

From JIRA

The JIRA API can be used to extract issues using a selection; your selection might differ. Below is just an example giving me the issues of project ABC assigned to user smeetsm, with password "password". It is a nice illustration of how simple the JIRA API is, and of how to extract specific information from a JSON string on the command line. You can see the same regular expression as the one used in the pre-commit hook.

/usr/bin/curl -u smeetsm:password 'http://jira/rest/api/2/search?jql=project=ABC%20and%20assignee=smeetsm' | grep -Eho '"key":"([A-Z_0-9]+-[0-9]+)"'

This command can have output like:

"key":"ABC-1"
"key":"ABC-2"
"key":"ABC-3"

If you pipe this to a file (issues.txt), you can easily convert it to XML using something like:

echo "<issues>"; cat issues.txt | sed 's/"key":"\(.*\)"/<issue>\1<\/issue>/'; echo "</issues>"

This will have as output:

<issues>
<issue>ABC-1</issue>
<issue>ABC-2</issue>
<issue>ABC-3</issue>
</issues>

I chose this method of converting the JSON to XML since I wanted minimal overhead in my process (quick, easy, as few external dependencies as possible).
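For comparison – an assumption on my part, not part of the original process – the same extraction can also be done without grep and sed, using only the Python standard library:

import base64
import json
import urllib2

# same hypothetical JIRA host and credentials as in the curl example above
request = urllib2.Request('http://jira/rest/api/2/search?jql=project=ABC%20and%20assignee=smeetsm')
request.add_header('Authorization', 'Basic ' + base64.b64encode('smeetsm:password'))

result = json.load(urllib2.urlopen(request))
print '<issues>'
for issue in result['issues']:
    print '<issue>' + issue['key'] + '</issue>'
print '</issues>'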

From SVN

You can use the following Python script to parse the SVN log and get the issues checked in from there. The script itself requires only Python 2.7; the transformation script shown later also requires the lxml library. The lxml library (+ installer) can be downloaded at https://pypi.python.org/pypi/lxml/, or you can install it using the Python package manager pip (supplied with Python 2.7.9+).

 import xml.etree.ElementTree as ET  
 import re  
 import subprocess  
 def getsvnlog():  
     p = subprocess.Popen(['/usr/bin/svn','log','--verbose','--xml','-r','{2015-03-09}:HEAD','file:///home/maarten/myrepository/myproject'],stdout=subprocess.PIPE, stderr=subprocess.PIPE)  
     out, err = p.communicate()  
     return out  
 def getfilesfromlogentry(logitem):  
     result=[]  
     for path in logitem.findall("./paths/path[@kind='file']"):  
         result.append(path.text)  
     return result  
 def getfieldfromlogentry(logitem,fieldname):  
     result=[]  
     for item in logitem.findall("./"+fieldname):  
         result.append(item.text)  
     return result  
 def parse_svnlog():  
     svnlog = getsvnlog()  
     root = ET.fromstring(svnlog)  
     print "<issues>"  
     for logitem in root.findall("./logentry"):  
         for msg in getfieldfromlogentry(logitem,"msg"):  
             p = re.compile('([A-Z_0-9]+-[0-9]+)')
             iterator = p.finditer(msg)  
             for match in iterator:  
                 print "<issue>"+ msg[match.start():match.end()]+"</issue>"  
     print "</issues>"  
     return root  
 parse_svnlog()

I have specified a duration between 2015-03-09 and now (HEAD) to identify the release. Identifying a release is usually done by looking at a release branch but the method is similar. You can again see the same regular expression which has been used in the pre-commit hook and in the JIRA API call.

This will yield a result like:

<issues>
<issue>ABC-1</issue>
<issue>ABC-3</issue>
<issue>ABC-4</issue>
</issues>

Generate release notes

Once you have issue numbers from version control and from issue management, you can do interesting things such as generating release notes or a report. By comparing the issues from version control and issue management, you can draw useful conclusions.

If for example you have the following issues from SVN (issuessvn.xml):

<issues>
<issue>ABC-1</issue>
<issue>ABC-3</issue>
<issue>ABC-4</issue>
</issues>

And the following from JIRA (issuesjira.xml):

<issues>
<issue>ABC-1</issue>
<issue>ABC-2</issue>
<issue>ABC-3</issue>
</issues>

You’ll notice ABC-4 is only present in SVN and ABC-2 is only present in JIRA. Why is that? Has the developer checked in code he was not supposed to? Has the developer checked in the code in the correct release branch? Is the JIRA issue status correct? It is something which should be investigated and corrected.

You can use the following Python script, combined with the following XSLT, to produce output. The layout and contents of the release notes are of course greatly simplified; these are usually very customer-specific.

transform.py

import lxml.etree as ET
dom1 = ET.Element("dummy")
xslt = ET.parse("transform.xsl")
transform = ET.XSLT(xslt)
print(ET.tostring(transform(dom1), pretty_print=True))

The XSLT shows how you can load XML files and use a reusable template call to compare the results.

transform.xsl

<?xml version="1.0"?>  
 <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">  
 <xsl:variable name="issues1" select="document('issuessvn.xml')"/>  
 <xsl:variable name="issues2" select="document('issuesjira.xml')"/>  
 <xsl:template match="/">  
 <html>  
 <body>  
 <xsl:for-each select="$issues1/issues/issue">  
  <xsl:call-template name="getIssue">  
  <xsl:with-param name="search" select="."/>  
  <xsl:with-param name="content" select="$issues2"/>  
  <xsl:with-param name="ident1" select="'SVN'"/>  
  <xsl:with-param name="ident2" select="'JIRA'"/>  
  <xsl:with-param name="showfound" select="true()"/>  
  <xsl:with-param name="shownotfound" select="true()"/>  
  </xsl:call-template>  
 </xsl:for-each>  
 <xsl:for-each select="$issues2/issues/issue">  
  <xsl:call-template name="getIssue">  
  <xsl:with-param name="search" select="."/>  
  <xsl:with-param name="content" select="$issues1"/>  
  <xsl:with-param name="ident1" select="'JIRA'"/>  
  <xsl:with-param name="ident2" select="'SVN'"/>  
  <xsl:with-param name="showfound" select="false()"/>  
  <xsl:with-param name="shownotfound" select="true()"/>  
  </xsl:call-template>  
 </xsl:for-each>  
 </body>  
 </html>  
 </xsl:template>  
 <xsl:template name="getIssue">  
 <xsl:param name="search"/>  
 <xsl:param name="content"/>  
 <xsl:param name="ident1"/>  
 <xsl:param name="ident2"/>  
 <xsl:param name="showfound"/>  
 <xsl:param name="shownotfound"/>  
 <xsl:choose>  
 <xsl:when test="$content/issues/issue[text()=$search]">  
  <xsl:if test="$showfound">  
  <p>Issue <xsl:value-of select="$search"/> found in <xsl:value-of select="$ident1"/> and <xsl:value-of select="$ident2"/></p>  
  </xsl:if>  
 </xsl:when>  
 <xsl:otherwise>  
  <xsl:if test="$shownotfound">  
  <p>Issue <xsl:value-of select="$search"/> found in <xsl:value-of select="$ident1"/> but not in <xsl:value-of select="$ident2"/></p>   
  </xsl:if>  
 </xsl:otherwise>  
 </xsl:choose>  
 </xsl:template>  
 </xsl:stylesheet>

Finally

Take a look at the sample generated release notes below – of course a very simple sample, focusing only on version control and issue management.

Issue ABC-1 found in SVN and JIRA
Issue ABC-3 found in SVN and JIRA
Issue ABC-4 found in SVN but not in JIRA
Issue ABC-2 found in JIRA but not in SVN

You now have a means to check whether the developer was allowed to check the code into version control and what the status of the change/bug was. You are now also able to identify which parts of other issues might also be part of the release (by accident?). If you allow developers to indicate which issues are part of the release, they will most likely not be 100% accurate (release content described with developer bias). If you automate this, you can at least be more accurate. Because you check version control against issue management, you also have a means to make the issue management information more accurate: maybe someone forgot to update the issue status or entered an incorrect fix release. Both improve traceability from code to release.

Small note

You can do many things with version control hooks. You can do code compliance checks, check character sets, check filename conventions. All of these will help improve code quality. You can provide people with all kinds of interesting reports from version control and issue management about developer productivity and the quality of work they provide. Be careful with this and keep the custom scripts small and maintainable (unless of course you want to stay there forever).

Demonstration of Oracle Stream Explorer for live device monitoring – collect, filter, aggregate, pattern match, enrich and publish (12 April 2015)

This article describes a use case for Oracle Stream Explorer – Oracle’s business user friendly interface on top of OEP – Oracle Event Processor. We assume a large number of devices – such as printers, copiers, sensors, detectors, coffee machines – spread across the globe – and the cloud.


All devices continuously report their status, by sending a message every other second that contains their device identifier, a code that can indicate a healthy status or an error, and some additional details. The sheer number of devices, combined with the continuous stream of reports they send in, sets the parameters within which we have to implement fast and effective monitoring. Our specific challenge is: "whenever a device reports an error code three times within 10 seconds, we consider that device broken, and action should be taken" (which also means that we do not spring into action on the first or even the second fault report from a device). Additionally, we only require a single action for a broken device – once the action is initiated, we do not have to start an action again for that same device, unless of course it is broken again at a much later point in time.


The concrete implementation described in this article looks as follows:

[screenshot]

For the sake of a simple demonstration, we read device message reports from a CSV file instead of a live stream such as a JMS destination or an HTTP channel. Note that the Stream Explorer implementation would be exactly the same for these other stream types. Stream Explorer processes the device signals. For signals that satisfy the requirements of a broken device, the information is enriched from a database with device details – such as the physical location of the device – and finally an EDN event is composed and published. This event is consumed by a SOA Composite application in the SOA Suite 12c environment. This composite can do virtually anything, including assigning a task, starting a BPM process or sending an email.
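As an aside – the exact CSV layout is not spelled out in this article, so the following generator is purely hypothetical (assumed columns: device id, status code, detail) – a test file with one frequently failing device could be produced like this:

import csv
import random

with open('devicesignals.csv', 'wb') as f:
    writer = csv.writer(f)
    for i in range(500):
        device = 'device-' + str(random.randint(1, 50))
        # device-13 misbehaves often; the other devices are mostly healthy
        if device == 'device-13' or random.random() < 0.05:
            writer.writerow([device, 'ERR-42', 'paper jam'])
        else:
            writer.writerow([device, 'OK', ''])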

The implementation described in this article is also demonstrated in a video on YouTube:

Demo of Stream Explorer in action for monitoring devices

Implementing the Stream Explorer application

With Stream Explorer, everything starts from a Stream: a source of events or messages, such as a JMS queue or topic, the SOA Suite Event Delivery Network, an HTTP channel or – as in this case – a CSV file. Through one or more Explorations – each of which can do filtering, aggregation, enrichment and pattern matching – conclusions can finally be published to a target. Targets can be JMS destinations, HTTP channels, a CSV file and the Event Delivery Network of SOA Suite.

In a number of steps, we will go from the CSV file with device signals to the EDN events that represent broken devices.


The first step will be an exploration that filters the non-ok signals from the stream:

[screenshot]

The second step will find failing devices by counting the number of non-ok signals in a 10 second period and filtering on any device with a count greater than or equal to 3:

[screenshot]

Next, to prevent any failing device from being reported more than once (in a certain period of time) we perform deduplication, using one of the special patterns shipped out of the box in Stream Explorer:

[screenshot]

The remaining messages report a unique failing device and we need to enrich those messages with details about the device location, taken from a Reference defined for a database table:

[screenshot]

The enriched messages are routed to a target: an EDN event that is published to the SOA Suite runtime at a configured address:

[screenshot]

The next sections show – just as the video does – how these explorations are created in Stream Explorer.

Open Stream Explorer and log in


Open the Catalog.

DeviceSignals Stream and NonOkSignals Exploration

Create a new Stream, of type CSV.


Press Next. Select the CSV file:

[screenshot]

You may want to briefly inspect this file:

[screenshot]

Back in the wizard, refine the Data Shape definition and give it a name:

[screenshot]

Press Create to complete the Stream definition.

Next, the Exploration wizard is started. Set the name of the Exploration to NonOkSignals:

[screenshot]

Press the Create button.

The Exploration editor is shown, and the first messages read from the CSV file are produced in the exploration:

[screenshot]

Define a filter for the exploration, to only produce messages for non-OK error codes:

[screenshot]

Publish the exploration and return to the catalog.

Exploration for Failing Devices

Create a new Exploration. Call it FaultyDevice and use NonOkSignals as the source.


Press Create to navigate to the editor for the exploration.

Specify a summary: count DeviceId, grouped by DeviceId. Next, add a filter to produce results only when we have counted 3 or more NonOkSignals for a DeviceId. Finally, specify a time window over which to calculate the count: set the range of the window to 20 seconds and the evaluation frequency to once every 3 seconds.


Publish the exploration. Note that if you wait for some time, you will see devices being reported in the Live Output Stream section of this page:

[screenshot]

Return to the Catalog.

Deduplicate Devices

Create a Pattern – an exploration based on a predefined pattern. Pick the Eliminate Duplicates pattern.


Select FaultyDevice as the source for this pattern exploration. Select DeviceId as the key to determine uniqueness (eliminating the second and subsequent events from the FaultyDevice stream). Set the window to 1 minute. This means that any subsequent events for a device are suppressed for 1 minute after the initial event for the device. After that minute, the slate is cleared for that device.

 

Publish this exploration too.

Return to the catalog.

Enrich the FailingDeviceEvents and Publish them as EDN Events

Create a Reference:

[screenshot]

Note: before doing this, a Data Source to the schema that contains the database table should have been set up in OEP – either through the WLEvent Visualizer or directly in the OEP configuration file:

[screenshot]

The table used as a reference is shown here:

[screenshot]

Define the Reference – set its name and its type:

[screenshot]

Then select the table that provides the data for this reference:

[screenshot]

And press Create.

Now create a new exploration – our last one in this article:

[screenshot]

Set its name and select Exploration4 – the name auto-assigned to the Eliminate Duplicates pattern-based exploration – as the source:

[screenshot]

Add the Reference DeviceDetails as a second source for this exploration:

[screenshot]

Then specify the correlation condition:

[screenshot]

This completes the logic for the Exploration. We do need to add a target to it – to make the outcomes from this Exploration produce EDN events.

Click on Configure Target. Then select EDN as the target type in the wizard:

[screenshot]

Type in the connection details for the SOA Suite managed server. The EDN event definitions will be loaded.


Note: at this stage, there seems to be an issue when the EDL files import XSD definitions that themselves import XSD definitions. Additionally, it seems that EDN events with nested elements are not handled well by Stream Explorer.

In this case, my SOA Suite domain contains just a single EDN event definition that satisfies the conditions mentioned above. Select this definition:

[screenshot]

Next, provide the mapping between properties in the current exploration and the elements in the EDN event:

[screenshot]

And press Finish.

Publish the Exploration to make it active.

After some time, new failing devices will have been spotted and EDN events will have been published. These EDN events in turn trigger the SOA Composite DeviceMaintenance. Here we see an example of some instances, as well as the details of one of these instances.

[screenshot]

The device signals picked up by Stream Explorer and processed through four explorations result in this SOA composite being invoked – only once – for every device that has been found to be failing. The hard part – live observation of a great number of devices, simultaneously and continuously – is taken on by Stream Explorer. And configuring these explorations turns out to be quite straightforward and rather declarative.

Resources

Zip with CSV file with device data and SQL scripts for the creation of the device details table: deviceMonitoringResources.

Creating and scaling Dynamic Clusters using wlst (10 April 2015)

In my previous article, Creating and scaling Dynamic Clusters in Weblogic 12c, I described the creation and scaling of Dynamic Clusters. I used the Weblogic Console to create the Dynamic Clusters and change the number of servers.

Most of the time you will use some wlst scripting to create and manage your Weblogic environments.
In this article I will show you how to create Dynamic Clusters and how you can scale them.

The example scripts from the Oracle documentation were used as the basis for the following script.
It is just a simple create script, showing how easy it is to create a Dynamic Cluster via wlst – so no fancy functions and exception handling in there. Yet…

createDynamicCluster.py

import sys

print '--- Set properties for dynamic Cluster creation'
clusterName='dyna-cluster'
serverTemplate='dyna-server-Template'
serverNamePrefix='dyna-server-'
listenAddress='192.168.100.4${id}'
listenPort=8000
listenPortSSL=9000
maxServerCount=2

print '--- Connect to the AdminServer'
try:
    connect('weblogic','Welcome01','t3://wls01.domain.local:7001')
except WLSTException, err:  # WLSTException is defined by the WLST shell
    print "--- Can't connect to AdminServer: " + str(err)
    sys.exit(2)

print '--- Start an edit session'
edit()
startEdit()

print '--- Creating the server template '+serverTemplate+' for the dynamic servers and set the attributes'
dynamicServerTemplate=cmo.createServerTemplate(serverTemplate)
dynamicServerTemplate.setListenAddress(listenAddress)
dynamicServerTemplate.setListenPort(listenPort)
dynamicServerTemplateSSL=dynamicServerTemplate.getSSL()
dynamicServerTemplateSSL.setListenPort(listenPortSSL)

print '--- Creating the dynamic cluster '+clusterName+', set the number of dynamic servers and designate the server template to it.'
dynamicCluster=cmo.createCluster(clusterName)
dynamicServers=dynamicCluster.getDynamicServers()
dynamicServers.setMaximumDynamicServerCount(maxServerCount)
dynamicServers.setServerTemplate(dynamicServerTemplate)

print '--- Designating the Cluster to the ServerTemplate'
dynamicServerTemplate.setCluster(dynamicCluster)

print '--- Set the servername prefix to '+serverNamePrefix
dynamicServers.setServerNamePrefix(serverNamePrefix)

print '--- Set Calculate Listen Port and Machinename based on server template'
dynamicServers.setCalculatedMachineNames(true)
dynamicServers.setCalculatedListenPorts(true)

print '--- Save and activate the changes'
save()
activate()
serverConfig()

Running the script with wlst will produce the following output and will create a Dynamic Cluster with two Dynamic Servers.

[oracle@wls01 ~]$ ${WL_HOME}/common/bin/wlst.sh createDynamicCluster.py
Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

--- Set properties for dynamic Cluster creation
--- Connect to the AdminServer
Connecting to t3://wls01.domain.local:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "demo_domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

--- Start an edit session
Location changed to edit tree. This is a writable tree with
DomainMBean as the root. To make changes you will need to start
an edit session via startEdit().

For more help, use help('edit')

Starting an edit session ...
Started edit session, please be sure to save and activate your
changes once you are done.
--- Creating the server template dyna-server-Template for the dynamic servers and set the attributes
--- Creating the dynamic cluster dyna-cluster, set the number of dynamic servers and designate the server template to it.
--- Designating the Cluster to the ServerTemplate
--- Set the servername prefix to dyna-server-
--- Set Calculate Listen Port and Machinename based on server template
--- Save and activate the changes
Saving all your changes ...
Saved all your changes successfully.
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released
once the activation is completed.
Activation completed

As you might expect, it is way faster than clicking through the Weblogic Console.
The next step will be to scale the Dynamic Cluster up to four Dynamic Servers.

scaleDynamicCluster.py

import sys

print '--- Set properties for dynamic Cluster creation'
clusterName='dyna-cluster'
maxServerCount=4

print '--- Connect to the AdminServer'
try:
    connect('weblogic','Welcome01','t3://wls01.domain.local:7001')
except WLSTException, err:  # WLSTException is defined by the WLST shell
    print "Can't connect to AdminServer: " + str(err)
    sys.exit(2)

print '--- Start an edit session'
edit()
startEdit()

print '--- Change the maximum number of dynamic servers'
cd('/Clusters/%s' % clusterName )
dynamicServers=cmo.getDynamicServers()
dynamicServers.setMaximumDynamicServerCount(maxServerCount)

print '--- Save and activate the changes'
save()
activate()
serverConfig()

Running the script with wlst will produce the following output and will scale up to four Dynamic Servers.

[oracle@wls01 ~]$ ${WL_HOME}/common/bin/wlst.sh scaleDynamicCluster.py

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

--- Set properties for dynamic Cluster creation
--- Connect to the AdminServer
Connecting to t3://wls01.domain.local:7001 with userid weblogic ...
Successfully connected to Admin Server "AdminServer" that belongs to domain "demo_domain".

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

--- Start an edit session
Location changed to edit tree. This is a writable tree with
DomainMBean as the root. To make changes you will need to start
an edit session via startEdit().

For more help, use help('edit')

Starting an edit session ...
Started edit session, please be sure to save and activate your
changes once you are done.
--- Change the maximum number of dynamic servers
--- Save and activate the changes
Saving all your changes ...
Saved all your changes successfully.
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released
once the activation is completed.
Activation completed

As mentioned before, the scripts are very limited and just show you how easy it is to create Dynamic Clusters using wlst. The scripts can be made as comprehensive as you need (want) them to be.
I will create some more examples and post them as I get them ready.

Imagine the possibilities when you create scripts that you can connect to your monitoring system: capacity on demand! A small sketch of such a reusable scale function is shown below.
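As a hedged sketch of where this could go – not a tested production script – the scaling logic above could be wrapped in a function that a monitoring hook calls with the desired maximum server count:

def scale_cluster(cluster_name, server_count):
    # assumes an existing WLST connection; adjusts the maximum number of dynamic servers
    edit()
    startEdit()
    cd('/Clusters/%s' % cluster_name)
    dynamicServers = cmo.getDynamicServers()
    dynamicServers.setMaximumDynamicServerCount(server_count)
    save()
    activate()
    serverConfig()

# example: the monitoring system detects high load and requests two extra servers
scale_cluster('dyna-cluster', 6)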

Use Oracle Stream Explorer and the Service Execution Reporter policy to analyze service behavior – find too-late-closing flights on Saibot Airport (8 April 2015)

This article shows how using the Service Execution Reporting policy – first introduced in this article: https://technology.amis.nl/2015/04/01/oracle-soa-suite-12c-create-deploy-attach-and-configure-a-custom-owsm-policy-to-report-on-service-execution/ – and the bridge created from the reporter through JMS to Stream Explorer – demonstrated in this article: https://technology.amis.nl/2015/04/06/live-monitoring-of-soa-suite-service-execution-with-stream-explorer-leveraging-custom-owsm-policy-and-jms/ – we can create a business monitor. The reports on service executions can be interpreted in a functional way to produce business insight.

In this article we will specifically monitor airplanes at the gate – an example inspired by the Saibot Airport case in the Oracle SOA Suite 12c Handbook. Clearly, the time at the gate should be minimized. We will keep an eye on planes that remain at the gate for too long.


When a flight opens at the gate, the sendFlightStatusUpdate operation on the FlightService is invoked. Subsequently, as the flight starts boarding, has completed boarding and is closed (and departs), the same operation is invoked. The new status is reported to the service and routed onwards by the service to interested parties.
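Since the exact message layout of the FlightService is not shown in this article, the following sketch of such a status update call is purely illustrative – the endpoint, namespace and element names are assumptions:

import urllib2

# hypothetical envelope for the sendFlightStatusUpdate operation
envelope = '''<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <sendFlightStatusUpdateRequest xmlns="http://saibot.airport/flights">
      <Carrier>KL</Carrier>
      <FlightNumber>1234</FlightNumber>
      <Status>open</Status>
    </sendFlightStatusUpdateRequest>
  </soap:Body>
</soap:Envelope>'''

url = 'http://localhost:8001/soa-infra/services/default/FlightService/FlightService'
request = urllib2.Request(url, envelope, {'Content-Type': 'text/xml; charset=utf-8', 'SOAPAction': ''})
print urllib2.urlopen(request).read()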

Using the Service Execution Reporter policy, we report calls to the sendFlightStatusUpdate operation and make sure that carrier, flight number and the new status are included in the report. In Stream Explorer, we create a Stream for consuming the service execution report messages from a JMS queue. The Stream Explorer data shape contains properties for carrier, flight number and status. An exploration is based on the stream, filtering only on reports from the sendFlightStatusUpdate operation on the FlightService.

When this exploration is tested, we create a second exploration based on the missing event detection pattern. This exploration will detect cases where the report of a flight changing its status to open (at the gate, starting the departure procedure) is not followed quickly enough by a report of that same flight changing its status to closed. When this situation is detected, it is reported – and action can be taken.

We will see how we change the status of several flights to open in a short period of time. Then, for all but one of the flights, we change the status to closed. The Stream Explorer exploration will report the one flight for which the status was not updated [in time], proving that we can perform such business monitoring.

A video illustrating the end result achieved in this blog article is available from YouTube.

Configure Service Execution Reporter policy for the sendFlightStatusUpdate operation

We will assume here that the policy has been added to the SOA Suite runtime as is described in this article – by adding the JAR file and importing the policy description.

The policy needs to be attached to the FlightService and the configuration needs to be overridden to cater for the sendFlightStatusUpdate operation. This is done in the EM FMW Control. Select the FlightService SOA Composite. Click on the FlightService Web Service binding. Open the Policies tab. Attach the amis/monitoring policy. Click on the link to Override Policy Configuration, as shown in the next figure.

image

The Security Configuration Details popup appears. Here we can specify the values of the policy properties as they should be in the context of the FlightService. Make sure that the operationsMap property is set with the right configuration regarding the sendFlightStatusUpdateRequest message type and the associated sendFlightStatusUpdate operation.

image

Press Apply to save the changes.

Call the sendFlightStatusUpdate operation for example from SoapUI:

image

 

and verify whether the report is written to the log file as expected:

image

 

Apparently, the messages required to perform monitoring on flights that do not leave the gate soon enough are available on the JMS Queue. Let’s harvest and analyze them from Stream Explorer.

Create the Stream Explorer Stream and Exploration

Open Stream Explorer. Create a Stream for the JMS Queue to which the Service Execution Reporter publishes messages. Note: remove any existing streams on top of this queue to prevent the streams from competing for the queue’s messages.

image

The wizard for a new Stream opens.

Set the name and a description for the stream.

image

Then click Next.

Configure the JMS queue details:

image

And press Next.

Define the Shape (the data structure to capture the values from the MapMessages on the JMS Queue):

image

and define all properties – using the names of the properties written to the MapMessage:

image

Finally, click Create.

The wizard to create the Exploration appears. Define a name and a description:

image

Click on Next.

Define no special filters, aggregations or time constraints, so that all reports are simply listed. Now make a few calls to the sendFlightStatusUpdate operation. Each call should produce a service execution report message that shows up in the exploration:

image

 

Create the Pattern Based Exploration to Detect Missing ‘flight closed’ Messages

The exploration we need now is one that is based on the Detect Missing Event pattern. The missing event in this case is the report of a status update to ‘closed’ for a flight (carrier plus number) that was reported as being ‘opened’ – within the specified time. At a real airport we would perhaps use 40 minutes as the maximum period. In this demo case, we will use 40 seconds as the cut-off time.

First of all, we need to publish the exploration AllServiceExecutionReport – in order to use it as the source for our next exploration:

image

 

From this exploration we will siphon off the messages that relate to flight status updates in a new exploration FlightStatusOpenAndClosedUpdateReports.

image

Configure filters to focus on messages from the service default/FlightService/FlightService where the operation equals sendFlightStatusUpdate and the stage equals request.

image

Note: I would have wanted to add a filter on status open or closed. However, Stream Explorer does not let me create such a filter at the present time.

Publish this exploration:

image

The challenge I have to address at this point is: identify cases where the status of a flight is updated to open and where there is no subsequent update of the status of that same flight to closed within 40 seconds. While there is no exact fit, this sounds very much like the Detect Missing Event pattern that Stream Explorer supports. I will create an exploration based on that pattern to see how close I can come to implementing my requirement.

Now create another new Exploration – of type Pattern:

image

Configure the Exploration – set a name and select FlightStatusOpenAndClosedUpdateReports as the input stream. Select the fields businessAttribute1, 2 and 3 – for carrier, flight number and status respectively – as the Tracking Fields and set the Heartbeat Interval to 40 seconds.


image

 

And at this point you probably realize that this is not entirely the correct pattern to detect. What we have specified here is that we want to get notified whenever it takes more than 40 seconds for a message with certain values for businessAttribute1, 2 and 3 to be followed by another message for the same values for the three business attributes. However, we want to raise the alarm only if there is not a message with status (businessAttribute3) closed within 40 seconds of a message with status open for a specific flight, identified by businessAttribute1 and 2. And this is a type of missing event detection that is one step too complex for Stream Explorer to handle. Its missing event detection pattern focuses on the simple case of a message with specified indicators that is not succeeded by a message with exactly the same set of indicators.

However, Stream Explorer brought us quite a long way. And it allows us to export the exploration – as an OEP application that can be imported into JDeveloper to be refined through normal OEP development. In JDeveloper, we can make a fairly small change that will turn the exploration into an OEP application that does exactly what we need it to do.

Export the Exploration:

image

Click on the Export link in the wizard page:

image

And save the file:

SNAGHTML14a2272b

Open JDeveloper. Create a new, empty application – for example of type Custom Application.

Click on File | Import:

image

and select the option OEP Bundle into new project:

SNAGHTML14a2fb99

Select the file exported from Stream Explorer earlier on:

SNAGHTML14a3d286

And the project is created from the JAR file:

image

Inspect the sources that were created by Stream Explorer. One processor for each exploration. The final one with the CQL logic for detecting missing events:

image

It is in this CQL query that we need to make some changes to achieve the functionality we desire. The CQL query is updated to detect specific situations where a flight status update event that reports the ‘open’ status is not followed – within 40 seconds – by a flight status update event that updates the flight to ‘closed’:

image

This rather small change is all it takes to take the Stream Explorer application and refine it to the point where it fulfills our needs.

The partition is defined by businessAttribute1 and 2 (carrier and flight number). The PATTERN is composed from event OPEN and event NOT_CLOSED. OPEN is defined as a flight status update with status ‘open’. NOT_CLOSED is any message that does not indicate a ‘closed’ status for the same flight. Normally there would not be such a message. However, every 40 seconds a timer event is added. This event satisfies the NOT_CLOSED condition. When the timer event arrives sooner than the desired ‘closed’ status update, the pattern is satisfied and a result is produced.
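To make the pattern concrete, here is a minimal plain-Java illustration of the same detection semantics. This is emphatically not the OEP/CQL implementation from the exported project – just a sketch of the logic the pattern expresses, with made-up class and method names:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Plain-Java sketch of the detection semantics: remember when a flight reports
// 'open' and raise an alert when no 'closed' arrives within 40 seconds
public class MissingClosedDetector {

    private static final long TIMEOUT_MS = 40000;
    private final Map<String, Long> openFlights = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    public MissingClosedDetector() {
        // comparable to the timer events OEP injects into the stream
        timer.scheduleAtFixedRate(this::checkForLateFlights, 5, 5, TimeUnit.SECONDS);
    }

    // invoked for every flight status update report; carrier plus flight number form the partition key
    public void onStatusUpdate(String carrier, String flightNumber, String status) {
        String flight = carrier + flightNumber;
        if ("open".equals(status)) {
            openFlights.put(flight, System.currentTimeMillis());
        } else if ("closed".equals(status)) {
            openFlights.remove(flight); // the expected event arrived in time
        }
    }

    private void checkForLateFlights() {
        long now = System.currentTimeMillis();
        openFlights.forEach((flight, openedAt) -> {
            if (now - openedAt > TIMEOUT_MS) {
                System.out.println("ALERT: flight " + flight + " was not closed within 40 seconds");
                openFlights.remove(flight);
            }
        });
    }
}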

In order to verify the effects of this change, I add a CSV Outbound Adapter to write the results to a file:

image

image

I then create a deployment profile for the project and deploy the OEP bundle to the OEP server – the same one that also runs Stream Explorer or a different one.

From SoapUI, I then send messages that set the flight status to open for a number of flights:

image

The file to which the results are written is almost empty:

image

I close a number of the flights, but not all of them:

image

One flight remains open. Will the OEP application detect the flight that was not closed within 40 seconds?

Of course it does:

image

The post Use Oracle Stream Explorer and the Service Execution Reporter policy to analyze service behavior – find too-late-closing flights on Saibot Airport appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/04/08/use-oracle-stream-explorer-and-the-service-execution-reporter-policy-to-analyze-service-behavior-find-too-late-closing-flights-on-saibot-airport/feed/ 0
Does the C-level also know the power of Scrum? https://technology.amis.nl/2015/04/07/kent-het-c-level-ook-de-kracht-van-scrum/ https://technology.amis.nl/2015/04/07/kent-het-c-level-ook-de-kracht-van-scrum/#comments Tue, 07 Apr 2015 13:40:52 +0000 https://technology.amis.nl/?p=35119 One of the first things I do when the newspaper is delivered on Saturday is read Ben Tiggelaar’s column. A few weeks ago Ben wrote about the rituals of successful people. Many people who excel in areas where creativity and brainpower are required turn out to benefit from daily rituals and structure. Often [...]

The post Does the C-level also know the power of Scrum? appeared first on AMIS Oracle and Java Blog.

]]>
One of the first things I do when the newspaper is delivered on Saturday is read Ben Tiggelaar’s column. A few weeks ago Ben wrote about the rituals of successful people. Many people who excel in areas where creativity and brainpower are required turn out to benefit from daily rituals and structure. These are often ordinary, sometimes even boring activities that are part of a pattern that is the same day in, day out.

I recognize this phenomenon in working with Scrum as well, and I am convinced it is one of its success factors. A large part of the Scrum process is always the same and becomes an ingrained habit. A Stand-Up every day at 10:00, the sprint change every two weeks on Thursday, the refinement sessions on Tuesday and Friday afternoons.

Doing the right things

Choosing what should actually be done is also an aspect that returns every two weeks. Scrum aims to do only those things that add the highest value for an organization. The Product Owner is the one who decides on this.

The power of Scrum in small organizations

In smaller organizations a Scrum team works for a Product Owner who personally carries responsibility at C-level. In those organizations you see that the power of Scrum comes into its own almost automatically. When weighing what from the Product Backlog will actually be realized in the next sprint, the Product Owner takes the current ups and downs of his organization into account. He chooses the functionality or facility that best fits the situation of the moment and the direction the organization wants to take. Every two weeks he makes that trade-off anew.

And in large organizations?

In larger organizations things work differently.

  • There are almost always multiple teams or groups working Agile.
  • The role of Product Owner is filled by people below C-level – think of middle managers, project leaders or domain experts.
  • Within these larger organizations, employees more often doubt whether their company is paying attention to the right things. This is discussed mostly at the coffee machine.

Of course, that is stating the obvious: “more people, so more opinions too”. But is that the only explanation?

I don’t think so. Almost all of those different Product Owners – whether they are middle managers, project leaders or domain experts – have received their mandate from the C-level in the form of a budget for a particular theme or goal. That budget is thus tied to that specific theme or goal, but how often are those budgets reassessed? Once, at most twice a year. That is not at all what Scrum stands for. So much happens in the intervening period that the odds are high that the C-level, based on up-to-date knowledge, would decide differently. That is why the people on the work floor, going by their sense of what is important right now, indeed cannot properly place the project activities.

Rituals

succes-is-een-gewoonte

I am curious what will happen if the C-level, for example every four weeks, aligns on the goals/themes (projects) to which resources are allocated. Weighing what matters most for the coming month, against the short-term but certainly also the long-term goals. That is not easy, I hear you think, knowing how much effort it takes to complete the annual budget cycle on time. But – and here I come back to the importance of rituals for successful people – if you run this kind of process often, the form becomes a habit and you can devote your energy and intellect to the content: making creative, substantively sound decisions about what moves us forward and what does not. And the more often you do this, the better you get at it. That is a hopeful prospect.

The post Does the C-level also know the power of Scrum? appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/04/07/kent-het-c-level-ook-de-kracht-van-scrum/feed/ 0
Creating and scaling Dynamic Clusters in Weblogic 12c https://technology.amis.nl/2015/04/07/creating-and-scaling-dynamic-clusters-in-weblogic-12c/ https://technology.amis.nl/2015/04/07/creating-and-scaling-dynamic-clusters-in-weblogic-12c/#comments Tue, 07 Apr 2015 07:02:50 +0000 https://technology.amis.nl/?p=35396 Introduced in Weblogic 12.1.2, dynamic clusters is a great feature to scale your private cloud. Dynamic clusters provides you easy scaling of Weblogic clusters by adding and removing managed server instances on demand. They contain one or more dynamic servers. These dynamic servers are based on a single servertemplate that guarantees that every member of [...]

The post Creating and scaling Dynamic Clusters in Weblogic 12c appeared first on AMIS Oracle and Java Blog.

]]>
Introduced in WebLogic 12.1.2, dynamic clusters are a great feature to scale your private cloud.
Dynamic clusters provide easy scaling of WebLogic clusters by adding and removing managed server instances on demand. They contain one or more dynamic servers. These dynamic servers are based on a single server template, which guarantees that every member of the cluster is exactly the same.

Creating Dynamic Clusters

Let’s take a look at some of the possibilities as we create a dynamic cluster.

I have created a VirtualBox environment.
This environment consists of four VMs with the following specs.

  • 2 vCPUs
  • 4 Gb memory
  • 50 Gb disk
  • Oracle Linux 6.6
  • Java 1.7.0_75
  • Weblogic 12.1.3.0.2

I created a simple domain called demo_domain with only an AdminServer and four machines.
After unpacking the domain to the four servers, the node managers were started and are reachable by the AdminServer.

Domain-pic1

Now let’s go through the process of creating a dynamic cluster.

Open the WebLogic Console and navigate to Environment -> Clusters.
Lock and Edit the domain in the Change Center.
Note: I make it a good practice to always create domains in production mode, even in Development and Test.

Create a new dynamic cluster

Domain-cap1

New -> Dynamic Cluster

Provide the Clustername

Domain-cap2
Cluster name: dyna-cluster
Click Next

We will start off with a cluster containing two dynamic servers.

Domain-cap3
Number of Dynamic Servers: 2
Server Name Prefix: dyna-server-
Click Next

For this demo all machines will take part.

Domain-cap4
Select ‘Use any machine configured in this domain’
Click Next

Assign each dynamic server unique listen ports

Domain-cap5
Listen Port for First Server: 8000
SSL Listen Port for First Server: 9000
Click Next

Summary screen

Domain-cap6
Click Finish

With the creation of the Dynamic Cluster there is also a Server Template created for it.

Server templates

A single server template provides the basis for the creation of the dynamic servers. Using this single template makes it possible to create every member with exactly the same attributes, while server-specific attributes like the server name, listen ports and machine can be calculated based upon tokens.
You can pre-create server templates and let Weblogic clone one when a Dynamic Cluster is created.
When none is available, a server template is created along with the Dynamic Cluster. The name and the listen ports are the only server template attributes that you provide during Dynamic Cluster creation.

Before we activate the changes to the domain, we are going to make a change to the server template.
As an example we are going to demonstrate the use of tokens for server-specific configuration.

Navigate to Environment -> Clusters -> Server Templates

Domain-cap8
Click on the name: dyna-server-Template

We are going to use the ${ID} token in the Listen Address

Domain-cap10
Listen Address: 192.168.100.4${ID}
Click Save

The ${ID} token resolves to each dynamic server’s index, so the last digit of the listen address is what makes it dynamic.

Activate changes in the Change Center of the Weblogic Console.
After activation the cluster and two managed servers are created.

Domain-cap12Domain-cap11

We can now start the two servers.

In the previous steps we have added a dynamic cluster with two dynamic servers, based on a single server template, to the domain.

Domain-pic2

Scaling a Dynamic Cluster

When the capacity is insufficient and you need to scale up, you can add dynamic servers on demand.
It requires only a few clicks.

Navigate to Environment -> Clusters

Domain-cap12
Click dyna-cluster

On the Configuration tab go to the Servers tab

Domain-cap13
Change the Maximum Number of Dynamic Servers to: 4
Click save

Activate changes in the Change Center of the Weblogic Console.
After activation two Dynamic Servers are added to the Dynamic Cluster.

Start the two new Dynamic Servers and you have doubled your capacity.

Domain-cap14

Domain-pic3
Scaling down works exactly the same.
Just lower the Maximum Number of Dynamic Servers and activate.

A few points to keep in mind when scaling up or down:

Up

  • New dynamic servers are not started upon creation
  • Think before you act when using tokens.
    For example: in our demo the number of dynamic servers can’t grow beyond nine, since we use ${ID} as the last digit of the listen address.

Down

  • Dynamic Servers above the new maximum have to be shut down before the change can be activated.
  • Dynamic Servers are removed in order, Last -> First
    (In our demo dyna-server-4 gets removed first, then dyna-server-3, etc..)
  • You cannot remove a Dynamic Server directly from the Environment -> Servers page

The post Creating and scaling Dynamic Clusters in Weblogic 12c appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/04/07/creating-and-scaling-dynamic-clusters-in-weblogic-12c/feed/ 2
Live Monitoring of SOA Suite Service Execution with Stream Explorer – leveraging Custom OWSM Policy and JMS https://technology.amis.nl/2015/04/06/live-monitoring-of-soa-suite-service-execution-with-stream-explorer-leveraging-custom-owsm-policy-and-jms/ https://technology.amis.nl/2015/04/06/live-monitoring-of-soa-suite-service-execution-with-stream-explorer-leveraging-custom-owsm-policy-and-jms/#comments Mon, 06 Apr 2015 12:24:50 +0000 https://technology.amis.nl/?p=35319 This article demonstrates how live monitoring of SOA Suite service execution can be implemented using a custom OWSM policy that reports to a JMS queue and with a simple Stream Explorer exploration that aggregates these JMS messages: The ingredients are: a SOA Suite 12c runtime environment a Stream Explorer installation and two files available with [...]

The post Live Monitoring of SOA Suite Service Execution with Stream Explorer – leveraging Custom OWSM Policy and JMS appeared first on AMIS Oracle and Java Blog.

]]>
This article demonstrates how live monitoring of SOA Suite service execution can be implemented using a custom OWSM policy that reports to a JMS queue and with a simple Stream Explorer exploration that aggregates these JMS messages:

image

The ingredients are:

  • a SOA Suite 12c runtime environment
  • a Stream Explorer installation

and two files available with this article:

  • CustomPolicyAssertionArchive.jar (that contains the custom policy implementation)
  • AMIS_Custom_Policies.zip (that contains the policy definition)

and a JSON configuration of the policy binding.

Using the ingredients we will walk through the following stages and steps:

Stage 1:

  • Copy JAR file to the WLS_SOA_domain/lib directory (and restart the domain)
  • Import the ZIP file into the EM FMW Control (to define the new policy)
  • Attach the policy to a SOA Composite and configure the operations map property
  • Invoke the SOA Composite and check the SOA domain log file (to find service execution reports logged in the file)

Stage 2:

  • Configure JMS artifacts to provide the conduit for the service execution reports (JMS Server, Module, Connection Factory and Queue)
  • Update the configuration of the policy binding with the JMS destination
  • Invoke the SOA Composite and check the JMS Queue monitoring page in the WebLogic Administration Console (to find messages produced for web service calls)

Stage 3:

  • Run Stream Explorer and create a Stream on top of the JMS Queue
  • Create an Exploration on top of the Stream to report aggregated service execution metrics (per service and per operation over the last 30 minutes)
  • Invoke several operations on the SOA Composite (several times) and see how the StreamExplorer exploration is updated to provide the latest insight

This provides the foundation for a wide range of applications of the Service Execution Reporter policy along with Stream Explorer. In future articles, we will see the type of focused monitoring this foundation enables us to perform.

 

Stage 1 – Basic application of Service Execution Reporter policy

This previous article describes how the Service Execution Reporter policy is developed. The policy is deployed to a JAR file that you can download here: CustomPolicyAssertionArchive (extract it from the ZIP file). The configuration of the policy is laid down in a ZIP file that you can download here: AMIS_Custom_Policies.

The JAR file has to be copied to the WLS_SOA_domain/lib directory. Using the target information in the EM FMW Control – see next figure – I find out the exact file location for the WebLogic domain that hosts the SOA Suite:

image

The lib directory under this domain home is where the jar file should be moved.

Subsequently, the domain has to be restarted in order to make the contents of the jar file available in the SOA Suite run time.

Import the ZIP file into the EM FMW Control (to define the new policy)

Start EM FMW Control.

image

Navigate to WebLogic Domain – soa_domain | Web Services | WSM Policies.

Click on Import

image

Import the zip file by clicking on Import

image

and selecting the right zip file:

image

The result is reported back:

image

and the policy is listed:

image

 

Attach the policy to a SOA Composite and configure the operations map property

Open the SOA Composite, such as the FlightService composite shown below. Click on the Service Binding to which the policy is to be attached:

image

Open the Policies tab:

image

Click on the button Attach/Detach to open the dialog where policies can be attached to the service binding.

image

Select the amis/monitoring policy. Click on Attach to bind this policy to the service binding.

Click on OK to confirm the policy attachment.

Click on Override Policy Configuration to set the property values that apply specifically to this policy attachment:

image

 

The properties that are defined in the policy configuration file – SOASuiteServiceExecutionReporterPolicyFile.xml – are listed and the current values are shown. These values can now be overridden for this attachment of the policy to the FlightService.

image

The full value of the operationsMap property in this case is:

{
    "getFlightDetailsRequest" : {
        "operation" : "getFlightDetails",
        "oneWay" : "false",
        "request" : {
            "doReport" : "true",
            "payload" : [
                {
                    "name" : "carrierCode",
                    "xpath" : "/soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:CarrierCode",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                },
                {
                    "name" : "flightNumber",
                    "xpath" : "/soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:FlightNumber",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                }
            ]
        },
        "response" : {
            "doReport" : "true",
            "payload" : [
                {
                    "name" : "flightStatus",
                    "xpath" : "/soap:Envelope/soap:Body/flig:getFlightDetailsResponse/flig:FlightStatus",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                }
            ]
        }
    },
    "retrievePassengerListForFlightRequest" : {
        "operation" : "retrievePassengerListForFlight",
        "oneWay" : "false",
        "request" : {
            "doReport" : "true",
            "payload" : [
                {
                    "name" : "carrierCode",
                    "xpath" : "/soap:Envelope/soap:Body/flig:retrievePassengerListForFlightRequest/flig:FlightCode/com:CarrierCode",
                    "namespaces" : [
                        {
                            "prefix" : "soap",
                            "namespace" : "http://schemas.xmlsoap.org/soap/envelope/"
                        },
                        {
                            "prefix" : "flig",
                            "namespace" : "com.flyinghigh/operations/flightservice"
                        },
                        {
                            "prefix" : "com",
                            "namespace" : "com.flyinghigh/operations/common"
                        }
                    ]
                }
            ]
        },
        "response" : {
            "doReport" : "true"
        }
    }
}

Obviously, you will have to provide the values that make sense for the services you want to attach the policy to. Note: if you do not define the operationsMap property for a particular policy binding, the service execution is still reported. However, these reports obviously cannot include the operation name (only the message type), nor any values from the payload.

Click on Apply to confirm the property values.

At this point, the policy is primed for action for the FlightService.

Invoke the SOA Composite and check the SOA domain log file (to find service execution reports logged in the file)

By invoking the various FlightService operations, we can now see the policy in action.

image

The effect of this call is reported by the custom policy in the log-file:

image

A call to another operation results in a similar report:

image

in the log file:

image

The third operation – sendFlightStatusUpdate – is not configured at all in the operationsMap property. When this operation is invoked:

image

The report:

SNAGHTML1007f5a8

Stage 2 – Configuration of resources to route Service Execution Reports to JMS

The reports produced by the policy can be sent to a JMS destination in addition to the log file output. And we need that. So we first need to prepare a simple JMS Queue that we can then configure on the policy to get the JMS reporting going.

Open the WebLogic Administration Console. Open the Services | Messaging node in the Domain Structure Navigator. Create a new JMS Server:

image

Set the name. Then press Next. Select the managed server running the SOA Suite (the engine that runs the SOA Composite applications) as the target.

image

Press Finish.

image

Click on the Services | Messaging | JMS Modules node. Click on the New button to create a new JMS Module.

image

Set the name of the JMS module:

image

Click on Next.

Select the managed server running the SOA Suite as the target for the JMS Module:

image

and press Next.

image

Check the checkbox and press Finish.

image

Open the tab Subdeployments:

image

Click on New to create  a new subdeployment. Set the name:

image

And click on Next.

Select the JMS Server that was created earlier on as the target:

image

Click Finish:

image

Open the Configuration tab. Click on the New button to create the Connection Factory:

image

Select the right radio button and click Next.

image

Set the name and the JNDI Name:

image

and click Next.

The target for the JMS Module is shown:

image

Click Finish. Create a new resource of type Queue:

image

Set the name and the JNDI Name:

image

Press Next.

Select the appropriate subdeployment and JMS Server (those that were created earlier):

image

Press Finish.

All four JMS artifacts are now created:

image

 

Update the configuration of the policy binding with the JMS destination

The policy was initially uploaded with a global configuration that includes the properties JMSDestination and JMSConnectionFactory set to empty strings. To configure the appropriate JMS artifact references, open the EM FMW Control and navigate to Web Logic Domain – soa_domain | Web Services | WSM Policies.

image

Locate the policy amis/monitoring. Click on Open link. Open the Assertion tab and click on Configuration.

image

Set the properties JMSDestination and JMSConnectionFactory to “jms/ServiceExecutionReportingQueue” and “jms/ServiceExecutionReportingCF” respectively:

image

Click OK to apply these values.

 

Invoke the SOA Composite and check the JMS Queue monitoring page in the WebLogic Administration Console

From SoapUI, make one call to the service exposed by the SOA composite that has the Service Execution Reporter attached.

image

Both the request and the response message will pass through the policy, each triggering an entry in the log file as well as a message sent to the JMS queue. We can verify the latter in the WebLogic Admin Console by checking the Monitoring tab for the queue:

image

Drilling down provides a little more insight into the messages that were published to the queue:

image

image

Invoke the SOA Composite’s service from SoapUI a few more times and the message count on the Monitoring tab for the JMS queue will increase further.

Clearly we have established JMS publication of a MapMessage for each service execution of the FlightService (and any other service that has the ServiceExecutionReporter policy attached).
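For a quick programmatic check – instead of clicking through the console – a QueueBrowser can list the reports without consuming them. A minimal sketch: the JNDI names are the ones configured above, and the property names match the MapMessage entries written by the policy; running this outside the server would additionally require the WebLogic JNDI environment properties (provider URL, credentials):

import java.util.Enumeration;
import javax.jms.MapMessage;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

// Browse the reporting queue and print the key report properties
public class BrowseReports {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes the WebLogic JNDI environment is available
        QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/ServiceExecutionReportingCF");
        Queue queue = (Queue) ctx.lookup("jms/ServiceExecutionReportingQueue");
        QueueConnection connection = cf.createQueueConnection();
        try {
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueBrowser browser = session.createBrowser(queue);
            connection.start();
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                MapMessage report = (MapMessage) messages.nextElement();
                System.out.println(report.getString("service") + " " + report.getString("operation")
                        + " " + report.getString("stage") + " " + report.getString("ecid"));
            }
        } finally {
            connection.close();
        }
    }
}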

 

Stage 3 – Monitor Service Execution using Oracle Stream Explorer explorations

The final piece of today’s puzzle is the step from the JMS Queue with its MapMessages to the Stream Explorer exploration that provides a count of recent service executions.

Run Stream Explorer

image

and create a Stream on top of the JMS Queue. Click on Create New Item and select Stream as the new Item Type to create.

Enter a name and a description and select the Stream’s source type as JMS:

image

Click Next.

Configure the JMS destination (the queue to use as the source) as shown next:

image

The URL is for the WebLogic managed server that hosts the JMS Queue; the admin username and password are used here to access the JMS Queue.

Click Next.

Specify the name for the ‘shape’ – the data structure in Stream Explorer to capture the events from the stream.

image

Select Manual Mapping and define the properties of the shape – corresponding with the properties written in the JMS Map – which are:

service, operation, ecid, stage, executionTimestamp – and whichever payload elements are configured for extraction in the operationsMap.

image

Press Create to create the Stream.

The wizard for creating the Exploration kicks in immediately after completing the Stream definition.

image

Specify name and description and optionally some tags.

image

Press Create. This takes you to the Exploration editor.

A lot is specified for the exploration:

  • The Summary to calculate is a count of the number of events – grouped by service and operation.
  • Filter only the events that have the stage set to request
  • Calculate the Summary over the last one hour and update the count every 10 seconds

image

Invoke several operations on the SOA Composite (several times) and see how the StreamExplorer exploration is updated to provide the latest insight:

 

image

Here we see how first (bottom two entries) some calls were made to the operation retrievePassengerListForFlight – the last two within 10 seconds of each other, because an entry with COUNT_of-service equal to 2 is missing. Subsequently, up to 7 calls were made to the getFlightDetails operation – not interrupted by calls to other operations in the FlightService. Note that calls 5 and 6 were close together – within 10 seconds of each other.

Let’s attach the policy to another SOA composite – just for kicks:

image

image

image

image

Then invoke an operation on the ConversionService composite:

image

followed by a few calls to the FlightService – and see the result in the Stream Explorer report:

image

 

It should hopefully be clear by now that we have a way to observe and analyze service execution behavior using Stream Explorer, leveraging the output from the custom Service Execution Reporter policy.

image

The post Live Monitoring of SOA Suite Service Execution with Stream Explorer – leveraging Custom OWSM Policy and JMS appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/04/06/live-monitoring-of-soa-suite-service-execution-with-stream-explorer-leveraging-custom-owsm-policy-and-jms/feed/ 3
Oracle SOA Suite 12c – Create, Deploy, Attach and Configure a Custom OWSM Policy – to report on service execution https://technology.amis.nl/2015/04/01/oracle-soa-suite-12c-create-deploy-attach-and-configure-a-custom-owsm-policy-to-report-on-service-execution/ https://technology.amis.nl/2015/04/01/oracle-soa-suite-12c-create-deploy-attach-and-configure-a-custom-owsm-policy-to-report-on-service-execution/#comments Wed, 01 Apr 2015 18:46:04 +0000 https://technology.amis.nl/?p=35186 This article describes how to develop a straightforward custom assertion that can be used as part of custom OWSM policy to be attached to Web Services in WebLogic, such as services exposed by SOA Composite applications and Service Bus projects as well as custom JAX-WS or ADF BC Web Services. The custom assertion that I [...]

The post Oracle SOA Suite 12c – Create, Deploy, Attach and Configure a Custom OWSM Policy – to report on service execution appeared first on AMIS Oracle and Java Blog.

]]>
This article describes how to develop a straightforward custom assertion that can be used as part of custom OWSM policy to be attached to Web Services in WebLogic, such as services exposed by SOA Composite applications and Service Bus projects as well as custom JAX-WS or ADF BC Web Services. The custom assertion that I demonstrate here reports the execution of web service operations to a JMS Destination and/or the system output. It shows how to access property values set on the policy binding (values specific for the service the policy is attached to) and how to inspect the headers and contents of the request and response messages. Most custom assertions will use a subset of the mechanisms shown in this example. As always, the source code is available for download. Note: this article was edited on April 6th to reflect better code structure.

Custom assertions can be used in policies that are applied to web services. Depending on the type and configuration of the policy and assertions, they can be triggered at different moments and perform different tasks. These assertions are similar to aspects (in AOP) that take care of cross cutting concerns and that do not interfere with the internals of a service. Policies are attached (and detached) at runtime by the administrators. The assertion discussed in this article is to be attached to the service binding at the inbound end of a SOA composite application (or at a Service Bus proxy service that serves the same purpose). The assertion will report every incoming request as well as each response returned from the service binding. This information can be leveraged outside the scope of this article to monitor the runtime service environment.

The steps described in this article for creating the custom assertion and putting it into action are:

  • Create Custom Policy:
    • Assertion Java Class
    • Policy XML File
    • Policy Configuration XML File
  • Deploy Policy Artifacts to Runtime Fusion Middleware platform (and restart the WebLogic Servers)
  • Import Policy Definition into Runtime Fusion Middleware platform
  • Attach the Policy to a Service Binding in an existing SOA Composite application and configure the policy binding properties
  • Invoke the service exposed by the [Service Binding in the existing] SOA Composite application
  • Verify the results produced by the policy attachment

Create the Custom Policy

The main part of the custom assertion definition is a Java class. See for details the sources that can be downloaded from GitHub. The project contains a helper class – CustomAssertion – that takes care of some generic plumbing required for the AssertionExecutor superclass that needs to be extended. The class SOASuiteServiceExecutionReporter contains the custom logic that is to be executed whenever the policy assertion is triggered. In the current case, this logic consists of retrieving some key elements about the service request – service name, operation name, ECID, timestamp and selected payload details – and reporting them. Initially, this report consists of a few lines in the system output (i.e. the domain log file). Later on, we will send the report to a JMS destination.

The init() method is invoked by the OWSM framework when the policy is attached to a web service and whenever the configuration of the policy attachment is updated (i.e. its property values are changed). The init() method reads and processes the policy attachment configuration and initializes the SOASuiteServiceExecutionReporter, priming it for the correct actions whenever service executions trigger its execute method.

image

This code snippet relies heavily on the super class (CustomAssertion) that returns the values for the properties from the iAssertion.

image

It also leverages the method initializeMessageTypesMapFromJson. This method performs the parsing of the operationMap property in the policy binding configuration. The properties are defined in the policy definition file – see below – and are set to binding specific values in the EM FMW control (or from WLST).
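The screenshots above show the actual code; in outline, init() boils down to something like this. A sketch only – getPropertyValue, reportToJMS and messageTypesMap are stand-ins for the helpers and fields described above:

// invoked by the OWSM framework on policy attachment and on configuration updates
public void init(IAssertion assertion, IExecutionContext executionContext, IContext context) {
    this.assertion = assertion;
    // binding-specific property values, as overridden in the EM FMW Control
    String operationsMapJson = getPropertyValue("operationsMap");
    this.reportToJMS = !"".equals(getPropertyValue("JMSDestination"));
    // parse the JSON property into per-message-type settings
    this.messageTypesMap = ServiceReporterSettings.initializeMessageTypesMapFromJson(operationsMapJson);
}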

Properties can be simple string values. By using JSON snippets for the values of these properties, we can pass quite complex and extensive data structures into the policy attachment. In the current case, we use a JSON-style property to specify for a policy binding which message types are processed; each message type is a key in the JSON object, and for each message type the following are defined: the name of the operation, an indication whether the operation is one-way, and XPath expressions to derive values from the message payload to be reported.

This JSON structure looks like this – here the message type getFlightDetailsRequest is mapped to the operation getFlightDetails; from the request message, the value of the element /soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:CarrierCode should be reported:

image
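In abbreviated form – the full value is listed in the Live Monitoring article elsewhere in this blog – that structure is:

{
    "getFlightDetailsRequest" : {
        "operation" : "getFlightDetails",
        "oneWay" : "false",
        "request" : {
            "doReport" : "true",
            "payload" : [
                {
                    "name" : "carrierCode",
                    "xpath" : "/soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:CarrierCode",
                    "namespaces" : [
                        { "prefix" : "soap", "namespace" : "http://schemas.xmlsoap.org/soap/envelope/" },
                        { "prefix" : "flig", "namespace" : "com.flyinghigh/operations/flightservice" },
                        { "prefix" : "com", "namespace" : "com.flyinghigh/operations/common" }
                    ]
                }
            ]
        },
        "response" : { "doReport" : "true" }
    }
}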

The parsing of the JSON property is done using standard JSON-P support in this case, in the helper class ServiceReporterSettings:

image

In this code snippet, the JSON structure of the operationsMap property is parsed, interpreted and turned into a corresponding set of Java Objects. The data structures and class definitions are outlined in the next illustration:

image
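A minimal sketch of such JSON-P parsing, pulling these pieces together – the surrounding method and the OperationSettings type stand in for the classes in the diagram above, but the javax.json calls are the standard API:

import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

// turn the operationsMap JSON property into per-message-type settings objects
public static Map<String, OperationSettings> initializeMessageTypesMapFromJson(String operationsMapJson) {
    Map<String, OperationSettings> messageTypes = new HashMap<>();
    try (JsonReader reader = Json.createReader(new StringReader(operationsMapJson))) {
        JsonObject root = reader.readObject();
        for (String messageType : root.keySet()) { // e.g. "getFlightDetailsRequest"
            JsonObject entry = root.getJsonObject(messageType);
            OperationSettings settings = new OperationSettings();
            settings.setOperation(entry.getString("operation"));
            settings.setOneWay(Boolean.parseBoolean(entry.getString("oneWay")));
            // the request/response sections with their payload XPath definitions
            // would be processed here in the same fashion
            messageTypes.put(messageType, settings);
        }
    }
    return messageTypes;
}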

The Execute method – processing every service execution

The execute method is invoked when the service receives a request or returns a response or a fault. The method gets passed an IContext object. This object provides access to the most relevant details of the request or response message – including the complete SOAP Envelope and the Transport Headers. Note that the GUID attribute contains the FMW ECID attribute value; the value is the same for the request message and the corresponding response (or fault) message.

image
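Structurally, the method looks roughly as follows – the Result and WSMException classes come from the OWSM SDK as used in Oracle’s extensibility samples (see the Resources at the end of this article), and reportServiceExecution() is a hypothetical placeholder for the reporting logic described below:

public IResult execute(IContext context) throws WSMException {
    try {
        // 1. determine the stage (request, response or fault) and read the ECID (GUID) from the context
        // 2. determine the message type from the first child element of the SOAP Body
        // 3. apply the XPath expressions configured in the operationsMap for that message type
        // 4. write the assembled report to system output and/or the JMS destination
        reportServiceExecution(context);
        return new Result(); // signals that the assertion succeeded
    } catch (Exception e) {
        throw new WSMException(WSMException.FAULT_FAILED_CHECK, e);
    }
}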

 

One aspect of the custom assertion is the determination of the message type that is handled. The message type is read from the SOAP Body:

image

Here we use the getDataNode() helper method to execute XPath queries against the mBody element – in this case to derive the first child node within the SOAP Body.

When payload elements are to be extracted, this is done in a similar fashion:

image
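A simplified sketch of these two lookups with the standard javax.xml.xpath API – the real getDataNode() helper and the nsContext and envelope variables are assumptions:

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

XPath xpath = XPathFactory.newInstance().newXPath();

// message type: the local name of the first element child of the SOAP Body
String messageType = xpath.evaluate("local-name(./*[1])", mBody);

// payload extraction: evaluate a configured expression, e.g. for the carrier code
xpath.setNamespaceContext(nsContext); // resolves the soap, flig and com prefixes
String carrierCode = xpath.evaluate(
    "/soap:Envelope/soap:Body/flig:getFlightDetailsRequest/flig:FlightCode/com:CarrierCode",
    envelope);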

The report on the service execution is created like this:

image

The policy can be attached to one service or – more commonly – to many services. Each policy attachment (aka policy binding) can be configured with property values that are specific to the service and to how the policy should act in the context of that service.

The file SOASuiteServiceExecutionReporterPolicyFile.xml contains the definition of the custom policy. This file is deployed to the runtime environment and also uploaded to the FMW Control, as we will see later on. This file defines the policy, its meta data including its properties etc.

image

The file policy-config.xml is another link in the chain. It joins the policy definition from the previous file with the Java Class.

image

Deploy Policy Artifacts to Fusion Middleware Infrastructure

Create a deployment profile (simple Java Archive) for the JDeveloper project. Deploy the project to a JAR file using this profile.

image

Copy JAR file to the WLS DOMAIN\lib directory.

Using the target information in the EM FMW Control, I find out the exact file location for the WebLogic domain that hosts the SOA Suite:

image

The lib directory under this domain home is where the jar file should be moved.

Restart the WebLogic domain.

 

Import Policy Definition into Fusion Middleware Infrastructure

Start EM FMW Control.

image

Navigate to WebLogic Domain – soa_domain | Web Services | WSM Policies.

Click on Import

image

Import a zip file with the appropriate structure (this means it should contain a folder structure of META-INF\policies\some-custom-folder-name\policyname.xml):

image

by clicking on Import

image

and selecting the right zip file:

image

The result is reported back:

image

and the policy is listed:

image

Details:

Note that the policy is enabled, local optimization is off, and the policy applies to service bindings (not SCA components, although that could be an option too) and is in the category Service Endpoint.

image

and on the assertion:

image

The policy is ready for attachment to service bindings.

 

Attach Policy to SOA Composite Service Bindings

Open the SOA Composite, such as the FlightService composite shown below. Click on the Service Binding to which the policy is to be attached:

image

Open the Policies tab:

image

Click on the button Attach/Detach to open the dialog where policies can be attached to the service binding.

image 

Select the amis/monitoring policy. Click on Attach to bind this policy to the service binding.

Click on OK to confirm the policy attachment.

Click on Override Policy Configuration to set the property values that apply specifically to this policy attachment:

image

The properties that are defined in the policy configuration file – SOASuiteServiceExecutionReporterPolicyFile.xml – are listed and the current values are shown. These values can now be overridden for this attachment of the policy to the FlightService.

image

Click on Apply to confirm the property values.

At this point, the policy is primed for action for the FlightService.

Test the Custom Policy Activity

By invoking the various FlightService operations, we can now see the policy in action.

image

The effect of this call is reported by the custom policy in the log-file:

image

A call to another operation results in a similar report:

image

in the log file:

 

image

Note: even services to which the policy is attached without any additional configuration override will have their execution reported. However, these reports obviously cannot include the operation name (only the message type), nor any values from the payload. Here is a report from the ConversionService that has the policy attached – without any configuration.

image

Resources

JSON parsing in Java – http://www.oracle.com/technetwork/articles/java/json-1973242.html and http://docs.oracle.com/javaee/7/api/javax/json/JsonReader.html.

Documentation for Fusion Middleware 12c (12.1.3)

Developing Extensible Applications for Oracle Web Services Manager –  http://docs.oracle.com/cd/E57014_01/owsm/extensibility/owsm-extensibility-create.htm#EXTGD153

Overriding Policy Configuration Properties – http://docs.oracle.com/cd/E57014_01/owsm/security/override-owsm-policy-config.htm#CACGHIFE

Managing Web Service Policies with Fusion Middleware Control – http://docs.oracle.com/cd/E57014_01/owsm/security/manage-owsm-policies.htm#OWSMS5573

XML Schema Reference for Predefined Assertions – http://docs.oracle.com/cd/E57014_01/owsm/security/owsm-assertion-schema.htm

Stepping Through Sample Custom Assertions – https://docs.oracle.com/middleware/1213/owsm/extensibility/owsm-extensibility-samples.htm#EXTGD162

The post Oracle SOA Suite 12c – Create, Deploy, Attach and Configure a Custom OWSM Policy – to report on service execution appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/04/01/oracle-soa-suite-12c-create-deploy-attach-and-configure-a-custom-owsm-policy-to-report-on-service-execution/feed/ 0
Continuous Delivery Maturity Model – Dutch version https://technology.amis.nl/2015/04/01/continuous-delivery-maturity-model-nederlandse-versie/ https://technology.amis.nl/2015/04/01/continuous-delivery-maturity-model-nederlandse-versie/#comments Wed, 01 Apr 2015 08:07:08 +0000 https://technology.amis.nl/?p=35108 The methods and techniques for Continuous Delivery are attracting more and more interest. This approach often proves to be the successful strategy behind achieving real “business agility”. Many organizations know very well why Continuous Delivery is important, but struggle with the question “how do you actually do this?”. How do you start with Continuous Delivery and how do you ensure [...]

The post Continuous Delivery Maturity Model – Dutch version appeared first on AMIS Oracle and Java Blog.

]]>
The methods and techniques for Continuous Delivery are attracting more and more interest. This approach often proves to be the successful strategy behind achieving real “business agility”. Many organizations know very well why Continuous Delivery is important, but struggle with the question “how do you actually do this?”. How do you start with Continuous Delivery, and how do you make the result last? The Continuous Delivery Maturity Model helps to bring structure to, and build understanding of, the core aspects of introducing Continuous Delivery in your organization.

Download the full version of the Continuous Delivery Maturity Model – Dutch version document.

Why a Continuous Delivery Maturity Model?

Continuous Delivery is about gaining an overview of all the aspects involved in developing and releasing software. For organizations of any size this is a process with a large number of steps and activities. The complete process of developing and releasing software is often a lengthy and complex one, in which a large number of experts and departments are involved who have to work together to overcome a large number of obstacles. This can lead to an unmanageable amount of activities needed for the introduction. Frequently heard questions are: where should we start, do we have to do everything or can we leave things out, and which things yield results fastest?
The Continuous Delivery Maturity Model answers these questions. It offers a foothold and gives structure to the introduction of Continuous Delivery and its underlying components. The model was developed by Andreas Rehn, Tobias Palmborg and Patrik Boström and is based on a large number of articles and blogs on this subject, the book “Continuous Delivery” by Jez Humble & David Farley, the whitepaper Enterprise Continuous Delivery Model, and a wealth of personal experience. The model aims at a broader introduction of Continuous Delivery that goes beyond “automation” and covers all the aspects needed to introduce Continuous Delivery in any organization.

 

Continuous Delivery Maturity Model

Continuous Delivery Maturity Model

The post Continuous Delivery Maturity Model – Dutch version appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/04/01/continuous-delivery-maturity-model-nederlandse-versie/feed/ 0
Exposing JMS queues and topics with a JAX-WS webservice https://technology.amis.nl/2015/03/30/exposing-jms-queues-and-topics-with-a-jax-ws-webservice/ https://technology.amis.nl/2015/03/30/exposing-jms-queues-and-topics-with-a-jax-ws-webservice/#comments Mon, 30 Mar 2015 17:18:39 +0000 https://technology.amis.nl/?p=35099 Everyone can do HTTP calls and thus call most webservices. Interfacing with JMS queues or topics though is a bit more difficult (when not using Oracle SOA Suite). An alternative is using custom code. This usually requires libraries, JNDI lookups, opening connections and such. Because I wanted to make it easy for myself to put [...]

The post Exposing JMS queues and topics with a JAX-WS webservice appeared first on AMIS Oracle and Java Blog.

]]>
Everyone can do HTTP calls and thus call most webservices. Interfacing with JMS queues or topics though is a bit more difficult (when not using Oracle SOA Suite). An alternative is using custom code. This usually requires libraries, JNDI lookups, opening connections and such. Because I wanted to make it easy for myself to put stuff on queues and topics, I created a simple JAX-WS wrapper service. By using this service, JMS suddenly becomes a whole lot easier.
JAXWStoJMS

 

Implementation

If you just want to download and use the code, go to the usage section. I wrote the code in a short time span. It could use some improvements to make the request message better and to allow dequeueing. Also, I have not tested it under load, and the connection cleanup could be nicer.

Getting started

The implementation is relatively straightforward if you’re a bit familiar with JMS programming. There are some things to mind though. The first thing I encountered was some difficulty after I selected the JAX-WS Sun reference implementation in JDeveloper when creating my JAX-WS webservice. I should of course have selected the WebLogic implementation to avoid issues (such as a missing metro-default.xml, and missing classes after having added that file). I deleted the application and started over again. No issues the second time.

nojaxwsri

This next part is also shown in the title image. I first obtain the Context, which is easy since the webservice is running in the application server. Using this context you can obtain a Destination by doing a JNDI lookup, and in the same way a ConnectionFactory. Using this ConnectionFactory you can obtain… yes… a Connection! This Connection can be used to obtain a Session. This Session in turn can be used to create a TextMessage and a MessageProducer. You can imagine what those two can do together.

Avoid separate code for queues and topics

It is important to realize that for this implementation it is not relevant whether you are posting to a queue or a topic. Specific subtypes exist for topics and queues, but you can just as well use the Destination class itself. The same goes for the ConnectionFactory. Using these common classes avoids duplication in the code, as the sketch below illustrates.
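In outline – and simplified, leaving out the headers, properties and Base64 handling discussed below – the send logic looks like this:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// works for queues and topics alike, since only the generic Destination
// and ConnectionFactory types are used
public void sendText(String destinationJndi, String connectionFactoryJndi, String text) throws Exception {
    InitialContext context = new InitialContext(); // the service runs inside the application server
    Destination destination = (Destination) context.lookup(destinationJndi);
    ConnectionFactory factory = (ConnectionFactory) context.lookup(connectionFactoryJndi);
    Connection connection = factory.createConnection();
    try {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        TextMessage message = session.createTextMessage(text);
        MessageProducer producer = session.createProducer(destination);
        producer.send(message);
    } finally {
        connection.close();
    }
}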

JMSProperties and JMSHeaders

JMSProperties

I didn’t like this part. The JMSProperties are custom properties which can have a specific type, such as integer, string, float, double or boolean. There are separate methods on TextMessage instances to set these different types. In an XSD this would have been a choice. I didn’t do contract-first development though, and a Java implementation of an XSD choice isn’t something which can be called pretty (http://blog.bdoughan.com/2011/04/xml-schema-to-java-xsd-choice.html). Thus I supplied a string and an enum indicating the type, in order to map it to the correct method and set the property – roughly as sketched below.
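A sketch of that mapping – the enum name and values are assumptions, but the setXxxProperty methods are the standard JMS API:

public enum PropertyType { STRING, INT, FLOAT, DOUBLE, BOOLEAN }

// map the string value to the typed setter that matches the indicated type
public static void setProperty(javax.jms.TextMessage message, String name, String value,
                               PropertyType type) throws javax.jms.JMSException {
    switch (type) {
        case INT:     message.setIntProperty(name, Integer.parseInt(value));         break;
        case FLOAT:   message.setFloatProperty(name, Float.parseFloat(value));       break;
        case DOUBLE:  message.setDoubleProperty(name, Double.parseDouble(value));    break;
        case BOOLEAN: message.setBooleanProperty(name, Boolean.parseBoolean(value)); break;
        default:      message.setStringProperty(name, value);
    }
}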

JMSHeaders

The JMSHeaders also weren’t fun. The TextMessage class has several methods specific to individual headers! What I wanted, though, was just to specify name/value pairs and let it set the value based on that. I was required to make a mapping to the header-specific methods of the TextMessage class and do a type conversion from string to the input of the specific method. This would have been easier with Oracle BPEL and invoke activity properties.

Base64

I chose to supply the message as Base64. Why? Well, because escaping XML doesn’t look good and we’re not even sure every message is going to be XML. We might want to send JSON. JSON escapes differently. To avoid escaping issues altogether, Base64 always works. I used Apache Commons Codec to do the Base64 part. For quick online encoding/decoding you can use something like: https://www.base64encode.org/. Beware though not to feed the site with business-sensitive information.
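With Commons Codec, the decoding (and, for callers of the service, the encoding) is a one-liner. A fragment, assuming the payload variable names:

import java.nio.charset.StandardCharsets;
import org.apache.commons.codec.binary.Base64;

// decode the Base64 payload from the request before putting it on the queue
String messageText = new String(Base64.decodeBase64(base64Payload), StandardCharsets.UTF_8);

// and the reverse, for building a request to the service
String encodedPayload = Base64.encodeBase64String(messageText.getBytes(StandardCharsets.UTF_8));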

Usage

You can download the code here. The project is specifically written to run on Weblogic server (developed on the 12.1.3 SOA Suite quickstart). A WAR is included. It might also run on older SOA Suite versions with some minor changes.

First you have to create a queue or topic. A queue is easiest for testing. You can look at, for example, http://middlewaremagic.com/weblogic/?p=1987 to see how to create a queue. I’ve created a queue called MyQueue, which I supply as the JNDI name.

After you deploy the service, you can call it using the Enterprise Manager test console or SOAP UI or anything which can do HTTP. After a call you can verify in the Weblogic console the message has arrived.

weblogicem

Weblogic console

Warning

Beware though that you are creating a hole in the WebLogic security layer by exposing JMS queues and topics to ‘the outside’. This webservice needs some pretty good security. I therefore recommend using it only for development and testing purposes and avoiding it in a production environment.

The post Exposing JMS queues and topics with a JAX-WS webservice appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/30/exposing-jms-queues-and-topics-with-a-jax-ws-webservice/feed/ 1