AMIS Oracle and Java Blog, https://technology.amis.nl, Friends of Oracle and Java

SIG Architectuur 27 augustus (SIG Architecture, 27 August)
https://technology.amis.nl/2015/08/18/sig-architectuur-27-augustus/ (Tue, 18 Aug 2015)

On Thursday evening 27 August, AMIS organizes a SIG (Special Interest Group) on Architecture. The SIGs are organized for and by our own employees; some SIGs are also open to externals. Would you like to attend this SIG Architecture? Then send an email to marketing@amis.nl and let us know why you want to be there and what your current role is.

Contents

In this SIG we want to come up with a number of application architecture templates for various situations and challenges. We will work out these templates in a number of small groups, after which each team presents its architecture solutions and the reasoning behind them. The main point is the process of arriving at solutions together.

The situations we will work out could for example be: potential customers of different types (complex or simple, with little or much money, etc.) and/or different challenges (digital user interface, B2B, monitoring and reporting, harmonizing the infrastructure/platform, etc.). In the resulting solutions, both the regular and the Cloud product offerings of Oracle can be used.

This SIG Architecture is an interactive and creative session in which we share experiences and spar with each other, so that we become better able to think along, discuss, and advise about combining technology components into well-integrated solutions.

Date: 27 August 2015

Time: 18:00 to 21:00

Costs: this SIG is free to attend.

Target audience: IT architects who work with Oracle and are interested in gaining and sharing knowledge with peers.

Location and parking: the meeting takes place at our office at Edisonbaan 15 in Nieuwegein. You can park for free.

A Short Note on Pesky CSSCAN Results Before a Characterset Change
https://technology.amis.nl/2015/08/16/a-short-note-on-pesky-csscan-results-before-a-characterset-change/ (Sun, 16 Aug 2015)

A client of mine was busy correcting one of his development streets (DTAP pipelines), and as part of that the NLS character sets of about ten 11.2.0.4 EE databases had to be changed.
Most of the databases were originally configured with WE8ISO8859P15, others with WE8ISO8859P1, and all of them had to be converted to WE8MSWIN1252. Some were single instances (export, development and test), some RAC (acceptance), and some RAC plus Data Guard (production; for the procedure see Doc ID 1124165.1).

The initial results of the csscan runs always showed table SYS.REG$ with a couple of LOSSY entries, and all of them indicated 1 to 4 user tables containing LOSSY (or Convertible) data. The user tables were truncated and imported again after the conversion. And then there were the (RAC) instances showing troublesome entries in some WRI$_%, WRH$_% and/or WRR$_% tables, including table SYS.WRI$_ADV_MESSAGE_GROUPS.

For most of these tables we could figure out quite quickly how to remedy the entries, but table SYS.WRI$_ADV_MESSAGE_GROUPS eluded us for a while. All we found on the internet, in the Oracle documentation and on MySupport was about other specific SYS.WRI$ or WRH$ tables; SYS.WRI$_ADV_MESSAGE_GROUPS was never mentioned in connection with csscans. So, I tried to figure out what this table is actually used for and found that…

… WRM$ tables (10 tables) contain the metadata for the Automatic Workload Repository (AWR).
… WRH$ tables (125 tables) store historical data or snapshots of the workload history collected by the MMON process. They can be purged by first reducing the retention time for the history, followed by purging the statistics:

exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(0);
exec DBMS_STATS.PURGE_STATS(sysdate);

According to Oracle Doc ID 258904.1, these two types of tables must not be truncated, but should be purged with the DBMS_STATS package to keep them from harm.

… WRR$ tables (31 tables) support the workload replay functionality.
… WRI$ tables (96 tables) are related to the advisory functions.

But my pesky little SYS.WRI$_ADV_MESSAGE_GROUPS table turned out to belong to the Task Scheduler, and the Database Migration Utility (DMU) sometimes needs this table to be cleared as well (Doc ID 2018250.1). Luckily, tasks were not used in these databases, so finding out which task entry was to blame would have been pointless and too time consuming. We therefore decided to remove all tasks, and found this command to do so:

exec DBMS_ADVISOR.DELETE_TASK('%');

… and really, it worked!

Table SYS.WRI$_ADV_MESSAGE_GROUPS was clean and we could finally execute CSALTER.PLB without any further problems.
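
For reference, the overall conversion sequence around these fixes looked roughly like the sketch below. This is a hedged outline only: the exact csscan parameters, and the RAC/Data Guard specifics from Doc ID 1124165.1, depend on the environment.

csscan "sys/password as sysdba" FULL=Y TOCHAR=WE8MSWIN1252 LOG=cs_check CAPTURE=Y ARRAY=1000000 PROCESS=2

# after remediation (truncating/re-importing user tables, purging statistics,
# deleting the advisor tasks), run the conversion in a restricted instance:
sqlplus / as sysdba
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP RESTRICT;
SQL> @?/rdbms/admin/csalter.plb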

Generation of VM image for Oracle Event Processor and Stream Explorer using Vagrant and Puppet
https://technology.amis.nl/2015/08/09/generation-of-vm-image-for-oracle-event-processor-and-stream-explorer-using-vagrant-and-puppet/ (Sun, 09 Aug 2015)

In this article, I will introduce a set of Vagrant and Puppet configuration files that automate the creation of a Linux Virtual Box VM with Oracle Event Processor and Stream Explorer installed in it. The installation process that is automated here is described in all its manual glory in my earlier article Oracle StreamExplorer and Oracle Event Processor – installation instructions to quickly get going, which describes which files to download, which Linux command-line steps to go through and which wizards to run. The article you are reading is also a sequel to my article Quickly produce a Linux 64 bit Ubuntu 14.04 Desktop environment using Vagrant and Puppet – as starting point for Oracle installations, which describes how Vagrant and Puppet can be used to generate VM images that provide a good foundation for installing [Oracle] software on.

The sources for the current article are located in a GitHub Repository: vagrant-ubuntu1404-puppet-oracle-oep-and-sx.

The steps you need to go through in order to prepare for the completely automated generation of the VM images for OEP and Stream Explorer are:

  • Install Vagrant and Virtual Box
  • Clone Git repository from GitHub
  • Download software package for OEP and Stream Explorer as well as a JDK
  • Run Vagrant (that in turn will run the Puppet provider to perform all installation tasks)
  • Restart VM and start OEP domain

Using the approach and the Git repository introduced in this article, the creation of a VM for OEP/StreamExplorer 12c becomes a breeze!

Install Vagrant and Virtual Box

To get going, you need to first install Vagrant and Oracle Virtual Box, as discussed in many places including my article: https://technology.amis.nl/2014/07/29/fastest-way-to-a-virtual-machine-with-jdeveloper-12-1-3-and-oracle-database-xe-11gr2-on-ubuntu-linux-64-bit/.

Clone Git repository from GitHub

Then clone the GitHub repository – https://github.com/lucasjellema/vagrant-ubuntu1404-puppet-oracle-oep-and-sx – to your local machine, or simply download the repository as a zip file and expand it in a local directory. I am assuming a Windows host, but the steps are almost the same for Linux, MacOS or other hosts.

Download software packages for OEP and Stream Explorer as well as a JDK

The download for the OEP and Stream Explorer software is described in this article: https://technology.amis.nl/2015/08/09/oracle-streamexplorer-and-oracle-event-processor-installation-instructions-to-quickly-get-going/. Move the downloaded files to the files directory under the vagrant project home.

Downloading the JDK is described in this article: https://technology.amis.nl/2015/08/09/quickly-produce-a-linux-64-bit-ubuntu-14-04-desktop-environment-using-vagrant-and-puppet-as-starting-point-for-oracle-installations/. Move the downloaded files also to the files directory under the vagrant project home.

The contents of the files directory should then look like this:
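A sketch of the expected layout (names as used in this and the related articles; your listing may differ slightly):

ls files/
# 20636710/                  (Stream Explorer patch, extracted from ofm_sx_generic_12.1.3.0.1_disk1_2of2.zip)
# fmw_12.1.3.0.0_oep.jar     (extracted from ofm_sx_generic_12.1.3.0.0_disk1_1of2.zip)
# jdk-7u79-linux-x64.tar.gz
# oraInst.loc  oep_install_responsefile  user_projects/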

Some of these files – such as oraInst.loc and oep_install_responsefile – and the directory user_projects were cloned with the Git repository. These were created during the manual installation of OEP and SX and subsequently saved for reuse in the automated installation process.


Run Vagrant (that in turn will run the Puppet provider to perform all installation tasks)

Go to the directory into which you cloned the Git repository – the directory that contains the Vagrantfile. You may want to edit this file – change the name of the VM perhaps or change the folder mappings.

When done, you can run vagrant and have the Virtual Box VM created:

vagrant up


That is all it takes. Leave the vagrant process alone for some time – up to 30 minutes depending on internet download speed – and when you come back, the VM will be baked for you.


Explaining what is happening (a little)

Compared to the Puppet manifest base.pp used to create a fairly basic Ubuntu 14.04 machine in https://technology.amis.nl/2015/08/09/quickly-produce-a-linux-64-bit-ubuntu-14-04-desktop-environment-using-vagrant-and-puppet-as-starting-point-for-oracle-installations/, I have added class oep_installation and class sx_installation. These use the new modules oep and opatch. Note: module opatch is created by Edwin Biemond (https://github.com/biemond/puppet/blob/master/modules/wls/manifests/opatch.pp) and reused here. Note that module opatch defines its own function opatch using resources in the lib subdirectory.

class oep_installation first uses resource oep::install to run the Oracle Universal Installer in silent mode, feeding it the response file that was saved during an earlier manual installation (as described in the installation article mentioned above).
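
Under the covers this boils down to a silent installation. A minimal sketch of the equivalent command line, assuming the Vagrant project directory is mapped to /vagrant inside the VM (the default Vagrant mapping):

java -jar /vagrant/files/fmw_12.1.3.0.0_oep.jar -silent \
  -responseFile /vagrant/files/oep_install_responsefile \
  -invPtrLoc /vagrant/files/oraInst.loc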


It then uses oep::domain to create domain sx_domain – basically by copying the files that we collected when this domain was manually created in an environment with exactly the same setup.


class sx_installation uses the opatch module to install Stream Explorer, which is distributed as an Oracle patch with id 20636710 in a zip file called ofm_sx_generic_12.1.3.0.1_disk1_2of2.zip. opatch::apply can be used for the installation of any such patch.

Note: far too many specific details are in the configuration definitions, which is not the correct way in Puppet. I should read settings – using the Hiera mechanism – from a YAML or JSON file. I have not yet gotten round to adding that for this module. Use my declarations perhaps for inspiration, but certainly not as a reference!


Restart VM and start OEP domain

Stop the VM using vagrant halt. Then start the VM again using vagrant up.


This means we can log in to the desktop. Log in as user oracle (display name developer) with password oracle and open a terminal.

The OEP Domain is started using the script: /u01/app/oracle/OEP_Home/user_projects/domains/sx_domain/defaultserver/startwlevs.sh


When started, Stream Explorer can be accessed in the browser at http://localhost:9002/sx.

Bonus: Install JDeveloper

Not yet automated is the installation of JDeveloper – the IDE for OEP. You may want to follow the instructions for manual installation of JDeveloper as provided in this article: https://technology.amis.nl/2015/08/09/oracle-streamexplorer-and-oracle-event-processor-installation-instructions-to-quickly-get-going/.

Resources

Important resources for this article were – as always when talking about Vagrant or Puppet – the articles and sources from Edwin Biemond. I have heavily leaned on his Puppet modules: https://github.com/biemond/puppet/tree/master/modules.

Very useful was Guido Schmutz’s article on automating the installation of OEP and SX on Docker containers (https://guidoschmutz.wordpress.com/2015/03/29/installing-oracle-stream-explorer-in-a-docker-image/) – something I want to investigate as well, using Puppet as the configurator for the container – rather than the Docker file all by itself.

I have also used many articles and documents on Puppet (and Vagrant), as it is so easy to make mistakes.

Oracle StreamExplorer and Oracle Event Processor – installation instructions to quickly get going
https://technology.amis.nl/2015/08/09/oracle-streamexplorer-and-oracle-event-processor-installation-instructions-to-quickly-get-going/ (Sun, 09 Aug 2015)

This article discusses the installation of Oracle Event Processor 12c on Linux 64 bit and the subsequent installation of Stream Explorer on top of OEP 12c. More specifically, it assumes the environment that can be produced following the instructions in my article Quickly produce a Linux 64 bit Ubuntu 14.04 Desktop environment using Vagrant and Puppet – as starting point for Oracle installations: Ubuntu 14.04 64 bit plus desktop and JDK 7u79. Other Linux 64 bit environments are probably fine as well (even better, maybe, as Ubuntu is not officially certified for OEP). Note that in a subsequent article I will leverage Vagrant and Puppet to automatically install OEP and Stream Explorer, so as to stamp out VM images for researching OEP and SX without manual actions.

I assume that the Linux environment has a user oracle in a group oracle and a directory (tree) /u01/app/oracle of which user oracle is the owner. This directory is where the ORACLE_HOME will be based.

The following steps are required:

1. Download Software Packages for OEP and SX and JDeveloper

2. Install Oracle Event Processor

3. Install Stream Explorer (as OPatch on top of OEP)

4. Create an OEP Domain

5. Start the OEP Domain and access Stream Explorer in browser

optional: 6. Install JDeveloper and create a connection to OEP domain


1. Download Software Packages for OEP and SX and JDeveloper

Go to http://www.oracle.com/technetwork/middleware/complex-event-processing/downloads/index.html, accept the OTN license agreement, and download three files:

  • OEP – ofm_sx_generic_12.1.3.0.0_disk1_1of2.zip
  • Stream Explorer – ofm_sx_generic_12.1.3.0.1_disk1_2of2.zip
  • (optional) JDeveloper – fmw_12.1.3.0.0_soaqs_Disk1_1of1.zip


If you are using the Virtual Box VM created with Vagrant as discussed in my previous article, then move these files to the files directory under the home directory of the Vagrant project (and extract the zip files while you are at it).


2. Install Oracle Event Processor

Start a terminal window as user oracle and navigate to the directory that contains the software packages.


Run the installer:

java -jar fmw_12.1.3.0.0_oep.jar

This will start the installation wizard.

Set the Inventory Directory to /u01/app/oracle/oraInventory. Click OK.


Set Oracle Home to /u01/app/oracle/OEP_Home.

Ubuntu is not officially certified. However, for development purposes it will do the job. So press Next despite the warning.

Click on Save Response File.


Save the response file, so that we can use it at a later stage to automate the installation process using the silent install procedure.


Then click Install and let the installation run to completion.

Press Finish to return to the command line.

3. Install Stream Explorer (as OPatch on top of OEP)

Stream Explorer is distributed as a patch on top of Oracle Event Processor 12.1.3, to be installed through the OPatch tool. Extracting file ofm_sx_generic_12.1.3.0.1_disk1_2of2.zip results in a folder called 20636710, which contains Stream Explorer as a patch with that patch id.

Run OPatch to install the patch like this:

  • set the environment variable ORACLE_HOME to /u01/app/oracle/OEP_Home
  • navigate to the directory 20636710
  • run the OPatch tool using $ORACLE_HOME/OPatch/opatch apply
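
In shell terms, using the locations from this article:

export ORACLE_HOME=/u01/app/oracle/OEP_Home
cd /path/to/20636710          # the directory extracted from the Stream Explorer zip
$ORACLE_HOME/OPatch/opatch apply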


OPatch starts and analyzes the situation.

Then the patch is applied.

4. Create an OEP Domain

Before we can start creating Stream Explorer explorations, we need to configure an OEP domain. Configure your OEP domain with script /u01/app/oracle/OEP_Home/oep/common/bin/config.sh.

This will start the Configuration Wizard.

Set the password to weblogic1, or something better.

Set the Server Name to defaultserver and the port to 9002 (these are the default values).

Set the password for the keystore to weblogic1, or something better.

Set the name of the domain to sx_domain, or something you like better.


And look what the wizard has created for us: a directory structure user_projects under OEP_Home (/u01/app/oracle/OEP_Home).

5. Start the OEP Domain and access Stream Explorer in browser

The OEP domain is started using the script [OEP_Home]/user_projects/domains/[domain_name]/defaultserver/startwlevs.sh

In our case this means:

/u01/app/oracle/OEP_Home/user_projects/domains/sx_domain/defaultserver/startwlevs.sh


When started, Stream Explorer can be accessed in the browser at http://localhost:9002/sx.

The Visualizer can be accessed as well, at http://localhost:9002/wlevs. It requires Flash to be installed first.

The OEP domain can be stopped like this:

(from a new terminal): /u01/app/oracle/OEP_Home/user_projects/domains/sx_domain/defaultserver/stopwlevs.sh


optional: 6. Install JDeveloper and create a connection to OEP domain

JDeveloper – and specifically the JDeveloper edition shipped as part of the SOA Suite 12c quick start installer – provides the IDE for Oracle Event Processor. You do not need JDeveloper for your first steps with Stream Explorer. However, once you get more serious, it is very likely that you will need the JDeveloper IDE as well.

JDeveloper is installed using two jar files extracted from fmw_12.1.3.0.0_soaqs_Disk1_1of1.zip (note: the zip file can be downloaded from http://www.oracle.com/technetwork/middleware/complex-event-processing/downloads/index.html and many other places on OTN and eDelivery).

To install, navigate to the directory that contains the two jar files and run:

java -jar fmw_12.1.3.0.0_soa_quickstart.jar


Follow the steps in the installer to install JDeveloper (see for example https://guidoschmutz.wordpress.com/2014/07/06/installing-oracle-soa-suite-12c-quick-start-distribution/ ).


Set the Oracle Home for this JDeveloper installation to /u01/app/oracle/JDeveloper_Home.


Run JDeveloper

JDEV_HOME/jdeveloper/jdev/bin/jdev


and create an OEP Connection to the OEP Domain sx_domain.


The OEP Domain can be manipulated from within JDeveloper as well as on the command line.


Resources

I have used these resources on the installation of Stream Explorer (and Oracle Event Processor):

Official Oracle Documentation: http://docs.oracle.com/middleware/1213/eventprocessing/SXGSG/GUID-45846CCC-3A27-4033-80C4-E90BE3F9233A.htm#SXGSG128

Patrick Sinke’s article: http://blog.whitehorses.nl/2015/03/13/setting-up-oracle-stream-explorer-12-1-3/

Niall’s take on the installation process: http://niallcblogs.blogspot.nl/2015/03/381-oracle-stream-explorer-installation.html

Guido Schmutz on OEP & SX on Docker containers: https://guidoschmutz.wordpress.com/2015/03/29/installing-oracle-stream-explorer-in-a-docker-image/

Quickly produce a Linux 64 bit Ubuntu 14.04 Desktop environment using Vagrant and Puppet – as starting point for Oracle installations
https://technology.amis.nl/2015/08/09/quickly-produce-a-linux-64-bit-ubuntu-14-04-desktop-environment-using-vagrant-and-puppet-as-starting-point-for-oracle-installations/ (Sun, 09 Aug 2015)

My objective in this article: create a generally reusable Linux 64 bit (Ubuntu 14.04 with Desktop) Virtual Box image, based on Vagrant and Puppet, so as to stamp out multiple copies of the image. The image should support Puppet and Git and must have a JDK installed.

I frequently want to try out new software. On many occasions this is software from Oracle, frequently still in beta. Usually this software runs best on Linux (even though my laptop still boots into Windows 7). Besides, I want an isolated environment in which to run said software: an environment that I can easily create, start, and also stop and remove.

In the past I have written a bit on both Vagrant – an open source product that allows us to script the creation of virtual machine images with Virtual Box (as well as Docker containers and AWS machine images) – and on Puppet – another open source project that supports configuring environments based on declarative descriptions of the desired end state. I have been introduced to both tools – that happen to work together very well – by Edwin Biemond (formerly of AMIS and currently of Oracle Corporation), who has done a ton of work and donated his findings to the community through his blog (http://biemond.blogspot.nl/) and his many GitHub contributions (https://github.com/biemond).

My objective as stated at the beginning of this article is actually not very demanding at all. Once you are somewhat versed in Vagrant and Puppet, this is almost the simplest thing to do. Yet it is a useful building block for me.

You will find the sources – Vagrant and Puppet scripts – in this GitHub repository: https://github.com/lucasjellema/vagrant-ubuntu1404-puppet-java. Note that the JDK7 module is copied from Edwin's Puppet module: https://github.com/biemond/puppet/tree/master/modules/jdk7.

Preparation

To get going, you need to first install Vagrant and Oracle Virtual Box, as discussed in many places including my article: https://technology.amis.nl/2014/07/29/fastest-way-to-a-virtual-machine-with-jdeveloper-12-1-3-and-oracle-database-xe-11gr2-on-ubuntu-linux-64-bit/.

Then clone the GitHub repository – https://github.com/lucasjellema/vagrant-ubuntu1404-puppet-java – to your local machine, or simply download the repository as a zip file and expand it in a local directory. I am assuming a Windows host, but the steps are almost the same for Linux, MacOS or other hosts.

You need to download a JDK installation file (for the target operating system, which is Linux 64 bit) to the files subdirectory. Go to http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html, download the jdk-###-linux-x64.tar.gz file and copy it to the files directory.


Also download UnlimitedJCEPolicyJDK7.zip from http://www.oracle.com/technetwork/java/javase/downloads/jce-7-download-432124.html and copy it to the files directory.


Looking at the Vagrant file for stamping out the Virtual Box machine

Open file vagrantfile to see what is defined inside:

Select the box image: ubuntu/trusty64 (available from https://atlas.hashicorp.com/ubuntu/boxes/trusty64).

Ensure an ubuntu.sh shell script is run on the image to configure Puppet support in the guest (again, this could be done in various ways). This script is inherited from Edwin, who was inspired by http://garylarizza.com/blog/2013/02/01/repeatable-puppet-development-with-vagrant/ and https://github.com/hashicorp/puppet-bootstrap/blob/master/ubuntu.sh.

Map relevant folders on the host to the VM (guest).

Note that the Vagrantfile also contains definitions regarding the physical resources allocated to the Virtual Box VM (RAM, video memory, CPUs) as well as a name.

Use the Puppet provisioner in Vagrant to run the Puppet manifest (plus modules) from within the guest, using the Puppet that was installed by the shell script.


Note: this section states that relative to the directory containing the vagrantfile, folder manifests contains the Puppet manifests to run – specifically the base.pp manifest – and directory modules contains the Puppet modules to be loaded and possibly included from the base.pp.

The directory structure for the Vagrant and Puppet resources thus consists of the vagrantfile at the top level, with the manifests, modules, and files directories next to it.

Puppet declarations for the target environment

When you look inside the base.pp Puppet manifest, you will find the declarative configurations that describe the Virtual Machine to be provisioned. Vagrant takes care of the foundation – Ubuntu 14.04 64 bit. Puppet does the fine-tuning.

Here we see how Puppet is instructed to ensure that the Linux packages git, build-essential and ubuntu-desktop are installed (which boils down to: install them if they are not already installed). The group and user oracle are created, the password for the user is set (also to oracle).


The instruction include java::install activates the class java::install in package java in modules\java\manifests\init.pp. This class in its turn leverages the jdk7 module to install a specific Java 7 JDK. It is currently configured to install Java 7u79, which should correspond with the JDK archive in the files directory (jdk-7u79-linux-x64.tar.gz). If you want to install a different JDK version, you need to update this java::install class. Note: directly updating the class is not the correct way in Puppet; I should read settings – using the Hiera mechanism – from a YAML or JSON file. I have not yet gotten round to adding that for this module.


Generating the Virtual Machine

With the setup out of the way, we can start stamping out clean Ubuntu 64 bit VM images with Desktop and Java set up, primed for action. We may want to do one final thing, especially if we are going to create multiple VM images, and that is define the title of the VM.

Then open a command prompt in the directory that contains the vagrantfile

and type: vagrant up.

This will fire off vagrant with the vagrantfile and have it do its thing.


At some point in the process, you will see Virtual Box being launched. This means the VM is running. Vagrant continues by installing the Virtual Box Guest Additions, used for mapping folders and other interaction between host and VM. Then, in the Vagrant logging, you will see output indicating the installation of Puppet into the VM image.


A little later, this Puppet instance is put to good use: to run the Puppet manifest that is available inside the Guest VM through the folder mapping to the host folder.



After several minutes – depending on what had to be downloaded to your local machine – Vagrant will complete its work (which was partially performed by Puppet working inside the VM).


Accessing the brand new VM

The new VM can be accessed through SSH from the command line, using the simple command:

vagrant ssh

This brings us into the VM as user vagrant. Have a look around.


And just to be sure about the JDK installation, check the Java version.

Alternatively, we can access the desktop – using remote desktop tools or by simply going through the Virtual Box GUI.


Note that the VM is stopped using vagrant halt (with shutdown) or vagrant suspend (a suspend effectively saves the exact point-in-time state of the machine). To get going again, use vagrant resume (after a suspend) or vagrant up (after a halt). To get rid of the VM altogether, use vagrant destroy (creating a new VM is easy enough now that we have the scripts for it). In short, see the commands below.
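
vagrant halt      # shut the VM down
vagrant suspend   # save the exact point-in-time state of the machine
vagrant resume    # continue after a suspend
vagrant up        # boot again after a halt
vagrant destroy   # remove the VM altogether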


When you change or extend the Puppet manifest and want to have it reapplied to the VM, you can use the command vagrant provision --provision-with puppet to only reapply the (changed) Puppet script. To learn more details about what is going on during Puppet provisioning driven from Vagrant, you can add this line inside the config.vm.provision entry for Puppet:

puppet.options = "--verbose --debug"


Er is geen tekort aan IT talent; er is een tekort aan IT competenties (There is no shortage of IT talent; there is a shortage of IT competencies)
https://technology.amis.nl/2015/08/07/geen-tekort-aan-it-talent-er-is-een-te-kort-aan-it-competenties/ (Fri, 07 Aug 2015)


In my work as a unit manager I regularly deal with resourcing questions. Do I have the right professionals? Which person can I assign to this engagement? Do I need to recruit extra people? Finding experts with the right competencies is very hard, and I hear the same from customers and partners. We often point at the universities (of applied sciences): they should properly educate our future consultants and equip them with the right competencies. A frequently heard statement is: "When they leave school they can still do almost nothing; we have to train them on the job."

How do school leavers learn new competencies?

Let me start with a practical example. Over the past four months, a team of students carried out an internship assignment in the area of security and identity management: not an easy subject for a group of third-year IT students. The knowledge they had was theoretical, based on two lectures on security. What characterized this group was their drive and enthusiasm. In a completely unfamiliar field they were able, with sufficient guidance and knowledge from AMIS experts, to build a cloud-based identity and access management solution, in a technology they had never heard of four months earlier. Their pleasure and eagerness to learn were remarkable: plenty of talent and the drive to learn competencies. After three months on the assignment they possessed competencies comparable to those of a medior-level security specialist. And I was impressed.

Waiting for candidates with the right competencies

The underlying challenge for IT managers is that the competencies in demand change very quickly, and that we set the bar very high for ourselves. Every now and then I see a request for a "Docker specialist with 5 years of experience" pass by, while the first release of Docker was only in 2013. On all our projects we want the super-specialist with endless experience. At the same time, organizations are less and less able to recruit professionals with sufficient competencies from the market: the people with the right competencies are already working on attractive assignments and are unavailable or far too expensive.

We can no longer afford to wait endlessly for the ideal candidate for our vacancy. The requested five-legged sheep (the candidate who can do everything) never turns up anyway. It is often much easier to train professionals in certain competencies than to recruit them. That means I have to look for candidates with talent instead of the requested competencies. Recruiters and IT managers who keep waiting should ask themselves whether this strategy is really effective.

What does it cost to recruit a candidate with the right competencies?

Recruiting a junior candidate can usually be done within a month; recruiting a senior candidate can take 3 to 6 months. Is your organization prepared to wait that long for a suitable candidate to be found? And that is apart from the fee a recruiter charges for introducing the candidate. It also costs us a lot of time to draw up a proper vacancy and get it approved within our own organization. By the time this process has been completed, the requested competencies are already outdated and the potential candidate has long since found another job. This stands in sharp contrast to the enormous number of starters on the labor market who cannot get an assignment because they lack the necessary experience. This group sometimes waits 2 to 3 years for the label "experienced". What a waste of time! Just imagine the competencies and work experience you could gain in those 2 to 3 years.

Talent must be recruited; competencies can be trained

If organizations are smart, we will stop waiting for the dream candidate and go looking for people with talent: curious professionals who try to get better every day and who are not afraid to reinvent themselves every three years. Organizations must learn not to stare blindly at the competencies our customers ask for; the talent to learn competencies is much more important. Consider how many competencies you can teach a talented candidate in the time it takes to find the ideal one: six months of training, plus a considerable education budget that we did not spend on a recruiter. So get to work with the real candidates of flesh and blood you speak to every day! Get them onto your projects and train them in the competencies. Give them trust and room to learn and grow, and you will soon have a team around you with the requested competencies. This works much better than waiting for that one "dream prince/princess" candidate.

A moral obligation for us as IT consultancy organizations

As IT consultancy organizations we have a moral obligation to share our expertise with universities and universities of applied sciences. We give guest lectures, contribute ideas to curricula, and offer concrete opportunities to students who are bursting with talent and looking for competencies. Above all, give these junior professionals the chance to develop themselves and build up their competencies.

Talent is everywhere

The heart of the matter is that there is no shortage of talent; we have to dare to stick our necks out and equip talent with the competencies we need. That is not only good for us as a company, but for our entire economy.

To quickly transfer scanned documents and pictures from the iPhone to PC and USB stick
https://technology.amis.nl/2015/08/05/to-quickly-transfer-scanned-documents-and-pictures-from-the-iphone-to-pc-and-usb-stick/ (Wed, 05 Aug 2015)

My challenge: how to scan a bunch of paper documents and store the electronic image files on my NAS.


At my disposal: an iPhone, wireless network and a Windows laptop. I also have a Facebook account that is configured on the iPhone.

The steps I went through:

1. Scan documents by taking pictures with iPhone


2. Open the Camera Roll. If you want to share with an Apple device (iPad, Mac), you can use the Photo Stream option, which allows access to the pictures directly on iCloud.

To share with other devices, email and Facebook are options, as is of course connecting the iPhone to a USB port. For the first two options:

Select images on the iPhone (in Edit mode in the Camera Roll), click Share


and Share with Facebook


(this requires the Facebook account to have been configured on the iPhone)


Note: when there are five or fewer pictures, you can also email them from the Share option.


3. Open your Facebook account


Note: to prepare for this step, I first installed the Firefox add-on Facedown by Achim Wehmann; see https://addons.mozilla.org/en-US/firefox/addon/facedown/.

Click on Albums.


Right-click on the album with the desired pictures (quite possibly called iOS Photos). The context menu should contain the option Download Facebook Album.


Click this item and select the target folder into which the image files will be downloaded.


The download takes place very rapidly.


4. Copy the files to the mapped NAS network drive

SOA Suite 12c: Collect & Deploy SCA composites & Service Bus artifacts using Maven
https://technology.amis.nl/2015/08/03/soa-suite-12c-collect-deploy-sca-composites-service-bus-artifacts-using-maven/ (Mon, 03 Aug 2015)

An artifact repository has many benefits for collaboration and governance of artifacts. In this blog post I will illustrate how you can fetch SCA composites and Service Bus artifacts from an artifact repository and deploy them. The purpose of this exercise is to show that you do not need loads of custom scripts for these simple tasks. Why reinvent the wheel when Oracle already provides it?

This example has been created for SOA Suite 12.1.3. This will not work as-is for 11g and earlier, since they lack OOTB Maven support for SOA Suite artifacts. In order to start using Maven to do command-line deployments, you need to have some Oracle artifacts in your repository. See http://biemond.blogspot.nl/2014/06/maven-support-for-1213-service-bus-soa.html on how to put them there. I have used two test projects which were already in the repository: an SCA composite called HelloWorld_1.0 and a Service Bus project, also called HelloWorld_1.0. In my example, the SCA composite is in the groupId nl.amis.smeetsm.composite and the Service Bus project is in the groupId nl.amis.smeetsm.servicebus. You can find information on how to deploy to an artifact repository (e.g. Nexus) here.
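
For a one-off upload of such an artifact, the standard Maven deploy-file goal can be used. A hedged example, in which the repository URL and the repositoryId (matching a server entry in settings.xml) are assumptions:

mvn deploy:deploy-file -DgroupId=nl.amis.smeetsm.composite -DartifactId=HelloWorld_1.0 \
  -Dversion=1.0 -Dpackaging=jar -Dfile=HelloWorld_1.0.jar \
  -DrepositoryId=nexus -Durl=http://localhost:8081/nexus/content/repositories/releases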

SCA Composite

Quick & dirty with few dependencies

I have described getting your SCA composite out of Nexus and into an environment here. The process described there has very few dependencies. First you manually download your jar file using the repository API and then you deploy it using a Maven command like:

mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=HelloWorld-1.0.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101

In order for this to work, you need to have a (dummy) pom.xml file in the current directory; you cannot use the project pom file for this. The only prerequisites (next to a working Maven installation) are:

  • the sar file
  • serverUrl and credentials of the server you need to deploy to

Notice that you do not even need an Oracle home location for this. In order to build the project from sources however, you do need an Oracle home.

Less quick & dirty using Maven

An alternative to the previously described method is to use a pom which has the artifact you want to deploy as a dependency. This way Maven obtains the artifact for you from the repository (configured in settings.xml). This is also a very useful method to combine artifacts in a greater context, such as a release. The Maven assembly plugin (which uses the configuration file unit-assembly.xml in this example) can be used to specify how to treat the downloaded artifacts. The format 'dir' specifies that the downloaded artifacts should be put in a specific directory as-is (not zipped or otherwise repackaged). Format 'zip' will (surprise!) zip the result so you can, for example, put it in your repository or somewhere else. The dependencySet directive indicates which dependencies should go to which directory. When combining Service Bus and SOA artifacts in a single pom, you can use this information to determine which artifact should be put in which directory, and this in turn determines which artifact should be deployed where.

You can for example use a pom.xml file like:

 <?xml version="1.0" encoding="UTF-8"?>  
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0               http://maven.apache.org/maven-v4_0_0.xsd">  
      <modelVersion>4.0.0</modelVersion>  
      <groupId>nl.amis.smeetsm.unit</groupId>  
      <artifactId>HelloWorld_1.0</artifactId>  
      <packaging>jar</packaging>  
      <version>1.0</version>  
      <name>HelloWorld_1.0</name>  
      <url>http://maven.apache.org</url>  
      <dependencies>  
           <dependency>  
                <groupId>nl.amis.smeetsm.composite</groupId>  
                <artifactId>HelloWorld_1.0</artifactId>  
                <version>1.0</version>  
                <type>jar</type>  
           </dependency>  
      </dependencies>  
      <build>  
           <plugins>  
                <plugin>  
                     <artifactId>maven-assembly-plugin</artifactId>  
                     <version>2.5.4</version>  
                     <configuration>  
                          <descriptors>  
                               <descriptor>unit-assembly.xml</descriptor>  
                          </descriptors>  
                     </configuration>  
                </plugin>  
           </plugins>  
      </build>  
 </project>

With a unit-assembly.xml file like

 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"  
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
   xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">  
      <id>unit</id>  
      <formats>  
           <format>dir</format>  
      </formats>  
      <dependencySets>  
           <dependencySet>  
                <outputDirectory>/unit/composite</outputDirectory>  
                <includes>  
                     <include>nl.amis.smeetsm.composite:*</include>  
                </includes>  
           </dependencySet>  
      </dependencySets>  
 </assembly>  

Using this method you also need the following in your settings.xml file so it can find the repository. In this example I have used a local Nexus repository.

 <mirror>  
      <id>nexus</id>  
      <name>Internal Nexus Mirror</name>  
      <url>http://localhost:8081/nexus/content/groups/public/</url>  
      <mirrorOf>*</mirrorOf>  
 </mirror>  

Then, in order to obtain the jar from the repository:

mvn assembly:single

And deploy it the same way as described above, only with a slightly longer location for the sar file:

mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=target/HelloWorld_1.0-1.0-unit/HelloWorld_1.0-1.0/unit/composite/HelloWorld_1.0-1.0.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101

Thus what you need here (next to a working Maven installation) is:

  • a settings.xml file containing a reference to the repository (you might be able to avoid this by providing it command-line)
  • a specific pom with the artifact you want to deploy specified as dependency
  • serverUrl and credentials of the server you want to deploy to

Service Bus

For the Service Bus in general the methods used to get artifacts in and out of an artifact repository are very similar to the SCA composites.

Getting the Service Bus sbar from an artifact repository to an environment does require the project's pom file, since you cannot specify an sbar file directly in a deploy command. The command to do the actual deployment also differs from deploying an SCA composite. You do require an Oracle home for this.

mvn pre-integration-test -DoracleHome=/home/maarten/Oracle/Middleware1213/Oracle_Home -DoracleUsername=weblogic -DoraclePassword=Welcome01 -DoracleServerUrl=http://localhost:7101

You can also use a method similar to the one described for the SCA composites. Mind though that you need the project pom file as a dependency as well.

 <?xml version="1.0" encoding="UTF-8"?>  
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0               http://maven.apache.org/maven-v4_0_0.xsd">  
      <modelVersion>4.0.0</modelVersion>  
      <groupId>nl.amis.smeetsm.unit</groupId>  
      <artifactId>HelloWorld_1.0</artifactId>  
      <packaging>jar</packaging>  
      <version>1.0</version>  
      <name>HelloWorld_1.0</name>  
      <url>http://maven.apache.org</url>  
      <dependencies>  
           <dependency>  
                <groupId>nl.amis.smeetsm.servicebus</groupId>  
                <artifactId>HelloWorld_1.0</artifactId>  
                <version>1.0</version>  
                <type>sbar</type>  
           </dependency>  
           <dependency>  
                <groupId>nl.amis.smeetsm.servicebus</groupId>  
                <artifactId>HelloWorld_1.0</artifactId>  
                <version>1.0</version>  
                <type>pom</type>  
           </dependency>  
      </dependencies>  
      <build>  
           <plugins>  
                <plugin>  
                     <artifactId>maven-assembly-plugin</artifactId>  
                     <version>2.5.4</version>  
                     <configuration>  
                          <descriptors>  
                               <descriptor>unit-assembly.xml</descriptor>  
                          </descriptors>  
                     </configuration>  
                </plugin>  
           </plugins>  
      </build>  
 </project>  

And a unit-assembly.xml like:

  <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3"  
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
   xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd">  
      <id>unit</id>  
      <formats>  
           <format>dir</format>  
      </formats>  
      <dependencySets>  
           <dependencySet>  
                <outputDirectory>/unit/servicebus</outputDirectory>  
                <includes>  
                     <include>nl.amis.smeetsm.servicebus:*</include>  
                </includes>  
           </dependencySet>  
      </dependencySets>  
 </assembly>

Thus what you need here (next to a working Maven installation) is:

  • an Oracle home location
  • a settings.xml file containing a reference to the repository (you might be able to avoid this by providing it command-line)
  • a specific pom with the artifact specified as dependency (this will fetch the sbar and pom file)
  • serverUrl and credentials of the server you want to deploy to

Deploy many artifacts

In order to obtain large amounts of artifacts from Nexus and deploy them, it is relatively easy to create a shell script, for example the one below. The script uses the structure created by the method described above to deploy artifacts: it first downloads a ZIP, unzips it, and then loops through the deployable artifacts and deploys them. The script depends on a ZIP in the artifact repository with the specified structure. In order to put the unit in Nexus, replace 'dir' with 'zip' in the assembly file and deploy the unit. You are creating a copy of the artifact though, so you should probably use the pom and assembly directly for creating the unit of artifacts and loop over them, without the intermediate step of creating a separate ZIP of the assembly.

The local directory should contain a dummypom.xml for the SCA deployment. The script creates a tmp directory, downloads the artifact, extracts it, loops over its contents, creates a deploy shell script and executes it. Separating assembly (deploy_unit.sh) and actual deployment (deploy_script.sh) is advised: this allows you to rerun the deployment or continue from the point where it might have failed, and the assembly can be handed to someone else (operations?) to do the deployment.

dummypom.xml:

 <?xml version="1.0" encoding="UTF-8"?>  
 <project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">  
      <modelVersion>4.0.0</modelVersion>  
      <groupId>nl.amis.smeetsm</groupId>  
      <artifactId>DummyPom</artifactId>  
      <version>1.0</version>  
 </project>

deploy_unit.sh:

The script has a single parameter: the URL of the unit to be installed. This can be a reference to an artifact in a repository (if you have your unit as a separate artifact in the repository). The script is easily updated to use a local file or a structure as described above.
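
Usage then looks like this (the unit URL is hypothetical and depends on where you published the unit ZIP):

./deploy_unit.sh http://localhost:8081/nexus/content/repositories/releases/nl/amis/smeetsm/unit/HelloWorld_1.0/1.0/HelloWorld_1.0-1.0-unit.zip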

#!/bin/sh  
   
 servicebus_hostname=localhost  
 servicebus_port=7101  
 servicebus_username=weblogic  
 servicebus_password=Welcome01  
 servicebus_oraclehome=/home/maarten/Oracle/Middleware1213/Oracle_Home/  
 composite_hostname=localhost  
 composite_port=7101  
 composite_username=weblogic  
 composite_password=Welcome01  
   
 if [ -d "tmp" ]; then  
  rm -rf tmp  
 fi  
 mkdir tmp  
 cp dummypom.xml tmp/pom.xml  
 cd tmp  
   
 #first fetch the unit ZIP file  
 wget $1  
 for f in *.zip  
 do  
  echo "Unzipping $f"  
  unzip $f  
 done  
   
 #deploy composites  
 for D in `find . -type d -name composite`  
 do  
  echo "Processing directory $D"  
  for f in `ls $D/*.jar`  
  do  
   echo "Deploying $f"  
   URL="http://$composite_hostname:$composite_port"  
   echo "URL: $URL"  
   echo mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=$f -Duser=$composite_username -Dpassword=$composite_password -DserverURL=$URL >> deploy_script.sh  
  done  
 done  
   
 #deploy servicebus  
 for D in `find . -type d -name servicebus`  
 do  
  echo "Processing directory $D"  
  for f in `ls $D/*.pom`  
  do  
   echo "Deploying $f"  
   URL="http://$servicebus_hostname:$servicebus_port"  # use the Service Bus host/port here
   echo "URL: $URL"  
   echo mvn -f $f pre-integration-test -DoracleHome=$servicebus_oraclehome -DoracleUsername=$servicebus_username -DoraclePassword=$servicebus_password -DoracleServerUrl=$URL >> deploy_script.sh  
  done  
 done  
   
 chmod +x deploy_script.sh  # the generated script is not executable by default
 ./deploy_script.sh  
   
 cd ..  
 rm -rf tmp  

For this example I created a very basic script. It does require a Maven installation, a settings.xml telling Maven where the repository is, and an Oracle home location (Service Bus requires it). It also has some liabilities, for example in the commands used to find the deployable artifacts. It does give an idea, though, of how you can easily deploy large numbers of composites with relatively little code by leveraging Maven commands. It also illustrates the difference between SCA composite and Service Bus deployments.

Finally

You can easily combine the assembly files and pom files for the SCA composites and the Service Bus to create a release containing both. Deploying them is also easy using a single command. I also illustrated how you can easily loop over several artifacts using a shell script. I have not touched on the usage of configuration plans or how to efficiently group related artifacts in your artifact repository; those will be the topic of a next blog post.

The post SOA Suite 12c: Collect & Deploy SCA composites & Service Bus artifacts using Maven appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/08/03/soa-suite-12c-collect-deploy-sca-composites-service-bus-artifacts-using-maven/feed/ 0
Overview of WebLogic 12c RESTful Management Services https://technology.amis.nl/2015/07/21/overview-of-weblogic-12c-restful-management-services/ https://technology.amis.nl/2015/07/21/overview-of-weblogic-12c-restful-management-services/#comments Tue, 21 Jul 2015 14:58:12 +0000 https://technology.amis.nl/?p=36565 Inspired by a presentation given by Shukie Ganguly on the free Oracle Virtual Technology Summit in July (see here); “New APIs and Tools for Application Development in WebLogic 12c”, I decided to take a look at an interesting new feature in WebLogic Server 12c: the RESTful Management Services. You can see here how to enable [...]

The post Overview of WebLogic 12c RESTful Management Services appeared first on AMIS Oracle and Java Blog.

]]>
Inspired by a presentation given by Shukie Ganguly at the free Oracle Virtual Technology Summit in July (see here), "New APIs and Tools for Application Development in WebLogic 12c", I decided to take a look at an interesting new feature in WebLogic Server 12c: the RESTful Management Services. You can see here how to enable them. In this post I will provide an overview of my short study on the topic.

RESTful management services consist of two sets of resources: tenant-monitoring resources and 'wls' resources. The first is more flexible in response format (JSON, XML, HTML) and more suitable for monitoring. With the latter you can for example update datasource properties and create entire servers; it only supports JSON as return format, however. The 'wls' resources also provide links, so you can automagically traverse the resource tree, which is very useful. I've provided a Python script to do just that at the end of this post.

Monitoring

In the past I have already created all kinds of tools to do remote monitoring of WebLogic Server 11g. See for example http://javaoraclesoa.blogspot.nl/2012/09/monitoring-datasources-on-weblogic.html for some code to monitor datasources and for the state of the SOA Infrastructure; http://javaoraclesoa.blogspot.nl/2012/11/soa-suite-cluster-deployments-and.html and also for BPEL: http://javaoraclesoa.blogspot.nl/2013/03/monitoring-oracle-soa-suite-11g.html.

With the 12c RESTful Management Services this becomes a lot easier and does not require any custom code, which is of course a major improvement!

It is possible to let the RESTful Management Services return HTML, JSON or XML by using the Accept HTTP header (application/json or application/xml; HTML is the default). See here.

What can you monitor?

Available resources under http(s)://host:port/management/tenant-monitoring are (WLS 12.1.1):

  • servers
  • clusters
  • applications
  • datasources

You can also go to the level of an individual resource like for example datasources/datasourcename.
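
For example, the state of an individual datasource can be fetched with any HTTP client; a sketch using curl (hostname, port, credentials and datasource name below are placeholders):

 # ask for JSON instead of the default HTML
 curl --user weblogic:Welcome01 -H "Accept: application/json" \
      http://localhost:7001/management/tenant-monitoring/datasources/MyDataSource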

SOA Suite

The tenant-monitoring resources of the RESTful Management Services are not specific to SOA Suite. They do not allow you to obtain much information about the inner workings of applications like the SOA infrastructure application or the BPEL process manager. Thus my SOA infrastructure monitoring tool and BPEL process state monitoring tool could still be useful. You can potentially replace this functionality, however, with for example Jolokia. See below.

Monitoring a lot of resources

Because the Management Services allow monitoring of many resources, they are ideal to use in a monitoring tool like Nagios. Mark Otting beat me to it, however: http://www.qualogy.com/monitoring-weblogic-12c-with-nagios-and-rest/.

The RESTful Management Services provide a specific, limited set of resources which you can monitor. There is also an alternative to the RESTful Management Services for monitoring WebLogic Server (and other application servers), namely Jolokia. See here. One of the nice things about Jolokia is that it allows you to access MBeans directly; you are not limited to a fixed set of available resources. Directly accessing MBeans is very powerful (and potentially dangerous!). It could for example allow obtaining the SOA infrastructure state and listing deployed composites.

Management

The RESTful Management Services do not only provide monitoring capabilities but also editable resources; see
http://docs.oracle.com/middleware/1213/wls/WLRMR/resources.htm#WLRMR471. These resources can be accessed by going to a URL like http(s)://host:port/management/wls/{version}/path, for example http://localhost:7001/management/wls/latest/. The resources only reply with JSON (Accept: application/json) and provide links entries so you can see the parent and children of a resource. With the POST, PUT and DELETE HTTP verbs you can update, create or remove resources, and with GET and OPTIONS you can obtain information.

Deploying without dependencies (just curl)

An interesting use case is command-line deployment without dependencies. This is an example given in the Oracle documentation (see here). You can use for example a curl command (or any other command-line HTTP client) to deploy an ear file without needing Java libraries or WLST/Ant/Maven scripts. There is also a blog on this here.
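
As a sketch, modeled on the referenced examples (the application name, target and file path are placeholders, and the exact multipart field names should be verified against the RESTful Management Interface Reference listed at the end of this post):

 curl -v --user weblogic:Welcome01 \
      -H "X-Requested-By: MyClient" \
      -H "Accept: application/json" \
      -F "model={ name: 'myapp', targets: [ { identity: [ 'servers', 'AdminServer' ] } ] }" \
      -F "deployment=@/tmp/myapp.ear" \
      -X POST http://localhost:7001/management/wls/latest/deployments/application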

Walking the resource tree

In contrast to the tenant-monitoring resources, the management resources allow traversing the resource tree. The response to an HTTP GET request contains a links element with parent and child entries. When an HTTP GET is not allowed or the links element does not exist, you can't go any further down the tree. In order to display the available resources on your WebLogic Server, I wrote a small Python script.

 import json  
 import httplib  
 import base64  
 import string  
 from urlparse import urlparse  
   
 WLS_HOST = "localhost"  
 WLS_PORT = "7101"  
 WLS_USERNAME = "weblogic"  
 WLS_PASSWORD = "Welcome01"  
   
 def do_http_request(host,port,url,verb,accept,username,password,body):  
   # from http://mozgovipc.blogspot.nl/2012/06/python-http-basic-authentication-with.html  
   # base64 encode the username and password  
   auth = string.strip(base64.encodestring(username + ':' + password))  
   service = httplib.HTTP(host,port)  
     
   # write your headers  
   service.putrequest(verb, url)  
   service.putheader("Host", host)  
   service.putheader("User-Agent", "Python http auth")  
   service.putheader("Content-type", "text/html; charset=\"UTF-8\"")  
   # write the Authorization header like: 'Basic base64encode(username + ':' + password)  
   service.putheader("Authorization", "Basic %s" % auth)  
   service.putheader("Accept",accept)   
   service.endheaders()  
   service.send(body)  
   # get the response  
   statuscode, statusmessage, header = service.getreply()  
   #print "Headers: ", header  
   res = service.getfile().read()  
   #print 'Content: ', res  
   return statuscode,statusmessage,header,res  
   
 def do_wls_http_get(url,verb):  
   return do_http_request(WLS_HOST,WLS_PORT,url,verb,"application/json",WLS_USERNAME,WLS_PASSWORD,"")  
   
 def get_links(body):  
   uris = []  
   json_obj = {}  
   json_obj = json.loads(body)  
   if json_obj.has_key("links"):  
     for link in sorted(json_obj["links"]):  
       if (link["rel"] != "parent"):  
         uri = link["uri"]  
         uriparsed = urlparse(uri)  
         uris.append(uriparsed.path)  
   return uris     
        
 def get_links_recursive(body):  
   uris=[]  
   links = get_links(body)  
   for link in links:  
     statuscode,statusmessage,header,res = do_wls_http_get(link,"GET")  
     if statuscode==200:  
       print link  
       get_links_recursive(res)
       
 statuscode,statusmessage,header,res= do_wls_http_get("/management/wls/latest/","GET")  
 if statuscode != 200:  
   print "HTTP statuscode: "+str(statuscode)  
   print "Have you enabled RESTful Management Services?"  
 else:  
   get_links_recursive(res)

Output of this script on a WebLogic 12.1.3 server contains information on all datasources, application deployments, servers and jobs. You can use it, for example, to compare two environments for the presence of resources. The script is easily expanded to include the configuration of individual resources; this way you can easily compare environments and see if you have missed a specific configuration setting. Of course, only resources which can be accessed by the RESTful Management Services are displayed. Absence of, for example, a datasource or application deployment is easily detected, but absence of a credential store or JMS queue will not be detected this way. The links are parsed in sorted order to help with comparing. You can also use this script to compare WebLogic Server versions to see which resources Oracle has added since the last release.
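
As a sketch of such an extension, reusing the functions from the script above, you could also print each resource's JSON body so the output of two environments can be diffed with standard tools (the function below is my own addition, not part of the original script):

 # also dump each resource's JSON body, pretty-printed and sorted  
 def get_links_recursive_with_config(body):  
   links = get_links(body)  
   for link in links:  
     statuscode,statusmessage,header,res = do_wls_http_get(link,"GET")  
     if statuscode==200:  
       print link  
       print json.dumps(json.loads(res), indent=2, sort_keys=True)  
       get_links_recursive_with_config(res)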

References

Deploying applications remotely with WebLogic REST Management Interface
http://buttso.blogspot.nl/2015/04/deploying-applications-remotely-with.html

Virtual Technology Summit
http://www.oracle.com/technetwork/community/developer-day/index.html

Enable RESTful Management Services
http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/domainconfig/EnableRESTfulManagementServices.html

Jolokia
https://jolokia.org/

Monitoring WebLogic 12c with Nagios and REST
http://www.qualogy.com/monitoring-weblogic-12c-with-nagios-and-rest/

Using REST Resource Methods to Manage WebLogic Server
http://docs.oracle.com/middleware/1213/wls/WLRMR/resources.htm#WLRMR471

RESTful Management Interface Reference for Oracle WebLogic Server
http://docs.oracle.com/middleware/1213/wls/WLRMR/management_wls_version_deployments_application.htm#weblogic_management_rest_wls_resources_deployment_applicationsresource_deployapplication_286308891

The post Overview of WebLogic 12c RESTful Management Services appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/07/21/overview-of-weblogic-12c-restful-management-services/feed/ 0
Use DB Vault to protect password strength policy https://technology.amis.nl/2015/07/20/use-db-vault-to-protect-password-strength-policy/ https://technology.amis.nl/2015/07/20/use-db-vault-to-protect-password-strength-policy/#comments Mon, 20 Jul 2015 13:24:00 +0000 https://technology.amis.nl/?p=36538 Suppose your organization wants to enforce a security policy on database password strength. The DBA’s have implemented a password strength verification function in PLSQL such as the oracle supplied ora12c_strong_verify_function in the DEFAULT profile of the database. There seems no way to get around it at first: Database account u4 is created:     U4 [...]

The post Use DB Vault to protect password strength policy appeared first on AMIS Oracle and Java Blog.

]]>

Suppose your organization wants to enforce a security policy on database password strength. The DBAs have implemented a password strength verification function in PL/SQL, such as the Oracle supplied ora12c_strong_verify_function, in the DEFAULT profile of the database. At first there seems to be no way to get around it:

Database account u4 is created:

 

[Screenshot: creating database account U4]

 

U4 logs in and tries to keep things simple, i.e. a trivial password:

 

[Screenshot: U4 cannot simplify the password; the verify function rejects it]

 

That password verification function got in the way. U4 searches for ways around this block and stumbles upon the blog from Steve Karam titled Password Verification Security Loophole, in which Steve demonstrates that it is possible to enter a weak password when creating a user or altering a password, even when a database password verify PL/SQL function is enforced. The way to accomplish this is to use the special IDENTIFIED BY VALUES clause when running the ALTER USER command:

 

[Screenshot: ALTER USER ... IDENTIFIED BY VALUES succeeds despite the verify function]

 

The reason for this behaviour of the Oracle database is that the IDENTIFIED BY VALUES clause is followed by a hash encoded password string, which cannot (easily) be decoded to the original plaintext password. The password strength rules only apply to the original plaintext password value. The only way to crack the hash would be to feed the hash algorithm candidate passwords and see if the hashed value matches the known encoded password string. In the case of the ALTER USER command that is unfeasible, because where would the Oracle database have to stop trying? The number of candidate passwords is limitless.
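
In plain SQL the bypass looks roughly like this (the hash string below is shortened and purely illustrative):

 -- the verify function is not applied, because only a hash is supplied
 ALTER USER u4 IDENTIFIED BY VALUES 'S:ABCDEF0123456789...;H:0123456789ABCDEF...';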

Until Oracle decides to disable this feature, which allows a pre-cooked-at-home encoded password string to be used, there seems to be no way to stop users from using the IDENTIFIED BY VALUES clause when they have the privilege to use the ALTER USER command. Or is there?

In fact there is a way to block this anti-feature. It is possible with one of my favorite EE options, Database Vault (a separately licensed product for Oracle Database Enterprise Edition), because it allows us to create our own rules on commands such as ALTER USER, on top of the system privileges that are normally required to use the command. With the Database Vault rules enabled, we see the following when someone tries to use the IDENTIFIED BY VALUES clause:

[Screenshot: ALTER USER ... IDENTIFIED BY VALUES is now blocked by Database Vault]

As you can see, the IDENTIFIED BY VALUES clause can no longer be used.
The Database Vault setup script I used is given below; it should be run by a database account with at least the DV_ADMIN role enabled. Note that the individual DV rules are first combined into a DV rule set, and that this rule set is then used as the command rule for ALTER USER, CREATE USER and CHANGE PASSWORD. Rules in a rule set are evaluated using either ALL TRUE or ANY TRUE logic. In my case I needed a mix: I created one DV rule with two checks that are combined using ANY TRUE, and a second DV rule that checks the SQL string. These two DV rules were then put in the DV rule set using ALL TRUE evaluation logic. The 'Is user allowed or modifying own password' rule is in fact a copy of an Oracle supplied rule: it checks whether the user has the DV_ACCTMGR role OR whether the user is trying to change his/her own password.

-- CREATE DV RULES

BEGIN
DVSYS.DBMS_MACADM.CREATE_RULE (
rule_name   => 'Contains no identified by values clause',
rule_expr   => 'UPPER(DVSYS.DV_SQL_TEXT) not like ''%IDENTIFIED BY VALUES%''');

DVSYS.DBMS_MACADM.CREATE_RULE (
rule_name   => 'Is user allowed or modifying own password',
rule_expr   => 'DVSYS.DBMS_MACADM.IS_ALTER_USER_ALLOW_VARCHAR(''"''||dvsys.dv_login_user||''"'') = ''Y'' OR DVSYS.dv_login_user = dvsys.dv_dict_obj_name');
END;
/

-- CREATE DV RULE SET

BEGIN
DVSYS.DBMS_MACADM.CREATE_RULE_SET (
rule_set_name     => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
description       => 'rule set for (Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
enabled           => 'Y',
eval_options      => '1',  -- ALL TRUE
audit_options     => '3',
fail_options      => '1',
fail_message      => 'IDENTIFIED BY VALUES clause not allowed',
fail_code         => '-20600',
handler_options   => '0',
handler           => NULL);
END;
/

-- ADD RULES TO RULE SET

BEGIN
DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET (
rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
rule_name       => 'Contains no identified by values clause',
rule_order      => '1',
enabled         => 'Y');
DVSYS.DBMS_MACADM.ADD_RULE_TO_RULE_SET (
rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
rule_name       => 'Is user allowed or modifying own password',
rule_order      => '1',
enabled         => 'Y');
END;
/

-- UPDATE COMMAND RULES

BEGIN
DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
command         => 'CREATE USER',
rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
object_name     => '%',
enabled         => 'Y');
END;
/

BEGIN
DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
command         => 'ALTER USER',
rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
object_name     => '%',
enabled         => 'Y');
END;
/

BEGIN
DVSYS.DBMS_MACADM.UPDATE_COMMAND_RULE (
command         => 'CHANGE PASSWORD',
rule_set_name   => '(Is user allowed or modifying own password) AND (command does not contain IDENTIFIED BY VALUES clause)',
object_owner    => DBMS_ASSERT.ENQUOTE_NAME ('%', FALSE),
object_name     => '%',
enabled         => 'Y');
END;
/

 

 

NOTES:

  • the password command in SQL*Plus also seems to use the IDENTIFIED BY VALUES clause, so this DV setup disables that command too:

[Screenshot: the SQL*Plus password command is also blocked]

  • to find out the hash encoded string to be used in an IDENTIFIED BY VALUES clause, one can simply create a user in a homegrown database (preferably of the same version as the victim database) and afterwards retrieve the spare4 column value from the SYS.USER$ table for that user. Note that the username itself is used in the Oracle algorithm that calculates the hash value, so the hash value only works for a user with the same name.
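
For example, as a sketch (run as SYS in the homegrown database):

 -- retrieve the hash string for user U4; it can then be pasted into an
 -- IDENTIFIED BY VALUES clause for a user with the same name elsewhere
 SELECT spare4 FROM sys.user$ WHERE name = 'U4';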

The post Use DB Vault to protect password strength policy appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/07/20/use-db-vault-to-protect-password-strength-policy/feed/ 0
Vakantie is net werken… https://technology.amis.nl/2015/07/17/vakantie-net-werken/ https://technology.amis.nl/2015/07/17/vakantie-net-werken/#comments Fri, 17 Jul 2015 15:05:25 +0000 https://technology.amis.nl/?p=36463 De transformatie van de kenniswerker in vakantieganger is op dit moment overal zichtbaar. Ik vind het mooi om te zien hoe mensen in andere situaties toch vergelijkbare patronen vertonen. Wat zien we en wat kunnen we leren van dit soort laterale verbanden? Voor mij: energie, flow en plezier. Vakantie is net werken. Bij de vakantieganger maakt [...]

The post Vakantie is net werken… appeared first on AMIS Oracle and Java Blog.

]]>
The transformation of the knowledge worker into holidaymaker is visible everywhere at the moment. I find it fascinating to see how people display similar patterns in different situations. What do we see, and what can we learn from these lateral connections? For me: energy, flow and fun. A holiday is just like work.

For the holidaymaker, the suit makes way for bermuda shorts, the polished shoes for flip-flops and the tie for a cap and sunglasses. The management team of the holiday also has a different composition: one with partner and children, with a variety of interests and of ways of looking at the world.

The last days before the holiday are always fascinating to me: pulling out all the stops to get the important things done, to hand things over and to leave with a good feeling. Once that is arranged, the holiday can really start. The final preparations, and then 'truly letting go'. It is usually a step into another world, a journey with unexpected events, full of anticipation: the arrival at the destination(s), finding the way, sometimes literally, practically always figuratively.

I see success, energy, flow and fun as the core factors of any movement you want to set in motion. Success determines the direction: when is it good, or accomplished? Energy provides the strength to actually get something done. Flow provides propulsion: not a one-off outburst but a permanent driving force. And fun is visible and tangible, the sign that we really want to work towards the success, not merely because circumstances force us to.

If you approach things positively, everything is a beautiful voyage of discovery. By choosing the right, optimistic mindset during the holiday, I prevent irritations that can spoil my holiday feeling. Keeping an eye on the goal, i.e. the success, comes first: celebrating the holiday, being together, resting, clearing the mind, discovering new surroundings and getting to know people. That gives the energy to deal with setbacks, whether it concerns traffic jams along the way, a destination that turns out different from what the brochure suggested, surroundings that are often noisier than hoped for, or the often small irritations in one's own family, which turns out to be somewhat less harmonious than those fantastic families in all kinds of beautiful television series.

A positive view of things provides resilience and a sense of perspective, and just that little bit of distance needed to cope with the circumstances and to free up energy to overcome obstacles. And also to persevere and achieve the result: smiling faces, lots of fun and new experiences. Results also in the form of lessons, which I take back with me to work, in order to work with energy, flow and fun on new business successes.

I wish you a pleasant holiday!

The post Vakantie is net werken… appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/07/17/vakantie-net-werken/feed/ 0
Managing identity information from multiple sources with Oracle Identity Manager, Part 2 https://technology.amis.nl/2015/07/16/managing-identity-information-from-multiple-sources-with-oracle-identity-manager-part-2/ https://technology.amis.nl/2015/07/16/managing-identity-information-from-multiple-sources-with-oracle-identity-manager-part-2/#comments Thu, 16 Jul 2015 10:18:14 +0000 https://technology.amis.nl/?p=36449 Consolidating identity information in Oracle Identity Manager In part 1 one this article we saw several options for managing identities in an environment where multiple sources for identity information are used. In this part, you’ll find more information on how to set up Oracle Identity Manager in a scenario like the one described in the [...]

The post Managing identity information from multiple sources with Oracle Identity Manager, Part 2 appeared first on AMIS Oracle and Java Blog.

]]>
Consolidating identity information in Oracle Identity Manager

In part 1 of this article we saw several options for managing identities in an environment where multiple sources of identity information are used. In this part, you'll find more information on how to set up Oracle Identity Manager in a scenario like the one described in the Swift&Safe Inc. use case.

First of all, the identities based on the input from CUST1 will be placed in dedicated organizations in Oracle Identity Manager, so these identities and their authorizations can be managed separately. Regarding HR1 and HR2: these systems use their own internal identifiers for user records. These identifiers must be provided to Oracle Identity Manager as user attributes in the feeds from HR1 and HR2. In Oracle Identity Manager a UDF (User Defined Field) must be created for the identifier attribute (for instance, the personnel number) from HR1, and a separate UDF for HR2. During reconciliation, Oracle Identity Manager can then match users in HR1 and HR2 by comparing the unique identifiers in the feeds with the UDFs in Oracle Identity Manager. You can add an attribute to the Oracle Identity Manager user by creating a sandbox and opening the user definition in the Identity System Administration interface. Depending on the version of Oracle Identity Manager, you can find it in the 'Form Designer' under the Configuration section or under 'User' under System Entities.

Figure 1: Opening the user field definition.

Figure 2: Adding a user field.

Next, additional UDFs can be created to store source specific information from the HR1 or HR2 system, for example about the person's manager and department, and any other information that needs to be present in Oracle Identity Manager. These additional UDFs can then be used in request, approval and review procedures, and as attributes of accounts that are provisioned to target systems.

After this has been set up, measures must be implemented to prevent the creation of multiple identities for individual persons. The best way to do this is by adding an event handler to the orchestration that deals with all creations (no matter the source). The logic in the event handler could also be implemented in the connectors, but from an operational standpoint it is easier to implement the logic once, in a central location. The event handler adds a check to the workflow by taking a number of attributes (first name, last name, birth date, etc.) and trying to find a match in Oracle Identity Manager on some or all of these attributes, skipping identities in the CUST1 organizations. If there is a match, the create event is terminated and a notification is sent to someone in your organization who can verify that the create event indeed concerns someone who already has an identity in Oracle Identity Manager. Once verified, the unique identifier of the second HR registration must be added to the existing identity, so that the next time the source is reconciled, the user is linked to the existing identity based on the unique identifier of that source.
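
As an illustration, a validation-style handler that vetoes a create when a probable match is found could look roughly like the sketch below. The class and package names, the orchestration parameter names ("First Name", "Last Name") and the matching logic are assumptions; the real handler must be packaged as an OIM plugin, registered in an EventHandlers.xml and attached to the create orchestration, as described in the 'Developing Event Handlers' reference at the end of this article.

 package nl.example.oim.eventhandlers;  // hypothetical package

 import java.io.Serializable;
 import java.util.HashMap;

 import oracle.iam.platform.kernel.ValidationFailedException;
 import oracle.iam.platform.kernel.spi.ValidationHandler;
 import oracle.iam.platform.kernel.vo.BulkOrchestration;
 import oracle.iam.platform.kernel.vo.Orchestration;

 public class DuplicateIdentityCheckHandler implements ValidationHandler {

     // called by the platform when the handler is loaded
     public void initialize(HashMap<String, String> parameters) {
     }

     // single-event variant: runs before the user create is processed
     public void validate(long processId, long eventId, Orchestration orchestration) {
         HashMap<String, Serializable> params = orchestration.getParameters();
         String firstName = (String) params.get("First Name"); // assumed attribute keys
         String lastName  = (String) params.get("Last Name");

         if (probableDuplicateExists(firstName, lastName)) {
             // terminate the create event; notifying a human verifier
             // would be triggered from this point as well
             throw new ValidationFailedException(
                 "Possible duplicate identity for " + firstName + " " + lastName);
         }
     }

     // bulk variant, used for instance during reconciliation batches
     public void validate(long processId, long eventId, BulkOrchestration orchestration) {
         // omitted in this sketch
     }

     private boolean probableDuplicateExists(String firstName, String lastName) {
         // here you would search existing users (e.g. UserManager.search with a
         // SearchCriteria on first name, last name and birth date), skipping
         // identities in the CUST1 organizations; omitted for brevity
         return false;
     }
 }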

Figure 3: Assigning an event handler to an action using the Design Console.

The reason people should be involved when there is a possible match is to make sure that it is in fact the same person. If you have enough information in the feeds from HR1 and HR2, and are able to apply sufficient logic in the event handler, you can consider triggering automated actions instead of requiring user input. Also, if the person has different managers in the HR sources, they need to be updated on the situation. Since the manager plays an important role in Oracle Identity Manager, and identities have only one 'manager' field, it can happen that tasks for a manager get routed to the wrong manager. If this happens often, it may be wise to adjust approval and certification workflows to look for manager information in the source specific UDFs of the user instead of the regular Oracle Identity Manager 'manager' field, or to configure workflows not to use the manager but to specify an approver or certifier based on organization or other attributes. You can also modify the user creation and update process to choose from the manager information in the HR1 and HR2 feeds to fill the regular Oracle Identity Manager 'manager' field.

An event handler should also be added to the orchestration that is involved when someone leaves the company. A check must be done to see if the identity is linked to multiple sources. If so, the identity should not be removed or disabled; only the link to the trusted source that was reconciled must be removed.

Useful links

Configuring User Defined Fields (UDF): http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/customattr.htm#OMADM4803

Developing Event Handlers: https://docs.oracle.com/cd/E52734_01/oim/OMDEV/oper.htm#OMDEV3085

Managing Notification Service: http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/notification.htm#OMADM873

Managing Connector Lifecycle: http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/conn_mgmt.htm#OMADM4295

Developer’s Guide for Oracle Identity Manager: http://docs.oracle.com/cd/E27559_01/dev.1112/e27150/toc.htm

Oracle Identity Manager Identity Connectors Documentation: https://docs.oracle.com/cd/E22999_01/index.htm

Oracle Identity Manager – Development: https://docs.oracle.com/cd/E52734_01/oim/oim-develop.htm

 

The post Managing identity information from multiple sources with Oracle Identity Manager, Part 2 appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/07/16/managing-identity-information-from-multiple-sources-with-oracle-identity-manager-part-2/feed/ 0
Synchronizing databases through BPEL services https://technology.amis.nl/2015/07/16/synchronizing-databases-through-bpel-services/ https://technology.amis.nl/2015/07/16/synchronizing-databases-through-bpel-services/#comments Wed, 15 Jul 2015 22:03:16 +0000 https://technology.amis.nl/?p=36472 Introduction This blog post is about how to synchronize two databases through BPEL, focusing on transaction, rollback and fault handling. During a project, I’ve encountered a situation where we wanted to migrate from an old database to a new one. However, in order to gradually move external systems from the old to the new database, [...]

The post Synchronizing databases through BPEL services appeared first on AMIS Oracle and Java Blog.

]]>
Introduction

This blog post is about how to synchronize two databases through BPEL, focusing on transaction, rollback and fault handling.

During a project, I encountered a situation where we wanted to migrate from an old database to a new one. However, in order to gradually move external systems from the old to the new database, it was required that both databases be kept in sync for a limited amount of time. Apart from the obvious database tools, for example Oracle GoldenGate, this can also be done through the service layer, and that's what this article is about. I will explain how I did it, with a strong focus on fault handling, since that's the most complicated part of the deal. In this case, since keeping things in sync is what we're aiming for, a rollback needs to be performed on one database when the other fails to process an update.

One of the requirements is that it should be easy to throw the synchronization code away, as it has no place in our future plans. Another requirement is that the service layer should return faults in a decent manner.

Preparation

In order to enable out-of-the-box rollback functionality, make sure that the data sources connecting to both databases are XA enabled. As there is plenty of information about this subject, I will not go into detail about it in this blog.

Now we will be developing two services:

  • SalesOrderBusinessService: a BPEL process that receives messages from a BPM process and forwards them to our integration service
  • UpdateSalesOrderIntegrationService: a BPEL process that receives messages from SalesOrderBusinessService and updates two databases through adapters

We need to make sure that both services have a fault specified in their WSDL operation, in order to be able to return the recoverable fault:


<wsdl:message name="UpdateSalesOrderRequestMessage">
  <wsdl:part name="UpdateSalesOrderRequest" element="cdm:UpdateSalesOrderEBM"/>
</wsdl:message>

<wsdl:message name="UpdateSalesOrderResponseMessage">
  <wsdl:part name="UpdateSalesOrderResponse" element="hdr:ServiceResult"/>
</wsdl:message>

<wsdl:message name="UpdateSalesOrderFaultMessage">
  <wsdl:part name="UpdateSalesOrderFault" element="hdr:ErrorMessages"/>
</wsdl:message>

<wsdl:portType name="SalesOrderBusinessService_ptt">
  <wsdl:operation name="updateSalesOrder">
    <wsdl:input message="tns:UpdateSalesOrderRequestMessage"/>
    <wsdl:output message="tns:UpdateSalesOrderResponseMessage"/>
    <wsdl:fault name="TechnicalFault" message="tns:UpdateSalesOrderFaultMessage"/>
  </wsdl:operation>
</wsdl:portType>

Development

Once the data sources and WSDL definitions are in place, we can start developing our BPEL services. Let's start with UpdateSalesOrderIntegrationService. It will be a SOA composite containing a BPEL process, a web service and two database adapters. In the end it should look like this:

[Screenshot: the UpdateSalesOrderIntegrationService composite]

 

While we can create the database adapters with default settings, we have to make an adjustment to the BPEL process: the transaction has to be changed from "required" to "requiresNew". See the picture below:

[Screenshot: creating the BPEL process with transaction set to requiresNew]
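
The same setting ends up in composite.xml; a minimal sketch of the relevant component entry (the component name below is an assumption):

 <component name="UpdateSalesOrderBPEL">
   <implementation.bpel src="BPEL/UpdateSalesOrderBPEL.bpel"/>
   <!-- run this BPEL process in its own transaction -->
   <property name="bpel.config.transaction">requiresNew</property>
 </component>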

The UpdateSalesOrderBPEL process will first update the new database and, if that update is successful, the old database too. This can easily be achieved when the database procedure returns, for example, OK or NOK (with a business reason) to let us know about the processing result. If the update in the old database is not successful, however, we need to throw a fault to roll back the update in the first database. This is out-of-the-box functionality, but we need to be aware that the rollback will only take place when the entire transaction fails. This means that we can't catch any faults in this BPEL process, because then it would be considered a successful transaction. This is also why we set the transaction property to "requiresNew": in SalesOrderBusinessService we do want to catch faults, but if UpdateSalesOrderIntegrationService were in the same transaction, the transaction would still be considered successful and we would not get our rollback. In the end, the BPEL process should look something like this, between the "receive" and "reply" activities:

 

[Screenshot: the UpdateSalesOrderBPEL flow]

The throw activity is configured as follows; we can assign either error information from the database procedure or our own information to the faultVariable:

[Screenshot: the throw activity]

The next step is to create SalesOrderBusinessService. The composite should look like this, and we can keep the transaction property of the BPEL process at "required":

[Screenshot: the SalesOrderBusinessService composite]

Our BPEL process will look like this:

[Screenshot: the SalesOrderBusinessService BPEL process]

As you can see, the main flow is very basic and we don't need to do anything out of the ordinary here. The interesting part is the Catch, where the TechnicalFault coming from the integration service is handled. In this case, we can simply assign the fault message to the fault message of the business service and reply the fault to the requestor. Consequently, the requestor can, for example, re-send the message once the problem in the old database has been resolved. If there is a business problem (NOK) in the new database, it should be handled as a business problem and no SOAP fault will be returned. Should there be any other technical faults, like a database being down, the CatchAll will handle those as usual.

[Screenshot: the Catch branch handling the TechnicalFault]

That’s it, we’re done. Now, once the old database can be shutdown, it will be fairly easy to remove the code: just throw away the first “CheckResult” component and the database adapter from UpdateSalesOrderIntegrationService, as well as the Catch activity in the Business Service.

Keep in mind the most important parts of the deal:

  1. XA data sources are required
  2. Integration Service should have its transaction property at “requiresNew”
  3. Integration Service cannot have any fault handling
  4. Business Service should handle specific faults from the Integration Service
  5. Make sure that the temporary code can be easily removed

The post Synchronizing databases through BPEL services appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/07/16/synchronizing-databases-through-bpel-services/feed/ 0
Managing identity information from multiple sources with Oracle Identity Manager, Part 1 https://technology.amis.nl/2015/07/14/managing-identity-information-from-multiple-sources-with-oracle-identity-manager-part-1/ https://technology.amis.nl/2015/07/14/managing-identity-information-from-multiple-sources-with-oracle-identity-manager-part-1/#comments Tue, 14 Jul 2015 10:21:37 +0000 https://technology.amis.nl/?p=36318 When you are implementing Oracle Identity Manager to manage the identities within your organization, you may have to use multiple sources for identity information. For instance, there might be different departments with their own HR system and there might be separate sources for customers or business partners. In this article I’ll discuss 4 options to [...]

The post Managing identity information from multiple sources with Oracle Identity Manager, Part 1 appeared first on AMIS Oracle and Java Blog.

]]>
When you are implementing Oracle Identity Manager to manage the identities within your organization, you may have to use multiple sources of identity information. For instance, there might be different departments with their own HR systems, and there might be separate sources for customers or business partners. In this article I'll discuss four options to manage multiple sources and prevent issues like double identities. I will also present a use case to explain how to configure Oracle Identity Manager to use multiple sources.

Use Case

Swift&Safe Inc. is an organization that specializes in logistics. Swift&Safe Inc. uses Oracle Identity Manager to provide employees and customers with appropriate access to company resources. They use two different HR systems: one for employees who work at the office (mostly administrative staff, but also some who work in the logistic processes), and one for employees on the road. Let's call these systems HR1 and HR2; both will be used by Oracle Identity Manager for importing identities. In addition, the company uses a third system (CUST1) to register customers, and Oracle Identity Manager will also import identities from this system.

Figure 1: Swift&Safe Inc.

In this case people can hold several positions within the company concurrently and can therefore exist in both HR1 and HR2, but Swift&Safe Inc. only allows one identity in Oracle Identity Manager per employee, so that all entitlements a particular person has can be checked against segregation of duties (SoD) policies. In addition, an employee can also be a customer, and in that case needs a separate customer identity, because access to customer facing resources is managed separately.

Swift&Safe Inc. is working on connecting Oracle Identity Manager to these three sources so employees and customers will have the correct identities in Oracle Identity Manager.

How does it work?

First I’ll tell you a couple of things you need to know about how the importing of identity information works in Oracle Identity Manager. After that, we can look into possible implementation options.

Source systems are integrated with Oracle Identity Manager by means of connectors. A connector is installed for every source and holds information about the format of the data in the source system (metadata), and a mapping table specifying which attributes of entries in the source correspond to which attributes of an identity in Oracle Identity Manager. The metadata and mapping table tell Oracle Identity Manager how to interpret the flow of data coming from a source, so Oracle Identity Manager can build identities from the provided information.

Figure 2: Example of attribute mapping.

Oracle Identity Manager uses its reconciliation engine to handle the process of importing information. Reconciliation can be done in trusted mode and target mode. In trusted mode the imported identity information is used to create, update and delete identities in Oracle Identity Manager. In target mode, the imported data is regarded as information about accounts that are present in the source system. These accounts are assigned to identities in Oracle Identity Manager.

The reconciliation engine first uses the information of an entry in the source to try to match the entry to an identity in Oracle Identity Manager, based on matching rules. Depending on the result of this matching process, an action is then assigned to handle the imported entry, based on action rules. The matching and action rules are defined at connector level so these are specific per source. The entry and assigned actions (for example “create identity”) are stored in an event that is placed in the event queue. Items in this queue are then processed in so called orchestrations, which are workflows that take care of the job at hand.

Figure 3: Action rules define actions for each type of matching result.

Implementation options

  1. Integrating HR sources

[Figure: a single authoritative HR source feeding Oracle Identity Manager]

One way to prevent issues is to make sure only one system is authoritative for the lifecycle of identities. A trusted reconciliation is set up with this source. Additional target reconciliations can be set up with any number of sources to augment Oracle Identity Manager identities with additional attributes that are not present in the trusted source. In the case of Swift&Safe Inc. this option requires the consolidation of identity information at HR system level, because information from all three systems must be present in the trusted source defined in Oracle Identity Manager.

  2. Using a staging area

[Figure: a staging area between the HR sources and Oracle Identity Manager]

This option involves setting up a system that acts as a staging area between the HR sources and Oracle Identity Manager. This may be a database or directory where information from multiple sources is combined (and maybe scrubbed, enriched or anonymized) in order to create a single trusted source for Oracle Identity Manager. In some situations this may be a good option because of the complexity of the data, the amount of changes in metadata, the skill set of the support team or the responsibility for data sanitation. But it may not be technically possible, or it may be too costly, to maintain an extra system.

  3. Allowing multiple identities per person

[Figure: multiple trusted sources resulting in multiple identities per person]

Technically you can use multiple trusted sources in Oracle Identity Manager, and these sources will be authoritative for the lifecycle of 'their' identities. In this case multiple identities will be created for a person if this person is registered in more than one trusted source, and this results in multiple accounts on target systems. This can be useful for keeping accounts related to different job functions separated. However, having multiple accounts on the same company resources can be confusing to end users while performing their daily duties and when they review information in request or review processes. Alternatively, only one identity will be created and the creation of subsequent identities for the same person will fail, depending on the configuration of Oracle Identity Manager, for instance regarding the uniqueness of attributes.

  4. Consolidating identity information in Oracle Identity Manager

[Figure: consolidating identity information in Oracle Identity Manager]

Using Oracle Identity Manager itself to consolidate identity information: this is the option that Swift&Safe Inc. will be implementing. They will use the capabilities of Oracle Identity Manager to combine identity information and centrally manage accounts and access rights. In part 2 of this article we'll take a look at the basic configuration that is needed to achieve this.

Conclusion

There are several options for managing identities in an environment where multiple sources of identity information are used. Which one fits best in your organization depends on several factors, such as technical feasibility, costs, maintainability, reliability and responsibility for data quality. Swift&Safe Inc. decided on option 4 because they need to keep their HR systems separated and do not want the burden of maintaining the extra system needed for a staging area. Oracle Identity Manager provides them with an excellent option: a central platform with configurable connectors, reconciliation options and workflows, which allows them to accommodate the flow of identity information. In part 2 of this article you'll find more information on how to set up Oracle Identity Manager in this scenario.

 

 

The post Managing identity information from multiple sources with Oracle Identity Manager, Part 1 appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/07/14/managing-identity-information-from-multiple-sources-with-oracle-identity-manager-part-1/feed/ 0
Subversion revision of a deployed BPM/SOA composite? https://technology.amis.nl/2015/07/12/subversion-revision-of-a-deployed-bpmsoa-composite/ https://technology.amis.nl/2015/07/12/subversion-revision-of-a-deployed-bpmsoa-composite/#comments Sun, 12 Jul 2015 21:37:44 +0000 https://technology.amis.nl/?p=36438 So there you are: a production error was reported … in your code (of all places) … but no one knows what release the code came from? Wouldn’t it be great if it was easy to link deployed composites to their Subversion location and revision? This article show an approach based on ‘Subversion keywords expansion’. [...]

The post Subversion revision of a deployed BPM/SOA composite? appeared first on AMIS Oracle and Java Blog.

]]>
So there you are: a production error was reported … in your code (of all places) … but no one knows what release the code came from?


Wouldn’t it be great if it was easy to link deployed composites to their Subversion location and revision?

This article shows an approach based on 'Subversion keyword expansion'. This is illustrated with the following steps:

  1. Add properties to the composite.xml file
  2. Set Subversion keywords for composite.xml
  3. Query the deployed composite with wlst
  4. Solve the limitations in Subversion keyword expansion

Let’s get started:

Step 1: add properties to the composite.xml

In composite.xml, add the below lines after the properties for productVersion and compositeID that JDeveloper already added:


<property name="productVersion" type="xs:string" many="false">12.1.3.0.0</property>
<property name="compositeID" type="xs:string" many="false">38e4c940-31e8-46ca-90d7-1e56639f6880</property>
<property name="subVersionURL" type="xs:string" many="false">$URL$</property>
<property name="subVersionRevision" type="xs:string" many="false">$Rev$</property>

Step 2: set Subversion keywords for composite.xml

On composite.xml, add the Subversion properties ‘Revision’ and ‘URL’ (of type ‘svn-keywords’) .

This can be done using TortoiseSVN:

– check out your project from Subversion

– right-click composite.xml, go to TortoiseSVN -> Properties

[Screenshot: the TortoiseSVN Properties dialog]

– Click on ‘New’ and then ‘Keywords’:

[Screenshot: adding a new Keywords property]

– select the keywords 'Revision' and 'URL':

[Screenshot: selecting the Revision and URL keywords]

– with result:

[Screenshot: the resulting svn:keywords property]

 

… and you’re done.

The same can be achieved using the command line:

   svn propset svn:keywords "Revision URL" composite.xml

 

After this is done, subversion will expand the svn keywords $URL$ and $Rev$ when the file is checked out.

Now, commit the composite.xml into Subversion and then check it out again. Examine the properties that now should look like:


<property name="productVersion" type="xs:string" many="false">12.1.3.0.0</property>
<property name="compositeID" type="xs:string" many="false">38e4c940-31e8-46ca-90d7-1e56639f6880</property>
<property name="subVersionURL" type="xs:string" many="false">$URL: svn://192.168.178.50/LGO/sandbox/HelloKeywordApplication/HelloKeyword/SOA/composite.xml $</property>
<property name="subVersionRevision" type="xs:string" many="false">$Rev: 25 $</property>

 

Now, re-deploy the composite with the new composite.xml

Step 3: query the deployed composite with wlst

After checking out the above code from Subversion and deploying it, the properties can be queried using the wlst script below:


# function that returns mbean(s) of all composites
# borrowed from Edwin Biemond and changed

def findMBeans(prefix):
  # get a listing of everything in the current directory
  mydirs = ls(returnMap='true');

  # we're going to use a regular expression for our test
  pattern = java.util.regex.Pattern.compile(str(prefix) + str('.*name=*') + str('.*$'));

  # loop through the listing
  beanList = [];
  for mydir in mydirs:
    x = java.lang.String(mydir);
    matcher = pattern.matcher(x);
    # if we find a match, add it to the found list
    while matcher.find():
      beanList.append(x);

  return beanList;

print 'starting the script ....'
username = 'weblogic'
password = 'welcome01'
url = 't3://localhost:7001'

connect(username, password, url)

custom();
cd('oracle.soa.config');

# Note the , at the end of the string, so components are not returned...
composites = findMBeans('oracle.soa.config:partition=default,j2eeType=SCAComposite,');

for composite in composites:

  cd(composite);

  properties = mbs.getAttribute(ObjectName(composite), 'Properties');

  print 'Composite : ' + mbs.getAttribute(ObjectName(composite), 'Name');

  for property in properties:
    print '- property name/value : ' + property.get('name') + ' / ' + property.get('value');
    print '----------';

  print

  cd('..');

disconnect();

 

Output of the script is ( …. beginning deleted ….)

Composite : HelloKeyword [1.0]
- property name/value : productVersion / 12.1.3.0.0
----------
- property name/value : subVersionRevision / $Rev: 25 $
----------
- property name/value : subVersionURL / $URL: svn://192.168.178.50/LGO/sandbox/HelloKeywordApplication/HelloKeyword/SOA/composite.xml $
----------
- property name/value : compositeID / 38e4c940-31e8-46ca-90d7-1e56639f6880
----------

Step 4: Solve the limitations in Subversion keyword expansion

Note that the revision number that is displayed is the revision number of the composite.xml file. THIS IS NOT THE CHECKED OUT REVISION NUMBER, but it is THE REVISION NUMBER OF WHEN THE FILE COMPOSITE.XML WAS LAST CHANGED.

The two measures below will make your composite really traceable:

  1. Composites that are released will be first tagged in Subversion
  2. A property ReleaseLabel will be added and release labels will only be used once

So, add a property like the one below to composite.xml:


<property name="ReleaseLabel" type="xs:string" many="false">@ReleaseLabelNotSet@</property>

This property can then be set by the script that checks out a release from Subversion (e.g. by an Ant search/replace).
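
A sketch of such a substitution with the Ant replace task (the file location and property names are assumptions):

 <replace file="${checkout.dir}/SOA/composite.xml"
          token="@ReleaseLabelNotSet@"
          value="${release.label}"/>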

Note that this property is NOT a Subversion keyword, so giving this property a value is something that has to be explicitly done by the script that is used for building a release.

An additional benefit is that, with the default value @ReleaseLabelNotSet@, it is immediately clear when not-officially-released composites are deployed.

Querying this property works with the same wlst script.

 

Note: the wlst script and properties have been tested with SOA Suite 11.1.1.6, 11.1.1.7 and 12.1.3.

The post Subversion revision of a deployed BPM/SOA composite? appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/07/12/subversion-revision-of-a-deployed-bpmsoa-composite/feed/ 0