Publish a REST service from PL/SQL to handle HTTP POST requests – using the embedded PL/SQL gateway


Oracle Database can act as an HTTP server – using the Embedded PL/SQL Gateway (the 10g successor of the mod_plsql gateway). With just a few statements, we can have the Oracle Database listen for HTTP requests (GET or POST). When a request is received at the configured host, port and URL, it is passed to a PL/SQL procedure that handles it and prepares a response.

In this article, we will expose a REST service at URL http://localhost:8080/api/movieevents. This service processes HTTP POST requests that in this case contain a JSON payload. The payload is passed to the PL/SQL procedure to process as it sees fit.

The implementation takes place in two steps. First, some preparations must be made by the DBA – to make it possible for a particular database schema to handle HTTP requests received on a certain URL. This includes opening up a certain host and port.

First, you may want to check the current HTTP port:

select dbms_xdb.gethttpport
from   dual;

and if you do not like it, set another one:

EXECUTE dbms_xdb.SETHTTPPORT(8080);

The following statements create the Access Control List that specifies that connection is allowed to database schema WC with HTTP requests to host 127.0.0.1 (aka localhost) and ports between 7000 and 9200:

begin
  dbms_network_acl_admin.create_acl (
    acl             => 'utlpkg.xml',
    description     => 'Normal Access',
    principal       => 'CONNECT',
    is_grant        => TRUE,
    privilege       => 'connect',
    start_date      => null,
    end_date        => null
  );
end;



begin
  dbms_network_acl_admin.add_privilege (
    acl        => 'utlpkg.xml',
    principal  => 'WC',
    is_grant   => TRUE,
    privilege  => 'connect',
    start_date => null,
    end_date   => null);
  dbms_network_acl_admin.assign_acl (
    acl        => 'utlpkg.xml',
    host       => '127.0.0.1',
    lower_port => 7000,
    upper_port => 9200);
end;

Next, the DAD is created – linking the URL path segment /api/ to the WC database schema. This means that any HTTP request received at http://localhost:8080/api/XXX is passed to a PL/SQL procedure called XXX:

BEGIN
  DBMS_EPG.create_dad
  ( dad_name => 'restapi'
  , path     => '/api/*'
  );
  DBMS_EPG.AUTHORIZE_DAD('restapi','WC');
END;

The next line instructs the Embedded PL/SQL Gateway to return a readable error page whenever a request is not processed correctly:

exec dbms_epg.set_dad_attribute('restapi', 'error-style', 'DebugStyle');

This line associates the database user WC with the restapi DAD:

EXEC DBMS_EPG.SET_DAD_ATTRIBUTE('restapi', 'database-username', 'WC');

The final aspect of the preparation involves allowing anonymous access – meaning that no username and password are required for HTTP calls handled by the Embedded PL/SQL Gateway. As per Tim Hall’s instructions: to enable anonymous access to the XML DB repository, the following code creates the “<allow-repository-anonymous-access>” element if it is missing, or updates it if it is already present in the xdbconfig.xml file.

SET SERVEROUTPUT ON
DECLARE
  l_configxml XMLTYPE;
  l_value     VARCHAR2(5) := 'true'; -- (true/false)
BEGIN
  l_configxml := DBMS_XDB.cfg_get();

  IF l_configxml.existsNode('/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access') = 0 THEN
    -- Add missing element.
    SELECT insertChildXML
           (
             l_configxml,
             '/xdbconfig/sysconfig/protocolconfig/httpconfig',
             'allow-repository-anonymous-access',
             XMLType('<allow-repository-anonymous-access xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd">' ||
                     l_value ||
                     '</allow-repository-anonymous-access>'),
             'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"'
           )
    INTO   l_configxml
    FROM   dual;

    DBMS_OUTPUT.put_line('Element inserted.');
  ELSE
    -- Update existing element.
    SELECT updateXML
           (
             DBMS_XDB.cfg_get(),
             '/xdbconfig/sysconfig/protocolconfig/httpconfig/allow-repository-anonymous-access/text()',
             l_value,
             'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"'
           )
    INTO   l_configxml
    FROM   dual;

    DBMS_OUTPUT.put_line('Element updated.');
  END IF;

  DBMS_XDB.cfg_update(l_configxml);
  DBMS_XDB.cfg_refresh;
END;

The database account anonymous also has to be unlocked to truly enable anonymous access:

ALTER USER anonymous ACCOUNT UNLOCK;

 

This completes the preparations. We have now set up a DAD that is associated with the /api/* path in HTTP requests sent to http://localhost:8080/api/*. This DAD hands requests to the WC database schema to be handled. Requests do not have to include a username and password.

Now we have to connect to the WC database schema in order to create the PL/SQL procedure that will handle such requests.

create or replace procedure movieevents
( p_json_payload in varchar2 default '{}'
)
is
begin
  htp.p('call received p_json_payload='||p_json_payload);
  htp.p('REQUEST_METHOD='||owa_util.get_cgi_env(param_name => 'REQUEST_METHOD'));
end movieevents;

Between the definition of the DAD, the opening up of the port range and the creation of this procedure, we have completed the setup for receiving and processing HTTP POST requests that send a body with any payload to http://localhost:8080/api/movieevents. Such a call results in nothing but a simple plain-text response that describes what was received.
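With the procedure in place, a quick smoke test can be run from any HTTP-capable client. The sketch below is a hypothetical Python client: it assumes the JSON document is sent as a form field named after the procedure's p_json_payload parameter (the way the gateway maps request parameters to procedure arguments), and the sample payload itself is made up.

```python
# Hypothetical smoke test for http://localhost:8080/api/movieevents.
# Assumes the gateway maps the form field p_json_payload to the
# procedure parameter of the same name; the payload is illustrative.
import json
import urllib.parse
import urllib.request

payload = json.dumps({"movie": "The Matrix", "event": "started"})
body = urllib.parse.urlencode({"p_json_payload": payload}).encode("ascii")

req = urllib.request.Request(
    "http://localhost:8080/api/movieevents",
    data=body,  # supplying a body turns this into a POST request
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # the htp.p output from movieevents
except OSError as exc:
    print("request failed (is the EPG running?):", exc)
```

If the gateway is up, the response should echo the payload and report REQUEST_METHOD=POST, exactly as the procedure above prints it.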

This opens up a bridge from any client capable of speaking HTTP to the database – non-transactional, cross-firewall and without additional drivers.

Resources


 http://ora-00001.blogspot.com/2009/07/creating-rest-web-service-with-plsql.html

And especially Tim Hall:

http://www.oracle-base.com/articles/10g/dbms_epg_10gR2.php and  http://oracle-base.com/articles/misc/xml-over-http.php

The Oracle documentation: http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28424/adfns_web.htm

On debugging and the error page: http://daust.blogspot.com/2008/04/troubleshooting-404-not-found-error-on.html


Next step with Docker – Create Image for Oracle Database 12c using Puppet


In a previous article – https://technology.amis.nl/2015/03/15/docker-take-two-starting-from-windows-with-linux-vm-as-docker-host/ – I discussed my first steps with Docker. With Windows as my host environment, I used Vagrant to create a VirtualBox VM with Ubuntu. In that VM I installed Docker and played around creating some containers, images and eventually an image for Oracle WebLogic 12.1.3. I leveraged the excellent work by Mark Nelson (especially his article https://redstack.wordpress.com/2014/11/14/gettingn-to-know-docker-a-better-way-to-do-virtualization/).

In this article I am taking things one step further by creating a Docker container – and from that container an image – with the latest Oracle Database release 12.1.0.2 (Enterprise Edition). Again, the Mark Nelson article is my guide and Edwin Biemond – champion of all things automated – provided the Docker file and Puppet scripts that get the job done. Edwin was also kind enough to help me out when a library dependency caused problems.

I ran into the (default) size limitation on Docker containers (10 GB) while installing the Oracle Database. I resolved this challenge by mapping a host folder to the container (with the original database software) and by sharing a volume from a second container that was used as temporary (staging) area. Thus I virtually expanded the file system of my container considerably beyond the 10 GB mark.

The steps I went through are basically:

0. preparation: (as discussed in the previous article) Get an Ubuntu-based Virtual Machine (VirtualBox) running on my Windows host laptop. Install Docker into this VM. Download the Oracle Database software (12.1.0.2 Enterprise Edition) from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index-092322.html. Two files are downloaded to a folder on the host with a total of some 2.6 GB.

image

Note: the Puppet scripts expect two files with names linuxamd64_12c_database_1of2.zip and linuxamd64_12c_database_2of2.zip. I renamed the downloaded files to match these expectations.

1. create Docker image – db12c102_centos6 –  based on CentOS with Puppet installed as well as the Puppet modules required for the database installation

2. create Docker container – softwarecontainer – to act as the ‘staging container’ – a container that shares a volume that can be used as expanded file storage from other containers

3. run Docker container based on image db12c102_centos6 with host folder mapped into it and with volume shared from softwarecontainer

4. edit Puppet files to match the 12.1.0.2 Enterprise Edition installation

5. run the Puppet site.pp manifest – this will install the database software and initialize an instance

6. test whether the container is actually running the database; then create an image from the container

 

At the end of the article, I have both a container and image with the Oracle Database 12c (12.1.0.2) running – based on the oradb Puppet Module by Edwin Biemond. The container exposes port 1521 where the database can be accessed from the host as well as from other containers – as we will see in subsequent articles.

1. Create base Docker image

With a simple Dockerfile – derived from Docker Database Puppet – I create a base image that provides the starting point for the container that will hold the Oracle Database installation.

The Dockerfile looks like this:

# CentOS 6
FROM centos:centos6

RUN yum -y install hostname.x86_64 rubygems ruby-devel gcc git unzip
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc

RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && \
    rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

# configure & install puppet
RUN yum install -y puppet tar
RUN gem install highline -v 1.6.21
RUN gem install librarian-puppet -v 1.0.3

RUN yum clean all

ADD puppet/Puppetfile /etc/puppet/
ADD puppet/manifests/site.pp /etc/puppet/

WORKDIR /etc/puppet/
RUN librarian-puppet install

EXPOSE 1521

ADD startup.sh /
RUN chmod 0755 /startup.sh

WORKDIR /

CMD bash -C '/startup.sh';'bash'

The image is created from this Dockerfile using this command:

docker build -t db12c102_centos6 .

A screenshot from somewhere midway in the creation process:

image

and the completion:

image

Note: Initially I used the exact same Dockerfile Edwin published – with an edited site.pp file in order to install the 12.1.0.2 Enterprise Edition instead of the 12.1.0.1 Standard Edition. I then ran into disk space limitations. Apparently, copying the software zip files (2.6 GB) to the container and extracting their content inside the container occupied so much space that the installation was aborted.

image

I needed a trick to create the container and install the Oracle Database without burdening the container with these zip files and their temporarily extracted contents. The trick consists of three things:

  1. create the image in multiple steps (instead of a single one with a single Dockerfile that auto-runs the complete Puppet script), starting with a simple base image
  2. run a container from this base image with a host folder mapping from which it can access the software zip-files without actually copying them to the container’s file system
  3. have the container import a volume from another container; this volume is used as temporary staging area (for the extracted files needed only during installation); finally create an image from this last container

Also note that in the original script, Edwin did not have the line “RUN gem install highline -v 1.6.21”; he advised me to add this line because the original Dockerfile resulted in a dependency error:

image

Adding this line makes sure highline gets installed before a version of highline with a more demanding Ruby requirement is brought along by librarian-puppet.

2. Create Docker container – softwarecontainer – to act as the ‘staging container’

A very simple Docker image is created using this Dockerfile:

FROM busybox

RUN mkdir /var/tmp/install

RUN chmod 777 /var/tmp/install

VOLUME /var/tmp/install

VOLUME /stage

CMD /bin/sh

and this command:

docker build -t softwarestore .

This results in an image called softwarestore which exposes its folder /var/tmp/install as a volume that can be used as expanded file storage from other containers.

Start the container softwarecontainer based on this image:

docker run -i -t -v /host_temp/shared:/stage/shared --name softwarecontainer softwarestore /bin/sh

The container softwarecontainer is now available along with its /var/tmp/install volume that will be used during database installation as staging area.

 

3. Run Docker container based on base image db12c102_centos6

Run a new container based on the base image created in step 1:

docker run -ti -v /host_temp/shared/downloads:/software --volumes-from softwarecontainer db12c102_centos6 /bin/bash

with host folder mapped into it and with volume shared from softwarecontainer.

The host folder with the database software is accessible from within the container at /software, as is the /var/tmp/install volume from the softwarecontainer:

image

 

 

4. Edit Puppet files to match the 12.1.0.2 Enterprise Edition installation

Inside the container: Open the site.pp file at /etc/puppet in a text editor. Note: this directory and this file were created along with the base image in step 1.

image

Edit the lines that refer to SE (Standard Edition) and 12.1.0.1:

image

Note that only a few changes are required to process EE instead of SE and 12.1.0.2 instead of some other version.

 

5. Run the Puppet site.pp manifest to install the database software and initialize an instance

The heavy lifting regarding the installation of the Oracle Database and the creation of an instance (service orcl) is done by Puppet. The Puppet script is started (still inside the container) using this command:

puppet apply /etc/puppet/site.pp --verbose --detailed-exitcodes || [ $? -eq 2 ]

The first steps are shown here:

image

And the last ones:

image

When Puppet is done, we have a running database. All temporary files have been cleaned up.

6. Test whether the container is actually running the database – then create an image from the container

With these commands (inside the container) we can run SQL*Plus and connect to the running database instance:

export ORACLE_SID=orcl

export ORACLE_HOME=/oracle/product/12.1/db

cd $ORACLE_HOME/bin

./sqlplus "sys/Welcome01 as sysdba"

SQL*Plus is started and we can for example select from dual.

image

Note: The database sid = orcl. The password for SYS and SYSTEM is Welcome01.

Using exit twice – once to leave SQL*Plus and once to exit the container – we return to the host. The container is shown (agitated_bohr).

The next step – which takes some time, due to the size of the container and the images created from it – is to create an image that captures the current state of the container:

docker ps -a (to find the container id)

docker commit <container id>

docker images

assign a nice name and version to the image:

docker tag c5d3effcbdd6 oracle-db-12-1-0-2:1.0

Look at the result of this:

image

A sizable image – one that could be reduced in size through export and import, although that would sever the link with the base CentOS image.

The situation at this point can be visualized using this picture:

image

 

7. Run Container based on the Image

Run a container from that image, with the local folder /software mapped to the host folder that contains the software:

docker run -ti -p 0.0.0.0:1521:1521 -v /host_temp/shared/downloads:/software --volumes-from softwarecontainer oracle-db-12-1-0-2:1.0  /bin/bash

Note: the -v and --volumes-from are not really required, because the two folders were only needed for installing the database (during the creation of the image). Running the container with:

docker run --privileged=true -ti -p 1521:1521 oracle-db-12-1-0-2:1.0 /bin/bash

will do the job just as well. Note: I have added --privileged=true here because I ran into a problem with not being able to switch users in the container. This discussion led me to use this additional parameter.

Once the container is fired up, the database can be started – using the /startup.sh script or using the statements listed in step 6. That is: I went through these steps (which seem a little bit more than should be required):

su oracle

/startup.sh

[provide oracle as the password]; this will start the listener

export ORACLE_SID=orcl

export ORACLE_HOME=/oracle/product/12.1/db

cd $ORACLE_HOME/bin

./sqlplus "sys/Welcome01 as sysdba"

SQL*Plus starts and connects us to an idle instance. Then type startup – and the database is started.

After exiting SQL*Plus, I can check the listener status:

./lsnrctl

then type status.

The service orcl.example.com and the instance orcl are both ready.

Tidbits

These are really notes to myself – little things I needed or came across while going through the steps described in this article.

Docker

Docker stores container data in the directory /var/lib/docker/containers in Ubuntu.

Remove a single image: docker rmi IMAGE_ID (note: an image can only be removed when no containers are based on it)

Trick for removing multiple images in one go: http://stackoverflow.com/questions/17665283/how-does-one-remove-an-image-in-docker

Remove Docker Containers (http://stackoverflow.com/questions/17236796/how-to-remove-old-docker-containers):

docker rm $(docker ps --before <Container ID> -q)

Volumes are outside of the Union File System by definition, so any data in them will not count towards the devicemapper 10GB limit (http://fqa.io/questions/29083406/docker-disk-quotas). By adding a VOLUME in the Dockerfile, I am hoping to be able to leave the big software install packages outside the 10GB limit.

Learn the IP address assigned to a container:

$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_id>

When a Docker container is restarted, its IP address changes. Applications and other servers that were communicating with the container before the restart will be unable to reach it. Configuring a DNS server on Docker and having consumers use DNS names is a solution to the IP address change after a container restart.

Interesting observations in this white paper from Oracle on WebLogic and Docker

Linux

Determine the size on disk of directories and their contents: du -sh (http://www.codecoffee.com/tipsforlinux/articles/22.html)

Remove entire directory: rm -r directory

Library Dependency

While creating the Docker image using Edwin’s Docker file, I ran into a dependency issue that Edwin helped me fix (well, he fixed it for me).

As the screenshot shows, the highline component that apparently gets installed as part of librarian-puppet requires a higher Ruby version than is available.

image

This was resolved by adding a single line to the Docker file:

RUN gem install highline -v 1.6.21

just prior to the line that installs librarian-puppet. This makes sure that highline – a version that does not have this advanced Ruby dependency – is already around when the librarian-puppet is installed. It will therefore not try to install the latest version of highline, that gives us the problem.


Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack – Part 2


This article continues the story from Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack – Part 1. It is the story of how to expose SOAP/XML based Web Services – primed for enterprise integration and process automation – as REST services with URL parameters and JSON payloads that provide the mobile backend APIs required for the development of mobile apps. The story began with the challenge Luc Bors and I were facing for the Live Mobile Hacking session at the Oracle Fusion Middleware EMEA Partner Forum (March 2015). For an audience of some 200 SOA Suite and Oracle Middleware experts – including the product management teams for several of the products we were going to demonstrate – we were to develop, live on stage, both a mobile app and a mobile backend (REST APIs) to support it.

The use case was defined – an app for flight attendants, to be used on tablets, to look up flight details, inspect the passenger list, update the flight status and record passenger complaints. Luc handled the mobile app, based on the agreements we made on the REST resources, services and payloads. It was up to me to implement the backend – consisting of a database, a SOA composite to expose the database data and operations at the middleware level (in SOAP/XML) – as described in the previous article. Next, I needed to implement a Service Bus project to expose the required REST service interface with JSON payload, leveraging the SOAP/XML service published by the SCA Composite. That is the story told in this sequel – the part highlighted in the red rectangle in the next picture:

image

This article briefly describes the steps I went through to create the REST services on top of the SOAP/XML services. At the end of this article, the end-to-end flow from the above illustration will have been implemented.

All source code for the mobile backend can be downloaded from a GitHub repository.

 

Steps:

  • Design REST services, Resources, URLs and JSON payloads
  • Create REST proxy services in Service Bus project
  • Create Pipeline to connect REST proxy to Business Service for SOAP/XML services with appropriate transformations between message formats (JSON/XML and vice versa)
  • Publish Service Project and test end-to-end REST services (all the way down to the database and back again)

At the end of this article, here is what we have achieved:

image

 

At this point, the Mobile App can come off the mock services it has undoubtedly used during development, and start consuming the real services.

 

Step One – Design REST services, Resources, URLs and JSON payloads

Luc and I had discussed the functionality to be offered in the mobile app. Subsequently, Luc indicated the REST services he would require to implement the app – both in terms of functions and in terms of the actual URL patterns and JSON payloads. The latter can be found in this GitHub directory.

The former were defined as follows:

To retrieve flight details: GET request to /FlyingHigh/FlightService/FlightService/flights/<flight code>?flightDate=2015-03-07 (with a JSON response structured as shown below)

image

 

To retrieve the passenger list for a flight: GET request to /FlyingHigh/FlightService/FlightService/flights/<flight code>/passengerlist?flightDate=2015-03-07 (with a JSON response structured as shown below)

image

 

 

To update the status of a flight: PUT request to /FlyingHigh/FlightService/FlightService/flights – with a JSON request payload as follows:

image
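The three resources above can be captured in a tiny client-side sketch. This is hypothetical code: the base URL (host and port) and the flight code are assumptions; only the path patterns and the flightDate query parameter come from the design above.

```python
# Build the request URLs for the flight REST resources described above.
# The base URL and flight code are illustrative assumptions.
import urllib.parse

base = "http://localhost:7101/FlyingHigh/FlightService/FlightService"
flight_code = "KL1234"
query = urllib.parse.urlencode({"flightDate": "2015-03-07"})

flight_details_url = f"{base}/flights/{flight_code}?{query}"                # GET
passenger_list_url = f"{base}/flights/{flight_code}/passengerlist?{query}"  # GET
flight_status_url = f"{base}/flights"                                       # PUT, JSON body

print(flight_details_url)
print(passenger_list_url)
```

Note how the flight code is a path segment while the flight date travels as a query parameter – exactly the split between resource identity and filter criteria that the design prescribes.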

 

The REST service for submitting a complaint from a passenger was defined the other way around: I dictated the service interface and Luc simply had to consume it – an example of how you ideally should not do things in mobile app development. The call for submitting a complaint has to be sent as a POST request to soa-infra/resources/default/CustomerService!1.0/CustomerCareService/complaints – with a JSON payload structured like this:

image

Note: the CustomerCareService is exposed from a SOA Composite directly, not through the Service Bus proxy that is created in the remainder of this article. The flow for requests to this service is as follows – a BPM process being triggered as a result:

image

 

Step Two – Create REST proxy services in Service Bus project

The REST adapter wizard in JDeveloper can be used in two ways: either take an existing Pipeline (Service Bus) or component (such as a Mediator or BPEL process) and derive a REST binding from the existing SOAP/XML service interface (not an ideal approach), or design the REST service with its payload as desired and have the adapter wizard generate the internal SOAP/XML representation that can be connected through a Pipeline or Mediator. The latter is the approach to use when the REST service design is leading – as it usually is, including in this case.

The starting point is a Service Bus project that already contains a Business Service for the FlightService SOA Composite. This business service obviously exposes the same SOAP/XML interface exposed by the composite.

image

What we are after is a REST service exposed by this Service Bus project – of which the interface definition is predefined by Luc. We can create this from the context menu in the Proxy Services swimlane.

The wizard for configuring the REST binding appears.

SNAGHTML2333454

Set the name to FlightRestService.

Now for each operation the service should expose, a Resource should be configured. For example for the retrieval of FlightDetails using /FlyingHigh/FlightService/FlightService/flights/<flight code>?flightDate=2015-03-07

 

Press the gears icon to specify the format for the response payload. This is expressed through an XSD document that describes the native format for the response; we call this type of document an NXSD document – in this case for example nxsd_retrieveFlightDetails.xsd.

Choose the type of format:

SNAGHTML24b19c2

 

And load the sample of the JSON message payload:

SNAGHTML24becba

The wizard will now create the NXSD representation of this JSON message format:

SNAGHTML24d59bf

Close the wizard.

Back in the Response tab, we can generate a sample of the message payload in JSON or XML format:

 

image

The operation to fetch the passenger list can be defined in a similar way:

SNAGHTML2515c6d

and the details:

image

 

Note: the endpoint definition for the REST service is set like this – a bit silly really, with the duplicate FlightService that we will have to use when invoking the service, for example from the browser:

image

 

Step Three – Create Pipeline to connect REST proxy to Business Service

Create Pipeline to connect REST proxy to Business Service for SOAP/XML services with appropriate transformations between message formats (JSON/XML and vice versa).

Add a Pipeline (for example FlightPipeline) that implements the WSDL created for the REST Proxy Service. The REST proxy can invoke this pipeline, handing it the XML representation (described by the NXSD document) of the URI parameters received in the REST call. It expects to receive the XML representation (also described in the NXSD) of the JSON payload of the response message.

XQuery or XSLT based transformations are required between the NXSD description of the request and the XSD description of the input to the business service FlightServiceBS, and between the output from the FlightServiceBS and the required NXSD format to be returned to the REST Proxy Service.

The next figure lists the six transformations created between NXSD and XSD-for-Business Service:

image

 

The pipeline between REST proxy and Business Service contains an operational branch – for the three REST operations in the FlightService. In each operation, it does similar things: transform the request, route to the business service and transform the response.

image

The overall project looks like this:

 

image

 

Step Four – Publish Service Project and test end-to-end REST services

Nothing special about deployment, so I will skip that part.

After deployment, the REST services can be invoked. An example is:

image

This results in a SOA composite instance with the following flow trace:

image

We can look at the details of what happens inside the Mediator:

image

to learn that what we get in JSON is pretty much what was constructed in the PL/SQL package in the database. No surprise there.

Of course I can invoke the other REST services as well:

image

but that does not add much value.

The really interesting next step is when Luc starts implementing the mobile app against these REST services.

SNAGHTML21929f7

 

Perhaps the Passenger Complaint is mildly interesting – as it causes some activities, as illustrated by the next two pictures:

first the call to the REST service for (POSTing) customer complaints:

image

and then the result inside the SOA Suite and BPM Suite:

SNAGHTML21b9c87

 

 

Some artist’s impressions from the actual live demo

 

Here some pictures from the actual demo, courtesy of Luis Weir:

image

image

image

image

and Sten Vesterli:

image

and Jim Weaver:

image

and Vikas Anand:

image

and Jan van Hoef:

image

and José Rodrigues

image

and Simon Haslam:

image

and Jacco Cijsouw:

image

 

and one last one taken by me from stage:

image


SOA Suite: How to set database session context through Database Adapter – and how to retrieve data from V$SESSION


To allow the Oracle Database to produce meaningful log, audit trail and trace information – supporting problem analysis and the interpretation and optimization of run time behavior – the database session execution context must be set. Without it, efficient administration of the RDBMS is not possible. This context, which should be set by any consumer of the database, consists of aspects such as who (the user for which the database is doing this work), when (the moment in time the database should assume), what (the application or component – and the specific version – currently executing logic that requires work from the database) and why (the functional background for this activity).

image

Context elements are associated with a database session. Some of these are set only at login time – when the connection is created and the session is initiated – while others can be set throughout a session with alter session statements, calls to specific supplied PL/SQL packages or manipulation through OCI or JDBC interfaces. Some of the most valuable settings in the context are:

  • MODULE – the (free format) name of the client application’s module or program unit
  • ACTION – a free format indication of the feature or function in the client application from which the current activity was triggered
  • CLIENT_INFO – additional free format information about the (state of the) client (program) for which the session currently performs activities
  • CLIENT_IDENTIFIER – an identifier describing the lightweight end user connected to the client application
  • ECID – the Fusion Middleware execution context identifier that identifies a cross-WebLogic-and-database-node chain of activities executed to service a single request

The values for these context attributes are set through calls to supplied packages – dbms_application_info and dbms_session – except for the ECID, which is set in a handshake between Fusion Middleware/WebLogic and the Database (see blog article).
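As a minimal sketch of these package calls (the module, action and identifier values are made up for illustration), a session could set its context attributes, and then inspect them, like this:

```
begin
  dbms_application_info.set_module(
    module_name => 'FlightService',
    action_name => 'retrieveFlightDetails');
  dbms_application_info.set_client_info('db adapter call from SOA Suite');
  dbms_session.set_identifier('flight.attendant.42');
end;

-- the values are now visible in the session's own V$SESSION record:
select module, action, client_info, client_identifier
from   v$session
where  sid = sys_context('userenv', 'sid');
```

Any subsequent monitoring query, trace file or audit record produced for this session will carry these values along.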

Many of the predefined database context settings are available from the dynamic V$ views, such as V$SESSION, where DBAs can leverage them for monitoring and analysis purposes. They are also recorded in ASH, AWR, the database audit trail and in trace files. Additionally, the context values can be used in policies (VPD, Data Redaction, Fine Grained Auditing, Resource Plans/Consumer Groups) and in custom filters and logging mechanisms.
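For example, a DBA could inspect all sessions currently working on behalf of a specific application module like this (the module value is just an example; the column names are as defined for V$SESSION):

```sql
select sid
,      username
,      module
,      action
,      client_identifier
,      client_info
,      ecid
from   v$session
where  module = 'FlightService';
```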

This article shows how the SOA Suite – when it accesses the database through the database adapter – can pass the appropriate database session context settings, and how these should be set in the database.

The sample application is a SOA Composite called FlightService with three database adapter bindings that each call out to a procedure or function in a PL/SQL package in the database.

image

The application is introduced in some detail in this article. We will add the pieces to set the proper database context for the database actions initiated from the SOA Composite FlightService.

Implementation of Database Session Context Propagation

The interaction with the PL/SQL package through the database adapter is based on User Defined Types – custom database object definitions. I consider the use of these types a best practice. They allow us to exchange nested data structures in single round trips, and provide data structures that map very well to XML messages and that can be handled very well by the database adapter. Additionally, the use of these UDTs allows the database developers to work in a rather elegant way – leveraging their tools to their full potential.

image

 

I have created an additional UDT for the session context details:

create or replace
type session_context_t force as object
( client_identifier varchar2 (100)
, module varchar2(48)
, action varchar2(32)
, client_info varchar2(32)
, ecid varchar2(50)
, flashback_time date
, edition varchar2(128)
);

I have added an input parameter based on this type to each of the three program units in the PL/SQL API, like this:

create or replace
package flight_service
as
function retrieve_flight_details
( p_flight_number in number
, p_airline_code in varchar2
, p_departure_date in date
, p_session_context in session_context_t
) return flight_t
;
function retrieve_passenger_list
( p_flight_number in number
, p_airline_code in varchar2
, p_departure_date in date
, p_session_context in session_context_t
) return passenger_list_t
;
procedure set_flight_status
( p_flight_number in number
, p_airline_code in varchar2
, p_departure_date in date
, p_flight_status in out varchar2
, p_session_context in session_context_t
);

end flight_service;
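A direct PL/SQL call to this API with a populated context instance could look as follows – a sketch; the flight number, airline code and date are made-up sample values:

```sql
declare
  l_flight flight_t;
begin
  l_flight := flight_service.retrieve_flight_details(
                p_flight_number   => 34
              , p_airline_code    => 'KL'
              , p_departure_date  => to_date('07-03-2015', 'DD-MM-YYYY')
              , p_session_context => session_context_t(
                                       'lucas'            -- client_identifier
                                     , 'FlightService'    -- module
                                     , 'getFlightDetails' -- action
                                     , null               -- client_info
                                     , null               -- ecid (set by FMW, not here)
                                     , null               -- flashback_time
                                     , null )             -- edition
              );
end;
/
```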

In the implementation of each of the three units, I have added a call to a local, private procedure – for example:

function retrieve_flight_details
( p_flight_number in number
, p_airline_code in varchar2
, p_departure_date in date
, p_session_context in session_context_t
) return flight_t
is
  l_flight flight_t;
begin
  set_session_context(p_session_context => p_session_context);
  select flight 
  into   l_flight
  from

The implementation of this procedure is the linking pin that actually sets the database session context:

procedure set_session_context
( p_session_context in session_context_t
) is
begin
  if p_session_context is not null
  then 
    DBMS_APPLICATION_INFO.SET_MODULE(p_session_context.module, p_session_context.action );
    DBMS_APPLICATION_INFO.SET_CLIENT_INFO(p_session_context.client_info);
    DBMS_SESSION.SET_IDENTIFIER(p_session_context.client_identifier);
  end if;
end set_session_context;
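A counterpart that resets the context at the end of a call could look like the sketch below. This procedure is not part of the original package, but it is useful when connections come from a pool and sessions are reused across consumers – stale context values would otherwise be attributed to the wrong caller:

```sql
procedure clear_session_context
is
begin
  -- reset module, action and client info to empty values
  dbms_application_info.set_module(module_name => null, action_name => null);
  dbms_application_info.set_client_info(client_info => null);
  -- detach the client identifier from this session
  dbms_session.clear_identifier;
end clear_session_context;
```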

After adding the input parameter, I have re-run the database adapter wizard for each of the three bindings. Simply by clicking next on each page and finish on the last one, the database adapter binding configuration – and the associated XSD – were extended with the new parameter.

The XSD definition for each DB adapter binding was extended with the P_SESSION_CONTEXT element and associated type:

image

I need to make sure that the three routing rules in the Mediator take care of providing values for this element. It seemed easiest and most appropriate to do so by assigning values rather than by including them in the transformations – although I am not yet sure about the best approach. Here is the Mediator editor:

image

and here are the four value assignment rules – one of which is specific to this routing rule (‘getFlightDetails’ as the value of the action element):

SNAGHTML35119d

 

The next step could be to derive values from properties such as Flow (or Composite) Instance Title, the user identity set in the middleware context and perhaps composite sensor values.

 

Database Session Context in Action

Perform a request to the REST service

image

that through the Service Bus Project initiates a SOA Composite that can be traced in EM FMW Control:

image

The ECID is exposed in this management UI and this same ECID is passed by WebLogic to the database (see this article for more details). Using the ECID, we can query the V$SESSION view – to learn about other session characteristics or session context settings for the database session that performed work on behalf of this ECID:

image

Note how the values for MODULE, ACTION, CLIENT_IDENTIFIER and CLIENT_INFO are available in this view and also in ASH, AWR, the database audit trail and in trace files. Additionally, the context values can be used in policies (VPD, Data Redaction, Fine Grained Auditing, Resource Plans/Consumer Groups) and in custom filters and logging mechanisms.

The values were passed from the Mediator to the Database Adapter (except for the ECID, which is propagated by the FMW infrastructure automatically), as the following trace overview shows:

image


Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack


In a previous article – https://technology.amis.nl/2015/03/03/preparation-for-live-mobile-hacking-with-an-ofm-12c-red-stack-budapest-2015/ – I introduced the challenge Luc Bors and I were facing for the Live Mobile Hacking session at the Oracle Fusion Middleware EMEA Partner Forum. For an audience of some 200 SOA Suite and Oracle Middleware experts – including the product management teams for several of the products we were going to demonstrate – we were to develop live both a mobile app as well as a mobile backend (REST APIs) to support the mobile app.

The use case was defined – an app for flight attendants to be used on tablets to look up flight details, inspect the passenger list, update the flight status and record passenger complaints. Luc handled the mobile app, based on the agreements we made on the REST resources, services and payloads. It was up to me to implement the backend – consisting of a database, a SOA composite to expose the database data and operations at middleware level (in SOAP/XML) and a Service Bus project to expose the required REST service interface with JSON payload, leveraging the SOAP/XML service published by the SCA Composite.

This article will briefly describe the steps I went through. All source code for the mobile backend can be downloaded from a GitHub repository.

image

 

Step One – Database

The database created for Flying High contains just five tables – as shown in the next figure.

image

However, these tables are encapsulated – hidden from view of external consumers. This I consider a best practice, for various reasons that include the encapsulation of (potentially complex) SQL statements in APIs in the database rather than in client applications, and the freedom to change the actual implementation of the tables and the way data is stored. The API that is externally exposed is a PL/SQL package FLIGHT_SERVICE. The package specification contains three units – two functions and a procedure – for retrieving flight details, retrieving the passenger list for a flight and updating the status of a flight.

The input and output parameters for these units are all based on user defined types. This allows me to exchange nested data structures in single round trips, and provides data structures that map very well to XML messages and that can be handled very well by the database adapter. Additionally, the use of these UDTs allows the database developers to work in a rather elegant way – leveraging their tools to their full potential.
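To give an impression of such a nested structure – the actual definitions are in database.sql; the type and attribute names below are assumptions, not the real ones – a passenger list result could be modelled along these lines:

```sql
-- an object type for a single passenger (attribute names are assumptions)
create or replace type passenger_t as object
( first_name  varchar2(50)
, last_name   varchar2(50)
, seat_number varchar2(4)
);
/
-- a collection of passengers, returned in a single round trip
create or replace type passenger_list_t as table of passenger_t;
/
```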

 

image

 

When you check the script database.sql in the source code you will find the definitions of the tables, the UDTs and the PL/SQL package.

 

Step Two – SOA Composite, Database Adapter, SOAP/XML Service

Each of the operations in the database package Flight_Service is exposed as a Reference in a SOA Composite application, using three database adapter bindings. The User Defined Type based inputs and outputs of these operations are represented in (very similar) XML structures in the interface definitions for these three bindings.

 

image

 

In parallel with the creation of the PL/SQL package, a WSDL was created for the FlightService, with a similar granularity in operations and message data structures. Note: this is not required, but it certainly makes life simpler when you have to do a pressure cooker demo. The WSDL is visualized here:

image

You can access the source itself through this link. The FlightService.wsdl has a companion XSD document that describes the request and response messages as we want to expose them to consumers of the SOAP/XML service.

The structures in the XML Schema Definitions are quite close to the User Defined Types in the PL/SQL API. However, they are defined from a canonical model – not from database table and column definitions.

image

A Mediator component was added to the composite to wire the exposed FlightService interface to the three database adapter references.

For each of the six handovers – three requests and three responses – I created XSLT maps: one to convert from the canonical input message to the input required by each of the three database adapter bindings, and one to convert each of the three outputs from the database adapter bindings to the response message format specified in the FlightService WSDL and XSD.

image

The Mediator has three routing rules configured that each use two XSLT Maps. The maps are quite straightforward, thanks to the similarities between the PL/SQL API object structures and the XSD definitions. One example of an XSLT map is shown here:

image

This map uses a DVM (domain value map) lookup to convert between the codes used for FlightStatus in the database (sch, brd, cld) and the canonical values to be used in the response message (scheduled, boarding, closed). The DVM definition is shown next; the document was transferred to the MDS repository of the runtime SOA Suite.
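Such a DVM document is a simple XML file. For the FlightStatus mapping it could look roughly like this – a sketch; the DVM name and column names are assumptions, not taken from the actual sources:

```xml
<dvm name="FlightStatus" xmlns="http://xmlns.oracle.com/dvm">
  <description>Maps database flight status codes to canonical values</description>
  <columns>
    <column name="db"/>
    <column name="canonical"/>
  </columns>
  <rows>
    <row><cell>sch</cell><cell>scheduled</cell></row>
    <row><cell>brd</cell><cell>boarding</cell></row>
    <row><cell>cld</cell><cell>closed</cell></row>
  </rows>
</dvm>
```

In the XSLT map, such a DVM is then consulted with the dvm:lookupValue() function.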

image

 

After deploying the SOA composite (and configuring both the Data Source and the Database Adapter connection), the FlightService can be invoked, for example from SoapUI:

image

This screenshot shows a call to the getFlightDetails operation and the resulting response – wrapped in a SOAP envelope and of course all XML in nature.

Each service call will start a new instance of the FlightService SOA Composite. The instance will make the call to the database. All interaction can be tracked – provided the audit level is fine grained enough – as shown in the next figure.

image

This figure shows the XML structure that came out of the Database Adapter binding – and that was converted from the User Defined Type at database level. Note the value of BRD for FLIGHT_STATUS that is converted by the DVM lookup to boarding.

Unfortunately, this beautiful service is still not good enough for Luc and his Mobile App: a REST/JSON service was the agreement. So my next challenge was to expose the FlightService as a REST service that uses JSON payloads. That story is told in a follow-up article.
