AMIS Oracle and Java Blog: Friends of Oracle and Java
https://technology.amis.nl

ADF Performance Monitor: Measuring Network Time to Browser and Browser Load Time
https://technology.amis.nl/2015/03/29/adf-performance-monitor-measuring-network-to-browser-and-browser-load-time/
Sun, 29 Mar 2015 15:57:49 +0000

Recently we added network time and browser load time to the ADF Performance Monitor. With this end-to-end visibility you now know exactly what every end user of your ADF application experiences, in real time, and you can quickly resolve any performance bottlenecks. You can even drill down into an individual user to analyze his or her experience and understand the behavior of the ADF application. The dashboard has been improved with several overview and detail graphs that show in which layer of your application the time is spent. This is very useful for troubleshooting problems.

The ADF Performance Monitor is an advanced tool specifically built for ADF applications and is aware of the intricacies of the ADF framework. It traces key ADF actions across tiers and services to provide end-to-end visibility and automatically maps each tier, so you can easily visualize the relationships between them. This tracing provides deep visibility into the cause of application performance issues, down to the tiniest detail. Click here for more information.

Network Time and Browser Load Time

Network time is the time it takes to send an HTTP request from the browser to the application server (HTTP request network time) and to send the response from the application server back to the browser (HTTP response network time). The browser load time is the time the browser needs to build up the DOM tree and load the page.

database_appserver_network_client

Normally, the combination of network time and browser load time should not be more than roughly one second.
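As a rough, tool-independent way to get a feel for the network portion of a request (this is just an illustration, not part of the ADF Performance Monitor, and the URL is a placeholder), you could time an HTTP request from the command line:

# connect time, time to first byte and total time for a single request
curl -s -o /dev/null \
     -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
     https://your-adf-app.example.com/faces/index.jspx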

Network Time/Browser Load Time in the ADF Performance Monitor

Several overview graphs have been added to the main dashboard. The day overview shows, for each hour of the day (bottom right), the layer in which the time is spent: the database, web services, the application server, and network/browser load time:

ADF Performance Monitor Day Overview - Slow Network Time at 11 am

In the top-right graph (hourly overview of the day's performance) we can see a lot of red in the bars, specifically from 11:00 to 12:00; apparently there were many very slow requests. The graph at the bottom right explains what happened: there were network problems. After talking to the infrastructure department, this indeed turned out to be the case. They were very busy with maintenance and were not aware (!) that end users were suffering from their work.

We can drill down to an hour overview:

ADF Performance Monitor Hour Overview - Slow Network Time at 11 am

We can also zoom in on a specific managed server of our 12-server cluster (called regi_adf_server1) from the dropdown list:

ADF Performance Monitor Hour Overview - Slow Network Time at 11 am - filtered by managed server

We can see here that during the whole hour there were network problems (graph at the bottom right).

We can zoom in further, to a five-minute overview (11:40-11:45):

ADF Performance Monitor Five Minute Overview - Slow Network Time at 1140 am

We can see that many end users had to wait roughly 5 seconds on pure network time (!). Note that the JVM is not the problem and is working fine (bottom graph).

The next day the network problem was resolved. We can see this in the following hour overview graph, where there is much less purple (bottom right):

ADF Performance Monitor Hour Overview - Normal at 11 am - filtered by managed server

In the HTTP request popup we can also select a specific end user and monitor his or her experience. The client response time includes everything: the application server processing time, the network time to the browser, and the browser load time. We can see that in this case the client response time was even higher: roughly 9 seconds for each HTTP request (!). This user (and organization) was complaining and calling our support desk, where we could monitor and investigate the end-user experience:

End User Experience - HTTP Requests2

You can find more information on the ADF Performance Monitor here. A one-month trial version is available (on demand).

IT-beheer van morgen – hoe ziet dat eruit? (Tomorrow's IT operations: what will they look like?)
https://technology.amis.nl/2015/03/27/it-beheer-van-morgen-hoe-ziet-dat-eruit/
Fri, 27 Mar 2015 14:20:23 +0000

What is my vision of IT operations? A tough question, because what is the essence today? Anyone who can see what is coming tomorrow will probably be a rich man the day after. But just like many of the organizations I visit and the people I speak with, I do not have a crystal ball. A pity, because a crystal ball would be a godsend for many in the rapidly changing world of IT.

If you do not know what is coming in the longer term, you have to make sure you can respond to unexpected changes. In other words: as an organization you have to be nimble, or Agile. This gives you a head start on your competitors. However, this desire to be agile is in many cases at odds with the responsibilities of the organization's IT operations department.

The diligence of IT operations

In many operations departments, processes are based on systems thinking. When new IT assets are purchased, they are 'taken into management'. This means that the responsible department methodically safeguards the productivity, continuity and reliability of the purchased product. The tasks needed to guarantee this usually form the basis for an organized 'stability'. This stability forms a natural resistance to change, and change these days is the norm rather than the exception.

IT investments must pay off

The operations department also fulfills another important task. Before the investment, organizations lay out the costs and benefits. The general idea, of course, is that the benefits exceed the costs and thus make the investment pay off. After the IT assets are taken into use, the operations department is tasked with limiting the 'Total Cost of Ownership'. As a result, the asset often stays in use as long as possible in order to limit costs. The actual costs and benefits are often no longer discussed, so nobody considers whether replacing the IT assets might actually yield a better return.

Devaluation

If you want to make a business case work today, the payback period (ROI) has to be shortened drastically. This drives up the price of the products or services an organization delivers, products and services that are already under high pressure in strongly competitive markets. That is why organizations often choose to keep muddling along with outdated IT assets, which now and then has dramatic consequences.

Unsustainable

My conviction is that the IT investment and management model has become unsustainable for many organizations, certainly for organizations whose core business lies outside IT. But standing still means falling behind. There is, for example, a good chance that the seasoned carpenter of tomorrow will go bankrupt without a profile on werkspot.nl, his own website with online complaint handling, and reliable customer reviews.

To the Cloud

I believe that many of the challenges described above are solved by the business opportunities hidden behind the "Cloud". Yet the Cloud is still used by relatively few organizations. The most important reason is that the "Cloud product" often does not, or not sufficiently, match the organization's starting position and future wishes. Cloud products are not always mature yet, nor a good translation of the traditional investment model. I am certain, however, that these problems will be solved within the foreseeable future. What exactly will the world look like with the Cloud? I do not know that either. But I am very curious about your challenges, so that perhaps together we can think about the solutions of tomorrow.

Best practices for implementing an efficient continuous delivery process
https://technology.amis.nl/2015/03/24/best-practices-for-implementing-an-efficient-continuous-delivery-process/
Tue, 24 Mar 2015 13:43:23 +0000

In the past years I have been involved in the implementation and introduction of a continuous delivery process in a lot of software projects. The activities concerned are also labeled build automation, continuous integration or DevOps, depending on fashion and trends. Despite the jargon and popular buzzwords, I am convinced continuous delivery practices help create high-quality software in an efficient way. Here are 11 best practices that have helped me a lot when implementing continuous delivery. Please share your experiences as comments on this post.

1. Step-by-step implementation

Implement the continuous delivery process in small, gradual changes; do not use a big bang or waterfall approach. First work on several quick wins, the low-hanging fruit, then automate the steps in the process that will save you a lot of work and improve software quality by reducing the number of errors. Next, follow a step-by-step approach to automate the complete delivery process, each time focusing on a gradual improvement of the highest-priority issues. This works best with an agile project approach: you can plan a small portion of continuous delivery work in every sprint. The team members and other stakeholders who are less involved with continuous integration and deployment automation then have the chance to get used to this new way of working. This way you will get more buy-in during the implementation of the process.

2. Focus on testing

Test and review everything, and do it often. You are automating the entire process of software delivery all the way to production, so every error or misinterpretation could quickly have massive consequences. The frequency of the release and delivery cycle will increase drastically. In order to safeguard quality, a lot of automated testing is required, since you won't be able to perform thorough manual testing anymore at such a high frequency. When manual regression testing takes 3 days, you cannot keep up when you are releasing every 4 hours.

3. Version everything and commit frequently

Store everything in your version control system. The choice of version control software is less important than its usage; just choose one and use it properly. Using a file share folder is not acceptable anymore! Your version control system needs to contain every asset of development and deployment: design documents, code, database design, database scripts, environment configuration, test scripts, deployment tools and deployment scripts. The only thing you do not store in source control is sensitive data or environment-dependent settings (endpoints, usernames, passwords); use a release management tool for storing these. This means you need to store development settings, scripts and sources in the same source control repository as the rest of the software. Commit frequently and integrate early, at least twice a day and preferably more often. Do not save your changes until the last day of the sprint, since your brilliant changes might break the rest of the code.
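As an illustration of keeping environment-dependent settings out of version control, a deploy step could substitute them into a committed template at release time. This is only a sketch; the template name, placeholders and variables are hypothetical:

# jndi.properties.template is stored in version control with placeholders;
# the real values are injected by the release management tool at deploy time.
export DEPLOY_SERVER_URL="t3://testserver:7001"   # hypothetical value supplied by the release tool
export DEPLOY_USER="weblogic"                     # hypothetical value supplied by the release tool
sed -e "s|@SERVER_URL@|${DEPLOY_SERVER_URL}|g" \
    -e "s|@USER@|${DEPLOY_USER}|g" \
    jndi.properties.template > jndi.properties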

4. Peer review everything

Use your peers to review everything you make: designs, configuration plans, code, build scripts and delivery architecture. A peer review increases quality and contributes to knowledge transfer. Your team members will ask questions about your approach and methods, make sure the structure complies with the coding and architecture guidelines, and they might find things you have overlooked that would certainly lead to errors later on in the deployment pipeline. The review also adds value through knowledge transfer: by reviewing your code, another team member automatically gets to know your deliverable and becomes familiar with the changes. This will save time later when they have to work on the same code. A review checklist helps improve the quality of the peer review.

5. Accept feedback

A very important follow-up to the previous item: accept any feedback. When someone provides you with feedback about anything, it is only intended to improve the product or the process. Consider feedback a gift and approach it with an open mind. Our natural tendency is to view (negative) feedback as a personal attack and not to accept positive feedback. Discuss this in your team and create a culture of accepting feedback as a positive step towards improving the product and/or the process. Feedback is also an opportunity to learn from your experiences and improve yourself and your team.

6. Speed up your frequency

When you are doing continuous delivery, speed up your frequency and see every run as a practice run for deploying to production. One of my favorite quotes by Martin Fowler: if it hurts, do it more often. By speeding up your frequency you can be confident it is working, because you have done so many trial runs. The size of the changes is also smaller when you increase the frequency: when something fails you can easily pinpoint the origin and trace it back to the change that caused the error. This makes debugging, finding the error and solving it much easier.

7. Ask someone else to try your implementation

Always ask someone else to give your implementation a test run, even when it has passed the peer review stage. Handing over your solution to another person and letting it run without your direct supervision will reveal differences in interpretation, your implicit preconditions and differences in environment variables. Letting someone else try your software also validates its documentation, functioning, structure and efficiency: a great test of your deliverable.

8. Always keep the delivery pipeline functioning

The top priority of any continuous delivery team is to be able to deliver software continuously. This is only possible when the complete pipeline is functioning all the way to production. When something causes an interruption or bottleneck in the pipeline, the team has to fix it with the highest priority. Not because we like green lights on our dashboard; that is only superficial. When the pipeline is not functioning, the safeguards and controls on your codebase are not operational, and adding more code to this situation will only compound the problem. So a functioning pipeline is the highest priority.

9. Communicate your ideas

Continuous delivery is not something you do on your own; it affects the core of most development processes. You cannot change a process that impacts many people without communicating with the different stakeholders. This is important not only for understanding the process and technology solutions, but also for developing a supportive environment for your changes. So share your ideas, explain what you are planning to do, explain your designs and solutions, and ask for (and accept) feedback. You are probably working with a team of very smart people: make use of this and let them contribute to your solution.

10. Virtualize

The iterative development, intensive automated testing and frequent deployment require an environment with a "reset" option. You are going to use the test environment many times, often leaving it in a ruined state. It is very helpful to have a procedure to recreate the environment in its original state; this allows you to test your release/deployment more often, increasing its quality. Virtualization helps implement this requirement, by using snapshots or scripting tools to recreate the environment from scratch. This is much more of a challenge when using physical hardware.
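A minimal sketch of such a reset, assuming either a Vagrant-managed test VM or a VirtualBox VM with a prepared snapshot (the VM and snapshot names are placeholders):

# Vagrant-managed environment: throw it away and rebuild it from the provisioning scripts
vagrant destroy -f && vagrant up

# VirtualBox alternative: roll back to a known-good snapshot
VBoxManage controlvm "test-vm" poweroff
VBoxManage snapshot "test-vm" restore "clean-baseline"
VBoxManage startvm "test-vm" --type headless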

11. Keep improving with your whole team

Continuous delivery is not a one-time effort; it is an ongoing project, evolving and improving step by step. It is not the job of the one team member assigned the "automation task": it is the responsibility of the whole team to make delivery of software as efficient as possible and to constantly search for ways to improve quality.

These tips have proven very useful to me and I am sure they can help improve your process. Feel free to add your opinions and experiences below; I would love to hear them.

Next step with Docker – Create Image for Oracle Database 12c using Puppet
https://technology.amis.nl/2015/03/22/next-step-with-docker-create-image-for-oracle-database-12c-using-puppet/
Sun, 22 Mar 2015 21:47:32 +0000

In a previous article – https://technology.amis.nl/2015/03/15/docker-take-two-starting-from-windows-with-linux-vm-as-docker-host/ – I discussed my first steps with Docker. With Windows as my host environment I used Vagrant to create a VirtualBox VM with Ubuntu. In that VM I installed Docker and played around creating some containers, images and eventually an image for Oracle WebLogic 12.1.3. I leveraged the excellent work by Mark Nelson (especially his article https://redstack.wordpress.com/2014/11/14/gettingn-to-know-docker-a-better-way-to-do-virtualization/).

In this article I am taking things one step further by creating a Docker container – and from that container an image – with the latest Oracle Database release 12.1.0.2 (Enterprise Edition). Again, the Mark Nelson article is my guide and Edwin Biemond – champion of all things automated – provided the Docker file and Puppet scripts that get the job done. Edwin was also kind enough to help me out when a library dependency caused problems.

I ran into the (default) size limitation on Docker containers (10 GB) while installing the Oracle Database. I resolved this challenge by mapping a host folder to the container (with the original database software) and by sharing a volume from a second container that was used as temporary (staging) area. Thus I virtually expanded the file system of my container considerably beyond the 10 GB mark.

The steps I went through are basically:

0. preparation: (as discussed in the previous article) get an Ubuntu-based virtual machine (VirtualBox) running on my Windows host laptop and install Docker into this VM. Download the Oracle Database software (12.1.0.2 Enterprise Edition) from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index-092322.html. Two files are downloaded to a folder on the host, some 2.6 GB in total.

image

Note: the Puppet scripts expect two files with names linuxamd64_12c_database_1of2.zip and linuxamd64_12c_database_2of2.zip. I renamed the downloaded files to match these expectations.
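Assuming the 12.1.0.2 downloads carry their usual names (an assumption; adjust the source names and the folder to whatever you actually have), the rename could look like this:

cd /host_temp/shared/downloads
mv linuxamd64_12102_database_1of2.zip linuxamd64_12c_database_1of2.zip
mv linuxamd64_12102_database_2of2.zip linuxamd64_12c_database_2of2.zip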

1. create Docker image – db12c102_centos6 –  based on CentOS with Puppet installed as well as the Puppet modules required for the database installation

2. create Docker container – softwarecontainer – to act as the ‘staging container’ – a container that shares a volume that can be used as expanded file storage from other containers

3. run Docker container based on image db12c102_centos6 with host folder mapped into it and with volume shared from softwarecontainer

4. edit Puppet files to match the 12.1.0.2 Enterprise Edition installation

5. run the Puppet site.pp manifest – this will install the database software and initialize an instance

6. test whether the container is actually running the database; then create an image from the container

 

At the end of the article, I have both a container and image with the Oracle Database 12c (12.1.0.2) running – based on the oradb Puppet Module by Edwin Biemond. The container exposes port 1521 where the database can be accessed from the host as well as from other containers – as we will see in subsequent articles.

1. Create base Docker image

With a simple Dockerfile – derived from Docker Database Puppet – I create a base image that provides the starting point for the container that will hold the Oracle Database installation.

The Dockerfile looks like this:

# CentOS 6
FROM centos:centos6

RUN yum -y install hostname.x86_64 rubygems ruby-devel gcc git unzip
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc

RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && \
    rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

# configure & install puppet
RUN yum install -y puppet tar
RUN gem install highline -v 1.6.21
RUN gem install librarian-puppet -v 1.0.3

RUN yum clean all

ADD puppet/Puppetfile /etc/puppet/
ADD puppet/manifests/site.pp /etc/puppet/

WORKDIR /etc/puppet/
RUN librarian-puppet install

EXPOSE 1521

ADD startup.sh /
RUN chmod 0755 /startup.sh

WORKDIR /

CMD bash -C '/startup.sh';'bash'

The image is created from this Dockerfile using this command:

docker build -t db12c102_centos6 .

A screenshot from somewhere midway in the creation process:

image

and the completion:

image

Note: initially I used the exact same Dockerfile Edwin published, with an edited site.pp file in order to install the 12.1.0.2 Enterprise Edition instead of the 12.1.0.1 Standard Edition. I then ran into disk space limitations: apparently, copying the software zip files (2.6 GB) to the container and extracting their contents inside the container occupied so much space that the installation was aborted.

image

I needed a trick to create the container and install the Oracle Database without burdening the container with these zip files and their temporarily extracted contents. The trick consists of three things:

  1. create the image in multiple steps (instead of a single one with a single Dockerfile that auto-runs the complete Puppet script), starting with a simple base image
  2. run a container from this base image with a host folder mapping from which it can access the software zip-files without actually copying them to the container’s file system
  3. have the container import a volume from another container; this volume is used as temporary staging area (for the extracted files needed only during installation); finally create an image from this last container

Also note that in the original script, Edwin did not have the line "RUN gem install highline -v 1.6.21"; he advised me to add it because the original Dockerfile resulted in a dependency error:

image

This line makes sure that highline gets installed before librarian-puppet brings along a version of highline with a more demanding Ruby requirement.

2. Create Docker container – softwarecontainer – to act as the ‘staging container’

A very simple Docker image is created using this Dockerfile:

FROM busybox

RUN mkdir /var/tmp/install

RUN chmod 777 /var/tmp/install

VOLUME /var/tmp/install

VOLUME /stage

CMD /bin/sh

and this command:

docker build -t softwarestore .

This results in an image called softwarestore which exposes its folder /var/tmp/install as a volume that can be used as expanded file storage from other containers.

Start the container softwarecontainer based on this image:

docker run -i -t -v /host_temp/shared:/stage/shared --name softwarecontainer softwarestore /bin/sh

The container softwarecontainer is now available along with its /var/tmp/install volume that will be used during database installation as staging area.
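As a quick check (not in the original write-up), you can confirm from a throwaway container that the shared volume is indeed visible before using it as a staging area:

docker run --rm --volumes-from softwarecontainer busybox ls -ld /var/tmp/install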

 

3. Run Docker container based on base image db12c102_centos6

Run a new container based on the base image created in step 1:

docker run -ti -v /host_temp/shared/downloads:/software --volumes-from softwarecontainer db12c102_centos6 /bin/bash

with the host folder mapped into it and with the volume shared from softwarecontainer.

The host folder with the database software is accessible from within the container at /software, as is the /var/tmp/install volume from the softwarecontainer:

image

 

 

4. Edit Puppet files to match the 12.1.0.2 Enterprise Edition installation

Inside the container: Open the site.pp file at /etc/puppet in a text editor. Note: this directory and this file were created along with the base image in step 1.

image

Edit the lines that refer to SE (Standard Edition) and 12.1.0.1:

image

Note that only a few changes are required to process EE instead of SE and 12.1.0.2 instead of some other version.

 

5. Run the Puppet site.pp manifest to install the database software and initialize an instance

The heavy lifting regarding the installation of the Oracle Database and the creation of an instance (service orcl) is done by Puppet. The Puppet script is started (still inside the container) using this command:

puppet apply /etc/puppet/site.pp --verbose --detailed-exitcodes || [ $? -eq 2 ]

The first steps are shown here:

image

And the last ones:

image

When Puppet is done, we have a running database. All temporary files have been cleaned up.

6. Test whether the container is actually running the database – then create an image from the container

With these commands (inside the container) we can run SQL*Plus and connect to the running database instance:

export ORACLE_SID=orcl

export ORACLE_HOME=/oracle/product/12.1/db

cd $ORACLE_HOME/bin

./sqlplus "sys/Welcome01 as sysdba"

SQL*Plus is started and we can for example select from dual.

image

Note: the database SID is orcl. The password for SYS and SYSTEM is Welcome01.

Using exit twice – once to leave SQL*Plus and once to exit the container – we return to the host. The container is shown (agitated_bohr).

The next step – which takes some time, due to the size of the container and the images created from it – is to create an image that captures the current state of the container:

docker ps -a (to find the container id)

docker commit <container id>

docker images

Assign a nice name and version to the image:

docker tag c5d3effcbdd6 oracle-db-12-1-0-2:1.0

Look at the result of this:

image

A sizable image, which could be reduced in size through export and import, although that would sever the link with the base CentOS image.

The situation at this point can be visualized using this picture:

image

 

7. Run Container based on the Image

Run a container from that image, with the local folder /software mapped to the host folder that contains the software:

docker run -ti -p 0.0.0.0:1521:1521 -v /host_temp/shared/downloads:/software --volumes-from softwarecontainer oracle-db-12-1-0-2:1.0  /bin/bash

Note: the -v and --volumes-from are not really required, because the two folders were needed only for installing the database (during the creation of the image). Running the container with:

docker run --privileged=true -ti -p 1521:1521 oracle-db-12-1-0-2:1.0 /bin/bash

will do the job just as well. Note: I added --privileged=true here because I ran into a problem with not being able to switch users in the container. This discussion led me to use this additional parameter.

Once the container is fired up, the database can be started – using the /startup.sh script or using the statements listed in step 6. That is: I went through these steps (which seem a little bit more than should be required):

su oracle

/startup.sh

[provide oracle as the password]; this will start the listener

export ORACLE_SID=orcl

export ORACLE_HOME=/oracle/product/12.1/db

cd $ORACLE_HOME/bin

./sqlplus "sys/Welcome01 as sysdba"

SQL*Plus starts and connects us to an idle instance. Then type startup, and the database is started.

After exiting SQL*Plus, I can check the listener status:

./lsnrctl

then type status.

The service orcl.example.com and the instance orcl are both ready.
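As an extra sanity check (my addition, not part of the original steps), you can verify from the Ubuntu host that the published port answers; this assumes the container was started with -p 1521:1521 on this host:

nc -zv localhost 1521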

Tidbits

These are really notes to myself – little things I needed or came across while going through the steps described in this article.

Docker

Docker stores container data in the directory /var/lib/docker/containers in Ubuntu.

Remove a single image: docker rmi IMAGE_ID (note: an image can only be removed when no containers are based on it).

Trick for removing multiple images in one go: http://stackoverflow.com/questions/17665283/how-does-one-remove-an-image-in-docker
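Two variants I tend to use for this (they are not taken verbatim from the linked answer, so treat them as suggestions, and use them with care since they remove images indiscriminately):

docker rmi $(docker images -q)                      # remove all images
docker rmi $(docker images -f "dangling=true" -q)   # remove only untagged (dangling) images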

Remove Docker Containers (http://stackoverflow.com/questions/17236796/how-to-remove-old-docker-containers):

docker rm $(docker ps --before <Container ID> -q)

Volumes are outside of the Union File System by definition, so any data in them will not count towards the devicemapper 10GB limit (http://fqa.io/questions/29083406/docker-disk-quotas). By adding a VOLUME in the Dockerfile, I am hoping to be able to leave the big software install packages outside the 10GB limit.

image

Learn the IP address assigned to a container:

$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container_id>

When a Docker container is restarted, its IP address changes. Applications and other servers that were communicating with the container before the restart will no longer be able to reach it. Configuring a DNS server for Docker and having consumers use DNS names is one solution to the IP address change after a container restart.

Interesting observations in this white paper from Oracle on WebLogic and Docker

Linux

Determine the size on disk of directories and their contents: du -sh directory (http://www.codecoffee.com/tipsforlinux/articles/22.html)

Remove entire directory: rm -r directory

Library Dependency

While creating the Docker image using Edwin's Dockerfile, I ran into a dependency issue that Edwin helped me fix (well, he fixed it for me).

As the screenshot shows, the highline component that apparently gets installed as part of librarian-puppet requires a higher Ruby version than is available.

image

This was resolved by adding a single line to the Docker file:

RUN gem install highline -v 1.6.21

just prior to the line that installs librarian-puppet. This makes sure that highline, in a version that does not have this advanced Ruby dependency, is already present when librarian-puppet is installed. It will therefore not try to install the latest version of highline, which causes the problem.

How-to bulk delete ( or archive ) as fast as possible, using minimal undo, redo and temp
https://technology.amis.nl/2015/03/16/how-to-bulk-delete-or-archive-as-fast-as-possible-using-minimal-undo-redo-and-temp/
Mon, 16 Mar 2015 15:46:00 +0000

Deleting a few rows and deleting tens of millions of rows from an Oracle database should be treated in completely different ways. Though the delete itself is technically the same, maintaining indexes and validating constraints may have such a time- and resource-consuming influence that a vast amount of undo and redo is necessary when deleting millions of rows, in contrast to just a bit when you delete only some.

A classic recipe for limiting the effect on undo and temp is to commit in batches. The result is that the delete is executed in the context of more than one transaction, with these transactions done sequentially, thus saving on undo and temp. But this delete method is not very efficient in its use of database resources, it generates a lot more redo than necessary, and it is quite detrimental to transaction time.
Another option is to chop the delete into parts and execute the parts in parallel. This way you use your resources a lot better, but by hitting undo and temp at the same time you end up using a lot of undo and temp again, as before. The best option probably is to set up partitioning: the delete is then converted into a DDL statement – alter table x drop partition p – and excessive generation of undo, redo and temp is no longer an issue. But this requires the Enterprise Edition and, on top of that, a separate license for the Partitioning Option, which some customers cannot afford.

So let us assume this is an Oracle Standard Edition database, and you want the delete of 10 million rows to be just one fast transaction, with no more than 2-4 GB undo and 2-4 GB temp usage, and redo as minimal as possible. This is how you could do it, presuming there is a short window allowing you to drop all indexes without exception, so even dropping primary, unique and foreign key indexes is allowed.
During the delete, an application that touches this table will slow down because of the dropped indexes, and the table reorganization after the delete with the "move" command will take the table offline for a short time. If this is not allowed, you could use the dbms_redefinition package to reorganize the table online, but at the expense of considerably slowing down the overall transaction time.

First of all, do a CXTAS of all rows you want to delete, including the rowids. You may wonder what CXTAS means: Create eXternal Table As Select. Because all indexes are still active, the where clause predicate should be fast, and because this is effectively a DDL statement virtually no redo is generated. By the way, if no archiving or backup of the data to be deleted is needed, consider creating an external table with rowids only.
Analyze the external table.
Secondly, disable the primary, unique and foreign key constraints, dropping their indexes. Also drop all other indexes or set them unusable. Then delete all rows in the original table whose rowids were saved in the external table you just made, and commit when finished.
Thirdly, move the original table in nologging mode to reorganize it and lower its high watermark, enable all constraints again and recreate, or rebuild if set unusable before, all other indexes.
Analyze the table.
And last, if the deleted data must be archived, move the external table dump file to some other database or filesystem.

Check out the following code; this cleanup procedure is actually in use at one of our customers. It is a dynamic setup for cleanup of any table in any schema, provided the table contains just one (primary key) index, possibly a LOB or CLOB, and a timestamp or date column. If your environment demands another setup, feel free to adjust.

CREATE OR REPLACE PROCEDURE CLEANUP_TABLE
  ( P_schema       VARCHAR2    := 'ETL'
  , P_table_name   VARCHAR2    := 'LOG'
  , P_column_name  VARCHAR2    := 'TIMESTAMP'
  , P_constraint   VARCHAR2    := 'LOG_PK'
  , P_keep_data    VARCHAR2    := '00-01' -- 00 year and 01 months
  , P_directory    VARCHAR2    := 'ETL_BCK_DIR'
  , P_unload2file  PLS_INTEGER := 0
  , P_rollback     PLS_INTEGER := 0 )
AS
  v_curr_schema          VARCHAR2(30 CHAR)  := SYS_CONTEXT('USERENV','CURRENT_SCHEMA');
  v_directory            VARCHAR2(30 CHAR)  := UPPER(TRIM(P_directory));
  v_schema               VARCHAR2(30 CHAR)  := UPPER(TRIM(P_schema));
  v_table_name           VARCHAR2(30 CHAR)  := UPPER(TRIM(P_table_name));
  v_column_name          VARCHAR2(30 CHAR)  := UPPER(TRIM(P_column_name));
  v_constraint           VARCHAR2(30 CHAR)  := UPPER(TRIM(P_constraint));
  v_dmp                  VARCHAR2(100 CHAR) := LOWER(TRIM(P_table_name))||'.dmp';
  v_external_table_name  VARCHAR2(30 CHAR)  := 'XDEL_'||UPPER(TRIM(substr(P_table_name,1,25)));
  v_stat                 VARCHAR2(4000 CHAR);

-----------------------------------------------------------------------------------------------------
PROCEDURE Delete_Backupped_Table_Data
IS

BEGIN

  BEGIN
    EXECUTE IMMEDIATE 'DROP TABLE '||v_external_table_name||' PURGE';
  EXCEPTION
  WHEN OTHERS THEN
    NULL;
  END;

  BEGIN
    UTL_FILE.FREMOVE ( v_directory, v_dmp );
  EXCEPTION
  WHEN OTHERS THEN
    NULL;
  END;

END Delete_Backupped_Table_Data;
----------------------------------------------------------------------------------------------------
PROCEDURE Backup_Table_Data_2b_Deleted
IS
BEGIN

IF P_unload2file = 1 THEN
  v_stat := 'CREATE TABLE '||v_external_table_name||chr(10)
          ||'ORGANIZATION EXTERNAL'||chr(10)
          ||'('||chr(10)
          ||'TYPE ORACLE_DATAPUMP'||chr(10)
          ||'DEFAULT DIRECTORY '||v_directory||chr(10)
          ||'ACCESS PARAMETERS ( NOLOGFILE )'||chr(10)
          ||'LOCATION ( '''||v_dmp||''' )'||chr(10)
          ||')'||chr(10)
          ||'AS SELECT ROWID RID, t.* FROM '||v_schema||'.'||v_table_name||' t'||chr(10)
          ||'WHERE TO_NUMBER(TO_CHAR('||v_column_name||',''YYYYMMDD'')) < '||chr(10)
          ||'TO_NUMBER(TO_CHAR(TRUNC(SYSDATE - TO_YMINTERVAL('||''''||P_keep_data||''''||')),''YYYYMMDD''))';
ELSE
  v_stat := 'CREATE TABLE '||v_external_table_name||chr(10)
          ||'ORGANIZATION EXTERNAL'||chr(10)
          ||'('||chr(10)
          ||'TYPE ORACLE_DATAPUMP'||chr(10)
          ||'DEFAULT DIRECTORY '||v_directory||chr(10)
          ||'ACCESS PARAMETERS ( NOLOGFILE )'||chr(10)
          ||'LOCATION ( '''||v_dmp||''' )'||chr(10)
          ||')'||chr(10)
          ||'AS SELECT ROWID RID FROM '||v_schema||'.'||v_table_name||chr(10)
          ||'WHERE TO_NUMBER(TO_CHAR('||v_column_name||',''YYYYMMDD'')) < '||chr(10)
          ||'TO_NUMBER(TO_CHAR(TRUNC(SYSDATE - TO_YMINTERVAL('||''''||P_keep_data||''''||')),''YYYYMMDD''))';
END IF;
  dbms_output.put_line ( v_stat );
  execute immediate v_stat;

END Backup_Table_Data_2b_Deleted;
-----------------------------------------------------------------------------------------------------
PROCEDURE Exec_Cleanup
IS
BEGIN

-- drop index
  v_stat := ' ALTER TABLE '||v_schema||'.'||v_table_name||' MODIFY CONSTRAINT '||v_constraint||' DISABLE DROP INDEX';
  dbms_output.put_line ( v_stat );
  execute immediate v_stat;

-- delete rows
  v_stat := ' DELETE '||v_schema||'.'||v_table_name||chr(10)
          ||' WHERE ROWID IN ( SELECT RID FROM '||v_external_table_name||' )';
  dbms_output.put_line ( v_stat );
  execute immediate v_stat;
  if P_rollback = 1 then
    rollback;
  end if;

-- move (reorganize) table and clobs
  v_stat := ' ALTER TABLE '||v_schema||'.'||v_table_name||' MOVE NOLOGGING';
  dbms_output.put_line ( v_stat );
  execute immediate v_stat;
  for i in ( select column_name, tablespace_name
             from dba_lobs
             where owner = v_schema
             and table_name = v_table_name )
  loop
    v_stat := ' ALTER TABLE '||v_schema||'.'||v_table_name||' MOVE NOLOGGING '||chr(10)
            ||' LOB ('||i.column_name||')  STORE AS ( TABLESPACE '||i.tablespace_name||' )';
    dbms_output.put_line ( v_stat );
    execute immediate v_stat;
  end loop;

-- create pk index again
  v_stat := ' ALTER TABLE '||v_schema||'.'||v_table_name||' MODIFY CONSTRAINT '||v_constraint||' ENABLE';
  dbms_output.put_line ( v_stat );
  execute immediate v_stat;

END Exec_Cleanup;
-----------------------------------------------------------------------------------------------------

BEGIN
  dbms_output.put_line ( '************************************************************************' );
  dbms_output.put_line ( TO_CHAR(SYSDATE,'DY DD-MON-YYYY HH24:MI:SS')||' >> START CLEANUP TABLE '||v_schema||'.'||v_table_name );
  dbms_output.put_line ( '************************************************************************' );
-- Drop external table and dump file, if they already exist
  Delete_Backupped_Table_Data;
-- Create external table and dump file, using a select stat of the data 2b deleted
  Backup_Table_Data_2b_Deleted;
-- Analyze the external table you just made
  DBMS_STATS.GATHER_TABLE_STATS( v_curr_schema, v_external_table_name );
-- Delete table data, using the rowids in the external table
  Exec_Cleanup;
-- Analyze the table after reorg
  DBMS_STATS.GATHER_TABLE_STATS( v_schema, v_table_name );
  dbms_output.put_line ( '************************************************************************' );
  dbms_output.put_line ( TO_CHAR(SYSDATE,'DY DD-MON-YYYY HH24:MI:SS')||' >> END CLEANUP TABLE '||v_schema||'.'||v_table_name );
  dbms_output.put_line ( '************************************************************************' );

EXCEPTION
  WHEN OTHERS THEN
  RAISE_APPLICATION_ERROR( -20010, SQLERRM );
END;
/
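A minimal invocation sketch (not part of the original post): the schema, table, column and constraint names below are simply the procedure defaults, the one-year retention is just an example, and the connect string and credentials are placeholders.

sqlplus -s etl/etl_password@orcl <<'EOF'
set serveroutput on size unlimited
begin
  cleanup_table( p_schema       => 'ETL'
               , p_table_name   => 'LOG'
               , p_column_name  => 'TIMESTAMP'
               , p_constraint   => 'LOG_PK'
               , p_keep_data    => '01-00'   -- keep 1 year, 0 months
               , p_directory    => 'ETL_BCK_DIR'
               , p_unload2file  => 1
               , p_rollback     => 0 );
end;
/
EOF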

Deploying SOA Suite 12c artifacts from Nexus
https://technology.amis.nl/2015/03/16/deploying-soa-suite-12c-artifacts-from-nexus/
Mon, 16 Mar 2015 09:55:05 +0000

SOA Suite 12c introduces Maven support to build and deploy artifacts. Oracle has provided extensive documentation on this, and there are already plenty of blog posts describing how to do it. I will not repeat those posts (I only briefly describe the steps). What I couldn't find quickly enough, though, was how to deploy artifacts from an artifact repository to an environment. This is a task often done by provisioning software such as Puppet or Jenkins; sometimes, however, you want to do it from the command line. In this post I'll briefly describe the steps required to get your continuous delivery efforts going and how to deploy an artifact from the Nexus repository to a SOA Suite runtime environment.

nexustosainfra

Preparations

In order to allow building and deploying of artifacts without JDeveloper, several steps need to be performed. See the official Oracle documentation on this here: http://docs.oracle.com/middleware/1213/soasuite/develop-soa/soa-maven-deployment.htm#SOASE88425.

Preparing your artifact repository

Optional but highly recommended

  • install and configure an artifact repository (Nexus and Artifactory are both popular. For Nexus see: http://books.sonatype.com/nexus-book/reference/install.html)
  • configure the settings.xml file in your .m2 folder in order to provide information about your artifact repository (for a default Nexus installation, described here)

Required

  • install the Oracle Maven Sync plugin (Oracle manual 48.2.1)

Below steps are described on the blog of Edwin Biemond: http://biemond.blogspot.nl/2014/06/maven-support-for-1213-service-bus-soa.html

  • use the Oracle Maven Sync plugin to put libraries required for the build/deploy process in your local Maven repository
  • (if using an artifact repository) put the Oracle Maven Sync plugin in your artifact repository and use it to add the required Oracle libraries

Preparing your project

I am referring to the competition here, but Roger Goossens has done a good job of describing what needs to be done: http://blog.whitehorses.nl/2014/10/13/fusion-middleware-12c-embracing-the-power-of-maven/. Mind, though, that the serverUrl is provided as a hardcoded part of the sar-common pom.xml. You can of course override this by providing it to Maven on the command line. If you prefer it to always be provided on the command line (to avoid accidentally not overriding it), don't add it to the sar-common pom.xml.

  • make sure your project can find your MDS (update the appHome and oracleHome properties in your pom.xml)
  • create a jndi.properties file (you will want to replace properties in this file during your build process; a sketch of such a file follows after this list)
  • update the composite.revision property
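For reference, a jndi.properties along these lines is what such a deployment typically uses; treat the exact keys as an assumption, and note that the host, port and credentials shown are just the test values used later in this post and would normally be substituted during your build:

cat > jndi.properties <<'EOF'
java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
java.naming.provider.url=t3://localhost:7101
java.naming.security.principal=weblogic
java.naming.security.credentials=Welcome01
EOF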

Now you can compile and deploy your project to Nexus and to a runtime SOA environment. During the next steps, I’ll use a test project already deployed to Nexus (a simple HelloWorld SCA composite).

nexusscreenshot

Deploy from Nexus

The repository is prepared. Your project is prepared. You can deploy to an environment from your local directory. You can deploy to Nexus from your local directory. However, during your build process, you don’t want to build and deploy from your source directory / version control, but you want to deploy from your artifact repository. How do you do that? Usually a provisioning tool does this, but such a tool is not always available at a customer or their process does not allow using such tools. We can fall back to the command-line for this.

Get the SAR

During the next step we start deploying. Because the sarLocation parameter used during deployment cannot be a URL, you first have to download your SAR manually using the repository API. For Nexus, several options are described here; a sample is provided below.

wget http://localhost:8081/nexus/service/local/repositories/snapshots/content/nl/amis/smeetsm/HelloWorld/1.0-SNAPSHOT/HelloWorld-1.0-20150314.150901-1.jar

You can also use curl instead of wget if you prefer; wget and curl are Linux tools, and PowerShell 3.0 (Windows 7+) has its own variant of wget.
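For example, the curl equivalent of the wget call above would be:

curl -O http://localhost:8081/nexus/service/local/repositories/snapshots/content/nl/amis/smeetsm/HelloWorld/1.0-SNAPSHOT/HelloWorld-1.0-20150314.150901-1.jar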

Deploy the SAR

I created a dummy pom.xml file which does nothing but keep Maven from complaining:

 <?xml version="1.0" encoding="UTF-8"?>  
 <project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">  
   <modelVersion>4.0.0</modelVersion>  
   <groupId>nl.amis.smeetsm</groupId>  
   <artifactId>DeployApp</artifactId>  
   <version>1.0-SNAPSHOT</version>  
 </project>

Now you can deploy your downloaded SAR:

[maarten@localhost mvntest]$ mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=HelloWorld-1.0-20150314.150901-1.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building DeployApp 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- oracle-soa-plugin:12.1.3-0-0:deploy (default-cli) @ DeployApp ---
[INFO] ------------------------------------------------------------------------
[INFO] ORACLE SOA MAVEN PLUGIN - DEPLOY COMPOSITE
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] setting user/password..., user=weblogic
Processing sar=HelloWorld-1.0-20150314.150901-1.jar
Adding shared data file - /home/maarten/jdeveloper/mywork/mvntest/HelloWorld-1.0-20150314.150901-1.jar
INFO: Creating HTTP connection to host:localhost, port:7101
INFO: Received HTTP response from the server, response code=200
Deploying composite success.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.479s
[INFO] Finished at: Sun Mar 15 16:09:24 CET 2015
[INFO] Final Memory: 14M/218M
[INFO] ------------------------------------------------------------------------

At first I thought that fetching the project pom.xml file would be required and that this pom.xml could be used for deployment. This did not work for me, since the plugin expects to find the SAR file in the target directory (even when I overrode this).

[maarten@localhost mvntest]$ mvn -f HelloWorld-1.0-20150314.150901-1.pom com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=./HelloWorld-1.0-20150314.150901-1.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building HelloWorld 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- oracle-soa-plugin:12.1.3-0-0:deploy (default-cli) @ HelloWorld ---
[INFO] ------------------------------------------------------------------------
[INFO] ORACLE SOA MAVEN PLUGIN - DEPLOY COMPOSITE
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] setting user/password..., user=weblogic
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.007s
[INFO] Finished at: Sun Mar 15 16:04:03 CET 2015
[INFO] Final Memory: 15M/218M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.oracle.soa.plugin:oracle-soa-plugin:12.1.3-0-0:deploy (default-cli) on project HelloWorld: file not found: /home/maarten/jdeveloper/mywork/mvntest/target/sca_HelloWorld_rev1.0-SNAPSHOT.jar -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Conclusion

Oracle has done a very nice job at providing Maven support for SOA Suite composites in SOA Suite 12c. The documentation provides a good start and several blog posts are already available for filling your artifact repository with Service Bus and SCA composite projects. In this blog post I have described how you can deploy your composite from your artifact repository to an environment using the command-line.

Of course a provisioning tool is preferable, but when such a tool is not available, or the tool does not have sufficient Maven support, you can use the method described in this post as an alternative. It can also be used if you want to create a command-line-only release for the operations department. If you want to provide a complete command-line installation without requiring settings.xml configuration to find the repository (which is needed to use the oracle-soa-plugin), you have to ship a separate Maven installation with a settings.xml as part of your release. If the installation is performed from a location that cannot reach your artifact repository, you need to provide the repository as part of your release. These are workarounds, though.

Docker – Take Two – Starting From Windows with Linux VM as Docker Host
https://technology.amis.nl/2015/03/15/docker-take-two-starting-from-windows-with-linux-vm-as-docker-host/
Sun, 15 Mar 2015 21:29:30 +0000

My first attempt with Docker was from my Windows host machine using boot2docker, as described in this article: https://technology.amis.nl/2015/03/15/my-first-steps-with-docker-starting-from-windows-as-the-host/. Boot2docker is a great tool for working with Docker on a Windows machine. However, I ran into limitations, such as not being able to create containers with GUI applications running in them. Besides, Linux seems to be, for now at least, the more natural environment for Docker. So I decided to create a Linux VM, actually a VirtualBox VM, to serve as my Docker host.

In this article I will walk through the steps I went through in order to get this Linux VM running on my Windows host and subsequently turn that VM into the Docker Server in which one or more containers will be running – eventually to serve as demo and training environments, for example with Oracle Databases and Middleware. After all, Mark Nelson showed the way in this wonderful article: https://redstack.wordpress.com/2014/11/14/gettingn-to-know-docker-a-better-way-to-do-virtualization/.

I decided to closely follow Mark's lead in his choice of Linux VM: Ubuntu 14.04.1, to be created using Vagrant (about which I have blogged before, for example https://technology.amis.nl/2014/06/26/provisioning-an-oracle-11g-database-virtualbox-vm-with-vagrant-and-puppet-for-dummies/).

 

Stage One – Create Ubuntu VM using Vagrant

I have both Vagrant and Virtual Box set up on my laptop. From that starting point, I open a command line window and create a directory in which to create the Vagrant configuration for the VM.

SNAGHTML65c24fe

I initialize the Vagrant configuration.

image
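For reference, the commands up to this point are roughly the following – a sketch in which the directory name is just an example and the box name is an assumption based on the base box mentioned further down:

mkdir ubuntu-docker-host
cd ubuntu-docker-host
vagrant init janihur/ubuntu-1404-desktop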

Next, I open the Vagrantfile that was created in a text editor, to make some changes:

SNAGHTML65d6eca

I have indicated that the GUI should be used, that the memory should be increased over the initial setting and that a local folder ../temp (which resolves to c:\temp on the Windows host) should be mapped to the folder /host_temp in the Linux VM.

After making these changes and saving the file, the VM can be created using vagrant:

image
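That single command – standard Vagrant usage, nothing specific to this setup – is:

vagrant up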

It will take some time to download the base box (janihur’s ubuntu-1404-desktop box).

Once downloading is complete, the VM is booted:

image

and before too long, my Ubuntu 14.04.1 VM is running with its beautiful desktop and all:

SNAGHTML660390b

It turns out on my first attempt that the curl utility is not yet available. I have to go through two steps to get it to work (on the command line, using sudo):

sudo apt-get update

and:

sudo apt-get install curl

image

Once that is taken care of, I can go on to install Docker.

 

Stage Two – Install Docker into the VM

 

Using the instructions from Mark Nelson, it is not hard to get going:

curl -sSL https://get.docker.com/ubuntu | /bin/sh

image

This will download the apt repository keys for Docker and install the Docker program.

Test the successful installation:

image

 

Thanks to http://www.tekhead.org/blog/2014/09/installing-docker-on-ubuntu-quick-fix/ I could resolve an issue with this message “Error loading docker apparmor profile: fork/exec /sbin/apparmor_parser: no such file or directory ()” that appeared when I tried to start Docker using “docker -d”:

sudo apt-get install apparmor

image

After making this fix, I can run the Docker daemon that other sessions can subsequently connect to:

image

Another error message: whenever I try to do anything with Docker, I run into “permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?”. When I perform the same actions with sudo, I do not have these issues. Also see: http://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo. The fix is to add the user I am working with – vagrant – as a member of the docker group:

image
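A minimal sketch of that fix – assuming the docker group was created by the Docker installation and that vagrant is the user in question:

sudo usermod -aG docker vagrant
# log out and back in (or start a new shell) for the group change to take effect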

Now user vagrant can actually start interacting with Docker:

image

And start a new container:

image
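Judging from the description below, the statement is along these lines – pulling the ubuntu:14.04 image and running a single ls command in a fresh container:

docker run ubuntu:14.04 ls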

 

After some time, the download is complete; the container is created and started, the ls command is executed and the container is brought down again:

image

When we next inspect the set of Docker images, the one we just downloaded is listed:

image
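Listing the locally available images is a single command:

docker images   # the ubuntu:14.04 image we just pulled should be listed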

Stage Three – Create a Container (and an Image) with Java and Oracle WebLogic

Subsequently, I follow Mark’s instructions with regard to creating a group, a user and downloading the X libraries we need:

image
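I will not repeat Mark’s article here, but the kind of commands involved look roughly like this (run as root inside the container; the exact X library package names are an assumption on my part):

groupadd oracle
useradd -g oracle -m oracle
apt-get update
apt-get install -y libxext6 libxrender1 libxtst6 libxi6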

 

As per Mark’s instructions, I exit the container and create an image out of it.

I have downloaded JDK 7u75 to my Windows host and copied it into the folder that is to be shared with the VM that runs the Docker Server:

SNAGHTML70dbcfc

I then start the container again, specifying how the folder /host_temp/Downloads on the VM is to be mapped to the folder /home/oracle/Downloads in the container:

docker run -ti -v /host_temp/Downloads:/home/oracle/Downloads b30988747324 /bin/bash

and:

image

Next, I install the JDK in the container:

 

image
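Installing the JDK boils down to extracting the tar.gz from the shared Downloads folder – a sketch, assuming the archive was copied directly into the shared folder and that /home/oracle is the target directory:

cd /home/oracle
tar xzvf Downloads/jdk-7u75-linux-x64.tar.gz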

Exit the container, list the containers, commit the container – to create an image from the current container state – and tag the image with a useful title and version label: oracle-base:1.0

image
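A sketch of these steps – the container and image ids are placeholders that have to be replaced with the ids reported by docker ps -a and docker images:

exit
docker ps -a
docker commit <container-id>
docker images
docker tag <image-id> oracle-base:1.0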

Next, start a new container based on this latest image and share the host’s X display with a container:

docker run -ti -p 0.0.0.0:7001:7001 -v /tmp/.X11-unix:/tmp/.X11-unix -v /host_temp/Downloads:/home/oracle/Downloads -e DISPLAY=$DISPLAY oracle-base:1.0 /bin/bash

image

With the X display shared, we can run GUI applications inside the container, such as shown here for jconsole:

image

and tadada:

image

Note: this was my big issue with running Docker through boot2docker.

I have downloaded the WebLogic 12.1.3 generic installer from OTN:

image

and copied the file to the folder that is shared with the VM.

image

Now I want to install the WebLogic Server into a container based on oracle-base:1.0.

image
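A sketch of what happens here: start a container from oracle-base:1.0 with the X display and Downloads folder shared (the same docker run statement used before), switch to the oracle user and launch the installer; the JDK directory name and the name of the generic installer jar are assumptions based on the downloads used earlier in this post:

docker run -ti -p 0.0.0.0:7001:7001 -v /tmp/.X11-unix:/tmp/.X11-unix -v /host_temp/Downloads:/home/oracle/Downloads -e DISPLAY=$DISPLAY oracle-base:1.0 /bin/bash
su - oracle
/home/oracle/jdk1.7.0_75/bin/java -jar /home/oracle/Downloads/fmw_12.1.3.0.0_wls.jar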

The installer runs; the prerequisite checks produce warnings, which can be skipped:

image

Next the wizard appears. Go through the steps, accept defaults:

image

 

imageimage

Subsequently, the configuration wizard starts; it is used for creating a WebLogic domain.

image

Again, by and large default settings:

image

image

When the wizard is done, you can start the WebLogic Server base_domain using

/home/oracle/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/startWebLogic.sh

SNAGHTML71a3fd2

When the server is running, you can access the Administration Console in the browser in the host VM:

image

Time to create an image for the container, now that it contains WebLogic 12.1.3. Exit the container, list all containers and commit the one we just extended:

docker commit -a "vagrant" -m "WebLogic 12.1.3 Installed and base_domain created" dfe855989747

image

List the images, tag the one that was just created: docker tag ed49b759dc09 oracle-wls12_1_3:1.0

and for good measure, run another container, based on this new image:

docker run -ti -p 0.0.0.0:7001:7001 -v /tmp/.X11-unix:/tmp/.X11-unix -v /host_temp/Downloads:/home/oracle/Downloads -e DISPLAY=$DISPLAY oracle-wls12_1_3:1.0  /bin/bash

Use the same statement as before to run the base_domain:

/home/oracle/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/startWebLogic.sh

The next figure visualizes the current situation – with Windows, the Ubuntu VirtualBox VM that hosts the Docker engine and the Docker container that hosts WebLogic 12.1.3:

docker-image

Just out of curiosity I checked the size of the VirtualBox image. After creating several Docker images, the VM takes up about 3 GB on the physical hard disk. I am curious how this will grow when I start adding more images and containers.

image

The post Docker – Take Two – Starting From Windows with Linux VM as Docker Host appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/15/docker-take-two-starting-from-windows-with-linux-vm-as-docker-host/feed/ 0
Oracle datapump, advanced compression and licensing https://technology.amis.nl/2015/03/15/oracle-datapump-advanced-compression-and-licensing/ https://technology.amis.nl/2015/03/15/oracle-datapump-advanced-compression-and-licensing/#comments Sun, 15 Mar 2015 17:00:00 +0000 https://technology.amis.nl/?p=34009 As you may well know, Advanced Compression is an option you have to pay for when using it. But as a DBA you can’t always control the use of this option, e.g. the use of Datapump with Compression parameters. Lately, a few customers were involved in a discussion with Oracle LMS (License Management Services) about [...]

The post Oracle datapump, advanced compression and licensing appeared first on AMIS Oracle and Java Blog.

]]>
As you may well know, Advanced Compression is an option you have to pay for when using it.

But as a DBA you can’t always control the use of this option, e.g. the use of Datapump with Compression parameters. Lately, a few customers were involved in a discussion with Oracle LMS (License Management Services) about the use of Advanced Compression within their database.

Using export scripts with the parameter COMPRESSION=METADATA_ONLY (which is the default) does not require the Advanced Compression option.

But when using the parameter COMPRESSION=ALL, the output of the LMS scripts could look like this:

  
VERSION     NAME                              CURRE  LAST_USAGE_DATE      LAST_SAMPLE_DATE     FEATURE_INFO
----------  --------------------------------  -----  -------------------  -------------------  ------------
11.2.0.4.0  Oracle Utility Datapump (Export)  TRUE   2014-12-17_23:22:32  2014-12-17_23:22:32  Oracle Utility Datapump (Export) invoked: 176 times, compression used: 63 times, encryption used: 0 times
 

It’s all about the phrase: “invoked: 176 times, compression used: 63 times”.

What can be concluded from this output:

  • It does say that Oracle Datapump was invoked and that compression was used, but not WHEN compression was used. It could have been used in the first 63 of the 176 times Datapump was invoked. This is important, because it would show that the option wasn’t used deliberately and that the offending script was replaced in time. When the compression count is close to 176, you’ve got a harder nut to crack.
  • If this is the only output which shows that you are using Advanced Compression – that is, no OLTP compression, RMAN compression, SecureFiles compression or Data Guard network compression – your defense is a lot stronger.
  • The fact that this option can’t be switched off seems quite a good case for a lawyer to crack. At this moment, revoking the ability to use Data Pump at all is the only way to avoid its usage.
  • The fact that there is an enhancement request to disallow the unlicensed compression feature in Datapump shows that there is indeed a problem Oracle hasn’t been able to solve:
Bug 8478082 : DISALLOW UNLICENSED COMPRESSION FEATURE WHILE USING DATA PUMP

2 - Very desirable feature

*** 12/23/14 07:07 am RESPONSE ***
There is no way to disable the use of any of the features that are part of the Advanced Compression option. 
You can track the usage of those features with DBA_FEATURE_USAGE_STATISTICS, and if you discover a feature 
is being used when it shouldn't be, you can contact the user and ask them to stop using it.
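If you want to check the recorded usage yourself before LMS does, the underlying data can be queried from DBA_FEATURE_USAGE_STATISTICS – a minimal sketch, run as a privileged user (the selected columns are just the ones I consider most relevant):

sqlplus -s / as sysdba <<'EOF'
set linesize 200
set long 4000
select name, version, currently_used, last_usage_date, feature_info
from   dba_feature_usage_statistics
where  name like 'Oracle Utility Datapump%';
EOF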

So… when confronted with an alleged license non-compliance for Advanced Compression, based purely on the use of Datapump parameters, don’t take it for granted, but fight it.

Resources:

– Oracle doc about options and packs:  http://docs.oracle.com/cd/E11882_01/license.112/e47877/options.htm#DBLIC139

The post Oracle datapump, advanced compression and licensing appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/15/oracle-datapump-advanced-compression-and-licensing/feed/ 0
My First Steps with Docker – starting from Windows as the Host https://technology.amis.nl/2015/03/15/my-first-steps-with-docker-starting-from-windows-as-the-host/ https://technology.amis.nl/2015/03/15/my-first-steps-with-docker-starting-from-windows-as-the-host/#comments Sun, 15 Mar 2015 13:45:36 +0000 https://technology.amis.nl/?p=34834 After reading quite a bit about Docker – especially the great write up by Mark Nelson (Getting to know Docker – a better way to do virtualization?) I believe it is more than about time for me to delve a little further into Docker. Following Edwin Biemond’s lead, I have dabbled quite a bit in [...]

The post My First Steps with Docker – starting from Windows as the Host appeared first on AMIS Oracle and Java Blog.

]]>
After reading quite a bit about Docker – especially the great write-up by Mark Nelson (Getting to know Docker – a better way to do virtualization?) – I believe it is more than about time for me to delve a little further into Docker. Following Edwin Biemond’s lead, I have dabbled quite a bit in Vagrant and Puppet and had quite satisfying results. The attraction of Docker – the even leaner way of dealing with various virtual environments – is enough to try to get acquainted at least a little. The idea would be that with Docker, the containers I run – based on images I define – do not actually each require their own Virtual Machine – with all associated overhead. Additionally, one image can extend another – so one base Linux image can be extended to become Linux + Oracle Database, which can be extended to become Linux + Oracle Database + a special demo or training setup. All that comes later – first I want to get a simple example running on my Windows 7 laptop.

Steps:

Downloading Windows installer for boot2docker from GitHub

image

Run the installer, following the instructions in the “How to Use Docker on Windows ” article and the resource http://docs.docker.com/installation/windows/

 

imageSNAGHTML174e814

SNAGHTML17511d2

SNAGHTML17541c7

 

 

Run Boot2Docker

image

 

SNAGHTML189100e

Virtual Box is started through its API in the background:

SNAGHTML18b5b8f

(Boot2Docker creates a Linux VM that acts as the Docker Server – because Windows cannot be a native Docker server right now).

The logging for the VM gives some indication as to what Boot2Docker is taking care of:

SNAGHTML18c744c

Then, when the VM is created, boot2docker connects to it. Boot2docker is the client through which we can interact with the Docker Server that manages images and containers.

SNAGHTML18d4448

A simple test can be executed by entering

docker run hello-world

SNAGHTML18da377

This downloads a Docker Image and creates a Container based on that Image. The output shown in the screenshot indicates that the container was created, started and accessed correctly.

 

Start up an empty Ubuntu 14.04 container:

docker run ubuntu:14.04 ls

image

Again, the image is downloaded, a container is created and fired up and the ls command is executed in the container.

At this point, I start working my way through the Mark Nelson article – which explains things wonderfully well.

 

Stopping and Starting the boot2docker client

On the Windows command line, type “boot2docker stop” to quit a boot2docker session. This will power down the VM that was created. Using “boot2docker start” will start the client again by booting the VM back up:

image

In this case, the VM is started, but the command line interface is not available. With “boot2docker ssh” I can now get into the client.

image

Mapping between Windows Host File System and Docker Container

The boot2docker VM has a predefined folder mapping between Windows (folder c:\users) and the VM’s file system: /c/Users.

SNAGHTML20cbc47

This means that whatever is in c:\Users is available inside the Linux VM:

image

Because we can also map from the Docker host into the container, we can make files on the Windows file system available inside Docker containers.

By starting the container using Docker with the following instruction, we map the local folder /c/Users/lucas_j (which itself is a mount on the Docker host of the Windows host folder c:\Users\lucas_j) to the folder /home/oracle/Downloads in the container. Anything we download on the Windows host can be used within the container.

docker run -ti -v /c/Users/lucas_j:/home/oracle/Downloads d6122c9300ee /bin/bash

SNAGHTML2140d91

 

Installing the JDK into the container

For example installing the JDK into the container is done as follows:

Download the TAR from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

image

The tar file is downloaded to c:\Users\lucas_j\Downloads

image

and is therefore available inside the container

image

 

Let’s untar it into our home directory:

cd

tar xzvf Downloads/Downloads/jdk-7u75-linux-x64.tar.gz

image

And the untarring takes place:

image

After exiting the container, we can create a new image for the container and tag the image so it is easier to recognize and reference:

image

 

At this point, even though I am impressed with what I am seeing, I do have a challenge: in some of the containers I am going to create and run, I will want to use a GUI. However, I have not found a way through boot2docker to have a graphical display, an X terminal, in the container. It seems that the DockerGUI tool (http://dockerstop.com/) offers that option – and I would like to look into it sometime. For now however, I will continue with the creation of a Linux Virtual Machine, install Docker into that VM and start dockerizing from within the VM, rather than from my Windows operating system. More news on these experiences to follow.

(if there is an easy way of adding GUI support to the boot2docker VM, that would also be an option – but I do not know how to do that).

 

Resources

How to Use Docker on Windows  – http://blog.tutum.co/2014/11/05/how-to-use-docker-on-windows – very useful tips on sharing files between Windows and the Boot2Docker VM and username & password details

Download boot2docker for Windows – https://github.com/boot2docker/windows-installer 

Handy Docker Tips and Examples – http://flurdy.com/docs/docker/docker_osx_ubuntu.html

DockerGUI – http://dockerstop.com/

The post My First Steps with Docker – starting from Windows as the Host appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/15/my-first-steps-with-docker-starting-from-windows-as-the-host/feed/ 1
Quick Introduction to Oracle Stream Explorer Part Three– Business User friendly processing of real time events – some smart pattern detection https://technology.amis.nl/2015/03/14/quick-introduction-to-oracle-stream-explorer-part-three-business-user-friendly-processing-of-real-time-events-some-smart-pattern-detection/ https://technology.amis.nl/2015/03/14/quick-introduction-to-oracle-stream-explorer-part-three-business-user-friendly-processing-of-real-time-events-some-smart-pattern-detection/#comments Sat, 14 Mar 2015 16:27:00 +0000 https://technology.amis.nl/?p=34780 This article is part three in a series about Oracle – very recently released by Oracle and available from OTN. With Oracle Stream Explorer, business users and citizen developers as well as hard core developers can create explorations on top of live streaming data to filter, enrich, aggregate, inspect them and detect patterns. Stream Explorer [...]

The post Quick Introduction to Oracle Stream Explorer Part Three– Business User friendly processing of real time events – some smart pattern detection appeared first on AMIS Oracle and Java Blog.

]]>
This article is part three in a series about Oracle Stream Explorer – very recently released by Oracle and available from OTN. With Oracle Stream Explorer, business users and citizen developers as well as hard core developers can create explorations on top of live streaming data to filter, enrich, aggregate, inspect them and detect patterns. Stream Explorer provides a business user friendly browser based user interface in which streams are configured, explorations are composed, enrichment is set up and publication to external consumers is defined. In the previous two articles I showed how to create a simple exploration based on a stream of events read from a CSV file and how to derive aggregate values from these events. The second article introduced enrichment based on data in a database table, adding more meaning to the aggregation performed on the stream.

In this article, we will look at some of the built in patterns that we can easily include in our explorations to detect or use for further aggregation and interpretation.

The use case is: we are organizing a small conference. In three rooms, sessions take place simultaneously. Our attendees are free to decide which session to attend. We would like to know at virtually any moment how many people are in each room. We have set up simple detectors at the doors of the rooms that produce a signal whenever someone enters or leaves the room. This signal consists of the room identifier (1,2 or 3) and an IN/OUT flag (values +1 or -1).

In article one, we simply determined the number of people per room – updated every 10 seconds with the latest events. In the second article we enriched these updates with details about the rooms – the name of the room as well as its capacity. I am still looking for a way to calculate the occupancy percentage (number of attendees divided by room capacity) in order to alert for rooms at higher than 90% occupancy. (If SX does not support these calculations, I could add a virtual column to the ROOMS table that returns the number of attendees corresponding to 90% of capacity and work from there.)

In the article you are currently reading I will make use of some built in patterns:

  • Top N – to report every 30 seconds what is the room with the highest number of people in it
  • Detect Missing Event – identify event sources that have stopped publishing events (for example to find a room with a jammed door, a broken detector or no human activity)
  • Eliminate Duplicates – do not report a specific event type (such as missing events because of jammed door) for the same room again if it was already reported in the last 30 seconds

Using one of the predefined patterns is very simple: instead of creating a generic new exploration, we create a pattern-based exploration and provide the parameters associated with the pattern.

 

Top N: report every 30 seconds what is the room with the highest number of people in it

In the Catalog, Create a new item and select Pattern as the type for the new item. A list of supported pattern types is shown. Pick the Top N entry:

image

The pattern template page appears – an exploration is configured, based on the template for the pattern.

The source for this exploration is EnrichedRoomFlow. We have to specify that we want to report every 20 seconds (slide 20) about the room that had the largest number of people in it (order by AttendeeCount). We want to look at the attendee count over the past 40 seconds – so as not to call a room the most popular too easily. And since we are only interested in the single most popular room (instead of the top 2 or 3), we set N to 1.

image

Next we can configure the name and description of the exploration. Additionally, we can define a target – such as a CSV file – to which the findings from the exploration are written.

image

Click finish and publish the exploration.

The target file is created and before too long, the popular rooms are written to it:

image

 

 

Detect Missing Event

Create a new item of type Pattern. Select Detect Missing Event as the pattern.

image

We have to specify the source in which we want to detect the missing event. Select EnrichedRoomFlow – although NetRoomFlow would have done just as nicely. Specify the Tracking Field: which field(s) are we watching to check whether the event is reported or not? In this case, we are looking for rooms that do not report signals anymore. Therefore, the tracking field is the RoomId. We consider a room non-responding if we have not received any signal for 1 minute; that is our heartbeat interval.

image

The exploration can be configured in terms of its name and description:

image

and

image

Additionally, we can configure a target – a csv file to collect all reported events – all notifications of a non-responsive room:

image

Then, when the configuration is done, we can publish the Exploration – to have the OEP application deployed to the OEP Server. The target file is created, and events start being written to it.

In order to test whether this exploration is working correctly, I have added a fourth room in the database table and included some entry and exit events for room #4:

image

The target file will contain – after a few minutes have passed – signals for room 4 – as expected. It turns out that room 3 is eventually reported as well. Apparently, my artificial set of events has a section where only rooms 1 and 2 are reported:

image

 

Eliminate Duplicates

Another pattern we can easily leverage is Eliminate Duplicates. It allows us to ensure that over a specified period of time an event of a certain type is not published more than once – for a certain combination of property values. For example, the non-responsive rooms discussed above are reported repeatedly. Room number 4 is non-responsive and the exploration keeps on telling us that. Now we may not need to get that information for the same room more often than, say, once per 5 minutes. This can easily be taken care of with the duplicate elimination pattern.

Create a new item of type pattern and select the eliminate duplicates pattern:

image

Set as the source the output from the detect missing events exploration that reports non-responsive rooms. Select RoomId as the key to de-duplicate on and set the Window to 5 minutes – as we want to prevent duplicate reports within a five minute period.

image

Configure a target file, to test the output from the exploration:

image

and publish.

The file is created and, lo and behold, non-responsive rooms are reported – but far less frequently than is done by the DetectJammedDoorsFailedDetectorsNonResponsiveRooms exploration in its own nonResponsiveRooms target file.

The post Quick Introduction to Oracle Stream Explorer Part Three– Business User friendly processing of real time events – some smart pattern detection appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/14/quick-introduction-to-oracle-stream-explorer-part-three-business-user-friendly-processing-of-real-time-events-some-smart-pattern-detection/feed/ 0
Quick Introduction to Oracle Stream Explorer Part Two– Business User friendly processing of real time events (enrichment, calculation) https://technology.amis.nl/2015/03/14/quick-introduction-to-oracle-stream-explorer-part-two-business-user-friendly-processing-of-real-time-events-enrichment-calculation/ https://technology.amis.nl/2015/03/14/quick-introduction-to-oracle-stream-explorer-part-two-business-user-friendly-processing-of-real-time-events-enrichment-calculation/#comments Sat, 14 Mar 2015 06:38:21 +0000 https://technology.amis.nl/?p=34745 Very recently, Oracle released the Oracle Stream Explorer product, available from OTN. With Oracle Stream Explorer, business users and citizen developers as well as hard core developers can create explorations on top of live streaming data to filter, enrich, aggregate, inspect them and detect patterns. Stream Explorer provides a business user friendly browser based user [...]

The post Quick Introduction to Oracle Stream Explorer Part Two– Business User friendly processing of real time events (enrichment, calculation) appeared first on AMIS Oracle and Java Blog.

]]>
Very recently, Oracle released the Oracle Stream Explorer product, available from OTN. With Oracle Stream Explorer, business users and citizen developers as well as hard core developers can create explorations on top of live streaming data to filter, enrich, aggregate, inspect them and detect patterns. Stream Explorer provides a business user friendly browser based user interface in which streams are configured, explorations are composed, enrichment is set up and publication to external consumers is defined. Note that Stream Explorer is built on top of Oracle Event Processor – any Stream Explorer exploration is in fact an OEP application. It can be exported as such and refined in JDeveloper. As such, Stream Explorer is also a great way to get going with OEP.

In a previous article, I introduced the very first steps with Stream Explorer. In that article, I set up a stream of events that report people entering or leaving the rooms of a small conference event. The use case is: we are organizing a small conference. In three rooms, sessions take place simultaneously. Our attendees are free to decide which session to attend. We would like to know at virtually any moment how many people are in each room. We have set up simple detectors at the doors of the rooms that produce a signal whenever someone enters or leaves the room. This signal consists of the room identifier (1,2 or 3) and an IN/OUT flag (values +1 or -1).

In that first article, I used Stream Explorer to process these events and produce an aggregate per room of the net number of people that entered or left the room. In the article you are reading right now, we will continue from where we left off. We will add enrichment – using a database table with room details that is correlated with the room events; the room capacity will be used to explore the room occupancy rate and to detect Standing Room Only rooms (next article). We will see that not only can we use a stream as the source for an exploration, we can also use one exploration as the source for the next – and correlate explorations when creating a new exploration.

Preparation

I am assuming Stream Explorer is installed on top of the OEP server and that the server is running. Stream Explorer can be accessed at http://host:port/sx. The same file with room events is used as in the previous article. Additionally, a table is created in a database schema. It holds details about the rooms used for our conference.

create table rooms
( id number(3,0)
, name varchar2(50)
, maximum_capacity number(3,0)
)

And the data stored in the table

image

In order for Stream Explorer to access database tables, a Data Source has to be configured. This can be done through the Visualizer browser application for OEP, accessible at http://host:port/wlevs.

image

Login. Click on the node defaultserver and on the tab Data Sources.

image

Click on the Add button to create a new Data Source.

Provide the name and the jdbc name for the data source. Set the Global Transaction Protocol to One Phase Commit.

image

Open the Global Transaction Protocol tab. In my case, the database is of type Oracle, hence the following settings:

image

Open the tab Connection Pool. You can specify a query to test the connection:

image

when done, press Save to create the data source. You can now exit the Visualizer again.

 

And Action

In order to be able to use the data from the ROOMS table for enrichment, we have to create a Reference object in Stream Explorer.

In SX, on the Catalog page, create a new item of type Reference.

image

Provide the details for the new reference:

image

Select the data source

image

Select the database table (ROOMS) and click Create:

image

The new Reference object is created:

image

 

Enrich Exploration

With the Rooms reference at our disposal, we can now create a new exploration that uses the findings from NetRoomFlow and enriches them with room details as well as the data to calculate the room occupancy percentage. NOTE: I have not yet figured out how to add a property based on an expression or formula. I have high hopes that this will indeed prove possible.

Create a new Exploration:

image

Enter details and select exploration NetRoomFlow as the source for this new exploration:

image

Press create.

The Exploration page appears. Click on the sources field and select the Rooms reference to add as a source.

image

Configure the correlation condition:

image

RoomId in the NetRoomFlow source should match with Id in the Rooms source.

Next, you can organize the properties reported from the exploration – change their names and the order they are listed in or whether they are listed at all:

image

These settings influence the appearance of the exploration to external targets as well as to downstream explorations using this one as their source.

 

Behind the scenes

When you login to the Visualizer tool, you will find that each exploration (as well as stream and reference) corresponds with an actual OEP application deployed to the OEP server. In fact, every version that is published from SX results in a separate OEP application:

image

We can inspect these OEP applications to see how our actions as a business user or citizen developer have been translated by Stream Explorer into OEP configuration and CQL statements. This can be very instructive when we want to learn how to develop OEP applications of our own.

SNAGHTML3f8a42f

The post Quick Introduction to Oracle Stream Explorer Part Two– Business User friendly processing of real time events (enrichment, calculation) appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/14/quick-introduction-to-oracle-stream-explorer-part-two-business-user-friendly-processing-of-real-time-events-enrichment-calculation/feed/ 0
Using an aggregation function to query a JSON-string straight from SQL https://technology.amis.nl/2015/03/13/using-an-aggregation-function-to-query-a-json-string-straight-from-sql/ https://technology.amis.nl/2015/03/13/using-an-aggregation-function-to-query-a-json-string-straight-from-sql/#comments Fri, 13 Mar 2015 21:35:19 +0000 https://technology.amis.nl/?p=34702 Last week I read this blogpost by Scott Wesley. In this post he describes that he uses a custom aggregate function to create large JSON-strings. And for that he used a solution as described in this post by Carsten Czarski. That post of Scott reminded me of a post by my collegue Lucas Jellema, in [...]

The post Using an aggregation function to query a JSON-string straight from SQL appeared first on AMIS Oracle and Java Blog.

]]>
Last week I read this blogpost by Scott Wesley. In this post he describes that he uses a custom aggregate function to create large JSON-strings.
And for that he used a solution as described in this post by Carsten Czarski. That post of Scott reminded me of a post by my colleague Lucas Jellema, in which he uses the “normal” listagg aggregation function. When Lucas wrote his post I thought that I could beat the 4000 char limit of his approach with a custom aggregate function.

I started out with “Tom Kyte’s stragg_type”, see here, and just changed the type of the stragg_type attribute string to clob and the return type of the functions to clob.

That worked, no problem to aggregate strings of size 4000, no problem for size 10000, but for larger strings it became slower and slower.
Too slow, 15000 bytes took 15 seconds.

So I changed the type back to varchar2, but with a size of varchar2(32767).
That worked, and fast. But only for strings shorter than 29989 bytes. For larger strings I would get an
ORA-22813: operand value exceeds system limits

To solve that I added a clob attribute, just as Carsten Czarski does in his listagg.
I used the varchar2 string for speed and, as soon as the result became too large, the clob for size.
And that worked too. But as soon as the aggregated string size exceeded 58894 bytes, the ORA-22813 popped up again.
And as soon as the odciaggregatemerge function used the clob, another error appeared: ORA-22922: nonexistent LOB value.
So I gave up – 4000 bytes is a nice limit for JSON; if you want something bigger you have to use PL/SQL. Or so I thought.

But after reading the post of Scott I compared my code with the code of Carsten Czarski to see how he solved my problems.
And it turned out that the first one was easy to solve: just limit the string to 4000 again.
And Carsten’s odciaggregatemerge function will raise an ORA-22922 too. I expect that it is an Oracle bug :)
But, because the odciaggregatemerge function is only executed if the optimizer decides to execute the aggregating query in parallel, you can aggregate very large strings without ever seeing that error.

So, now it’s time to introduce my JSON aggregator. It’s a custom aggregate function, which aggregates a query into a JSON-array. The elements of this array are JSON-objects.

create or replace type agg_json as object
( t_varchar2 varchar2(32767)
, t_clob clob
, static function odciaggregateinitialize( sctx in out agg_json )
  return number
, member function odciaggregateiterate
    ( self in out agg_json
    , a_val dbmsoutput_linesarray
    )
  return number
, member function odciaggregateterminate
    ( self in out agg_json
    , returnvalue out clob
    , flags in number
    )
  return number
, member function odciaggregatemerge
    ( self in out agg_json
    , ctx2 in out agg_json
    )
  return number
, static function json_obj( p_obj dbmsoutput_linesarray )
  return varchar2
, static function escape_json( p_val varchar2 )
  return varchar2
)

Just a type with two attributes, the standard functions for implementing a custom aggregation function, and two supporting static functions.
But notice the a_val parameter of odciaggregateiterate. dbmsoutput_linesarray is a varray of varchar2(32767).
Every name-value pair in the JSON-Object is formed by 3 entries in that varray.
The first entry is the name of the name-value pair.
The second entry is the value of the name-value pair.
And the third is an indicator for the value: is it a string or not?
The fourth entry is the name of the second name-value pair, the fifth entry is its value, and so on.

Next, create the actual aggregation function based on this type – with it you can create JSON:

create or replace function json_agg( agg dbmsoutput_linesarray )
return clob
parallel_enable aggregate using agg_json;

For example, this query

select json_agg( dbmsoutput_linesarray( 'id', level, 'n'
                                      , 'name', level, '' 
                                      , 'test', 'test' || level, ''
                                      )
               ) 
from dual
connect by level <= 3

produces this JSON

 [{"id":1,"name":"1","test":"test1"}
,{"id":2,"name":"2","test":"test2"}
,{"id":3,"name":"3","test":"test3"}]

And to get the JSON from Lucas’ example, nest two calls to this new aggregation function:

select agg_json.json_obj
         ( dbmsoutput_linesarray( 'company'
                                , json_agg( dbmsoutput_linesarray( 'name', d.dname, ''
                                                                 , 'identifier', d.deptno, '' 
                                                                 , 'location', d.loc, '' 
                                                                 , 'employees', json_agg( dbmsoutput_linesarray( 'name', e.ename, ''
                                                                                                               , 'job', e.job, ''
                                                                                                               , 'salary', e.sal, 'n'
                                                                                                               , 'manager', nvl( ( select agg_json.json_obj( dbmsoutput_linesarray( 'name', m.ename, ''
                                                                                                                                                                                  , 'salary', m.sal, 'n'
                                                                                                                                                                                  , 'hiredate', to_char( m.hiredate, 'DD-MM-YYYY' ), ''
                                                                                                                                                                                  )
                                                                                                                                                           )
                                                                                                                                   from emp m
                                                                                                                                   where m.empno = e.mgr
                                                                                                                                 ), '{}' ), 'o'
                                                                                                               , 'hiredate', to_char( e.hiredate, 'DD-MM-YYYY' ), '' 
                                                                                                               ) 
                                                                                        ), 'a'
                                          )
                                          )
                                , 'a'
                                )
         )
from dept d
   , emp e
where d.deptno = e.deptno
group by d.dname, d.deptno, d.loc

Here is the code.

The post Using an aggregation function to query a JSON-string straight from SQL appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/13/using-an-aggregation-function-to-query-a-json-string-straight-from-sql/feed/ 0
Agile én fixed price – een uitdagende combinatie https://technology.amis.nl/2015/03/13/agile-en-fixed-price-een-uitdagende-combinatie/ https://technology.amis.nl/2015/03/13/agile-en-fixed-price-een-uitdagende-combinatie/#comments Fri, 13 Mar 2015 14:54:08 +0000 https://technology.amis.nl/?p=34686 System development based on an Agile methodology is hot. That makes sense, because the approach promises a lot of functionality for a low price. The approach starts from global ideas that are elaborated further along the way. But do all parties involved really know at the start what exactly they are getting into? Or is the entire project based [...]

The post Agile én fixed price – een uitdagende combinatie appeared first on AMIS Oracle and Java Blog.

]]>
System development based on an Agile methodology is hot. That makes sense, because the approach promises a lot of functionality for a low price. The approach starts from global ideas that are elaborated further along the way. But do all parties involved really know at the start what exactly they are getting into? Or is the entire project based on mutual trust?

Concrete agreements

Before a project starts, all parties involved expect a clear picture of cost and scope – also in a project with an Agile approach. A proposal without clear agreements on scope and price is unthinkable. At the same time, most of the benefits in an Agile project come from the freedom and creativity of the team. A lot is also gained by continuously adjusting the scope. Clear agreements remain necessary, but they should not influence the way of working too much.

A good estimate

An Agile way of working for software development offers many advantages in terms of interaction, flexibility and visibility. A consequence of these advantages is that scope, price and planning cannot be predicted exactly. You have to take this into account at the proposal stage. There is a need for a good estimate, but of what? The functionality is often insufficiently worked out to make an accurate estimate, and it is continuously adjusted. Therefore, estimate the desired functionality as accurately as possible and complement this with an estimate of the capacity of the team. With these two parameters you can calculate the project.

Estimating is something you do together

Agile works with story points. A story point represents the impact of a piece of functionality (a story). A story is assigned a number of story points. The number of points indicates its weight relative to other stories. At this stage, the required time and cost are therefore not yet estimated. This makes it quite possible for customers (or the business) with some technical background to actively participate in this process. It is therefore a good idea to involve the customer in estimating the functionality in story points. This creates support and commitment.

Everyone happy with the result

Within Agile projects, an iteration starts with a planning session and is concluded with a demonstration of the realized functionality. In each planning session, a new estimate is made of the stories brought in. This new estimate can be compared with the estimate included in the proposal. This is an important step, because the difference indicates the additional or reduced work. Make sure the team's mandate is established up front. For example, may the team shift story points between different stories? What tolerances does the team get (in budget and time)? When the playing field is known, the team can look for a solution independently. This is entirely in the Agile spirit and prevents unnecessary escalations.

A good combination

By following this approach, it is possible to make clear agreements with the customer or the business at the start of the project without sacrificing the flexibility of the Agile way of working. If the Agile process is executed well, the estimated costs remain the same for the same lead time and the functionality keeps getting better.
Essential for the success of this approach is a good estimate of the capacity of the team. This is in fact the only "fixed price" estimate that is given, and it is therefore the critical success factor for the project. Only for a well-established team with the right composition and experience can a good capacity estimate be made up front.

The post Agile én fixed price – een uitdagende combinatie appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/13/agile-en-fixed-price-een-uitdagende-combinatie/feed/ 0
Quick Introduction to Oracle Stream Explorer – Business User friendly processing of real time events https://technology.amis.nl/2015/03/13/quick-introduction-to-oracle-stream-explorer-business-user-friendly-processing-of-real-time-events/ https://technology.amis.nl/2015/03/13/quick-introduction-to-oracle-stream-explorer-business-user-friendly-processing-of-real-time-events/#comments Fri, 13 Mar 2015 11:01:00 +0000 https://technology.amis.nl/?p=34502 The new Oracle Stream Explorer provides us with a [business]user friendly facility to process real time events. Through visually appealing and functionally intuitive web wizards, Stream Explorer has us construct explorations that consume events from streams, process these events through filtering, aggregation, pattern matching and enriching and deliver these events to downstream destinations. Using Stream [...]

The post Quick Introduction to Oracle Stream Explorer – Business User friendly processing of real time events appeared first on AMIS Oracle and Java Blog.

]]>
The new Oracle Stream Explorer provides us with a [business]user friendly facility to process real time events. Through visually appealing and functionally intuitive web wizards, Stream Explorer has us construct explorations that consume events from streams, process these events through filtering, aggregation, pattern matching and enriching and deliver these events to downstream destinations.

Using Stream Explorer, we can tap into streams of events – frequently JMS messages and alternatively HTTP PUB/SUB, SOA Suite EDN events or REST calls. For testing and demo purposes, we can use a CSV file as the source for a stream exploration. A stream is fed into one or more explorations that do the interpretation and processing of the events. A target can be associated with an exploration to have the outcomes of the exploration – which are also events, at a more elevated level after all the processing has taken place – delivered for subsequent action or communication. Destination types available for targets are JMS, REST Service, HTTP Pub (channel) and the EDN of SOA Suite. Again, for development, testing and demonstrations, a CSV file can be set as the target.

In this first introduction to Stream Explorer, we will discuss a very simple challenge: we are organizing a small conference. In three rooms, sessions take place simultaneously. Our attendees are free to decide which session to attend. We would like to know at virtually any moment how many people are in each room. We have set up simple detectors at the doors of the rooms that produce a signal whenever someone enters or leaves the room. This signal consists of the room identifier (1,2 or 3) and an IN/OUT flag (values +1 or -1).

We will use StreamExplorer to process these events and produce an aggregate per room of the net number of people that entered or left the room. In subsequent articles we will do more advanced explorations in this same setting – looking at prematurely concluded sessions, jammed doors, overflowing rooms etc.

Preparation

The preparation consists of the installation of Stream Explorer on top of an OEP domain. Start the OEP Server. The Stream Explorer can be accessed at http://host:port/sx. Login using the same user used for the OEP Events Visualizer web application: wlevs.

The CSV file with room events is created as follows:

image

You can see people entering into the rooms. Row 15 has the first attendee leaving a room – room 2 – apparently not too happy with the session taking place.

And Action…

Login to Stream Explorer:

image

On the welcome page, go to catalog:

image

Create a new Item of type Stream:

image

Specify the name and a description for the Stream. Set the type – in this case CSV file:

image

Click on Next. Select the CSV file to upload – specify the interval (delay between reading events from the CSV file) and the initial delay:

image

Click Next.

Define the shape of the event data object – in this case with two attributes RoomId and InOutFlag.

image

Click on Create to complete the definition of the (input) Stream and have it created.

The wizard to define the exploration on top of this stream opens subsequently. Specify name, description and tags (optional).

 

image

 

Click on Create.

Configure the Exploration with a summary (sum InOutFlag group by RoomId) and (first click on the clock icon) a time window (some fairly large value) and an Evaluation Frequency of 10 seconds – meaning that a summary is published every 10 seconds, if during those 10 seconds the value changed for the room.

image

 

The Live Output Stream in the bottom of the page starts showing events:

 

The results from the exploration can be forwarded to an external destination. Click on Configure a Target. Specify a CSV file as the target:

image

The exploration is published when the definition is complete.

image

From this moment on, the OEP application created in the background is live – deployed to the OEP server and active.

image

The results can be seen in the CSV file that is being appended to:

image

Here we see that after some 40 seconds,  11 people have gathered in room 3. Room 2 around that time has just a single attendee – after having had 5 at its peak. Note that finding maximum occupancy would be an easy task using Stream Explorer: just add another exploration that processes the outcome from the exploration created in this article – and have it aggregate with the MAX operator.

In the visualizer, we can check in on the OEP applications that were created in this example:

image

We can also export the Explorations from Stream Explorer and refine them as OEP applications in JDeveloper.

For details, documentation and download, see the OTN Pages.

The post Quick Introduction to Oracle Stream Explorer – Business User friendly processing of real time events appeared first on AMIS Oracle and Java Blog.

]]>
https://technology.amis.nl/2015/03/13/quick-introduction-to-oracle-stream-explorer-business-user-friendly-processing-of-real-time-events/feed/ 0
Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack – Part 2 https://technology.amis.nl/2015/03/12/mobile-backend-with-rest-services-and-json-payload-based-on-soa-suite-12c-for-live-mobile-hacking-with-an-ofm-12c-red-stack-part-2/ https://technology.amis.nl/2015/03/12/mobile-backend-with-rest-services-and-json-payload-based-on-soa-suite-12c-for-live-mobile-hacking-with-an-ofm-12c-red-stack-part-2/#comments Thu, 12 Mar 2015 10:30:00 +0000 https://technology.amis.nl/?p=34643 This article continues the story from Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack – Part 1. It is the story of how to expose SOAP/XML based Web Services – primed for enterprise integration and process automation – as REST [...]

The post Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack – Part 2 appeared first on AMIS Oracle and Java Blog.

]]>
This article continues the story from Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack – Part 1. It is the story of how to expose SOAP/XML based Web Services – primed for enterprise integration and process automation – as REST services with URL parameters and JSON payloads that provide the mobile backend APIs required for the development of mobile apps. The story began with the challenge Luc Bors and I were facing for the Live Mobile Hacking session at the Oracle Fusion Middleware EMEA Partner Forum (March 2015). For an audience of some 200 SOA Suite and Oracle Middleware experts – including the product management teams for several of the products we were going to demonstrate – we were to develop live both a mobile app as well as a mobile backend (REST APIs) to support the mobile app.

The use case was defined – an app for flight attendants to be used on tablets to look up flight details, inspect the passenger list, update the flight status and record passenger complaints. Luc handled the mobile app based on the agreements we made on the REST resources, services and payloads. It was up to me to implement the backend – consisting of a database, a SOA composite to expose the database data and operations at middleware level (in SOAP/XML) – as described in the previous article. Next, I needed to implement a Service Bus project to expose the required REST service interface with JSON payload, leveraging the SOAP/XML service published by the SCA Composite. That is the story told in this sequel – the part highlighted in the red rectangle in the next picture:

image

This article will briefly describe the steps I went through to create the REST services on top of the SOAP/XML services. At the end of this article, the end to end flow from the above illustration will have been implemented.

All source code for the mobile backend can be downloaded from a GitHub repository.

 

Steps:

  • Design REST services, Resources, URLs and JSON payloads
  • Create REST proxy services in Service Bus project
  • Create Pipeline to connect REST proxy to Business Service for SOAP/XML services with appropriate transformations between message formats (JSON/XML and vice versa)
  • Publish Service Project and test end-to-end REST services (all the way down to the database and back again)

At the end of this article, here is what we have achieved:

image

 

At this point, the Mobile App can come off the mock services it has undoubtedly used during development, and start consuming the real services.

 

Step One – Design REST services, Resources, URLs and JSON payloads

Luc and I had discussed the functionality to be offered in the mobile app. Subsequently, Luc indicated the REST services he would require to implement the app – both in terms of functions and in terms of the actual URL patterns and JSON payloads. The latter can be found in this GitHub directory.

The former were defined as follows:

To retrieve flight details: GET request to /FlyingHigh/FlightService/FlightService/flights/<flight code>?flightDate=2015-03-07 (with a JSON response structured as shown below)

image

 

To retrieve the passenger list for a flight : GET request to /FlyingHigh/FlightService/FlightService/flights/<flight code>/passengerlist?flightDate=2015-03-07 (with a JSON response structured as shown below)

image

 

 

To update the status of a flight : PUT request to /FlyingHigh/FlightService/FlightService/flights – with a JSON request payload as follows:

image

 

The REST service for submitting a complaint from a passenger was defined the other way round: I dictated the service interface and Luc simply had to consume it – as an example of how ideally you should not do things in mobile app development. The call for submitting the complaint has to be sent as a POST request to: soa-infra/resources/default/CustomerService!1.0/CustomerCareService/complaints  with a JSON payload structured like this:

image
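For testing, such a complaint can be posted with any HTTP client – for example with curl, where host and port are placeholders for the SOA server and complaint.json contains a payload with the structure shown above:

curl -i -X POST -H 'Content-Type: application/json' -d @complaint.json 'http://soa-host:8001/soa-infra/resources/default/CustomerService!1.0/CustomerCareService/complaints'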

Note: the CustomerCareService is exposed from a SOA Composite directly, not through the Service Bus proxy that is created in the remainder of this article. The flow for requests to this service is as follows – a BPM process being triggered as a result:

image

 

Step Two – Create REST proxy services in Service Bus project

The REST adapter wizard in JDeveloper can be used in two ways: either take an existing Pipeline (Service Bus) or Component (such as Mediator or BPEL) and derive a REST binding from the existing SOAP/XML service interface (not an ideal approach) or design a REST service with corresponding payload as desired and have the adapter wizard generate the internal SOAP/XML representation that can be connected through a Pipeline or Mediator. The latter is the approach to be used when the REST service design is leading – as it usually is, including this case.

The starting point is a Service Bus project that already contains a Business Service for the FlightService SOA Composite. This business service obviously exposes the same SOAP/XML interface exposed by the composite.

image

What we are after is a REST service exposed by this Service Bus project – of which the interface definition was predefined by Luc. We can create this from the context menu in the Proxy Services swim lane. The wizard for configuring the REST binding appears.

SNAGHTML2333454

Set the name to FlightRestService.

Now a Resource should be configured for each operation the service should expose – for example, for the retrieval of FlightDetails using /FlyingHigh/FlightService/FlightService/flights/<flight code>?flightDate=2015-03-07.

 

Press the gears icon to specify the format for the response payload. This is expressed through an XSD document that describes the native format of the response; we call this type of document an NXSD document – in this case, for example, nxsd_retrieveFlightDetails.xsd.

Choose the type of format:

SNAGHTML24b19c2

 

And load the sample of the JSON message payload:

SNAGHTML24becba

The wizard will now create the NXSD representation of this JSON message format:

SNAGHTML24d59bf

Close the wizard.

Back in the Response tab, we can generate a sample of the message payload in JSON or XML format:

 

image

The operation to fetch the passenger list can be defined in a similar way:

SNAGHTML2515c6d

and the details:

image

 

Note: the endpoint definition for the REST service is set like this – a bit silly really, with a duplicated FlightService segment – and this is the path we will have to use when invoking the service, for example from the browser:

image

 

Step Three – Create Pipeline to connect REST proxy to Business Service

In this step we create the Pipeline that connects the REST proxy to the Business Service for the SOAP/XML services, with appropriate transformations between message formats (JSON to XML and vice versa).

Add a Pipeline (for example FlightPipeline) that implements the WSDL created for the REST Proxy Service. The REST proxy can invoke this pipeline, handing it the XML representation (described by the NXSD document) of the URI parameters received in the REST call. In return, it expects the XML representation (also described by the NXSD document) of the JSON payload of the response message.

XQuery or XSLT based transformations are required between the NXSD description of the request and the XSD description of the input to the business service FlightServiceBS, and between the output from FlightServiceBS and the NXSD format to be returned to the REST Proxy Service.

The next figure lists the six transformations created between NXSD and XSD-for-Business Service:

image

 

The pipeline between the REST proxy and the Business Service contains an operational branch, with one path for each of the three REST operations in the FlightService. Each path does similar things: transform the request, route to the business service and transform the response.

image
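To make the role of these transformations a bit more tangible: conceptually, each path takes the XML representation of the REST input, turns it into the request message the business service expects, and maps the business-service response back into the NXSD shape that the REST binding renders as JSON. The Python sketch below only illustrates that idea – in the actual project this mapping is done declaratively with XQuery/XSLT inside the pipeline, and all element and field names here are invented for the example:

# Conceptual illustration only; the real mapping is an XQuery/XSLT transformation
# in the Service Bus pipeline, and the element names below are invented
import xml.etree.ElementTree as ET

def rest_request_to_bs_request(flight_code, flight_date):
    # map the REST URI parameters onto a (hypothetical) business service request document
    req = ET.Element("retrieveFlightDetailsRequest")
    ET.SubElement(req, "carrierCode").text = flight_code[:2]
    ET.SubElement(req, "flightNumber").text = flight_code[2:]
    ET.SubElement(req, "departureDate").text = flight_date
    return ET.tostring(req, encoding="unicode")

def bs_response_to_rest_response(xml_payload):
    # map a (hypothetical) business service response back to the dict that becomes the JSON reply
    root = ET.fromstring(xml_payload)
    return {
        "flightCode": root.findtext("carrierCode", "") + root.findtext("flightNumber", ""),
        "status": root.findtext("flightStatus"),
        "aircraft": root.findtext("aircraftType"),
    }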

The overall project looks like this:

 

image

 

Step Four – Publish Service Project and test end-to-end REST services

Nothing special about deployment, so I will skip that part.

After deployment, the REST services can be invoked. An example is:

image

This results in a SOA composite instance with the following flow trace:

image

We can look at the details of what happens inside the Mediator:

image

to learn that what we get in JSON is pretty much what was constructed in the PL/SQL package in the database. No surprise there.

Of course I can invoke the other REST services as well:

image

but that does not add much value.

The really interesting next step is when Luc starts implementing the mobile app against these REST services.

SNAGHTML21929f7

 

Perhaps the Passenger Complaint is mildly interesting – it causes some activity, as illustrated by the next two pictures:

First the call to the REST service for POSTing a customer complaint:

image

and then the result inside the SOA Suite and BPM Suite:

SNAGHTML21b9c87

 

 

Some artist’s impressions from the actual live demo

 

Here are some pictures from the actual demo, courtesy of Luis Weir:

image

image

image

image

and Sten Vesterli:

image

and Jim Weaver:

image

and Vikas Anand:

image

and Jan van Hoef:

image

and José Rodrigues:

image

and Simon Haslam:

image

and Jacco Cijsouw:

image

 

and one last one taken by me from stage:

image

The post Mobile backend with REST services and JSON payload based on SOA Suite 12c for Live Mobile Hacking with an OFM 12c red stack – Part 2 appeared first on AMIS Oracle and Java Blog.
