My OpenWorld 2009 started out this morning with a keynote by Tom Kyte titled "What Are We Still Doing Wrong?", a lighthearted
presentation with lots of funny examples of bad coding practices. After that, things heated up with an "OSB Deep Dive",
presented in her usual inspired – and inspiring – fashion by Deb Ayers, who clearly had way too little time to tell everyone
about the cool stuff her team has been working on over the last year. The feature that – judging by the number of questions – raised
the most interest was the "Result Cache": a "checkbox-easy" feature using Coherence to cache results from earlier service calls.
With Coherence being part of the "fabric" of the 11g Fusion Middleware, it is only logical that more and more tools and products
start using this technology. What I find amazing, though, is that this is the only case I’ve seen thus far that actually uses
the caching features of Coherence. The BPEL dehydration store of SOA Suite 11g, a "usual suspect" to benefit from having a
clustered, in-memory implementation, is in fact still implemented in the database. And as I learned in the next presentation, on
High Availability best practices for Oracle SOA Suite, the same goes for the WebLogic server configuration data (the MDS):
implemented in the database. In fact, each and every "failover" feature of the WLS is based on the database – which, in the case
of an HA configuration, MUST be RAC-ed so it is not a single point of failure. Coherence is only used as a notification mechanism
to broadcast messages (such as metadata changes in the MDS) across the cluster. I can’t help but wonder why that is. Everyone
praises Coherence for its incredible robustness, but I can’t help but feel that when push comes to shove, development teams at
Oracle will only place their trust in the database, not in an in-memory solution, no matter how robust and failsafe it might be.
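The result-cache idea itself is simple enough to sketch in a few lines of plain Java: key the cache by the request, and only hit the backend service on a miss. To be clear, this is a hypothetical illustration using a local ConcurrentHashMap, not the actual OSB implementation – there the cache would be a clustered Coherence NamedCache, and the "checkbox" does all the wiring for you.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a result cache in front of a service call.
// A real OSB result cache would use a clustered Coherence NamedCache
// (with expiry etc.) instead of this in-process map.
public class ResultCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int backendCalls = 0; // counts actual backend invocations

    // Return the cached response for this request, invoking the
    // backend service only when there is no cached entry yet.
    public String invoke(String request, Function<String, String> backend) {
        return cache.computeIfAbsent(request, key -> {
            backendCalls++;
            return backend.apply(key);
        });
    }

    public int getBackendCalls() {
        return backendCalls;
    }

    public static void main(String[] args) {
        ResultCache proxy = new ResultCache();
        Function<String, String> slowService = req -> "response-for-" + req;

        String first = proxy.invoke("order-42", slowService);
        String second = proxy.invoke("order-42", slowService); // cache hit
        System.out.println(first.equals(second));      // same response
        System.out.println(proxy.getBackendCalls());   // backend hit once
    }
}
```

The interesting part is everything this sketch leaves out – expiry, invalidation, and surviving node failure – which is exactly what delegating the map to Coherence buys you.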

My afternoon was filled with three hands-on sessions. The one that impressed me the most was a session on using the SOA Suite
Adapters with the OSB. That had nothing to do with the design-time experience, which is a little rough around the edges. You
have to go into JDeveloper, create a dummy BPEL process and configure the Adapter you want to use. Then, you go into the OSB
console, and one by one you import each XSD and WSDL created by JDeveloper for the Adapter service. Sure, this will take
several minutes longer than when using BPEL or the OESB, but who cares? The big point is that you _can_ use the Adapters in the
OSB, because at runtime it works flawlessly. I have always felt that arguably the greatest asset of the entire SOA Suite is the
Adapters, and to have the extensive features of the OSB combined with the declarative power of the Adapters makes for a VERY
powerful Enterprise Service Bus solution. But depending on how you choose to look at it, it might change the concept of the OSB a
little bit. The OSB has always been positioned very clearly as a bus that exposes, but does NOT "host" or let you build, your
business services – yet having these Adapters available in the OSB is _almost_ the same. Ok, ok, the Adapters belong to the
application server, not the OSB, I know… "Frankly, my dear, I don’t give a damn", as long as I can use my beloved Adapters ;-D

Another hands-on that I was looking forward to was on the Oracle Enterprise Repository (OER) and Oracle Service Registry (OSR).
This, however, proved to be a bit of a disappointment. Firstly, the integration with JDeveloper is rather rough. I expected
an "Oracle Enterprise Repository" JDeveloper Connection, and a "Harvest to" rightmouse menu entry on all files and projects in
JDev. Unfortunately, this functionality is only available through an "External Tool", which is an ultrathin wrapper around an OS shell script.
You’ll have to provide a list of nitty-gritty properties for it to run (you’ll even have to generate an encrypted password using
some script to provide a value for the "password" property). After you harvest your "artefacts" into the repository,
you get to view them – with their dependencies – in a graphical fashion using a web browser. Unfortunately, these "artefacts" are
very fine-grained: each XSD, WSDL, portType, endpoint etc. is an artefact, often with very long, technical names. There
is no higher abstraction level than that, no "zooming out" to get a bird's-eye view of the dependencies in your entire service
landscape. You can see which endpoint depends on which WSDL – which is something altogether different from seeing which
Service (or Service Composite) depends on which other services or composites. Then again, my opinion here is based on
a one-hour tutorial… Don’t take my word for it 🙂