AMIS Technology Blog » Andre Crone

The influence of the experience economy on IT architecture

AMIS is putting more and more emphasis on IT architecture. That is one of the reasons why I am doing my Masters in IT architecture and why AMIS is starting a knowledge center about IT architecture. The following paper was written for one of the masterclasses (Applying Architecture) that I have followed so far.

Introduction
Companies that are able to provide their customers with an experience, by offering emotionally and psychologically gratifying products, perform well in today's very competitive marketplace (Free, 2006). Companies like Apple, Disney and Starbucks are able to sell their products based on an added user experience. Customers are willing to pay more for products largely because of the emotions that these products evoke in their buyers. Apple is perhaps the best-known company that excels in delivering an added experience with its products. Apple customers identify themselves with the company's products. They want to show that they are different (European Centre for the Experience Economy, 2005).
This paper describes how the experience economy influences the role of an IT architect.

The Experience Economy and the role of an IT Architect
Companies that excel in creating an added user experience with their products show that they are able to put an emphasis on design and usability. IT systems that leave a memorable impression on their users need to have a highly learnable, usable and attractive interface. It must be fun to use the software system. Learnability, attractiveness, user-friendliness and usability are all quality attributes addressed by the Quint2 Extended ISO 9126 Model for software quality (Quint, 1991). Experience economy products put high demands on these non-functional requirements. Many vendors are able to produce MP3 players, and these players all do the same thing: you can play music on them. While the functionality of products can be the same, the user experience, which is determined by how products look and feel, can differ enormously. The same can be said about software systems. It is not the user requirements that make software systems differ; it is the non-functional requirements, the way systems are built, that makes them different. Apple's Mac OS X and Microsoft Vista are both fully fledged, modern operating systems with largely the same functionality, but the user experience of these software systems is very different. Microsoft continuously puts an emphasis on the technical aspects of its operating systems; Apple puts an emphasis on the user experience of its software. The homepage of Windows Vista mentions service packs and upgrades (Microsoft Corporation, 2008); the homepage of Apple's Leopard operating system mentions the Leopard experience (Apple Inc., 2008).
It is the role of the IT architect to watch trends (Cibit, 2008) and to incorporate these trends in the solutions they are designing. Architects who design systems with high user experience demands should of course pay attention to the functional requirements of those systems, but an even greater effort should be spent on the definition of the non-functional requirements. These should contain very explicit requirements about the usability of the software, and the usability requirements should be aligned with modern market trends in order to create attractive systems that follow the latest developments.
A trade-off exists between different non-functional requirements (Kazman, Klein, & Clements, 2000). High demands on the user experience of IT systems will put a strain on, for example, their realisability. This leads to more complex systems, which negatively influences the time to market of new systems. An example of this is the postponement of Sony's "PlayStation Home" because of the complexity that the user experience brings into the system (Tweakers.net, 2008).

Conclusion
The experience economy puts an emphasis on modern, trend-related non-functional requirements of IT systems. An improved look and feel greatly determines an emotionally and psychologically gratifying user experience.
More emphasis on well-defined usability requirements introduces complex software that is difficult to develop. IT architects should design systems that provide a great user experience, but that are realisable as well.

How to call a WS-Security secured web service from Oracle BPEL

Introduction

I have been investigating Oracle's Web Services Manager (WSM) recently. WSM is shipped with the new SOA Suite and acts as a service gateway: existing services can be placed behind the gateway, and security and authentication for those services are then handled by the gateway. WSM also provides a lot of logging facilities: calls to services behind the gateway can be logged, authentication errors can be logged, and so on. Multiple services can be placed behind one gateway definition, and all policies for that gateway definition (logging, authentication, etc.) apply to all the services linked to it.

Oracle provides a nice and complete tutorial that you can use when you are looking at WSM for the first time. The tutorial can be found here. It was very straightforward to implement an authentication policy based on WS-Security. WS-Security is an OASIS standard that describes a uniform way of securing web services. The OASIS page regarding WS-Security can be found here. The following screenshot shows how easy it is to define a WS-Security policy. In this example a username/password file is defined that will be used for the authentication step. Note that the passwords in the file are hashed with MD5.


How to call a WS-Secured service from BPEL


Following Oracle's WSM introduction was easy. The hard part (for me) was how to call the now WS-Secured service from a BPEL process. The following steps describe how to call a WS-Security secured service from a BPEL project:

1. Create a partner link to the gateway that is wrapping the actual service

2. Create a new BPEL variable

The username and password that should be provided to the service gateway have to be placed in the SOAP header of the partner link call. For this we need a BPEL variable that is based on an XSD provided by OASIS. I have imported a copy of this file into my BPEL project; the content of the file can be found at http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd. I can now create a BPEL variable based on the Security element defined in this schema. This variable is of type ANY_TYPE, but I will address that later.
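As a sketch, the resulting declaration in the .bpel source could look roughly like this (assuming the wsse prefix is mapped to the imported schema's namespace):

<variable name="Variable_1" element="wsse:Security"/>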


3. Provide the authentication details to the new security variable

The variable of type Security is of ANY_TYPE. We are now going to assign a piece of XML as the variable's value. This piece of XML will contain the username and password; in this case marcc/java1. Create an assign activity and copy the following XML to the Variable_1 variable:

<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
   <wsse:UsernameToken xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <wsse:Username>marcc</wsse:Username>
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">java1</wsse:Password>
   </wsse:UsernameToken>
</wsse:Security>


4. The value of the Variable_1 variable should be inserted in the SOAP header of the partner link call.

Login details have to be provided in the SOAP header; that is how WS-Security passes authentication details. This can be done by supplying the Variable_1 variable as a header variable on the Invoke activity. Of course you also need to provide the proper input and output variables on the invoke activity.
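A rough sketch of what the resulting invoke could look like in the .bpel source (the partner link, port type and operation names are placeholders; Variable_1, ServiceIn and ServiceOut are the names used in this example, and bpelx refers to Oracle's extension namespace http://schemas.oracle.com/bpel/extension):

<!-- sketch only: carries Variable_1 into the outgoing SOAP header -->
<invoke name="Invoke_1" partnerLink="SecuredService"
        portType="ns2:SecuredServicePortType" operation="process"
        inputVariable="ServiceIn" outputVariable="ServiceOut"
        bpelx:inputHeaderVariable="Variable_1"/>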


That's it. We have created a variable, assigned our username and password to it, and put that variable in the SOAP header. The BPEL process we have now created looks like:


Testing it

After deploying and testing the BPEL project, the result of the invoke activity shows:

<messages>
<ServiceIn>
<part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="format">
<format xmlns="" xmlns:def="http://www.w3.org/2001/XMLSchema"
xsi:type="def:string"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
</part>
</ServiceIn>
<ServiceOut>
<part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="Result">
<Result xsi:type="xsd:string"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">05:14 PM
</Result>
</part>
</ServiceOut>
</messages>

The following is the output of the same call when I provide a wrong password:

<messages>
   <input>
      <ServiceIn>
         <part xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               name="format">
            <format xmlns="" xmlns:def="http://www.w3.org/2001/XMLSchema"
                    xsi:type="def:string"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
         </part>
      </ServiceIn>
   </input>
   <fault>
      <remoteFault xmlns="http://schemas.oracle.com/bpel/extension">
         <part name="code">
            <code>Client.AuthenticationFault</code>
         </part>
         <part name="summary">
            <summary>Invalid username or password</summary>
         </part>
         <part name="detail">
            <detail>
               <detail/>
            </detail>
         </part>
      </remoteFault>
   </fault>
</messages>

So, it's working! One question remains, though. It is possible to provide wsseUsername and wssePassword on the property table page of a partner link definition. My first hope was that these would be all I needed, but for me providing these two properties did not do anything. The approach above, however, works fine.


Resources:

http://weblogs.asp.net/gsusx/archive/2006/03/22/WS_2D00_Security-interoperability-with-Oracle-BPEL-and-WSE-3.0.aspx


How to build an Oracle Forms application on BPEL/WF

Why Oracle Forms and BPEL?

Old-style Forms applications normally don't hold much workflow functionality. Sure, there is an order in which the forms should be used, but the workflow of the application lives in the mind and knowledge of the user. For my current project I am investigating, together with my colleague Peter Ebell, ways to use Oracle's BPEL Workflow engine to add workflow functionality to existing Forms applications.

The idea is to implement a Java class that mediates between Oracle Forms and the BPEL Workflow engine. This class will be embedded in a new workflow form, which will be used to start up the existing forms of the application. The goal is to make the existing application workflow-enabled without modifying it: the workflow functionality should be an add-on and not a modification of the existing application.

I will describe the BPEL process and the Java/Forms class in a future post. This post describes how we embedded the Java class in the workflow form.

The Workflow Java class

The workflow Java class should return a list of object IDs and form names. The form name will be used to start up the correct existing application form; the object ID holds the primary key of the entity that should be queried by the form when it is started up. Oracle BPEL's workflow engine holds the worklists that determine these lists.

The embedded class should be available to both Forms Developer and the Forms server, so that the class can be used at design time and at run time. For Forms Developer, the registry key FORMS_BUILDER_CLASSPATH should be set correctly:


On the application server the Forms server configuration file default.env should be modified. Be sure to add the jar holding the Java class to the CLASSPATH variable.
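As an illustration only (the path and jar name below are hypothetical), the relevant entry in default.env could look like this:

# hypothetical default.env entry: append the jar with the worklist class
CLASSPATH=D:\oracle\forms\java\bpelworklist.jar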


Importing the Java class in Oracle Forms

Importing the Java class in Oracle Forms is very easy. Use the Import Java Classes function for that:


In the Import Java Classes tool you can select the class that should be imported into Oracle Forms. You will only see classes that can be found via the FORMS_BUILDER_CLASSPATH registry entry.


The import of the Java class results in a machine generated PL/SQL wrapper package that can be used within Forms:

PACKAGE BODY TaskService IS

  --
  -- DO NOT EDIT THIS FILE - it is machine generated!
  --

  args   JNI.ARGLIST;

  -- Constructor for signature ()V
  FUNCTION new RETURN ORA_JAVA.JOBJECT IS
  BEGIN
    args := NULL;
    RETURN (JNI.NEW_OBJECT('com/applied/bpel/worklist/TaskService', '()V', args));
  END;

  -- Method: getTasksForUser (Ljava/lang/String;)[Ljava/lang/String;
  FUNCTION getTasksForUser(
    a0    VARCHAR2) RETURN ORA_JAVA.JARRAY IS
  BEGIN
    args := JNI.CREATE_ARG_LIST(1);
    JNI.ADD_STRING_ARG(args, a0);
    RETURN JNI.CALL_OBJECT_METHOD(TRUE, NULL, 'com/applied/bpel/worklist/TaskService', 'getTasksForUser', '(Ljava/lang/String;)[Ljava/lang/String;', args);
  END;

BEGIN
  NULL;
END;

That is basically it. The Java class that acts as a bridge between Oracle BPEL and Oracle Forms can now be used to query the Oracle BPEL WF worklists:

procedure populate_worklist
is
  l_workstlist   ORA_JAVA.JARRAY;
  l_listLength   NUMBER;
  l_object_data  VARCHAR2(255);
  l_sep_pos      NUMBER;
BEGIN
  go_block('WRKLIST');
  clear_block(NO_COMMIT);
  -- Call out to Java to get the list of tasks
  l_workstlist := TaskService.getTasksForUser('toDo');

  -- How many strings did the Java return?
  l_listLength := ORA_JAVA.GET_ARRAY_LENGTH(l_workstlist);

  -- For each string, extract the details of the loan.
  for i in 1..l_listLength loop

    -- Get element i from the array of strings.
    l_object_data := ORA_JAVA.GET_STRING_ARRAY_ELEMENT(l_workstlist, i-1);
    l_sep_pos := instr(l_object_data, '|');

    :wrklist.object_id := to_number(substr(l_object_data, 1, l_sep_pos - 1));
    :wrklist.form_name := substr(l_object_data, l_sep_pos + 1);
    :wrklist.object_name := get_object_name(substr(to_char(:wrklist.object_id),1,1), :wrklist.object_id);
    if i != l_listLength
    then
      next_record;
    end if;
  end loop;
  first_record;
END;

The code above performs a static call to the Java class and populates a Forms block with the workflow data. This data can now be used to start the correct forms, as defined in the BPEL workflow. The final form looks like this:


In a future post we will give more detail on the implementation of the Java class that interacts with Oracle Workflow. We will also describe how we have used Oracle's ESB as a legacy wrapper; that way we could expose existing functionality as a service that can then be used in a BPEL workflow.


How to send large attachments?

You all know the problem. You want to send that large file to someone, but how? Many email systems restrict the maximum attachment size. I have to send large attachments of several hundreds of megabytes to magazines all over the world quite frequently. There are some services that you could use for this purpose.

My best experience is with http://www.yousendit.com/. They offer the possibility of sending files as large as one GB. The recipient receives an email with a URL to the file, which they can download 25 times within 7 days. Yousendit.com also offers commercial services: you can have your company logo on the site, you will see no advertisements, and you can track the number of downloads with these paid accounts.

IN or EXIST or doesn’t it matter

In my previous post about package constants I mentioned the application I am working on right now. I had been given the task of speeding up the application, since performance was getting worse and worse. I analyzed the Statspack results together with a DBA, and we found two queries that together took 40% of the logical I/Os of the system. That is a lot for only two queries, especially when you consider how big our application is; we have many queries.

The two queries were both in one procedure. That must have been an off day for the original programmer :-). The queries were small and had an IN subquery in the where clause. I simply rewrote the queries to use EXISTS and they became blazingly fast. That was strange: I attended the Tom Kyte seminar in Utrecht in 2005, and he claimed that it didn't matter anymore; IN or EXISTS, the optimizer would see this and generate the same execution plan for both. But not in my case. How could that be? I was running the queries on a 9R2 database.


So I wrote a (meaningless) example to show the difference between the IN and EXISTS statements. The source of this test script is as follows:

set timing off
set autotrace off
set echo on

create table my_blocked_objects (object_name varchar2(200) not null)
/

create table my_objects (object_name varchar2(200) not null)
/

create index my_blk_idx on my_blocked_objects(object_name)
/

create index my_obj_idx on my_objects(object_name)
/

insert into my_blocked_objects
select owner || '.' || object_name
from   all_objects
/

insert into my_objects
select user || '.' || object_name
from   user_objects
/

commit
/

begin
  dbms_stats.gather_table_stats(user,'MY_BLOCKED_OBJECTS');
  dbms_stats.gather_table_stats(user,'MY_OBJECTS');
end;
/

set timing on
set autotrace on explain

select count(*)
from   my_objects obj
where  obj.object_name in (select blk.object_name
                           from   my_blocked_objects blk)
/

select count(*)
from   my_objects obj
where  exists (select ''
               from   my_blocked_objects blk
               where  blk.object_name = obj.object_name)
/

drop table my_blocked_objects
/

drop table my_objects
/

The autotrace results of this script on my database were:

SQL> select count(*)
  2  from   my_objects obj
  3  where  obj.object_name in (select blk.object_name
  4                             from   my_blocked_objects blk)
  5  /

  COUNT(*)
----------
      3376

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=166 Card=1 Bytes=132
          )

   1    0   SORT (AGGREGATE)
   2    1     HASH JOIN (Cost=166 Card=3378 Bytes=445896)
   3    2       INDEX (FULL SCAN) OF 'MY_OBJ_IDX' (NON-UNIQUE) (Cost=2
          6 Card=3378 Bytes=101340)

   4    2       VIEW OF 'VW_NSO_1' (Cost=156 Card=27148 Bytes=2769096)
   5    4         SORT (UNIQUE) (Cost=156 Card=27148 Bytes=814440)
   6    5           INDEX (FULL SCAN) OF 'MY_BLK_IDX' (NON-UNIQUE) (Co
          st=26 Card=27522 Bytes=825660)




SQL>
SQL> select count(*)
  2  from   my_objects obj
  3  where  exists (select ''
  4                 from   my_blocked_objects blk
  5                 where  blk.object_name = obj.object_name)
  6  /

  COUNT(*)
----------
      3376

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=30)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FULL SCAN) OF 'MY_OBJ_IDX' (NON-UNIQUE) (Cost=26
          Card=169 Bytes=5070)

   3    2       INDEX (RANGE SCAN) OF 'MY_BLK_IDX' (NON-UNIQUE) (Cost=
          1 Card=1 Bytes=30)

Again, not the same execution plan. The IN version was less efficient compared to the EXISTS version of the query, and this was not what Tom Kyte said during his seminar. I then tested the same script on another instance, also a 9R2 database. That gave the following results:

SQL> select count(*)
  2  from   my_objects obj
  3  where  obj.object_name in (select blk.object_name
  4                             from   my_blocked_objects blk)
  5  /

  COUNT(*)
----------
      2220

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=12 Card=1 Bytes=62)
   1    0   SORT (AGGREGATE)
   2    1     HASH JOIN (SEMI) (Cost=12 Card=2223 Bytes=137826)
   3    2       TABLE ACCESS (FULL) OF 'MY_OBJECTS' (Cost=4 Card=2223
          Bytes=71136)

   4    2       INDEX (FAST FULL SCAN) OF 'MY_BLK_IDX' (NON-UNIQUE) (C
          ost=5 Card=24923 Bytes=747690)




SQL>
SQL> select count(*)
  2  from   my_objects obj
  3  where  exists (select ''
  4                 from   my_blocked_objects blk
  5                 where  blk.object_name = obj.object_name)
  6  /

  COUNT(*)
----------
      2220

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=12 Card=1 Bytes=62)
   1    0   SORT (AGGREGATE)
   2    1     HASH JOIN (SEMI) (Cost=12 Card=2223 Bytes=137826)
   3    2       TABLE ACCESS (FULL) OF 'MY_OBJECTS' (Cost=4 Card=2223
          Bytes=71136)

   4    2       INDEX (FAST FULL SCAN) OF 'MY_BLK_IDX' (NON-UNIQUE) (C
          ost=5 Card=24923 Bytes=747690)

Wow, the same execution plan for both queries, but with a slightly higher cost (Cost=12 compared to Cost=3). How was this possible? TKPROF output showed the same results. It took some time to figure this out, but the difference was introduced by different optimizer parameters in the two databases. The database with two different execution plans had the following parameters:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
object_cache_optimal_size            integer     102400
optimizer_dynamic_sampling           integer     0
optimizer_features_enable            string      8.1.7
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     10
optimizer_max_permutations           integer     80000
optimizer_mode                       string      CHOOSE

The database with one execution plan had the following parameters:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
object_cache_optimal_size            integer     102400
optimizer_dynamic_sampling           integer     1
optimizer_features_enable            string      9.2.0
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     100
optimizer_max_permutations           integer     2000
optimizer_mode                       string      CHOOSE


For some reason the DBAs had kept optimizer_features_enable on 8.1.7 after the migration of the database. This prevented the optimizer from using new features, such as the optimization of IN subqueries. The argument was that another value of this parameter might negatively impact performance. That could be true, but no one had tested it; they just went with the assumption.
Setting the parameter to 9.2.0 would have automatically optimized the two bad queries that led to this post. They would never have been a problem, and I would never have found them as problem queries. My lesson learnt is that you should never rely on assumptions; you have to test them.
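The assumption itself is cheap to test, because the parameter can be changed for a single session; you can compare the plans under both settings without touching other users. A minimal sketch against the test tables from this post:

alter session set optimizer_features_enable = '8.1.7'
/
set autotrace on explain

select count(*)
from   my_objects obj
where  obj.object_name in (select blk.object_name
                           from   my_blocked_objects blk)
/

alter session set optimizer_features_enable = '9.2.0'
/
select count(*)
from   my_objects obj
where  obj.object_name in (select blk.object_name
                           from   my_blocked_objects blk)
/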


Tom Kyte was right in this case: there is no difference between IN and EXISTS. I believe the cost difference between the two databases (Cost=3 vs. Cost=12) is hard to compare because of the different optimizer_features_enable settings.

Google releases AJAX toolkit

Google has released their Google Web Toolkit (GWT). This toolkit can be used to develop AJAX applications in Java. Google describes the toolkit as follows:

Google Web Toolkit (GWT) is a Java development framework that lets you escape the matrix of technologies that make writing AJAX applications so difficult and error prone. With GWT, you can develop and debug AJAX applications in the Java language using the Java development tools of your choice. When you deploy your application to production, the GWT compiler translates your Java application to browser-compliant JavaScript and HTML.

Look at Google's GWT page for more information.

Wrong use of constant packages

On my recent project we have a large package holding constant values. From a design point of view this is a very elegant solution, because this way the constants are all defined in a centralized place. But the usage of this package led to enormous performance issues. This post tells why we had these problems and how we solved them.

An example of a constants package could be:

create or replace package my_constants
is
  my_name constant varchar2(30) := 'Andre';
end;
/

One of the problems is that package constants cannot be used in Oracle Forms or directly from SQL. So a function was written that dynamically retrieves a constant's value from the package. This function can be used in Forms and SQL, so that problem seemed to be solved. An implementation of a function that retrieves a constant's value could be:

function get_my_constant
  (p_constant in varchar2
  )
  return varchar2
is
  l_return varchar2(2000);
begin
  execute immediate ('begin :1 := '||p_constant||'; end;') using in out l_return;
  return l_return;
end get_my_constant;
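For illustration, calling the function from SQL (or from a Forms trigger) then looks like this; the argument is the fully qualified name of the packaged constant:

select get_my_constant('my_constants.my_name') -- returns 'Andre' for the example package above
from   dual
/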

And now the horror story starts. This function was being used inappropriately: it was supposed to be used only in Forms, but it was used in a lot of packages as well. The Statspack report counted well over 30 million calls to a single value from the package using this function (granted, the Statspack report covered 28 hours, but still).

Performance of the system was becoming so poor that production problems were beginning to occur. Every call to the function results in a hard parse, which is very expensive. The best fix, of course, was to remove the function calls wherever possible. But the application is so big that this was not possible in a short timeframe.

The next best option was to rewrite the function: was there a way to make it perform better without touching the rest of the code? I briefly considered deterministic functions, but that is impossible in combination with execute immediate.

The final solution was as follows:

  • Look up the requested constant value in a database table. If the value exists, return it. This way no dynamic SQL is needed.
  • When the value cannot be found in the database table, perform the dynamic SQL and store the result in the table for later use.

The table looks like:

SQL> desc my_constants
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 CONST                                     NOT NULL VARCHAR2(61)
 VALUE                                              VARCHAR2(2000)
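The DDL behind that description could look like the following sketch (the index-organized variant is shown further down; the primary key on CONST is what makes the dup_val_on_index handler below work):

create table my_constants
 (const varchar2(61) not null
 ,value varchar2(2000)
 ,constraint my_constants_pk primary key (const)
 )
/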

I used a private procedure to insert the constant value into the table. This way we could use an autonomous transaction to commit the data that was inserted into the table without affecting the main transaction. The rewritten function looks like:

function get_my_constant
 (p_constant in varchar2
 )
 return varchar2
 is
l_return my_constants.value%type;

   procedure create_entry (p_con in varchar2
                          ,p_val in varchar2
                          )
   is
      pragma autonomous_transaction;
   begin
      insert into my_constants
      (const, value)
      values
      (p_con, p_val);
      commit;
   exception
      when dup_val_on_index
      then null;
   end create_entry;
begin
   select value
     into l_return
     from my_constants
    where const = p_constant;
    return l_return;
exception
   when no_data_found
   then
      execute immediate ('begin :1 := '||p_constant||'; end;') using in out l_return;
      create_entry (p_constant, l_return);
      return l_return;
end get_my_constant;

The new function "copies" the constant values from the package to our "cache" table. It is important to delete the contents of the table whenever the constants package is modified.
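A minimal sketch of that housekeeping step, to be run whenever a new version of the constants package is installed:

-- flush the cache; it is repopulated on demand by get_my_constant
delete from my_constants;
commit;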
During the Steve Adams seminar in Utrecht, the Netherlands, Steve suggested using index-organized tables (IOTs) instead of "normal" heap tables for lookup tables. The my_constants table will never be a big table, so my colleague Alex and I investigated whether there was something to gain by implementing the my_constants table as an IOT instead of a heap table.

Creating an IOT is not very different from creating a normal heap table:

CREATE TABLE CAD_CONSTANTS
 (CONST VARCHAR2(61) NOT NULL
 ,VALUE VARCHAR2(2000)
 ,CONSTRAINT CON_PK PRIMARY KEY
  (CONST)
 )
 ORGANIZATION INDEX
/

Only the ORGANIZATION INDEX clause is extra. This makes sure that the complete table is actually stored as an index. We looked at the number of consistent gets to check whether the IOT was more efficient than a heap table.

The “normal” heap table results in:

          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        391  bytes sent via SQL*Net to client
        426  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

The IOT results in:

          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        391  bytes sent via SQL*Net to client
        426  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

So I chose the IOT, because it is a little more efficient than a heap table when the table holds a small number of rows.

Finally we checked whether the new function was faster. We created a test script in which we retrieved the value of 30 constants, looped 5000 times, so we made 150,000 calls to the function. We started this script twice, in two simultaneous sessions, in order to check whether the sessions were blocking each other. The following is a comparison of the two functions; we used TKPROF to analyze the trace files.

The old function has the following statistics:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00      10.24          0          0          0           0
Execute      1 354000.39  394291.20          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2 354000.39  394301.44          0          0          0           1

The new function has the following statistics:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1 177500.05  190986.24          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2 177500.05  190986.24          0          0          0           1

These are big numbers, because the test was done on a 4-CPU machine; I believe you have to divide the number of CPU seconds by 4 (the number of CPUs) to get the real number of CPU seconds. The new function is around 50% faster than the old one. That is a lot, considering that the function is called around 5,575,762 times an hour in production.

In production the effect of this change was dramatic. Overall CPU load is now 20 to 30% lower, which is enough to keep the system happy for now.

Generate a native Excel file with SQLX

I wanted to generate an Excel file with data coming from an Oracle database. One of the best ways to do this is to generate a Microsoft Excel XML file; starting from Office 2003 this XML format is supported. This way you are able to generate a native Excel file instead of a CSV file.

In a previous post I explained how SQLX can be used to generate XML. With SQLX I was able to generate the Excel XML with only one query. The input data (the Excel cell data) is supplied to the query using a table of objects. This table of objects can be instantiated and filled by any query that supplies the data you want to have in your Excel file. The advantage of this table of objects is that the Excel data is provided through a parameter, making the query generic.

-- Object holding cell data
CREATE OR REPLACE
TYPE xls_cell AS OBJECT
( cell_col   NUMBER(8)
, cell_row   NUMBER(8)
, cell_type  VARCHAR2(60)
, cell_value VARCHAR2(2000)
, cell_style VARCHAR2(10)
)
/

-- Table of cells
CREATE OR REPLACE
TYPE xls_cells AS TABLE OF xls_cell
/

The following example fills the table with 3 rows and 5 columns:

function test
return  xls_cells
is
  l_cells xls_cells := xls_cells();
begin
  l_cells.extend(15);

  l_cells(1) := xls_cell(1,1,'String','test11','s23');
  l_cells(2) := xls_cell(2,1,'String','test12','s23');
  l_cells(3) := xls_cell(3,1,'String','test13','s23');
  l_cells(4) := xls_cell(4,1,'String','test14','s23');
  l_cells(5) := xls_cell(5,1,'String','test15','s23');

  l_cells(6) := xls_cell(1,2,'String','test21','s22');
  l_cells(7) := xls_cell(2,2,'String','test22','s22');
  l_cells(8) := xls_cell(3,2,'String','test23','s22');
  l_cells(9) := xls_cell(4,2,'String','test24','s22');
  l_cells(10) := xls_cell(5,2,'String','test25','s22');

  l_cells(11) := xls_cell(1,3,'String','test31','s22');
  l_cells(12) := xls_cell(2,3,'String','test32','s22');
  l_cells(13) := xls_cell(3,3,'String','test33','s22');
  l_cells(14) := xls_cell(4,3,'String','test34','s22');
  l_cells(15) := xls_cell(5,3,'String','test35','s22');

  return l_cells;
end;

The following query generates a CLOB holding the Excel XML data. The input data is provided through the p_data parameter, which is of type xls_cells. The p_worksheetname parameter is a varchar2 input parameter holding the name of the Excel worksheet.

with row_counter -- Select the number of rows
as   (select distinct cell_row as cell_rows
    from table(cast(p_data as xls_cells)) -- Cast the table of objects to a table, usable in a query
    )
,    col_counter -- Select the number of columns
as   (select distinct cell_col as cell_cols
    from table(cast(p_data as xls_cells))
    )
select XMLElement("Workbook", XMLAttributes('http://www.w3.org/TR/REC-html40' AS "xmlns:html"
                                            ,'urn:schemas-microsoft-com:office:spreadsheet' AS "xmlns:ss"
                                            ,'urn:schemas-microsoft-com:office:excel' AS "xmlns:x"
                                            ,'urn:schemas-microsoft-com:office:office' AS "xmlns:o"
                                            ,'urn:schemas-microsoft-com:office:spreadsheet' AS "xmlns"
                                            )
                  , XMLElement("Styles"
                    , XMLElement( "Style", XMLAttributes( 'Normal' AS "ss:Name", 'Default' AS "ss:ID") -- Generate a style
                                , XMLElement("Alignment", XMLAttributes ('Bottom' AS "ss:Vertical"))
                                , XMLElement("Borders")
                                , XMLElement("Font")
                                , XMLElement("Interior")
                                , XMLElement("NumberFormat")
                                )
                    , XMLElement( "Style", XMLAttributes( 's23' AS "ss:ID")                            -- Generate a style
                                , XMLElement("Alignment")
                                , XMLElement("Borders")
                                , XMLElement("Font", XMLAttributes ('1' AS "ss:Bold",'Swiss' AS "x:Family"))
                                , XMLElement("Interior", XMLAttributes ('Solid' AS "ss:Pattern",'#C0C0C0' AS "ss:Color"))
                                , XMLElement("NumberFormat")
                                )
                    , XMLElement( "Style", XMLAttributes( 's22' AS "ss:ID")                            -- Generate a style
                                , XMLElement("Alignment")
                                , XMLElement("Borders")
                                , XMLElement("Font")
                                , XMLElement("Interior")
                                , XMLElement("NumberFormat", XMLAttributes ('0' AS "ss:Format"))
                                )
                  )
                  , XMLElement( "Worksheet", XMLAttributes( p_worksheetname as "ss:Name")
                    , XMLElement( "Table"
                      , ( select XMLAgg( XMLElement( "Column", XMLAttributes( '20' as "ss:Width")) -- Predefine the columns
                                        )
                          from   col_counter
                        )
                      , ( select XMLAgg( XMLElement( "Row"                                         -- Generate a row
                                                , ( select XMLAgg(XMLElement( "Cell", XMLAttributes( cell_style as "ss:StyleID") -- Generate a cell
                                                                            , XMLElement("Data", XMLAttributes( cell_type as "ss:Type"), cell_value
                                                                              )
                                                                            )
                                                                  )
                                                    from   table(cast(p_data as xls_cells))
                                                    where  cell_row = row_counter.cell_rows        -- Make sure the cells are in the correct row
                                                  )
                                                )
                                    )
                          from  row_counter
                        )
                      )
                    )
                  ).getclobval()
into l_xls
from dual;
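The select ... into l_xls shows that the query is meant to live inside a PL/SQL function. As a sketch (generate_xls is a hypothetical wrapper name), it could be embedded and called like this:

create or replace function generate_xls
  (p_data          in xls_cells
  ,p_worksheetname in varchar2
  ) return clob
is
  l_xls clob;
begin
  -- the SQLX query shown above goes here, ending in "into l_xls from dual;"
  return l_xls;
end;
/

declare
  l_xls clob;
begin
  l_xls := generate_xls(test, 'Sheet1'); -- "test" fills the 3x5 example table
  dbms_output.put_line('Generated ' || nvl(dbms_lob.getlength(l_xls), 0) || ' characters of SpreadsheetML');
end;
/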

A drawback of this query is that it only generates one worksheet, but that could of course easily be added to the query.

How to use weak ref cursors and bulk collects into a table of objects to clean up data

The problem I was facing was simple. You have a table with data. The data in that table should be validated and invalid rows should be deleted. For each deleted row an entry in a logfile should be created. The validation of the rows may require expensive queries. The simple solution would look like:

  for r_rec in c_badrows
  loop
    .... write data to the logfile
  end loop

  delete from mytable where ...
  

This way the query to look up the bad rows has to be performed twice, and that is not what I need in the heavy batch I am currently implementing. The other problem is that I have several queries that determine bad rows. The solution to the problem was as follows:

  • Use ref cursors for the different bad-row queries. This way the queries can be passed as arguments to the clean-up procedure.
  • Use bulk fetches to query from the ref cursor. Loop through the result and log the data to the logfile.
  • Use the table cast operator to delete the bad rows in one statement.

The trick is to use weak ref cursors instead of strong ref cursors (I figured this out together with my colleague Alex). Strong ref cursors have to be based on record types, but I needed to fetch into a table of objects in order to perform the bulk delete at once. You can do this with weak ref cursors, as the sample code will show.
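To make the distinction concrete, a minimal sketch of the two declarations (the type names are illustrative only):

declare
  -- strong: the RETURN clause fixes the row shape at compile time,
  -- so it cannot be fetched into a table of objects
  type t_strong_cur is ref cursor return my_table%rowtype;
  -- weak: no RETURN clause; sys_refcursor (9i and up) is a predefined weak type
  type t_weak_cur is ref cursor;
begin
  null;
end;
/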

First I created two new types: an object type, and a table of objects based on that type:

SQL> create type myobject as object (id number, name varchar2(10));

Type created.

SQL> create type my_object_table as table of myobject;

Type created.

The actual procedure could look like the following. It processes 1000 rows at a time. I also tested this with a value of 100; that made the procedure several times slower, which is understandable, since the number of context switches between PL/SQL and SQL is ten times higher.

create or replace procedure my_clean_up( p_fp   in utl_file.file_type
                                       , p_data in sys_refcursor)
is
  r_data my_object_table;
begin
  loop
    r_data := my_object_table(); -- Initialise the object
    fetch p_data bulk collect into r_data limit 1000;    -- Fetch 1000 rows at a time

    for l_index in 1..r_data.count
    loop
      utl_file.put_line(p_fp, r_data(l_index).name|| ' bad row is deleted from the table');
    end loop;

    delete from my_table
    where id in (select id
                 from table(cast(r_data as my_object_table)) -- cast the object to a table, so it can be used from SQL
                 );

    exit when p_data%notfound;
  end loop;

  close p_data;
end;

As I noted earlier, the trick is to use weak ref cursors: this way a ref cursor returning myobject rows can be provided as the input parameter for the procedure. A function returning such a ref cursor could look like this:

function get_bad_names
return sys_refcursor
is
  c_data sys_refcursor;
begin
  open c_data
  for
    select myobject(id, name) -- !! Call the objects constructor
    from my_table
    where .......

   return c_data;
end;

The last part, of course, is to use all of this. The following final code example shows how: it cleans up the table and writes log data to a file.

procedure test
is
  l_fp         utl_file.file_type;
begin
  l_fp := utl_file.fopen('c:\temp', 'mylogfile','w');
  if not utl_file.is_open(l_fp)
  then
    null; -- handle the error
  end if;

  my_clean_up(l_fp, get_bad_names); -- This is the one doing it all

  utl_file.fclose(l_fp);
end;

Oracle Lite Part 2, how to get started (continued)

Yesterday I had a few mind-boggling experiences with Oracle Lite. Together with our customer and my colleague Arjaan, I tried to install Oracle Lite. This is a very important project, with 300 Oracle Lite snapshots in the future.

Again I experienced what I wrote in my previous post: it is a nice product with a lot of potential, but the documentation is very poor, the installer is awkward, and the packaging tool (wtgpack) is far from production-ready. The Oracle Lite database itself is very good, both in performance and in its SQL support.
This is what we experienced:

  • The installer asks you for the repository database information very early in the installation process. At the end you are prompted for the system password in order to log on to that database. You cannot change the database information at that point, so you cannot correct typing errors. This leaves you with a broken installation; in my experience it is best to remove the installation and start all over again.
  • We had the idea to install the Oracle Lite repository in a different database than the database we wanted to create snapshots of. At first you would think this should work, since the packaging tool asks you for a database when you import tables into the application.
    Deployment of the application, however, stops with a WTG-20502 error. This is the error you always get when deployment fails; the tool doesn't give any clue about what really went wrong. In the end we installed the repository in the same database as the main database that should be synchronized to Oracle Lite. All works well in that setup, no problems there. But this is still a big disappointment, since we wanted to separate the two in order to keep performance as good as possible. We could have used the new development tool that Oracle provides with release two of the product, but then we ran into the next problem.
  • So we had to install the repository in the same database: an 8.1.7.4 database running on VMS, and according to the documentation this should work. We tried the new release of Oracle Lite, 10gR2. This version doesn't install against an 8.1.7.4 database, since it is not able to log on as SYSTEM, which it needs in order to create a repository user.
    We then (to our luck) tried the previous release 1 version of Oracle Lite 10g. I have installed this version several times with success on different systems, including against an 8.1.7 database running on Windows. Still we were not able to install the software: the installer said it was running through the different steps of installing the repository (creating objects, populating tables), but that information was wrong, since the installer didn't create the repository user. That's strange; it did this on every other installation I tried. In the end we created a repository user ourselves, which also generated an error during installation. We then created a repository user and granted the DBA role to it. That fixed our problem: we are now able to create an Oracle Lite 10gR1 repository in an 8.1.7.4 database running on VMS. We are still not able to install release 2 of Oracle Lite, but we can at least continue development.

I truly hope that Oracle will enhance the level of documentation. The error messages of the packaging tool should be much more intuitive.

In the very near future we will migrate the 8.1.7.4 database to Oracle 10gR2. We will then also migrate Oracle Lite to 10gR2, and we will be able to get rid of the wtgpack application, since we can use the new Oracle Lite workbench. Hopefully that will give us a more stable development environment.

I still believe in the product, at least in the Oracle Lite database itself. Oracle even ships this database with their BPEL product, and I have never had database problems while using BPEL. Once you have a successful deployment of your application, you have a very nice, workable environment for creating powerful offline database applications on laptops, PDAs and smartphones.

Sending CLOB data from Tibco (Java) to PL/SQL stored procedures

On my current assignment I am working on interfaces between Tibco and Oracle PL/SQL applications. Tibco is a middleware solution that implements messaging-based interfaces between different software systems. For these interfaces I have implemented a generic bridge between Tibco and Oracle. This PL/SQL implementation provides one stored procedure that Tibco can call. The procedure determines the message type based on the root element of the incoming XML message, and then looks up, from its configuration table, the specific stored procedure that can handle the actual XML message.

The argument of my stored procedure is a CLOB holding the incoming XML message. When I called this procedure directly from PL/SQL there was no problem, but calling it from Tibco (a Java based application) resulted in an "ORA-24805: LOB type mismatch" error. This looked strange, and Google didn’t seem very helpful either. The first thing the procedure does with the CLOB data is create an xmltype variable holding the actual XML data, so that all XML DB functionality can be used on it. The actual call generating the ORA message was:

procedure handleXML(p_xml in clob)
is
  l_XML xmltype;
begin
  l_XML := xmltype.createXML(p_xml);
end;

I then looked at the description of the DBMS_LOB package and read about LOB locators. You can think of a LOB locator as a pointer to the actual location of the LOB value. So it looks like Oracle’s xmltype methods cannot access the CLOB data through the locator that is provided by Tibco. The workaround was to copy the CLOB data into a temporary CLOB created inside the stored procedure; this way the LOB locator has the correct type. Of course there is some (memory) overhead, but the XML messages are not large enough for that to be a problem. The fixed code looks like:

procedure handleXML(p_xml in clob)
is
  l_XML      xmltype;
  l_temp_XML clob;
begin
  -- dbms_lob.call makes the temporary LOB available only for the duration of this call
  DBMS_LOB.CREATETEMPORARY(l_temp_XML, false, dbms_lob.call);
  DBMS_LOB.COPY(l_temp_XML, p_xml, dbms_lob.getlength(p_xml), 1, 1);
  l_XML := xmltype.createXML(l_temp_XML);
end;

This second version can be called from Tibco without any problems.

The post Sending CLOB data from Tibco (Java) to PL/SQL stored procedures appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2005/11/23/sending-clob-data-from-tibco-java-to-plsql-stored-procedures/feed/ 0
Steve Adams is in town – Oracle with a calculator http://technology.amis.nl/2005/10/18/steve-adams-is-in-town-oracle-with-a-calculator/ http://technology.amis.nl/2005/10/18/steve-adams-is-in-town-oracle-with-a-calculator/#comments Tue, 18 Oct 2005 18:19:28 +0000 /?p=860 Today I attended the workshop that Steve Adams is giving in the Netherlands this week. We were with a group of four AMIS employees. The day started in a strange way. We all got a calculator. The reason why became clear during the day. Steve really showed us the internals of the Oracle database. We [...]

The post Steve Adams is in town – Oracle with a calculator appeared first on AMIS Technology Blog.

]]>
Today I attended the workshop that Steve Adams is giving in the Netherlands this week; we were there with a group of four AMIS employees. The day started in a strange way: we all got a calculator. The reason why became clear during the day. Steve really showed us the internals of the Oracle database. We now know everything there is to know about database blocks: how the block headers are built up, what log headers look like, the exact number of bytes these headers consist of, and of course what those bytes mean.

The calculator was for calculating the addresses and offsets that you find in Oracle data blocks. My head is still processing all of this, but I am very curious about the two days to come. This is stuff you will not see (or need) every day, but it will help you when nothing else does with problems like performance tuning. We will post a more detailed review of the three days that Steve is in Utrecht soon.

(C) André Crone

—//—

So, what did this first day bring?

First of all, that the new automatic self-tuning features of 10g are great, but they have a built-in drawback. Until now, Steve pointed out, the standard (not so OK) way was to build a working application on a database and only afterwards look at how to tune the database to handle the application’s performance imperfections. Until now you could, most of the time, find ways at the database level to tune and improve performance. On Oracle 10g this is not so likely anymore: the database will already be almost at its peak performance. If you have to increase performance further, it will be hard to achieve, or to find workarounds for bad architectural design, testing, etc. This could, and will, result in more applications that won’t perform once they hit the work floor.

If you want to increase performance (that is, before Oracle 10g), the way you arrange your columns does matter: put your most frequently selected columns first. This will decrease your IO/CPU. Oracle internally jumps from “column pointer” (aka length field) to “column pointer” to find the column it needs for the requested data, and every “jump” is one too many. A small illustration follows below.
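
Illustrative DDL only (the table and columns are invented): putting the hot columns first means Oracle walks fewer length fields inside the row to reach them.

create table orders
( order_id    number(10)      -- selected in almost every query: first
, status      varchar2(10)    -- frequently filtered on
, customer_id number(10)
, comments    varchar2(2000)  -- rarely selected wide column: last
);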

You shouldn’t create more than one controlfile. In itself this is logical, because every update of the controlfile (SCNs, log switches, etc.) induces multiple IOs; if you created more than one controlfile, these updates are applied sequentially to the other controlfiles. Instead of creating more than one controlfile, we should mirror it at the hardware level. In worst-case scenarios we always have the “alter database backup controlfile to trace” statement – right?

It is possible, and Steve showed us how, to recover multiple after-images of deleted data, which apparently is not as “deleted” as one might think. Not only on hard disks (a format is not enough to wipe the old data), but also inside the Oracle database there are residues to be found (in that sense flashback was already built in years ago) ;-)

Block corruption, if not detected early (checksums not enabled), can really make a mess of your database. With simple forms of corruption the database will initially keep showing “normal” behaviour. There is of course a performance trade-off when block checksums are enabled, but you don’t want the mess I have seen today to happen to you; enabling them is a one-liner, as sketched below.
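
A minimal sketch, assuming an spfile is in use; in this (10g) era the parameter takes TRUE/FALSE, later releases use OFF/TYPICAL/FULL:

alter system set db_block_checksum = true scope = both;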

Oracle sneakily introduced external file headers (since 8.1.5/7?), so nowadays there are two headers: the “old” (internal) header and the external one. I had really missed/overlooked this new item.

…and my last remark for today: “size does really count”. If you don’t give precision to your column declarations (“varchar2”, instead of varchar2(2), number(5), etc.) you create a lot of (empty) space that is often not (logically) wanted and that will increase IO. BUT we already knew this (right?) and therefore always declare columns with an explicit precision.

Tomorrow more goodies to be seen from Steve, so it’s time to go to bed. Let’s see if someone is faster than Steve, with our new calculators at hand, at answering some hex-to-decimal questions ;-)
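
For those who would rather cheat than race the calculator: SQL can do the hex-to-decimal conversion too (the 'XXX' hex format mask needs at least as many X’s as there are hex digits):

select to_number('1F4', 'XXX') as decimal_value from dual;  -- returns 500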

Marco Gralike.

P.S.: Oh yeah, to see how, what and where: click the link.

The post Steve Adams is in town – Oracle with a calculator appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2005/10/18/steve-adams-is-in-town-oracle-with-a-calculator/feed/ 2
SQLX How to easily generate XML http://technology.amis.nl/2005/10/07/sqlx-how-to-easily-generate-xml/ http://technology.amis.nl/2005/10/07/sqlx-how-to-easily-generate-xml/#comments Fri, 07 Oct 2005 07:35:51 +0000 /?p=831 I recently had the need to generate XML files based on data stored in relational tables. This was done via an XML DOM implementation on the project I am currently working on. This works fine, but it’s difficult to maintain. Modifications have to be made simple because the XSD’s are not stable yet. This was [...]

The post SQLX How to easily generate XML appeared first on AMIS Technology Blog.

]]>
I recently had the need to generate XML files based on data stored in relational tables. On the project I am currently working on, this was done via an XML DOM implementation. This works fine, but it’s difficult to maintain, and modifications have to be easy to make because the XSDs are not stable yet.

This was asking for a new approach. The decision was made to go for SQLX after a discussion with my colleague architect Aino. SQLX gives you a very simple way to generate XML just by using SQL statements. Time for a first example:

  1      SELECT XMLElement("TheDate", sysdate
  2             )
  3*     FROM dual;
XMLELEMENT("THEDATE",SYSDATE)
---------------------------------------------
<TheDate>07-OCT-05</TheDate>

The return type of the XMLElement function is xmltype. The xmltype datatype is an object type that gives you a lot of nifty methods for modifying XML (xmltype.updateXML) or creating it (xmltype.createXML('<TheDate>07-OCT-05</TheDate>')). There are also methods available for stylesheet conversions and schema validations.

The example above can easily be modified to return a clob instead of an XML type variable. Just add the .getClobVal() method:

  1      SELECT XMLElement("TheDate", sysdate
  2             ).getClobVal()
  3*     FROM dual

There is more than XMLElement: you will at least need XMLAttributes, XMLAgg and XMLForest to generate your XML data. The following example shows how to generate attributes in your XML:

  1  select XMLElement("TableName", XMLAttributes(owner as "Owner"), table_name
  2         ).getCLobVal()
  3  from all_tables
  4* where table_name like 'MY%'
SQL> /

XMLELEMENT("TABLENAME",XMLATTRIBUTES(OWNERAS"OWNER"),TABLE_NAME).GETCLOBVAL()
--------------------------------------------------------------------------------
<TableName Owner="THE_OWNER">MYTABLE</TableName>

Tables can have more than one column, so you will need the XMLAgg function to aggregate a repeating list into your XML data definition. The following query shows how to query the columns of our example table:

select XMLElement("MyDatabase"
       , XMLElement("table", XMLAttributes(tab.table_name as "name")
           , XMLAgg((select XMLAgg(XMLElement("ColumnName", col.column_name))
              from   all_tab_columns col
              where  col.table_name = tab.table_name
              and    col.owner      = tab.owner
                     )
             )
         )
       ).getCLobVal()
from all_tables tab
where tab.table_name like 'MY%'
group by tab.table_name;
<MyDatabase>
  <table name="MYTABLE">
    <ColumnName>COL1</ColumnName>
    <ColumnName>COL2</ColumnName>
  </table>
</MyDatabase>

XMLForest can be used to create a simple list of XML elements (a forest). We will extend the example to show some basic descriptive attributes of our example table:

select XMLElement("MyDatabase"
       , XMLElement("table", XMLAttributes(tab.table_name as "name")
                  , XMLforest( tab.initial_extent as "InitialExtent"
                      , tab.tablespace_name as "Tablespace"
             )
           , XMLAgg((select XMLAgg(XMLElement("ColumnName", col.column_name))
              from   all_tab_columns col
              where  col.table_name = tab.table_name
              and    col.owner      = tab.owner
                     )
             )
         )
       ).getCLobVal()
from all_tables tab
where tab.table_name like 'MY%'
group by tab.table_name,initial_extent,tablespace_name;
<MyDatabase>
  <table name="MYTABLE">
    <InitialExtent>1024000</InitialExtent>
    <Tablespace>DAT_MEDIUM</Tablespace>
    <ColumnName>COL1</ColumnName>
    <ColumnName>COL2</ColumnName>
  </table>
</MyDatabase>

These simple examples show how easy it is to create XML directly from a query. The actual query could be hidden in a view; this way the creation of the XML becomes completely transparent to other developers who are not familiar with the powerful XML capabilities of the Oracle database. A sketch of such a view follows below.
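
A minimal sketch of the view approach; the view name my_tables_xml is invented for illustration:

create or replace view my_tables_xml as
select tab.table_name
,      XMLElement("TableName"
         , XMLAttributes(tab.owner as "Owner")
         , tab.table_name
       ).getClobVal() as xml_doc
from   all_tables tab
where  tab.table_name like 'MY%';

-- other developers can now fetch ready-made XML without writing any SQLX:
-- select xml_doc from my_tables_xml;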

The post SQLX How to easily generate XML appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2005/10/07/sqlx-how-to-easily-generate-xml/feed/ 2
Oracle Lite Part 1, how to get started http://technology.amis.nl/2005/08/18/oracle-lite-part-1/ http://technology.amis.nl/2005/08/18/oracle-lite-part-1/#comments Thu, 18 Aug 2005 14:09:48 +0000 /?p=720 This post is the first one of a set that I will write about Oracle Lite. I have been working with Oracle Lite and I am impressed. Not by the level of documentation, but by the functionality of the package. Oracle Lite gives you the ability to create data snapshots on laptops, PDA’s, mobile phones [...]

The post Oracle Lite Part 1, how to get started appeared first on AMIS Technology Blog.

]]>
This post is the first of a set that I will write about Oracle Lite. I have been working with Oracle Lite and I am impressed: not by the level of documentation, but by the functionality of the package. Oracle Lite gives you the ability to create data snapshots on laptops, PDAs, mobile phones etc. The synchronisation mechanism (provided by the msync tool) synchronises data bi-directionally between the snapshot and the main database server. Client applications that use the Oracle Lite snapshot can also be distributed and synchronised by Oracle Lite’s synchronisation functionality, which makes deployment of applications very simple. I have written and distributed a simple .Net client written in C++ this way. My application uses ODBC to access the database; JDBC is also provided. Unfortunately it’s not possible to access the database using OCI, which is my favourite when programming in C/C++.

Oracle Lite applications can be defined with the wtgpack tool. With this tool you define the snapshots by entering select statements. Queries can be restricted by using bind variables; the actual values of these variables can be entered by the administrator on a user or group level. The wtgpack tool is also used to deploy the application on the Oracle Lite middle tier. Clients use this tier for their synchronisation and to install the initial Lite database software. Deployment with wtgpack can end up with a 401 or 500 error; look at the tips below when you get one of these errors, because the description of the error will not help you at all. The wtgpack tool will be described in a future post.
Installation of Oracle Lite is easy: just install the server part using the supplied installer. Here are a number of simple tips that will make life easier when you start:

  • Make sure to install patch set 1 after the installation of Oracle Lite 10.0.0.0.0. This makes the deployment of an application defined with wtgpack work much more reliably.
  • The tables that are synchronised MUST have a primary key. Oracle Lite can also work with a virtual primary key (mandatory unique keys?), but my experience tells me to use real primary keys. wtgpack will fail with a 401 or 500 error during deployment of the application when one or more primary keys are missing.
  • Oracle Lite creates three tables per synchronised table in the schema that owns the tables, which messes up your database. It’s better to create an extra user holding synonyms to the synchronised tables, and to synchronise via this extra user on the synonyms; see the sketch after this list.
  • The msync tool will not work on very large tables because it only commits after all the records are copied to the Oracle Lite database. It’s best to modify polite.ini (it’s in the Windows directory) before you perform the first msync: add AUTO_COMMIT_COUNT=500 to the [SYNC] section of the file. This makes sure that during synchronisation a commit is performed after every 500 rows.
  • wtgpack displays the tables that are used in the snapshot in a (to me) random order. Deployment of a snapshot will fail when there is a syntax error in a query; again a 401 or 500 error during deployment. Count the number of snapshots created in the OC4J log: because the snapshots are created in the order they are displayed in the wtgpack application, this tells you exactly which table is failing.
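
A hedged sketch of the extra synchronisation user mentioned above; the user LITE_SYNC, the schema APP_OWNER and the table CUSTOMERS are invented for illustration:

-- a separate account that owns only synonyms, so Oracle Lite's
-- bookkeeping tables do not end up in the application schema
create user lite_sync identified by lite_sync;
grant create session, create synonym to lite_sync;

grant select, insert, update, delete on app_owner.customers to lite_sync;
create synonym lite_sync.customers for app_owner.customers;  -- needs CREATE ANY SYNONYM when run as DBA
-- define the wtgpack snapshot queries against LITE_SYNC.CUSTOMERS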

The post Oracle Lite Part 1, how to get started appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2005/08/18/oracle-lite-part-1/feed/ 8
How to use the BPEL file adapter to load data into Oracle http://technology.amis.nl/2005/04/15/how-to-use-the-bpel-file-adapter-to-load-data-into-oracle/ http://technology.amis.nl/2005/04/15/how-to-use-the-bpel-file-adapter-to-load-data-into-oracle/#comments Fri, 15 Apr 2005 13:50:29 +0000 /?p=516 This post shows a simple example about how to use Oracle’s BPEL server to load and parse a datafile. The data in the file is then inserted into a database table. We use the file adapter and the database adapter to accomplish this task. The nice thing of this BPEL process is that we don’t [...]

The post How to use the BPEL file adapter to load data into Oracle appeared first on AMIS Technology Blog.

]]>
This post shows a simple example of how to use Oracle’s BPEL server to load and parse a data file. The data in the file is then inserted into a database table. We use the file adapter and the database adapter to accomplish this task. The nice thing about this BPEL process is that we don’t have to instantiate it ourselves: a BPEL process is started as soon as the file holding the data is detected by the BPEL engine.

The following picture shows the BPEL process in JDeveloper:
[figure: Process overview]

Note that the process has no user interaction and no manual input: a BPEL process is instantiated as soon as a file is detected. This is accomplished by setting the “Receive instance” flag on the file receiving activity:

The input file is detected and parsed by a file adapter. This is a special add-on to the Oracle BPEL engine, based on WSIF. The file adapter acts like a partner link in the BPEL diagram; note, however, that the actual adapter runs inside the BPEL engine, it’s not an external partner link. Creating a file adapter is very simple. The following shows some screenshots of the wizard:

The file adapter holds its own parser, which is XSD based. There is even a wizard that helps you create an XSD file:


The previous steps result in an XSD file which can be used by the file parser of the file adapter.

I finally created a (very very simple) PL/SQL procedure. The data found in the file is passed to this procedure and inserted in the database.

create or replace procedure fillAndre(a number, b number, c number, d number)
is
begin
  insert into andre values (a, b, c, d);
  commit;
end;

The invoke activity in the diagram invokes this procedure and that finishes up the BPEL process.

The post How to use the BPEL file adapter to load data into Oracle appeared first on AMIS Technology Blog.

]]>
http://technology.amis.nl/2005/04/15/how-to-use-the-bpel-file-adapter-to-load-data-into-oracle/feed/ 4