The show is back in town, and so am I. Precisely one year ago, the ODTUG crowd assembled in Scottsdale, Arizona. This year we all flocked to New Orleans (LA) for another edition (my tenth) of the ODTUG Conference for Oracle Development Tools users. AMIS is represented by Alex, Harm and myself. Between the three of us, we are doing five presentations. And we are having some good fun in between; Bourbon Street is actually quite nice…
At ODTUG 2005, Oracle Product Management will present plans and strategies for the next year(s) with regard to the Development Tools: Oracle Designer, Oracle Forms, Oracle Reports; the BI and DWH Tools: Oracle Warehouse Builder, Discoverer and BI Beans; the J2EE stack: JDeveloper, ADF and JHeadstart; and Portal and HTML DB. They will not only present strategy (as it is thought of today), they will also demonstrate the current tools as well as the soon to be released versions, such as OWB 10gR2, JDeveloper 10.1.3 with ADF Faces, JHeadstart 10.1.2 – The Summer of 2005 Upgrade – and so on.
In addition to the guys from Oracle, probably even more interesting are the 100+ presentations by technical people like myself, developers working with Oracle technology every day, now presenting their findings, experiences and best practices. Hearing someone who actually uses the tool explain what he is doing with it and what his tips and tricks are is infinitely more valuable than having a product manager explain what he thinks you probably could do with a product. (Having said that, most Oracle staff at this conference are actually quite experienced and can offer sensible practical advice.)
It’s now early Monday morning (6AM local time). Yesterday we had the unofficial start of the conference, with 7 parallel three-hour tool-topic sessions, followed by a mini-seminar by Tom Kyte and the Welcome Reception. Today is the official opening: the keynote by Christophe Job, vice president of the Application Development Tools Division at Oracle, followed by four rounds of a total of nine parallel presentations. I am in today’s round 3, presenting under the title Oracle 10g’s Finest – The Top 3 SQL and PL/SQL Features New in 10g. This presentation introduces three brand new features in 10gR1: the SQL Model Clause, Expression Filters and the PL/SQL Data Mining API DBMS_FREQUENT_ITEMSET. I have just completed the presentation and my demo scripts and uploaded them to the AMIS Weblog. If you are interested, click here: 10gtop3SQL_PLSQL.zip.
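To give a flavour of the first of those three features: a minimal sketch of the SQL Model clause, assuming an illustrative SALES table with PRODUCT, YEAR and AMOUNT columns (names are mine, not from the presentation):

```sql
-- Minimal SQL Model clause sketch (new in 10gR1); SALES and its
-- columns are illustrative names, not an actual demo schema.
SELECT product, year, amount
FROM   sales
MODEL
  PARTITION BY (product)
  DIMENSION BY (year)
  MEASURES     (amount)
  RULES (
    -- create a 2006 "forecast" row as the 2005 figure plus 10%
    amount[2006] = amount[2005] * 1.1
  )
ORDER BY product, year;
```

The Model clause treats the result set as a multi-dimensional array you can address with rules, here adding a row that does not exist in the table.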
Oracle 10g Discoverer Drake (10.1.2)
I went to see a 3-hour presentation on Oracle Discoverer, the 10.1.2 Drake release, by Michael and Darlene Armstrong-Smith. I have not been a frequent Discoverer user and my main experiences are with Discoverer Desktop 9i. This presentation was mostly on Viewer and Plus, the web-based end-user tools that, starting with this release, all but make Desktop redundant. Here is what struck me most in the presentation:
- Plus can now do anything Desktop can, and more.
- Viewer and Plus both run in a browser, powered by an application server middle tier. Viewer is HTML based; Plus is an Applet that requires either the SUN JVM or Oracle JInitiator in the browser.
- For setting up and using Oracle Discoverer Drake, you do not need Oracle Portal or the Application Server 10gAS (or 9i) Infrastructure. So it can run more or less on its own, with nothing but a standard OC4J installation. Note: without the infrastructure you have no central connections and no single sign-on (SSO) support, but otherwise everything runs fine! Without Portal you clearly do not have the Portlets and, for some reason, not the Gauge-style graphics either, as they are part of the BI Dashboard in Oracle Portal.
- Using both Viewer and Plus seems very easy and intuitive. Apparently, Plus used to be not very good and Viewer all but useless, but with Drake that has thoroughly changed. Viewer can show practically anything set up in Plus. Typically, 80% of end users may be best off using Viewer. Note: Viewer now supports emailing workbooks, export to PDF and printing as hard copy. Saving a workbook as a file is not supported; workbooks are always saved to the database.
- Using hyperlinks between worksheets, even across workbooks and End User Layers, you can create many easy-to-develop and easy-to-use drill-down paths. This is a really cool way of working. You can define a CrossTab on Regions with Countries and Cities, drill down on a certain Region and open a Table worksheet with the Cities, and then drill down on the details for a City, which is another CrossTab for Products and Time, for example. Any cell can be made to drill down to many different destinations, available from a small popup menu on the cell. A hyperlink does not need to refer to a worksheet; it can also refer to a real http:// based URL. Discoverer supports placeholders in URL definitions, so that the actual values from the selected row can be inserted into the URL and passed along with it (something like http://myhost/reportserver/showExcelReportForCity?City=Amsterdam&Year=2005). Unfortunately, Plus does not support drilling back up; you have to create another hyperlink from the lower drill level to link back to the higher level. Viewer does support drilling up, simply through the Browser’s Back button!
- Cascading Parameters – very easy to set up a parameter whose values are restricted by the value of another parameter. For example: parameter Cities that has its list of values restricted by the Country chosen for the Country parameter. By the way: parameters can be optional now!
- Apparently new and quite important: in 10.1.2 the results of scheduled workbooks can be shared (so heavy-duty queries do not have to be run for every user using a Workbook).
- A nice new feature: text annotations in worksheets that help the user select parameter values or interpret the report. Annotations are not printed or exported.
- Stoplight Formatting, choose your own colors and value ranges; e.g. under 100 is blue, between 100 and 150 is green, 150-200 is orange and above 200 is dark brown.
- Support for Analytical Functions – Plus now has convenient templates for basing Calculated items on Analytical Expressions. And interestingly it also supports the use of one Analytical Expression based calculated item as the order by value in another Calculated Item based on an Analytical Expression (that requires the underlying SQL to have an inline view for the first Analytical Expression); smart cookie, this Plus edition.
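Roughly, the SQL shape Plus must generate in that last case looks like the sketch below: the first analytic expression is computed in an inline view, so the outer query can order by a second analytic expression built on top of it. The SALES table and its columns are illustrative names of my own, not from the presentation:

```sql
-- Hedged sketch: one analytic calculation (total_amount) computed in
-- an inline view, used as the ORDER BY input of a second analytic
-- expression (RANK) in the outer query. SALES is an illustrative table.
SELECT region
,      total_amount
,      RANK() OVER (ORDER BY total_amount DESC) AS sales_rank
FROM ( SELECT DISTINCT
              region
       ,      SUM(amount) OVER (PARTITION BY region) AS total_amount
       FROM   sales
     )
ORDER BY sales_rank;
```

The inline view is required because SQL does not allow one analytic function to be nested directly inside another.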
Tom Kyte – The Things We Know
Tom Kyte’s presentation was very entertaining, even though it was already quite familiar to us – Alex, Harm and I attended the three-day Tom Kyte seminar back in February in The Netherlands; see the post Lots of little interesting notes on Oracle 9i & 10g – database design, DBA, architecture, performance etc. from the Tom Kyte seminar. He demonstrated how easily you grow into habits that may once have been valid but may very well not be valid any longer, or act on assumptions that are not true (anymore, or all the time), or may even never have been true at all. Do not trust authority – a bit rich coming from him, you might say – have everyone prove their statements. And have them prove those statements in your circumstances; what works for him may not work for you, given the conditions you are dealing with.
Kyte presented some infamous examples of myths that have circulated for many years, propelled by believers who had no real foundation for their beliefs:
- Select into is wrong; you must open a cursor, fetch and close explicitly – this has never been true in stored PL/SQL.
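A quick illustration of the perfectly fine SELECT INTO form, using the classic EMP demo table:

```sql
-- SELECT INTO in stored PL/SQL has always been fine; no explicit
-- OPEN/FETCH/CLOSE needed. EMP/ENAME/EMPNO are the classic demo names.
declare
  l_name emp.ename%type;
begin
  select ename
  into   l_name
  from   emp
  where  empno = 7839;  -- raises NO_DATA_FOUND / TOO_MANY_ROWS as appropriate
end;
/
```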
- Bulk Collect should always be used when processing more than several dozen records from a cursor for loop, even though it means more, and more complex, code – true from 8i to 9iR2; in 10g, a cursor for loop (“slow by slow”) is automatically rewritten by the PL/SQL engine into a bulk collect operation – so we can go back to the simple code that used to be bad because of slow-by-slow processing.
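For contrast, a sketch of both styles side by side, again against the classic EMP table:

```sql
-- Pre-10g style: explicit BULK COLLECT with LIMIT to avoid fetching
-- row by row ("slow by slow"). EMP is the classic demo table.
declare
  type t_names is table of emp.ename%type;
  l_names t_names;
  cursor c is select ename from emp;
begin
  open c;
  loop
    fetch c bulk collect into l_names limit 100;
    exit when l_names.count = 0;
    -- process l_names here
  end loop;
  close c;

  -- 10g style: this simple loop is silently array-fetched
  -- (100 rows at a time) by the PL/SQL engine
  for r in (select ename from emp) loop
    null; -- process r.ename
  end loop;
end;
/
```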
- Data and indexes should be in separate tablespaces – only perhaps for transportable tablespaces if you want to transport the data and rebuild the indexes. Because of advanced disk-reading capabilities such as EMC read-ahead, it may make sense to distribute tables and indexes over different tablespaces by type of access: tables and indexes that are typically read in multi-block mode (full table scan) should be together in a tablespace. Tables and indexes accessed usually in single-block (index scan) mode should be put together in another tablespace.
- NOT IN in SQL queries should always be rewritten with EXISTS – not true, in 10g both are treated the same by the CBO and result in the same execution plan
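The two forms in question, on the classic DEPT/EMP tables – with the usual caveat that they are only interchangeable when the subquery cannot return NULLs:

```sql
-- Departments with no employees, written both ways. The 10g CBO
-- produces the same execution plan for both, provided EMP.DEPTNO
-- is NOT NULL (NOT IN behaves differently when NULLs can appear).
select deptno from dept
where  deptno not in (select deptno from emp);

select deptno from dept d
where  not exists (select null from emp e where e.deptno = d.deptno);
```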
- Buffer Cache Hit Ratio should be over 99% – it is like saying ‘your car should be returned to the dealer when it is running, on average, at less than 99% of its documented maximum horsepower’; ratios are indicative only, never a goal of their own
- select count(1) from table is faster than select count(*) from table – not true, never was true
- Select * from table where nvl(:bindvariable, column) = column is bad programming; you should (probably in the client) program an if-then-else that decides whether to run the query select * from table or the query select * from table where column = :bindvariable – the CBO in 10g is smart enough to create this if-then-else construction in the execution plan itself!
- Full table scans are bad; you should force the CBO to use indexes – sometimes true, when you want to query only a few rows from very large tables. However: the CBO knows best! A full table scan can be the optimal approach in many cases
- Commit frequently! – not true: it generates lots of extra undo and redo, causes a log file sync (wait!) and you may break the logical cohesion of your transaction. A transaction is first and foremost a logical concept, not a physical one. Frequent committing may be required in databases that do not support versioning (read consistency) and that take table locks when inserting or updating.
Questions at the end:
What is the best way to find the optimal percentage for collecting statistics? (Quickly rephrased from the original wording – “what is the best percentage…” – which would have invited a somewhat cynical response from Kyte: if there were an optimal percentage, that would have been the value used by the database and you would never have had to worry about it.) There really was no clear-cut answer to this question, except: make a copy of your production system, set up a test environment with it, collect metrics on the current performance of your system and then start introducing some changes. Watch what the metrics do, which execution plans change and whether they change for the better, for the worse, or to more or less the same performance. Once you are satisfied that you have gotten to a better performing system, make sure that you save the current statistics in the production database (so that you can immediately return to a more or less OK situation if anything goes horrendously wrong) and then make the changes in the production environment that you arrived at in the test environment.
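Saving the current statistics before experimenting can be done with the DBMS_STATS package; a minimal sketch, assuming an illustrative SCOTT schema and a MY_STATS staging table of my own naming:

```sql
-- Hedged sketch: stash the current optimizer statistics in a user
-- table before gathering new ones, so they can be re-imported if
-- plans degrade. SCOTT and MY_STATS are illustrative names.
begin
  dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'MY_STATS');
  dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'MY_STATS');
end;
/

-- later, to fall back to the saved statistics:
-- exec dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'MY_STATS')
```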
Our system has a column called LIMIT in many tables, and therefore in many PL/SQL and SQL statements. However, it turns out that in 10g LIMIT is a reserved word. How can we deal with this? Can we upgrade to 10g without having to change all column names and hence all code referring to those columns? Kyte told the story of the OBJECT_ID function that was to be introduced in 10g. However, since a lot of the internal (data dictionary) tables have columns called OBJECT_ID, development was persuaded not to introduce a function with that name. He suggested that if you are a really, really big customer, you might be able to get Server development to a point where they choose a different name for the reserved word (unlikely), or allow you to set some event that has Oracle 10g (temporarily) ignore the LIMIT reserved word and treat it as any other column name. The general best way of dealing with clashing column names is to define those column names using double quotation marks:
create table x ("LIMIT" varchar2(10)). However, you then still need to go in and make all the changes to the SQL and PL/SQL code. The good news is: those changes are probably easy to automate. In fact, Alex suggested that Harm could use this example in his JDAPI presentation, where he will demonstrate the AMIS WebForms Toolkit that can, among many other things, automatically search and replace in Forms sources (fmb, pll).
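To make explicit what those code changes look like: once a column is created with a quoted identifier, every reference to it must quote it (and match its case). A small sketch, with table names of my own invention:

```sql
-- With a quoted identifier, every reference in SQL and PL/SQL must
-- quote it and match its case exactly. CREDIT_RULES is an
-- illustrative table name.
create table credit_rules ("LIMIT" varchar2(10));

insert into credit_rules ("LIMIT") values ('1000');

select "LIMIT" from credit_rules;
```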