I am currently attending the OUGF 2014 Harmony Conference in a beautiful setting on a Finnish lakeside.
This conference has a great line-up in terms of speakers and sessions – truly impressive. It is a two-day event with 6 parallel tracks – complemented with sauna, camping and pretty good food. ACE Directors and Oracle product managers have flown in from various parts of the world to present on their various specialties. A very special guest star is Chris Date, who presents a seminar on Temporal Data in the Relational Model. All in all, an agenda for the connoisseur (who wants to learn still more).
I am presenting three sessions myself: A Case of Fusion Middleware (labeled as the keynote session for the Fusion Middleware track), Introduction of Oracle SOA Suite 12c and ADF in action – getting (re)acquainted with Oracle’s premier application development framework. I will shortly publish these presentations on my Slideshare site.
Chris Date – Temporal Data and the Relational Model
Edit (14th June 2014): Temporal data are covered by the relational model (not specifically, but they are part of the model just like any other type that can be generated). However, it does make sense to discuss in some detail how temporal data might be dealt with.
As Chris Date describes his approach: “It’s merely a set of carefully thought out shorthands for things you can already do in any relationally complete system. The only really new construct is the INTERVAL type generator, and [..] the relational model has nothing to say regarding what types and type generators need to be supported (well: in fact there are two exceptions (but only two): type BOOLEAN must be supported, and so must the RELATION type generator. But that’s all).”
“The relational model needs NO extension — and no ‘subsumption,’ and above all no perversion — in order to solve the temporal data problem.”
End of Edit
In his talk today, Chris Date discussed intervals. He focused on date intervals, which contain an ordered, countable number of data points (one for each day, hour or second). The theory applies equally to other kinds of intervals or ranges, though, such as alphabetic ranges or numeric ranges.
Chris explained how we need a number of new operations to take into account the data points that each interval represents. These operations turn out to be generalizations of existing ones.
The notions of PACK and UNPACK are important: PACK combines all overlapping and meeting intervals into a single one, and UNPACK turns each interval into the expanded set of unit intervals. These operations are performed frequently to apply logic, enforce constraints and perform DML operations on tuples with interval-based attributes.
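To make the idea concrete, here is a minimal Python sketch of PACK and UNPACK over closed integer intervals (think of the integers as day numbers). The function names follow Date’s terminology, but the representation as `(start, end)` tuples is my own simplification, not anything from the talk or the book.

```python
def pack(intervals):
    """PACK: coalesce overlapping or meeting (adjacent) closed integer intervals."""
    if not intervals:
        return []
    intervals = sorted(intervals)
    packed = [intervals[0]]
    for start, end in intervals[1:]:
        last_start, last_end = packed[-1]
        if start <= last_end + 1:  # overlaps or meets the previous interval
            packed[-1] = (last_start, max(last_end, end))
        else:
            packed.append((start, end))
    return packed

def unpack(intervals):
    """UNPACK: expand each interval into its unit intervals, one per contained point."""
    return [(p, p) for start, end in intervals for p in range(start, end + 1)]
```

For example, `pack([(1, 3), (4, 6), (8, 9)])` yields `[(1, 6), (8, 9)]` because the first two intervals meet, while `unpack([(1, 3)])` yields the three unit intervals `[(1, 1), (2, 2), (3, 3)]`.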
Data Modelling – do not do as suggested in many publications: design your data model without regard for the temporal aspect, and then just add the temporal column(s) to your tables. That would suggest that all attributes we want to keep track of change at the same rate – which they usually do not. Instead, use vertical and horizontal decomposition to take these varying rates of change into account. Distinguish between historical records (that were valid DURING a certain interval) and current records (that have been valid SINCE a certain date) – and even use distinct relations for these two categories.
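The DURING/SINCE split described above can be sketched in a few lines of Python. The two relations, the employee, and the salary figures are all invented for illustration; the point is only that historical rows carry a closed interval while current rows carry just an open-ended start date.

```python
from datetime import date

# Hypothetical horizontal decomposition: historical rows have a DURING
# interval (from, to); current rows have only a SINCE date (open-ended).
historical = [  # (emp, salary, during_from, during_to)
    ("Smith", 40000, date(2012, 1, 1), date(2013, 6, 30)),
]
current = [  # (emp, salary, since)
    ("Smith", 45000, date(2013, 7, 1)),
]

def salary_on(emp, day):
    """Find the salary valid on a given day across both relations."""
    for e, sal, frm, to in historical:
        if e == emp and frm <= day <= to:
            return sal
    for e, sal, since in current:
        if e == emp and since <= day:
            return sal
    return None
```

A lookup for a past date hits the historical relation, while a lookup for today hits the current one – the caller does not need to care which is which.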
The book that Chris co-authored on this topic is called Temporal Data and the Relational Model. It can be found for example on Amazon.
In the near future I hope to do more justice to Chris’ talk – which he will continue tomorrow.
Tom Kyte: the Oracle Database 12c SQL Translation Framework can be used to rewrite the SQL statements executed by COTS applications (as well as statements issued by applications that think they run against a MS SQL Server database). Interesting article: http://kerryosborne.oracle-guy.com/2013/07/sql-translation-framework/. Also note the section in the documentation on dealing with bind parameters. Temporal validity is another interesting feature: it supports flash-forward queries, looking into the future of the data. In a sense, that is possible: we can specify beforehand when which records in the database will become ‘current’ or ‘valid’. This means we can execute queries in the context of a future date to find out what data will be valid at that point in time.
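The flash-forward idea can be illustrated with a small Python sketch: every row declares its own validity period, and a query is evaluated “as of” an arbitrary date – including one in the future. The product and price data are made up for the example; this only mimics the concept, not Oracle’s actual temporal validity syntax.

```python
from datetime import date

# Each row states when it is valid; a None valid_to means open-ended.
rows = [
    {"product": "widget", "price": 10,
     "valid_from": date(2014, 1, 1), "valid_to": date(2014, 6, 30)},
    {"product": "widget", "price": 12,
     "valid_from": date(2014, 7, 1), "valid_to": None},
]

def as_of(rows, day):
    """Return the rows valid on the given day (possibly a future date)."""
    return [r for r in rows
            if r["valid_from"] <= day
            and (r["valid_to"] is None or day <= r["valid_to"])]
```

Querying as of a date after 1 July 2014 already returns the future price of 12, even though that row is not ‘current’ today.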
Bjorn Rost: if the FBDA (Flashback Data Archiver) background process cannot write to a Flashback Data Archive, then the corresponding entries in the UNDO data cannot be removed. If this goes on for long enough, there may not be enough UNDO space available to perform any new DML operations – against any table in the database! Better monitor the free space in the Flashback Data Archives pretty closely. Bjorn is using Flashback Data Archives in a production environment. One of the applications used in his environment allows users to ‘go back in time’ and look at the data from one month ago (or some other arbitrary point in time). Hardly any coding or configuration is required to make this feature available. I thought everyone already knew – but perhaps not everyone does. His experiences are very positive – and very good to learn about. He wants people to know that starting with Oracle Database 12c, Flashback Data Archive is a free feature in all editions of the database.
Bryn Llewellyn talked about improvements in Oracle Database 12c for PL/SQL development. Key topics: SQL <=> PL/SQL interoperability and the reduction of the context switch issue. For example: execute immediate <statement> using <some_package_variable based on some PL/SQL record type>. Before 12c, the bind variable would have to be defined using an ADT (aka UDT): a type created by the DBA in the database schema.
Interesting discussion over a cup of coffee: how useful is it, in terms of performance, to use native compilation for PL/SQL stored programs? The good news: there is no inherent risk or overhead in using native compilation. There is a hit at compilation time – so packages that frequently become invalid and/or are recompiled are perhaps not great candidates for native compilation. The somewhat more down-to-earth news: the performance impact of using native compilation is likely not to be very large (or even noticeable) at all. It really depends on what the PL/SQL code is doing. If its main function is the execution of SQL (or rather asking the SQL engine to perform SQL), there is no gain whatsoever from using native compilation. If the PL/SQL code spends most of its time calling out to other PL/SQL units, then again, there will be no real advantage. However, when the PL/SQL code is doing serious number crunching, iterations or recursive operations without callouts, then native compilation may really contribute to better performance. The message of course is: you have to try it out to find out what it might mean for you. However, the cost is small, so you might as well give it a try.
Bryn was good for a few juicy quotes – such as “Low hanging fruit that is so low it is subterranean” and “You can catch SQL Injection from a lavatory seat”