A day with Jonathan Lewis
A little while ago, Jonathan Lewis presented a three day seminar in The Netherlands. You may have read other blogs on this site about this seminar. AMIS had sent six people to attend. I was one of the lucky ones. In this blog I will describe some of the things that I thought were really remarkable.
One of the things that Jonathan really stresses is that things change. With different versions of the database, or even minor releases, settings can change. Some default values of database parameters, documented or undocumented, change from version to version. One example he gave was the hidden parameter _unnest_subquery: its default value in 8.1.7 is false, whereas in 9.0.1 it changed to true. Why is this important? It changes the way the database works and can be responsible for a change in performance.
Jonathan knows his Oracle versions; throughout the seminar he pointed out differences between the (minor) versions of the database: "While this is true in 10.1.0.2, it’s changed in 10.1.0.4."
Although the seminar hardly contained any demos, except for the two times when he opened up a SQL*Plus session, Jonathan showed a lot of results which he had gathered when conducting experiments. He showed the proper way of conducting tests: how to make sure that the tests you conduct really measure what you intend them to.
In the section on the cost of parsing, Jonathan also pointed out the data-dictionary tables that can assist you in finding what you need, such as OBJ$, TAB$, TRIGGER$, and many more.
When a query is parsed, a lot of recursive SQL is performed on your behalf. The amount of recursive SQL shows up in the statistics when you set autotrace on. Which statements are executed can be found in a trace file; there you will find queries against the dictionary tables mentioned before.
For a simple query against a view, such as:
select [list of columns] from view_xyz;
Oracle runs recursive SQL, such as:
select text from view$ where rowid = :1;
One of the remarks Jonathan makes regarding this recursive SQL and views is: "If you do lots of small (unshareable) queries against views, then the normal penalty of unsharable SQL is made worse by the presence of the view." Views require this recursive SQL to be executed with every optimization call.
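As a sketch of how you could observe this yourself (the object names are made up, and the exact recursive statements vary per version):

```sql
-- Create a simple view and query it with autotrace statistics enabled
create view view_xyz as select empno, ename from emp;

set autotrace on statistics
select ename from view_xyz;
-- The "recursive calls" statistic counts the dictionary work done on
-- your behalf during the parse; a 10046 trace of the same query would
-- show the individual recursive statements, such as the query
-- against view$ shown above.
```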
New features are not always heaven-sent. They are not free, but come at a cost. An example of this is index monitoring, which records whether your indexes are being used. This sounds like a good idea. But, there’s always a "but", it only records that an index is used during a hard parse. When the index is "hit", Oracle updates OBJECT_USAGE, a table in the SYS schema. It only records whether an index is used, not how frequently it is used. Monitoring your indexes adds a lot of overhead.
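A minimal sketch of how index monitoring is switched on and checked (the index name is an assumption for illustration):

```sql
alter index emp_name_idx monitoring usage;

-- ... run a representative workload ...

-- USED is only YES/NO; there is no usage count
select index_name, table_name, used, start_monitoring
from   v$object_usage
where  index_name = 'EMP_NAME_IDX';

alter index emp_name_idx nomonitoring usage;
```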
Row level security also adds overhead: it is called on every parse and execute. It’s secure, but it comes at a cost. In 8i, it was only called once, at optimization time. That sounds good, but when the security rules changed after optimization you had a problem. This has been solved in later versions.
During the Undo-section in the seminar, we saw a "live" demonstration of how undo works in combination with read-consistency. Two "volunteers" were the buffer cache, three others were the undo segments, another two were long-running queries (no, they stayed in their seats) and one was the SCN-timer. The SCN, System Change Number, is a ticker which is incremented during a commit.
During this demonstration (I will show the same demo during a Knowledge Center meeting at AMIS) it became clear that you can have many out-of-date copies of the same block, because multiple users can query the same information at different times.
Undo blocks, in contrast to table blocks, are very tidy: pointers are not moved around, because that is not necessary in an undo block.
A detailed description of how undo blocks logically work was next. Generating undo is a lot of work, but it guarantees read consistency.
Undo is very, very complicated. "Anyone who claims to know it all is probably being a little over-optimistic".
If the user interface can’t detect which items have been changed by the end user it will create update-statements like:
update t set col1 = col1, col2 = col2, …
This will generate a lot of undo, because every column counts as a change. On the other hand, if the user interface can detect which fields have been changed, it will send a different update statement every time: a new piece of SQL for each combination of changed columns. That requires less undo to be generated, which sounds like a good idea, but the number of (hard) parses will skyrocket. It’s a choice between a rock and a hard place.
Redo is one of the most essential parts of the Oracle database. It is used for the recovery of the database in case of a failure. The redo log contains a stream of all changes to a block. If it’s not in the log file, it didn’t happen. Even uncommitted changes are in the redo log file. Everything is geared toward creating redo, going forward, committing changes. When you make changes to data, redo is generated and the LGWR writes it to the redo log files.
The actual commit is reduced to only recording a commit record in the redo log file. During a commit, the session must wait until the LGWR returns. That’s the reason that the commit-time is "flat", it doesn’t go up as your transaction gets bigger.
PL/SQL doesn’t commit as often as you might think. The commits still happen, but within a single database call only a single sync write is issued. This means you can’t test the load of a highly concurrent system with a PL/SQL loop. It also doesn’t mean that committing inside PL/SQL is free; it is still bad practice.
Reducing redo safely can increase database performance. One example: using a Global Temporary Table (GTT) instead of a regular heap table for temporary data will not generate redo. Well, almost no redo: undo is still generated, and undo is protected by redo. Use a GTT properly, meaning don’t rollback or delete from it, but commit or truncate it. There is also no permanent physical storage for a GTT; segments are allocated dynamically.
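A minimal GTT definition, with made-up names, to illustrate the "commit or truncate" advice:

```sql
create global temporary table gtt_staging
( id    number
, info  varchar2(100)
) on commit delete rows;   -- rows disappear at commit, no delete needed

insert into gtt_staging values (1, 'scratch data');
commit;   -- cleans up the rows, avoiding the undo cost of delete or rollback
```

With `on commit preserve rows` you would truncate the table instead when you are done with the data.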
During a rollback, data blocks are reconstructed based on the undo information. Oracle reconstructs the block logically, not physically. If it reconstructed the block physically, other sessions changing the same blocks would be affected by the rollback. Because all these changes have to be applied to the data blocks again (they have been modified, otherwise there would be nothing to roll back), logging occurs. The time it takes to roll back changes does increase as the transaction gets bigger.
Short story: latches are locks that protect memory. These lightweight locks serialize the actions that can be performed and thus make an application less scalable. Reducing the number of latch acquisitions increases concurrency and scalability.
Hard parsing of SQL increases the amount of latching and will eventually kill your application. Bind variables are the way to go. Even better: parse just once, and reuse all the time.
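As an illustration (table and column names are assumed), compare literals with a bind variable in SQL*Plus:

```sql
-- Each literal value is a distinct statement text: a hard parse every time
select ename from emp where empno = 7369;
select ename from emp where empno = 7499;

-- With a bind variable the statement text is identical,
-- so the parsed cursor can be shared
variable emp_id number
exec :emp_id := 7369
select ename from emp where empno = :emp_id;
```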
A few tidbits
It’s possible to give your tracefile a proper name, instead of a system generated one.
alter session set tracefile_identifier = '…';
Are you using triggers to keep track of who’s modifying your data? And are you working on a 10g database? [10.1.0.2.0]
There’s a NO_TRIGGER hint, and it’s as bad as it sounds.
SQL> create table t
  2  (x int
  3  ,name varchar2(25)
  4  );

Table created.

SQL>
SQL> create or replace trigger trg
  2  before insert on t
  3  for each row
  4  begin
  5    :new.name := 'Alex';
  6  end trg;
  7  /

Trigger created.

SQL> insert into t (x) values (1)
  2  /

1 row created.

SQL> select * from t
  2  /

         X NAME
---------- -------------------------
         1 Alex

SQL> insert /*+NO_TRIGGER */ into t (x) values (2)
  2  /

1 row created.

SQL> select * from t
  2  /

         X NAME
---------- -------------------------
         1 Alex
         2
Scary, isn’t it? When you think you have auditing in place that can’t be circumvented, apparently there is a way to get around it without disabling the triggers.
The material covered in this blog was presented on the first day of the three-day seminar, but in a lot more detail.
Some of the material was not really geared toward developers, but more toward DBAs. Most of the V$ and X$ tables I had never heard of before, but it’s good to know they exist. Being a developer myself, I learned a lot about Oracle and how it does what it does.
I am very glad I was able to attend this seminar.