This week I followed the seminar Optimizing Oracle Performance by Design by Jonathan Lewis, in which, on day 2, he talked about locks, troubleshooting waiting and troubleshooting working.
In session 1 of the day, about locking, Jonathan started by telling us that Oracle has no structure that lists all row locks currently active in the system, or per user. A row is simply locked if its lock byte points to an entry in the ITL (interested transaction list) and that transaction is still active.
He explained that the size of the ITL is limited by MAXTRANS, which in turn is limited by Oracle to approximately half the size of a block. A big plus is that you do not need to perform "tricks" to hold as few locks as possible, since there is no limit on the number of concurrent row locks you can hold. Jonathan went on to explain what a lock is and how deadlocks are created. The best part of this session, in my opinion, was about the risks of the "new" functionality of the merge statement.
Although the merge statement is much more effective at processing data, since it uses array processing, using it can have side effects.
merge into old_data od
using new_data nd
on (od.id = nd.id)          -- illustrative join condition
when matched then
  update set od.columnname = nd.columnname where …
  delete where …
when not matched then
  insert (od.columnname) values (nd.columnname)
If the old_data table is the master table of a master/detail relation and the foreign key on the detail table is not indexed, Oracle will take a lock of type 4/5 (share and share row exclusive) on the detail table. This side effect results in row-by-row processing of the statements: no array processing here anymore!
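Assuming the slowdown really comes from the classic unindexed foreign key, the usual remedy is simply to index the foreign-key column on the detail table (the table and column names below are invented for illustration):

create index detail_data_fk_idx on detail_data (old_data_id);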
We also got a demonstration of how you can use the DBMS_LOCK package to schedule a number of conflicting processes, e.g. function A may not run concurrently with function B, and only one instance of function A may exist at a time, while several copies of function B may run at the same time. DBMS_LOCK has some great features for such a task, and I would advise everybody who needs to implement a scenario like the one described above to dig into the documentation of this package.
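A minimal sketch of that pattern (the lock name is made up; this is not the exact demo from the seminar): function A requests a named user lock in exclusive mode, while function B would request the same lock in share mode, so several Bs can coexist but only one A can run at a time:

declare
  l_handle  varchar2(128);
  l_status  integer;
begin
  -- map a lock name of our own choosing to a lock handle
  dbms_lock.allocate_unique('APP_AB_LOCK', l_handle);

  -- function A uses dbms_lock.x_mode (exclusive);
  -- function B would use dbms_lock.s_mode (share) instead
  l_status := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 10,
                                release_on_commit => false);
  if l_status = 0 then
    -- do the protected work, then release the lock
    l_status := dbms_lock.release(l_handle);
  end if;
end;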
Session 2: Application overheads.
Jonathan talked about Materialized Views, Fine Grained Access Control, Fine Grained Auditing, pipelined and analytic functions, and their impact on the performance of your system. The slide on External Tables was a real eye-opener. I have used External Tables several times in the past and never considered the fact that if anything goes wrong after you start reading one, you might need to roll back lots of data and turn to the OS to find out what went wrong. In other words, I will think twice before using external tables unless I am 100% sure that the data is OK.
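One way to reduce that risk (directory, file and column names below are invented for illustration) is to create the external table with REJECT LIMIT 0 and run a cheap validating scan before the real load, so that bad data fails fast instead of halfway through a big transaction:

create table ext_orders (
  order_id  number,
  amount    number
)
organization external (
  type oracle_loader
  default directory ext_dir
  access parameters (
    records delimited by newline
    badfile 'ext_orders.bad'
    fields terminated by ','
  )
  location ('orders.csv')
)
reject limit 0;

-- cheap full scan: fails on the first bad record,
-- before any real DML has been done
select count(*) from ext_orders;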
In these sessions the strategy for troubleshooting waiting and working was discussed.
Jonathan showed us that it is quite easy to develop a good strategy for pinpointing the pain areas. The most important thing is to do it on a live production system with the correct data flow. Then it is very important to ask the correct questions.
- When is the problem occurring? It simply makes no sense to debug on a Monday if the issue always occurs on a Tuesday.
- Where is the problem? Client, server, …?
With the use of the correct scripts and analysis of the correct database views it is easier to detect the issues. It is a big help if you have something to compare the outcome with: what did the system look like when it was performing OK?
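A typical way to build such a baseline (a sketch; the exact views and columns vary by Oracle version) is to snapshot the system-wide wait statistics while the system is healthy, snapshot them again while it is misbehaving, and compare the two:

-- top wait events since instance startup; run once on a good day,
-- once on a bad day, and compare the results
select event,
       total_waits,
       time_waited          -- centiseconds in older versions
from   v$system_event
where  event not like 'SQL*Net%'
order  by time_waited desc;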
After the detection you can determine whether too much work is being done on the server or whether there are too many waits.
Now you can address the problem.
I must admit that this is easier said than done, and you will not be a tuning expert just from following the seminar. Rome was not built in a day, was it?