From 21 to 23 February the OakTable seminar Optimising Oracle (Performance by Design) by Jonathan Lewis was held. The seminar had been organised by Lex de Haan, who recently died, and his wife, Juliette Nuijten, of Naturaljoin (http://www.naturaljoin.nl). Despite his death the seminar went ahead, just as Lex wanted, and Naturaljoin will continue to organise seminars. There were 82 attendees, an all-time high for Naturaljoin.
The guest speaker was Jonathan Lewis, an independent consultant who has written several Oracle books, including Cost-Based Oracle Fundamentals (the first of a planned three-volume series). Jonathan is a very good speaker, with lots of humour and plenty of detail, which was a big plus since not every topic was equally interesting for everyone. His motto: don’t believe something just because it is written in a book; learn how to discover it for yourself, because Oracle keeps changing (even between patch releases).
Day 1 was about:
- Parsing and Optimising costs
- UNDO costs
- REDO costs
- Latching Issues
Parsing and Optimising costs
This part was about:
- discover how expensive SQL is
- learn what code suits Oracle
- understand the ‘quick fixes’
To discover how expensive SQL is:
- create an object (you might use dbms_random to create random data)
- bounce the database (or execute alter system flush shared_pool)
- alter session set sql_trace = true
- execute a simple statement
- check the trace file (using tkprof, for instance)
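Putting these steps together, a minimal sketch (the table name t1 and the row counts are illustrative, not from the seminar material):

-- create a test table with random data
create table t1 as
select rownum id, dbms_random.string('a', 20) padding
from   all_objects
where  rownum <= 1000;

-- flush the shared pool so the next parse is a hard parse
alter system flush shared_pool;

-- trace this session
alter session set sql_trace = true;

-- execute a simple statement
select count(*) from t1 where id = 42;

alter session set sql_trace = false;

Then run tkprof on the resulting trace file from the user dump destination to see the parse, execute and fetch costs, including the recursive SQL.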
New features like index usage monitoring, dynamic sampling of statistics and fine-grained access control (also known as virtual private database) have an impact on recursive SQL, so be careful with these features.
To see the details of how the optimizer evaluates and selects a path, this command may be used:
alter session set events '10053 trace name context forever, level 1';
Coding strategies:
- Literal strings throughout
  - needed for Decision Support Systems (data warehouses)
  - non-scalable for OLTP
- Bind variables (nearly) everywhere
  - can still end up ‘soft’ parsing
- Parse once and hold
  - harder to code safely
Parse once and hold is done automatically by PL/SQL for cursors. For dynamic SQL (dbms_sql) or ODBC/JDBC it means: parse or describe the statement only once and keep the cursor.
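A minimal PL/SQL sketch of the parse-once pattern with dbms_sql (the table t1 and its columns c1 and id are assumed for illustration):

declare
  c      integer := dbms_sql.open_cursor;
  ignore integer;
begin
  -- parse once, outside the loop
  dbms_sql.parse(c, 'update t1 set c1 = :val where id = :id',
                 dbms_sql.native);
  for i in 1 .. 100 loop
    -- bind new values and re-execute the already-parsed cursor
    dbms_sql.bind_variable(c, ':val', i * 10);
    dbms_sql.bind_variable(c, ':id', i);
    ignore := dbms_sql.execute(c);
  end loop;
  dbms_sql.close_cursor(c);
end;
/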
Undo costs
This session was about:
- understand how undo works
- understand what undo does
- be able to monitor the costs of undo
- investigate options for minimising undo
As a general rule, Oracle tries to avoid destroying information for as long as possible, so you can’t make a change to a data block until you have a safe copy of the original data – and you need some space to make that copy. This is where undo segments (previously known as rollback segments) come into play.
Undo is needed for rollback and read consistency. Oracle now offers automatic undo management for easier administration of undo segments.
The command
alter system dump undo header {segment name};
can be used to dump the contents of an undo header.
The views v$transaction, v$undostat and dba_undo_extents are useful for monitoring undo costs.
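For instance, a quick look at recent undo consumption (v$undostat reports in ten-minute intervals):

-- undo blocks, transaction counts and longest query per interval
select begin_time, end_time, undoblks, txncount, maxquerylen
from   v$undostat
order by begin_time;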
Rollback is a very expensive operation, so minimising the number of rollbacks will reduce the costs of undo. No-change updates like
update t1 set c1 = c1, c2 = c2, c3 = c3
are expensive too, because they generate undo even though no value changes.
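One hedged way around this (my sketch, not from the seminar; it assumes the columns are not nullable) is to touch only rows whose values would actually change:

-- skip rows that would not change, so no undo is generated for them
update t1
set    c1 = :v1, c2 = :v2, c3 = :v3
where  id = :id
and   (c1 != :v1 or c2 != :v2 or c3 != :v3);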
Redo costs
Why redo:
- Smarter I/O
- Recoverability
What does it cost:
- Redo imaging
- Serialisation
- Archiving
Redo logging:
- a stream of all changes to blocks
- may contain uncommitted changes
- must be written on a commit
- if it’s not in the log file, it did not happen (create table/index… nologging)
- not re-read in normal processing
Effects of commit:
- a ‘commit record’ must be created
- redo in the log buffer is written to disc
- the session ‘must’ wait for lgwr to return
- other processes may keep writing to the buffer
- other processes may commit and then wait
- lgwr writes all the pending commits
The costs for updating 17 rows in a PL/SQL loop:

                    Commit at end   Commit per row
  redo size              5440            10220
  redo entries             18               34
  redo …                   16             4660
  redo …                   11               30
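A sketch of the kind of loop behind these numbers (the table t1 is assumed; the point is the position of the commit):

begin
  for r in (select id from t1 where rownum <= 17) loop
    update t1 set c1 = c1 + 1 where id = r.id;
    commit;  -- commit per row: a commit record and a log write per iteration
  end loop;
end;
/

Moving the commit to after the loop turns 17 commit records into one, which is where the difference in redo comes from.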
How to avoid redo costs:
- use ‘proper’ temporary tables
- do not rollback or delete on temporary tables but use commit/truncate, or make sure the table is created with on commit delete rows
- use NOLOGGING for create table/index
- look at options for using insert /*+ append */
- use direct loads with SQL*Loader
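A short sketch of two of these options (the table names are illustrative):

-- a ‘proper’ temporary table: rows vanish on commit, with minimal redo
create global temporary table gtt_stage (id number, val varchar2(30))
on commit delete rows;

-- NOLOGGING create, followed by a direct-path insert
create table t2 nologging as
select * from t1 where 1 = 0;

insert /*+ append */ into t2
select * from t1;
commit;

Note that NOLOGGING and direct-path loads trade recoverability for speed: if it’s not in the log file, it did not happen, so a fresh backup is needed afterwards.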
As far as archiving and backup are concerned: the Oracle backup and recovery tool RMAN knows about all the Oracle internals.
Latching issues
Oracle uses latches to protect memory structures (for instance, several users trying to read/update a linked list). Historically, Oracle handled the problem through exclusive latches; Oracle 9i introduced the shared read latch.
Demonstration code:

-- index-organized table holding the integers 1 to 23
-- (a create-table-as-select may not specify column datatypes)
create table kill_cpu (n, primary key (n))
organization index as
select rownum n from all_objects
where rownum <= 23;

-- hierarchical query that explodes combinatorially,
-- hammering the buffer cache latches
select count(*) from kill_cpu
connect by n > prior n
start with n = 1;
This simple query takes one minute on an average NT machine (HP Omnibook, 700 MHz).
The view v$session_event shows that the wait event ‘latch free’ is responsible, and from v$latch we find that the ‘cache buffers chains’ latch is the culprit.
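A sketch of the sort of queries behind this diagnosis (v$session_event needs the session’s SID):

-- which events has this session waited on?
select event, total_waits, time_waited
from   v$session_event
where  sid = :sid
order by time_waited desc;

-- which latches are suffering?
select name, gets, misses, sleeps
from   v$latch
order by sleeps desc;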
One option to reduce the cost of pinning data buffers: increase the arraysize (in SQL*Plus: set arraysize 100), so that more rows are fetched per buffer visit.
Again: parsing once also improves on latching issues.
The example is a select from a table for all the numbers from 1 to 10000. There are four variants:
- execute immediate using literal text
- execute immediate using bind variables
- parse once using dbms_sql (dynamic)
- using standard PL/SQL (static)
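A minimal sketch of the first two variants (a table t1 with a numeric column id is assumed):

declare
  v number;
begin
  -- variant 1: literal text, a new statement every iteration -> hard parses
  for i in 1 .. 10000 loop
    execute immediate
      'select count(*) from t1 where id = ' || i into v;
  end loop;

  -- variant 2: one statement text with a bind variable -> soft parses
  for i in 1 .. 10000 loop
    execute immediate
      'select count(*) from t1 where id = :b1' into v using i;
  end loop;
end;
/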
These are the results:

                    Literal     Bind   Dynamic   Static
  Parse CPU (s)       17.40     0.66      0         0
  Parse count         10001    10001      2         1
  Hard parses         10000        2      2         1
  Soft parses             1     9999      0         0
  CPU Time (s)        21.55     3.74     2.86      1.06
  Recursive calls     10000    10000    30003     10000
Things to avoid:
- large triggers with SQL statements, because they are parsed over and over again
- ref cursors, because they do not use the PL/SQL cursor cache