Bloom Filters, Hierarchical Profiling, Synopses & One pass distinct sampling, jumping the GAPP … – feast for DBAs (considered harmful..)

 

It’s Monday night and a select company of DBAs and database developers have gathered at AMIS HQ in Nieuwegein for the ‘Looking Back at Hotsos 2009’ session. It is one of those sessions with a lot of energy, laughter and a wealth of serious content. Four very experienced speakers relate their best experiences and most important learning points from the Hotsos conference, and the audience is invited to participate. The speakers are Marco Gralike, Toon Koppelaars, Gerwin Hendriksen and Jeroen Evers.


The evening brings us – apart from a quite good dinner – interesting topics such as Bloom Filters, Hierarchical Profiling, …

(note: the text in the title between parentheses is for insiders only)

Toon – Hierarchical PL/SQL Profiling

Toon summarized the presentation at Hotsos by Bryn Llewellyn on the PL/SQL Hierarchical Profiler. Toon enthusiastically explained how this profiler can help us gain a lot of insight into the execution of our PL/SQL applications: information about the time spent in program units, the types of statements executed (such as static SQL) and the number of calls made.

Just like the ‘normal’ database trace, the hierarchical profiler writes a trace file on the file system of the database server. That file is then formatted/aggregated by the plshprof utility. Alternatively, a call to dbms_hprof.analyze will process the trace file and load it into reporting tables (created with dbmshptab.sql).
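For reference, a minimal usage sketch (the directory object HPROF_DIR and the procedure my_app.run are made-up placeholders for your own setup):

begin
  -- write hierarchical profiler data to HPROF_DIR/rulegen.trc
  dbms_hprof.start_profiling(location => 'HPROF_DIR', filename => 'rulegen.trc');
  my_app.run;                    -- the PL/SQL workload to be profiled
  dbms_hprof.stop_profiling;
end;
/

-- option 1: format the raw trace file on the database server with plshprof:
--   plshprof -output rulegen_report rulegen.trc
-- option 2: load the trace into the dbmshp_* reporting tables (dbmshptab.sql):
declare
  l_runid number;
begin
  l_runid := dbms_hprof.analyze(location => 'HPROF_DIR', filename => 'rulegen.trc');
  dbms_output.put_line('profiler run id: ' || l_runid);
end;
/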

Toon demonstrated the results of hierarchically profiling the execution of RuleGen – Toon’s package for business rule implementation. On a simplified version of EMP he implemented the business rule that a department may hold no more than 100 employees. The implementation in RuleGen uses several queries to check the validity of the data; these queries are executed from the RuleGen PL/SQL package.

The plshprof utility creates a set of HTML documents that present the report based on the tracing data. The demo involved 207000 calls that took 15 seconds. The data by namespace (type of statements) clearly showed that 82% of the time was spent on SQL execution, a useful insight. The report by total subtree elapsed time shows, per module, the time spent including all its nested calls. With very little effort, Toon identified a single function that accounted for 6 of the 15 seconds. The line number is shown as well, so we can find the exact location in the source code where the time is spent.
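The time-by-namespace breakdown can also be produced with a simple query against the reporting tables filled by dbms_hprof.analyze; a sketch, using the run id returned by the analyze call:

select namespace,
       sum(function_elapsed_time) as elapsed_microseconds,
       sum(calls)                 as calls
from   dbmshp_function_info
where  runid = :runid
group  by namespace
order  by elapsed_microseconds desc;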

Toon could draw some meaningful conclusions from the profiling reports – conclusions that he can now turn into actions for further improving RuleGen by eliminating some framework overhead.

Finally, a very brief Performance Analysis Methodology:

1. start with DBMS_HPROF – check time per namespace (type of operation)

2. if SQL takes the longest, use SQL trace (see the sketch after this list)

3. if PL/SQL takes the longest, continue with dbms_hprof or dbms_profiler
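For step 2, one common way to switch on SQL trace for the current session is dbms_monitor (just one of several options):

begin
  dbms_monitor.session_trace_enable(waits => true, binds => false);
end;
/
-- ... run the suspect SQL workload, then switch tracing off again:
begin
  dbms_monitor.session_trace_disable;
end;
/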

It would be nice – Toon suggested – if the trace file also contained the values of the PL/SQL parameters.

Conclusions: elapsed time by namespace is cool, and it is easy to use. If PL/SQL is your bottleneck, the hierarchical profiler is your starting point.

Jeroen – One Pass Distinct Sampling

Jeroen introduced his presentation by stating that it was about something not very useful, but very interesting (as was Marco’s talk on Bloom Filters). Jeroen was impressed by what Amit Poddar was able to tell about an Oracle-patented technology, and regarded it an enormous challenge to explain to his audience what exactly Amit had investigated and presented.

The database gathers statistics to guide the optimizer. The NDV (number of distinct values) is a critical statistic for the query optimizer. In 11g, Oracle has a new way of deriving the NDV, and that is the topic of Jeroen’s/Amit’s talk.

We can trace the query that is executed for statistics gathering. It involves some count distinct statements that, in Jeroen’s/Amit’s example, take 15 minutes, most of it spent on sorting the rows. The time spent on sorting grows much faster than linearly with the number of rows. Therefore, Oracle used sampling to estimate the NDV. However, for the NDV, sampling yielded suboptimal results, for example due to skewed data distribution. The sampling algorithm assumed a uniform distribution of distinct values, but that need not be the case, and then sampling (with small sample sizes) is unreliable.
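To get a feel for the pre-11g situation: the recursive statistics SQL essentially boils down to count(distinct ...) queries, optionally against a sample. A simplified illustration, with made-up table and column names:

-- full scan: exact NDV, but all rows have to be sorted/aggregated
select count(distinct cust_id) from big_sales;

-- 10% sample: much cheaper, but the scaled-up NDV estimate can be far off
-- when the distinct values are not spread uniformly over the rows
select count(distinct cust_id) from big_sales sample (10);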

Jeroen/Amit demonstrated that with a scattered value distribution, a 10% sample (block and row sampling) results in an accuracy of 0.98. For clustered, normal and lognormal distributions, the NDV ‘guesstimate’ is far less accurate. Workarounds for suboptimal NDV estimates prior to 11g include outlines, hints, SQL profiles, adaptive sampling, etc. What we really need is a new, improved way of distinct sampling.

Oracle introduced this new way in 11g and holds a patent on it. A new algorithm can determine the number of distinct values without having to sort and without depending on the data distribution: one pass distinct sampling, because no second (sort) pass is required.

In brief (a sketch of the idea follows this list):

– a hash function maps the column values to a 64-bit hash value

– after hashing the column values, the distribution of the hashed values is uniform, regardless of the distribution of the original data

– the hash values are collected in memory (the memory area is called a synopsis); every distinct value is kept just once, and a maximum of 16384 distinct hash values is held in the memory area

– when the area is full, approximately 50% of the values is discarded (all hash values that start with 0); the number of times the synopsis is split like this is remembered. Hashing then continues (hash values that start with 0 are no longer accepted) until the area is full again, at which point the hash values starting with 10 are discarded, and so on (values starting with 0 or 10 are no longer accepted). The total number of distinct values is then easily estimated: multiply the number of hash values left in the synopsis by two for every split that was performed.
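To make this concrete, here is a strongly simplified PL/SQL sketch of the same idea: hash every value, keep at most 16384 distinct hashes, halve the accepted hash domain whenever the synopsis overflows, and scale the surviving count back up at the end. It uses ora_hash (32 bits here, where Oracle uses 64-bit hashes) and a slightly different way of halving the domain; it illustrates the principle and is in no way Oracle’s patented implementation.

create or replace function approx_ndv (p_table in varchar2, p_column in varchar2)
  return number
is
  type t_synopsis is table of boolean index by pls_integer;
  l_syn    t_synopsis;                      -- the in-memory synopsis of hash values
  l_splits pls_integer := 0;                -- how often the synopsis has been halved
  c_max    constant pls_integer := 16384;   -- maximum synopsis size
  l_hash   pls_integer;
  l_key    pls_integer;
  l_next   pls_integer;
  l_cur    sys_refcursor;
begin
  open l_cur for 'select ora_hash(' || dbms_assert.simple_sql_name(p_column) ||
                 ', 2147483647) from ' || dbms_assert.simple_sql_name(p_table);
  loop
    fetch l_cur into l_hash;
    exit when l_cur%notfound;
    -- only accept hashes in the still-active part of the domain:
    -- the lowest l_splits bits must be zero (a 1/2^splits uniform selection)
    if mod(l_hash, power(2, l_splits)) = 0 then
      l_syn(l_hash) := true;                -- duplicates simply overwrite themselves
      if l_syn.count > c_max then
        -- synopsis full: split the domain and discard roughly half of the hashes
        l_splits := l_splits + 1;
        l_key := l_syn.first;
        while l_key is not null loop
          l_next := l_syn.next(l_key);
          if mod(l_key, power(2, l_splits)) != 0 then
            l_syn.delete(l_key);
          end if;
          l_key := l_next;
        end loop;
      end if;
    end if;
  end loop;
  close l_cur;
  -- every surviving hash represents, on average, 2^splits distinct values
  return l_syn.count * power(2, l_splits);
end approx_ndv;
/

A call such as select approx_ndv('EMP', 'JOB') from dual then returns an estimate of the number of distinct JOB values, without ever sorting the data.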

The algorithm is scalable, as it walks through the rows once and does not need to perform a sort operation; the time taken to determine the NDV statistic in this way increases linearly with the number of rows. The accuracy at a 1% or 10% sample size is quite good – and more or less equally good regardless of the data distribution.

Dbms_stats.gather_table_stats uses the new NDV algorithm, provided two DBMS_STATS settings have their correct (default) values.

Conclusion: the DBMS_STATS setting ‘APPROXIMATE_NDV’ should be set to TRUE and ‘ESTIMATE_PERCENT’ to dbms_stats.auto_sample_size; both are the default values in Oracle 11g. NDV gathering for partitioned tables is done in a lean manner that builds on top of the synopses.
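In practice that boils down to something like the following (the table name is made up; both settings shown are already the defaults in 11g):

begin
  dbms_stats.set_global_prefs('APPROXIMATE_NDV', 'TRUE');   -- 11g default
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'EMP_BIG',
    estimate_percent => dbms_stats.auto_sample_size);
end;
/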

 

Gerwin – Practical use of GAPP: Jumping the Gap

Gerwin gave the last presentation, on his General Approach Performance Profiling (GAPP). GAPP is Gerwin’s method for analyzing the performance of complex infrastructures across all components, using statistical analysis of tracing data. Using GAPP, he is able to find bottlenecks – pinpointing the one component out of dozens or hundreds (storage, network, CPU/servers, database, application server, LDAP) that is responsible for performance peaks. GAPP also helps to predict performance issues.

The steps in GAPP are:

1. data collection – gathering of performance data and tracing details for the infrastructure components

2. data synchronization – ensure that all tracing/logging data is aligned to the same start time and time scale

3. data mining – statistical analysis of (correlations in) performance data and load data (CPU graphs, I/O measurements, etc.); a small example follows this list
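To give an idea of step 3, here is a minimal sketch of the kind of analysis involved, assuming two made-up tables that already contain time-aligned, per-minute measurements (the output of steps 1 and 2): app_response(minute, avg_resp_ms) and db_cpu(minute, cpu_util):

-- how strongly does application response time move with database server CPU usage?
select corr(a.avg_resp_ms, d.cpu_util) as resp_cpu_correlation
from   app_response a
       join db_cpu d
         on d.minute = a.minute;

A coefficient close to 1 makes that component a prime suspect, while a value close to 0 makes it less likely to be the driver of the peaks; as Gerwin stressed, such numbers are easily misinterpreted, so data quality and time alignment matter a lot.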

Gerwin went on to show how, in several real-life environments with limited information and complex technical architectures and infrastructures, he was able to find the bottlenecks and thereby the solutions for performance challenges. The results of statistical analysis are impressive – though easily misinterpreted, by the way – as long as you have the right data, properly aligned in time.

Resources

Bloom Filters – by Christian Antognini.

Doing SQL from PL/SQL: Best and Worst Practices (Oracle white paper) by Bryn Llewellyn.

Using the PL/SQL Hierarchical Profiler – Oracle Documentation.

Website of Amit Poddar and his paper on One Pass Distinct Sampling.