When defining sequences, you can choose to set a cache size or disable the sequence cache:
create sequence tst_nc2 start with 1 increment by 1 nocycle nocache;
create sequence tst_ch start with 1 increment by 1 nocycle cache 50000;
I wondered: does it make a difference in performance?
A simple test:
declare
  l_id number(12);
begin
  dbms_output.put_line('start nocache op '||to_char(systimestamp));
  for r in 1..49999 loop
    select tst_nc2.nextval into l_id from dual;
  end loop;
  dbms_output.put_line('end nocache op '||to_char(systimestamp));
  dbms_output.put_line('start cache op '||to_char(systimestamp));
  for r in 1..49999 loop
    select tst_ch.nextval into l_id from dual;
  end loop;
  dbms_output.put_line('end cache op '||to_char(systimestamp));
end;
/
start nocache op 22-09-06 10:39:41,163681000 +02:00
end nocache op 22-09-06 10:40:39,500215000 +02:00
start cache op 22-09-06 10:40:39,500591000 +02:00
end cache op 22-09-06 10:40:55,823251000 +02:00
Without a cache, 50,000 sequence numbers are fetched in about 58 seconds. Filling the cache with 50,000 values at once and fetching them takes 16 seconds.
So, there is a noticeable benefit to using the cache.
When I fill the cache before running the block (a single nextval call populates the entire cache):
select tst_ch.nextval from dual;
1 row selected.
declare
  l_id number(12);
begin
  dbms_output.put_line('start cache op '||to_char(systimestamp));
  for r in 1..49999 loop
    select tst_ch.nextval into l_id from dual;
  end loop;
  dbms_output.put_line('end cache op '||to_char(systimestamp));
end;
/
start cache op 22-09-06 10:51:25,604447000 +02:00
end cache op 22-09-06 10:51:32,852197000 +02:00
PL/SQL procedure successfully completed.
I see that simply reading 50,000 values from an already-filled cache takes just over 7 seconds.
So, if you really, really, really need to squeeze the last bit of performance out of your batch, check your sequence's cache size.
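For an existing sequence you don't have to drop and recreate it: the cache size can be changed with ALTER SEQUENCE, and the current setting is visible in USER_SEQUENCES. A small sketch, using the sequence names from the test above:

```sql
-- Check the current cache setting of both test sequences
select sequence_name, cache_size
  from user_sequences
 where sequence_name in ('TST_NC2', 'TST_CH');

-- Give the nocache sequence a cache after the fact
alter sequence tst_nc2 cache 1000;
```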
6 thoughts on “How does the sequence cache affect performance?”
I have addressed a similar issue on my blog; have a look at it.
Don't forget that one redo block will be created for each update of the sequence, which occurs once every cache-size fetches. So if you are APPEND-inserting tens of millions of records and have a cache size of 20, you will still be generating quite a bit of redo.
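To put a rough number on that remark (my own arithmetic, not the commenter's): loading N rows updates the sequence's dictionary entry about N divided by the cache size times:

```sql
-- 10 million rows inserted, two hypothetical cache sizes
select 10000000/20   as updates_with_cache_20,    -- 500000 dictionary updates
       10000000/1000 as updates_with_cache_1000   --  10000 dictionary updates
  from dual;
```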
Caching makes a significant difference to batch data loads. It's a good tool to have when trying to tune a third-party app without changing code, if the app is batch. I've found that a cache size of about 100 is optimal. Also, coding the sequence directly into the DML cuts down the work to do, e.g.: insert into t ( col1 ) values ( my_seq.nextval ) returning col1 into l_my_col1; Of course, you can see greater benefits if the batch can be bulked up, i.e. using an insert into … select … from …. Mathew Butler
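As a sketch of that last suggestion (the table and column names here are made up for illustration), the per-row loop can be replaced by one set-based statement that calls the sequence inside the insert:

```sql
-- One round trip instead of one insert per row;
-- my_seq.nextval is evaluated once per selected row
insert /*+ append */ into t ( col1 )
select my_seq.nextval
  from staging_table;
```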
I think the cache can be very useful if you have a RAC environment, because otherwise the different instances are playing ping-pong with the sequence dictionary tables all the time.
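For RAC the usual advice is indeed a generous cache combined with NOORDER, so that each instance hands out numbers from its own cached range instead of coordinating over the interconnect. A sketch (the sequence name is made up):

```sql
-- NOORDER lets every RAC instance consume its own cached range;
-- the trade-off is that numbers are not issued in strict order across instances
create sequence tst_rac start with 1 increment by 1 nocycle cache 1000 noorder;
```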
I think there is a different optimal setting for each situation and each type of DML. My assumption was that the cache size should be set just above the expected number of sequence calls within a large batch or procedure.
Testing some in-between values would yield a more reliable result for this test. Then we could also see whether the relation is linear or exponential, for instance.
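A hypothetical harness for that (the sequence name tst_x and the list of cache sizes are my own choices) could recreate the sequence once per cache size and repeat the timing loop from the post:

```sql
declare
  l_id    number(12);
  l_start timestamp with time zone;
begin
  -- Try a range of cache sizes between nocache-ish and the post's 50000
  for c in (select column_value as cs
              from table(sys.odcinumberlist(2, 20, 100, 1000, 10000))) loop
    begin
      execute immediate 'drop sequence tst_x';
    exception
      when others then null; -- first pass: sequence does not exist yet
    end;
    execute immediate
      'create sequence tst_x start with 1 increment by 1 nocycle cache '||c.cs;
    l_start := systimestamp;
    for r in 1..49999 loop
      -- dynamic SQL because the sequence is created at run time
      execute immediate 'select tst_x.nextval from dual' into l_id;
    end loop;
    dbms_output.put_line('cache '||c.cs||': '||to_char(systimestamp - l_start));
  end loop;
end;
/
```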
It would be interesting to see how good or bad the performance is if you cache 1000 values. Sure, the dictionary must be updated a lot more, but I am still interested.