Comments on: Functional Partitioning (Friends of Oracle and Java), Fri, 27 Mar 2015 04:47:22 +0000

By: Marco Gralike, Sat, 22 Oct 2005 09:39:42 +0000

By the way, I pointed out your solution to Mark Rittman as an item in the discussion on “Column Based Databases”.

Sometimes a tradeoff isn’t a bad thing ;-)

By: Marco Gralike, Sat, 22 Oct 2005 09:02:08 +0000

Bert-Jan, thanks for sharing, great idea, but I couldn’t help myself… I think there is a limitation built in here.

After reading stuff on Mark Rittman’s blog about temporal databases and column-based databases, and lessons learned from the Steve Adams seminar this week (including some Oracle internals taught to me a long while ago by Anjo Kolk), I was wondering…

Correct me if I am wrong. The way I read it: company : tablespace = 1 : 1

This means that the maximum number of declarable companies equals the maximum number of tablespaces. The maximum number of tablespaces you can declare is bounded by the maximum number of datafiles you can declare, and the number of datafiles is set at database creation time by the DB_FILES parameter and the MAXDATAFILES clause (which in the end is OS specific).
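A minimal sketch of where those two limits are set (the database name and values are illustrative, not from the original post):

```sql
-- MAXDATAFILES sizes the datafiles section of the control file;
-- it is specified at CREATE DATABASE (or CREATE CONTROLFILE) time.
CREATE DATABASE demo
  MAXDATAFILES 1024;

-- DB_FILES is an initialization parameter capping how many datafiles
-- this instance can open; changing it requires an instance restart.
ALTER SYSTEM SET db_files = 1024 SCOPE = SPFILE;
```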

From the manuals (10g):


Specify the initial sizing of the datafiles section of the control file at CREATE DATABASE or CREATE CONTROLFILE time. An attempt to add a file whose number is greater than MAXDATAFILES, but less than or equal to DB_FILES, causes the Oracle Database control file to expand automatically so that the datafiles section can accommodate more files.

The number of datafiles accessible to your instance is also limited by the initialization parameter DB_FILES.

DB_FILES specifies the maximum number of database files that can be opened for this database. The maximum valid value is the maximum number of files, subject to operating system constraint, that will ever be specified for the database, including files to be added by ADD DATAFILE statements.
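To see how close an instance is to these limits, one could query the dynamic performance views (a sketch; it assumes SELECT privileges on the V$ views):

```sql
-- Current DB_FILES setting for this instance
SELECT value FROM v$parameter WHERE name = 'db_files';

-- Number of datafiles currently in the database
SELECT COUNT(*) FROM v$datafile;

-- MAXDATAFILES as recorded in the control file: the total number of
-- record slots in the DATAFILE section of the control file
SELECT records_total, records_used
FROM   v$controlfile_record_section
WHERE  type = 'DATAFILE';
```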

I am not really worried about the restriction itself; it could suffice, and otherwise you go to the overflow partition. What I am worried about is that you will increase the controlfile size, AND, as we know, that will introduce extra IO once the controlfile grows beyond the maximum bytes per single read action of the OS (is it still 64K?). So if we are unlucky, we have introduced a double read and/or write action for every change in the controlfile (e.g. updates of SCN numbers) just to implement this solution.

In other words, say the contents of the controlfile fitted in 2 clusters; by specifying a lot of datafiles the controlfile grows to 3 clusters, and that causes more IO per controlfile update.
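Whether the controlfile has actually grown past a given OS read unit can be checked directly (a sketch; the BLOCK_SIZE and FILE_SIZE_BLKS columns of V$CONTROLFILE are assumed available as documented for 10g):

```sql
-- Controlfile size in bytes: block size times number of blocks.
-- Compare the result against the OS single-read size (e.g. 64K)
-- to see how many read units one controlfile pass touches.
SELECT name,
       block_size,
       file_size_blks,
       block_size * file_size_blks AS bytes
FROM   v$controlfile;
```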