[Photo: Oracle Headquarters, Redwood Shores]

OOW 2007: Mark Townsend explains "Storage is not cheap"

My first formal activity at Oracle Open World 2007 was the Oracle ACE Director Product Briefing: a closed session where some 30-odd ACE Directors (and you can read that any way you like) had assembled to get a few glimpses of what Oracle’s future has in store, as well as some in-depth insights into key products in the Fusion Middleware area and the database. The session was concluded by Mark Townsend (Oracle’s senior director of database product management), who delivered what will probably be one of the best presentations of the week.

Mark ran us through some of the key features in RDBMS 11gR1 and in some areas laid out the road ahead to 11gR2. Not all of what I heard was new – we have been beta testing 11gR1 since October 2006, so we have seen quite a bit of it – but some things became much clearer, or more obviously useful, than they had seemed before.

One of the topics that caught my attention – which, frankly, this one normally never does – was storage.
I know how we used to just shrug when it comes to storage, as that is only iron and dead cheap; store gigabytes, terabytes – whatever: it costs nothing compared to CPU, memory and manual labor. Mark made me realize that there is more to actual data storage than just iron. He pointed out that because of the floor space required for storage, the power (electricity) required to keep it running and the cooling required to siphon off the heat produced by all that power, storage is in fact quite expensive. Or at least not something to ignore when making architectural blueprints for complex database environments.

Oracle 11gR1 provides Transparent Data Compression as one way of dealing with the storage challenge. Data compression will reduce the size of the data on disk by a factor of three on average, and frequently that number can get up to four. Since most database environments are I/O bound – not CPU bound – the additional CPU processing required to compress and decompress the data does not hurt performance. (Note: I/O bound means that adding CPUs to the system will not make it go faster – in fact it might even slow the database down – because the speed with which the disks can deliver the data is the bottleneck in handling the workload.) In fact, since a single block read brings in roughly three times as much data when that data is stored in compressed format, compression may actually speed things up, as the I/O becomes far more effective.
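For the record, switching compression on is pleasantly unspectacular. A minimal sketch, with made-up table names – and note that the OLTP-capable variant belongs to the extra-cost option I come back to below:

```sql
-- Basic compression (included in the database): only data inserted via
-- direct-path operations (e.g. INSERT /*+ APPEND */ or SQL*Loader) is compressed
CREATE TABLE sales_history (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
) COMPRESS;

-- 11gR1 OLTP compression (Advanced Compression option): data stays
-- compressed through regular INSERT and UPDATE activity as well
CREATE TABLE sales_current (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
) COMPRESS FOR ALL OPERATIONS;
```

An existing table can be switched over with ALTER TABLE … COMPRESS FOR ALL OPERATIONS, although – as far as I have seen – rows already in place only get compressed once the segment is rebuilt, for example with ALTER TABLE … MOVE.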

Townsend mentioned that, because the system is I/O bound, using compression typically will not allow you to remove disks from the system: with fewer disks, the I/O bottleneck only gets worse, even if those disks have less data to carry around. However, the same number of disks will be able to hold more data going forward, or can be used for additional purposes.

With all this good news, I asked him whether there is in fact any reason at all not to use compression in 11g – it sounds like a win-win approach! Other than ‘we are charging you a fortune for it, as it is an extra option to the database’, there is not. And of course it is relatively easy to weigh the additional license costs against the savings in storage costs.
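A back-of-the-envelope version of that calculation, with purely illustrative numbers: take 10 TB of data on storage that costs $20/GB all-in. A compression factor of three shrinks it to roughly 3.3 TB, freeing some 6,700 GB – about $134,000 worth of disk. Whether that outweighs the price of the option for your processor count is then simple arithmetic.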

Three-Tier Storage

Another way to lower storage costs is to use differentiated storage. A high-performance storage infrastructure can cost up to $72/GB. While that is justified for the most active data in your system, it is absolutely not required for historical data – which typically constitutes the bulk of the data. So Oracle recommends using three tiers of data storage: active, less active (3-9 months old) and historical data, which come in a ratio of, for example, 10%, 30% and 60%, or thereabouts. By using less expensive storage for the bulk of the data – say down to $7/GB for the historical portion – you can lower the cost of the storage solution tremendously.
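To see how quickly that adds up, a rough calculation – where the mid-tier price is my own assumption, since only the two endpoints were quoted: keeping 10 TB entirely on $72/GB storage costs about $720,000. Split it 10/30/60 across tiers priced at $72, say $30, and $7 per GB, and you pay roughly 1,000 × $72 + 3,000 × $30 + 6,000 × $7 = $72,000 + $90,000 + $42,000 = $204,000 – less than a third.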

To make this happen, we need something else. Since a single table usually contains active, less active and historical data alike, we need some means of distributing the data of that one table across the three storage tiers. The answer to that particular challenge obviously is Partitioning. Different partitions can hold different chunks of the table’s data. Partitioning by timestamp is very common and simple to implement (in fact, in 11g we can even set up automatic partition creation based on time – start a new partition when a new month begins, for example) and each partition can have its own storage characteristics. The Partition by Reference option in 11g is another very useful mechanism for making partitioning of data much easier: it allows detail data to be partitioned along with its master (order lines with orders). Other new partitioning options allow data to be partitioned by list-list, list-range and other combinations, enabling schemes like: partition by month and then by status, or partition by month and then by region, etc.
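Roughly, in 11g SQL, that looks as follows – all table, column and tablespace names are invented for illustration, and as far as I can tell an interval-partitioned table cannot (yet) serve as the parent of a reference-partitioned one, hence the two separate examples:

```sql
-- Automatic, time-based partition creation (interval partitioning):
-- Oracle adds a new monthly partition the first time a row arrives
-- for a month that has no partition yet
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE NOT NULL,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_hist VALUES LESS THAN (DATE '2007-01-01')
);

-- Partition by reference: order lines follow their parent order
-- into the corresponding partition
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE NOT NULL
)
PARTITION BY RANGE (order_date)
(
  PARTITION h1_2007 VALUES LESS THAN (DATE '2007-07-01'),
  PARTITION h2_2007 VALUES LESS THAN (DATE '2008-01-01')
);

CREATE TABLE order_lines (
  line_id  NUMBER PRIMARY KEY,
  order_id NUMBER NOT NULL,  -- the FK must be NOT NULL for reference partitioning
  quantity NUMBER,
  CONSTRAINT fk_line_order FOREIGN KEY (order_id) REFERENCES orders
)
PARTITION BY REFERENCE (fk_line_order);

-- Tiered storage: move an aged partition to a cheaper tablespace,
-- compressing it on the way
ALTER TABLE orders MOVE PARTITION h1_2007
  TABLESPACE historical_data COMPRESS;
```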

Both the content of his talk and the manner in which he delivered it were very good and inspiring. Here I am, at 4.30am, sitting on the floor of my San Francisco hotel room’s bathroom (I do not want to wake my room mate with the lights), writing a blog on storage!
