Migrating an old (10.2.0.4) database to Oracle Cloud with minimal downtime

Unlike most of our posts, this one is more story than code. But it will hopefully help somebody who ends up in the same situation we landed in: migrating data from very old versions to a new environment.

Recently we were tasked with migrating several databases from on-premises to Oracle Cloud Infrastructure (OCI). The variety of on-premises databases was a challenge on its own:

  • versions 10.2.0.4, 11.2.0.4 and 12.1.0.2
  • most Enterprise Edition, some Standard Edition
  • some with partitioning, most without
  • all running on Windows (a few on 32-bit, the rest on 64-bit)
  • some production databases differed significantly from their acceptance siblings (schemas and/or edition)
  • but in the end, it was all Oracle 🙂

Several techniques and approaches were used for the different databases. This blog post describes the migration of the oldest and largest database to OCI. We will probably write about the other challenges in the (near) future.

As is common for everybody, we saved the worst for last, hoping we had gained enough experience and confidence in the migration strategy to move this database to OCI.

The source database involved was an Oracle Enterprise Edition 10.2.0.4 database running on Windows 2003 (32-bit). Database size was approx. 7 TB, using partitioning and some compressed tables.

Additional complicating factors: several tables held more than 1,000,000,000 records, and based on a timestamp only records after August 2017 (roughly 33%) had to be migrated; everything older should be left behind. Oh, and don’t forget the business wished to reduce downtime as much as possible, and to absolutely no more than 18 hours.

Quite a challenge 🙂.

Based on all the information above (gathered during several test runs, which caused almost every strategy we developed to fail), the only way to be successful would be using GoldenGate (GG from now on). And how nice… using GG in OCI is on promotional offer until December 31, 2020 if used to migrate into OCI.

Hold your horses, it’s not that easy…

Diving into the documentation made clear that the latest GG version that could be used against a 10.2.0.4 database would be GG 11.x (which has already been out of support for several years).

GG 11.1 was the only version available for download, so that was our starting point. Too bad: after a few test runs we ran into the issue that this version could not handle compressed tables, support for which was only added in GG 11.2.1.0.x. With a little help from our friends at Oracle Support I managed to get GG 11.2.1.0.1 for Windows 32-bit. We installed this on the database server and…

This didn’t do the trick 🙁. Another deep dive into the documentation made clear that to capture compressed tables from our source database we needed to install a downstream database of version 11.2.0.4 (and nothing older), which would read all archive logs from the source database and present the content to the GG extract process. The shipping of the archive log files is similar to the techniques used with Data Guard.
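We are not reproducing our exact configuration here, but a minimal sketch of this log shipping (assuming hypothetical service names SRCDB for the source and DWNSTRM for the downstream mining database) would look roughly like this:

-- On the 10.2.0.4 source, as SYSDBA: ship archive logs to the downstream database
alter system set log_archive_config='DG_CONFIG=(srcdb,dwnstrm)' scope=both;
alter system set log_archive_dest_2='SERVICE=dwnstrm NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dwnstrm' scope=both;
alter system set log_archive_dest_state_2=ENABLE scope=both;

-- On the 11.2.0.4 downstream database: keep the foreign (source) archive logs
-- separate from the downstream database's own archive logs
alter system set log_archive_config='DG_CONFIG=(srcdb,dwnstrm)' scope=both;
alter system set log_archive_dest_2='LOCATION=e:\foreign_arch VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' scope=both;

For real-time mining the downstream database additionally needs standby redo logs; the exact setup depends on your environment, so treat the above as an illustration rather than a recipe.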

After testing, and running into some old bugs which were luckily solved in a patched version, we ended up with the situation below: GG 11.2.1.0.33 with an 11.2.0.4 downstream database on-premises, and GG 18.1 in OCI:

[Figure: overview of the migration architecture]

Once we were convinced that this solution actually worked, we could step onto the migration path.

We decided to take the following approach:

1. Create and start all GG extracts with the “begin <timestamp>” parameter

GG01> register extract e_jeroen database
GG02> add extract e_jeroen, integrated tranlog, begin 2020-03-19 07:55
GG03> add exttrail e:\dirdat\xa, extract e_jeroen, megabytes 50
GG04> start extract e_jeroen
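The extract parameter file itself is not shown above; for downstream integrated capture, an e_jeroen.prm would look roughly like the sketch below (credentials, connect strings and schema wildcards are assumptions):

extract e_jeroen
-- log in to the source database for metadata lookups (hypothetical credentials)
userid ggadmin@srcdb, password ********
-- mine the shipped logs on the downstream 11.2.0.4 database, not on the source
tranlogoptions mininguser ggadmin@dwnstrm, miningpassword ********
exttrail e:\dirdat\xa
table AAA.*;
table BBB.*;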

2. Find the SCN corresponding to this specific timestamp

SQL> select timestamp_to_scn(to_timestamp('19/03/2020 07:55:00','DD/MM/YYYY HH24:MI:SS')) as scn from dual;

SCN
----------
12763967147187
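Note that timestamp_to_scn is an approximation: the timestamp-to-SCN mapping only has a granularity of a few seconds. A quick sanity check is to map the SCN back and verify it lands close to the intended start time:

SQL> select scn_to_timestamp(12763967147187) as ts from dual;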

3. Prepare the Data Pump export with the correct settings. Our .par file looked like this:

directory=exp_dir
dumpfile=all_schemas_a_%U.dmp,all_schemas_b_%U.dmp
filesize=10G
logfile=all_schema.log
parallel=2
query=(
AAA.TABLE_01:"WHERE DATUMTIJD > to_date('31-08-2017 23:59:59','dd-mm-yyyy hh24:mi:ss')",
BBB.TABLE_02:"WHERE DATUMTIJD > to_date('31-08-2017 23:59:59','dd-mm-yyyy hh24:mi:ss')",
.....

4. Start the export with the following command:

expdp xxx@sid parfile=all_schema.par full=y flashback_scn=12763967147187

5. At some point the GG pump process needs to be started to send all trail files to the cloud GG. We did this when the Data Pump export was finished (so as not to overload the CPU). Nothing fancy here, just follow the docs.

GG01> add extract p_jeroen, exttrailsource e:\dirdat\xa
GG02> add rmttrail ./dirdat/xc, extract p_jeroen
GG03> start extract p_jeroen
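Again, we are not reproducing our exact parameter file; a minimal p_jeroen.prm could look like this sketch (the OCI host name and manager port are assumptions):

extract p_jeroen
-- pure pass-through, no transformation needed
passthru
-- manager of the GG 18.1 installation in OCI (hypothetical host/port)
rmthost oci-gg-host, mgrport 7809
rmttrail ./dirdat/xc
table AAA.*;
table BBB.*;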

6. Once the Data Pump export is finished, transfer all files to the cloud database server and run a normal Data Pump import (only tables & indexes). Our par file looked like this:

directory=IMP_DIR
logfile=imp_all_schema.log
parallel=6
schemas=AAA,BBB,CCC,DDD,EEE,FFF
dumpfile=ALL_SCHEMAS_A_%U.DMP,ALL_SCHEMAS_B_%U.DMP
exclude=db_link,table_statistics,index_statistics,job,package,procedure,function,trigger,view,sequence

7. Start the import with:

impdp system@xxxx parfile=imp_all_schema.par

8. When the Data Pump import is finished, first create a full backup of the database to be sure you have a solid restore point.
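How you take that backup is up to your own standards; a plain RMAN full backup including archive logs is the simplest sketch:

RMAN> backup database plus archivelog;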

9. Start the GG replicat process to process the backlog of transactions since the GG extract was started. We needed about 2 days to process a backlog of 4 days, but this is highly dependent on volume and processing power.

GG01> add replicat r_jeroen, exttrail /u02/trails/dirdat/xc
GG02> start replicat r_jeroen
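The replicat parameter file is not shown in this post either; a minimal r_jeroen.prm could look like the sketch below (credentials and mappings are assumptions):

replicat r_jeroen
-- hypothetical credentials for the OCI target database
userid ggadmin@ocidb, password ********
-- table structures match after the Data Pump import
assumetargetdefs
map AAA.*, target AAA.*;
map BBB.*, target BBB.*;

In a setup like this you would typically start the replicat with AFTERCSN set to the flashback_scn used in step 4 (start replicat r_jeroen, aftercsn 12763967147187), so that transactions already contained in the export are skipped.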

10. Once both databases are in sync (maybe with some minutes of delay, depending on your configuration) you’re ready to migrate. This should be nothing more than copying users, packages, synonyms, db-links, triggers, etc. to the new environment via a Data Pump export/import, as sketched below.
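A sketch of what such a metadata-only export parameter file could look like (the object list mirrors what was excluded in step 6; all file names here are illustrative):

directory=exp_dir
dumpfile=metadata_only.dmp
logfile=metadata_only.log
content=metadata_only
schemas=AAA,BBB,CCC,DDD,EEE,FFF
include=USER
include=DB_LINK
include=SEQUENCE
include=SYNONYM
include=PACKAGE
include=PROCEDURE
include=FUNCTION
include=TRIGGER
include=VIEW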

With the process described above, we were able to migrate this environment with approx. 2 hours of required downtime. Because the migration also involved another database (which is worth another blog post), the actual downtime was a lot longer. If planned strictly, I’m convinced it could be done with less than 1 hour of downtime.

During the whole process we discovered a lot, most of it well-known but often forgotten:

  • Lifecycle management is crucial. Saving some money now by not upgrading to a recent version will hurt somewhere in the future. And then it really hurts: dealing with it costs a lot of energy, time and money
  • Ensure your acceptance environment is representative of your production environment. We ran into several issues in production that would never occur in acceptance
  • GoldenGate is a great tool for migrating data between different versions, operating systems and locations
  • During the migration process you may need a lot of storage for your Data Pump export and GG trail files. Especially if you want to keep trail files until the migration is completely finished, this can run into a few terabytes
