The production database is the holy grail of any organisation – or at least of any microservice or business application. The data and the current structure and definitions of the database objects are essential for the correct functioning of the applications running against the database. The database, its data and its objects are equally important for testing, development, production incident analysis and hot fixing. Making production-like clones of the database available for these activities is important if the results of those activities are to hold up in production.
Several challenges are typically faced at this point, such as:
- the production database contains data that cannot be made available outside the production environment (PII, commercially sensitive, otherwise protected)
- creating clones takes a long time and requires a lot of effort and skill
With RMAN or the OCI Autonomous Recovery Service, the OCI Data Safe service (free for the features discussed here) and Exadata Storage Snapshots (if you happen to be on Oracle Exadata Database Machine) or PDB Clone (for everyone on the multitenant architecture, even without the Multitenant database option), it is pretty straightforward to build an automated pipeline that produces one, a few or many non-production databases, production-like to the degree you want. This pipeline can be triggered, for example, from a self-service portal by product/DevOps teams for their test and development activities.
The creation of the read-only Production Master Clone with sanitized data that is the starting point for all non-prod PDBs is performed in two steps:
1. create a clone of the production database – with an exact copy of all database objects and data
2. sanitize the contents of this clone to remove all PII and otherwise sensitive data (by anonymizing, pseudonymizing, scrambling, synthesizing and plain removal)
Create a clone of the production database – copy of all database objects and data
There are two straightforward procedures – with minimal (actually: no discernible) impact on the production environment – for creating the clone of the production database that is the starting point for the non-production database derivatives:
A: starting from a database backup (the same one that is created for recovery purposes)
this procedure is the same as would be used for database recovery after a disaster. It uses a backup (or a combination of level 0 and level 1 RMAN backups) to create a PDB that reflects the complete production situation at a particular moment in time, using point-in-time recovery based on additional archive logs to get to an exact moment – for example when a specific incident needs to be investigated and/or a fix developed. Using this procedure to create the non-production clone of production is also a good way to frequently test the recovery procedure that might one day be needed to restore production after a real failure. This approach can be fully automated. It is a little slower and more cumbersome than the use of Virtual Full Backup
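As a sketch of procedure A – where the target name CLONECDB, the backup location and the timestamp are made-up illustrations, not taken from the source – a "targetless" RMAN duplicate could restore the clone from existing backups to a chosen point in time:

```sql
-- Sketch only: a targetless RMAN DUPLICATE that builds the clone purely
-- from backup pieces, without connecting to the production instance.
-- CLONECDB and /backup/prodcdb are hypothetical names.
--
-- $ rman AUXILIARY sys@clonecdb

DUPLICATE DATABASE TO clonecdb
  UNTIL TIME "TO_DATE('2025-03-01 14:30:00','YYYY-MM-DD HH24:MI:SS')"
  BACKUP LOCATION '/backup/prodcdb'   -- level 0/1 backups plus archive logs
  NOFILENAMECHECK;
```

Because only the auxiliary instance is involved, running this regularly doubles as the recovery-procedure test mentioned above.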
B: create the PDB using the Virtual Full Backup on OCI Autonomous Recovery Service
If OCI Autonomous Recovery Service is used, all backup processing and storage requirements are offloaded to the Oracle Database Autonomous Recovery Service and we have the Virtual Full Backup (real-time data protection, a virtual level 0 backup) at our disposal. This means that without having to (manually/explicitly) create level 1 backups and collect archive logs, we can restore a database to a specific point in time in a single operation.
Creation of the Production Clone is performed after each (major) database release – and perhaps more frequently, to ensure we have a fresh dataset that is close to the production reality. The procedure can be fully automated. It will take minutes to hours, depending on the size of the database and of course the hardware used for running it.
Sanitize the contents of the PDB clone of Production
Before anyone gets access to the newly created PDB – at this moment an exact replica of the Production database – its contents need to be sanitized. That means: all sensitive data needs to be removed or scrambled through anonymizing, pseudonymizing, scrambling, synthesizing (generating mock data) and plain removal. Once this sanitization is complete, we can start creating the clones of this database that the product teams can make use of. These clones still represent the production database in almost every aspect – structure, objects, data – except for the sensitive records that have been altered or removed.
The sanitization of the master clone of the production database should be done through an automated process. This process will have to be performed many times – as often as a new production clone is created. Note: once a production clone is sanitized, it can be used for creating many PDBs for different ACC, TST and DEV purposes. Only when we clone production again, do we need to sanitize again.
We can use OCI Data Safe to mask all sensitive data in this database. With OCI Data Safe we run a Data Masking Job on the fresh clone of the production database to perform the required sanitization. This job uses the predefined masking policies that were created for the store database. These policies describe exactly which tables and columns need to be processed – and in what way.
When we start to use Data Safe, we explore the store database, discover and mark sensitive columns and tables and define the policies:
- Data Safe recognizes columns with sensitive data – and can be pointed at columns it did not recognize
- for each sensitive column, a masking strategy is defined and executed
- the metadata regarding the sensitive columns in the database is remembered and can be reapplied the next time the production database is cloned
This masking action is performed immediately after the production clone is created and before anyone gets access to this clone. It will take seconds to minutes; the time depends on the volume of (sensitive) data and the number of policies that need to be applied.
Using OCI Data Safe for this masking operation is – as far as I can tell – free for PaaS databases on OCI. I am not sure what the cost for databases elsewhere is; either $200 per month or free. Ask your Oracle sales rep.
Reduce Data Size
In addition to non-production databases that have the full (but sanitized) data set from the production database – used for real-world acceptance testing, performance/scalability/stress testing and production incident replay – we may feel a need for smaller databases. For most test and development activities, it is not necessary to have the full-blown production data set; in fact, having the complete data set will probably slow down activities and complicate matters without adding any value. When not using Exadata Storage Snapshots (see the next section), having the full production data set also means that each non-production database eats up more storage space and takes longer to create (if we do use these snapshots, storage and creation time are not a consideration).
Following the sanitization of the production database clone using Data Safe Data Masking, there could be an additional step that creates a slim version of the sanitized clone containing only a subset of the data. This slim or lite version is created simply by making a full clone of the sanitized database and removing the data we do not want – for example by dropping the partitions for all departments/regions/periods we do not want to include, in case we have a partitioned database (dropping a partition is a very quick operation).
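For a partitioned schema, the slimming step can be as simple as a few DDL statements. A minimal sketch – the schema, table and partition names are hypothetical:

```sql
-- Assuming a table range-partitioned per period (names are made up):
-- drop all periods we do not need in the lite clone.
ALTER TABLE store.orders DROP PARTITION orders_2021 UPDATE INDEXES;
ALTER TABLE store.orders DROP PARTITION orders_2022 UPDATE INDEXES;

-- DROP PARTITION is a data-dictionary operation, so it completes quickly;
-- UPDATE INDEXES keeps any global indexes usable without a rebuild.
```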
Producing Non Production Databases – clones from the sanitized Production Clone
Non-Production Store Databases are PDBs that run in a Non-Production Container Database. The creation of such a non-production PDB starts from a Production clone. This clone contains all database objects from the production database as well as all data from that database. However: the data has been sanitized (anonymized, pseudonymized and synthesized).
From this “master clone“, non-production PDBs can be created in several ways – that differ in speed, ease and limitations.
simple, straightforward, proven and relatively slow is the creation of a non-prod database using RMAN Duplicate or even from a full database backup. The resulting non-prod database is stand-alone and can be used for any subsequent action. The creation of such a database can take up to several hours (the exact timing needs to be tested). The PDB has a full copy of the original database and consumes the corresponding amount of storage space.
simpler even is the creation of a non-prod PDB through the (local) Clone PDB mechanism. A simple statement, easily executed. It creates a full copy of all files that make up the database and creates a new PDB based on them. This process is probably a little faster and simpler than the first approach listed; this needs to be verified through testing. The resulting PDB is a stand-alone database with no limitations. The PDB has a full copy of the original database and consumes the corresponding amount of storage space.
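A minimal sketch of such a local clone, assuming the sanitized master PDB is named MASTER_CLONE (a name invented here) and Oracle Managed Files are in use (otherwise a FILE_NAME_CONVERT clause is needed):

```sql
-- The source PDB must be open read-only, or - with local undo and
-- ARCHIVELOG mode - it can stay open read-write for a hot clone.
CREATE PLUGGABLE DATABASE devpdb1 FROM master_clone;
ALTER PLUGGABLE DATABASE devpdb1 OPEN;
```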
an alternative PDB cloning method is the Snapshot Clone (instead of the default full clone). This can create a clone of a PDB very quickly – in a few minutes. However: the resulting PDB is read-only. That means it can only be used for limited use cases (bug reproduction, testing, reporting). Probably not very relevant.
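The statement differs from a full clone by a single clause. A sketch, with the same hypothetical MASTER_CLONE master; note that SNAPSHOT COPY needs storage that supports sparse copies (e.g. ACFS) or the CLONEDB initialization parameter set to true:

```sql
-- Creates a thin clone: the datafiles are sparse copies that reference
-- the (read-only) files of MASTER_CLONE instead of duplicating them.
CREATE PLUGGABLE DATABASE bugrepro FROM master_clone SNAPSHOT COPY;
ALTER PLUGGABLE DATABASE bugrepro OPEN;
```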
An elegant, fast and attractive mechanism for creating non-prod PDBs is Exadata Storage Snapshots: a feature provided by Oracle Exadata Database Machine. With this mechanism, a clone of the non-prod master can be created in seconds (!) with minimal use of storage. Note: the resulting database is not really equivalent to the regular production database; I would suggest not using a snapshot for “formal“ Acceptance testing but using a full-blown, regular clone – PDB Clone or RMAN Duplicate – for that.
- a snapshot is created instantaneously – it presents a regular database while under the hood linking to all data blocks of the non-prod master database; initially the snapshot is like a view on the original database
- only once DML is executed in the snapshot are blocks created for the snapshot. The storage required for a snapshot is therefore minimal – basically, storage is required only for the blocks that are changed during Dev or Tst compared to the original Production database. A snapshot can also easily be reset (all changes in the snapshot can quickly be undone, providing a clean starting point for the testers or development team)
- the creation (or reset) of a PDB using this mechanism can be performed very frequently – whenever a team, or even an individual developer or tester, would like a clean/pristine database that resembles the current production situation; this action takes less than one minute. This approach is so fast that it can be used in automated test scenarios that need a fresh testing environment available in a few seconds, or to give a developer trying things out a clean environment
Once the non-prod PDB has been created, it is at the release currently running in the production database. Depending on the purpose of the environment – bug reproduction and analysis, production hotfix, acceptance testing (of a new release), development or testing (of a feature not even assigned to a release) – the database contents may have to be upgraded to a specific release beyond the current production release.
Using the scripts developed by the product team for applying the changes that bring a database from one release to the next, the non-prod PDB can now be moved to where it needs to be – in the same way the production database is brought to the next release. Presumably this is an automated deployment process.
Manage (Delete) Non-Prod environments
It is not enough to have an easy way to create non-prod databases. Each non-prod database occupies resources and adds to overall complexity. Any environment no longer in use should be discarded as quickly as possible. Recreating databases is simple, fast and fully automated, so we should not hesitate to drop environments that do not have a clear purpose.
The pipeline described above for creating environments should be complemented with a simple pipeline for removing an environment. Dropping a PDB from a container database is a simple operation, followed by a second background action: when you drop a PDB, the control file of the multitenant container database (CDB) is modified to remove all references to the dropped PDB and its data files. Archived logs and backups associated with the dropped PDB are not deleted at that point; you delete them in a second step using Oracle Recovery Manager (RMAN).
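Both steps are short. A sketch with a hypothetical PDB name:

```sql
-- Step 1 (SQL*Plus, connected to the CDB root): drop the PDB and its files.
ALTER PLUGGABLE DATABASE devpdb1 CLOSE IMMEDIATE;
DROP PLUGGABLE DATABASE devpdb1 INCLUDING DATAFILES;

-- Step 2 (RMAN, connected to the CDB root): remove the leftover backups.
-- RMAN> DELETE NOPROMPT BACKUP OF PLUGGABLE DATABASE devpdb1;
```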
OCI Data Safe – product page https://www.oracle.com/security/database-security/data-safe/
OCI Auto Recovery Service – documentation – https://docs.oracle.com/en-us/iaas/recovery-service/index.html
Oracle Database PDB Clone – documentation – https://docs.oracle.com/en-us/iaas/dbcs/doc/clone-pluggable-database.html