Continuous Delivery and the Oracle database (I)

In this blog post I will explore the possibilities of applying Continuous Delivery (CD) to the Oracle database.

What is Continuous Delivery?
The introduction of the Wikipedia article on Continuous Delivery says:

“Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.”

CD principles
According to Wikipedia:

“Continuous delivery treats the commonplace notion of a deployment pipeline as a lean Poka-Yoke: a set of validations through which a piece of software must pass on its way to release. Code is compiled if necessary and then packaged by a build server every time a change is committed to a source control repository, then tested by a number of different techniques (possibly including manual testing) before it can be marked as releasable.

Developers used to a long cycle time may need to change their mindset when working in a CD environment. It is important to understand that any code commit may be released to customers at any point. Patterns such as feature toggles can be very useful for committing code early which is not yet ready for use by end users. Using NoSQL can eliminate the step of data migrations and schema changes, often manual steps or exceptions to a continuous delivery workflow. Other useful techniques for developing code in isolation such as code branching are not obsolete in a CD world, but must be adapted to fit the principles of CD – for example, running multiple long-lived code branches can prove impractical, as a releasable artifact must be built early in the CD process from a single code branch if it is to pass through all phases of the pipeline.”

As is often the case, a picture says more than words:

[Figure: Continuous Delivery process diagram]

Which technologies are suitable for CD?
Java-related technologies like ADF and SOA are suitable and widely used in CD projects. But what about the (Oracle) database? The article Database Continuous Delivery says:

“Dealing with database deployments is tricky; unlike other software components such as code or compiled code, a database is not a collection of files. The database is a container of our most valued asset – the business data, which must be preserved. It holds all application content, customer transactions, etc. In order to promote database changes, a transition code needs to be developed – scripts to handle database schema structure (table structure), database code (procedures, functions, etc.), and content used by the application (metadata, lookup content, or parameters tables).”

This paragraph describes very well the difficulties you encounter in a database environment: because there is data in possibly differing table structures, just installing the source code is not enough.

What are the possibilities for deploying to an Oracle database?

Liquibase
An interesting approach is described on www.liquibase.org. The tool Liquibase forces developers to manage their changes in a changelog XML file. Liquibase then applies them to a local database, and eventually the change sets are promoted to production. But there are some disadvantages: it seems more suited to small (open source) database applications than to full-blown Oracle databases with a lot of extra functionality like stored procedures. Another disadvantage I see is the overhead of maintaining the changelog: I do not think my colleagues will be happy when they have to spend more time on administration. They prefer to spend their time in their favourite IDE, like PL/SQL Developer or SQL Developer.
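
For illustration, a minimal changelog with a single change set could look like the sketch below (the table name, author and column definitions are made up for this example):

    <databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
      <!-- Each change set is identified by id and author and is applied at most once -->
      <changeSet id="1" author="example.developer">
        <createTable tableName="parameters">
          <column name="name" type="varchar2(100)"/>
          <column name="value" type="varchar2(4000)"/>
        </createTable>
      </changeSet>
    </databaseChangeLog>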

Using an intermediary reference database
Most Oracle developers are used to maintaining create scripts for all tables, views, packages and so on (DDL). Reference data like parameters (DML) can be maintained in the same way. So when we take an empty database and apply those scripts to it (in the right order), we end up with a reference database. There are tools that migrate the actual target database by comparing it to this reference database. Those tools usually generate migration scripts which are then applied to the target database.
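
In practice such a reference database is built by one master installation script that runs all the DDL and DML scripts in dependency order. A minimal SQL*Plus sketch (the file names are hypothetical):

    -- install.sql: builds the reference database in an empty schema
    -- (hypothetical file names; the order matters because of dependencies)
    @@tables/parameters.sql
    @@views/v_parameters.sql
    @@packages/pkg_parameters.pks
    @@packages/pkg_parameters.pkb
    @@data/parameters_seed.sql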

This approach is very pure: differences are calculated as late as possible, so it is guaranteed that the target and reference database are equal in terms of structures and reference data after the installation (provided, of course, that no other processes run on the reference and target database).

The main disadvantage I see is the lack of (human) control. For a production database this feels awkward. You may run into unforeseen problems, for instance when the reference database is used by two simultaneous installations (say one to production and one to test). I think it will be a very career-limiting move when you have to explain to your boss that the transaction table was dropped because of some bug in the CD process. Another disadvantage I see is the requirement to keep all those DDL and DML scripts in sync with the development database. Once more: developers hate administration. And I am not even mentioning the licence costs for all those reference databases.

So what do we need?
We need a simpler and more robust approach. Let us try to please the lazy developers. They just want to develop functionality using their favourite IDE, unit test the changes and, oh yeah, promote those changes to the next phase (for instance the integration test database), but that should be a quick and simple process. That should be all!

When the promotion from one phase to the next is done using migration scripts, we can make CD work on the database. We just require that those migration scripts (DDL and/or DML) are always applied to the next phase in the same order, and that this is done automatically. This means that manual deployments are forbidden: if you allow them, you cannot guarantee the correct order. Of course this does not apply to development databases, but a manual deployment to a test, acceptance or production database is out of the question. This approach is also handy for provisioning: when you can easily create a template database with test data, you can apply all the changes to it and you have a test database.

So we need a tool which registers which migration scripts have been installed and which, when given all the migration scripts ever written, installs just those that have never been installed before (in the right order).
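
Conceptually, the registry such a tool maintains can be as simple as the sketch below (a hypothetical table; Flyway, introduced next, keeps a similar meta table):

    -- Hypothetical sketch of a migration registry table
    create table migration_history (
      version      varchar2(50)  primary key,  -- determines the installation order
      script       varchar2(200) not null,     -- file name of the migration script
      installed_on date          not null,     -- when the script was applied
      success      number(1)     not null      -- 1 = applied successfully
    );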

Enter Flyway
After a tip from a colleague I got to know Flyway, a Java application.

How Flyway works is well described in its documentation (How Flyway works). In short:
• There is a meta table which registers the migrations that have already been applied.
• During deployment the environment (file system) is scanned for migration scripts.
• Migration scripts which have never been installed before are installed in the correct order, based on the file name, which includes a version number (see the sketch below).
• The meta table is updated after each migration script has been applied.
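
A migration script is just a SQL file whose name carries the version. A minimal sketch (the file name and table are made up for this example):

    -- V1_1__add_status_to_orders.sql (hypothetical example)
    -- Flyway derives version "1.1" and description "add status to orders"
    -- from the file name and records both in its meta table.
    alter table orders add (status varchar2(30) default 'NEW' not null);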

More advantages:
• There is a command line interface (see the example below).
• Flyway works with Java, Maven, Ant and more.
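
A command line deployment could look like this (connection details are hypothetical; migrate is the Flyway command that applies all pending migration scripts):

    flyway -url=jdbc:oracle:thin:@//dbhost:1521/ORCL \
           -user=app_owner \
           -password=secret \
           -locations=filesystem:sql \
           migrate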

The integration with Java is interesting: you can deploy a Java application and, when the new version is started, it will first execute the database deployment (if necessary). Cool!

XL Deploy
At MN Services in The Hague, the Netherlands, we will use XL Deploy to deploy all technologies automatically. XL Deploy has a plug-in for Flyway. The process will look like this (a sketch of the resulting script archive follows the list):
1. Develop code in the database;
2. Maintain reference data;
3. Create migration scripts (DDL or DML) by comparing with the next phase database;
4. Put those migration scripts under version control;
5. Create a script archive with a build tool;
6. Make the script archive available for XL Deploy;
7. Deploy the script archive to the target database(s).
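
The script archive from step 5 could then look like this (hypothetical file names; the version prefixes determine the installation order):

    scripts/
      V2016.04.01.1__create_parameters_table.sql
      V2016.04.08.1__add_parameter_type_column.sql
      V2016.04.08.2__load_default_parameters.sql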

Design decisions
Location of the Flyway meta table
The name and location of the Flyway meta table are configurable. The best choice is to have one meta table per schema, not just one for the whole database. This enables us to deploy to several schemas independently at the same time. Another advantage is that a schema export and import stays simple: all the relevant meta data already resides in that schema. This also means that script archives contain code and data for that schema only.
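
A per-schema setup could be configured like this (a minimal sketch with hypothetical connection details; flyway.schemas and flyway.table are standard Flyway settings):

    # flyway.conf for the APP_OWNER schema (hypothetical values)
    flyway.url=jdbc:oracle:thin:@//dbhost:1521/ORCL
    flyway.user=app_owner
    flyway.schemas=APP_OWNER
    flyway.table=schema_version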

Must a script archive contain the whole history or not?
Must a script archive contain all migration scripts ever written, or just those that are new or changed since the previous release? The answer is simple: all of them. If you do not include the whole history, you can only deploy from release 1 to N by applying all the script archives 1 to N, which is time consuming and error prone. The bonus is that the script archive of the latest release N lets you easily provision a new database: take a template database at any release version between 1 and N and apply release N to it.

24×7
OK, very nice that deployments can now be automated. But what about 24×7 support? Luckily the combination of Flyway and Oracle 11gR2 with Edition-Based Redefinition solves this nasty problem. But more on that later…

In the next blog post I will describe how you can create migration scripts quickly. After that, another blog post will follow with the implementation of the 24×7 functionality.
