
Continuous Delivery and the Oracle database (II)

In the previous blog post I described how to implement Continuous Delivery for an Oracle database by using migration scripts.

In this blog post I will describe how to create those migration scripts (automagically).

DML scripts
This is the simpler case because you need them less often. There are various data compare tools which create scripts to migrate the differences between a source and a target schema:
• dbForge Data Compare for Oracle, v3.7 Express (free)
• redgate Data Compare for Oracle (not free)

And you can also do it by hand using INSERT, UPDATE, DELETE or MERGE statements.
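For a small reference table, such a hand-written script could be a single MERGE statement. A minimal sketch (table and column names are made up for illustration):

```sql
-- Hypothetical example: migrate a lookup table with one idempotent MERGE.
merge into target_schema.country_codes t
using (select 'NL' as code, 'Netherlands' as name from dual
       union all
       select 'BE' as code, 'Belgium' as name from dual) s
on (t.code = s.code)
when matched then
  update set t.name = s.name
when not matched then
  insert (code, name)
  values (s.code, s.name);
```

Because MERGE is idempotent, the same migration script can safely be run more than once.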

So I will ignore DML scripting for the rest of this blog post.

DDL scripts
‘Off the shelf’ tools
So what is left now is how to create DDL migration scripts. There are various tools available on the sites just mentioned, and SQL Developer has its Database Diff tool:

SQL Developer Database Diff

I tried all these tools and found that they:
• are slow;
• have a poor user interface;
• have no (good) command line interface;
• are UI based, so there is yet another UI to learn;
• cannot log on with OPS$ accounts;
• do not always generate the code correctly, which is the main reason for their existence (!).

Do it yourself…
So what to do? Good old DIY. Some investigation of the SQL Developer Database Diff tool showed me that it uses the Oracle packages DBMS_METADATA and – new since Oracle 11g – DBMS_METADATA_DIFF (see SQL Developer Database Diff – Compare Objects From Multiple Schemas). An important remark: in Oracle 11g DBMS_METADATA_DIFF requires an extra license, the Oracle Enterprise Manager Change Management option. For Oracle 12c the licensing has changed, but you still need something extra. Check it! Luckily my company had all the necessary licensing.

Based on these Oracle packages I thought it would be feasible to create a command line tool without all the inconveniences. In the end I succeeded but it took some time to solve the technical problems… I will describe them later, after the requirements.

Design requirements
The design requirements:
• A command line interface is necessary (for Continuous Delivery)
• You have to be able to specify a source and target schema
• Filtering on object type (TABLE, PROCEDURE, VIEW, etc.) must be possible
• Filtering on object names (inclusive or exclusive) too
• Source and target schemas may have different names
• Source and target schemas may reside in different databases
• You must use database links to logon to a remote source or target schema
• The account which runs the tool does not have to be the source or target schema
• The privileges of the account which runs the tool must be sufficient to lookup any schema object


Not every object type is supported, though. A PACKAGE, for instance, cannot be compared using DBMS_METADATA_DIFF, but there is a simple work-around: use DBMS_METADATA to generate the code for both schemas and compare the two texts.
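A minimal sketch of that work-around, assuming the account can see both schemas (object names are invented, error handling omitted):

```sql
declare
  l_source clob;
  l_target clob;
begin
  -- Generate the complete package DDL for both schemas with DBMS_METADATA.
  l_source := dbms_metadata.get_ddl('PACKAGE', 'MY_PKG', 'SCHEMA_SOURCE');
  l_target := dbms_metadata.get_ddl('PACKAGE', 'MY_PKG', 'SCHEMA_TARGET');

  -- If the texts differ, the source DDL itself is the migration statement,
  -- since CREATE OR REPLACE is idempotent for a PACKAGE.
  if dbms_lob.compare(l_source, l_target) <> 0 then
    dbms_output.put_line(l_source);
  end if;
end;
/
```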

DBMS_METADATA_DIFF has some handy COMPARE functions (COMPARE_SXML, COMPARE_ALTER and COMPARE_ALTER_XML) which return a CLOB containing the differences (one or more DDL statements).

The biggest difficulty with these functions is extracting the separate DDL statements for one object when the CLOB contains several of them. For example, when a TABLE CLOB contains various ALTER statements, or when a TRIGGER CLOB contains the trigger definition plus an extra enable-trigger statement (which DBMS_METADATA_DIFF creates for you).

That is why I did not use those DBMS_METADATA_DIFF COMPARE functions but some more basic subprograms:

Subprogram    Description
OPENC         Input: the type of objects to be compared. Output: a handle.
ADD_DOCUMENT  Specifies an SXML document to be compared. SXML documents are created by DBMS_METADATA.
FETCH_CLOB    Returns a CLOB showing the differences between the two documents specified by ADD_DOCUMENT.
CLOSE         Invalidates the handle returned by OPENC and cleans up the associated state.

As you can see in this table, the actual comparison is done on SXML documents, which are generated beforehand by DBMS_METADATA.
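Putting the four subprograms together, comparing one object boils down to something like the sketch below. I use DBMS_METADATA.GET_SXML here as a shorthand for generating the SXML documents; the browse-style OPEN / ADD_TRANSFORM('SXML') / FETCH_CLOB API of DBMS_METADATA can be used as well. All object and schema names are invented:

```sql
declare
  l_handle number;
  l_diff   clob;
begin
  -- Open a comparison for one object type and get a handle.
  l_handle := dbms_metadata_diff.openc('TABLE');

  -- Feed it the SXML representation of the table in both schemas.
  dbms_metadata_diff.add_document
  ( l_handle, dbms_metadata.get_sxml('TABLE', 'MY_TABLE', 'SCHEMA_SOURCE') );
  dbms_metadata_diff.add_document
  ( l_handle, dbms_metadata.get_sxml('TABLE', 'MY_TABLE', 'SCHEMA_TARGET') );

  -- The resulting CLOB contains zero or more ALTER statements.
  l_diff := dbms_metadata_diff.fetch_clob(l_handle);
  dbms_metadata_diff.close(l_handle);

  dbms_output.put_line(l_diff);
end;
/
```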

The main program is a package that forms a layer on top of these DBMS_METADATA packages.

I used one of my favourite PL/SQL constructions, a pipelined function, to supply arguments like object type, object name, source schema, etcetera to queries showing differences.

Here is an excerpt of the package specification:

subtype t_metadata_object_type is varchar2(30); /* longer than all_objects.object_type%type */

subtype t_object_name is varchar2(4000); /* normally 30 is enough but some synonym names can be very long (see SYS.KU$_SYNONYM_VIEW), just like XML schema names */

-- Some objects have lines longer than 4000 characters like
-- COMP_STK.STK_V_SPT_SOORT_BEDRAGEN.  Since querying a CLOB through a database
-- link poses problems, you have to split the lines in pieces of 4000 characters
-- (the limit for a SQL varchar2 type).  In total you get 
-- 32767 (the maximal length of dbms_sql.varchar2a) which should be enough.
type t_ddl_line_rec is record(
   object_schema all_objects.owner%type
  ,object_type   t_metadata_object_type
  ,object_name   t_object_name 
  ,grantee       all_users.username%type default null
  ,ddl#          integer /* DDL statement number (indexes and comments for
                            a table may generate several statements) */
  ,line#         integer /* DDL line number starting from 1 within
                            (object_schema,object_type,object_name,grantee,ddl#) */
  ,text1         varchar2(4000) default null
  ,text2         varchar2(4000) default null
  ,text3         varchar2(4000) default null
  ,text4         varchar2(4000) default null
  ,text5         varchar2(4000) default null
  ,text6         varchar2(4000) default null
  ,text7         varchar2(4000) default null
  ,text8         varchar2(4000) default null
  ,text9         varchar2(767) default null);

type t_ddl_line_tab is table of t_ddl_line_rec;

function display_ddl_schema_diff
( p_object_type in varchar2 default null
 ,p_object_names in varchar2 default null
 ,p_object_names_include in natural default null
 ,p_schema_source in varchar2 default user
 ,p_schema_target in varchar2 default user
 ,p_network_link_source in varchar2 default null
 ,p_network_link_target in varchar2 default null
) return t_ddl_line_tab pipelined;

The query with the pipelined function is executed in SQL*Plus.

Here is an excerpt of the SQL*Plus script:

var c refcursor

begin
  open :c for
    select  t.*
    from    table
            ( comp_stm.stm_ddl_util.display_ddl_schema_diff
              ( p_schema_source => '&1'
              , p_schema_target => '&3'
              , p_network_link_source => '&2'
              , p_network_link_target => '&4'
              , p_object_type => '&5'
              , p_object_names => '&6'
              , p_object_names_include => to_number('&7')
              )
            ) t;
end;
/

print c

Technical issues
During development I encountered several technical issues. I will show them to you…

Migration order
When the migration scripts are run, the various DDL statements must execute without errors. For instance, you have to create a table first before you can create an index on it. The DBMS_METADATA_DIFF package is object based, not set based, so you have to figure out the order yourself. This was solved using the Oracle dictionary.
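A simple way to impose such an order is to sort the generated statements by object type before spooling them. A sketch (the collection name and the exact ranking are just an illustration, not the tool's actual ordering):

```sql
select  t.*
from    my_ddl_statements t  -- hypothetical collection of generated statements
order by
        case t.object_type
          when 'SEQUENCE'          then 1
          when 'TABLE'             then 2
          when 'INDEX'             then 3
          when 'VIEW'              then 4
          when 'PACKAGE'           then 5
          when 'PACKAGE_BODY'      then 6
          when 'TRIGGER'           then 7
          else                          8
        end
,       t.object_name
,       t.ddl#;
```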

The base package is created like:

create or replace package comp_stm.stm_ddl_util authid current_user is

This fulfils the requirement mentioned earlier: the privileges of the account which runs the tool must be sufficient to look up any schema object. You only need to grant the SELECT_CATALOG_ROLE role to that account.

CLOBs and remote databases
A query using a database link cannot select CLOB columns, so

select clob_column from my_table@db

does not work (it fails with ORA-22992).

To get around this, I used 8 fields of 4000 characters each and 1 field of 767 characters. This adds up to 32767 characters, which should be sufficient for one line of code.

Pipelined functions and remote databases
The following query syntax is not accepted:

select * from table(comp_stm.stm_ddl_util.display_ddl_schema_diff(...)@db)

The work-around for this was to supply the arguments first to a helper procedure set_display_ddl_schema_diff() and then to use a different pipelined helper function get_display_ddl_schema_diff() in a view:

create or replace view comp_stm.stm_v_display_ddl_schema as
select  t.object_schema
,       t.object_type
,       t.object_name
,       t.grantee
,       t.ddl#
,       t.line#
,       t.text1
,       t.text2
,       t.text3
,       t.text4
,       t.text5
,       t.text6
,       t.text7
,       t.text8
,       t.text9
from    table(comp_stm.stm_ddl_util.get_display_ddl_schema) t

Now you can rewrite the query like this (after supplying the arguments first):

select * from comp_stm.stm_v_display_ddl_schema@db
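So a remote comparison becomes a two-step affair: first call the helper procedure over the database link to store the arguments in package state, then query the view over the same link. A sketch (the parameter names of the helper procedure are assumptions):

```sql
begin
  -- Step 1: store the arguments in package state on the remote database.
  comp_stm.stm_ddl_util.set_display_ddl_schema_diff@db
  ( p_object_type   => 'TABLE'
  , p_schema_source => 'SCHEMA_SOURCE'
  , p_schema_target => 'SCHEMA_TARGET'
  );
end;
/

-- Step 2: the view calls the pipelined helper function on the remote
-- database, which reads the stored arguments; no CLOBs cross the link.
select * from comp_stm.stm_v_display_ddl_schema@db;
```

This works because remote package state persists for the duration of the session, so the arguments set in step 1 are still there when the view is queried in step 2.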

Global database links
The URL describes very well how this works.

In short it tells us that when a database link name is not known as a private or public database link but is known via LDAP, a connection to the remote database is made using the same account and password. So when you use, for instance, OPS$ accounts (which have no password), you are done: there is no need to create database links on all the databases used for creating the migration scripts.

DBMS_METADATA_DIFF generates a DDL statement for a SYNONYM even though it has not changed. I solved this by creating the DDL statements via DBMS_METADATA and comparing them later on.

Another object type gave a run-time error. This was also solved by creating the DDL statements via DBMS_METADATA and comparing them later on.

DBMS_METADATA_DIFF does not generate a CREATE OR REPLACE VIEW as you would expect when a view has changed. Maybe it works in Oracle 12c. I solved this, too, by creating the DDL statements via DBMS_METADATA and comparing them later on.

A COMMENT is, for DBMS_METADATA.SET_FILTER, not a base object but a dependent object of a TABLE or VIEW. In order to determine all COMMENTs of a schema, you first have to know all TABLEs and VIEWs of the schema and then determine the COMMENTs of those TABLEs and VIEWs.
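In DBMS_METADATA terms that looks like the sketch below for the comments of one table; BASE_OBJECT_SCHEMA and BASE_OBJECT_NAME are the filters for dependent objects (schema and table names are invented):

```sql
declare
  l_handle    number;
  l_transform number;
  l_ddl       clob;
begin
  l_handle := dbms_metadata.open('COMMENT');
  -- A COMMENT is a dependent object: filter on its base TABLE or VIEW.
  dbms_metadata.set_filter(l_handle, 'BASE_OBJECT_SCHEMA', 'SCHEMA_SOURCE');
  dbms_metadata.set_filter(l_handle, 'BASE_OBJECT_NAME', 'MY_TABLE');
  -- Ask for DDL output (COMMENT ON ... statements).
  l_transform := dbms_metadata.add_transform(l_handle, 'DDL');
  loop
    l_ddl := dbms_metadata.fetch_clob(l_handle);
    exit when l_ddl is null;
    dbms_output.put_line(l_ddl);
  end loop;
  dbms_metadata.close(l_handle);
end;
/
```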

Public synonyms
Only public synonyms that point to schema objects which are already part of the comparison object set must be compared as well. For DBMS_METADATA.SET_FILTER you have to specify PUBLIC as the schema, and the base schema must be the object schema.

DBMS_METADATA_DIFF sometimes returns comments
An example for a SEQUENCE:

-- ORA-39305: Cannot alter attribute of sequence: START_WITH

Those lines are ignored. In Oracle 12c this has changed, because there the START WITH clause of a sequence can be modified.

Creation of UTF8 files
UTF8 is the standard encoding of the tool Flyway and UTF8 is very portable. That is why I have decided to use it as the standard for creating migration scripts.

In order to transform the database character set to a UTF8 client character set, you must set the Oracle client environment variable NLS_LANG to .UTF8 (only the character set part matters here). The Oracle client (SQL*Plus in this case) will then retrieve the data in UTF8. After retrieval, the files must also be created with UTF8 encoding. Using the scripting language Perl this was easy: Perl allows us to set the environment variable NLS_LANG, call SQL*Plus and create UTF8 files.

Whitespace differences when promoting
Sometimes database object code (packages for instance) without tabs was converted into code with tabs, so when the migration script was run there was still a difference. This has to do with the SET TAB ON setting in SQL*Plus; turning it OFF solved the problem. In general it is a good idea not to use tabs in database code anyway.

Another issue was that trailing whitespace was not preserved during promotion. The problem is that in SQL*Plus you can print a line (a query column) of size N either completely padded or trimmed (SET TRIMSPOOL ON), but neither way gives you the exact line. The solution was to calculate the line size in the SQL*Plus script and supply it to the Perl script as well. The Perl script could then create a line of exactly that length.
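The relevant SQL*Plus settings for spooling clean migration scripts, as discussed above (a sketch; LINESIZE and PAGESIZE values are typical choices, not prescriptions):

```sql
set tab off        -- never convert runs of spaces into tab characters
set trimspool on   -- trim the padding SQL*Plus adds up to LINESIZE
set linesize 32767 -- wide enough for one spooled line
set pagesize 0     -- no headers or page breaks in the spool
```

Because TRIMSPOOL ON also removes genuine trailing whitespace, the calculated line length still has to be passed along separately so the Perl script can restore the exact line.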

In the next blog post I will show you how to make deployments more robust, which covers things like 24×7 deployments and rollback of deployments.

