In my first article, I wrote about controlling the deployment sequence through artifact type extensions and modifications.
That way, the deployment process itself was still left entirely to the capabilities of the distributed plug-ins.
But what happens if a legacy deployment process is robust, well tested and trusted by the organization, while the artifact does not align with the requirements of the plug-in distributed for that purpose?
1 The distributed plug-in
A good example is an archive of database scripts with full freedom in execution order, script naming and use of subfolders.
The standard database plug-in has well-defined SQL script and rollback script recognition patterns, all of which can be redefined.
Yet there is still a tightly coupled relation between the scripts and their subordinates, the name matching of subfolders and so on.
The standard plugin is described at https://docs.xebialabs.com/xl-deploy/5.1.x/databasePluginManual.html
2 The Case
Let’s look at an artifact from the legacy system with a fairly simple and flat structure (fictive sample). The rollback scenario is out of scope (an internal policy enforces the use of database restore points only):
dbconfig
|– init
| |– insert_refdata.sql
| |– insert_userdata.sql
|– create_indexes.sql
|– create_sequences.sql
|– create_tables.sql
|– grants
| |– privileges.sql
| |– roles.sql
|– logging
| |– loginit.sql
| |– log_functions.sql
|
|– patch.sql
|– setup.sql
|– update_packages.sql
The scripts themselves regulate all further sequencing and internal logic.
Two scripts will drive the execution:
1. setup.sql: executes the scripts in the init folder
2. patch.sql: executes the create scripts in the root folder and the grant scripts in the grants folder
Both scripts apply different functionality from the scripts in the logging folder. Scripts named like this are completely ignored by the standard database plug-in.
In the legacy system, a shell script residing on the target host, together with some associated properties and environment settings, wraps the whole execution with restore point creation, mailing and security management.
3 Going for the plug-in
One scenario is to break up the existing structure and send the process back to the design and development table to make it fit the plug-in requirements. Something like this:
dbconfig
|– 00-setup
| |– insert_refdata.sql
| |– insert_userdata.sql
|– 00-setup.sql
|– 01-patch
| |– create_indexes.sql
| |– create_sequences.sql
| |– create_tables.sql
| |– privileges.sql
| |– roles.sql
| |– update_packages.sql
|– 01-patch.sql
|– common
| |– loginit.sql
| |– log_functions.sql
|
The two main SQL scripts must be modified to reference their matching subfolders for the sub-scripts and the common folder for the logging scripts. This then has to be re-tested all along the pipeline.
The wrapping shell script can be adapted to the FreeMarker context and invoked through the rule system within an os-script step: https://docs.xebialabs.com/xl-deploy/5.1.x/referencesteps.html
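As a rough sketch (not part of the original legacy setup), such a rule could look like the one below. The os-script step and its parameter elements follow the Steps Reference; the rule name, the order value, the template path legacy/wrapper (which would ship as e.g. legacy/wrapper.sh.ftl on the server classpath) and the assumption that the deployed type is sql.ExecutedSqlScripts from the database plug-in are all illustrative:

<rule name="legacy.WrapDbScripts" scope="deployed">
  <conditions>
    <!-- assuming the deployed type of the standard database plug-in -->
    <type>sql.ExecutedSqlScripts</type>
    <operation>CREATE</operation>
    <operation>MODIFY</operation>
  </conditions>
  <steps>
    <os-script>
      <order>60</order>
      <description expression="true">"Run legacy wrapper for %s" % deployed.name</description>
      <!-- classpath resource rendered as a FreeMarker template, e.g. legacy/wrapper.sh.ftl -->
      <script>legacy/wrapper</script>
      <target-host expression="true">deployed.container.host</target-host>
      <freemarker-context>
        <deployed expression="true">deployed</deployed>
      </freemarker-context>
    </os-script>
  </steps>
</rule>

The FreeMarker template receives the deployed object and can read its file location, container and properties much the same way the original shell script read its joined properties and environment settings.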
4 Virtual plug-in
In the other scenario, we treat the incoming artifact simply as a generic folder, which we target at a plug-in container of a subtype of sql.SqlClient. The practical reason is that the database-related properties (for Oracle: the schema login and the SID) are already available there, so the artifact does not need to carry (or placeholder) them as overhead, and the mapping mechanism will detect the container automatically.
The file plug-in CI type file.Folder will not work, because its container type (overthere.Host) is incompatible with generic.Container, the superclass of sql.SqlClient.
In general, if the requested container of an extension type is on a different hierarchy branch than the extended type's container, the server rejects the whole synthetic.xml on startup and fails.
This is only visible in the log file, so when started through the service wrapper, the service itself will run but the server will be unreachable. A developer would probably tail the log after the change, or start the server from the command line rather than as a service, but it is good to know about this behavior.
sql.SqlScripts itself is a generic.Folder subclass. However, we must not "thin out" this type in synthetic.xml with more relaxed script recognition patterns: doing so would break the original plug-in. We need a new type.
The superclass generic.Folder might seem a better choice. Yet it still has all the properties that bother us (the mandatory script and rollback recognition patterns) while lacking the ones that would be useful (database user and password).
Instead, let's define a new classifier (the "virtual" plug-in), called legacy as a sample. With the type definition we extend one level higher, into the udm package, namely udm.BaseDeployableFolderArtifact.
That type is free of recognition properties but is still handled as a directory object and gets unpacked on the target.
1. The deployable artifact is derived from the deployed one, so all properties are propagated back to it
2. We add the credential fields to cover the connection part
3. The executorScript is an OS script template. Its name is taken over from the original plug-in.
It is responsible for wrapping the execution of the SQL scripts (e.g. via sqlplus) the same way the plug-in does, with the possibility of adding the surrounding functionality. Its usage may be determined or even overruled by the rule definitions.
4. The scriptsToRun property is important for sequencing the execution. We have dropped the alphabetical ordering that comes with the plug-in, which forces us to define our own order (see the packaging sketch after the type definition below).
In synthetic.xml it looks like this:
<type type="legacy.ExecutedDbScripts" extends="udm.BaseDeployedArtifact" deployable-type="legacy.DbScripts" container-type="sql.OracleClient">
  <generate-deployable type="legacy.DbScripts" extends="udm.BaseDeployableFolderArtifact"/>
  <property name="username" required="false" description="The username to connect to the database" />
  <property name="password" required="false" password="true" description="The password to connect to the database" />
  <property name="executorScript" required="false" hidden="true" default="resource-config/installdb" />
  <property name="scriptsToRun" kind="list_of_string" required="false" default="patch.sql" description="The list of SQL scripts to execute directly, in sequence"/>
</type>
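To illustrate point 4, the deployment package can then carry the desired order in the scriptsToRun property. A minimal, purely illustrative deployit-manifest.xml fragment (the application name, folder name and chosen script order are sample values) could look like this:

<udm.DeploymentPackage version="1.0" application="LegacyDbApp">
  <deployables>
    <!-- the folder from the case above, packaged as the new deployable type -->
    <legacy.DbScripts name="dbconfig" file="dbconfig">
      <scriptsToRun>
        <value>setup.sql</value>
        <value>patch.sql</value>
      </scriptsToRun>
    </legacy.DbScripts>
  </deployables>
</udm.DeploymentPackage>

Whatever is not listed in scriptsToRun is still unpacked with the folder, so the helper scripts in init, grants and logging remain available to the two driver scripts.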
We also need at least one rule in xl-rules.xml. Without a rule, the deployment plan for this type of artifact would be empty.
See the logic in:
Steps Reference: https://docs.xebialabs.com/xl-deploy/5.1.x/referencesteps.html
Rules Tutorial: https://docs.xebialabs.com/xl-deploy/how-to/xl-deploy-rules-tutorial.html
<rule name="legacy.ExecuteDbScripts" scope="deployed">
  <conditions>
    <type>legacy.ExecutedDbScripts</type>
    <operation>CREATE</operation>
    <operation>MODIFY</operation>
  </conditions>
  <planning-script-path>resource-config/dbscripts.py</planning-script-path>
</rule>
The rule uses an intermediate Jython script, which adds one step invoking the executorScript for each SQL script in the scriptsToRun list. This extra layer is necessary because the number of scripts is dynamic.
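A minimal sketch of what resource-config/dbscripts.py could look like is shown below. It assumes the standard objects injected into planning scripts (context, deployed, steps), the underscore naming of the os-script step factory and its parameters, and that the container exposes its host as deployed.container.host; the order value is only illustrative:

# Add one wrapper step per configured script, preserving the given sequence.
order = 60  # illustrative; pick a phase that fits the rest of the plan

for script in deployed.scriptsToRun:
    step = steps.os_script(
        description="Execute %s on %s" % (script, deployed.container.name),
        order=order,
        # executorScript points to the classpath template, e.g. resource-config/installdb(.sh.ftl)
        script=deployed.executorScript,
        target_host=deployed.container.host,
        # values made available to the FreeMarker template
        freemarker_context={"deployed": deployed, "script": script},
    )
    context.addStep(step)
    order += 1  # keep the configured sequence

With this in place, deploying the unmodified legacy folder produces one step per listed script, each executed through the adapted wrapper, while the robust legacy process itself stays untouched.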