For several years now, I’ve been a fan of backups controlled by Enterprise Manager (or Grid Control, or Cloud Control, call it what you want). It is relatively easy to configure backups for your databases and it is well known in our DBA world. Only one thing bothered me, until I found out the solution had been at hand for years and years already…
Parallel versus serial backup.
The brochure looks nice: create an RMAN job and assign multiple databases as targets. Done.
Now what if 12 databases are targeted, all running on one host? They all start backing up at the same time, using a lot of CPU power and threatening other processes on the same server, such as nightly batches. This is often solved by creating an RMAN job per (subset of) database(s) and spreading the load by assigning a different start time to each backup, e.g. the DB1 backup starts at 10PM, the DB2 backup starts at 11:30PM, and so on.
What we’d like to see is some form of serialization, where the finish of one backup triggers the start of the next one. And now the good news: this is possible.
Multi-task
In Enterprise Manager you can create a job of type multi-task. Within this job, several tasks can be executed serially. Let each of these tasks be a database backup, and let all tasks together form the multi-task job that handles the backup of all databases on one host.
In the examples I will use a library job, for easier maintenance.
Create the job
Click on Enterprise->Job->Library
Choose Multi-Task and click Go
For ease of management I like to use a name representing the host name and optionally provide a description.
As Target, choose Different targets for different tasks and set Maximum Parallel Execution to 1 (this option only exists in Cloud Control 13c) so only one task runs at a time.
Click on tab Tasks
Choose RMAN Script.
Click Add
Again, use a name that clearly describes what the task will do.
Choose the first database to be backed up as Target (you can choose several databases, but they will then be backed up in parallel).
Click on tab Parameters.
Fill out the complete RMAN script to back up this specific database. Or use an OS script. Or use an RMAN stored script from the catalog.
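For illustration, here is a minimal sketch of the kind of RMAN script you could paste in as the task’s parameter. It is not taken from the original screenshots; it assumes a plain level 0 backup to the default (disk) destination, so adapt it to your own environment and retention policy.

# assumed example: level 0 backup of the database plus archived logs, then housekeeping
run {
  backup incremental level 0 database plus archivelog delete input;
  delete noprompt obsolete;
}

If you would rather use a stored script from the recovery catalog, create it once while connected to the catalog (CREATE SCRIPT … { … }) and have the task execute that script inside a RUN block instead.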
Click on Credentials.
The familiar page. Fill it out and click Continue (not in picture above 🙁 )
Now we’ve got one database backup in place. For the next one, choose RMAN Script again and click Add.
Mind the Condition: here you can choose when this task will run, depending on the status of the previous task: Always, On Success or On Failure. This allows you to build quite sophisticated job flows. For our backup purposes the best choice seems to be Always.
So, add the next database as a Target and go through the same steps for Parameters and Credentials.
Do this for every database on the host that needs to be backed up.
Eventually we’ll end up with something like this:
In the upper right corner click on Save to Library.
Activating the job
Make sure that you have selected the right job in the screen above and click Submit.
Choose Maximum Parallel Executions=1 and click on tab Schedule.
There is no screenshot of this step; simply fill out the schedule and click Submit.
Changing an active job
Active jobs cannot be changed. They need to be deleted and a new job needs to be created. That is the reason I like the Job Library: you can edit a library job and then simply submit it.
So: Click on Enterprise->Job->Activity
Press the Big button shown below:
Find your job and select it by clicking next to it. Do not click on the name; that is not what we want now.
Under the Actions drop down menu choose Stop.
This is exactly what we want so click Stop Job.
Now we can go to the Job Library and edit the job (for instance add a task because a new database was provisioned), save the job and submit it.
And there is more… When the tasks inside the job are independent (we set the condition to “Always”), the entire job will have the status “Succeeded” even if half of the tasks inside fail. It looks like the OEM developers did everything they could to make the grouping and batching capabilities useless.
Such stupid behaviour for OEM. It already has a “Maximum Parallel Executions” job parameter, but it’s useless. Now we try to work around this using multi-task jobs. But what if I need to back up 1500 databases using one NFS backup server? Then I would need to create a multi-task job with 1500 tasks…