Pipelining 10g Database Export and Import


A very interesting feature of Oracle 10g Data Pump that I recently became aware of is the option to link an Export job in one database to an Import job in another (or presumably the same) database through a pipe. This allows direct data transfer without the creation of an intermediate dmp file. It also allows the Import to start processing as soon as the Export has started, without having to wait for the Export to complete. I just stumbled across an Oracle Magazine article that discusses this feature in much more detail: Tuning: Speed and Simplify Data Movement.
Export and Import Over the Network


Typically, Oracle Data Pump Export results in a binary format dumpfile that is then loaded (using Oracle Data Pump Import) into another database. However, in addition to this file-based Oracle Data Pump Export and Import, Oracle Data Pump provides a network-based approach for direct loading to or unloading from one database to another.

Most DBAs are probably familiar with using the legacy Oracle Export and Import utilities over a network, using named pipes as the mechanism through which to pass the data. One of the problems with that approach can be that named pipes aren’t supported on every platform.

Instead of using named pipes, Oracle Data Pump uses database links for exporting and importing data between source and target over the network. And, as with the named pipe mechanism, the network_link feature entirely bypasses dumpfile creation, transmitting data from one database to another across the network.
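As a sketch of how this works (the link name, credentials, TNS alias, and directory object are all illustrative, not from the article): on the target database you create a database link pointing back to the source, then run impdp with the NETWORK_LINK parameter. No DUMPFILE parameter is given, because the data never lands on disk as a dump file; the directory object is only needed for the log file.

```shell
# On the target database: create a database link to the source
# (names and credentials here are hypothetical).
sqlplus scott/tiger@target <<'EOF'
CREATE DATABASE LINK source_db
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'SOURCE_TNS_ALIAS';
EOF

# Import directly over the link -- note there is no DUMPFILE parameter.
impdp scott/tiger@target \
  NETWORK_LINK=source_db \
  SCHEMAS=scott \
  DIRECTORY=dp_dir \
  LOGFILE=net_import.log
```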

This feature should be of keen interest to any DBAs spending lots of time doing production-to-test refreshes for development or extracting data from a read-only data warehouse. (For a read-only database, you must use Oracle Data Pump Export’s network_link mechanism, initiating the process on the target database, because Oracle Data Pump won’t be able to create the necessary master table on the read-only database.)
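For the read-only case described above, a minimal sketch would be to run expdp on a read/write database and pull from the read-only source over the link, so the master table is created locally rather than on the source (all names below are illustrative):

```shell
# expdp runs against a read/write database; the read-only source is
# only reached through the database link (readonly_src is hypothetical).
expdp scott/tiger@target \
  NETWORK_LINK=readonly_src \
  SCHEMAS=scott \
  DUMPFILE=ro_export.dmp \
  DIRECTORY=dp_dir \
  LOGFILE=ro_export.log
```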

Because network bandwidth is the limiting factor, the network_link capability is best used in conjunction with the filtering capability, as a means of siphoning off a subset of data (rather than an entire multigigabyte or -terabyte database) from one database to another.
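Combining network_link with filtering might look like the following sketch, which pulls only recent rows of a single table rather than a whole schema. A parameter file sidesteps shell-quoting issues with the QUERY clause; the table name and predicate are purely illustrative.

```shell
# Parameter file keeps the QUERY clause free of shell escaping
# (orders table and 30-day predicate are hypothetical).
cat > orders_subset.par <<'EOF'
NETWORK_LINK=source_db
TABLES=orders
QUERY=orders:"WHERE order_date > SYSDATE - 30"
LOGFILE=orders_subset.log
EOF

impdp scott/tiger@target PARFILE=orders_subset.par
```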

About Author

Lucas Jellema, active in IT (and with Oracle) since 1994. Oracle ACE Director and Oracle Developer Champion. Solution architect and developer in diverse areas including SQL, JavaScript, Docker, Machine Learning, Java, SOA and microservices, events in various shapes and forms, and many other things. Author of the Oracle Press books Oracle SOA Suite 11g Handbook and Oracle SOA Suite 12c Handbook. Frequent presenter at community events and conferences such as JavaOne, Oracle Code, and Oracle OpenWorld.



  2. A lot of times I need to quickly transfer data from my VLDB Oracle tables into flat CSV files for various reasons. Even in 10g there is no efficient way to do this. We ended up using fastreader to do this job: http://www.wisdomforce.com

  3. In pre-Oracle 10g releases, we could also use a named pipe to export/import data:
    mknod web.dat p
    exp userid=user/pass@host1 buffer=21971520 file=web.dat …. &
    imp userid=user/pass@host1 buffer=21971520 file=web.dat ….