
Processing large XML files in the SOA Suite

Read large XML files in chunks



At my current project, end users upload XML files to be processed in the Oracle SOA Suite. The XML files contain information about employers and their employees. Because an employer can have hundreds or even thousands of employees, these XML files can be quite large.
Processing such large XML files consumes a lot of memory and can become a bottleneck, especially when multiple end users upload large XML files at the same time. It can even cause a server to crash because of an OutOfMemory problem.
The best way to solve this is to read and process these large XML files in chunks, so XML fragments are read and processed instead of the complete XML file.
My colleague, Aldo Schaap, already did and described this for CSV files in his blog “Processing large files through SOA Suite using Synchronous File Read“. I gratefully used his blog to do the same for XML processing. However, a few things are slightly different when reading XML instead of CSV, which is the reason for this blog.
Another reason is that I ran into an additional problem, which I will describe later on in this blog. To be able to solve this problem I had to ‘pre-transform’ the XML file. This means the XML file needs to be transformed before it is read by the SOA Suite. To achieve this I used the pre-processing features of the file adapter with a custom (Java) valve. This pre- and post-processing is described in the blog “SOA Suite File Adapter Pre and Post processing using Valves and Pipelines” by colleague Lucas Jellema.
The combination of these two blogs provided the solution for my problem.

Problem Description

Back to my problem. The large XML files, which have to be parsed, contain one ‘Message’ element as root. This root element contains one or more employers with some basic employer information, and each employer can contain multiple employee elements, up to thousands, with employee information and employment information. In the real use case the XML structure contains Dutch element names and is very specific to the business problem. For the purpose of this blog, I’ve reduced the problem to a basic XML structure with English names and used some basic sample data. XSD source:

<schema attributeFormDefault="unqualified" elementFormDefault="qualified"
        xmlns="http://www.w3.org/2001/XMLSchema"
        xmlns:tns="http://www.example.org/message"
        targetNamespace="http://www.example.org/message">
  <element name="Message">
    <complexType>
      <sequence>
        <element name="MsgProperties" type="tns:tMsgProperties" minOccurs="0"/>
        <element name="Employer" type="tns:tEmployer" minOccurs="0" maxOccurs="unbounded"/>
      </sequence>
      <attribute name="test" type="boolean"/>
    </complexType>
  </element>
  <complexType name="tMsgProperties">
    <sequence>
      <element name="MsgId" type="string"/>
      <element name="SenderId" type="string" minOccurs="0"/>
    </sequence>
  </complexType>
  <complexType name="tEmployer">
    <sequence>
      <element name="Name" type="string"/>
      <element name="EmployerNr" type="int"/>
      <element name="Address">
        <complexType>
          <sequence>
            <element name="Street" type="string"/>
            <element name="PostalCode" type="string"/>
            <element name="City" type="string"/>
            <element name="CountryCode" type="string"/>
          </sequence>
        </complexType>
      </element>
      <element name="Employee" type="tns:tEmployee" minOccurs="0" maxOccurs="unbounded"/>
    </sequence>
  </complexType>
  <complexType name="tEmployee">
    <sequence>
      <element name="EmployeeNr" type="string"/>
      <element name="DOB" type="date"/>
      <element name="FamilyName" type="string"/>
      <element name="Initials" type="string"/>
      <element name="Gender" type="string"/>
      <element name="Nat" type="string"/>
      <element name="EmploymentDate" type="date"/>
    </sequence>
  </complexType>
</schema>

Structure (XSD):
My test data as ‘large XML file’:

<?xml version="1.0" encoding="UTF-8"?>
<Message xmlns="" test="true">
  <MsgProperties>
    <!-- ... -->
  </MsgProperties>
  <Employer>
    <Name>AMIS Services BV</Name>
    <Address>
      <Street>Edisonbaan 15</Street>
      <PostalCode>3439 MN</PostalCode>
      <!-- ... -->
    </Address>
    <Employee>
      <FamilyName>van der Kleij</FamilyName>
      <!-- ... -->
    </Employee>
    <!-- further Employee elements ... -->
  </Employer>
  <Employer>
    <Name>Oracle NV</Name>
    <Address>
      <Street>250 Oracle Pkwy</Street>
      <PostalCode>CA 94065</PostalCode>
      <City>Redwood City</City>
      <!-- ... -->
    </Address>
    <!-- Employee elements ... -->
  </Employer>
</Message>

Chunk reading with BPEL and the JCA File Adapter

Because the XML file needs to be processed chunk by chunk, a BPEL process with a loop is used. Each iteration reads the next chunk from the file and processes this XML snippet. This continues until the end of the XML file is reached.

First an XML file reader has to be configured for synchronous read as External Reference. So drag a File Adapter from the Component Palette to the External References swim lane and configure it the same way as described in the blog by Aldo:

  • Give it an appropriate name, e.g. ‘SynchReadXML’.
  • Choose ‘Define from operation and schema (specified later)’.
  • Choose ‘Synchronous Read File’ and enter an appropriate operation name, e.g. ‘SynchReadXML’.
  • Choose a ‘Logical Name’ for the directory and enter an appropriate name, e.g. ‘CHUNKED_FILES_DIR’.
  • Enter a dummy file name; we will overwrite this in the read call in the BPEL process.
  • Select the XSD file and the root element.
  • Finally finish the wizard.

Now the ‘magic’ starts. Open the newly created jca file, in our case SynchReadXML_file.jca, and

  1. Change the implementation class from “oracle.tip.adapter.file.outbound.FileReadInteractionSpec” to “oracle.tip.adapter.file.outbound.ChunkedInteractionSpec”.
  2. Add the property ChunkSize with value 1 for now (later you’ll see what it stands for).

The file should now look like this:
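For reference, a sketch of what the resulting SynchReadXML_file.jca roughly contains; the portType/operation names and the exact namespace depend on your wizard input, so treat this as an illustration, not an exact copy:

```xml
<adapter-config name="SynchReadXML" adapter="File Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/FileAdapter"/>
  <endpoint-interaction portType="SynchReadXML_ptt" operation="SynchReadXML">
    <!-- implementation class changed to the chunked variant -->
    <interaction-spec className="oracle.tip.adapter.file.outbound.ChunkedInteractionSpec">
      <property name="LogicalDirectory" value="CHUNKED_FILES_DIR"/>
      <property name="FileName" value="dummy.xml"/>
      <!-- number of root-child elements returned per read -->
      <property name="ChunkSize" value="1"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>
```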
For this test project I use the name of the XML file to be read as input string, and just an “OK” string as output. For flexibility it is always handy to have a Mediator as entry point of the composite.
So drag a Mediator into the composite, give it an appropriate name, apply the “Synchronous Interface” template and check the “Create Composite Service with SOAP Bindings” checkbox, with the singleString as input and output.
Now we need the BPEL process in which we’re going to read the XML bit by bit, not literally of course :-), but chunk by chunk, or XML fragment by XML fragment. Drag a BPEL process into the composite, select the BPEL 2.0 Specification, enter an appropriate name, choose the template “Synchronous BPEL Process”, uncheck the “Expose as a SOAP service” checkbox and select the singleString as input and output:
Wire them together with the connection points.
Now open the BPEL process and add the following variables (name, type, initialization):

  • isEOF, boolean, initialize with false
  • filename, string, no initialization needed
  • lineNumber, string, initialize with empty string! (is different from reading csv)
  • columnNumber, string, initialize with empty string! (is different from reading csv)
  • isMessageRejected, string, no initialization needed
  • rejectionReason, string, no initialization needed
  • noDataFound, string, no initialization needed

In source code:
The only difference from reading a CSV file, as described in Aldo’s blog, is that the lineNumber and columnNumber variables must be initialized with an empty string; otherwise it’s not going to work!
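In BPEL 2.0 source, the variable declarations above might look like the following sketch (the exact message/element types and prefixes depend on your process):

```xml
<variables>
  <variable name="isEOF" type="xsd:boolean">
    <from>false()</from>
  </variable>
  <variable name="filename" type="xsd:string"/>
  <!-- initialize with an empty string - different from reading csv! -->
  <variable name="lineNumber" type="xsd:string">
    <from><literal></literal></from>
  </variable>
  <variable name="columnNumber" type="xsd:string">
    <from><literal></literal></from>
  </variable>
  <variable name="isMessageRejected" type="xsd:string"/>
  <variable name="rejectionReason" type="xsd:string"/>
  <variable name="noDataFound" type="xsd:string"/>
</variables>
```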

Drag an Assign activity into the BPEL process (between the receive and reply), name it ‘AssignFilename’ and assign the input variable to the filename variable.
Drag a While loop into the BPEL process (between AssignFilename and the reply) and loop while not EOF.
Drag an Invoke activity inside the While activity, give it an appropriate name, invoke the file adapter and create both the input and output variables (with the green + icon).
Open the Properties tab of the Invoke and add ‘To property’ jca.file.FileName with the filename variable as value.
Now go to source mode and add missing To and From properties (they are not present in the wizard).
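The resulting invoke might then look roughly like this sketch; the variable names match the list above, the partner link and operation names are those chosen in the wizard, and the exact Oracle extension syntax may differ per BPEL version:

```xml
<invoke name="InvokeSynchReadXML" partnerLink="SynchReadXML" operation="SynchReadXML"
        inputVariable="InvokeSynchReadXML_SynchReadXML_InputVariable"
        outputVariable="InvokeSynchReadXML_SynchReadXML_OutputVariable">
  <bpelx:toProperties>
    <!-- overwrite the dummy file name and pass the read position -->
    <bpelx:toProperty name="jca.file.FileName" variable="filename"/>
    <bpelx:toProperty name="jca.file.LineNumber" variable="lineNumber"/>
    <bpelx:toProperty name="jca.file.ColumnNumber" variable="columnNumber"/>
  </bpelx:toProperties>
  <bpelx:fromProperties>
    <!-- position of the next read and the end-of-file indicators -->
    <bpelx:fromProperty name="jca.file.LineNumber" variable="lineNumber"/>
    <bpelx:fromProperty name="jca.file.ColumnNumber" variable="columnNumber"/>
    <bpelx:fromProperty name="jca.file.IsEOF" variable="isEOF"/>
    <bpelx:fromProperty name="jca.file.IsMessageRejected" variable="isMessageRejected"/>
    <bpelx:fromProperty name="jca.file.RejectionReason" variable="rejectionReason"/>
    <bpelx:fromProperty name="jca.file.NoDataFound" variable="noDataFound"/>
  </bpelx:fromProperties>
</invoke>
```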
Drag an If activity just below the Invoke, but still inside the While loop, and check whether the variable noDataFound contains the string ‘true’.
In the BPEL flow, label the if branch “NoDataFound” and the else branch “DataFound”.

Drag an Assign activity in the if branch and assign true to variable isEOF, so the while loop will end.
We now put an Empty activity in the else branch, because that suffices for the purpose of this blog. Give the Empty activity the name “ProcessXMLfragment”, because this is the place where the processing of the XML fragments should be done, in this showcase the processing of one employee. In our real business case we have to invoke another web service which can handle only one employment relation (one employer and one employee) per request.
My advice is to invoke a separate BPEL process which does the processing, so there is a clear separation between chunk reading and processing the data.

Finally we assign an OK string as output with an Assign activity just before the reply.
The flow should now look like this:
FinalFlow (Note that the Catch activities, which normally implement the exception handling, are left out)

We’re almost ready to deploy and test the composite. Two more little things need to be done.
The first is that in the Mediator we have to map the request to the BPEL input, and the same applies to the output. Because it’s only a single string, I do this with a direct assign instead of invoking an XSLT mapping.
Then we only have to create a config plan in which we specify the physical directory from which the XML files are read: right-click the composite name (composite.xml) and create a config plan. In the created config plan you will find a ‘reference’ element with the name of the jca file adapter, in our case “SynchReadXML”. You will see that a placeholder has already been created. Enter the physical directory of the runtime environment where the XML file to be read is located.
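The relevant part of the config plan could look like this sketch; the directory path is just an example for your runtime environment:

```xml
<reference name="SynchReadXML">
  <binding type="jca">
    <!-- replace the logical directory with the physical directory at runtime -->
    <property name="CHUNKED_FILES_DIR">
      <replace>/u01/data/chunked_files</replace>
    </property>
  </binding>
</reference>
```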
Finally you can deploy the composite and test it to find out what happens!

After deployment, you can test the composite in Enterprise Manager with the Test page.
Enter in the Request tab the XML filename containing the test data and press button “Test Web Service”. The composite nicely returns an “OK” as output string in the Response message.
Now open the ‘Flow Trace’ window to inspect what happened:
Apparently the file has been read in four chunks. To see what exactly had been read, click on the BPEL process. In the Audit trail, expand the “payload” items to investigate what data has been read each time:


As you can see, the result is that the SOA Suite reads one child element of the root element at a time. It nicely returns the correct LineNumber and ColumnNumber for the next read, until the from-property IsEOF is set by the File Adapter, meaning the End Of File has been reached.
This does not solve my problem, because the employer is the child element of the root, and that is the element which can contain thousands of employee elements as children. So when an employer element is read, all its employee elements are read along with it, while I want to chunk-read on the employees instead of the employers!
Now remember that in the jca file the ChunkSize setting is set to 1. What happens when we set this to 2?
After changing the jca file, redeploy and test it. The result is that there are only two read actions:


After inspecting the payloads, it turns out that two child elements of the root element are read each time. The first read returns both the MsgProperties element and the first Employer element; the second read returns only the second Employer element. Because this is the last child element of the root element, the from-property IsEOF is correctly set to true.
So that’s not going to solve my problem, but it’s nice to know that the jca adapter can read a specified number of elements in one read. We’re going to use this later on for a little performance tuning.

Pre-transform the XML before reading

As already mentioned in the introduction of this blog, the solution is to change the XML before it is read by the SOA Suite. This can be achieved with a custom valve, as described in the blog by Lucas. In this situation I chose to relocate the employee elements: from the employer element to the root element, just after the employer element in which they were initially located. This way we still know the context of the employee elements: the preceding employer element.
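The relocation idea can be shown in isolation with a plain SAX filter: when the first Employee start tag is seen, an Employer end tag is emitted early, and the original Employer end tag is suppressed, so the Employee elements become siblings of their Employer. This standalone sketch uses made-up class and method names (RelocateDemo, relocate) purely for illustration; the actual valve code follows below:

```java
import java.io.ByteArrayInputStream;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXSource;
import javax.xml.transform.stream.StreamResult;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.XMLFilterImpl;

public class RelocateDemo {

  // Moves Employee elements out of their Employer parent while streaming.
  public static String relocate(String xml) throws Exception {
    XMLFilterImpl filter =
        new XMLFilterImpl(SAXParserFactory.newInstance().newSAXParser().getXMLReader()) {
          private boolean found = false;

          @Override
          public void startElement(String uri, String localName, String qName, Attributes atts)
              throws SAXException {
            if (!found && "Employee".equals(qName)) {
              found = true;
              // first Employee seen: close the surrounding Employer early
              super.endElement(uri, "Employer", "Employer");
            }
            super.startElement(uri, localName, qName, atts);
          }

          @Override
          public void endElement(String uri, String localName, String qName) throws SAXException {
            if (found && "Employer".equals(qName)) {
              found = false; // suppress the original Employer end tag
            } else {
              super.endElement(uri, localName, qName);
            }
          }
        };
    StringWriter sw = new StringWriter();
    TransformerFactory.newInstance().newTransformer().transform(
        new SAXSource(filter,
            new InputSource(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))),
        new StreamResult(sw));
    return sw.toString();
  }

  public static void main(String[] args) throws Exception {
    // the Employee elements end up as siblings directly after their Employer
    System.out.println(relocate(
        "<Message><Employer><Name>Acme</Name>"
        + "<Employee><Nr>1</Nr></Employee><Employee><Nr>2</Nr></Employee>"
        + "</Employer></Message>"));
  }
}
```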

To do this, the following actions have to be done:

  1. Create a custom valve and deploy it on the runtime environment
  2. Configure the valve in the project to be used by the jca file
  3. Change the XSD of the XML file being read accordingly
  4. Change the BPEL logic to adjust to the XML changes

1. Create a custom valve and deploy it on the runtime environment

In the custom Java valve the XML is transformed with a SAX transformation. I’ve also tried a StAX transformation, which should consume less memory, but I couldn’t get it to work. It does work when I test it locally, in the main method with a local file, but it doesn’t work with the JCA file adapter. I have no idea why; maybe a reader can help me out… The code is still in the source and can be activated with the composite property “useStAX” (true/false). Nevertheless, the SAX transformation also works. The Java code:

package nl.amis.fileadaptervalves;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import java.util.logging.Logger;

import javax.xml.namespace.QName;
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import javax.xml.stream.XMLStreamWriter;
import javax.xml.stream.util.StreamReaderDelegate;
import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXSource;
import javax.xml.transform.stax.StAXResult;
import javax.xml.transform.stax.StAXSource;
import javax.xml.transform.stream.StreamResult;

import oracle.tip.pc.services.pipeline.AbstractValve;
import oracle.tip.pc.services.pipeline.InputStreamContext;
import oracle.tip.pc.services.pipeline.PipelineException;

import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.XMLFilterImpl;
import org.xml.sax.helpers.XMLReaderFactory;

public class EmployerValve extends AbstractValve {

  private static final String USE_STAX_PROPERTY_NAME = "useStAX";
  private static final String TMP_FILE_PREFIX = "fileAdapterValve";
  private static final String TMP_FILE_SUFFIX = ".tmp";
  private static final String TAG_Employee = "Employee";
  private static final String TAG_Employer = "Employer";

  private File file = null;
  private final Logger logger = Logger.getLogger(this.getClass().getName());

  public EmployerValve() {
    super();
  }

  public InputStreamContext execute(InputStreamContext inputStreamContext) throws PipelineException, IOException {
    // Get the input stream that is passed to the Valve
    logger.finest("================START EmployerValve================");
    String s = (String) getPipeline().getPipelineContext().getProperty(USE_STAX_PROPERTY_NAME);
    boolean useStAX = !"false".equalsIgnoreCase(s); // defaults to true
    logger.finest("useStAX=" + useStAX);

    // modify the xml being read
    InputStream originalInputStream = inputStreamContext.getInputStream();
    InputStream newInputStream = transform(originalInputStream, useStAX);
    inputStreamContext.setInputStream(newInputStream);
    logger.finest("================END EmployerValve================");
    return inputStreamContext;
  }

  private InputStream transform(final InputStream input, final boolean useStax) throws PipelineException, IOException {
    final InputStream is;
    final OutputStream out;

    file = File.createTempFile(TMP_FILE_PREFIX, TMP_FILE_SUFFIX);
    out = new FileOutputStream(file);
    logger.finest("tempfile=" + file.getAbsoluteFile());
    try {
      final Source src;
      final Result res;
      if (useStax) {
        final XMLStreamReader xmlStreamReader = XMLInputFactory.newFactory().createXMLStreamReader(input, "UTF-8");
        final StreamReaderDelegate streamReaderDelegate = new StreamReaderDelegate(xmlStreamReader) {
          private boolean startEmployer = false;
          private boolean endingEmployer = false;

          public int next() throws XMLStreamException {
            int event;
            if (endingEmployer) {
              event = START_ELEMENT; // end of the faked Employer end element, continue with the
                                     // Employee start element on which the parent already is
              logger.finest("StAX: end faking ending element " + TAG_Employer + ", so now returning start element");
            } else {
              event = getParent().next();
            }
            endingEmployer = false;
            if (!startEmployer && event == START_ELEMENT) {
              if (TAG_Employer.equals(getParent().getLocalName())) {
                startEmployer = true;
                logger.finest("StAX: starting element " + TAG_Employer + " found");
              }
            } else if (startEmployer && event == START_ELEMENT) {
              if (TAG_Employee.equals(getParent().getLocalName())) {
                startEmployer = false;
                event = END_ELEMENT;
                endingEmployer = true; // we're going into the state of ending the Employer node
                logger.finest("StAX: starting element " + TAG_Employee + " found, start faking end element " + TAG_Employer);
              }
            } else if (event == END_ELEMENT) {
              if (TAG_Employer.equals(getParent().getLocalName())) {
                event = getParent().next(); // goto next node, so this node is skipped
                logger.finest("StAX: skip ending element " + TAG_Employer);
              }
            }
            return event;
          }

          public String getLocalName() {
            if (endingEmployer) { // we're in the state of ending the Employer node (parent is already on the Employee node)
              return TAG_Employer;
            }
            return getParent().getLocalName();
          }

          public QName getName() {
            QName qName = getParent().getName();
            if (endingEmployer) { // we're in the state of ending the Employer node (parent is already on the Employee node)
              qName = new QName(qName.getNamespaceURI(), TAG_Employer, qName.getPrefix());
            }
            return qName;
          }

          public int getEventType() {
            if (endingEmployer) { // we're in the state of ending the Employer node (parent is already on the Employee node)
              return END_ELEMENT;
            }
            return getParent().getEventType();
          }
        };
        src = new StAXSource(streamReaderDelegate);
        final XMLStreamWriter xmlStreamWriter = XMLOutputFactory.newFactory().createXMLStreamWriter(out, "UTF-8");
        res = new StAXResult(xmlStreamWriter);
      } else {
        final XMLReader xr = new XMLFilterImpl(XMLReaderFactory.createXMLReader()) {
          private boolean found = false;

          public void startElement(final String uri, final String localName, final String qName, final Attributes atts) throws SAXException {
            if (!found && TAG_Employee.equals(qName)) {
              found = true;
              logger.finest("SAX: Insert closing tag " + TAG_Employer);
              super.endElement(uri, TAG_Employer, TAG_Employer);
            }
            super.startElement(uri, localName, qName, atts);
          }

          public void endElement(final String uri, final String localName, final String qName) throws SAXException {
            if (found && TAG_Employer.equals(qName)) {
              // delete this closing tag
              found = false;
              logger.finest("SAX: Skip closing tag " + TAG_Employer);
            } else {
              super.endElement(uri, localName, qName);
            }
          }
        };
        src = new SAXSource(xr, new InputSource(input));
        res = new StreamResult(out);
      }
      TransformerFactory.newInstance().newTransformer().transform(src, res);
      logger.finest("Transformation done!");
    } catch (SAXException e) {
      throw new PipelineException("Transformation failed: " + e.getMessage());
    } catch (XMLStreamException e) {
      throw new PipelineException("Transformation failed: " + e.getMessage());
    } catch (TransformerConfigurationException e) {
      throw new PipelineException("Transformation failed: " + e.getMessage());
    } catch (TransformerException e) {
      throw new PipelineException("Transformation failed: " + e.getMessage());
    } finally {
      out.close();
    }
    is = new FileInputStream(file);
    return is;
  }

  public void test(final boolean useStAX) throws IOException, ParserConfigurationException, SAXException, PipelineException, InterruptedException {
    long start = System.currentTimeMillis();

    File file = new File("Employers.xml");
    if (!file.exists()) {
      System.out.print("File does not exist!");
    } else {
      FileInputStream in = new FileInputStream(file);
      InputStream result = transform(in, useStAX);
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      byte[] buffer = new byte[2048];
      int bytesRead;
      while ((bytesRead = result.read(buffer)) != -1) {
        bos.write(buffer, 0, bytesRead);
      }
      String s = new String(bos.toByteArray(), "UTF-8");
      System.out.println(">>" + s + "<<");

      System.out.println("duration: " + (System.currentTimeMillis() - start) + "ms");
    }
  }

  public void finalize(InputStreamContext inputStreamContext) {
    try {
      cleanup();
    } catch (PipelineException e) {
      logger.severe(e.getMessage());
    } catch (IOException e) {
      logger.severe(e.getMessage());
    }
  }

  public void cleanup() throws PipelineException, IOException {
    // remove the temporary file
    if (file != null && file.exists()) {
      file.delete();
      file = null;
    }
  }

  public static void main(String[] args) throws IOException, ParserConfigurationException, SAXException, PipelineException, InterruptedException {
    EmployerValve valve = new EmployerValve();
    valve.test(false); // test locally with the SAX transformation
  }
}

See Lucas’ blog for how to create the jar and install it in the runtime environment. Don’t forget to restart your environment (including the admin server).

2. Configure the valve in the project to be used by the jca file

Lucas’ blog also explains how to configure the valve. In short it means we need to add a pipeline XML file to our project in which the EmployerValve is configured:
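A sketch of such a pipeline file, here named EmployerPipeline.xml (the file name is an assumption; the valve class name matches the Java code above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<pipeline xmlns="http://www.oracle.com/adapter/pipeline">
  <valves>
    <!-- the custom valve that relocates the Employee elements -->
    <valve>nl.amis.fileadaptervalves.EmployerValve</valve>
  </valves>
</pipeline>
```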
This EmployerPipeline has to be attached to the File Adapter jca file with the property PipelineFile:
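In the jca file this comes down to one extra property in the interaction-spec; the relative path to the pipeline file is an example and depends on where you placed it in the project:

```xml
<property name="PipelineFile" value="pipelines/EmployerPipeline.xml"/>
```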

3. Change the XSD of the XML file being read accordingly

Because the EmployerValve changes the XML before it’s read by the SOA Suite, we have to change the XSD in the project so it matches the changed XML file. The Message element, the root element, now contains a sequence of a MsgProperties element followed by one or more sequences of an Employer element and zero or more Employee elements:

<element name="Message">
  <complexType>
    <sequence>
      <element name="MsgProperties" type="tns:tMsgProperties" minOccurs="0"/>
      <sequence maxOccurs="unbounded">
        <element name="Employer" type="tns:tEmployer"/>
        <element name="Employee" type="tns:tEmployee" minOccurs="0" maxOccurs="unbounded"/>
      </sequence>
    </sequence>
    <attribute name="test" type="boolean"/>
  </complexType>
</element>

Changed XSD
If you want to try the StAX transformation, which for some reason doesn’t work in my environment, the property useStAX has to be added to the composite.xml:
Add StAX property to composite
You can change this setting to ‘true’ in Enterprise Manager to activate the StAX transformation: go to the Composite page, click the ‘SynchReadXML’ JCA Adapter in the ‘Services and References’ section and switch to the Properties tab.
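In composite.xml, the property sits on the JCA binding of the reference; a sketch (the override/many attributes are an assumption based on the usual binding-property syntax):

```xml
<reference name="SynchReadXML">
  <binding.jca config="SynchReadXML_file.jca">
    <!-- set to 'true' to activate the StAX transformation in the valve -->
    <property name="useStAX" type="xs:string" many="false" override="may">false</property>
  </binding.jca>
</reference>
```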

4. Change the BPEL logic to adjust to the XML changes

Finally we have to adjust the logic in BPEL to the changed element order. Based on the type of element, with the expression local-name($InvokeSynchReadXML_SynchReadXML_OutputVariable.body/*[1]) = 'MsgProperties', we decide what to do. Assume that for processing one Employee element, the information of both the MsgProperties element and the Employer element is needed as well. So when a MsgProperties element is read, we store its information in a MsgProperties variable, and the same applies for an Employer element (in an Employer variable). When an Employee element is read, we use these variables to complete the data for processing the Employee. The flow now looks like this:
BPEL flow corrected
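The element-type switch can be sketched as a BPEL 2.0 if/elseif/else; the activities in the branches are indicated as comments, since their exact contents depend on your process:

```xml
<if name="IfElementType">
  <condition>local-name($InvokeSynchReadXML_SynchReadXML_OutputVariable.body/*[1]) = 'MsgProperties'</condition>
  <!-- assign the MsgProperties data to the MsgProperties variable -->
  <elseif>
    <condition>local-name($InvokeSynchReadXML_SynchReadXML_OutputVariable.body/*[1]) = 'Employer'</condition>
    <!-- assign the Employer data to the Employer variable -->
  </elseif>
  <else>
    <!-- Employee: combine with the stored MsgProperties and Employer data and process it -->
  </else>
</if>
```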
After deployment and a test with the test data, we see in the Flow Trace that, as expected, 10 read actions have been performed: 1 MsgProperties element, 2 Employer elements, 6 Employee elements and a last read to detect that EOF has been reached:
When inspecting the SynchReadXMLProcess, it turns out that the switch on element type works well, as does the assignment of the variables!

Tuning for performance

Purely from a functional point of view this works well. The problem of reading XML files which are so large they may result in an OutOfMemory problem causing the server to crash, is solved. But from a performance perspective this is still not the best situation. Reading only one child element of the root at a time can take quite some time when the large XML file contains thousands of employees. So it would be better to read more elements in one read. This means the BPEL process has to change as well, by looping over the single elements in the array of elements that is read each time. We can use the same logic of switching on element type within the loop. This results in a flow like this:
BPEL flow tuned for performance
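The loop over the read elements can be sketched as a BPEL 2.0 forEach around the existing element-type switch; the counter variable name is an example:

```xml
<forEach counterName="elementNr" parallel="no" name="ForEachElement">
  <startCounterValue>1</startCounterValue>
  <finalCounterValue>count($InvokeSynchReadXML_SynchReadXML_OutputVariable.body/*)</finalCounterValue>
  <scope>
    <!-- same switch as before, now on
         local-name($InvokeSynchReadXML_SynchReadXML_OutputVariable.body/*[$elementNr]) -->
  </scope>
</forEach>
```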
To test whether this tuned logic still works correctly, I set the property ChunkSize in the jca adapter to 4.
After invoking the composite with the test data, the Flow Trace shows that, as expected, 3 reads have been done (two times 4 elements and a final read of the 1 remaining element).
Inspecting the SynchReadXMLProcess proves that iterating over the elements also works well:
Flow trace tuned performance

In our real business case we have set the ChunkSize to 50. This value is specific to our business case: the optimum depends on your runtime environment, how complex the XML is, how large the XML files are and the processing of the data itself. If, in our business case, we wanted to improve the performance even more, we would have to change the logic from processing one employee at a time into processing multiple employees at once.


Conclusion

Using the chunk reading feature of the file adapter in combination with the pre-processing of the file adapter gives us the opportunity to process large XML files in fragments. This can prevent OutOfMemory problems and leaves room for further performance tuning.
This is ideal for batch processes or, as in our business case, an asynchronous process where the end user uploads a file and immediately only gets the response that the file has been received and will be processed (feedback on processing the file, including errors, is stored in a database and shown in a dedicated screen).


The resources are available for download:


