BPEL 2.0 introduced the forEach activity – similar to the for-loop found in many programming languages. Oracle SOA Suite 11g adopted BPEL 2.0, first at run time (PS2) and later at design time (JDeveloper) as well (PS3 and beyond). For BPEL processes created using BPEL 2.0, forEach is a looping mechanism – similar to repeatUntil and while – and also the successor to FlowN, the proprietary Oracle extension to BPEL 1.x. In that latter capacity, forEach is the activity that enables parallelism in BPEL processes to a dynamic degree.
The well known Flow activity also supports parallelism – but only for a static number of branches, known at design time. FlowN (1.x) and forEach (2.0) add the ability to execute a scope a dynamic number of times, determined at run time.
For example when an operation needs to be performed on multiple elements in a collection, such as all order lines in an order or all persons in a travel booking, forEach is valuable – especially when it makes sense to perform the operation on multiple elements at the same time.
Note however that parallelism in BPEL is a relative concept: a single BPEL process instance is never operated on in more than one JVM thread, so there is no real parallel execution at CPU level.
However – when asynchronous activities are 'waiting', other activities can be performed 'in the meantime'. Examples of asynchronous activities are Wait, Receive (for a reply to the invoke of an asynchronous service) and Pick (onMessage and/or onAlarm).
When the forEach scope contains such asynchronous actions, it can provide parallel execution by executing the scope for the next element in the for-loop while the previous element’s iteration is waiting for an activity to continue or complete.
Here follows a simple example of using the forEach activity to implement a BPEL process (BPEL 2.0) that calculates the factorial of the number that was passed in the request. To refresh your memory: the factorial of x is x! = x × (x − 1) × … × 2 × 1.
A recursive solution with BPEL would be possible, but very expensive. A simple solution based on a loop is presented below. Note that a similar solution can be created using repeatUntil (also introduced in BPEL 2.0) and while.
The complete process is implemented as follows:
The variables in this process are defined as follows:
- an inputVariable that contains an input element of type integer; this is the factorial operand, the x in y = x!
- the outputVariable, with a result element also of type integer that contains the outcome of the factorial calculation: the y in y = x!
The first step in the process, after receiving the request, is the initialization of the outputVariable. The result is set to 1. Because of this initialization, we can use the outputVariable for calculating every next iteration’s outcome, as you will see in a moment. First the Assign activity:
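In BPEL 2.0 source view, such an initialization might be sketched as follows; the payload part and the client:result element name are assumptions based on typical JDeveloper-generated artifacts:

```xml
<!-- Initialize the factorial result to 1 before the loop starts -->
<assign name="InitializeResult">
  <copy>
    <from>1</from>
    <to>$outputVariable.payload/client:result</to>
  </copy>
</assign>
```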
Now the forEach activity. The first definition of this activity consists of the name of the activity, the name of the Counter (or index variable) and a checkbox to indicate whether the BPEL engine should attempt parallel execution (in this case that does not make sense). The Counter Name will be used to create a variable – $index in this case – that is available in the scope inside the forEach loop:
On the Counter Values tab the loop is configured. The start value for the counter ($index) is specified through an XPath expression – which can be a literal as is the case here. The final value of the counter is also set using an XPath expression. Frequently, this value will be derived as the count of the number of elements of a specific type in the request message. In this case we need the counter to iterate from 1 to the number sent in the request message as the factorial operand:
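In source view, this counter configuration might look roughly like the sketch below; the parallel attribute is set to "no" for sequential execution, and the variable part and element names are assumptions:

```xml
<!-- Sequential forEach: $index runs from 1 up to the requested operand -->
<forEach counterName="index" parallel="no" name="ForEach1">
  <startCounterValue>1</startCounterValue>
  <finalCounterValue>$inputVariable.payload/client:input</finalCounterValue>
  <scope name="Scope1">
    <!-- per-iteration activities go here -->
  </scope>
</forEach>
```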
The only other activity in this process is the Assign activity inside the forEach scope where the factorial is calculated in each iteration:
In every iteration, the intermediate factorial result is calculated by taking the result from the previous iteration and multiplying it by the current value of $index, the iteration counter. The result is stored in the result element in the outputVariable. When the forEach is done, the result is already in the right location for returning it to the invoker.
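The per-iteration calculation could be expressed in BPEL 2.0 source along these lines (again, client:result is an assumed element name):

```xml
<!-- Multiply the running result by the current counter value -->
<assign name="CalculateFactorial">
  <copy>
    <from>$outputVariable.payload/client:result * $index</from>
    <to>$outputVariable.payload/client:result</to>
  </copy>
</assign>
```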
Invoking the Factorial service returns the correct result – not surprisingly of course:
The audit flow is more interesting than the fact that 5! equals 120:
We clearly see how Scope1 is executed multiple times – once for every iteration of forEach and every value of $index.
True Parallelism using forEach
We will now look at a simple example of parallel execution using forEach. The forEach scope in this example contains an asynchronous activity – a Wait that can last more than 3 seconds. When this scope is executing the Wait for a specific iteration, the BPEL engine can meanwhile start processing the next iteration. And when that one starts to wait, it can start the next iteration's scope as well.
In this case, the forEach activity is explicitly configured to do parallel execution when appropriate:
The Counter has been configured to iterate from 1 to the number of waits that is specified in the request message:
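In source, enabling parallelism comes down to the parallel attribute on the forEach. A sketch, with assumed element names such as client:numberOfWaits:

```xml
<!-- Parallel forEach: iterations may overlap while a scope is waiting -->
<forEach counterName="index" parallel="yes" name="ForEachWait">
  <startCounterValue>1</startCounterValue>
  <finalCounterValue>$inputVariable.payload/client:numberOfWaits</finalCounterValue>
  <scope name="Scope1">
    <!-- the Wait activity goes here -->
  </scope>
</forEach>
```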
The Wait is simple. Its most interesting aspect is that the duration is derived using an XPath expression. The inputVariable contains the wait time in seconds. This value is used to construct an expression in the required format: P#Y#M#DT#H#M#S. In this case, the number of seconds to wait is specified in the durationOfWaitInSeconds element in the inputVariable. This number is combined with a fixed indication of 0 hours and 0 minutes:
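The duration expression could be sketched like this; the durationOfWaitInSeconds element name comes from the text, while the variable part and namespace prefix are assumptions:

```xml
<!-- Wait for PT0H0M{n}S, with n taken from the request message -->
<wait name="Wait1">
  <for>concat('PT0H0M', $inputVariable.payload/client:durationOfWaitInSeconds, 'S')</for>
</wait>
```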
If we deploy this application and invoke it with 5 for numberOfWaits and 6 for durationOfWaitInSeconds, we could expect a total execution time of just over 30 seconds in the case of sequential execution:
However, because the BPEL process includes the forEach activity that we have configured to support parallel execution, we may have hope for a shorter execution time.
The Flow Trace indicates that in this case the composite instance completed in about 10 seconds. Clearly, some parallel execution must have taken place in order to get from the sequential value of 30+ seconds to the actual 10 seconds.
The visual flow makes it fairly clear what happened:
Clearly, the scopes were executed in parallel. “If all the scopes do is wait, they may as well do it all at the same time.”