Posts tagged parallel
Performance of Enterprise Java Applications is a requirement and usually a challenge. Business requirements on systems can be stiff, successful systems can easily become overloaded, and complex application architectures add a burden of their own. Improving performance by tuning the application after it has been built seldom yields huge improvements. By taking a step back – or even two – and regarding the application and its performance from a distance, it becomes possible to really design and architect for performance according to the ISYITF method: it is staring you in the face. Order-of-magnitude improvements are attainable through logical reasoning and careful application of multi-tier architecture principles and JEE platform facilities.
This is the abstract for the session Thinking Through Java Enterprise Performance that I will be presenting on Tuesday October 2nd at JavaOne 2012 (BOF 4712, 4:30 PM – 5:15 PM, Parc 55 – Cyril Magnin I).
BPEL 2.0 introduced the forEach activity – similar to the for-loop found in many programming languages. Oracle SOA Suite 11g adopted BPEL 2.0, first in run time (PS2) and later in Design Time (JDeveloper) as well (PS3 and beyond). For BPEL processes created using BPEL 2.0, forEach is a looping mechanism – similar to repeatUntil and while – and also the successor to the proprietary Oracle extension to BPEL 1.x called FlowN. In that latter capacity, forEach is the activity that enables parallelism in BPEL processes to a dynamic degree.
The well-known Flow activity also supports parallelism – but only for a static number of branches, known at design time. FlowN (1.x) and forEach (2.0) add the ability to execute a scope a dynamic number of times, determined at run time.
For example when an operation needs to be performed on multiple elements in a collection, such as all order lines in an order or all persons in a travel booking, forEach is valuable – especially when it makes sense to perform the operation on multiple elements at the same time.
Note however that parallelism in BPEL is a relative concept: a single BPEL process instance is never operated on in more than one JVM thread, so More >
In this article, we will continue a discussion on asynchronous processing started in a previous article that introduced asynchronous and parallel processing in Java using Executors, Futures, Callable objects and the underlying thread model in Java 5 and 6.
While a stand-alone Java application – without UI – is a rare thing in my world, a Java Web application certainly is not. And performance, especially perceived performance, is pretty important in that world. The first page load is, I suppose, the most important measure for what the user feels is the performance of the web application. The fact that after the initial load, additional elements can be loaded into the page – asynchronously – is fine. The initial page load and the browser’s indication that the load is done (and the hourglass disappears) is what matters most for the happiness of the user.
In this article we will see three stages of a very simple web page. It is a JSF (JavaServer Faces) page that contains some very simple elements, one of which displays an ‘expensive’ value – a value that takes some time to get hold of. Maybe because a database query is involved or a web service is called. Whatever the cause, this one More >
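The idea of starting the expensive lookup early and blocking only when the value is actually needed can be sketched with an Executor and a Future. This is a minimal, hypothetical illustration – the class name, the simulated delay and the returned value are invented, standing in for the database query or web service call mentioned above:

```java
import java.util.concurrent.*;

public class ExpensiveValueDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Submit the 'expensive' lookup; rendering the cheap parts of the
        // page can continue while it runs in a background thread.
        Future<String> expensiveValue = executor.submit(new Callable<String>() {
            public String call() throws Exception {
                Thread.sleep(500); // stands in for a database query or web service call
                return "expensive result";
            }
        });
        // ... render the cheap page elements here ...
        // Block only when the value is actually needed, with an upper bound on the wait.
        System.out.println(expensiveValue.get(2, TimeUnit.SECONDS));
        executor.shutdown();
    }
}
```

The timeout on get() keeps the page from hanging indefinitely if the lookup misbehaves.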
Processors are not going to get much faster. No higher clock speeds are foreseen. The speed of processing will increase further through parallelization, engaging multiple CPU cores for handling all tasks rather than a single faster core.
This is but one reason for taking a closer look at the threading model in Java and the way we can do asynchronous and parallel processing as of Java 5. Another reason for my interest in asynchronous processing has to do with (perceived) performance. If an application performs a task on behalf of a user, it may block until the task is completed. The user cannot do anything until the task completes – watching the hourglass or whatever busy cursor is used. With asynchronous processing, a task which the user does not immediately require the results from can be processed in a separate thread. The perception of the user therefore is that the task is performed (or at least processed) much faster than in the synchronous case. And even though it is only perception – perception is usually all that counts!
Furthermore, if the task can be broken into smaller pieces that can be executed in parallel, we really can speed up the task – provided More >