Node.js: A simple pattern to increase perceived performance

The asynchronous nature of code running on Node.js provides many interesting options for service orchestration. In this example I will call two translation services (Google and SYSTRAN), both in quick succession, milliseconds apart. The first answer to be returned is the answer returned to the caller; the second answer is ignored. I've used a minimal set of Node modules for this: http, url, and request. I also wrapped the translation APIs to provide a similar interface, which allows me to call them with the same request objects. You can download the code here. The picture below illustrates this simple scenario. I'm not going to talk about the event loop and the call stack; watch this presentation for a nice elaboration on those.

What does it do?

The service I created expects a GET request in the form of:


In this case I’m translating the Japanese 犬 to the English dog.

The result of this call in the console is:

Server running at
0s, 0.054ms - Request start
0s, 147.451ms - Google response: dog
0s, 148.196ms - Response returned to caller
0s, 184.605ms - Systran response: dog

The result returned is:

{
  "result": "dog",
  "source": "Google"
}

As you can see, Google is first to respond. Google's response is returned to the client, which does not have to wait for Systran's result to come in.

If we delay the return of Google's response by 1 second (using setTimeout), we see the following:

Server running at
0s, 0.003ms - Request start
0s, 107.941ms - Systran response: dog
0s, 108.059ms - Response returned to caller
1s, 78.788ms - Google response: dog

These are single requests, so timing values differ slightly between runs.

The following result is returned:

{
  "result": "dog",
  "source": "Systran"
}
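The one-second delay can be simulated with a small wrapper around a service's callback; a sketch using setTimeout (the name delayCallback is mine, not from the original code):

```javascript
// Hypothetical helper (not in the original code): wrap a callback so it
// fires after a delay, simulating a slow service with setTimeout.
function delayCallback(callback, ms) {
  return function (...args) {
    setTimeout(() => callback(...args), ms);
  };
}

// Example: delay the Google callback by 1 second.
// googleTranslate(text, delayCallback(onGoogleResponse, 1000));
```

With Google's callback wrapped like this, Systran wins the race even though Google's stubbed reply is otherwise faster.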

How does it work?

This setup is surprisingly simple using JavaScript and callbacks. The http module is used to create an HTTP server and listen on a port. The url module is used to parse the incoming request. The request module is used to create the GET request needed for SYSTRAN; see systran-translate.js (I've of course changed the API key ;). In the response callback of the server (which is called from the callback functions of the Google and Systran calls) I check whether a response has already been returned. If not, I return it; if it has, I do nothing.
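The check described above fits in a few lines; a minimal sketch, assuming a sendResponse function that writes the JSON reply (the names here are mine, not the original code):

```javascript
// Minimal sketch of the "first response wins" check: both service
// callbacks share one guard, so only the fastest reply gets returned.
// sendResponse is assumed to write the JSON reply to the client.
function firstResponseOnly(sendResponse) {
  let responded = false;               // has a reply been returned yet?
  return function (source, result) {
    if (responded) return;             // a service already answered: ignore
    responded = true;
    sendResponse({ result: result, source: source });
  };
}

// const respond = firstResponseOnly(reply => /* write reply to res */);
// googleTranslate(text, result => respond('Google', result));
// systranTranslate(text, result => respond('Systran', result));
```

Because Node runs these callbacks on a single thread, the flag needs no locking: the two callbacks can never execute concurrently.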

Below is a snippet from my main file which starts the server, calls the services and returns the response.


For Google I've used the API via the node-google-translate-skidz module; not much of interest to show there. For the Systran translation, I've used the following code:


If you uncomment the console.log lines, you can see the actual request being sent, such as:

%E7%8A%AC is of course 犬

Why is this interesting?

Suppose you are running a process engine which executes your service orchestration in a single thread. Such an engine might not allow you to split a synchronous request/reply into a separate request and a reply received later, making the call blocking. When execution is blocked, how are you going to respond to another response arriving at your process? There are also several timeouts to take into account, such as a JTA timeout. And what happens if a reply never comes? This can be a serious issue: it may keep an OS thread blocked, which can lead to stuck threads and can even hang the server if it happens often.

Through the asynchronous nature of Node.js, a scenario like the one shown above suddenly becomes trivial, as this simple example shows. By using a pattern such as this, you can get much better perceived performance. Suppose you have many clustered services which are all relatively lightweight, and their performance varies due to external circumstances. If you call a small set of different services at (almost) the same time, you can quickly return a response to the customer. The trade-off is that you also call services whose answer may no longer be of interest by the time it arrives, which increases total system load.

Several things are missing from this example, such as proper error handling. You might also want to return a response if one of the services fails. Also, if the server encounters an error, the entire server crashes; you probably want to avoid that. Routing has not been implemented, to keep the example as simple as possible. For security, you would of course rely on your API platform solution.
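One way to still return a response when services fail is to count failures alongside the first-success check; a sketch of such an extension (my own, not part of the original code):

```javascript
// Sketch: return the first successful reply, or an error reply once
// every service has failed. Names and shape are assumptions of mine.
function firstSuccessOrAllFailed(serviceCount, sendResponse) {
  let responded = false;
  let failures = 0;
  return {
    success(source, result) {
      if (responded) return;          // first success wins
      responded = true;
      sendResponse({ result: result, source: source });
    },
    failure(source, err) {
      failures += 1;
      if (!responded && failures === serviceCount) {
        responded = true;             // nobody succeeded: report an error
        sendResponse({ error: 'all translation services failed' });
      }
    },
  };
}
```

Each service callback then calls success or failure; the caller always gets exactly one reply, even when every backend is down.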

For more information visit my session at OOW2016: Oracle Application Container Cloud: Back-End Integration Using Node.js