Adding a Cross-Instance, Cross-Restart and Cross-Application Cache to Node Applications on Oracle Application Container Cloud

Lucas Jellema

In a previous post I described how to do Continuous Integration & Delivery from Oracle Developer Cloud to Oracle Application Container Cloud on simple Node applications: Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud. In this post, I am going to extend that very simple application with the functionality to count requests. With every HTTP request to the application, a counter is incremented and the current counter value is returned in the response.


The initial implementation is a very naïve one: the Node application contains a global variable that is increased for each request that is handled. This is naïve because:

  • multiple instances run concurrently and each keeps its own count; because of load balancing, subsequent requests are handled by different instances and the responses show a somewhat irregular request counter pattern; the total number of requests is not known: each instance only has the subtotal for that instance
  • when the application is restarted – or even a single instance is restarted or added – the request counter for each instance involved is reset

Additionally, the request count value is not available outside the Node application and it can only be retrieved by calling the application – which in turn increases the count.

A much better implementation would be one that uses a cache – that is shared by the application instances and that survives application (instance) restarts. This would also potentially make the request count value available to other microservices that can access the same cache – if we allow that to happen.

This post demonstrates how an Application Cache can be set up on Application Container Cloud Service and how it can be leveraged from a Node application. It shows that the request counter will be shared across instances and survives redeployments and restarts.


Note: there is still the small matter of race conditions, which are not addressed in this simple example because read, update and write are not performed as an atomic operation and no locking has been implemented.

The steps are:

  • Add (naïve) request counting capability to greeting microservice
  • Demonstrate shortcomings upon multiple requests (handled by multiple instances) and by instance restart
  • Implement Application Cache
  • Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service
  • Utilize Application Cache in greeting microservice
  • Redeploy greeting microservice and demonstrate that request counter is shared and preserved

Sources for this article are in GitHub: .

Add (naïve) request counting capability to greeting microservice

The very simple HTTP request handler is extended with a global variable requestCounter that is displayed and incremented for each request:


It’s not hard to demonstrate the shortcomings upon multiple requests (handled by multiple instances):


Here we see how subsequent requests are handled (apparently) by two different instances that each have their own, independently increasing count.

After application restart, the count is back to the beginning.

Implement Application Cache

To configure an Application Cache we need to work from the Oracle Application Container Cloud Service console.



Specify the details – the name and possibly the sizing:



Press Create and the cache will be created:


I got notified about its completion by email:



Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service

In order to be able to access the cache from within an application on ACCS, the application needs a service binding to the Cache service. This can be configured in the console (manually) as well as via the REST API, the psm CLI, and the deployment descriptor in the Deployment configuration in Developer Cloud Service.

Manual configuration through the web UI looks like this:


or through a service binding:



and applying the changes:



I can then utilize the psm command line interface to inspect the JSON definition of the application instance on ACCS and thus learn how to edit the deployment.json file with the service binding for the application cache. First, set up psm:


And inspect the greeting application:

psm accs app -n greeting -o verbose -of json


to learn about the JSON definition for the service binding:


Now I know how to update the deployment descriptor in the Deployment configuration in Developer Cloud Service:


The next time this deployment is performed, the service binding to the application cache is configured.
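For reference, the relevant fragment of the deployment.json could look something like the sketch below. The attribute names and values are an illustration reconstructed from the psm output for my application, not an authoritative schema – take the exact shape from your own `psm accs app` output:

```json
{
  "memory": "1G",
  "instances": "2",
  "services": [
    {
      "identifier": "ApplicationCache",
      "type": "caching",
      "name": "greetingCache",
      "username": "someUser",
      "password": "BogusPassword"
    }
  ]
}
```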

Note: the credentials for accessing the application cache have to be provided and yes, horrible as it sounds and is, the password is in clear text!

It seems that the credentials are not required. The value of password is now BogusPassword – which is not the true value of my password – and still accessing the cache works fine. Presumably the fact that the application is running inside the right network domain qualifies it for accessing the cache.

The Service Binding makes the following environment variable available to the application – populated at runtime by the ACCS platform:


Utilize Application Cache in greeting microservice

The simplest way to make use of the service binding’s environment variable is demonstrated here (note that this does not yet actually use the cache):


and the effect on requests:


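In code, making use of the service binding boils down to reading the environment variable that ACCS injects. The variable name CACHING_INTERNAL_CACHE_URL below is the one the ACCS caching documentation describes for a caching service binding; the localhost fallback is my own addition for running outside ACCS:

```javascript
// host of the Application Cache, injected at runtime by ACCS through the
// caching service binding (variable name per the ACCS documentation);
// fall back to localhost when running outside ACCS
var cacheHost = process.env.CACHING_INTERNAL_CACHE_URL || 'localhost';

// the Caching REST API is exposed on port 8080 under /ccs
var baseCCSURL = 'http://' + cacheHost + ':8080/ccs';
console.log('Application Cache REST endpoint: ' + baseCCSURL);
```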
Now to actually interact with the cache – through REST calls as explained here: – we will use the Node module node-rest-client. This module is added to the application using

npm install node-rest-client --save


Note: this instruction will update package.json and download the module code. Only the changed package.json is committed to the git repository. When the application is next built in Developer Cloud Service, it will perform npm install prior to zipping the Node application into a single archive. That npm install ensures that the sources of node-rest-client are downloaded and added to the archive that is deployed to ACCS.
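After the install, the dependencies section of package.json contains an entry along these lines (the exact version is whatever npm resolved at install time, so yours may differ):

```json
{
  "dependencies": {
    "node-rest-client": "^3.1.0"
  }
}
```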

Using this module, the app.js file is extended to read from and write to the application cache. See here the changed code (also in GitHub):

var http = require('http');
var Client = require("node-rest-client").Client;

var version = '1.2.3';

// Read Environment Parameters
var port = Number(process.env.PORT || 8080);
var greeting = process.env.GREETING || 'Hello World!';
// host of the Application Cache; CACHING_INTERNAL_CACHE_URL is populated
// at runtime by the ACCS caching service binding
var CCSHOST = process.env.CACHING_INTERNAL_CACHE_URL || 'localhost';

var requestCounter = 0;

var server = http.createServer(function (request, response) {
  getRequestCounter( function (value) {
     requestCounter = (value ? value + 1 : requestCounter + 1);
     // put new value in cache - but do not wait for a response
     console.log("write value to cache " + requestCounter);
     writeRequestCounter(requestCounter);
     response.writeHead(200, {"Content-Type": "text/plain"});
     response.end( "Version " + version + " says an unequivocal: " + greeting
                 + ". Request counter: " + requestCounter + ". \n");
  });
});

server.listen(port);

// functionality for interaction with the cache
var baseCCSURL = 'http://' + CCSHOST + ':8080/ccs';
var cacheName = "greetingCache";
var client = new Client();

var keyString = "requestCount";

function getRequestCounter(callback) {
    client.get(baseCCSURL + '/' + cacheName + '/' + keyString,
        function (data, rawResponse) {
            var value;
            // If nothing there, return not found
            if (rawResponse.statusCode == 404) {
              console.log("nothing found in the cache");
              value = null;
            } else {
              // Note: data is a Buffer object.
              console.log("value found in the cache " + data.toString());
              value = JSON.parse(data.toString()).requestCounter;
            }
            callback(value);
        });
}// getRequestCounter

function writeRequestCounter(requestCounter) {
    var args = {
        data: { "requestCounter": requestCounter },
        headers: { "Content-Type": "application/json" }
    };
    client.put(baseCCSURL + '/' + cacheName + '/' + keyString, args,
        function (data, rawResponse) {
            // Proper response is 204, no content.
            if (rawResponse.statusCode == 204) {
              console.log("Successfully put in cache " + JSON.stringify(args.data));
            } else {
              console.error("Error in PUT " + rawResponse);
              console.error('writeRequestCounter returned error '.concat(rawResponse.statusCode.toString()));
            }
        });
}// writeRequestCounter

Redeploy greeting microservice and demonstrate that request counter is shared and preserved

When we make multiple invocations to the greeting service, we see a consistently increasing series of count values:


Even when the application is restarted or redeployed, the request count is preserved and when the application becomes available again, we simply resume counting.

The logs from the two ACCS application instances provide insight into what takes place – how load balancing makes these instances handle requests intermittently – and how they read each other’s results from the cache:




Sources for this article are in GitHub: .

Blog article by Mike Lehmann, announcing the Cache feature on ACCS:

Documentation on ACCS Caches:

Tutorials on cache-enabling various technology-based applications on ACCS:

Tutorial on Creating a Node.js Application Using the Caching REST API in Oracle Application Container Cloud Service

Public API Docs for Cache Service –

Using psm to retrieve deployment details of ACCS application: (to find out how Application Cache reference is defined)
