
Node.js run from GitHub in Generic Docker Container backed by Dockerized Redis Cache

In a previous article I presented a generic Docker Container Image that can be used to run any Node.js application directly from GitHub or some other Git instance, by feeding the Git repo URL as a Docker run parameter (see https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/). In this article, I create a simple Node.js application that is pushed to GitHub and run in that generic Docker container. It uses a Redis cache that runs in a separate Docker container.


The application does something simple: it handles HTTP requests; each request increments a request counter, and the current value of that counter is returned. The earlier implementation of this functionality used a local Node.js variable to keep track of the request count. This approach had two spectacular flaws: horizontal scalability (adding instances of the application fronted by a load balancer of sorts) led to strange results because each instance kept its own request counter, and a restart of the application caused the count to be reset. The incarnation we discuss in this article uses a Redis cache as a shared store for the request counter, one that also survives a restart of the Node.js application instances. Note: of course this means Redis becomes a single point of failure, unless we cluster Redis too and/or use a persistent file as backup. Both options are available but are out of scope for this article.

Sources for this article can be found on GitHub: https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017/tree/master/part1.

Run Redis

To run a Docker Container with a Redis cache instance, we only have to execute this statement:

docker run -d --name redis -p 6379:6379 redis

We run a container based on the Docker image called redis. The container is also called redis and its internal port 6379 is exposed and mapped to port 6379 on the host. That is all it takes: the image is pulled and the container is started.
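
A quick way to verify that the cache is actually up (assuming the container is indeed named redis, as in the command above) is to list the container and ping Redis through the redis-cli that ships in the same image; the ping should return PONG:

docker ps --filter "name=redis"
docker exec -it redis redis-cli ping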


Create Node.js Application RequestCounter – Talking to Redis

To talk to Redis from a Node.js application, there are several modules available. The most common and generic one seems to be called redis. To use it, I have to install it with npm:

npm install redis --save


To leverage Redis in my application code, I need to require('redis') and create a client connection. For that, I need the host and port of the Redis instance. The port was specified when we started the Docker container for Redis (6379) and the host IP is the IP of the Docker machine (I am running Docker Tools on Windows).
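
Before building the actual request counter, the connection can be sanity checked with a minimal snippet (a sketch, using the callback-style API of the redis module and the same host/port defaults as the code below); a healthy connection replies with PONG:

// ping-redis.js - minimal connectivity check (sketch)
var redis = require('redis');

var client = redis.createClient({ "host": process.env.REDIS_HOST || "192.168.99.100", "port": process.env.REDIS_PORT || 6379 });

client.on('error', function (err) {
    console.log('Redis connection error: ' + err);
});

client.ping(function (err, reply) {
    // expect the reply to be PONG
    console.log('Redis replied: ' + reply);
    client.quit();
});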

Here is the naïve implementation of the request counter, backed by Redis. Naïve because it does not cater for race conditions between multiple instances that could each read the current counter value from Redis, each increase it and write it back, potentially causing one or more counts to be lost. Note that REDIS_HOST and REDIS_PORT can be specified through environment variables (read with process.env.<name of variable>).

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
var http = require('http');
var redis = require("redis");

var redisHost = process.env.REDIS_HOST || "192.168.99.100";
var redisPort = process.env.REDIS_PORT || 6379;

var redisClient = redis.createClient({ "host": redisHost, "port": redisPort });

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    var requestCounter = 0;

    redisClient.get(redisKeyRequestCounter, function (err, reply) {
        if (err) {
            res.write('Request Count (Version 3): ERROR ' + err);
            res.end();
        } else {
            if (!reply) {
                // no value found yet in Redis: initialize the counter
                console.log("no value found yet");
                redisClient.set(redisKeyRequestCounter, requestCounter);
            } else {
                requestCounter = Number(reply) + 1;
                redisClient.set(redisKeyRequestCounter, requestCounter);
            }
            res.write('Request Count (Version 3): ' + requestCounter);
            res.end();
        }
    })
}).listen(PORT);


console.log('Node.JS Server running on port ' + PORT + ' for version 3 of requestCounter application, powered by Redis.');

 

Run the Node.js Application Talking to Redis

The Node.js application can be run locally, from the command line, directly on the Node.js runtime.
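
For such a local run, it is enough to point the application at the Redis container through environment variables and start the script with node; a sketch, assuming a bash-style shell and the Docker machine IP (192.168.99.100) for Redis:

export REDIS_HOST=192.168.99.100
export REDIS_PORT=6379
export APP_PORT=3000
node requestCounter-3.js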

Alternatively, I have committed and pushed the application to GitHub. Now I can run it using the generic Docker Container Image lucasjellema/node-app-runner that I prepared in the article referenced earlier (https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/), using a single startup command:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-3.js" -e "REDIS_HOST=192.168.99.100" -e "REDIS_PORT=6379" lucasjellema/node-app-runner

This command passes the relevant values as environment variables: the GitHub repo URL, the directory in that repo, the exact script to run, the host and port for Redis, and the port the Node.js application should listen on for requests. In the standard Docker way, the internal port (8080) is mapped to the external port (8015).

 

The application can be accessed from the browser.
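
It can also be invoked from the command line with curl (assuming the Docker machine IP 192.168.99.100 and the external port 8015 from the run command above; the counts shown are only illustrative):

curl http://192.168.99.100:8015
# Request Count (Version 3): 1
curl http://192.168.99.100:8015
# Request Count (Version 3): 2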


 

Less Naïve Implementation using Redis Watch and Multi for Optimistic Locking

Although the code shown above seems to work, it is not robust. When scaling out, multiple instances can race against each other and overwrite each other's changes in Redis, because no locking has been implemented. Based on this article: https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo I have extended the code with an optimistic locking mechanism (using WATCH and MULTI). Additionally, the handling of client connections is improved, reducing the chance of leaking connections.
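
The mechanism can be illustrated in redis-cli before looking at the code (a sketch with two parallel sessions; requestCounter is the key the application uses):

# session A
WATCH requestCounter
GET requestCounter          # returns e.g. "5"
MULTI
SET requestCounter 6        # queued, not executed yet
# meanwhile, session B executes: SET requestCounter 42
EXEC                        # returns (nil): the queued SET is discarded because the watched key changed

When EXEC returns nil, the client knows a conflict occurred and simply retries the whole read-increment-write cycle; that is exactly what the increment/_increment pair in the code below does.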

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
// use an optimistic locking strategy to prevent race conditions between multiple clients updating the requestCount at the same time
// based on https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo 
var http = require('http');
var Redis = require("redis");

var redisHost = process.env.REDIS_HOST || "192.168.99.100";
var redisPort = process.env.REDIS_PORT || 6379;

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    increment(redisKeyRequestCounter, function (err, newValue) {
        if (err) {
            res.write('Request Count (Version 4): ERROR ' + err);
            res.end();
        } else {
            res.write('Request Count (Version 4): ' + newValue);
            res.end();
        }
    })
}).listen(PORT);


function _increment(key, cb) {
    var replied = false;
    var newValue;

    var redis = Redis.createClient({ "host": redisHost, "port": redisPort });
    // if the key does not yet exist, then create it with a value of zero associated with it
    redis.setnx(key, 0);
    redis.once('error', done);
    // ensure that if anything changes to the key-value pair in Redis (from a different connection), this atomic operation will fail
    redis.watch(key);
    redis.get(key, function (err, value) {
        if (err) {
            return done(err);
        }
        newValue = Number(value) + 1;
        // either watch tells no change has taken place and the set goes through, or this action fails
        redis.multi().
            set(key, newValue).
            exec(done);
    });

    function done(err, result) {
        redis.quit();

        if (!replied) {
            if (!err && !result) {
                err = new Error('Conflict detected');
            }

            replied = true;
            cb(err, newValue);
        }
    }
}

function increment(key, cb) {
    _increment(key, callback);

    function callback(err, result) {
        if (err && err.message == 'Conflict detected') {
            _increment(key, callback);
        }
        else {
            cb(err, result);
        }
    }
}

console.log('Node.JS Server running on port ' + PORT + ' for version 4 of requestCounter application, powered by Redis.');

This Node.js application is run in exactly the same way as the previous one, using requestCounter-4.js as APP_STARTUP rather than requestCounter-3.js.

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-4.js" -e "REDIS_HOST=192.168.99.100" -e "REDIS_PORT=6379" lucasjellema/node-app-runner
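
To see the shared counter in action, a second instance can be started on another external port and both can be hit alternately (a sketch; port 8016 and the alternating calls are just an illustration):

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8016:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-4.js" -e "REDIS_HOST=192.168.99.100" -e "REDIS_PORT=6379" lucasjellema/node-app-runner

curl http://192.168.99.100:8015
curl http://192.168.99.100:8016

Both instances report the same, steadily increasing count, because the state lives in Redis rather than in the individual Node.js processes.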
