
State management in serverless functions – connection pooling in AWS Lambda leveraging memoized functions

The problem

Setting up a datastore connection is an expensive process. The number of available connections is often limited and creating a connection costs precious time, and in serverless functions time literally means money. Especially in something like an event-processing Lambda, where thousands of events can call your function within a couple of seconds, you would rather not set up a connection for each individual invocation. Thankfully, you don’t have to! By using a couple of tricks in JavaScript and the way AWS tears down Lambda runtime environments, we can reuse the same datastore connection for loads of sequential invocations.

Memoization as a concept

The first trick we will use is memoization: caching the result of a function so that subsequent calls return the stored value instead of recomputing it. In JavaScript we can implement this with a closure, which retains a function’s execution scope after that function has returned. Due to how scoping works in JavaScript, we can do the following:

function memoizedGetNonRandom() {
  // `nonRandom` lives in the closure and survives between calls
  let nonRandom = undefined;
  return _getNonRandom;

  function _getNonRandom() {
    // Return the cached value if we have already computed one
    if (nonRandom !== undefined) return nonRandom;
    nonRandom = Math.random();
    return nonRandom;
  }
}

const getNonRandom = memoizedGetNonRandom();

On the last line of this snippet the variable getNonRandom is assigned the return value of the function memoizedGetNonRandom, which is itself a function. Because the variable nonRandom was in scope when _getNonRandom was declared, it is still available within getNonRandom and can be read and set. The first time getNonRandom is called, nonRandom is undefined, so Math.random is called, nonRandom is set and its value returned. Every time getNonRandom is called thereafter, nonRandom is already set and the same value is returned.
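To make this behaviour concrete, here is a small usage sketch (the logged value is just an example):

console.log(getNonRandom()); // e.g. 0.7364... – Math.random() runs on this first call
console.log(getNonRandom()); // same value again – read from the closure, not recomputed
console.log(getNonRandom() === getNonRandom()); // true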

Memoization used in practice

The same memoization principle can be used to return a datastore client, for example an ElasticSearch client which can manage our ElasticSearch connection(s) for us.

import * as elasticsearch from 'elasticsearch';
import * as httpAwsEs from 'http-aws-es';
import { config } from './config'; // project-specific configuration module with ES host settings

const memoizedGetEsClient = (): (() => Promise<elasticsearch.Client>) => {
  let initializedEsClient: elasticsearch.Client | undefined;

  return async (): Promise<elasticsearch.Client> => {
    // Return the cached client if it has already been initialized
    if (initializedEsClient) return initializedEsClient;

    const esClient = new elasticsearch.Client({
      hosts: [{ ...config.es }],
      connectionClass: httpAwsEs,
    });

    // Verify connectivity before caching the client
    await esClient.ping({ requestTimeout: 500 });

    initializedEsClient = esClient;
    return esClient;
  };
};

export const getEsClient = memoizedGetEsClient();

The function getEsClient can now be called in many different places within the same Lambda invocation and will return the same ElasticSearch client each time. A major advantage of using a memoized function like this is the modularity of the code. You do not need to worry about where in your code you call this function for the first time, or whether you have already initialized your ES client; it will always just work and you no longer need to worry about the implementation details.
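As a sketch (the module path, index and document names are hypothetical), two unrelated pieces of code can simply await the client without coordinating who initializes it:

import { getEsClient } from './esClient'; // hypothetical path to the module above

export const indexOrder = async (order: { id: string }) => {
  const client = await getEsClient(); // initializes on first use, cached afterwards
  await client.index({ index: 'orders', type: '_doc', id: order.id, body: order });
};

export const orderExists = async (id: string) => {
  const client = await getEsClient(); // same client instance as in indexOrder
  return client.exists({ index: 'orders', type: '_doc', id });
};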

AWS Lambda execution context

As stated in the AWS Lambda documentation, AWS creates an execution context for each concurrent instance of your Lambda. This is a temporary runtime environment based on the configuration settings you’ve provided. When a function has not been called in a while, a ‘cold start’ is required, in which AWS sets up everything your Lambda needs. This process may take up to thirty seconds. Subsequent invocations can then reuse this execution context and avoid the ‘cold start’. Between function invocations, AWS ‘freezes’ the execution context, ‘thawing’ it when the function is reused. This has several consequences, but for our current purpose we will focus on one of them:

Variables declared outside of the function’s handler method remain in existence between function invocations

This means that we can reuse our memoized ElasticSearch client across different Lambda invocations, since our getEsClient variable will remain the same! Because the execution context is frozen between function invocations, we are not billed for the time between invocations, only for the time the function is actually working. When multiple concurrent instances of the same Lambda are running, each instance has to initialize its own ElasticSearch client, which can then be reused by invocations that run in the same context. It is never guaranteed that two Lambda invocations will use the same execution context, so it is important that a Lambda never depends on initial state for its proper execution. An execution context can remain available for an hour or more.
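Putting it together, a handler sketch (handler shape, import path and index names are hypothetical) could look like this; building and pinging the client only happens on a cold start, while warm invocations in the same execution context get the already-initialized client straight from the closure:

// getEsClient (and the client cached inside its closure) lives in module scope,
// so it survives between invocations when the execution context is reused.
import { getEsClient } from './esClient';

export const handler = async (event: { id: string }) => {
  const esClient = await getEsClient(); // cold start: sets up the connection; warm start: cached

  const found = await esClient.exists({ index: 'events', type: '_doc', id: event.id });
  return { found };
};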

Each concurrent Lambda gets its own execution context. These do not necessarily have to be created at the same time, as the number of concurrent Lambdas depends on the load and changes dynamically. The first invocation of a Lambda within an execution context will initialize an ElasticSearch client, which can then be used by future invocations within the same context.