Somewhat Stateful Serverless Functions on Oracle Cloud Infrastructure – Implementing a DIY Cache

We call them serverless and we consider them stateless: functions on Oracle Cloud Infrastructure (and on other cloud platforms). Of course they are neither. They run on servers. And they can carry some state, though only opportunistically and not reliably.

Functions on OCI run inside a container. The container is started when the function is first invoked, and it is left running for some time (several minutes) in order to perhaps service additional function requests. While the container keeps running, state can be retained as well. When the function is, for example, a Node function, global variables can be set and read across requests. For as long as the container keeps running. For which there is no guarantee.
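As a minimal sketch of that idea (the names are illustrative and no Fn/OCI wiring is shown):

```javascript
// Module-level (global) variables in a Node function live for the
// lifetime of the warm container, not for a single invocation.
let invocationCount = 0;  // survives across warm invocations
const state = {};         // opportunistic, non-guaranteed state

function handleInvocation(input) {
  invocationCount += 1;
  if (input && input.key !== undefined) {
    state[input.key] = input.value;  // remember something for later calls
  }
  return { invocationCount, state };
}
```

A second invocation handled by the same container sees the count and the values written by the first; a fresh container starts from zero again.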

When the number of requests increases to a point where the OCI Functions framework deems it necessary to start a second instance of the container to help handle the requests, this second container obviously does not have the state of the first. At that point, we should consider the state lost.

So if the load on a function is not excessive and at least one request arrives every 3-6 minutes (I have not yet figured out the exact idle time limit OCI Functions uses before killing the container), the state can be kept. The state could be persisted periodically, to a backend database, some other persistence service, or simply to a file on OCI Object Storage. When a container is first started, it could also retrieve its initial state from such sources.
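Periodic persistence could be as simple as this sketch (here `saveState` stands in for a write to a database or Object Storage; none of these names are actual OCI APIs):

```javascript
// Sketch: persist in-memory state on a fixed interval.
// getState returns the current state; saveState writes it somewhere
// durable (placeholder for a backend database or Object Storage call).
function startPeriodicPersist(getState, saveState, intervalMs) {
  const timer = setInterval(() => saveState(getState()), intervalMs);
  timer.unref();  // do not keep the process alive just for persistence
  return timer;   // caller can clearInterval(timer) to stop
}
```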


Cache on OCI Functions

From my point of view, OCI Functions lacks a facility: a simple way to remember – to use a [quick] cache for keeping track of certain information, across multiple functions and function executions. It feels as if, for the time being, I could conjure up a simple cache with a spectacular lack of robustness using some of the ingredients described above. A function can keep state. If the function is kept alive with a heartbeat call every X (say three) minutes, it hangs on to that state for potentially quite a long time. It could save its state to a backend file periodically and/or whenever the contents of the state change. In that case, when the container is lost in some way, the cache contents can carry over to a new instance.


To deal with multiple instances of this cache function running side by side, we could take a CQRS approach to the persisted cache contents: one function handles cache updates; this function does not carry state – it writes its changes to the persisted cache. Reading values from the cache is handled by a second function; this function reads the cache from the backend when it is first initialized and periodically checks whether any changes have been written to the persisted cache contents, reloading the cache if so. Of course this means it takes some time for changes to become available from the cache. Note: this CQRS approach is only needed when the load on the cache is substantial and multiple function instances are required to handle the requests.
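The read side of that split could look roughly like this sketch, where `getPersistedVersion` and `loadPersistedCache` are placeholders for Object Storage calls (for example an ETag check and a GET of the backing file):

```javascript
// Sketch of the query (read) side of the CQRS split. The reader keeps
// an in-memory copy of the cache and reloads it whenever the version
// marker of the persisted cache changes. Both function parameters
// stand in for real Object Storage calls.
let localCache = null;
let localVersion = null;

function readFromCache(key, getPersistedVersion, loadPersistedCache) {
  const remoteVersion = getPersistedVersion();
  if (localCache === null || remoteVersion !== localVersion) {
    localCache = loadPersistedCache();  // full reload on any change
    localVersion = remoteVersion;
  }
  return localCache[key];
}
```

Comparing a cheap version marker first keeps most reads local; only an actual change in the persisted cache triggers a reload, which is where the staleness window comes from.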


Implementation of Naïve Function Cache

Just for kicks, I will show how I created a naïve implementation of the cache, using:

  • an OCI serverless Function with Fn
  • the API Gateway to expose a route to the function – for GET (to read) and PUT (to write) values to the cache
  • existing and new code to write to and retrieve from Object Storage
  • OCI Monitoring Health Check to perform the heartbeat call to keep the cache function alive

Sources discussed in this article are available on GitHub in this repo:

1. Create new function:

fn init --runtime node cache

Implement the function in a few steps:

  1. Handle request to put data in cache
  2. Handle request to get data from cache
  3. Implement ‘persist cache to Object Storage whenever changes have been made to data in the cache’
  4. Implement ‘retrieve cache from Object Storage when the function is initialized’
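Stripped of the Fn and Object Storage wiring, the four steps above could be sketched like this (the `storage` object with `restore`/`persist` is a placeholder for the real Object Storage code, not an actual SDK API):

```javascript
// Naive in-function cache: lazily restored from a backing store on the
// first invocation of a container, persisted after every change.
let cache = null;

function handleRequest(method, key, value, storage) {
  if (cache === null) {
    cache = storage.restore() || {};   // step 4: restore on initialization
  }
  if (method === 'GET') {
    return cache[key];                 // step 2: get data from cache
  }
  if (method === 'PUT' || method === 'POST') {
    cache[key] = value;                // step 1: put data in cache
    storage.persist(cache);            // step 3: persist after each change
    return 'OK';
  }
  throw new Error(`Unsupported method ${method}`);
}
```

In the real function, the `method`, `key` and `value` would come from the HTTP request that the API Gateway forwards to the function.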


2. Configure route on API Gateway

I have added a route to an existing API Deployment on an existing API Gateway. The route path is /cache and the GET, POST and PUT methods are supported. The route has the cache function as its target.


3. I have added code to the cache function to try to restore the cache from the Object Storage backing file when the function [instance] is invoked for the first time (after the container was spun up). Whenever data has been manipulated, the cache contents are written to the backing file to persist the changes.


Write cache to file after data was manipulated


4. Schedule Health Check on Oracle Cloud Infrastructure – Monitoring

A health check is a scheduled ping or HTTP request – used for assessing the health of a specific endpoint. In this case, I am using the scheduled call (once every 5 minutes) to keep the function [instance] alive – and thereby the cache contents.


This health check invokes the API Gateway route to the cache function with a GET operation, with an interval of 5 minutes. The vantage point in this case is AWS East US 2 – the calls are made from another cloud provider’s infrastructure. Useful for a real health check, irrelevant in my particular use case.

Running the Cache in the Wild

After deploying the function, configuring the route on API Gateway and scheduling the Health Check, everything should be in working order. I can now start adding data to the cache, retrieve values from the cache and continue to be able to access data long after the first instance of the function cache has died.

The first call to put something into or get something out of the cache takes a while (several seconds): the function needs to be initialized. After that, calls are pretty quick (100-200 ms from my local Postman).

A URL to get a value from the cache looks like this: 


Values are saved to the cache with either a POST or a PUT operation; the request looks like this in Postman:


Here is the fn-bucket on Object Storage where the file backing the cache has been created:


The Health Check keeps invoking the function once every 5 minutes. This pattern is easily spotted in the metrics for the function, the API Gateway and the Health Check.

For the function:


12 calls per hour for most hours, some spikes. And a fairly small duration (40 ms) for most function executions.

The metrics for the API Gateway Deployment:


