Oracle Functions is the Functions as a Service (FaaS) offering on Oracle Cloud Infrastructure. Functions are the serverless, stateless execution engines that play such an important role in cloud native applications. Functions handle requests and events, contributing to live application behavior, streaming activities and integrations. Functions are also used for automation tasks performed as part of the Ops half of DevOps, for example log processing, monitoring and incident resolution.
The mental picture for functions is one of ‘serverless-ness and stateless-ness’: function instances are started on demand when a request or event needs to be processed and die when done.
The Oracle Functions FaaS offering is based on Project Fn. Functions are typically implemented in one of the major programming languages and wrapped using the corresponding FDK (Function Development Kit).
As a developer, this is about all you need to know about a function: implement the functionality, hook into the request handling performed by the FDK and commit your code. From that point on, build, delivery, deployment, instantiation and execution are taken care of in a completely serverless and stateless model.
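As a sketch, the handler logic of a minimal Node function looks something like this. The function and field names are illustrative; in an actual deployed Fn function, the handler is registered with the Node FDK (the @fnproject/fdk package) via fdk.handle, as indicated in the comment:

```javascript
// Handler logic for a hello-world style function.
// In a deployed Fn function, you would register it with the FDK:
//   require('@fnproject/fdk').handle(handler);
function handler(input) {
  // input is the (parsed) request payload handed over by the FDK
  const name = input && input.name ? input.name : 'world';
  return { message: `Hello, ${name}` };
}
```

The FDK takes care of receiving the request, invoking the handler and serializing the response; the developer only writes the function body.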
This is not untrue. However, it helps to have a little more understanding of the physical reality. It can help you save money and effort, better leverage the technology and understand the actual behavior. Of course functions are not really serverless, and it turns out they are not really stateless either.
This article discusses some of the physical realities of Oracle Functions: what happens during build and deployment, what happens at runtime, and which limitations and characteristics you should be aware of. You can, for example, retain memory state and use a local file system – across multiple requests.
Build, Deliver and Deploy
When the Function is built, a Docker container image is created – typically through the Fn Command Line Interface. The mental picture of deploying the function consists of handing the container that contains the function implementation to some black box execution engine, adding any number of configuration parameters to influence the environment specific runtime behavior.
Again, this is not untrue. But there is a little bit more to the process. The building of the Docker container image adds several components: an operating system (by default Alpine Linux), the language runtime for the implementation language, the Fn FDK and of course the function implementation itself. Alternatively, you can use your own Docker container image as the starting point for building an Fn function.
The resulting container image is pushed to a container image registry (typically one on OCI) and the function is registered with Oracle Cloud Infrastructure. As part of this registration, some metadata regarding the function is recorded, taken from the func.yaml file or from default values if none are provided in that file.
The memory allocated to the function when executing is defined in this manner. The default value is 128 MB; other options are 256, 512 and 1024 MB. Associated with the memory size is the size of the local filesystem (32, 64, 128 and 256 MB respectively) that the function can make use of (the /tmp directory can be written to, the entire local filesystem can be read from).
The second trait defined for a function is its timeout. The default value is 30 seconds – meaning that if the function does not respond to a request or event within 30 seconds, it is considered to have failed and processing is halted. The timeout can be extended in the func.yaml file to a maximum of 120 seconds. That is the longest a function on OCI Functions can work on processing a single request.
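Both settings live in the function's func.yaml file. A minimal sketch (the name, version and runtime values are illustrative; memory must be one of 128, 256, 512 or 1024 and timeout at most 120):

```yaml
schema_version: 20180708
name: my-function
version: 0.0.1
runtime: node
memory: 256    # MB; also determines the filesystem size (here: 64 MB)
timeout: 120   # seconds; the maximum allowed value
```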
Note: the combination of memory and processing time determines the cost of running a function ($1.50 per 100,000 GB memory-seconds, plus $0.02 per 100,000 invocations).
An additional physical aspect of deploying a function is the link – through the function application – to an OCI tenancy, compartment, VCN and subnet. Through this association, the function is linked to a region and its associated data center facilities.
For resiliency and high availability, best practice is to specify a regional subnet for an application (or alternatively, multiple AD-specific subnets in different availability domains). If an availability domain specified for an application ceases to be available, Oracle Functions runs functions in an alternative availability domain.
At Runtime
As we knew from the beginning – there is no such thing as serverless compute. However, how aware do we have to be of the underlying server infrastructure? With Oracle Functions: not very. We will be able to tell that several container instances have been started, but we have no way of telling – or influencing – on what physical hardware or even on which VMs that happens. The infrastructure is hidden from view (and from our concern and our ability to mess things up).
When the first request ever that is to be processed by a function is received by the OCI FaaS facility, the corresponding container image is looked up in the container image registry and a container (instance) is started, using the memory shape defined for the function. The currently applicable configuration parameters are injected into the container as environment variables. This is a cold start for the function – which can take considerable time (10-20 seconds is no exception). When the container is running, FaaS hands the request to the container for processing. At that point, the timeout countdown starts.
Of course, in order to run a container, a container orchestrator (Fn Server) is required; this platform component requires a VM to run in, and that VM is hosted on a physical server somewhere in the OCI region. That part of the stack is not visible. We as a DevOps team have a responsibility for developing, deploying and running functions, but we do not have to concern ourselves with the infrastructure. So in our world, we can consider functions by and large to be serverless.
When the request is handled, the function in the container returns the response.
At this point the container is not removed. Only after 10 minutes of inactivity will the container be stopped. During that time, it remains hot, ready to serve new requests. The OCI FaaS infrastructure keeps track of containers that are still available to handle requests – and that are currently not processing a request. The first request handled by a container took a while (the cold start) and all subsequent requests are much faster if the container is already available, ready to run.
When I investigated Oracle Functions behavior with Node applications, I found that for about 14-15 seconds after the response was returned from the function, I could still run background activity started by the function while processing the request. It seemed that after that time, the container was paused.
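A sketch of what that looks like in a Node handler. The handler logic is shown as a plain function (in a real Fn function it would be registered with the FDK), and note that the roughly 14-second window is an observation from my tests, not a documented guarantee:

```javascript
function handle(input) {
  // Kick off work that continues after the response has been returned.
  // In my tests, such background activity kept running for ~14 seconds
  // after the response, before the container was paused.
  setTimeout(() => {
    console.log('flushing buffered log lines after the response went out');
  }, 100); // delay shortened here for illustration

  // the response is returned immediately; the timer above outlives it
  return { ok: true };
}
```

Relying on this window is risky: anything that must happen should happen before the response is returned, or be handed off to a durable mechanism such as a queue.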
Note that a container will only handle one request at a time. There is no concurrent request handling.
However, subsequent requests have access to the contents of the file system including the /tmp directory that the function can write to. And these subsequent requests can use the same global variables; the memory state is retained during the lifetime of the container.
So even though the mental picture is one of statelessness, in actual fact there can be state – and this is something you can and should make use of, for example by retaining expensive objects such as connections or the results of earlier service calls. At the same time, your code should not rely on this state: FaaS may stop containers at any moment, or start new ones that do not have that initial state.
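A minimal sketch of such warm-container state in Node (the names are illustrative, and a real function would register handle with the FDK):

```javascript
// Module-scope state survives between requests for as long as the
// container stays warm; it disappears whenever FaaS starts a fresh one.
let requestCount = 0;
let cachedConnection = null; // e.g. a database client or connection pool

function getConnection() {
  if (cachedConnection === null) {
    // expensive setup only runs on the first request in this container
    cachedConnection = { openedAt: Date.now() }; // placeholder for a real client
  }
  return cachedConnection;
}

function handle(input) {
  requestCount += 1; // > 1 means this container already served requests
  const conn = getConnection(); // reused on warm invocations
  // the function must also behave correctly when requestCount is 1
  return { servedByThisContainer: requestCount };
}
```

The cache is an optimization, never a requirement: the first request in every new container sees requestCount at 1 and pays the connection setup cost.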
The picture shows how functions on OCI automatically leverage OCI facilities such as Audit, Monitoring and Logging – yet another aspect of operations that our team does not have to set up themselves.
When multiple requests are to be processed at the same time, OCI FaaS starts additional containers. They can run on the same VM or on a different VM; we do not know, nor should we care.
We do know that containers can be started to run functions up to a total of 30 GB of memory – the default limit that applies across an Availability Domain (and that can be increased through a service request). Provided the limit is not exceeded, there is no difference in response time (latency) between functions executing on the different containers. When functions from different applications are invoked simultaneously, Oracle Functions ensures these function executions are isolated from each other.
After ten minutes of inactivity, OCI FaaS scales in by stopping a container. This should not matter to us: enough capacity is retained to handle the current load. The state of the function is lost at this point. Note: we do not pay for containers that are inactive. We only pay for and during the actual function execution.
Conclusion on Physical Function Findings
There is never concurrent request handling in a single function instance (== container)
Each instance can handle multiple [subsequent] requests
State is kept in instance between requests, until the death of an instance
An instance can perform some background work after a request is responded to (14 seconds?); I am not sure whether such activity is counted towards the pay-per-use – I assume that it is not (because it is very hard to measure).
An instance is killed after [approximately] 10 minutes.
We as a DevOps team determine the timeout for a function (default 30 seconds) and the memory & filesystem shape (default 128 MB RAM, 32 MB file system)
The first 2M function executions (counting across all functions in the tenancy, assuming the default 128 MB memory shape and subsecond processing time) are free of charge. And after that, executions are still dirt cheap. You should be using more functions – or at least consider them.
OCI Documentation on Functions – https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm