Container Instances on Oracle Cloud Infrastructure are VM-based environments that can be leveraged as “serverless” container runtimes. Once a container instance has been defined – a very simple, rapid procedure – we can specify the container images (compliant with the Open Container Initiative, also abbreviated OCI) from which containers should run on the instance. Not having to configure, provision and manage VMs in order to run containers is the main benefit of Container Instances – as is being able to work through a simple, declarative interface to create the container instance, run the container(s) and manage the instance.
Multiple containers can run on an instance – similar to the way in which multiple containers can run inside a Kubernetes Pod – for example an application container with one or more sidecars that take care of logging, request routing and other supportive activities. The containers on the instance share the same networking namespace – they can find each other on localhost, or 127.0.0.1. To prevent one container from consuming too many resources and starving the other containers in the container instance, you can throttle the resources made available to each container.
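To make the shared network namespace concrete: below is a minimal sketch of what a hypothetical monitoring sidecar could look like – port 8080 and the /health endpoint are assumptions for illustration, not part of any Container Instances API.

```python
# Hypothetical sidecar: reaches the application container over the shared
# localhost interface of the container instance. Port 8080 and the /health
# endpoint are illustrative assumptions.
import time
import urllib.request

while True:
    try:
        with urllib.request.urlopen("http://127.0.0.1:8080/health", timeout=2) as resp:
            print("app container healthy, HTTP status:", resp.status)
    except OSError as exc:
        print("app container unreachable:", exc)
    time.sleep(30)
```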
A container instance can be quite small and also very large in terms of CPU and memory: up to 128 vCPUs and up to 1024 GB of RAM. It always gets 15 GB of ephemeral storage. Options to attach persistent volumes with OCI Block Storage and OCI File Storage (FSS) will be available soon. The container instance VM is a lightweight VM, tuned for running containers. The OCI Container Instances VMs use a fully isolated base OS and kernel inside each container instance. For the container instance base image, a stripped-down Oracle Linux image is used with several optimizations designed to significantly reduce the boot-up time. The base OS inside the container instance is managed through a VSOCK interface, which is limited to the OS update functionality to improve the overall security posture while removing the OS management overhead. Note: the VM that OCI creates for the container instance is completely managed: we do not see it as a regular Compute VM, and we cannot get access to the OS on this VM. We know there is a VM, but in practice we only interact with the hypervisor management layer that runs the containers.
It appears that the shape selected for the container instance at creation time is fixed and cannot be changed at a later moment. It also does not seem possible to move a container from one container instance to another. (Of course, for truly ephemeral containers, moving to a differently shaped container instance should really be as simple as starting a container from the same container image on the more shapely container instance and updating the network routing to this new instance. Any persisted data should live outside the container and is therefore available to the new container.)
Costs are charged for the container instance – not for the containers running in it. Costs are only charged for the compute resources underlying the container instance, not for the serverless layer around it. Because the OS in the VM is optimized for running containers, the overall price/performance could be slightly better for a Container Instance versus running a container on a self-managed VM. No costs are incurred when the Container Instance is not active. A container instance becomes inactive as soon as all containers within that instance stop and the restart policy does not restart them. Container instances therefore are a good option for ephemeral workloads – scheduled or triggered jobs, for example for build, deploy, data processing and reporting tasks.
For Container Instances that need to stay up, such as those used for web applications, customers can configure restart policies to restart containers within a container instance in case of failure, ensuring that the application is always up.
This article shows an example of running a PostgreSQL 15 database on Container Instances – something that is now really simple. The tutorial referenced in the resources section describes how to run a WordPress container that connects to a MySQL container, both running on the same Container Instance – quite an instructive and easy-to-follow example.
Running PostgreSQL on OCI Container Instances
The container image we will run: postgres:15.1.
The steps we have to go through (a scripted sketch follows the list):
- configure a container instance – with the appropriate physical shape and network configuration (new VCN, join existing VCN, publish public IP)
- indicate container image(s) to run on the instance and configure environment variables; set additional startup options
- provision the instance along with the container(s)
- (optional) configure network access rules in order to access the desired ports on the container instance (and/or allow the software running in the container(s) on the container instance to access resources outside the VCN)
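These steps can also be scripted. Below is a minimal sketch using the OCI Python SDK; the model and field names reflect my reading of the SDK’s container_instances package and may differ slightly between SDK versions, and all OCIDs, the availability domain, shape and password are placeholders to substitute with your own values.

```python
# Minimal sketch: create a container instance running postgres:15.1 with the
# OCI Python SDK (pip install oci). All OCIDs, the availability domain and
# the password are placeholders; verify model names against your SDK version.
import oci

config = oci.config.from_file()  # reads ~/.oci/config
client = oci.container_instances.ContainerInstanceClient(config)
models = oci.container_instances.models

details = models.CreateContainerInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:EU-FRANKFURT-1-AD-1",
    display_name="postgres-ci",
    shape="CI.Standard.E4.Flex",
    shape_config=models.CreateContainerInstanceShapeConfigDetails(
        ocpus=1, memory_in_gbs=8
    ),
    container_restart_policy="ON_FAILURE",
    containers=[
        models.CreateContainerDetails(
            display_name="postgres",
            image_url="postgres:15.1",
            environment_variables={"POSTGRES_PASSWORD": "a-secret-password"},
        )
    ],
    vnics=[
        models.CreateContainerVnicDetails(
            subnet_id="ocid1.subnet.oc1..example",
            is_public_ip_assigned=True,
        )
    ],
)

response = client.create_container_instance(details)
print("work request id:", response.headers.get("opc-work-request-id"))
```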
In the OCI Console – Navigate to Developer Services, Containers & Artifacts, Container Instances.
Click the Create Container Instance button.
Define the name for the new Container Instance, select the compartment and define the shape – specify the number of OCPUs and the amount of memory.
Next, configure networking. This means specifying the VCN and subnet to which the container instance is attached, and whether or not to assign a public IP address to the Container Instance.
In this case I am joining the container instance to an existing VCN as well as an existing subnet. I also want to have it exposed on a public IP address.
Under Advanced Options, we can specify the (automatic) container restart policy for the instance. Select between Always, Never, and On failure. When an individual container exits (stops or fails), the restart policy is applied, using the exit code of the container to decide, in the case of On failure, whether a restart should take place. If all containers exit and do not restart, the container instance is shut down.
Click Next.
On this tab, we specify the container image(s) that run on the container instance. Note: we can later add containers to the container instance or remove them from it. First define the name for the container. Then launch the image selection dialog.
Container images can be retrieved from OCI Container Image Registry – first tab – or from external registries like Docker Hub – second tab. In this case, the PostgreSQL image is found on Docker Hub. Its fully qualified name is postgres:15.1.
Click Select Image. Note that the entry is not validated at this point and no image is pulled right now.
Next, this container can be configured further. We can set environment variables and, under advanced options, also define the working directory and ENTRYPOINT arguments for the container, as well as the amount of resources that the container may consume, in absolute terms or as percentages. By default, the container can use all resources in the container instance.
Here I have defined the POSTGRES_PASSWORD environment variable – a variable expected by the Postgres container image.
Press Next.
Step 3 allows us to review the configuration and retrace our steps to make refinements.
In this case I click on Create to kick off provisioning of the Container Instance and the start of the container.
After about 50 seconds, the container instance is ready and running. A public IP address is assigned and shared with me:
The container is running inside the instance. That means that I now have a PostgreSQL 15.1 instance at my disposal. However, in order to actually get to it, a little network configuration is needed: I have to allow network traffic on the subnet from anywhere to port 5432, on which the container exposes the PostgreSQL instance.
Click on the subnet:
The subnet’s detail page appears.
Click on a security list entry – to edit it and add an ingress rule that allows incoming traffic on port 5432:
Click on Add Ingress Rule(s). A popup appears where the new ingress rule can be defined.
Set the source CIDR to 0.0.0.0/0 to allow traffic from anywhere. Set the Destination Port Range to 5432 – the port on which we want to allow traffic. Add a description to help us and our colleagues remember why we did this. Then press Add Ingress Rule. The new rule takes effect immediately. Note: we have now allowed incoming traffic targeted at port 5432 to all instances in any subnet that uses this security list. That may be a little too broad.
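For those who prefer to script this network step as well, here is a minimal sketch using the OCI Python SDK’s networking client; the security list OCID is a placeholder.

```python
# Minimal sketch: append an ingress rule for port 5432 to an existing
# security list. Note that update_security_list replaces the complete rule
# set, so we fetch the current rules first. The OCID is a placeholder.
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

security_list_id = "ocid1.securitylist.oc1..example"
current = network.get_security_list(security_list_id).data

new_rule = oci.core.models.IngressSecurityRule(
    protocol="6",  # TCP
    source="0.0.0.0/0",
    source_type="CIDR_BLOCK",
    tcp_options=oci.core.models.TcpOptions(
        destination_port_range=oci.core.models.PortRange(min=5432, max=5432)
    ),
    description="Allow PostgreSQL traffic to the container instance",
)

network.update_security_list(
    security_list_id,
    oci.core.models.UpdateSecurityListDetails(
        ingress_security_rules=current.ingress_security_rules + [new_rule]
    ),
)
```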
Let’s try to connect to the PostgreSQL instance – from DBeaver, a popular database management tool.
Run DBeaver. Click File and select New Connection.
Select Database Connection. Press Next. Select the PostgreSQL tile and press Next.
Enter the public IP address assigned to the Container Instance as the Host. Enter the same value used for the POSTGRES_PASSWORD environment variable defined for the container as the password. Accept the default settings for port and database (and username).
Press Finish.
The connection is configured and established successfully.
My local laptop running DBeaver is peeking inside the PostgreSQL database running on my OCI Container Instance.
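The same connectivity check can be scripted instead of using a GUI tool – a quick sketch with the psycopg2 driver, where the host IP and password are placeholders for the public IP of the container instance and the POSTGRES_PASSWORD value:

```python
# Quick connectivity check with psycopg2 (pip install psycopg2-binary).
# Host and password are placeholders for the container instance's public IP
# and the POSTGRES_PASSWORD value set on the container.
import psycopg2

conn = psycopg2.connect(
    host="203.0.113.10",      # public IP of the container instance
    port=5432,
    dbname="postgres",        # default database
    user="postgres",          # default superuser
    password="a-secret-password",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```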
After stopping the container, the container instance becomes inactive; no further costs are incurred. The container is retained and can be restarted at a later moment in time. Starting the inactive container instance plus its container took close to 45 seconds – fine for scheduled workloads, a little slow for triggered ephemeral workloads such as build jobs.
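Stopping and restarting can also be done programmatically – a sketch, again assuming the SDK’s container_instances client; the container instance OCID is a placeholder.

```python
# Sketch: stop and later restart a container instance; the OCID is a
# placeholder for the id of the container instance created earlier.
import oci

config = oci.config.from_file()
client = oci.container_instances.ContainerInstanceClient(config)
ci_id = "ocid1.computecontainerinstance.oc1..example"

client.stop_container_instance(ci_id)   # all containers stop; no more costs
# ... later ...
client.start_container_instance(ci_id)  # containers start again
```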
Resources
Tutorial Manage container workloads on OCI using the Container Instances service – https://docs.oracle.com/en/learn/manage-oci-container-instances/#introduction
Documentation https://docs.oracle.com/en-us/iaas/Content/container-instances/creating-a-container-instance.htm#creating-a-container-instance
Comparison with Azure Container Instances – https://blogs.oracle.com/cloud-infrastructure/post/oci-vs-azure-serverless-container-instances-best
Official PostgreSQL Container Images on Docker Hub – https://hub.docker.com/_/postgres/
DBeaver Homepage – https://dbeaver.com/