There are moments when you are really proud of what you have achieved. You want to show it to colleagues and share your knowledge in presentations. For that, it can be useful to run your container in the cloud for a relatively short time. In this blog, I will explain the four different ways to run a container in AWS, with their pros and cons. After that, I will explain how showing a container to colleagues can be done on Fargate, Amazon's PaaS solution for containers in the cloud.
Four different ways to run a container in AWS
There are four different ways to run a container in AWS:
1) Start a virtual machine from the EC2 console, install Docker on it and then use the virtual machine as if it were running on-premises. You don't use the ECS agent to control the Docker environment; you use the Docker tools to get high availability. This solution can be used to migrate Docker from on-premises environments to the cloud: operators will not have to learn new functionality for the container environment, because it works the same as in the on-premises environment. They do, however, have to learn about the network environment in AWS (VPCs, subnets, security groups, routing tables, etc.).
2) Running in ECS, using EC2. The difference from the first approach is that you -do- use the management agent of ECS. ECS will take care of load balancers and auto scaling groups for the EC2 instances. It is possible to log on to the instances to see what is happening within Docker. The main advantage of this solution is that you can decide to shut down the instances (and save money) at moments when you don't use the containers; the biggest part of the costs of this solution is the cost of running the virtual machines. For our use case, using ECS on EC2 has the advantage that we can run the containers on t2.micro virtual machines, which makes it possible to use the free tier.
3) Running in ECS, using Fargate. In this option, AWS will use virtual machines as needed, but the management is done by AWS. It isn't possible to log on to the underlying instances; AWS takes care that the containers keep running. We will look at this solution in the rest of this article.
4) Kubernetes in the AWS cloud. This service is called EKS; it is outside the scope of this article.
Costs of these solutions
When looking at the costs, I will assume that the container runs 24 hours a day, for one month. For the first two options, a t2.micro can be used for simple solutions: it has one vCPU and 1 GB of memory. When more containers run at the same time, bigger machines can be used, for example m5.large machines with 2 vCPUs and 8 GB of memory.
The costs of these options are:
1) Virtual machine without ECS: this is the cheapest option. Based on one VM, the costs are $11.43 for a t2.micro and $80.53 for an m5.large VM (including a disk of 20 GB). On this VM, multiple containers can run.
2) ECS based on EC2: you only pay for the EC2 instances that you use; there is no additional fee for the ECS service itself. ECS will add an extra container on the virtual machine (the ECS agent) that starts or stops your containers and takes care of the logging to CloudWatch (if necessary). This takes a small amount of extra resources, so the costs are about the same as running a virtual machine in AWS without ECS. Running an extra container might cost extra money, or not, depending on the amount of unused CPU and memory on the already running virtual machines.
3) ECS based on Fargate: the costs are based on the amount of CPU and memory that you ask for. For 0.5 vCPU and 1 GB of memory, the costs are $17.77 per container per month. An extra container will always cost extra money.
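The Fargate figure can be reproduced from the per-hour rates on the Fargate price list. The rates below are assumptions taken from the us-east-1 price list at the time of writing; check the current price list for your region. A quick calculation, assuming a 30-day (720-hour) month:

```shell
# Rates per hour (assumed us-east-1 list prices): $0.04048 per vCPU, $0.004445 per GB of memory
awk 'BEGIN {
  vcpu_rate = 0.04048
  gb_rate   = 0.004445
  hours     = 24 * 30                       # one month of 30 days
  printf "%.2f\n", (0.5 * vcpu_rate + 1 * gb_rate) * hours
}'
```

This prints 17.77, matching the monthly cost quoted above for a 0.5 vCPU / 1 GB container.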
How to deploy the container in Fargate
The container that is deployed must exist in a repository: you can use either Docker Hub or Amazon's ECR (Elastic Container Registry) for that. In this example, I will use the latest version of nginx from Docker Hub. I will use the Task deployment, not the Service deployment, because the Task deployment is simpler. The Task deployment is also more suitable for the use case of running a container for just a demo: when the task in the container fails, the container will not be restarted. In a demo, this isn't a problem: you might even want to know that the container failed and do some research into why. In a production environment, most nginx containers will have to run 24 hours a day, and then it is wise to let AWS start a new container when one fails.
Creating a cluster using the GUI
Containers in the cloud run in clusters, so let's create our first cluster. Choose Services, type ECS and choose ECS:
Choose Clusters, and press the button Create Cluster:
Choose Networking only, followed by Next step:
In the next screen, we give the cluster a name, followed by Create:
The cluster will now be created. Choose View cluster to go back to the overview of available clusters:
Create a cluster via the Command Line Interface (CLI)
A cluster can also be created using a command line interface. This is a different CLI from the standard AWS CLI; it can be downloaded via the following link: https://github.com/aws/amazon-ecs-cli . Note that you don't have to clone this repository to use the software; just scroll to the bottom of the page:
Once you have downloaded the software, rename the executable to ecs_cli.exe and use it to create the cluster:
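A minimal sketch of the commands involved, assuming the executable has been renamed to ecs_cli.exe as described above; the cluster name, configuration name and region are examples, so substitute your own:

```shell
# Store a cluster configuration with Fargate as the default launch type
ecs_cli configure --cluster demo-cluster --default-launch-type FARGATE --region eu-west-1 --config-name demo-config

# Create the (networking-only) cluster using that configuration
ecs_cli up --cluster-config demo-config
```

For a Fargate cluster there are no EC2 instances to start, so `up` only creates the cluster and the supporting network resources.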
Creating a Fargate task using the GUI
Now that we have the cluster, we can create a task within it. Go to Task Definitions and choose Create new Task Definition:
Choose Fargate and press the Next Step button:
Give the task a name. The Task Role can be used when the task uses other AWS services, for example S3. We will use the default here. The network mode of Fargate tasks is always awsvpc.
When you scroll down, you can fill in the amount of CPU and memory this task will use. In this example, we use the minimum: 0.5 GB of memory and 0.25 vCPU. Click on Add container:
Type the name of the container, the name and version of the image (in our case: nginx:latest) and the memory limits (in our case a soft limit of 300 MB). Because the network mode is awsvpc, the port number that is used within the container is the same port number that is exposed to the outside world. When you want a different port number, put an Application Load Balancer between the container and the services that use it.
The settings for Healthcheck don't have to be changed. The health check is used to see whether the container is healthy: when a check fails, the container will be stopped by the cluster. Under Environment, use 256 CPU units. Each CPU is divided into 1024 CPU units, so we will use 25% of a CPU for this container. In each task, at least one container should have the check mark after Essential: when that container fails, the task fails. Leave the check mark set. When the container requires environment variables, you can add them here. For our purpose, we will leave them empty.
We will not change the defaults for Container timeouts or Network settings. In the task definition of Fargate containers, logging goes to CloudWatch by default. We don't need logging for this container, so under Storage and Logging, uncheck the checkbox after Log configuration:
Scroll down to the end of the page and click Add:
The screen will go back to the task settings. In the task settings, we configured the amount of memory to be 0.5 GB; in the container, we configured the amount of memory to be 300 MB, so there is memory left for another container (if we wanted one). You can see this in the diagram:
Click on Create now:
The task definition is being created; click View task definition to see it in the list of task definitions:
The task can be started using the button Actions, with the submenu Run Task:
Running the task
Choose Fargate as the launch type, and choose one of the subnets (or both):
Press the Run task button to start the task:
Click on the link on the Tasks tab to see which public IP address is being used:
The IP address is shown on this page under Network:
When you copy this address to a browser, you will see the nginx start page (as expected):
Create a Fargate task using the CLI
The same task can also be added using a JSON file. This JSON file can be added via the GUI or using the CLI. The JSON file for this task would be:
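A sketch of a task definition that matches the GUI settings above (0.25 vCPU, 0.5 GB of memory, nginx:latest with a 300 MB soft limit on port 80); the family name nginx-demo is an assumption:

```json
{
  "family": "nginx-demo",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "memoryReservation": 300,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "protocol": "tcp" }
      ]
    }
  ]
}
```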
You can register the task definition in ECS with the following (standard AWS CLI) command:
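Assuming the JSON above has been saved as nginx-task.json (the filename is an example):

```shell
# Register the task definition from the JSON file
aws ecs register-task-definition --cli-input-json file://nginx-task.json
```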
You can start the task with the following command:
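A sketch of the run-task command, with the cluster and task definition names from this example; the subnet and security group IDs are placeholders that you should replace with your own:

```shell
# Start the task on Fargate, with a public IP so the nginx start page is reachable
aws ecs run-task \
  --cluster demo-cluster \
  --task-definition nginx-demo \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"
```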
You can see that I added the IDs of the subnets and the security group; these IDs can be copied from the VPC service.