Introduction
In response to a demo I gave about deploying a Docker container to the Azure platform, I received some questions about deploying a Docker container to the AWS platform. I previously wrote a guide on how to deploy a Docker container with Azure Pipelines here.
In my opinion, Azure Pipelines is a more user-friendly and cheaper product than AWS CodePipeline. I have used both and strongly prefer Azure Pipelines over AWS CodePipeline. In the same article here, I have also covered why I like Azure Pipelines. In summary:
- It has native support for Azure. If you ever want to switch to Azure, this is super easy
- It works with almost any language or platform. It’s cross-platform
- The documentation is clear and almost everything can be written in YAML
- I recently discovered the ease of the Azure Pipelines Marketplace, which adds user-defined components to Azure Pipelines
Configure ECR in AWS
ECR, which stands for Elastic Container Registry, is the private Docker container registry of AWS. Azure has something similar called Azure Container Registry. All the terminology in AWS sometimes confuses me, so I found this very handy AWS naming dictionary. It explains in a few words what each service does.
To configure a registry, do the following (a CLI equivalent is sketched after the steps):
- Go to the AWS console. Type in elastic container registry
- Once in the menu, click on “Create repository”
- Give it a name and create the repository. You can leave the rest of the fields at their defaults
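If you prefer the command line, creating the repository looks roughly like this; the repository name my-app is a placeholder, and the region matches the one used later in this guide:

# Create a private ECR repository (name and region are placeholders)
aws ecr create-repository --repository-name my-app --region eu-west-2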
Once you’ve created the repository, you can upload your Docker image to it. If you click “View push commands”, it will show you the commands for doing this. I would always recommend using the Linux/macOS instructions over the Windows instructions; these worked better in my experience. I used a really simple hello world Java application from my own GitHub repo.
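For reference, the push commands usually look something like the sketch below; the account ID (123456789012) and repository name (my-app) are placeholders, so copy the exact commands from your own “View push commands” dialog.

# Authenticate Docker against the private ECR registry
aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-2.amazonaws.com
# Build, tag and push the image
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:latest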
Configure ECS in AWS
ECS & Fargate
Now, this is the hardest part. We will deploy a container to ECS, the Elastic Container Service. With this component you can deploy Docker containers on EC2 instances and on Fargate. For this article we will focus on the recently introduced Fargate, since it is fairly easy. Follow these steps (a CLI equivalent for creating the cluster follows after them):
- Go to ECS in the console
- Create a cluster (networking only)
- Give it a name
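The same can be done from the AWS CLI; a minimal sketch, assuming the cluster name test that the Terraform file below also references:

# A “networking only” cluster is simply an empty ECS cluster for Fargate tasks
aws ecs create-cluster --cluster-name test --region eu-west-2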
Next, we’re going to create a task definition. That is a sort of blueprint for tasks; those tasks will be run by a service in a cluster (complicated, huh?). A rough CLI equivalent is sketched after these steps.
- Click on task definitions on the left in the ECS menu
- Click Create new task definition and choose the Fargate launch type
- Give it a name and role
- Add a container. Get the repository image URL from the ECR repository you created in the previous paragraph and enter it in the Image field
- Add the port you plan to use; for my application this is 8080. Add the container name and leave the rest at the defaults
- Create the task definition by clicking on create
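For reference, below is a rough CLI equivalent of what the console wizard produces. The account ID, role ARN, image URL and names are placeholders; save the JSON as taskdef.json:

{
  "family": "terraform",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:latest",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}

Then register it:

aws ecs register-task-definition --cli-input-json file://taskdef.json --region eu-west-2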
Task definition & security group
Next, go back to the cluster menu and click on the cluster you created. Then do the following (a CLI equivalent follows after the steps):
- Click on the tasks tab
- Click Run new task
- Create a task with Fargate as the launch type. Run the task definition you defined earlier. For the VPC you can pick an existing one or create one yourself; this doesn’t really matter for this guide
- Enable auto-assign public IP
- Click Run Task
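Running the same task from the CLI would look roughly like this; the subnet and security group IDs are placeholders from your own VPC:

aws ecs run-task \
  --region eu-west-2 \
  --cluster test \
  --task-definition terraform \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}"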
You should see your task starting to run. Before everything works, we have to do one more thing:
- Click on the ENI id link.
- Next, click on the security group in the lower menu
- Click on inbound rules
- Add an inbound rule allowing TCP port 8080 from all sources (0.0.0.0/0), as in the CLI sketch below
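The CLI equivalent of this rule, with a placeholder security group ID:

# Allow inbound TCP traffic on port 8080 from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234 --protocol tcp --port 8080 --cidr 0.0.0.0/0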
By going to the IP displayed in your task information and appending port 8080 to it, the site should display something along the lines of Hello World.
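A quick check from a terminal (the IP below is a placeholder for your task’s public IP):

curl http://3.10.123.45:8080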
Write Azure pipelines YAML
The last part of this post is about automating the deployment process with Azure Pipelines. If you’re not familiar with Azure Pipelines, I would highly recommend reading my article or following a quick tutorial on Azure Pipelines. It’s fairly easy, so this won’t take long.
Now, create a YAML file in Azure Pipelines. For this part we will use Terraform and the AWS CLI. Read this part of the documentation on how to get values out of data sources (like the repo URL) with Terraform.
- First, compile the Java code with Maven in a task
- Create the Terraform file that outputs the repo URL, cluster ARN and service name (shown below)
- Next, get the ECS/ECR values from Terraform into Azure Pipelines variables. This line
echo "##vso[task.setvariable variable=json_url]$json_url"
exports the variable from the task context so it can be used in another stage. This is also described here.
variable "AWS_DEFAULT_REGION" {
type = string
default = "eu-west-2"
}
# Configure the AWS Provider
provider "aws" {
version = "3.0"
region = var.AWS_DEFAULT_REGION
}
data "aws_ecr_repository" "terraform" {
name = "terraform"
}
data "aws_ecs_cluster" "test" {
cluster_name = "test"
}
data "aws_ecs_service" "terraform" {
service_name = "terraform"
cluster_arn = data.aws_ecs_cluster.test.arn
}
output "test_terraform_output_url" {
value = data.aws_ecr_repository.terraform.repository_url
}
output "ecs-cluster" {
value = data.aws_ecs_cluster.test.arn
}
output "ecs-service" {
value = data.aws_ecs_service.terraform.service_name
}
# Read the ECR/ECS details with Terraform and export them as pipeline variables
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      terraform init -input=false
      terraform apply -input=false -auto-approve
      # Pass each Terraform output to later tasks via a logging command
      json_url=$(terraform output -json test_terraform_output_url)
      echo "##vso[task.setvariable variable=json_url]$json_url"
      ECS_CLUSTER=$(terraform output ecs-cluster)
      echo "##vso[task.setvariable variable=ECS_CLUSTER]$ECS_CLUSTER"
      ECS_SERVICE=$(terraform output ecs-service)
      echo "##vso[task.setvariable variable=ECS_SERVICE]$ECS_SERVICE"
The next part is the AWS deployment itself. This script downloads the AWS CLI and uses it to deploy the Docker container to ECS. It uses the variables obtained from the Terraform script.
Lastly, it forces a new deployment, so that when a new container image is pushed, the service updates automatically.
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Install the AWS CLI v2 on the (Linux) build agent
      curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
      unzip awscliv2.zip
      sudo ./aws/install
      # Split the repository URL into the registry host and the repository name
      $docker_repo_url = $(json_url).Split("/")[0]
      $docker_repo_name = $(json_url).Split("/")[1]
      # Log Docker in to ECR, then build, tag and push the image
      aws ecr get-login-password --region $(TF_VAR_AWS_DEFAULT_REGION) | docker login --username AWS --password-stdin $docker_repo_url
      docker build -t "$docker_repo_name" .
      docker tag "$docker_repo_name" "$(json_url)":latest
      docker push "$(json_url)":latest
      # Force a new deployment so the service picks up the freshly pushed image
      aws ecs update-service --region $(TF_VAR_AWS_DEFAULT_REGION) --cluster $(ECS_CLUSTER) --service $(ECS_SERVICE) --force-new-deployment
Disadvantages
There’s one major downside to this solution: the IP address changes every time a new deployment is issued. I tried to associate an Elastic IP (static IP) with the Fargate task, but this seems to be impossible according to this Stack Overflow post.
There seem to be ways to solve this with a load balancer or with an EC2-backed ECS instance, but that is too complex for this article.
I hope this article taught you something. If you have any questions, feel free to comment below! Have a good day.