AWS Containers

AWS offers the following container services:

Docker
ECS
ECR
EKS
Fargate

Docker

Docker images are stored in Docker repositories:

Docker Hub – https://hub.docker.com – a public repository for base images

Amazon ECR – Elastic Container Registry – you can keep your private images here; there is also a public repository

The basic Dockerfile contains instructions such as FROM, COPY, RUN and CMD – for example (the image, file and package names here are illustrative):

    # base image to build on
    FROM python:3.12-slim
    # copy application code into the image
    COPY app.py /app/app.py
    # run build commands, e.g. install dependencies
    RUN pip install flask
    # the command the container runs when it starts
    CMD ["python", "/app/app.py"]

Building from the Dockerfile creates the Docker image, which you then push to / pull from the repository you are using.

When you run the image it creates a live Docker container – important to know the difference between an image and a container!
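A rough sketch of this build/run flow using the Docker SDK for Python (docker-py); the tag and paths are illustrative, not from the course:

    import docker

    client = docker.from_env()                      # talk to the local Docker daemon

    # Build an image from the Dockerfile in the current directory
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Running the image creates a live container (image vs container!)
    container = client.containers.run("myapp:1.0", detach=True)
    print(container.id, container.status)

    # Push the tagged image to whichever repository you are using (assumes you are already logged in)
    # client.images.push("myapp", tag="1.0")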

 

ECS: Amazon Elastic Container Service – AWS's own container platform

EKS: Elastic Kubernetes Service – AWS's managed Kubernetes service

AWS Fargate: a serverless container platform, works with both ECS and EKS

ECR: Elastic Container Registry – a repository for storing container images

Important for the exam: the ECS launch types.

ECS – EC2 Launch Type

When you launch a container you are launching an ECS Task on an ECS Cluster.

You must provision and maintain the infrastructure, i.e. the EC2 instances.

Each EC2 instance must run the ECS Agent in order to register with, and operate as part of, the ECS cluster.

AWS then takes care of precisely which instance your containers are launched on – you don't specify it.
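A hedged boto3 sketch of starting a task on an existing EC2-backed cluster (the cluster and task definition names are hypothetical):

    import boto3

    ecs = boto3.client("ecs")

    # Run one copy of a registered task definition on the EC2 launch type.
    # ECS decides which container instance in the cluster the task lands on.
    response = ecs.run_task(
        cluster="my-ec2-cluster",        # cluster backed by EC2 instances running the ECS Agent
        taskDefinition="my-task:1",      # family:revision of an existing task definition
        launchType="EC2",
        count=1,
    )
    print(response["tasks"][0]["taskArn"])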

 

 

 

Fargate Launch Type for ECS

We do not provision any infrastructure – no EC2 instances needed; it is serverless.

You just create your container task definitions.

AWS runs the ECS Tasks for you based on the CPU/RAM you require.

To scale in Fargate you simply increase the number of tasks – there are no instances to add.

Much easier to manage than the EC2 launch type.
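The same run_task call for the Fargate launch type, where you supply VPC networking instead of instances (all IDs below are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    response = ecs.run_task(
        cluster="my-fargate-cluster",     # hypothetical cluster, no EC2 instances registered
        taskDefinition="my-task:1",       # task definition with cpu/memory set and awsvpc network mode
        launchType="FARGATE",
        count=2,                          # scaling = asking for more tasks
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )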

 

 

 

IAM Roles for ECS

EC2 Instance Profile (EC2 launch type only) – used by the ECS Agent, it:

makes the API calls to the ECS service

sends container logs to CloudWatch Logs

pulls Docker images from ECR

ECS Task Roles:

these apply to both the EC2 and Fargate launch types

you create a role per task, and use different roles for the different ECS services that you run

the task role is defined in the task definition for your ECS service
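A minimal, hypothetical task definition showing where the ECS Task Role is attached (the ARNs and names are made up):

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="orders-service",
        # ECS Task Role: the permissions the containers in this task use (e.g. read a DynamoDB table)
        taskRoleArn="arn:aws:iam::123456789012:role/orders-task-role",
        containerDefinitions=[
            {
                "name": "app",
                "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/orders:1.0",
                "memory": 512,
            }
        ],
    )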

 

 

ECS Load Balancer Integrations

Do NOT use the Classic Load Balancer (the old ELB) for ECS – it has only minimal container features and NO Fargate support.

ALB – supported, fine for most use cases.

NLB – recommended only if you need very high throughput/performance, or to pair with AWS PrivateLink.
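A sketch of attaching an ECS service to an ALB target group with boto3 (the target group, cluster and task definition are assumed to already exist):

    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="my-cluster",
        serviceName="web",
        taskDefinition="web-task:1",
        desiredCount=2,
        launchType="EC2",
        loadBalancers=[
            {
                # ALB target group that the ECS tasks register into
                "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/web/0123456789abcdef",
                "containerName": "app",        # container name from the task definition
                "containerPort": 80,           # port the container listens on
            }
        ],
    )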

 

 

ECS Data Volumes (EFS) – these allow for data persistence

You mount an EFS file system onto the ECS tasks.

This works for both the EC2 and Fargate launch types.

It means tasks running in any AZ can share the same data.

Fargate + EFS = fully serverless.

Use case: when you need persistent, multi-AZ shared storage for your containers.

Important – exam!

Remember: S3 CANNOT be mounted as a file system – only EFS can be mounted like this.
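Roughly how an EFS volume appears in a task definition, again via boto3 with placeholder IDs:

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="shared-data-task",
        containerDefinitions=[
            {
                "name": "app",
                "image": "myapp:1.0",
                "memory": 512,
                # Mount the shared EFS volume inside the container
                "mountPoints": [{"sourceVolume": "shared-efs", "containerPath": "/data"}],
            }
        ],
        volumes=[
            {
                "name": "shared-efs",
                # EFS file system shared by all tasks, across AZs
                "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
            }
        ],
    )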

 

 

 

ECS Service Auto Scaling

There are a number of options for this.

It uses AWS Application Auto Scaling to automatically increase or decrease the desired number of ECS tasks.

It can scale on the following metrics:

ECS Service Average CPU Utilization
ECS Service Average Memory Utilization
ALB Request Count per Target (from the ALB)

You can use:

Target Tracking – scales based on a target value for a specified CloudWatch metric

Step Scaling – scales based on a specified CloudWatch Alarm

Scheduled Scaling – scales based on a specified date and time, for predictable changes
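A hedged example of the Target Tracking option using boto3's application-autoscaling client (the cluster/service names and the 60% target are illustrative):

    import boto3

    aas = boto3.client("application-autoscaling")

    # Make the ECS service's desired task count scalable between 2 and 10 tasks
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/web",
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    # Target tracking on the ECS Service Average CPU Utilization metric
    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/web",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )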

 

Remember that ECS Service Auto Scaling (task level) is NOT the same as EC2 Auto Scaling (instance level).

Also: Fargate Auto Scaling is much easier to set up, as it is serverless.

Auto Scaling for the EC2 launch type:

This works by adding underlying EC2 instances according to demand.

We can use Auto Scaling Group scaling:

the ASG scales, e.g. on CPU utilization, adding EC2 instances over time

Or use the newer, more advanced option:

ECS Cluster Capacity Provider

this automatically provisions and scales the infrastructure for your ECS tasks

it is paired with an ASG and adds EC2 instances when needed

the Capacity Provider is by far the better option.
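A sketch of wiring a capacity provider to an ASG with boto3 (the ARNs and names are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Capacity provider backed by an existing Auto Scaling Group
    ecs.create_capacity_provider(
        name="ec2-capacity",
        autoScalingGroupProvider={
            "autoScalingGroupArn": "arn:aws:autoscaling:eu-west-1:123456789012:autoScalingGroup:0f1e2d3c-4b5a-6978-8a9b-0c1d2e3f4a5b:autoScalingGroupName/ecs-cluster-asg",
            # Managed scaling lets ECS add EC2 instances when tasks need them
            "managedScaling": {"status": "ENABLED", "targetCapacity": 80},
        },
    )

    # Attach it to the cluster and make it the default
    ecs.put_cluster_capacity_providers(
        cluster="my-cluster",
        capacityProviders=["ec2-capacity"],
        defaultCapacityProviderStrategy=[{"capacityProvider": "ec2-capacity", "weight": 1}],
    )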

 

 

For example:

ECS Scaling – Service CPU usage example

A CloudWatch metric monitors CPU usage and triggers a CloudWatch Alarm; this in turn scales via the Auto Scaling Group for the cluster, adding EC2 instances as required.

ECS Rolling Updates

When updating from v1 to v2, we can specify how many tasks can be started and stopped, and in which order.

We can set a minimum % of tasks that must stay healthy, and a maximum %.

The default is 100% minimum and 200% maximum.

The maximum tells you how many extra tasks you can create during the update; the system is allowed to terminate tasks down to the minimum healthy %.

e.g. min 50%, max 100%, starting with 4 tasks:

we can terminate half the tasks (2 in this case) at one time, update those tasks, and then do the same with the rest in turn.
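The min/max percentages from this example map onto the service's deployment configuration; a boto3 sketch with placeholder names:

    import boto3

    ecs = boto3.client("ecs")

    ecs.update_service(
        cluster="my-cluster",
        service="web",
        taskDefinition="web-task:2",      # the new v2 task definition revision
        deploymentConfiguration={
            "minimumHealthyPercent": 50,  # with 4 desired tasks, at least 2 must stay running
            "maximumPercent": 100,        # no extra tasks above the desired count during the update
        },
    )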

 

ECS Tasks can also be invoked by linking them to EventBridge or to SQS message queuing.
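For the EventBridge route, a hedged sketch of a scheduled rule that launches an ECS task (the rule name, ARNs and schedule are illustrative):

    import boto3

    events = boto3.client("events")

    # A scheduled rule that fires once an hour
    events.put_rule(Name="run-batch-task", ScheduleExpression="rate(1 hour)")

    # Target the ECS cluster and tell EventBridge which task definition to run
    events.put_targets(
        Rule="run-batch-task",
        Targets=[
            {
                "Id": "ecs-batch-task",
                "Arn": "arn:aws:ecs:eu-west-1:123456789012:cluster/my-cluster",
                "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",  # role EventBridge assumes to call RunTask
                "EcsParameters": {
                    "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:123456789012:task-definition/batch-task:1",
                    "TaskCount": 1,
                    "LaunchType": "EC2",
                },
            }
        ],
    )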

Amazon ECR – Elastic Container Registry

Stores and manages your Docker images on AWS.

There are private and public repositories – the public one is the ECR Public Gallery at gallery.ecr.aws.

Fully integrated with ECS.

Access is controlled through IAM policies.

It supports image vulnerability scanning, versioning, image tags and image lifecycle policies.

You need to be aware of this registry for the exam!
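A short boto3 sketch of creating a private repository with scan-on-push and fetching the temporary Docker login credentials (the repository name is made up):

    import base64
    import boto3

    ecr = boto3.client("ecr")

    # Private repository with image vulnerability scanning on push
    ecr.create_repository(
        repositoryName="myapp",
        imageScanningConfiguration={"scanOnPush": True},
    )

    # Temporary credentials used for `docker login` before pushing/pulling images
    auth = ecr.get_authorization_token()["authorizationData"][0]
    username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
    registry = auth["proxyEndpoint"]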

 

 

Amazon EKS

Elastic Kubernetes Service

Enables the use of Kubernetes on AWS as an alternative to ECS.

Kubernetes is open source, whereas ECS is AWS proprietary.

Similar to ECS, but with a different API.

EKS supports both EC2 and Fargate.

Use case: if you are already using Kubernetes, or want to migrate to it – Kubernetes can be used on any cloud, or with no cloud at all, not just AWS.

EKS uses "EKS Pods" in place of "ECS Tasks" – otherwise much the same idea – remember this for the exam.

AWS App Runner

A fully managed, serverless AWS container application service for easy deployment of web apps and APIs.

Priced per vCPU and per GB of memory provisioned.

You start with your source code or a container image, and it automatically builds and deploys the app for you.

It gives you auto scaling, high availability, load balancing and encryption, and connects to your VPC.

It also connects to databases and message queues.
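A hedged boto3 sketch of creating an App Runner service from a public container image (the image, port and instance sizes here are illustrative):

    import boto3

    apprunner = boto3.client("apprunner")

    apprunner.create_service(
        ServiceName="hello-web-app",
        SourceConfiguration={
            "ImageRepository": {
                "ImageIdentifier": "public.ecr.aws/aws-containers/hello-app-runner:latest",
                "ImageRepositoryType": "ECR_PUBLIC",
                "ImageConfiguration": {"Port": "8000"},   # port the container listens on
            },
            "AutoDeploymentsEnabled": False,
        },
        # Priced on the vCPU and memory you provision per instance
        InstanceConfiguration={"Cpu": "1024", "Memory": "2048"},
    )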