AWS SQS Standard Queueing System
SQS, the Simple Queue Service, is the original AWS message queueing service; the Standard Queue is its basic, default queue type.
producers send messages to the SQS queue, which acts as a buffer
consumers then poll the queue to retrieve the messages
it is over 10 years old, fully managed, and used to decouple applications.
exam: “decouple” in a question is a hint that the answer involves SQS!
no limit to message volume, unlimited throughput
default retention of a message in the queue is 4 days, max 14 days
the application, i.e. the consumer, must ensure the message is deleted from the queue after processing
low latency
max 256 KB per message
a message can occasionally be delivered more than once (“at least once delivery”), so apps have to bear this in mind.
SQS: producing messages – they are sent to SQS via the SendMessage API from an SDK (sketch below)
a message persists in the queue until a consumer deletes it, or until the retention period set (default 4 days, max 14) expires.
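a minimal producer sketch in python (boto3); the queue URL and message body here are placeholders, not from the course:

```python
# producer sketch: send one message to a queue (boto3)
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

resp = sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"video_id": "abc123", "action": "transcode"}',  # up to 256 KB
)
print(resp["MessageId"])  # SQS returns the new message's ID
```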
SQS consumers – can run on EC2 instances, on-premises servers, or on Lambda.
consumers poll the SQS queue for messages and can receive up to 10 messages at a time
they process the messages, then delete them using the DeleteMessage API (sketch below)
you can have multiple consumers receiving and processing the messages in parallel
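a minimal consumer sketch in boto3; process() is a hypothetical stand-in for your business logic:

```python
# consumer sketch: long-poll the queue, process each message, then delete it
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def process(body: str) -> None:
    print("processing:", body)  # hypothetical: real work goes here

while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,  # up to 10 messages per poll
        WaitTimeSeconds=20,      # long polling: wait for messages instead of looping hot
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(      # delete only after successful processing
            QueueUrl=queue_url,
            ReceiptHandle=msg["ReceiptHandle"],
        )
```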
best-effort message ordering is used – exam!
to increase throughput, add consumers to scale processing horizontally
an Auto Scaling Group (ASG) managing the EC2 consumer instances can do this automatically
we can use the CloudWatch metric ApproximateNumberOfMessagesVisible, i.e. the queue length, as an indication of the approximate volume of messages at any one time
a CloudWatch alarm on this metric then triggers the ASG to scale out (sketch below)
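a sketch of the alarm side in boto3, assuming the ASG scale-out policy already exists; queue name, threshold, and policy ARN are placeholders:

```python
# alarm sketch: scale out when the queue backlog grows too large
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",  # the "queue length" metric
    Dimensions=[{"Name": "QueueName", "Value": "my-queue"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000,  # placeholder: alarm when > 1000 messages are waiting
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:..."],  # placeholder scale-out policy ARN
)
```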
a front-end web app can send messages to the SQS queue
a back-end app, e.g. video processing, can poll and receive messages from that queue, then process them
exam question!
SQS security options:
in-flight encryption via the HTTPS API
at-rest encryption using KMS keys
client-side encryption, handled by the application itself
access controls: IAM policies regulate access to the SQS API
SQS access policies are similar to S3 bucket policies
useful for cross-account access to SQS queues
useful for allowing other services, e.g. S3 or SNS, to write to SQS queues
to publish an event notification, i.e. send a message, an instance or S3 bucket has to be granted access to the queue via the queue’s access policy (a resource-based policy on the queue itself, not an IAM policy):
cross-account access is required if the consumer is in another account; in that case the policy needs to allow sqs:ReceiveMessage on the queue resource for that account
e.g. when you upload an object to an S3 bucket and you want the queue to know about this…
you need to allow sqs:SendMessage for the S3 service
with conditions limiting it to the source bucket and the bucket owner’s account
you set this in the access policy for the queue (sketch below)
you need to set this up for both the sending and receiving scenarios.
also ensure encryption is DISABLED on the queue for this hands-on
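a sketch of such a queue access policy applied with boto3; bucket name, account ID, and ARNs are placeholders:

```python
# allow S3 (from one specific bucket/account) to send messages to the queue
import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue",
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:s3:::my-bucket"},  # the source bucket
            "StringEquals": {"aws:SourceAccount": "123456789012"},   # the bucket owner's account
        },
    }],
}

sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"Policy": json.dumps(policy)},  # the queue access policy
)
```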
SQS Message Visibility Timeout
exam q!
after a message is polled by a consumer, it becomes INVISIBLE to other consumers
the default timeout for this is 30 seconds
that means the message should be processed within 30 seconds
other consumers can’t see the message during those 30 seconds…
but if it is not processed (and deleted) within that time, it becomes visible again, will be received by another consumer, and so gets processed twice.
there is an API called ChangeMessageVisibility that the consumer can call to ask SQS for more time (sketch below)
if you set the timeout too high, e.g. hours, and the consumer has a problem, e.g. a crash, then reprocessing is delayed for a long time
but if it is too short, a few seconds, we may get duplicates… so set the visibility timeout to a reasonable level
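a sketch of a consumer asking for more time; the receipt handle would come from an earlier ReceiveMessage call:

```python
# extend the visibility timeout for one in-flight message
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder
receipt_handle = "..."  # from a previous receive_message response

sqs.change_message_visibility(
    QueueUrl=queue_url,
    ReceiptHandle=receipt_handle,
    VisibilityTimeout=120,  # this message stays invisible for another 120 seconds
)
```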
SQS Dead Letter Queue
if a consumer fails to process the message within the Visibility Timeout
then the message goes back to the queue
but we can set a threshold for how many times a message may go back into the queue
we do this with the MaximumReceives threshold; if it is exceeded, the message goes into a Dead Letter Queue (DLQ)
this is useful for debugging – exam q: why might you use a dead letter queue?
make sure to process the messages in the DLQ before they expire; set a much higher retention, e.g. 14 days, for DLQ messages.
DLQ Redrive to Source
this is a feature to help consume messages in the DLQ and understand what is wrong with them
how it works, with an example below…
once our code is fixed, i.e. we have fixed the cause of the problem that stopped the message being processed, we can redrive the messages from the DLQ back to the source queue in a batch, without having to write any custom code (sketch below)
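one way to script the redrive is the StartMessageMoveTask API (needs a reasonably recent boto3); the DLQ ARN is a placeholder:

```python
# redrive sketch: move messages from the DLQ back to their source queue
import boto3

sqs = boto3.client("sqs")
sqs.start_message_move_task(
    SourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue-dlq",  # placeholder DLQ ARN
    # with no DestinationArn, messages go back to their original source queue
)
```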
first you create a dead letter queue in Create Queue
set a retention period that is very long, e.g. 14 days
then create the queue
then, on your normal queue, enable the dead-letter queue configuration and select the DLQ you defined above
set max receives, e.g. to 3.
this means:
after 3 receives, the message is sent to the DLQ (boto3 equivalent below)
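the same setup sketched in boto3 instead of the console; queue names are placeholders:

```python
# create a DLQ with long retention, then attach it to the main queue
import json
import boto3

sqs = boto3.client("sqs")

# 1. create the DLQ with a 14-day retention (value is in seconds)
dlq_url = sqs.create_queue(
    QueueName="my-queue-dlq",
    Attributes={"MessageRetentionPeriod": "1209600"},
)["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"],
)["Attributes"]["QueueArn"]

# 2. point the main queue's redrive policy at the DLQ
main_url = sqs.create_queue(QueueName="my-queue")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=main_url,
    Attributes={"RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dlq_arn,
        "maxReceiveCount": "3",  # after 3 receives, the message moves to the DLQ
    })},
)
```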