
ElastiCache

ElastiCache is an AWS-managed data-caching service, used mainly in front of databases and applications.

 

ElastiCache uses one of two open-source in-memory cache engines: Memcached and Redis.

 

 

ElastiCache is used to reduce traffic overhead for RDS and some other applications. It is extremely fast because the data is held in RAM.

 

Your cache must have an invalidation strategy defined to ensure only the most current data is stored in the cache.

 

It can also be used to store user sessions for an application, so that if users are later redirected to a different instance of the application, they do not have to log in again.

 

However, it does require application code changes for apps to be able to query the cache.

 

 

ElastiCache includes master/replica replication and Multi-AZ support, which can be used to achieve cross-AZ redundancy and thus high availability through the use of Redis replication groups.

 

 

Memcached

 

Memcached is an open-source, distributed memory object caching system. ElastiCache is protocol-compliant with Memcached, so all the tools used with existing Memcached environments can also be used with ElastiCache. This is the simplest caching model and can also be used when deploying large nodes with multiple cores and threads.

 

Redis

Redis is an open-source in-memory key-value store that supports data structures such as lists and sorted sets.

 

Redis can power multiple databases, as well as maintain the persistence of your key store and works with complex data types — including bitmaps, sorted sets, lists, sets, hashes, and strings.

 

If Cluster-Mode is disabled, then there is only one shard. The shard comprises the primary node together with the read replicas. Read replicas store a copy of the data from the cluster’s primary node.

 

ElastiCache allows for up to 250 shards for a Redis cluster if Cluster-Mode is enabled. Each shard has a primary node and up to 5 read replicas.

 

When reading or writing data to the cluster, the client determines which shard to use based on the keyspace. This avoids any potential single point of failure.
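The key-to-shard mapping can be sketched in Python. Redis Cluster hashes each key with CRC16 (XModem variant) modulo 16384 hash slots, and each shard owns a range of slots; the sketch below shows just the key-to-slot step (the slot-to-node assignment comes from the cluster's slot table):

```python
# Sketch of Redis Cluster key-to-slot mapping (CRC16/XMODEM mod 16384).
# The cluster's slot table then maps each slot to the shard that owns it.

def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Return the Redis Cluster hash slot (0-16383) for a key.

    Honors "hash tags": if the key contains {...}, only that part is
    hashed, so related keys can be forced onto the same shard.
    """
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # non-empty tag
            key = key[start + 1:end]
    return crc16(key) % 16384

print(hash_slot(b"123456789"))  # 12739 (CRC16 of "123456789" is 0x31C3)
print(hash_slot(b"user:{42}:name") == hash_slot(b"user:{42}:email"))  # True
```

The hash-tag trick is how multi-key operations are kept on a single shard.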

 

 

Implementing ElastiCache

 

There are three main implementation steps:

 

Creating an ElastiCache cluster
Connecting to the ElastiCache cluster from an EC2 instance
Managing the ElastiCache environment from the AWS console

 

 

Creating an ElastiCache cluster

 

This involves choosing and configuring the caching engine to be used. This will be either Redis or Memcached. For each caching engine, configuration parameters differ.
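As a sketch, a Redis cluster can be created with the boto3 ElastiCache client's create_replication_group call. The group ID and node type below are placeholder values; building the parameters as a plain dict keeps the sketch self-contained:

```python
# Sketch: parameters for creating a Redis (cluster-mode-enabled) replication
# group via boto3's ElastiCache client. IDs and node type are placeholders.

def redis_cluster_params(group_id: str, shards: int, replicas: int) -> dict:
    return {
        "ReplicationGroupId": group_id,
        "ReplicationGroupDescription": f"{group_id} demo cluster",
        "Engine": "redis",
        "CacheNodeType": "cache.t3.micro",  # choose per workload
        "NumNodeGroups": shards,            # shards (cluster mode enabled)
        "ReplicasPerNodeGroup": replicas,   # up to 5 read replicas per shard
        "AutomaticFailoverEnabled": True,   # needed for Multi-AZ failover
    }

params = redis_cluster_params("demo-redis", shards=3, replicas=2)
# In a real deployment you would then call:
#   import boto3
#   boto3.client("elasticache").create_replication_group(**params)
print(params["NumNodeGroups"])  # 3
```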

 

Next, we need to choose the location ie in AWS cloud or On-Premise.

 

AWS Cloud – This uses the AWS cloud for your ElastiCache instances

 

On-Premises – In this case, you can create your ElastiCache instances using AWS Outpost.

 

AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to your own on-site infrastructure.

 

 

ElastiCache REDIS Replication –  Cluster Mode Disabled

 

There are two possible configuration modes for running ElastiCache and REDIS:

 

Cluster Mode Disabled, and Cluster Mode Enabled:

 

In this configuration you run ONE PRIMARY NODE of ElastiCache with up to 5 Read Replicas

 

Note that ElastiCache uses asynchronous replication to maintain the Read Replicas, so there is a replication lag.

 

The primary node is always used for read/write. The other nodes are read-only.

 

There is just ONE SHARD, and every node in it holds all the data.

 

Multi-AZ is enabled by default for failovers.

 

 

ElastiCache REDIS Replication –  Cluster Mode Enabled 

 

With Cluster Mode Enabled the data is  partitioned across MULTIPLE SHARDS

 

Data is divided across all your shards. This helps especially with scaling write transactions.

 

Each shard consists of a primary node and up to 5 read replica nodes.

Also has Multi-AZ availability.

 

Provides up to 500 nodes per cluster: for example 500 shards with a single master each, or 250 shards with 1 master and 1 replica each.

 

 

 

Scaling REDIS with ElastiCache

 

For “Cluster Mode Disabled”:

 

Horizontal scaling – you scale out or in by adding or removing read replicas

 

Vertical scaling – you alter the type of the underlying nodes 

 

Important for exam!

 

This is done by ElastiCache creating a NEW node group with the new node type, replicating the data to the new node group, and finally updating the DNS records so that they point to the new node group rather than the old one.
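Triggering that vertical scale is one boto3 call, modify_replication_group with a new CacheNodeType; ElastiCache then performs the new-node-group, replication, and DNS-swap sequence itself. A hedged sketch (the group ID and node type are placeholders):

```python
# Sketch: vertical scaling of a cluster-mode-disabled Redis replication group.
# ElastiCache provisions nodes of the new type, replicates, then swaps DNS.

def scale_up_params(group_id: str, new_node_type: str) -> dict:
    return {
        "ReplicationGroupId": group_id,   # placeholder ID
        "CacheNodeType": new_node_type,   # e.g. a larger instance class
        "ApplyImmediately": True,
    }

params = scale_up_params("demo-redis", "cache.m6g.large")
# Real call: boto3.client("elasticache").modify_replication_group(**params)
print(params["CacheNodeType"])  # cache.m6g.large
```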

 

For “Cluster Mode Enabled”:

 

This can be done in two different ways – online and offline:

 

Online: no interruption to service and no downtime, but there can be some performance degradation during the scaling.

 

Offline: service is down, but additional configurations are supported

 

So, when doing horizontal REDIS scaling, you can do online and offline rescaling, and you can use resharding or shard rebalancing for this:

 

Resharding: scaling in or out by adding or removing shards.

 

Shard rebalancing: redistributing the keyspace among the shards as evenly as possible.

 

Vertical Scaling: changing to a larger or smaller node type; this is done online only and is relatively straightforward.
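For online resharding of a cluster-mode-enabled group, boto3 exposes modify_replication_group_shard_configuration; changing NodeGroupCount adds or removes shards while the cluster stays up. A sketch of the call's parameters (the group ID is a placeholder):

```python
# Sketch: online resharding of a cluster-mode-enabled Redis replication group.
# Changing NodeGroupCount adds or removes shards while the cluster is online.

def reshard_params(group_id: str, new_shard_count: int) -> dict:
    return {
        "ReplicationGroupId": group_id,  # placeholder ID
        "NodeGroupCount": new_shard_count,
        "ApplyImmediately": True,        # resharding runs online
    }

params = reshard_params("demo-redis", new_shard_count=5)
# Real call: boto3.client("elasticache") \
#     .modify_replication_group_shard_configuration(**params)
print(params["NodeGroupCount"])  # 5
```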

 

 

 

REDIS Metrics to Monitor

 

Evictions: the number of non-expired items the cache has removed in order to make space for new writes, i.e. the memory was full.

In this case choose an eviction policy to evict expired items, e.g. least recently used (LRU), or scale up to a larger node type with more memory, or else scale out by adding more nodes.

CPUUtilization: monitors CPU usage for the entire host; if too high, scale up to a larger node type or scale out by adding more nodes.

SwapUsage: should not be allowed to exceed 50 MB; if it does, verify you have configured enough reserved memory.

CurrConnections: the number of current connections – check whether a specific app is causing this.

DatabaseMemoryUsagePercentage: the percentage of available memory in use.

NetworkBytesIn/Out & NetworkPacketsIn/Out: network traffic to and from the node.

ReplicationBytes: the volume of data being replicated.

ReplicationLag: how far behind the replica is from the primary node.
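The guidance above can be turned into a simple health check. The thresholds here (any evictions, CPU over 90%, swap over 50 MB) follow these notes and are illustrative, not official alarm defaults:

```python
# Sketch: turning the monitoring guidance above into a simple alarm check.
# Thresholds are illustrative: any evictions, high CPU, swap over 50 MB.

def cache_health_actions(evictions: int, cpu_pct: float, swap_bytes: int) -> list[str]:
    actions = []
    if evictions > 0:
        actions.append("tune eviction policy, scale up, or add nodes")
    if cpu_pct > 90:
        actions.append("scale up to a larger node type or add nodes")
    if swap_bytes > 50 * 1024 * 1024:  # 50 MB
        actions.append("verify reserved memory configuration")
    return actions

# Metric values would come from CloudWatch (AWS/ElastiCache namespace).
print(cache_health_actions(evictions=12, cpu_pct=95.0, swap_bytes=0))
```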

 

 

 

 

 

 

 

 

 

Some ElastiCache use cases

 

know these for the exam!

 

Updating and managing leaderboards in the gaming industry

 

Conducting real-time analytics on live e-commerce sites

 

Monitoring status of customers’ accounts on subscription-based sites

 

Processing and relaying messages on instant messaging platforms

 

Online media streaming

 

Performing geospatial processes

 

 

 

Pros and Cons of Using ElastiCache

 

Pros of ElastiCache

 

Fully-managed – ElastiCache is a fully-managed cloud-based solution.

 

AWS takes care of backups, failure recovery, monitoring, configuration, setup, software updating and patches, and hardware provisioning.

 

Improved application performance – ElastiCache provides in-memory data storage that substantially reduces database query times.

 

Easily scalable – you can scale up and down with minimal overhead

 

Highly available – ElastiCache achieves high availability through automatic failover detection and use of standby read replicas.

 

Cons of ElastiCache

 

Limited and complex integration – ElastiCache doesn’t provide many easy integration options, and you can only connect it to databases and applications hosted on AWS.

High learning curve – the ElastiCache user interface is not intuitive, and the system takes significant effort to understand properly.

 

High price – you pay only for what you use, but the costs of ElastiCache can rise swiftly with usage.

 

 

Comparison of ElastiCache With Redis, CloudFront, And DynamoDB

 

ElastiCache is quite different from each of these services.

 

 

AWS ElastiCache versus Redis

 

 

ElastiCache is an in-memory cache in the cloud. With very fast retrieval of data from managed in-memory caches, ElastiCache improves overall response times and reduces reliance on slow disk-based databases for processing queries.

 

Redis stands for Remote Dictionary Server — a fast, in-memory, open-source, key-value data store that is usually implemented as a queue, message broker, cache, and database.

 

ElastiCache is built on open-source Redis to be compatible with the Redis APIs, and it works seamlessly with Redis clients.

 

This means that you can run your self-managed Redis applications and store the data in an open Redis format, without having to change the code.

 

ElastiCache versus CloudFront

 

While ElastiCache and CloudFront are both AWS caching solutions, their approaches and framework differ greatly.

 

ElastiCache enhances the performance of web applications by retrieving information from fully-managed in-memory data stores at high speed.

 

To do this it utilizes Memcached and Redis, and is able in this way to substantially reduce the time applications need to read data from disk-based databases.

 

Amazon CloudFront is primarily a Content Delivery Network (CDN) for faster delivery of web-based data, deploying edge caches positioned closer to the traffic source. This saves traffic from distant geolocations from having to fetch content entirely from the original hosting server.

 

ElastiCache versus DynamoDB

 

DynamoDB is a fully managed AWS NoSQL database service that holds its data on solid-state drives (SSDs). The data is replicated across three Availability Zones to increase reliability and availability. In this way, it saves the overhead of building, maintaining, and scaling costly distributed database clusters.

 

ElastiCache is the AWS “Caching-as-a-Service”, while DynamoDB serves as the AWS “Database as a Service”.

 

 

Pricing of ElastiCache

 

Pricing is based on the caching engine you choose, plus the type and number of cache nodes (on-demand or reserved – see Deployment Options below).

 

If you are using multiple nodes (i.e. replicas) in your cluster, you pay for – or, with reserved pricing, reserve – a node for each of your cluster nodes.

 

 

Difference Between Redis and Memcached

 

 

REDIS (similar to RDS):

Multi-AZ with auto-failover
Read replicas used to scale reads and provide HA
Data durability
Backup and restore

Primary use case: In-memory database & cache
Data model: In-memory key-value
Data structures: Strings, lists, sets, sorted sets, hashes, HyperLogLog
High availability & failover: Yes

 

Memcached by contrast:

Primary use case: Cache
Data model: In-memory key-value
Data structures: Strings, objects
High availability & failover: No
Multi-node data partitioning, i.e. sharding
No HA
Non-persistent data
No backup and restore
Multi-threaded architecture

 

 

Main Points To Remember About REDIS and Memcached

 

REDIS is for high-availability – memcached has no AZ-failover, only sharding.

 

Also REDIS provides backup & restore – memcached does not.

 

Memcached has a multi-threaded architecture, unlike REDIS.

 

 

 

 

 

 

Memcached Scaling

 

Memcached clusters can have 1–40 nodes (a soft limit).

 

Horizontal scaling: you add or remove nodes from the cluster and use “Auto Discovery” to allow your app to identify the new nodes or node configuration.

 

Vertical scaling:  scale up or down to larger or smaller node types

 

to scale up: you create a new cluster with the new node type

 

then update your app to use the new cluster endpoints

 

then delete the old cluster

 

Important to note that Memcached clusters/nodes start out empty, so your data will be re-cached from scratch.

 

There is no backup mechanism for Memcached.

 

 

Memcached Auto Discovery

 

automatically detects all the nodes

 

all the cache nodes in the cluster maintain a list of metadata about all the nodes

 

note: this is seamless from the client perspective
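Under the hood, Auto Discovery works by querying any node's configuration endpoint with the `config get cluster` command; the response carries a version number and a line of `hostname|ip|port` entries. A sketch of parsing that node list (the exact response framing here is my understanding of the protocol, and the hostnames are placeholders):

```python
# Sketch: parsing a Memcached Auto Discovery ("config get cluster") response.
# Format assumed here: a version line, then one line of hostname|ip|port
# entries separated by spaces.

def parse_cluster_config(payload: str) -> list[tuple[str, str, int]]:
    lines = [ln for ln in payload.strip().splitlines() if ln]
    nodes = []
    for entry in lines[-1].split():  # last non-empty line holds the node list
        host, ip, port = entry.split("|")
        nodes.append((host, ip, int(port)))
    return nodes

sample = ("12\n"
          "node-a.cache.example.com|10.0.0.1|11211 "
          "node-b.cache.example.com|10.0.0.2|11211\n")
print(parse_cluster_config(sample))
# [('node-a.cache.example.com', '10.0.0.1', 11211),
#  ('node-b.cache.example.com', '10.0.0.2', 11211)]
```

An Auto Discovery-capable client refreshes this list periodically, which is why node changes are seamless to the app.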

 

Memcached Metrics to Monitor

Evictions: the number of non-expired items the cache evicted to make space for new writes (when memory is full). The solution: choose an eviction policy to evict expired items, and/or scale up to a larger node type with more RAM, or else scale out by adding more nodes.

 

CPUUtilization: solution: scale up to larger node type or else scale out by adding more nodes

 

SwapUsage: should not exceed 50 MB.

 

CurrConnections: the number of concurrent and active connections

 

FreeableMemory: amount of free memory on the host

 

 

 

 

 


AWS ElastiCache for Redis/Memcached

ElastiCache is an in-memory DB cache. 

 

ElastiCache supports two open-source in-memory caching engines: Memcached and Redis

 

Applications query ElastiCache (EC) first; if the data is present, this is a “cache hit”.

 

If the data is not in the cache (a “cache miss”), the app reads it from RDS and then stores it in EC.

 

this relieves load on RDS

 

The cache must have an invalidation strategy set in order to ensure the most relevant data is cached – not so easy in practice.

 

ElastiCache is used for managed Redis or Memcached.

 

It is an in-memory DB with very high performance and low latency.

 

reduces load for read-intensive workloads

 

helps make the application stateless

 

AWS takes care of maintenance, upgrades, patching, backups etc.

 

BUT you have to make major application code changes to query EC!

 

EC can also be used for DB user session store

 

which avoids users having to reauthenticate and log in to the DB again

 

 

Difference between Redis and Memcached

 

REDIS

 

is similar to RDS:

 

allows for multi-az with auto-failover

 

use read replicas

 

data durability and high availability rather like RDS

 

backup and restore features

 

 

MEMCACHED

 

uses multi-node for partitioning of data, known as sharding

 

NO high availability and no replication

 

it is not a persistent store – cached data is lost on node failure or restart

 

no backup and no restore

 

has multi-threaded architecture

 

 

Memcached is a pure cache with no high availability or data protection on failure – a simpler option.

 

Deployment Options

 

Amazon ElastiCache can use on-demand cache nodes or reserved cache nodes.

 

On-demand nodes provide cache capacity by the hour; resources are assigned when a cache node is provisioned.

 

You can terminate an on-demand node at any time. Billing is monthly for the actual hours used.

 

Reserved nodes use 1-year or 3-year contracts.

 

Hourly cost of reserved nodes is much lower than hourly cost for on-demand nodes.

 

 

 

REDIS Use Case:

 

need to know for exam!

 

Gaming leaderboards, as these require intensive processing.

 

REDIS uses “sorted sets”, which ensure accurate real-time data is generated for the leaderboard – important!
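With redis-py this maps directly onto the sorted-set commands. The pure-Python helper below mirrors ZREVRANGE's highest-score-first ordering so the ranking logic can be seen without a live cluster (player names and scores are made up):

```python
# Sketch: a gaming leaderboard on a Redis sorted set.
# Against a live cluster with redis-py this would be roughly:
#   r.zadd("leaderboard", {"alice": 3100, "bob": 2800})
#   r.zrevrange("leaderboard", 0, 9, withscores=True)   # top 10
# The helper below mirrors ZREVRANGE's highest-score-first ordering in
# plain Python so the ranking logic is testable locally.

def top_n(scores: dict[str, float], n: int) -> list[tuple[str, float]]:
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

scores = {"alice": 3100, "bob": 2800, "carol": 4100}
print(top_n(scores, 2))  # [('carol', 4100), ('alice', 3100)]
```

Redis keeps the set ordered on every ZADD, which is why leaderboard reads stay fast even under heavy write load.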

 

 

EC Security – important for exam

 

The EC caches do not use IAM policies for data access – IAM applies only to AWS API-level security.

 

REDIS AUTH

 

you use Redis AUTH to authenticate – a password/token is set when creating the Redis cluster

 

on top of security groups

 

supports in-flight SSL encryption
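With redis-py, a Redis AUTH-protected connection looks roughly like the sketch below; the endpoint and token are placeholders, and `ssl=True` provides the in-flight encryption:

```python
# Sketch: connection settings for a Redis AUTH-protected ElastiCache
# endpoint using redis-py. Host and token are placeholder values.

def redis_connection_kwargs(endpoint: str, auth_token: str) -> dict:
    return {
        "host": endpoint,        # e.g. the replication group's primary endpoint
        "port": 6379,
        "password": auth_token,  # the Redis AUTH token set at cluster creation
        "ssl": True,             # in-flight TLS encryption
    }

kwargs = redis_connection_kwargs("demo.cache.example.com", "my-auth-token")
# Real connection: import redis; r = redis.Redis(**kwargs); r.ping()
print(kwargs["ssl"])  # True
```

Security groups still control which hosts can reach port 6379 at all; Redis AUTH is a second layer on top.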

 

 

Memcached

 

uses SASL authentication (more advanced)

 

3 kinds of caching patterns for EC – needed for the exam

 

Lazy loading – data is written to the cache only when it is read: on a cache miss, the app reads from the DB and then copies the result into the cache. All read data ends up in the cache, but it can become stale.

Write-through – adds or updates cache data whenever data is written to the DB, so there is no stale data.

 

Session Store: stores temp session data in cache using a TTL
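The three patterns can be sketched with dict-backed stand-ins for the cache and DB, so the sketch runs without a live cluster; in real code the cache calls would be redis-py `get`/`setex`:

```python
import time

# Sketch of the three caching patterns using dicts as stand-ins for the
# cache and DB. In real code the cache would be a redis-py client.

db = {"user:1": "alice"}
cache: dict[str, tuple[str, float]] = {}  # key -> (value, expiry timestamp)

def lazy_load(key: str, ttl: int = 300) -> str:
    """Lazy loading: read the cache first; on a miss, read the DB and cache it."""
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit
    value = db[key]                          # cache miss: read from the DB
    cache[key] = (value, time.time() + ttl)  # copy into cache with a TTL
    return value

def write_through(key: str, value: str, ttl: int = 300) -> None:
    """Write-through: update the DB and the cache together, so no stale data."""
    db[key] = value
    cache[key] = (value, time.time() + ttl)

def store_session(session_id: str, data: str, ttl: int = 1800) -> None:
    """Session store: temporary session data kept only in the cache, with a TTL."""
    cache[f"session:{session_id}"] = (data, time.time() + ttl)

print(lazy_load("user:1"))   # alice (miss, then cached)
write_through("user:1", "alicia")
print(lazy_load("user:1"))   # alicia (hit, fresh thanks to write-through)
```

The trade-off shows up directly: lazy loading only caches what is read but risks staleness; write-through stays fresh but caches data that may never be read.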

 

 

 

DB TCP Ports

 

PostgreSQL: 5432

 

MySQL: 3306

 

Oracle RDS: 1521

 

MSSQL Server: 1433

 

MariaDB: 3306 (same as MySQL)

 

Aurora: 5432 (if PostgreSQL compatible) or 3306 (if MySQL compatible)
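As a quick self-test, the port list above can be captured in a lookup table (Aurora's port depends on its compatibility mode, so both entries are listed):

```python
# The default DB TCP ports listed above, as a lookup table.
DB_PORTS = {
    "postgresql": 5432,
    "mysql": 3306,
    "oracle-rds": 1521,
    "mssql": 1433,
    "mariadb": 3306,            # same as MySQL
    "aurora-postgresql": 5432,  # PostgreSQL-compatible mode
    "aurora-mysql": 3306,       # MySQL-compatible mode
}

print(DB_PORTS["oracle-rds"])  # 1521
```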

 

For the exam make sure to be able to differentiate an “Important Port” vs an “RDS database Port”.

 
