Configuring Spring Cache Manager with AWS ElastiCache Redis (cluster mode disabled) and Lettuce

We have a Spring Boot 2 application that uses Redis as the cache manager. We deploy the application on AWS, where we use the AWS ElastiCache Redis service in cluster mode disabled. Our setup includes a Redis master with two Redis slaves. The default Java client for Redis that comes with the spring-boot-starter-data-redis dependency is lettuce-core. When you are working with a single Redis node with no slaves, using AWS ElastiCache Redis is as simple as setting spring.redis.url to the URL of the AWS ElastiCache Redis instance. This was the setup we were using until a month back. As the load on the system increased, we decided to use ElastiCache Redis in a replicated setup to scale our reads. In AWS, Redis implements replication in two ways:

  1. With a single shard that contains all of the cluster’s data in each node – Redis (cluster mode disabled)
  2. With data partitioned across up to 15 shards – Redis (cluster mode enabled)

In our case, the cached data is less than 1 GB, so it fits in the RAM of a single node. This made us choose the cluster mode disabled setup.
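One way to wire this up is to give Lettuce a static master/replica topology and let it prefer the replicas for reads. The sketch below is only a starting point and makes a few assumptions: Spring Data Redis 2.1+ and Lettuce 5.2+ (older versions use ReadFrom.SLAVE_PREFERRED instead of REPLICA_PREFERRED), and the endpoint names are placeholders, not real ElastiCache endpoints.

import io.lettuce.core.ReadFrom;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStaticMasterReplicaConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisCacheConfig {

    // Placeholder endpoints; replace with your ElastiCache primary and replica endpoints.
    private static final String MASTER_ENDPOINT = "my-redis.xxxxxx.0001.use1.cache.amazonaws.com";
    private static final String REPLICA_ENDPOINT_1 = "my-redis-ro-1.xxxxxx.0001.use1.cache.amazonaws.com";
    private static final String REPLICA_ENDPOINT_2 = "my-redis-ro-2.xxxxxx.0001.use1.cache.amazonaws.com";

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Read from replicas when available, fall back to the master otherwise.
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .build();

        // Static master/replica topology: the master plus the two replica nodes.
        RedisStaticMasterReplicaConfiguration serverConfig =
                new RedisStaticMasterReplicaConfiguration(MASTER_ENDPOINT, 6379);
        serverConfig.addNode(REPLICA_ENDPOINT_1, 6379);
        serverConfig.addNode(REPLICA_ENDPOINT_2, 6379);

        return new LettuceConnectionFactory(serverConfig, clientConfig);
    }
}

A RedisCacheManager built on top of this connection factory then serves cache reads from the replicas while writes go to the master.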

Continue reading “Configuring Spring Cache Manager with AWS ElastiCache Redis (cluster mode disabled) and Lettuce”

TIL #4: Downloading a zip from S3 to local directory using AWS CLI

Today, I had a need to download a zip file from S3. I quickly learnt that the AWS CLI can do the job. The AWS CLI has an aws s3 cp command that can be used to download a zip file from Amazon S3 to a local directory, as shown below.

$ aws s3 cp s3://my_bucket/myzip.zip ./

If you want to download all the files from an S3 bucket recursively, then you can use the following command.

$ aws s3 cp s3://my_bucket/ ./ --recursive

You can specify your AWS profile using the --profile option, as shown below.

$ aws s3 cp s3://my_bucket/myzip.zip ./ --profile test

To download all the files from a folder, you can use the following command:

$ aws s3 cp s3://my_bucket/my_folder ./ --recursive

You can also use the --include and --exclude options to filter files based on wildcards. For example, suppose you only want to download files with the zip extension from the S3 bucket pcl-caps; then you can use the following command.

$ aws s3 cp s3://pcl-caps ./ --recursive --exclude "*" --include "*.zip"

There is another useful option, --dryrun, that you can use to see the actions that would be performed without actually running the command. If we add the --dryrun flag to the above command, we can see which files will be downloaded to the local directory.

$ aws s3 cp s3://pcl-caps ./ --recursive --exclude "*" --include "*.zip" --dryrun

Hands-on guide for building Serverless applications

Yesterday, I released a hands-on guide to building Serverless applications using AWS Lambda and the Serverless framework. The guide is open source and available on GitHub. Check out the guide and please give feedback.

Serverless is an overloaded word that means different things depending on the context. It could mean using third-party managed services like Firebase, an event-driven architecture style, the next generation of compute services offered by cloud providers, or a framework for building Serverless applications. This series starts with an introduction to Serverless compute and architecture. Once we have learned the basics, we will start developing an application in a step-by-step manner.

Read more at https://github.com/shekhargulati/hands-on-serverless-guide.