3 votes

I have the following scenario: a company has two regions on the Amazon cloud, Region 1 in the US and Region 2 in Asia. In the current architecture, DynamoDB and MySQL RDS are used and installed in the US region. The EC2 servers in the Asia region, which hold the business logic, have to access DynamoDB and RDS in the US region to get or update data.

The company now wants to install DynamoDB and MySQL RDS in the Asia region as well to get better performance, so the EC2 servers in the Asia region can get the required data from the same region.

The main issue is how we can sync the data between the two regions, since the current DynamoDB and RDS setups don't inherently support multiple regions.

Are there any best practices in such a case?

1
Have you considered using AWS Data Pipeline? I use it for backing up DynamoDB to S3, but there is no reason not to use it for synchronizing between regions. – marco
Thank you for your help. We are looking for a multi-master solution between regions. With Data Pipeline, one of the regions would become the master and the other a slave, so all updates would have to be done on the master and the slave would be used only for reads (it is not easy to write the logic behind synchronizing updates between regions). The current RDS has a built-in cross-region Read Replica feature, but there is nothing like that for DynamoDB. Can we use Data Pipeline for each update? We don't want to copy the whole database each time. – user3341697
Otherwise we are thinking of using SQS with DynamoDB to send all update requests to the slave region, as in the sketch below. – user3341697
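A minimal sketch of that SQS idea, assuming the Asia region already has a queue and a replica table (the queue URL, table name, and item layout below are hypothetical): after each write to the master table in the US region, the same item is pushed onto a queue in the Asia region, where a small worker replays it into the local DynamoDB table.

```python
import json
import boto3

# Master side (US region): write locally, then forward the change to the
# Asia region via SQS so a worker there can replay it.
dynamodb_us = boto3.client('dynamodb', region_name='us-east-1')
sqs_asia = boto3.client('sqs', region_name='ap-southeast-1')

# Hypothetical queue created beforehand in the Asia region.
ASIA_QUEUE_URL = 'https://sqs.ap-southeast-1.amazonaws.com/123456789012/replication-queue'

def put_item_and_replicate(item):
    # 1. Write to the master table in the US region.
    dynamodb_us.put_item(TableName='orders', Item=item)
    # 2. Push the same item onto the Asia queue for asynchronous replay.
    sqs_asia.send_message(QueueUrl=ASIA_QUEUE_URL, MessageBody=json.dumps(item))

# Worker side (Asia region): poll the queue and apply each change to the
# local replica table.
def replication_worker():
    sqs = boto3.client('sqs', region_name='ap-southeast-1')
    dynamodb_asia = boto3.client('dynamodb', region_name='ap-southeast-1')
    while True:
        resp = sqs.receive_message(QueueUrl=ASIA_QUEUE_URL,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)
        for msg in resp.get('Messages', []):
            dynamodb_asia.put_item(TableName='orders', Item=json.loads(msg['Body']))
            sqs.delete_message(QueueUrl=ASIA_QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])
```

Note that this is still effectively single-master with asynchronous replay; it does not resolve conflicts if both regions accept writes to the same item.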

1 Answer

3 votes

This is a big problem when access comes from different geographies.

RDS has recently added support for cross-region read replicas. Take a look here: http://aws.amazon.com/about-aws/whats-new/2013/11/26/announcing-point-and-click-database-replication-across-aws-regions-for-amazon-rds-for-mysql/
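That kind of replica can also be created through the API rather than the console; a rough boto3 sketch, assuming the source instance lives in the US region (the instance identifiers and class below are hypothetical):

```python
import boto3

# Create the replica in the destination (Asia) region, pointing at the
# source instance in the US region by its ARN.
rds_asia = boto3.client('rds', region_name='ap-southeast-1')

rds_asia.create_db_instance_read_replica(
    DBInstanceIdentifier='mysql-replica-asia',  # hypothetical replica name
    SourceDBInstanceIdentifier='arn:aws:rds:us-east-1:123456789012:db:mysql-master',
    DBInstanceClass='db.m3.medium',
)
```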

DynamoDB doesn't have this. You might have to think about partitioning your data (keep Asia data in Asia and US data in US). Another possibility is to speed things up with an in-memory cache. Don't go to DynamoDB for every read: after each successful read, cache the object in AWS ElastiCache, and set up the cache clusters near the regions that need them (you will need multiple cache clusters). Then all reads will be fast, since they are now region-local. When the data changes (a write), invalidate the object in the cache as well.
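A rough sketch of that cache-aside pattern, assuming a Redis-backed ElastiCache cluster near the Asia EC2 servers (the endpoint, table, and key names are hypothetical):

```python
import json
import boto3
import redis

# Region-local ElastiCache (Redis) endpoint and the remote DynamoDB table.
cache = redis.Redis(host='my-cache.abc123.apse1.cache.amazonaws.com', port=6379)
dynamodb = boto3.client('dynamodb', region_name='us-east-1')

TABLE = 'orders'
TTL_SECONDS = 300  # keep cached objects bounded in case an invalidation is missed

def get_order(order_id):
    # 1. Try the region-local cache first.
    cached = cache.get(order_id)
    if cached is not None:
        return json.loads(cached)
    # 2. Cache miss: fall back to DynamoDB in the US region and populate the cache.
    resp = dynamodb.get_item(TableName=TABLE, Key={'order_id': {'S': order_id}})
    item = resp.get('Item')
    if item is not None:
        cache.set(order_id, json.dumps(item), ex=TTL_SECONDS)
    return item

def update_order(order_id, item):
    # Writes still go to the master region; the stale cache entry is invalidated.
    dynamodb.put_item(TableName=TABLE, Item=item)
    cache.delete(order_id)
```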

However, this method only speeds up reads, not writes. Typically most apps will be OK with that.