I want to build a cluster system within our AWS environment. The cluster will have a master node and several slaves. The slaves will connect to the master over a TCP/IP connection. There may be several clusters in our organization's AWS environment (e.g. dev1, dev2, qa1, qa2, etc.).

For this particular technology, the slaves must somehow discover the IP address of the master node. What is the best practice for doing this? I have a few ideas:

  1. Put the entire cluster in some sort of NAT'd subnet and have the master node always at a known address (e.g. 192.168.0.1)
  2. Require some sort of domain name for each cluster and use DNS.
  3. Use Eureka instead of DNS.

There may be other options. I'm somewhat new to AWS but not new to network topologies, so I may be going in the wrong direction. #1 above sounds like the easiest thing to do. Are there any other ideas?
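For example, with option 2 I imagine each slave would only need to resolve a per-cluster hostname and connect; a minimal sketch of what I have in mind (the hostname and port below are made-up placeholders, with the record presumably maintained in something like Route 53):

import socket

# Hypothetical per-cluster DNS name and port for the master's TCP listener.
MASTER_HOSTNAME = "master.dev1.cluster.internal"
MASTER_PORT = 7000

# Resolve the master's IP via DNS, then open the TCP connection.
master_ip = socket.gethostbyname(MASTER_HOSTNAME)
conn = socket.create_connection((master_ip, MASTER_PORT), timeout=5)
print("connected to master at", master_ip)
conn.close()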

2 Answers

You can set arbitrary key-value tags on your EC2 instances. So, for example, you could tag your instances with class=master and class=slave when you create them. Then the other instances can use the EC2 API (via the AWS CLI or one of the AWS SDKs) to list the instances with a certain tag and get the master's IP address. Here's an example using the AWS CLI:

aws ec2 describe-instances --filters Name=tag:class,Values=master \
--query "Reservations[*].Instances[*].PrivateIpAddress" --output text

which would return the private IP address of the master.
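If you'd rather do the lookup from application code than shell out to the CLI, the same query via the Python SDK (boto3) might look roughly like this; the class tag is the hypothetical tag from above, and the running-state filter is just there to skip stopped or terminated instances:

import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged class=master and collect their private IPs.
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:class", "Values": ["master"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
master_ips = [
    instance["PrivateIpAddress"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]
print(master_ips)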

Another approach I've seen is to have the master write its own IP address to a file in an S3 bucket, and have the slave nodes read the master's IP from that same file. It could also be done with a database instead; any storage location reachable by all participants will do.
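A rough sketch of that idea with boto3 (the bucket name and key are placeholders, one key per cluster; the 169.254.169.254 lookup assumes the instance metadata service is reachable via IMDSv1):

import boto3
import urllib.request

BUCKET = "my-cluster-state"   # placeholder bucket name
KEY = "dev1/master-ip"        # placeholder key, one per cluster

s3 = boto3.client("s3")

def publish_master_ip():
    """Run on the master: publish this instance's private IP to S3."""
    ip = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/local-ipv4", timeout=2
    ).read().decode()
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=ip.encode())

def read_master_ip():
    """Run on a slave: fetch the master's published IP from S3."""
    return s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read().decode()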