After some searching I found that there are two accepted ways to manage the AS (Auto Scaling) API, or AS in general, for job workers:
One method is to manipulate the health of a server directly from within the worker itself. Quite a few sites do this and it is effective: when your worker detects that there are no more jobs, or that the system has redundant capacity, it marks the server it is running on as unhealthy. The AS API then comes along and automatically takes that server down after a period of time.
So with this method you would have a scale-up policy based on your SQS queue size over a period of time (say: for every 5 minutes the SQS message count stays over 100, add 2 servers; for every 10 minutes it stays over 500, increase cluster capacity by 50%). Scale-down would be handled by the worker code instead of an active policy.
This method works with zero-size clusters too, so you can take your cluster all the way down to no servers when it's not being used, making it quite cost effective.
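The self-terminating worker above can be sketched with boto3. The AS API call (`set_instance_health`) is real; the idle-poll threshold and the helper names are my own illustrative assumptions, not anything AWS-defined:

```python
# Method 1 sketch: the worker marks its own instance unhealthy once it has
# seen enough empty polls, and the Auto Scaling group takes the server down.
# IDLE_POLLS_BEFORE_SHUTDOWN and both function names are assumptions for
# illustration, not part of the AWS SDK.

IDLE_POLLS_BEFORE_SHUTDOWN = 5  # assumption: tune for your workload

def should_mark_unhealthy(consecutive_empty_polls):
    """Decide whether this worker considers itself redundant."""
    return consecutive_empty_polls >= IDLE_POLLS_BEFORE_SHUTDOWN

def mark_self_unhealthy(instance_id):
    """Tell the AS API this instance is unhealthy so AS removes it."""
    import boto3  # imported lazily so the decision logic is testable offline
    autoscaling = boto3.client("autoscaling")
    autoscaling.set_instance_health(
        InstanceId=instance_id,
        HealthStatus="Unhealthy",
        ShouldRespectGracePeriod=False,
    )
```

The worker's poll loop would increment a counter on each empty receive, reset it on a real job, and call `mark_self_unhealthy` once `should_mark_unhealthy` returns true.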
Advantages:
- Easy to set up; probably the quickest of the two
- Uses official AWS API functions
- The AWS-managed AS API handles the cluster size for you
Disadvantages:
- Hard to manage without using the full AWS API, e.g. when launching a new server you can't get its instance ID back without querying the API for the full list of instance IDs. There are other occasions where the AWS AS API gets in your way and makes life a little harder if you want an element of self-control over your cluster
- Relying on Amazon to know what's best for your wallet: you are trusting the Amazon API to scale correctly, which is an advantage to many but a disadvantage to some
- The worker must house some of your server-pool code, meaning the worker is not generic and can't just be moved instantly to another cluster without a configuration change
With this in mind there is a second option: DIY. You use the EC2 Spot Instance and On-Demand Instance APIs to build your own AS layer around your custom rules. This is pretty simple to explain:
- You have a CLI script that, when run, starts, say, 10 servers
- You have a cron job that downs servers, or ups more, when it detects that certain conditions are satisfied
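The cron job's decision step might look like the following sketch. The thresholds mirror the earlier scale-up rules (over 100 messages adds 2 servers, and so on); the exact numbers and function names are assumptions for illustration. The `get_queue_attributes` call is the real SQS API:

```python
# DIY method sketch: a cron job reads the queue depth and decides how many
# servers to add or remove. Thresholds are illustrative assumptions.

def desired_change(queue_size, current_servers):
    """Return how many servers to add (positive) or remove (negative)."""
    if queue_size == 0:
        return -current_servers  # queue drained: scale all the way to zero
    if queue_size > 500:
        return 4                 # heavy backlog: add more servers
    if queue_size > 100:
        return 2                 # moderate backlog: add a couple
    return 0                     # within normal range: leave the pool alone

def current_queue_size(queue_url):
    """Fetch the approximate number of visible messages on the queue."""
    import boto3  # lazy import keeps the decision logic testable offline
    sqs = boto3.client("sqs")
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])
```

On each run the cron job would call `desired_change(current_queue_size(url), pool_size)` and then launch or terminate instances accordingly.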
Advantages:
- Easy and clean to manage on your end
- Can make generic workers
- The server pool can start to manage many clusters
- You can make the rules really quite complex, pulling figures from AWS metrics and combining them with comparisons and time ranges to decide whether things should change
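That last point, evaluating rules against metrics over a time range, could be sketched like this. The CloudWatch call and the `ApproximateNumberOfMessagesVisible` metric are real; the rule shape (metric sustained above a threshold for the whole window) is just one simple example of what your rules might look like:

```python
# Sketch of metric-driven rule evaluation: fetch a window of SQS queue-depth
# datapoints from CloudWatch and check whether the backlog was sustained.
from datetime import datetime, timedelta, timezone

def sustained_above(datapoints, threshold):
    """True if every datapoint in the window exceeded the threshold."""
    return bool(datapoints) and all(p > threshold for p in datapoints)

def queue_depth_history(queue_name, minutes=10):
    """Fetch per-minute average queue depth for the last N minutes."""
    import boto3  # lazy import keeps sustained_above testable offline
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": queue_name}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=60,
        Statistics=["Average"],
    )
    return [p["Average"] for p in resp["Datapoints"]]
```

A rule such as "over 100 messages for 5 minutes" then becomes `sustained_above(queue_depth_history(name, minutes=5), 100)`.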
Disadvantages:
- Hard to get multi-region (not so bad for SQS since SQS is single region)
- Hard to deal with errors in region capacity and workload
- You must rely on your own servers' uptime and your own code to ensure that the cron job runs as it should, provisions servers as it should, and breaks them down when it should
So really it seems to be a battle of which is more comfortable for the end user. I am personally still mulling over the two; I have created a small self-hosted server pooler that could work for me, but at the same time I am tempted to try to get this working on AWS's own API.
Hope this helps people,
EDIT: Note that with either of these methods you will still require a function on your side to predict how you should bid; you will need to call the bid-history API for your spot type (EC2 instance type) and compute how to bid.
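A minimal sketch of that bidding function, assuming a "recent max plus a margin" strategy (one simple choice among many, not a recommendation from AWS). The `describe_spot_price_history` call is the real EC2 API; the instance type and margin are illustrative assumptions:

```python
# Sketch of computing a spot bid from recent price history.

def compute_bid(price_history, margin=0.10):
    """Bid a margin above the highest recent price so short price spikes
    don't reclaim the instances immediately. Margin is an assumption."""
    return round(max(price_history) * (1 + margin), 4)

def recent_spot_prices(instance_type="m1.large", hours=6):
    """Fetch recent spot prices for one instance type in the region."""
    import boto3  # lazy import keeps compute_bid testable offline
    from datetime import datetime, timedelta, timezone
    ec2 = boto3.client("ec2")
    now = datetime.now(timezone.utc)
    resp = ec2.describe_spot_price_history(
        InstanceTypes=[instance_type],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
    )
    return [float(p["SpotPrice"]) for p in resp["SpotPriceHistory"]]
```

You would feed `compute_bid(recent_spot_prices())` into your spot instance requests when scaling up.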
Another edit: another way to automatically detect redundancy in a system is to check the empty-responses metric for your SQS queue. This is the number of times your workers have polled the queue and received no message. It is quite effective if you use an exclusive lock in your app for the duration of the worker.
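The check on that metric (CloudWatch exposes it for SQS as `NumberOfEmptyReceives`) could be as simple as a ratio test. The 80% threshold here is an arbitrary illustration, not a recommended value:

```python
# Sketch of redundancy detection from empty-receive counts: if most recent
# polls came back empty, the cluster is over-provisioned and can shrink.

def looks_redundant(empty_receives, total_receives, threshold=0.8):
    """True if the share of empty polls meets or exceeds the threshold."""
    if total_receives == 0:
        return False  # no polling data yet: make no scaling decision
    return empty_receives / total_receives >= threshold
```

The cron job (or the worker itself, under method 1) would sum these counts over a recent window and scale down when `looks_redundant` returns true.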