I have set up an AWS Batch environment with:
- Managed Compute environment
- Job Queue
- Job Definitions
The actual job (a Docker container) does a lot of video encoding and therefore uses up most of the CPU. The process itself takes a few minutes (close to 5 minutes just to get all the encoders initialized). Ideally I would want one job per instance so that the encoders are not CPU-starved.
My issue is that when I launch multiple jobs at the same time, or close enough to it, AWS Batch decides to launch both of them on the same instance, because the first container is still initializing and has not started using the CPUs yet. It looks like a race condition to me, where both jobs see the newly created instance as available.
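For example, I submit the jobs back to back, roughly like this (the queue and job definition names below are just placeholders, not my real ones):

```python
import boto3

batch = boto3.client("batch")

# Submit two jobs nearly simultaneously; both often end up on the same instance.
# "video-encoding-queue" and "video-encoding-job:1" are placeholder names.
for i in range(2):
    batch.submit_job(
        jobName=f"encode-job-{i}",
        jobQueue="video-encoding-queue",
        jobDefinition="video-encoding-job:1",
    )
```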
Is there a way I can launch one instance for each job, without Batch reusing instances that are already running? Or is there any other way to lock an instance once it has been designated for a particular job?
Thanks a lot for your help.
This is configured in the compute environment property of AWS Batch used when launching a job. Below is my configuration:

```
Minimum vCPUs    0
Desired vCPUs    0
Maximum vCPUs    256
Instance types   c5
```

– Guru Govindan
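For reference, this is roughly what that compute environment looks like if created with boto3; the environment name, service role, instance role, subnets, and security groups below are placeholders, not my actual values:

```python
import boto3

batch = boto3.client("batch")

# Managed EC2 compute environment with the settings listed above.
# All names and ARNs here are placeholders.
batch.create_compute_environment(
    computeEnvironmentName="video-encoding-ce",
    type="MANAGED",
    state="ENABLED",
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,        # Minimum vCPUs
        "desiredvCpus": 0,    # Desired vCPUs
        "maxvCpus": 256,      # Maximum vCPUs
        "instanceTypes": ["c5"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```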