We have the following setup for our Jenkins master/slave nodes: one static, fixed master (no jobs execute on the master) and dynamically spun-up AWS EC2 slaves, which we terminate after they have been idle for 30 minutes.
When Jenkins needs to execute a new job, it provisions a slave based on the configuration defined under Manage Jenkins ---> Configure System ---> Cloud ---> Amazon EC2. That configuration also has an init script section where we install and configure everything the node needs.
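For illustration, an init script of this kind might look roughly like the sketch below. This is not our actual script; it assumes an Ubuntu-based AMI, and the package names (Java, Docker) are placeholders based on the "docker-slave" instance naming:

    #!/bin/bash
    # Illustrative sketch only -- assumes an Ubuntu-based AMI; the packages
    # and commands are placeholders, not the real init script.
    set -euxo pipefail

    sudo apt-get update
    # Java is needed on the node so the Jenkins agent can launch
    sudo apt-get install -y openjdk-11-jre-headless
    # Docker install (assumption, based on the "docker-slave" instance naming)
    sudo apt-get install -y docker.io
    sudo systemctl enable --now docker
    sudo usermod -aG docker ubuntu

The point is simply that the script runs a series of install/configure steps like these, which is why the node takes several minutes before the agent is actually connected.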
When a new node is spun up, it initially appears with offline status on the following page: https://jenkins.xxxxx.xxxxx.com/computer/
While the node is still executing the init script (which takes around 10 minutes in our case to complete all of the commands), a Jenkins job that was waiting in the build queue gets assigned to this new node. People then see the following message in the job console output and abort their builds, thinking the job has been assigned to an offline slave/node that no longer exists:
Still waiting to schedule task
‘EC2 (Jenkins Slave) - jenkins-dev-docker-slave (i-xxxxxxxxx)’ is offline; ‘EC2 (Jenkins Slave) - jenkins-dev-docker-slave (i-xxxxxxxxxx)’ is offline
Is there any way for us to specify a configuration so that jobs are only assigned once the slave node/agent is actually ready ("Agent successfully connected and online"), instead of being allotted to the node before the init script commands have finished executing?
Note: all of these are pipeline jobs running on Linux slaves/nodes. Jenkins version: 2.190.1.
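For reference, a minimal sketch of how one of these pipeline jobs might target the EC2 agents is shown below; the label name and the stage contents are assumptions for illustration (the label is guessed from the instance names above). It is while a job is queued waiting for this label that the "is offline" messages appear in its console output:

    // Simplified, hypothetical pipeline job; the label and stage contents
    // are placeholders for illustration only.
    pipeline {
        agent { label 'jenkins-dev-docker-slave' }
        stages {
            stage('Build') {
                steps {
                    sh 'echo "runs once the EC2 agent is online"'
                }
            }
        }
    }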
Please let me know if anyone needs any other details.
Regards, Pavan