I'm interested to hear whether anyone has done something like this before and, if so, how it worked out. We have a Jenkins farm with about 15 slaves. Right now each slave has its own local disk for the workspaces, but our jobs are not tied to specific slaves. This means that if Job 1 originally ran on Slave1 but then had to switch to Slave2, it would have to pull the code again. This seems wasteful in terms of both download time and disk space, because the code ends up duplicated across two slaves.
Is it a good idea to mount a shared NFS drive (or some other shared filesystem) across all the slaves, so that jobs could run on any slave while the workspaces live on the same disk? The obvious risk would be latency, but are there other risks associated with this as well?
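For context, what I have in mind on each slave would look roughly like this (server name, export path, and mount point are placeholders, not our real setup):

```
# /etc/fstab on each Jenkins slave -- hypothetical server and paths
nfs-server:/export/jenkins-workspaces  /var/jenkins/workspace  nfs  rw,hard,noatime  0 0
```

Each slave's remote FS root (or the jobs' custom workspace) would then point at `/var/jenkins/workspace`, so a job would see the same checkout no matter which slave it lands on.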
Thanks!