4
votes

I'm interested in whether anyone has done something like this before and, if so, how it worked out. We have a Jenkins farm with about 15 slaves. Right now each slave has its own local disk for the workspaces, but our jobs are not tied to specific slaves. This means that if Job 1 originally ran on Slave1 but then had to switch to Slave2, it would have to pull the code again. This seems wasteful both in download time and in disk space, because the code ends up duplicated across two slaves.

Is it a good idea to mount a shared NFS drive (or some other shared drive) across all the slaves so that the jobs could run on any slave, but the disk would be the same for all? The obvious risk would be latency, but are there other risks associated with this as well?

Thanks!

4 Answers

2
votes

Assuming your network is good and your mounts are correctly set up, I don't see any problem with this approach. As you suggested, you will save on checkout time but pay in network transit.
I suggest you try it out with some of the slaves and do some benchmarking.

I hope this helps.

2
votes

The only downside I can think of is if you want to build the same job on more than one slave simultaneously (not Jenkins' default behaviour, but it is possible). In that case you would have multiple builds using the same workspace directory.

2
votes

Given how cheap and fast disks are these days, I really doubt you will see any benefit from your plan.

Instead I can think of several downsides:

  • An NFS-mounted disk is slower than local disk
  • An NFS-mounted disk can be less reliable (both the network and the NFS server have to stay up)
  • There is no way to tell Jenkins that the disk on different slaves is actually shared. Jenkins might decide to clean up the workspace on Slave1 while it is being used for a build on Slave2.

If you are worried about the checkout time, there are ways to optimize that:

  • You do not have to delete the workspace at the end of the build. If Jenkins sees it already has a workspace for a job, it will try to reuse it for the next build.
  • Configure the SCM to update and clean the existing workspace instead of checking out a fresh copy every time.

The details of how to do this vary a bit depending on which version control system you use. There may also be other tricks available, such as shallow clones or reference repositories.
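As a sketch of the "reuse the workspace" idea, assuming Git as the version control system. The demo stands up a throwaway local repository in place of a real SCM server; `REPO_URL` and `WORKSPACE` are illustrative names, not Jenkins built-ins:

```shell
# Demo of reusing a workspace between builds, with Git as the SCM.
# A throwaway local repository stands in for the real SCM server.
set -e
DEMO=$(mktemp -d)
git init -q "$DEMO/origin"
echo 'hello' > "$DEMO/origin/file.txt"
git -C "$DEMO/origin" add file.txt
git -C "$DEMO/origin" -c user.name=demo -c user.email=demo@example.com \
    commit -qm 'initial commit'

REPO_URL="file://$DEMO/origin"   # stand-in for your repository URL
WORKSPACE="$DEMO/workspace"      # stand-in for the Jenkins workspace dir

if [ -d "$WORKSPACE/.git" ]; then
    # The workspace survived from an earlier build: update and clean it
    # instead of downloading everything again.
    git -C "$WORKSPACE" fetch -q --prune origin
    git -C "$WORKSPACE" reset -q --hard origin/HEAD
    git -C "$WORKSPACE" clean -qfdx
else
    # First build on this slave: a shallow clone keeps the transfer small.
    git clone -q --depth 1 "$REPO_URL" "$WORKSPACE"
fi
ls "$WORKSPACE"
```

On the first run the `else` branch does the shallow clone; on later builds the existing checkout is updated and cleaned in place, which is the cheap path.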

I'm pretty sure you can make the checkout time a non-issue. Disk space usage is harder to eliminate, but disk is usually cheap enough. And if you have a small, fast SSD, you can clean up generated files from the workspace at the end of the build to save space. (I have exactly that case at work.)
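A minimal sketch of that end-of-build cleanup. Here `mktemp` stands in for the Jenkins workspace, and `build/` is a placeholder for whatever your build actually generates:

```shell
# Keep the checkout, drop generated files. All paths are illustrative.
set -e
WORKSPACE=$(mktemp -d)   # stand-in for the Jenkins workspace
mkdir -p "$WORKSPACE/src" "$WORKSPACE/build"
echo 'int main(void) { return 0; }' > "$WORKSPACE/src/main.c"
echo 'fake compiled output' > "$WORKSPACE/build/app"

# Post-build step: remove generated artifacts so a small, fast SSD does
# not fill up, while the source stays in place for the next build.
rm -rf "$WORKSPACE/build"
ls "$WORKSPACE"
```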

0
votes

Jenkins' encryption keys may differ from host A to host B, so credentials stored on one host can break when used on another.