We have the following Jenkins setup:
- Jenkins master
- Jenkins Slave1
- Jenkins Slave2
- Jenkins Slave3
These are all virtual machines, and the slaves are permanent; they are not spun up and torn down automatically.
Now we have builds which need a lot of tools (Maven, Python, AWS CLI, ...). We could install every tool on every slave and everything would work fine, but we would rather use a Docker-based approach.
Nearly all the tutorials I've seen run the slaves themselves in Docker: they use an orchestration tool like Kubernetes, create a slave as a pod, do their work, and delete the pod again.
We don't have the option to do that.
Question: Is it a decent approach to keep an 'old' Jenkins setup with real VM slaves and run Docker on those slaves?
What I'm thinking of is writing a pipeline in which each stage uses a Docker container (see the sketch after this list):
- start the build (Jenkins picks a slave, e.g. Slave1)
- the pipeline starts
- stage 1: spin up e.g. a Python container, git clone and run the Python commands; mount the workspace as a volume?
- stage 2: spin up e.g. an AWS CLI container, mount the same workspace content, and execute further commands, etc.
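As a minimal sketch of that idea, here is a declarative Jenkinsfile using the Docker Pipeline plugin; the node label, image tags, and S3 bucket are placeholders I made up. A per-stage `docker` agent with `reuseNode true` runs the container on the node that was already allocated and mounts that node's workspace into it, so the files produced in stage 1 are visible in stage 2 without manual volume handling:

```groovy
pipeline {
    // Hypothetical label shared by Slave1-3; each VM only needs Docker installed.
    agent { label 'docker' }

    stages {
        stage('Build (Python)') {
            agent {
                docker {
                    image 'python:3.11'   // assumed image/tag
                    reuseNode true        // run on the same node and mount its workspace
                }
            }
            steps {
                checkout scm
                sh 'python -m pip install -r requirements.txt'
                sh 'python -m pytest'
            }
        }
        stage('Deploy (AWS CLI)') {
            agent {
                docker {
                    image 'amazon/aws-cli:latest'  // assumed image
                    reuseNode true                 // same workspace as the previous stage
                    args '--entrypoint='           // reset the image entrypoint so sh steps work
                }
            }
            steps {
                // Hypothetical artifact and bucket, just to show the workspace carries over.
                sh 'aws s3 cp build/artifact.zip s3://my-bucket/'
            }
        }
    }
}
```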
Can someone evaluate this approach?