In a similar situation, we limited the number of executors per machine to 1. Where the VMs were formerly powerful enough to run 4 executors each, we destroyed them and created 4 small VMs instead, with 1 executor each. This guarantees that stages running in parallel execute on different VMs.
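As a rough sketch of what that buys you (a declarative Jenkinsfile; `small-vm` and the two shell scripts are made-up placeholders): with 1 executor per VM, two parallel stages requesting the same label are forced onto different machines, because each machine can only take one of them.

```groovy
pipeline {
    agent none
    stages {
        stage('Run stages in parallel') {
            parallel {
                stage('Stage A') {
                    // 'small-vm' is an assumed label shared by the 1-executor VMs
                    agent { label 'small-vm' }
                    steps {
                        sh './run-stage-a.sh'   // placeholder for your actual Stage A
                    }
                }
                stage('Stage B') {
                    agent { label 'small-vm' }
                    steps {
                        sh './run-stage-b.sh'   // placeholder for your actual Stage B
                    }
                }
            }
        }
    }
}
```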
The reason for that setup was this: while you can make a reasonable effort to ensure that your team's Stage A does not interfere with your team's Stage B (right now it looks like they do interfere, hence your question), you cannot make any assumptions about whether (a) your stages will break some other team's stages, or (b) their stages will break yours.

In our case, we ran out of disk space on a stage, so let's clean up Docker images first, right? Oopsie: another executor was using the image, and the Docker daemon is shared between all executors on the machine. Here's another scenario: your stage creates a big file, storage is exhausted, and up to four jobs fail (yours, as well as three innocent by-executors).
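Here is a sketch of the kind of "cleanup" step that bites you on a shared daemon (scripted pipeline; the image name and build context are made up):

```groovy
node {
    stage('Cleanup') {
        // Every executor on this machine talks to the same Docker daemon, so
        // "unused" is judged daemon-wide, not per executor: this prune can
        // remove images another job on the same VM still relies on.
        sh 'docker image prune -af'
    }
    stage('Build') {
        sh 'docker build -t my-app .'   // 'my-app' is a placeholder image name
    }
}
```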
Your best course of action seems to be convincing the powers that be to replace the big 4-executor machines with small 1-executor machines.
Your other options include using the Lockable Resources plugin and locking on, for example, the name of the machine, as sketched below. This won't prevent different parts of your pipeline from being scheduled on the same machine, but it will hold one of the stages until the previous one has finished. In our experience, this leads to a large increase in pipeline execution time for no tangible benefit whatsoever.
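For completeness, roughly what that looks like (assuming the Lockable Resources plugin is installed and a scripted pipeline; the node label and script are placeholders):

```groovy
node('big-4-executor-vm') {   // hypothetical label of one of the big machines
    // Take a lock named after the machine we landed on; any other stage that
    // does the same on this machine now waits until the lock is released.
    lock(env.NODE_NAME) {
        stage('Stage A') {
            sh './build.sh'   // placeholder for whatever the stage actually runs
        }
    }
}
```

The catch is visible right in the sketch: the lock serializes stages per machine rather than keeping them apart, so you pay in wall-clock time without actually gaining isolation.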