According to the Slurm FAQ:
Can Slurm emulate a larger cluster? Yes, this can be useful for testing purposes. It has also been used to partition "fat" nodes into multiple Slurm nodes. There are two ways to do this. The best method for most conditions is to run one slurmd daemon per emulated node in the cluster as follows.
Assume we have a single node with 10 GPUs and 40 CPU cores. Can this be used to virtually split the node into 10 nodes with 4 cores and 1 GPU each, with explicit CPU/GPU binding? If so, what does the configuration need to look like?
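This is the rough sketch I have in mind so far, based on the "one slurmd per emulated node" approach from the FAQ. The hostname `physhost`, the port range `17001-17010`, the `RealMemory` figure, and the `/dev/nvidia*` device paths are placeholders I made up for illustration; I am not sure the rest is correct either.

```
# slurm.conf (fragment) -- my guess at defining 10 emulated nodes on one physical host.
# "physhost", the port range, and RealMemory are placeholders.
SlurmdLogFile=/var/log/slurm/slurmd.%n.log   # %n so the daemons do not share log/pid files
SlurmdPidFile=/var/run/slurmd.%n.pid
GresTypes=gpu

# Ten emulated nodes, all resolving to the same physical host, each on its own port,
# each advertising 4 CPUs and 1 GPU.
NodeName=node[01-10] NodeHostname=physhost NodeAddr=127.0.0.1 Port=[17001-17010] CPUs=4 RealMemory=16000 Gres=gpu:1 State=UNKNOWN

PartitionName=split Nodes=node[01-10] Default=YES MaxTime=INFINITE State=UP
```

```
# gres.conf (fragment) -- tie each GPU to the four physical cores I intend for that node.
# Assumes Nvidia GPUs enumerated as /dev/nvidia0 .. /dev/nvidia9.
NodeName=node01 Name=gpu File=/dev/nvidia0 Cores=0-3
NodeName=node02 Name=gpu File=/dev/nvidia1 Cores=4-7
# ... one line per emulated node, up to ...
NodeName=node10 Name=gpu File=/dev/nvidia9 Cores=36-39
```

Each emulated node would then presumably be started as its own daemon, e.g. `slurmd -N node01` through `slurmd -N node10`. Does this look right, and do the `Cores=` entries in gres.conf actually give me explicit CPU/GPU binding when all ten slurmd daemons run on the same physical host?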