I'm trying to run an MPI code on a cluster named aria. The cluster consists of 30 nodes, each with the following specs: 16 cores across 2 sockets (Intel Xeon E5-2650 v2), i.e. 32 logical cores with hyper-threading enabled, and 64 GB of 1866 MT/s main memory.
My Slurm batch script contains the following directives:
#SBATCH --ntasks=64 # Number of MPI ranks
#SBATCH --cpus-per-task=1 # Number of cores per MPI rank
#SBATCH --nodes=2 # Number of nodes
#SBATCH --ntasks-per-node=32 # How many tasks on each node
#SBATCH --ntasks-per-socket=16 # How many tasks on each CPU or socket
#SBATCH --mem-per-cpu=100mb # Memory per core
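For reference, here is an alternative version I considered, which requests only one task per physical core (this assumes Slurm counts the 16 physical cores per node rather than the 32 hyper-threads; I'm not sure whether that assumption holds on this cluster):

```shell
#SBATCH --ntasks=32            # Number of MPI ranks (one per physical core)
#SBATCH --cpus-per-task=1      # Number of cores per MPI rank
#SBATCH --nodes=2              # Number of nodes
#SBATCH --ntasks-per-node=16   # One task per physical core on each node
#SBATCH --ntasks-per-socket=8  # The E5-2650 v2 has 8 cores per socket
#SBATCH --mem-per-cpu=100mb    # Memory per core
```

Would this be the right way to restrict the job to physical cores, or is there a better approach?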
When I submit the job, sbatch returns the following message: "sbatch: error: Batch job submission failed: Requested node configuration is not available", which is a little confusing. I'm submitting one task per CPU and dividing the tasks equally between nodes and sockets, so can anyone advise on what is wrong with the configuration above? And one more thing: what is the optimal configuration given the hardware specs?
Thanks in advance