
I am having trouble understanding how the threads and resources allocated to a snakejob translate to the number of cores allocated per snakejob on my SLURM partition. I have set the --cores flag to 48 in the .sh script that runs my Snakefile, yet 5 snakejobs are running concurrently with 16 cores provided to each of them. Does a rule-specific thread count supersede the --cores flag in Snakemake? I thought --cores was the maximum number of cores that all my jobs together had to work with.

Also, are cores allocated based on memory, and does that scale with the number of threads specified? For example, my jobs were allocated 10 GB of memory apiece but only one thread, and each job was given two cores according to my SLURM output. When I specified 8 threads with 10 GB of memory, I was provided 16 cores instead. Does that have to do with the amount of memory I gave each job, or is an additional core simply provided per thread for memory purposes? Any help would be appreciated.

Here is one of the snakejob outputs:

    Building DAG of jobs...
    Using shell: /usr/bin/bash
    Provided cores: 16
    Rules claiming more threads will be scaled down.
    Job counts:
            count   jobs
            1       index_genome
            1

    [Tue Feb  2 10:53:59 2021]
    rule index_genome:
        input: /mypath/genome/genomex.fna
        output: /mypath/genome/genomex.fna.ann
        jobid: 0
        wildcards: bwa_extension=.ann
        threads: 8
        resources: mem_mb=10000
Here is my bash command:

    module load snakemake/5.6.0
    snakemake -s snake_make_x --cluster-config cluster.yaml --default-resources --cores 48 --jobs 47 \
        --cluster "sbatch -n {threads} -M {cluster.cluster} -A {cluster.account} -p {cluster.partition}" \
        --latency-wait 10

1 Answer

When you use SLURM together with Snakemake, the --cores flag unfortunately no longer means cores; it means jobs. So when you set --cores 48, you are actually telling Snakemake to run at most 48 jobs in parallel.
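
Given that behaviour, a minimal sketch of the same command that makes the job cap explicit with --jobs and drops the redundant --cores might look like this. The --mem flag and the {resources.mem_mb} placeholder are my additions, not part of the original command; adjust them to your cluster's configuration:

    module load snakemake/5.6.0
    # In cluster mode, --jobs caps the number of jobs submitted to SLURM at
    # once; --cores is omitted because it would be interpreted the same way.
    # --mem forwards each rule's mem_mb to sbatch (assumed addition).
    snakemake -s snake_make_x --cluster-config cluster.yaml --default-resources \
        --jobs 47 \
        --cluster "sbatch -n {threads} --mem={resources.mem_mb} -M {cluster.cluster} -A {cluster.account} -p {cluster.partition}" \
        --latency-wait 10

With -n {threads}, each job requests exactly the thread count declared in its rule. Note that on clusters configured with a per-CPU memory default (DefMemPerCPU), SLURM can allocate extra CPUs to satisfy a memory request, which might be what produced the additional cores you saw; that is a guess, not something confirmed by your output.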

Related question: Behaviour of "--cores" when using Snakemake with the slurm profile