I want to know the number of CPUs on the local machine using Python. The result should be user/real
as output by time(1)
when called with an optimally scaling userspace-only program.
14 Answers
If you have Python with a version >= 2.6 you can simply use:
import multiprocessing
multiprocessing.cpu_count()
http://docs.python.org/library/multiprocessing.html#multiprocessing.cpu_count
If you're interested in the number of processors available to your current process, you have to check cpuset first. Otherwise (or if cpuset is not in use), multiprocessing.cpu_count()
is the way to go in Python 2.6 and newer. The following method falls back to a couple of alternative methods in older versions of Python:
import os
import re
import subprocess


def available_cpu_count():
    """ Number of available virtual or physical CPUs on this system, i.e.
    user/real as output by time(1) when called with an optimally scaling
    userspace-only program"""

    # cpuset
    # cpuset may restrict the number of *available* processors
    try:
        m = re.search(r'(?m)^Cpus_allowed:\s*(.*)$',
                      open('/proc/self/status').read())
        if m:
            res = bin(int(m.group(1).replace(',', ''), 16)).count('1')
            if res > 0:
                return res
    except IOError:
        pass

    # Python 2.6+
    try:
        import multiprocessing
        return multiprocessing.cpu_count()
    except (ImportError, NotImplementedError):
        pass

    # https://github.com/giampaolo/psutil
    try:
        import psutil
        return psutil.cpu_count()   # psutil.NUM_CPUS on old versions
    except (ImportError, AttributeError):
        pass

    # POSIX
    try:
        res = int(os.sysconf('SC_NPROCESSORS_ONLN'))
        if res > 0:
            return res
    except (AttributeError, ValueError):
        pass

    # Windows
    try:
        res = int(os.environ['NUMBER_OF_PROCESSORS'])
        if res > 0:
            return res
    except (KeyError, ValueError):
        pass

    # jython
    try:
        from java.lang import Runtime
        runtime = Runtime.getRuntime()
        res = runtime.availableProcessors()
        if res > 0:
            return res
    except ImportError:
        pass

    # BSD
    try:
        sysctl = subprocess.Popen(['sysctl', '-n', 'hw.ncpu'],
                                  stdout=subprocess.PIPE)
        scStdout = sysctl.communicate()[0]
        res = int(scStdout)
        if res > 0:
            return res
    except (OSError, ValueError):
        pass

    # Linux
    try:
        res = open('/proc/cpuinfo').read().count('processor\t:')
        if res > 0:
            return res
    except IOError:
        pass

    # Solaris
    try:
        pseudoDevices = os.listdir('/devices/pseudo/')
        res = 0
        for pd in pseudoDevices:
            if re.match(r'^cpuid@[0-9]+$', pd):
                res += 1
        if res > 0:
            return res
    except OSError:
        pass

    # Other UNIXes (heuristic)
    try:
        try:
            dmesg = open('/var/run/dmesg.boot').read()
        except IOError:
            dmesgProcess = subprocess.Popen(['dmesg'], stdout=subprocess.PIPE)
            dmesg = dmesgProcess.communicate()[0]
        res = 0
        while '\ncpu' + str(res) + ':' in dmesg:
            res += 1
        if res > 0:
            return res
    except OSError:
        pass

    raise Exception('Can not determine number of CPUs on this system')
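A quick usage sketch for the helper above (only the function defined in the snippet is assumed):

if __name__ == '__main__':
    print(available_cpu_count())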
Another option is to use the psutil
library, which always turns out useful in these situations:
>>> import psutil
>>> psutil.cpu_count()
2
This should work on any platform supported by psutil
(Unix and Windows).
Note that on some occasions multiprocessing.cpu_count may raise a NotImplementedError while psutil will be able to obtain the number of CPUs. This is simply because psutil first tries to use the same techniques used by multiprocessing and, if those fail, it falls back to other techniques.
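If you want that behaviour in your own code, here is a rough sketch of such a fallback (the helper name is mine, and it assumes psutil is installed):

def cpu_count_with_fallback():
    # The standard library call can raise NotImplementedError on
    # platforms it does not know how to probe.
    try:
        import multiprocessing
        return multiprocessing.cpu_count()
    except NotImplementedError:
        pass
    # psutil tries the same techniques first, then additional ones.
    import psutil
    return psutil.cpu_count()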
len(os.sched_getaffinity(0))
is what you usually want
https://docs.python.org/3/library/os.html#os.sched_getaffinity
os.sched_getaffinity(0) (added in Python 3) returns the set of CPUs available considering the sched_setaffinity Linux system call, which limits which CPUs a process and its children can run on. 0 means to get the value for the current process. The function returns a set() of allowed CPUs, thus the need for len().
multiprocessing.cpu_count() and os.cpu_count(), on the other hand, just return the total number of logical CPUs in the system.
The difference is especially important because certain cluster management systems, such as Platform LSF, limit job CPU usage via the affinity mask.
Therefore, if you use multiprocessing.cpu_count(), your script might try to use far more cores than it has available, which may lead to overload and timeouts.
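For example, a sketch of sizing a worker pool from the affinity mask rather than the total count (this requires a platform where os.sched_getaffinity exists, mostly Linux):

import multiprocessing
import os

# Use the CPUs this process is actually allowed to run on (e.g. under
# taskset or a cluster scheduler), not the machine's total count.
n_workers = len(os.sched_getaffinity(0))
with multiprocessing.Pool(processes=n_workers) as pool:
    print(pool.map(abs, range(-4, 4)))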
We can see the difference concretely by restricting the affinity with the taskset
utility, which lets us control which CPUs a process may run on.
Minimal taskset example
For example, if I restrict Python to just 1 core (core 0) on my 16-core system:
taskset -c 0 ./main.py
with the test script:
main.py
#!/usr/bin/env python3
import multiprocessing
import os
print(multiprocessing.cpu_count())
print(os.cpu_count())
print(len(os.sched_getaffinity(0)))
then the output is:
16
16
1
Vs nproc
nproc does respect the affinity by default, and:
taskset -c 0 nproc
outputs:
1
and man nproc
makes that quite explicit:
print the number of processing units available
Therefore, len(os.sched_getaffinity(0)) behaves like nproc by default.
nproc has the --all flag for the less common case where you want the total CPU count without considering taskset:
taskset -c 0 nproc --all
os.cpu_count
documentation
The documentation of os.cpu_count
also briefly mentions this https://docs.python.org/3.8/library/os.html#os.cpu_count
This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with
len(os.sched_getaffinity(0))
The same comment is also copied on the documentation of multiprocessing.cpu_count
: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.cpu_count
From the 3.8 source under Lib/multiprocessing/context.py
we also see that multiprocessing.cpu_count
just forwards to os.cpu_count
, except that the multiprocessing
one throws an exception instead of returning None if os.cpu_count
fails:
def cpu_count(self):
    '''Returns the number of CPUs in the system'''
    num = os.cpu_count()
    if num is None:
        raise NotImplementedError('cannot determine number of cpus')
    else:
        return num
3.8 availability: systems with a native sched_getaffinity function
The only downside of os.sched_getaffinity is that it appears to be UNIX-only as of Python 3.8.
CPython 3.8 seems to just try to compile a small C hello world with a sched_setaffinity
function call at configure time, and if it is not present, HAVE_SCHED_SETAFFINITY
is not set and the function will likely be missing:
- https://github.com/python/cpython/blob/v3.8.5/configure#L11523
- https://github.com/python/cpython/blob/v3.8.5/Modules/posixmodule.c#L6457
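If you need something that degrades gracefully on platforms without it, here is a hedged sketch (usable_cpu_count is my own name, not a stdlib function):

import os

def usable_cpu_count():
    # sched_getaffinity is only exposed where the C call was found at
    # build time (mostly Linux); prefer it because it respects affinity.
    if hasattr(os, 'sched_getaffinity'):
        return len(os.sched_getaffinity(0))
    # Fallback: total logical CPU count, ignoring affinity; may be None.
    return os.cpu_count() or 1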
psutil.Process().cpu_affinity(): third-party version with a Windows port
The third-party psutil package (pip install psutil) has been mentioned at https://stackoverflow.com/a/14840102/895245, but not the cpu_affinity
function: https://psutil.readthedocs.io/en/latest/#psutil.Process.cpu_affinity
Usage:
import psutil
print(len(psutil.Process().cpu_affinity()))
This function does the same as the standard library os.sched_getaffinity
on Linux, but they have also implemented it for Windows by making a call to the GetProcessAffinityMask
Windows API function:
- https://github.com/giampaolo/psutil/blob/ee60bad610822a7f630c52922b4918e684ba7695/psutil/_psutil_windows.c#L1112
- https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getprocessaffinitymask
So in other words: those Windows users have to stop being lazy and send a patch to the upstream stdlib :-)
Tested in Ubuntu 16.04, Python 3.5.2.
In Python 3.4+: os.cpu_count().
multiprocessing.cpu_count()
is implemented in terms of this function but raises NotImplementedError
if os.cpu_count()
returns None
("can't determine number of CPUs").
If you want to know the number of physical cores (not virtual hyperthreaded cores), here is a platform-independent solution:
psutil.cpu_count(logical=False)
https://github.com/giampaolo/psutil/blob/master/INSTALL.rst
Note that the default value for logical
is True
, so if you do want to include hyperthreaded cores you can use:
psutil.cpu_count()
This will give the same number as os.cpu_count()
and multiprocessing.cpu_count()
, neither of which have the logical
keyword argument.
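For instance, a quick side-by-side (assuming psutil is installed; on a quad-core machine with hyper-threading this would typically print 4 and 8):

import psutil

print(psutil.cpu_count(logical=False))  # physical cores only
print(psutil.cpu_count())               # logical CPUs, same count as os.cpu_count()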
multiprocessing.cpu_count()
will return the number of logical CPUs, so if you have a quad-core CPU with hyperthreading, it will return 8
. If you want the number of physical CPUs, use the Python bindings to hwloc:
#!/usr/bin/env python
import hwloc
topology = hwloc.Topology()
topology.load()
print(topology.get_nbobjs_by_type(hwloc.OBJ_CORE))
hwloc is designed to be portable across OSes and architectures.
You can also use "joblib" for this purpose.
import joblib
print(joblib.cpu_count())
This method will give you the number of CPUs in the system. joblib needs to be installed, though. More information on joblib can be found here: https://pythonhosted.org/joblib/parallel.html
Alternatively, you can use Python's numexpr package. It has a lot of simple functions that are helpful for getting information about the system CPU.
import numexpr as ne
print(ne.detect_number_of_cores())
If you are using torch you can do:
import torch.multiprocessing as mp
mp.cpu_count()
The mp library in torch has the same interface as the main Python one, so you can also do this, as the commenter mentioned:
python -c "import multiprocessing; print(multiprocessing.cpu_count())"
Hope this helps! ;) It's always nice to have more than one option.
/proc/<PID>/status
has some lines that tell you the number of CPUs in the current cpuset: look for Cpus_allowed_list
. – wpoely86
import torch.multiprocessing as mp; mp.cpu_count()
– Charlie Parker
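Following up on wpoely86's comment, here is a small Linux-only sketch that counts CPUs from the Cpus_allowed_list field (which holds ranges such as 0-3,6):

import re

def cpus_allowed_list_count():
    # Parse e.g. "Cpus_allowed_list:  0-3,6" into a CPU count.
    with open('/proc/self/status') as f:
        m = re.search(r'(?m)^Cpus_allowed_list:\s*(.*)$', f.read())
    count = 0
    for part in m.group(1).split(','):
        lo, _, hi = part.partition('-')
        count += int(hi or lo) - int(lo) + 1
    return count

print(cpus_allowed_list_count())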