75
votes

I have experience coding OpenMP for shared-memory machines (in both C and FORTRAN) to carry out simple tasks like matrix addition, multiplication, etc. (just to see how it competes with LAPACK). I know OpenMP well enough to carry out simple tasks without needing to look at documentation.

Recently, I shifted to Python for my projects and I don't have any experience with Python beyond the absolute basics.

My question is:

What is the easiest way to use OpenMP in Python? By easiest, I mean the one that takes the least effort on the programmer's side (even if it comes at the expense of added system time)?

The reason I use OpenMP is that a serial code can be converted into a working parallel code with a few !$OMPs scattered around; the time required to achieve a rough parallelization is fascinatingly small. Is there any way to replicate this feature in Python?

From browsing around on SO, I can find:

  • C extensions
  • Stackless Python

Are there more? Which aligns best with my question?


7 Answers

33
votes

Due to the GIL, there is no point in using threads for CPU-intensive tasks in CPython. You need either multiprocessing (example) or C extensions that release the GIL during computations, e.g., some of the numpy functions (example).
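To illustrate the GIL-release point, here is a minimal sketch (my own, not from the answer; it assumes a numpy built against a BLAS that releases the GIL, such as OpenBLAS):

import numpy as np
from concurrent.futures import ThreadPoolExecutor

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

# np.dot releases the GIL while inside the C/BLAS code, so these four
# independent products can overlap on multiple cores even though they
# run in ordinary Python threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda _: a.dot(b), range(4)))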

You could also easily write C extensions that use multiple threads in Cython (example).

39
votes

Cython

Cython has OpenMP support: with Cython, OpenMP can be added by using the prange (parallel range) operator and passing the -fopenmp compiler flag in setup.py.

Inside a prange stanza, execution is performed in parallel because we disable the global interpreter lock (GIL): the with nogil: statement specifies the block where the GIL is disabled.

To compile cython_np.pyx we have to modify the setup.py script as shown below. We tell it to inform the C compiler to use -fopenmp as an argument during compilation, to enable OpenMP and to link with the OpenMP libraries.
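A sketch of that setup.py (the module name cython_np.pyx comes from the text above; the rest is standard setuptools/Cython usage, so treat it as a template rather than the answer's exact script):

from setuptools import setup
from setuptools.extension import Extension
from Cython.Build import cythonize

ext_modules = [
    Extension(
        "cython_np",
        ["cython_np.pyx"],
        extra_compile_args=["-fopenmp"],  # tell the C compiler to enable OpenMP
        extra_link_args=["-fopenmp"],     # link against the OpenMP runtime
    ),
]

setup(ext_modules=cythonize(ext_modules))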

With Cython’s prange, we can choose different scheduling approaches. With static, the workload is distributed evenly across the available CPUs. However, if some of your calculation regions are expensive while others are cheap, asking Cython to schedule the work chunks equally across the CPUs with static means the cheap regions will complete sooner than the others, and those threads will then sit idle. Both the dynamic and guided schedule options mitigate this problem by allocating work in smaller chunks dynamically at runtime, so that the CPUs are utilized more evenly when the workload’s calculation time is variable. Thus, for your code, the correct choice will vary depending on the nature of your workload.
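A minimal Cython sketch of prange with an explicit schedule (hypothetical function, my own illustration; swap 'dynamic' for 'static' or 'guided' to compare behaviour on uneven workloads):

from cython.parallel import prange

def sum_squares(double[:] data):
    cdef Py_ssize_t i
    cdef double total = 0.0
    # nogil=True releases the GIL inside the loop; Cython infers a
    # reduction for the in-place addition on total.
    for i in prange(data.shape[0], nogil=True, schedule='dynamic'):
        total += data[i] * data[i]
    return total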

Numba

Numba’s premium version, NumbaPro, has experimental support for a prange parallelization operator for working with OpenMP.
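As an illustration of the style, here is a sketch using the prange operator exposed by the open-source numba package (swapped in here because NumbaPro itself is proprietary; the decorator and flag shown are numba's standard API, not necessarily NumbaPro's):

from numba import njit, prange

@njit(parallel=True)
def parallel_sum(arr):
    total = 0.0
    # Iterations of prange may be distributed across threads; numba
    # recognizes the in-place addition as a reduction.
    for i in prange(arr.shape[0]):
        total += arr[i]
    return total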

Pythran

Pythran (a Python-to-C++ compiler for a subset of Python) can take advantage of vectorization and of OpenMP-based parallelization, though it runs on Python 2.7 only. You specify parallel sections using pragma omp directives (very similarly to Cython’s OpenMP support described above), e.g.:

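A sketch of what such an annotated function looks like (my own example; it assumes Pythran's comment-based #pythran export and #omp annotation spellings, so check the Pythran docs for the exact directive syntax):

#pythran export row_sums(float64[][])
import numpy as np

def row_sums(a):
    n = a.shape[0]
    out = np.zeros(n)
    # The comment below is a Pythran OpenMP annotation, not dead code:
    # it asks for the loop iterations to be distributed across threads.
    #omp parallel for
    for i in range(n):
        out[i] = np.sum(a[i])
    return out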

PyPy

The JIT Python compiler PyPy supports the multiprocessing module (see the side note below) and has a project called PyPy-STM, "a special in-development version of PyPy which can run multiple independent CPU-hungry threads in the same process in parallel".

Side note: multiprocessing

OpenMP is a low-level interface to multiple cores. You may want to look at multiprocessing. The multiprocessing module works at a higher level, sharing Python data structures, while OpenMP works with C primitive objects (e.g., integers and floats) once you’ve compiled to C. It only makes sense to use OpenMP if you’re compiling your code; if you’re not compiling (e.g., if you’re using efficient numpy code and you want to run on many cores), then sticking with multiprocessing is probably the right approach.
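For completeness, a minimal multiprocessing sketch (my own illustration of the module's standard Pool API, not code from the answer):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    # Four worker processes; each is a separate interpreter with its
    # own GIL, so CPU-bound work genuinely runs in parallel. The cost
    # is pickling arguments and results between processes.
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))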

16
votes

To the best of my knowledge, there is no OpenMP package for Python (and I don't know what it would do if there were one). If you want threads directly under your control, you will have to use one of the threading libraries. However, as pointed out by others, the GIL (Global Interpreter Lock) makes multi-threading in Python for performance a little... well, pointless*. The GIL means that only one thread can access the interpreter at a time.

I would suggest looking at NumPy/SciPy instead. NumPy lets you write Matlab-esque code where you are operating on arrays and matrices with single operations. It has some parallel processing capabilities as well, see the SciPy Wiki.
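For example, the "Matlab-esque" style means replacing explicit Python loops with whole-array operations (a tiny illustration using only standard numpy):

import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

c = a + b   # elementwise addition, no explicit Python loop
d = a @ b   # matrix multiplication, dispatched to BLAS/LAPACK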


* Ok, it isn't pointless, but unless the time is consumed outside of Python code (like by an external process invoked via popen or some such), the threads aren't going to buy you anything other than convenience.

12
votes

If you want to release the GIL and use OpenMP, you can take a look at Cython. It offers simple parallelism for some common tasks. You can read more in the Cython documentation.

9
votes

Maybe your answer is in Cython:

"Cython supports native parallelism through the cython.parallel module. To use this kind of parallelism, the GIL must be released (see Releasing the GIL). It currently supports OpenMP, but later on more backends might be supported." Cython Documentation

7
votes

There is a package called pymp, which the author describes as a package that brings OpenMP-like functionality to Python. I have tried it, albeit with a different use case: file processing. It worked, and I think it is quite simple to use. Below is a sample taken from the GitHub page:

import pymp
ex_array = pymp.shared.array((100,), dtype='uint8')
with pymp.Parallel(4) as p:
    for index in p.range(0, 100):
        ex_array[index] = 1
        # The parallel print function takes care of asynchronous output.
        p.print('Yay! {} done!'.format(index))

0
votes

http://archive.euroscipy.org/talk/6857 "introduces Cython's OpenMP abilities focussing on parallel loops over NumPy arrays. Source code examples demonstrate how to use OpenMP from Python. Results for parallel algorithms with OpenMP show what speed-ups can be achieved for different data sizes compared to other parallelizing strategies."

import numpy
import cython
from cython cimport parallel

@cython.boundscheck(False)
@cython.wraparound(False)
def func(object[double, ndim=2] buf1 not None,
        object[double, ndim=2] buf2 not None,
        object[double, ndim=2] output=None,
        int num_threads=2):
    cdef unsigned int x, y, inner, outer
    if buf1.shape != buf2.shape:
        raise TypeError('Arrays have different shapes: %s, %s' % (buf1.shape,
            buf2.shape))
    if output is None:
        output = numpy.empty_like(buf1)
    outer = buf1.shape[0]
    inner = buf1.shape[1]
    # Release the GIL so the prange loop below can run across threads;
    # bounds/wraparound checks stay disabled for raw C-speed indexing.
    with nogil, cython.boundscheck(False), cython.wraparound(False):
        for x in parallel.prange(outer, schedule='static',
                num_threads=num_threads):
            for y in range(inner):
                output[x, y] = ((buf1[x, y] + buf2[x, y]) * 2 +
                    buf1[x, y] * buf2[x, y])
    return output