This isn't working the way you intend because calling sys.exit()
in a worker process only terminates that worker. It has no effect on the parent process or the other workers, because they are separate processes, and raising SystemExit
only affects the process it is raised in (there's a short demonstration of this after the output below). You need to send a signal back to the parent process to tell it that it should shut down. One way to do this for your use case would be to use an Event
created in a multiprocessing.Manager
server:
import multiprocessing

def myfunction(i, event):
    if not event.is_set():
        print(i)       # Skip printing once shutdown has been signalled
    if i == 20:
        event.set()    # Tell the parent it can terminate the Pool

if __name__ == "__main__":
    p = multiprocessing.Pool(10)
    m = multiprocessing.Manager()
    event = m.Event()
    for i in range(100):
        p.apply_async(myfunction, (i, event))
    p.close()
    event.wait()      # We'll block here until a worker calls `event.set()`
    p.terminate()     # Terminate all processes in the Pool
Output:
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
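Incidentally, here's a minimal sketch of the point made at the top (not part of the approach above; it uses plain multiprocessing.Process objects and an arbitrary worker function purely for illustration) showing that sys.exit() in a child process only terminates that child, while the parent and the other children carry on:

import multiprocessing
import sys

def worker(i):
    if i == 3:
        sys.exit(1)  # SystemExit is raised in this child only
    print("worker %d finished" % i)

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker, args=(i,)) for i in range(6)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # The parent is unaffected; only the child that called sys.exit(1)
    # ends up with a non-zero exit code.
    print("exit codes:", [p.exitcode for p in procs])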
As pointed out in Luke's answer, there is a race in this approach: there's no guarantee that the workers will run in order, so it's possible that myfunction(20, ..)
will run before myfunction(19, ..)
, for example. It's also possible that workers handling values after 20
will run before the main process can act on the event being set. I reduced the size of the race window by adding the if not event.is_set():
check before printing i
, but it still exists.
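If you want to shrink the window a bit further, one option (a sketch of my own, not something the original code does) is to stop submitting new tasks as soon as the event is set, so values after 20 that haven't been queued yet are never dispatched at all; tasks already queued or in flight can still run before terminate() takes effect. Only the submission loop changes; myfunction is the same as above:

import multiprocessing

def myfunction(i, event):
    if not event.is_set():
        print(i)
    if i == 20:
        event.set()

if __name__ == "__main__":
    p = multiprocessing.Pool(10)
    m = multiprocessing.Manager()
    event = m.Event()
    for i in range(100):
        if event.is_set():
            break  # A worker has signalled; stop submitting new tasks
        p.apply_async(myfunction, (i, event))
    p.close()
    event.wait()      # Returns immediately if the event is already set
    p.terminate()     # Terminate all processes in the Pool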