
I am attempting to share a complex object that cannot be pickled between processes in a multi-GPU DDP training scenario. The recommended Pythonic way I found for this is to use a Manager object and manipulate my object through proxies. However, when I try the following import, I get an error that the module cannot be found:

from torch.multiprocessing.managers import BaseManager

It seems like torch.multiprocessing does not cover the functionality I need from Python's multiprocessing. Is there a better way to do this, or can I mix regular multiprocessing objects with torch's multiprocessing?
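
For reference, here is roughly what I'm trying to set up, using the standard library's BaseManager together with torch.multiprocessing.spawn. MyComplexObject and run_worker are just placeholders for my actual (unpicklable) object and DDP training function:

    from multiprocessing.managers import BaseManager  # stdlib version, this import works

    import torch.multiprocessing as mp

    class MyComplexObject:
        # placeholder for the unpicklable object I want to share
        def update(self, value):
            pass

    class MyManager(BaseManager):
        pass

    MyManager.register("MyComplexObject", MyComplexObject)

    def run_worker(rank, world_size, shared_obj):
        # each DDP process would manipulate the shared object through its proxy
        shared_obj.update(rank)

    if __name__ == "__main__":
        world_size = 2
        with MyManager() as manager:
            shared_obj = manager.MyComplexObject()
            mp.spawn(run_worker, args=(world_size, shared_obj), nprocs=world_size)

I'm not sure whether passing the manager proxy through mp.spawn like this is safe, which is essentially my question.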