1 vote

When using std::async with std::launch::async in a for loop, my code runs serially in the same thread, as if each async call waited for the previous one to finish before launching. The notes on the std::async reference page say this can happen if the returned std::future is not bound to a reference, but that's not the case in my code. Can anyone figure out why it's running serially?

Here is my code snippet:

class __DownloadItem__ { // DownloadItem is just a "typedef std::shared_ptr<__DownloadItem__> DownloadItem"
    std::string buffer;
    time_t last_access;
    std::shared_future<std::string> future;
};

for (uint64_t start : chunksToDownload) {
    DownloadItem cache = std::make_shared<__DownloadItem__>();
    cache->last_access = time(NULL);
    cache->future =
            std::async(std::launch::async, &FileIO::download, this, api, cache,
                       cacheName, start, start + BLOCK_DOWNLOAD_SIZE - 1);
}

The future is being stored in a shared future because multiple threads might be waiting on the same future.

I'm also using GCC 6.2.1 to compile it.

3
Do note that editing your question to change the code you "have" after you get answers is not kosher, as it can invalidate answers you received in good faith. In this case, though, it appears that it does not, and you have the same issue. The cache you add to the containers is not the cache that has the future stored in it. – NathanOliver
I believe std::shared_future is for multiple threads waiting on one result, whereas I think you need one thread waiting on multiple results. Maybe you need std::vector<std::future>? – Galik
@NathanOliver I will try to keep that in mind next time. – thejinx0r
@Galik I do have multiple threads waiting on the same result. They are stored in a map so that multiple threads don't try to do the same work. – thejinx0r
@NathanOliver I misunderstood your comment. I actually don't understand why you say that the cache variable is not the same cache in the container if cache is a shared_ptr. Can you please clarify? – thejinx0r

3 Answers

5 votes

The std::future returned by std::async blocks in its destructor. That means when you reach the } of

for (uint64_t start : chunksToDownload) {
    DownloadItem cache = std::make_shared<__DownloadItem__>();
    cache->last_access = time(NULL);
    cache->future =
            std::async(std::launch::async, &FileIO::download, this, api, cache,
                       cacheName, start, start + BLOCK_DOWNLOAD_SIZE - 1);
}  // <-- When we get here

cache is destroyed, which in turn calls the destructor of future, which waits for the thread to finish.

What you need to do is store each future returned from async in a persistent future that is declared outside of the for loop, so none of them is destroyed (and waited on) until all the tasks have been launched.

1 vote

That's a misfeature of std::async as defined by C++11. Its futures' destructors are special and wait for the operation to finish. There is more detailed info on Scott Meyers' blog.

cache is being destroyed at the end of each loop iteration, thereby calling destructors of its subobjects.

Use std::packaged_task, or keep a container of copies of the shared pointers to your cache, to avoid waiting in the destructors. Personally, I'd go with packaged_task.

0 votes

As you noticed yourself, the destructor of the future returned by std::async blocks and waits for the async operation to finish (for the future to become ready). In your case, the cache object goes out of scope at each loop iteration and thus gets destroyed, together with the future it holds, so you see the mentioned effect.