6 votes

I have an ip::udp::socket constructed with an io_service. There is only one boost::thread which calls the io_service::run() method, and an instance of io_service::work to prevent io_service::run() from returning. The completion handlers for my ip::udp::socket have custom asio_handler_allocate() and asio_handler_deallocate() functions, which are backed by a my::custom_memory_pool.
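
Roughly, the setup looks like this (a simplified sketch, not my real code; trivial_pool and its allocate()/deallocate() interface stand in for my::custom_memory_pool):

    #include <boost/asio.hpp>
    #include <boost/aligned_storage.hpp>
    #include <cstddef>

    // Simplified stand-in for my::custom_memory_pool: one 512-byte slot with a
    // heap fallback.
    struct trivial_pool
    {
        trivial_pool() : in_use_(false) {}

        void* allocate(std::size_t size)
        {
            if (!in_use_ && size <= 512)
            {
                in_use_ = true;
                return storage_.address();
            }
            return ::operator new(size); // fall back to the heap
        }

        void deallocate(void* p)
        {
            if (p == storage_.address())
                in_use_ = false;
            else
                ::operator delete(p);
        }

        boost::aligned_storage<512> storage_;
        bool in_use_;
    };

    struct completion_handler
    {
        explicit completion_handler(trivial_pool& pool) : pool_(&pool) {}

        void operator()(const boost::system::error_code& /*ec*/, std::size_t /*bytes*/)
        {
            // consume the datagram ...
        }

        trivial_pool* pool_;
    };

    // Hook functions found by argument-dependent lookup; asio calls these to
    // obtain and release the memory that backs the pending asynchronous operation.
    inline void* asio_handler_allocate(std::size_t size, completion_handler* h)
    {
        return h->pool_->allocate(size);
    }

    inline void asio_handler_deallocate(void* p, std::size_t /*size*/, completion_handler* h)
    {
        h->pool_->deallocate(p);
    }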

When my application quits, this sequence of events occurs on my shutting-down thread (sketched in code after the list):

  1. ip::udp::socket::close()
  2. work::~work()
  3. io_service::stop()
  4. thread::join()
  5. my::custom_memory_pool::~custom_memory_pool()
  6. ip::udp::socket::~socket()
  7. thread::~thread()
  8. io_service::~io_service()
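
In code, that shutdown is roughly the following (a sketch; the work object is assumed to be held in a boost::scoped_ptr so it can be destroyed in step 2, and the names are illustrative):

    #include <boost/asio.hpp>
    #include <boost/scoped_ptr.hpp>
    #include <boost/thread.hpp>

    void shutdown(boost::asio::ip::udp::socket& socket,
                  boost::scoped_ptr<boost::asio::io_service::work>& work,
                  boost::asio::io_service& io_service,
                  boost::thread& thread)
    {
        socket.close();      // 1. cancel outstanding receives
        work.reset();        // 2. work::~work()
        io_service.stop();   // 3. ask run() to return as soon as possible
        thread.join();       // 4. wait for the run() thread to exit
        // 5-8: the pool, socket, thread and io_service destructors run in the
        // caller after this function returns.
    }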

In step 8, the call to io_service::~io_service() causes...

Program terminated with signal 11, Segmentation fault.
#0  0x00000000005ad93c in my::custom_memory_pool<boost::aligned_storage<512u, -1u> >::deallocate (this=0x36323f8, t=0x7fca97a07880)
    at memory.hpp:82
82                      reinterpret_cast<pool_node*>(t)->next_ = head_;
(gdb) bt 30
#0  0x00000000005ad93c in my::custom_memory_pool<boost::aligned_storage<512u, -1u> >::deallocate (this=0x36323f8, t=0x7fca97a07880)
    at memory.hpp:82
#1  0x00000000005ad40a in asio_handler_deallocate (p=0x7fca97a07880, s=96, h=0x7fffe09d5480) at net.cpp:22
#2  0x0000000000571a07 in boost_asio_handler_alloc_helpers::deallocate<socket_multicast::completion_handler> (p=0x7fca97a07880, s=96, h=...)
    at /usr/include/boost/asio/detail/handler_alloc_helpers.hpp:51
#3  0x0000000000558256 in boost::asio::detail::reactive_socket_recvfrom_op<boost::asio::mutable_buffers_1, boost::asio::ip::basic_endpoint<boost::asio::ip::udp>, socket_multicast::completion_handler>::ptr::reset (this=0x7fffe09d54b0)
    at /usr/include/boost/asio/detail/reactive_socket_recvfrom_op.hpp:81
#4  0x0000000000558310 in boost::asio::detail::reactive_socket_recvfrom_op<boost::asio::mutable_buffers_1, boost::asio::ip::basic_endpoint<boost::asio::ip::udp>, socket_multicast::completion_handler>::do_complete (owner=0x0, base=0x7fca97a07880)
    at /usr/include/boost/asio/detail/reactive_socket_recvfrom_op.hpp:112
#5  0x0000000000426706 in boost::asio::detail::task_io_service_operation::destroy (this=0x7fca97a07880)
    at /usr/include/boost/asio/detail/task_io_service_operation.hpp:41
#6  0x000000000042841b in boost::asio::detail::task_io_service::shutdown_service (this=0xd4df30)
    at /usr/include/boost/asio/detail/impl/task_io_service.ipp:96
#7  0x0000000000426388 in boost::asio::detail::service_registry::~service_registry (this=0xd4a320, __in_chrg=<value optimized out>)
    at /usr/include/boost/asio/detail/impl/service_registry.ipp:43
#8  0x0000000000428e99 in boost::asio::io_service::~io_service (this=0xd49f38, __in_chrg=<value optimized out>)
    at /usr/include/boost/asio/impl/io_service.ipp:51

So io_service::~io_service() is trying to return memory to the pool that I destroyed back in step 5.

I can't move my::custom_memory_pool::~custom_memory_pool() to after io_service::~io_service().

I expected that after io_service::stop() and thread::join() return, there could be no more asio_handler_deallocate() calls. Apparently that's not the case. What can I do in step 3 to force the io_service to dequeue all of its completion events and deallocate all of its handler memory, and how can I block until the io_service finishes those tasks?

Maybe I should move work::~work() to come after io_service::stop()... perhaps I'm not giving the io_service the opportunity to complete all operations. – James Brock
If you're explicitly calling stop, then when you destroy the work is irrelevant. The work only prevents the io_service from automatically stopping when it has no further tasks pending. – Dave S
The asio docs: io_service::stop() means "Subsequent calls to run(), run_one(), poll() or poll_one() will return immediately." work::~work() means the end of the requirement that "the io_service object's run() function will not exit while work is underway." So the more I think about it, the more it seems that the proper order must be io_service::stop() then work::~work(), but the docs aren't explicit about that. – James Brock
The order is irrelevant. By default, run() only returns when the io_service is stopped or there is no work (either a work object or a pending asynchronous operation) to do. By adding a work object, it prevents the latter of those cases. Explicit calls to stop cause the run/run_one/... functions to return as soon as possible, and further calls to return immediately. Calling io_service::reset() will then allow processing to restart. That is why stop does not remove the handlers, because there would be no way to restart. – Dave S

2 Answers

4 votes

Here is the answer: when tearing down an io_service and its services, don't call io_service::stop() at all. Just destroy the work object (work::~work()) and let run() return on its own.

io_service::stop() is really only for temporarily suspending the io_service so that it can be restarted later with io_service::reset(). An ordinary graceful shutdown of an io_service should not involve io_service::stop().
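
In other words, a graceful teardown looks roughly like this (a sketch, assuming the work object is held in a boost::scoped_ptr so it can be destroyed on demand):

    #include <boost/asio.hpp>
    #include <boost/scoped_ptr.hpp>
    #include <boost/thread.hpp>

    void graceful_shutdown(boost::asio::ip::udp::socket& socket,
                           boost::scoped_ptr<boost::asio::io_service::work>& work,
                           boost::thread& thread)
    {
        socket.close();   // pending operations complete with operation_aborted
        work.reset();     // work::~work(): run() may exit once it has no work left
        thread.join();    // by the time run() returns, asio has invoked (and
                          // deallocated) every queued handler
        // No io_service::stop(). The memory pool, socket and io_service can now
        // be destroyed as in steps 5-8 of the question.
    }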

1 vote

Calling io_service::stop() doesn't let the io_service finish its pending handlers; it simply stops processing after any currently running handler and makes run() return as soon as possible.

Since the various internal structures are destroyed when the io_service is destroyed, one solution is to control the order in which the io_service is destroyed relative to the custom allocator: either declare them in the right order if they're members of the same structure (put the allocator before the io_service so that it outlives it), or allocate them on the heap and explicitly order their destruction to guarantee that the io_service is destroyed first.
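
For example, with both objects as members of the same struct, declaring the pool before the io_service guarantees the pool outlives the io_service, because members are destroyed in reverse order of declaration (a sketch; custom_pool stands in for the real pool type):

    #include <boost/asio.hpp>

    struct custom_pool { /* stand-in for my::custom_memory_pool */ };

    struct udp_receiver
    {
        custom_pool pool_;                    // declared first => destroyed last
        boost::asio::io_service io_service_;  // destroyed before pool_
        boost::asio::ip::udp::socket socket_; // destroyed before io_service_

        udp_receiver() : socket_(io_service_) {}
    };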