I am using Boost.Asio to write a simple server/client pair to transmit binary data. Specifically, I am using async_write and async_read with ip::tcp::socket. Nothing fancy, really.
boost::asio::async_write(*mp_socket, boost::asio::buffer(data_ptr, data_size),
    boost::bind(&TcpSocket::m_handleWrite, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));

boost::asio::async_read(*mp_socket, boost::asio::buffer(mp_buffer, m_buffer_size),
    boost::bind(&TcpSocket::m_handleRead, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));
To test, I randomly generated binary data and sent/received it between various platform pairs. The code works fine when both ends are on the same platform (e.g. all Linux, or all Windows), but fails cross-platform.
For example, comparing the data received on Windows (left) against the data sent from Linux (right), the only difference is 0D 0A (CRLF) on Windows where Linux has 0A (LF). I think an LF -> CRLF conversion is happening somewhere (in Boost.Asio, in Winsock, or elsewhere).
So, is there a way to disable this conversion, given that I am sending binary data? I looked for an option in the boost::asio configuration (such as using a raw buffer instead of a stream buffer) but could not find one. Thank you for the help.

The data in data_ptr is generated as follows:

size_t sz = rand() % 60000;
char* p = (char*)malloc(sz + 4);                    // 4-byte length header + payload
uint32_t* p_header = reinterpret_cast<uint32_t*>(p);
*p_header = htonl((uint32_t)sz);                    // big-endian length prefix
for (size_t i = 0; i < sz; ++i)
{
    p[i + 4] = rand() % 256;                        // % 256 covers all byte values; % 255 would never produce 0xFF
}
The data is not altered by TCP, or by async_write or async_read calls (the overloads that take a ConstBufferSequence should guarantee this). The difference is likely happening where the buffers themselves are generated. Can you show that code? - Chad

What is in data_ptr? - Chad