I'm writing a small client/server application in C++ with Winsock, and I can't explain a few things that are happening. I wrote two basic functions that send/receive all the data through a TCP connection:
bool sending(SOCKET source, char* buff, int size, int flags = 0)
{
    int sent = 0;
    int totalst = 0;
    int total = size;
    while(totalst != total)
    {
        sent = send(source, buff + totalst, size - totalst, flags);
        if(sent > 0)
        {
            totalst += sent;
        }
        if(sent == SOCKET_ERROR)
        {
            return false;
        }
    }
    return true;
}
bool receive(SOCKET source, char* buff, int size, int flags = 0)
{
    int rec = 0;
    int totalrc = 0;
    int total = size;
    while(totalrc != total)
    {
        rec = recv(source, buff + totalrc, size - totalrc, flags);
        if(rec > 0)
        {
            totalrc += rec;
        }
        if(rec == 0)
        {
            return false; // peer closed the connection; without this check the loop would spin forever
        }
        if(rec == SOCKET_ERROR)
        {
            return false;
        }
    }
    return true;
}
The server sends an integer containing the size of the data block that follows it. In my case the block size shouldn't change; it should always be 92600 bytes, but sometimes the client receives 92604 bytes instead. The odd thing is that if I make the server wait (with Sleep) after sending the block size and the block itself, the client always receives what I would expect:
int i = 0;
while(i < 100)
{
    i++;
    dat = getData();
    len = sizeof(dat);
    sending(source, (char*)&len, sizeof(len));
    sending(source, dat, len);
    Sleep(200);
}
Can it be that the client receives the incorrect number of bytes because of lag? Is there any way of fixing this? Any help is appreciated!