After studying the "window size" concept, my understanding is that it holds packets before they are sent over the wire and until an acknowledgement arrives for the earliest packet. Once this window fills up, subsequent packets will be dropped. I have also read somewhere that TCP is a streaming protocol, and that a "packet" is something that belongs to the IP protocol at the network layer.
What I assumed until now is that I declare a buffer (inside my code), fill it with some data, and send this buffer using a socket. For example, I declared a buffer of 10000 bytes and sent it repeatedly over a socket on a 10 Gbps link.
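To make my setup concrete, my sender loop looks roughly like the sketch below (simplified, error handling removed; the destination address 192.168.1.10 and port 5001 are just placeholders):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define BUF_SIZE 10000   /* the buffer I declare in code and send repeatedly */

int main(void)
{
    char buf[BUF_SIZE];
    memset(buf, 'A', sizeof(buf));

    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                      /* placeholder port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr); /* placeholder address */

    connect(fd, (struct sockaddr *)&dst, sizeof(dst));

    for (;;) {
        /* My assumption: each send() call becomes one "packet" of BUF_SIZE bytes */
        ssize_t n = send(fd, buf, sizeof(buf), 0);
        if (n <= 0)
            break;
    }

    close(fd);
    return 0;
}
```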
I have the following assumptions and questions. Please verify and help.
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in my code and send it over the socket. My assumption is that each call to send() transmits one packet of that size.
So if I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in my code?
What are the socket buffers that we set using SO_SNDBUF and SO_RCVBUF? Google says they are buffer space for the socket. Are they the same thing as the TCP window size, or something different? Which parameter is more suitable to vary in order to increase throughput?
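For reference, this is roughly how I am setting those per-socket buffers (a simplified sketch; the 4 MB value is just an example I picked):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    int size = 4 * 1024 * 1024;   /* requested buffer size in bytes (example value) */
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));

    /* Read back what the kernel actually granted; on Linux the returned
       value is typically double the requested one, capped by
       net.core.wmem_max / net.core.rmem_max. */
    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &actual, &len);
    printf("effective SO_SNDBUF: %d bytes\n", actual);

    close(fd);
    return 0;
}
```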
Also, there are three parameters for the socket buffer: min, default, and max. Which one should I vary in my experiment to get the most relevant results?
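I am assuming these three values are the ones Linux exposes as net.ipv4.tcp_wmem / net.ipv4.tcp_rmem; this is how I am currently reading them (sketch, Linux only):

```c
#include <stdio.h>

int main(void)
{
    /* The send-side triple; the receive side is tcp_rmem */
    FILE *f = fopen("/proc/sys/net/ipv4/tcp_wmem", "r");
    if (!f)
        return 1;

    long min, def, max;
    if (fscanf(f, "%ld %ld %ld", &min, &def, &max) == 3)
        printf("tcp_wmem  min=%ld  default=%ld  max=%ld bytes\n", min, def, max);

    fclose(f);
    return 0;
}
```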