12 votes

What's the maximum size of the Linux UDP receive buffer? I thought it was limited only by available RAM, but when I set

5GB for rmem_max:

echo 5000000000 > /proc/sys/net/core/rmem_max

and 4GB for the actual socket buffer (in Erlang):

gen_udp:open(Port, [{recbuf, 4000000000}])

and then measure the buffer utilization, netstat shows:

# netstat -u6anp | grep 5050
udp6  1409995136      0 :::5050  :::*       13483/beam.smp

I can't exceed this 1.4GB. For smaller buffer sizes, e.g. 500MB, the actual buffer size matches the configured value. My system is Debian 6.0 and the machine has 50GB of RAM available.
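For reference, the effective size can be read back from the socket with getsockopt(); a minimal Python sketch (the 2000000000 request is just an illustrative value, not the original Erlang setup):

    import socket

    # Request a large receive buffer, then read back what the kernel
    # actually granted; the result is capped by net.core.rmem_max.
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 2000000000)
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))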

Where does it say it's limited only by available RAM? And why do you think you need a 4GB buffer? – user207421
It doesn't. It doesn't say it's limited in any other way either. I need such a buffer to avoid data loss during longer network traffic peaks. – Wacław Borowiec
On the contrary. It says the kernel may adjust the value you supply up or down, and advises you to call getsockopt() to see what value was actually allocated. I find it hard to believe you need 4GB to handle traffic peaks. Probably you should just read faster. – user207421
"It says the kernel may adjust the value you supply up or down" – where did you find this information? How can you read faster than "while(true){recv(Socket)}" within one thread? I'm dropping packets after receiving them for the test's sake. I'm able to read 60000 600B packets per second, while it's not a problem to generate 200000/s of traffic. Under these conditions the buffer fills after 16 seconds. You can't objectively say that 10s is a peak but 20s is not. I'd rather expect that with a better machine I'd be able to survive a longer peak. – Wacław Borowiec
I've been reading that statement in man pages for over 20 years. It isn't news. How do you know you're dropping the packets at the receiver? – user207421

2 Answers

12 votes

It seems that there is a limit in Linux. I have successfully set rmem_max to 2^31-1 (2147483647):

   root@xxx:/proc/sys/net/core# echo 2147483647 > rmem_max
   root@xxx:/proc/sys/net/core# cat rmem_max
   2147483647

2^31 was too much; the stored value overflows the signed 32-bit range:

   root@xxx:/proc/sys/net/core# echo 2147483648 > rmem_max
   root@xxx:/proc/sys/net/core# cat rmem_max
   -18446744071562067968

Setting it to 5000000000 yields a value that has wrapped around at 32 bits (5000000000 - 2^32 = 705032704):

   root@xxx:/proc/sys/net/core# echo 5000000000 > rmem_max
   root@xxx:/proc/sys/net/core# cat rmem_max
   705032704

I have tested setting and getting the socket receive buffer in Python (with 'bufferSize' as the value under test):

   import socket
   ss = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   ss.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufferSize)
   print(ss.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

If 'bufferSize' is less than 1024^3, the program prints the doubled 'bufferSize' (the kernel doubles the value set via setsockopt() to allow space for bookkeeping overhead, as documented in socket(7)); otherwise it falls back to 256.

The value 705032704 * 2 = 1410065408 is close to the 1409995136 obtained by netstat in the question.
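To make the boundary visible, here is a small probe loop (my own sketch, not part of the original test; values chosen around 2^30):

    import socket

    # Probe the effective SO_RCVBUF around the 2**30 boundary: below it
    # the kernel reports roughly double the requested size (capped by
    # rmem_max); at or above it the setting falls back to the minimum.
    for bufferSize in (2**20, 2**29, 2**30 - 1, 2**30, 2**31 - 1):
        ss = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ss.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufferSize)
        print(bufferSize, ss.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
        ss.close()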

0 votes

2^31-1 (2147483647, the maximum 32-bit signed integer):

root@localhost:~# sysctl -w net.core.rmem_max=2147483647
net.core.rmem_max = 2147483647

root@localhost:~# sysctl -w net.core.rmem_max=2147483648
sysctl: setting key "net.core.rmem_max": Invalid argument
net.core.rmem_max = 2147483648

Echoing into the /proc filesystem appears to overflow when attempting to set larger values.
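The same overflow can be demonstrated programmatically by writing candidate values and reading back what the kernel actually stored; a Python sketch (assumes root; the /proc path is the standard one):

    # Write candidate values to rmem_max and read back what the kernel
    # stored; values past 2**31 - 1 wrap around or are rejected.
    PATH = "/proc/sys/net/core/rmem_max"
    for value in (2**31 - 1, 2**31, 5000000000):
        with open(PATH, "w") as f:
            f.write(str(value))
        with open(PATH) as f:
            print(value, "->", f.read().strip())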