12
votes

I have a problem with open files on Ubuntu 9.10 when running a server in Python 2.6, and the main problem is that I don't know why it happens.

I have set

ulimit -n 999999

net.core.somaxconn = 999999

fs.file-max = 999999

and lsof reports about 12,000 open files while the server is running.

I'm also using epoll.

But after some time it starts raising an exception:

File "/usr/lib/python2.6/socket.py", line 195, in accept
error: [Errno 24] Too many open files

I don't understand how it can hit the file limit when, according to lsof, the limit hasn't been reached.
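For anyone debugging the same symptom: a quick way to see the per-process descriptor limit the server actually inherited is the standard-library resource module (a minimal sketch, POSIX-only; not part of the original question):

```python
import resource

# Query the file-descriptor limit as this process actually sees it.
# Note: changing ulimit in another shell does not affect a server that
# is already running; the limit is inherited at process start.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit: %d, hard limit: %d" % (soft, hard))
```

If the soft limit printed here is far below 999999, the ulimit change never reached the server process.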

Thanks for the help.

3
What does "ulimit -n" return? Is the system actually letting you set it to 999999? - Daniel Stutzbach
You are probably hitting the per-process file descriptor limit, and you don't say how you have modified it. See NR_OPEN in /usr/include/linux/limits.h. What do you do with 12k open files?? - msw
I didn't know about NR_OPEN in /usr/include/linux/limits.h; it was set to 1024, so I raised it to 65536. As for "ulimit -n", it returns 999999. I'll test the server now with the new NR_OPEN setting and report back. Thanks. - Andrey Nikishaev
I tested the server with the new setting and it works perfectly. Thank you very much for the help. - Andrey Nikishaev
Hmm, I found some strange system behavior. I set all limits to 999999 and started the server, and added a function that logs the number of open files in the system using "sysctl fs.file-nr" and "lsof | wc -l". When the server is under heavy load it still raises error 24: Too many open files, but the number of open files never exceeds 15k. Could there be another limit? Or is one of them not set properly (and if so, how can that be checked)? - Andrey Nikishaev

3 Answers

28
votes

Parameters that configure the maximum number of open connections:

at /etc/sysctl.conf

add:

net.core.somaxconn=131072
fs.file-max=131072

and then:

sudo sysctl -p

at /usr/include/linux/limits.h

change:

#define NR_OPEN 65536

at /etc/security/limits.conf

add:

*                soft    nofile          65535
*                hard    nofile          65535
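To verify that the sysctl values above took effect, you can read the system-wide counters directly from /proc (a Linux-only sketch, equivalent to `sysctl fs.file-max fs.file-nr`; not part of the original answer):

```python
# Read the system-wide file handle limit and current usage from the
# Linux /proc interface. fs.file-nr reports: allocated, unused, maximum.
with open("/proc/sys/fs/file-max") as f:
    file_max = int(f.read().split()[0])
with open("/proc/sys/fs/file-nr") as f:
    allocated, unused, maximum = (int(x) for x in f.read().split())
print("system-wide max: %d, currently allocated: %d" % (file_max, allocated))
```

The per-process limit from limits.conf only applies to new login sessions, so restart the server from a fresh shell after editing it.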
13
votes

You can also do this from your Python code, like below:

import resource
resource.setrlimit(resource.RLIMIT_NOFILE, (65536, 65536))

The second argument is a tuple (soft_limit, hard_limit). The hard limit is the ceiling for the soft limit. The soft limit is what is actually enforced for a session or process. This allows the administrator (or user) to set the hard limit to the maximum usage they wish to allow. Other users and processes can then use the soft limit to self-limit their resource usage to even lower levels if they so desire.
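One caveat: an unprivileged process cannot raise its hard limit, so calling setrlimit with values above the current hard limit fails with ValueError. A safer pattern (a sketch, not part of the original answer) is to raise only the soft limit, up to whatever the hard limit already allows:

```python
import resource

def raise_nofile_soft_limit():
    """Raise the soft RLIMIT_NOFILE as far as the hard limit allows."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft != hard:
        # The hard limit is left untouched, so no root privileges are
        # needed; only raising the hard limit requires extra privileges.
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)

print(raise_nofile_soft_limit())
```

Call this once at server startup, before any sockets are opened.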

-1
votes

If you are using supervisord to run your process, everything mentioned above may not be enough, because supervisord has its own setting for the open-file limit of the processes it manages.

In /etc/supervisord.conf, raise minfds (the default is 1024):

[supervisord]
...
minfds=65535
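To confirm which limit the supervised process actually ended up with, check from inside it; counting entries in /proc/self/fd is a Linux-only way to see how many descriptors are currently open (a sketch, not part of the original answer):

```python
import os
import resource

# Report the inherited descriptor limit and the number of descriptors
# this process currently holds. /proc/self/fd is Linux-specific.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
open_fds = len(os.listdir("/proc/self/fd"))
print("fd soft limit: %d, open descriptors: %d" % (soft, open_fds))
```

If the soft limit printed here is still 1024, the supervisord setting (or a restart of supervisord itself) is what's missing.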