0 votes

I have just switched my servers from Apache/mod_wsgi to an nginx/uwsgi stack. However, I am seeing much worse performance than with Apache, even though the server load is the same or even lower over Christmas. I am very new to the nginx/uWSGI stack; any ideas why? Here is my configuration:

[uwsgi]

chdir = /srv/www/poka/app/poka

module = nginx.wsgi

home = /srv/www/poka/app/env/main

env = DJANGO_SETTINGS_MODULE=settings.prod

# master = true

processes = 10

socket = /srv/www/poka/app/poka/nginx/poka.sock

chmod-socket = 666

vacuum = true

pidfile = /tmp/project-master.pid

harakiri = 60

max-requests = 5000

daemonize = /var/log/uwsgi/poka.log


3 Answers

2 votes

First, you have to identify where the problem is. Assuming you don't do anything fancy, like requests with huge payloads, I would do a few things:

nginx: Log the duration of upstream requests with $upstream_response_time and compare it to the total response time in $request_time. This tells you where the time is lost, i.e. whether nginx has a problem or one of the upstream components (uwsgi, Django, database, …) does. If uwsgi is the problem …
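A minimal nginx sketch for this (the log format name and log path are just placeholders):

```nginx
# Custom log format exposing per-request timing.
# $request_time      = total time nginx spent on the request
# $upstream_response_time = time spent waiting for uwsgi
log_format timing '$remote_addr "$request" '
                  'request_time=$request_time '
                  'upstream_time=$upstream_response_time';

server {
    # ... your existing server configuration ...
    access_log /var/log/nginx/timing.log timing;
}
```

If request_time is consistently much larger than upstream_time, the time is lost in nginx or on the client/network side; if the two are close, look at uwsgi and below.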

uwsgi: Enable the stats server, then use uwsgitop to get a quick overview of the stats. If uwsgi is fine, look into what Python/Django is doing …
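Enabling the stats server is a single line in the [uwsgi] section (the socket path here is an example):

```ini
[uwsgi]
; expose worker and request statistics as JSON on this socket
stats = /tmp/poka-stats.sock
```

Then run `uwsgitop /tmp/poka-stats.sock` to watch per-worker request counts, average response times and memory usage live.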

uwsgi+python: Enable the pytracebacker sockets to see what the workers are doing. If you see workers getting stuck, enable harakiri mode (if that is reasonable in your scenario) so uwsgi can recycle stuck workers. When using harakiri, do not forget to enable the pytracebacker, as it will give you Python stack traces when a worker is killed.
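A sketch of the relevant [uwsgi] options (socket prefix is illustrative; harakiri = 60 is already in the question's config):

```ini
[uwsgi]
; creates one traceback socket per worker, e.g. /tmp/poka-tb1, /tmp/poka-tb2, ...
py-tracebacker = /tmp/poka-tb
; kill and recycle any worker that is stuck for longer than 60 seconds
harakiri = 60
```

You can then dump a live stack trace of, say, worker 1 with `uwsgi --connect-and-read /tmp/poka-tb1`.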

Django: Enable the debug toolbar to see where and how much time the application is spending.
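A sketch of the settings additions for django-debug-toolbar (names follow the toolbar's documented setup; only enable this in a dev/staging settings module, never in settings.prod):

```python
# settings additions for django-debug-toolbar (sketch; adapt to your settings layout)
INSTALLED_APPS += ["debug_toolbar"]
MIDDLEWARE += ["debug_toolbar.middleware.DebugToolbarMiddleware"]
INTERNAL_IPS = ["127.0.0.1"]  # the toolbar only renders for these client IPs
```

You also need to include `debug_toolbar.urls` in your URLconf; the toolbar's SQL panel is usually the quickest way to spot slow or repeated database queries.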

When you've identified the component, you're already much closer to a solution, and can ask much more specific questions.

(If you are doing big requests, the compression settings and max-payload-related settings of uwsgi/nginx may be good candidates to look into. They caused us some headaches.)

0 votes

Do you really need 10 processes? Why don't you try a smaller number? uWSGI + nginx can handle a lot of concurrent requests with just 2/4 processes; perhaps the bottleneck is there.
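For example, with uwsgi's cheaper subsystem you can let it scale between a small idle pool and a larger maximum on demand (the numbers here are illustrative, not a recommendation):

```ini
[uwsgi]
processes = 4   ; maximum number of workers under load
cheaper = 2     ; keep only 2 workers running when the site is idle
```

Fewer workers also means less memory pressure, which by itself can improve latency on a small server.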

0 votes

You can:

  1. monitor CPU/memory for a detailed comparison

  2. install uwsgitop (via pip install uwsgitop) to monitor your uwsgi processes