I have a Puma server running a Ruby on Rails app on an AWS EC2 instance. It was working fine for a while, but a few hours later I found it responding with 502 errors. The app is deployed with Capistrano.
A simple restart of Puma fixed the problem temporarily, but I want to prevent it from happening again. I'm not sure what to try first.
Here's my capistrano puma config:
set :puma_rackup, -> { File.join(current_path, 'config.ru') }
set :puma_state, "#{shared_path}/tmp/pids/puma.state"
set :puma_pid, "#{shared_path}/tmp/pids/puma.pid"
set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock"
set :puma_conf, "#{shared_path}/puma.rb"
set :puma_access_log, "#{shared_path}/log/puma.access.log"
set :puma_error_log, "#{shared_path}/log/puma.error.log"
set :puma_role, :app
set :puma_env, fetch(:rack_env, fetch(:rails_env, 'production'))
set :puma_threads, [0, 8]
set :puma_workers, 0
set :puma_worker_timeout, nil
set :puma_init_active_record, true
set :puma_preload_app, false
set :bundle_gemfile, -> { release_path.join('Gemfile') }
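For context, the shared puma.rb that the Capistrano puma integration (capistrano3-puma) writes from these settings comes out roughly like the sketch below. This is not the gem's exact template; the /home/deploy/myapp paths are inferred from the Nginx error log further down.

directory '/home/deploy/myapp/current'
rackup '/home/deploy/myapp/current/config.ru'
environment 'production'

pidfile '/home/deploy/myapp/shared/tmp/pids/puma.pid'
state_path '/home/deploy/myapp/shared/tmp/pids/puma.state'

# stdout goes to the access log, stderr to the error log, appending
stdout_redirect '/home/deploy/myapp/shared/log/puma.access.log', '/home/deploy/myapp/shared/log/puma.error.log', true

threads 0, 8
bind 'unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock'
workers 0
# preload_app! is not called because :puma_preload_app is false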
The Puma error log doesn't show any crashes.
The Nginx error log shows (client IP xx'd out):
2016/08/09 06:25:52 [error] 1081#0: *348 connect() to unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: example.com, request: "POST /mypath HTTP/1.1", upstream: "http://unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock:/mypath", host: "example.com"
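"Connection refused" on a unix socket means nothing was accepting connections on that socket file at the time, i.e. the Puma process had exited or stopped listening rather than just responding slowly. A quick check that can be run on the server is sketched below (plain Ruby, using the socket path from the log):

require 'socket'

begin
  # Try to open a connection the same way Nginx does
  UNIXSocket.new('/home/deploy/myapp/shared/tmp/sockets/puma.sock').close
  puts 'puma is accepting connections on the socket'
rescue Errno::ECONNREFUSED, Errno::ENOENT => e
  # ECONNREFUSED: socket file exists but no process is listening on it
  # ENOENT: socket file has been removed
  puts "socket check failed: #{e.class}"
end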
What do
ls /proc/<pid of your puma worker>/fd | wc -l
and
cat /proc/<same pid>/limits
show? – Vasfed

ls /proc/2522/fd | wc -l
> 15
cat /proc/2522/limits
> Max open files 1024 4096 files – CaptainStiggz
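If that open-descriptor count climbs toward the 1024 soft limit between restarts, the 502s are more likely a file-descriptor leak than a crash. One way to watch it from inside the app is sketched below; it is Linux-only, and config/initializers/fd_monitor.rb is a hypothetical file name, not something the app already has:

# config/initializers/fd_monitor.rb (hypothetical)
# Log the process's open file-descriptor count once a minute so a leak
# becomes visible in the Rails log before the 1024 soft limit is reached.
Thread.new do
  loop do
    fd_count = Dir.glob("/proc/#{Process.pid}/fd/*").size
    Rails.logger.info("open file descriptors: #{fd_count}")
    sleep 60
  end
end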