4
votes

I have a Django application deployed on Elastic Beanstalk. I recently migrated the load balancer from Classic to Application in order to support WebSockets (layer formed by: django-channels (~=1.1.8, channels-api==0.4.0), AWS ElastiCache Redis, and Daphne (~=1.4)). HTTP, HTTPS and the WebSocket protocol are working fine.
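For reference, the channel layer is wired to the ElastiCache Redis endpoint in settings.py roughly like this (a minimal Channels 1.x sketch; the endpoint and routing module names below are placeholders, not my actual values):

# settings.py -- Channels 1.x channel layer backed by ElastiCache Redis
# (endpoint and routing module names are placeholders)
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            # ElastiCache primary endpoint, default Redis port
            "hosts": [("my-cluster.xxxxxx.0001.euw1.cache.amazonaws.com", 6379)],
        },
        "ROUTING": "myproject.routing.channel_routing",
    },
}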

But I can't find a way to deploy WebSockets over SSL (wss://). It's killing me, and it is blocking, because a page loaded over HTTPS will refuse to open an insecure ws:// connection.

Here is my ALB configuration. Does anyone have a solution?

[Screenshot: ALB listener configuration]

3
Could you specify django-channels and Daphne versions used? - ekauffmann
daphne~=1.4 channels~=1.1.8 channels-api==0.4.0 - MaxBlax360

3 Answers

2
votes

After 2 more days of investigating, I finally cracked this config!

Here is the answer:

  1. The right (and minimum) AWS ALB config: [Screenshot: ALB listener rules]. Indeed, we need to:

    • Decode SSL on the load balancer (this is not end-to-end encryption).
    • Forward all traffic to Daphne. The reason I did not go for the configuration most often seen on the web, routing only "/ws/*" to Daphne, is that it did give me the handshake OK, but afterwards nothing, nada: no WebSocket message could be pushed back to the subscriber. The reason, I believe, is that the push back from Daphne does not respect the custom base URL you set in your conf. I cannot be sure of this interpretation, but what I am sure of is that if I don't forward all traffic to Daphne, it doesn't work after the handshake.

  2. The minimum deployment conf:

    • No need for a complete .ebextensions proxy override in the deployment:

.ebextensions/05_channels.config

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/start_supervisor.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo virtualenv -p /usr/bin/python2.7 /tmp/senv
      source /tmp/senv/bin/activate && source /opt/python/current/env
      sudo python --version > /tmp/version_check.txt
      sudo pip install supervisor

      sudo /usr/local/bin/supervisord -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf reread
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf update
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf restart all
      sudo /usr/local/bin/supervisorctl -c /opt/python/current/app/fxf/custom_eb_deployment/supervisord.conf status
  • start_daphne.sh (note that I'm choosing port 8001, according to my ALB conf):

    #!/usr/bin/env bash
    source /opt/python/run/venv/bin/activate && source /opt/python/current/env
    /opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 fxf.asgi:channel_layer

  • start_worker.sh

    #!/usr/bin/env bash
    source /opt/python/run/venv/bin/activate && source /opt/python/current/env
    python /opt/python/current/app/fxf/manage.py runworker

  • supervisord.conf


[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)

[supervisord]
logfile=/tmp/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket

[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_daphne.sh --log-file /tmp/start_daphne.log
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
stderr_logfile=/tmp/daphne.err.log

[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=sh /opt/python/current/app/fxf/custom_eb_deployment/start_worker.sh --log-file /tmp/start_worker.log
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=2
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
stderr_logfile=/tmp/workers.err.log

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

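For reference, the fxf.asgi:channel_layer that start_daphne.sh points Daphne at is just the standard Channels 1.x ASGI module; a minimal sketch (assuming the settings module is fxf.settings) looks like this:

# fxf/asgi.py -- standard Channels 1.x ASGI entry point
# (assumes the settings module is fxf.settings)
import os

from channels.asgi import get_channel_layer

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "fxf.settings")
channel_layer = get_channel_layer()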

If some of you are still struggling with this conf, I might post a tutorial on Medium or something. Don't hesitate to push me for it in the answers ;)

1
votes

I also struggled a lot with SSL, EBS and Channels 1.x, in exactly the same scenario you describe, but I finally managed to deploy my app. SSL was always the problem, as Django was ignoring the routes in my routing.py file for all SSL requests, and everything was working just fine before that.

I decided to send all WebSocket requests to a unique root path on the server, say /ws/*. Then I added a specific rule to the load balancer which receives all these requests on port 443 and forwards them to port 5000 (which the Daphne worker is listening on) as HTTP (not HTTPS!). This works under the assumption that the VPC behind the load balancer is secure enough. Beware that this configuration could involve security issues for other projects.
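To give an idea of what that /ws/* prefix corresponds to on the Django side, here is a rough Channels 1.x routing sketch (the consumer module and function names are placeholders for your own):

# routing.py -- Channels 1.x routes restricted to the /ws/ prefix
# (the consumer module and functions below are placeholders)
from channels.routing import route

from myapp import consumers

channel_routing = [
    route("websocket.connect", consumers.ws_connect, path=r"^/ws/"),
    route("websocket.receive", consumers.ws_receive, path=r"^/ws/"),
    route("websocket.disconnect", consumers.ws_disconnect, path=r"^/ws/"),
]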

Now my load balancer configuration looks like this: [Screenshot: ALB listener rules]

"...because a page loaded over HTTPS will refuse to open an insecure ws:// connection."

One more thing: when the page is served over HTTPS, you should open WebSocket connections with wss://. You could write something like this in your .js file:

var wsScheme = window.location.protocol.includes('https') ? 'wss' : 'ws';
var wsPath = wsScheme + '://' + window.location.host + '/your/ws/path';
var ws = new ReconnectingWebSocket(wsPath);

Good luck!

0
votes

You should use wss:// instead of ws://, and change the proxy settings. I just added this to my wsgi.conf:

<VirtualHost *:80>

WSGIPassAuthorization On

WSGIScriptAlias / /opt/python/current/app/config/wsgi.py

RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule .* https://%{HTTP:Host}%{REQUEST_URI} [L,R=permanent]

LoadModule proxy_wstunnel_module /usr/lib/apache2/modules/mod_proxy_wstunnel.so

ProxyPreserveHost On
ProxyRequests Off

ProxyPass "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On
ProxyPassReverse "/ws/chat" "ws://**your site**/ws/chat" Keepalive=On

<Directory /opt/python/current/app/>
  Require all granted
</Directory>

</VirtualHost>

Then it will give you a 200 status on connect. "/ws/chat" should be replaced by your WebSocket URL.

Before you make this file, you should check that your Daphne server is running. The problems I went through were with djangoenv and the worker in daemon.config.

First, djangoenv should be on one line, meaning no line breaks. Second, if you use Django Channels v2, it doesn't need a worker, so erase it.

This is my daemon.config (I use port 8001):

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_daemon.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      djangoenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      djangoenv=${djangoenv%?}

      # Create daemon configuration script
      daemonconf="[program:daphne]
      ; Set full path to channels program if using virtualenv
      command=/opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 8001 config.asgi:application
      directory=/opt/python/current/app
      user=ec2-user
      numprocs=1
      stdout_logfile=/var/log/stdout_daphne.log
      stderr_logfile=/var/log/stderr_daphne.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$djangoenv"

      # Create the supervisord conf script
      echo "$daemonconf" | sudo tee /opt/python/etc/daemon.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
          then
          echo "[include]" | sudo tee -a /opt/python/etc/supervisord.conf
          echo "files: daemon.conf" | sudo tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart processes through supervisord
      sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart daphne
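For completeness, the config.asgi:application that the Daphne command above serves is the usual Channels 2 ASGI setup; a rough sketch, assuming the /ws/chat path from the wsgi.conf above and a placeholder ChatConsumer (both names are yours to replace), could look like this:

# config/asgi.py -- Channels 2 ASGI entry point served by Daphne
import os

import django
from channels.routing import get_default_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")
django.setup()
application = get_default_application()

# config/routing.py -- referenced by ASGI_APPLICATION = "config.routing.application"
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from chat import consumers  # placeholder app and consumer

application = ProtocolTypeRouter({
    "websocket": AuthMiddlewareStack(
        URLRouter([
            path("ws/chat", consumers.ChatConsumer),  # matches the /ws/chat ProxyPass
        ])
    ),
})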

And double-check your security group rules from the ALB to EC2. Good luck!