17
votes

I am trying to set up proxy_protocol in my nginx config. My server sits behind an AWS load balancer (ELB), and I have enabled Proxy Protocol on that for both ports 80 and 443.

However, this is what I get when I hit my server:

broken header: "��/��
                                                             '���\DW�Vc�A{����
                                                                              �@��kj98���=5���g@32ED�</A
    " while reading PROXY protocol, client: 172.31.12.223, server: 0.0.0.0:443

That is a direct copy paste from the nginx error log - wonky characters and all.

Here is a snip from my nginx config:

server {
  listen  80 proxy_protocol;
  set_real_ip_from 172.31.0.0/20; # Coming from ELB
  real_ip_header proxy_protocol;
  return  301 https://$http_host$request_uri;
}

server {
  listen      443 ssl proxy_protocol;
  server_name *.....com;
  ssl_certificate      /etc/ssl/<....>;
  ssl_certificate_key  /etc/ssl/<....>;
  ssl_prefer_server_ciphers On;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!DSS:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;
  ssl_session_cache shared:SSL:10m;
  add_header X-Frame-Options DENY;
  add_header X-Content-Type-Options nosniff;
  ssl_stapling on;
  ssl_stapling_verify on;


  ...

I can't find any help online about this issue. Other people have had broken header issues, but those errors always contain readable headers - they don't look encoded the way mine do.

Any ideas?

5
Looks like this is on 443. How does it do on port 80? Also, is the ELB in HTTPS (not TCP) mode? – Michael - sqlbot
Port 80 is handled by the first block - it just redirects to https. It is in TCP/SSL mode so websockets work. – Scott Hillman
Same problem. Are you using Kubernetes, by chance? I'm trying to set this up going into a Kubernetes cluster and thought maybe kube-proxy had something to do with it. – iameli
I am getting the same issue. I tried everything I could think of with no solutions. Anyone have an update? – Heath N

5 Answers

14
votes

Two suggestions:

  1. Verify that your ELB listener is configured to use TCP as the protocol, not HTTP. I have an LB config like the following that's routing to Nginx with proxy_protocol configured:

    {
      "LoadBalancerName": "my-lb",
      "Listeners": [
         {
          "Protocol": "TCP",
          "LoadBalancerPort": 80,
          "InstanceProtocol": "TCP",
          "InstancePort": 80
        }
      ],
      "AvailabilityZones": [
        "us-east-1a",
        "us-east-1b",
        "us-east-1d",
        "us-east-1e"
      ],
      "SecurityGroups": [
         "sg-mysg"
      ]
    }
    
  2. You mentioned that you have enabled Proxy Protocol on the ELB, so I'm assuming you've followed the AWS setup steps. If so, the ELB should prepend a line to each connection before any HTTP (or TLS) data, something like PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n. If that PROXY ... line is not arriving at Nginx, that would cause exactly the problem you're seeing. You can reproduce it by hitting the EC2 DNS name directly in the browser, or by ssh-ing into the EC2 instance and running something like curl localhost - either way you should see a similar broken header error in the Nginx logs, since neither path goes through the ELB to add the PROXY line.

To find out whether it works with a correctly formed HTTP request you can use telnet:

    $ telnet localhost 80
    PROXY TCP4 198.51.100.22 203.0.113.7 35646 80
    GET /index.html HTTP/1.1
    Host: your-nginx-config-server_name
    Connection: Keep-Alive

Then check the Nginx logs and see if you have the same broken header error. If not then the ELB is likely not sending the properly formatted PROXY request, and I'd suggest re-doing the ELB Proxy Protocol configuration, maybe with a new LB, to verify it's set up correctly.
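If you'd rather script that check than type it into a telnet session, a rough equivalent with printf and nc follows (same placeholder addresses and ports as the telnet example; substitute your own):

```shell
# Send a hand-crafted PROXY protocol v1 line, then a plain HTTP request.
# The IPs and ports are placeholder values from the telnet example above.
printf 'PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\nGET /index.html HTTP/1.1\r\nHost: your-nginx-config-server_name\r\nConnection: close\r\n\r\n' \
  | nc localhost 80
```

Newer curl (7.60+) can also generate a well-formed PROXY v1 line for you via curl --haproxy-protocol http://localhost/, which saves typing it by hand.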

3
votes

I hit this unreadable header issue too; here is the cause and how I fixed it.

In my case, Nginx was configured with use-proxy-protocol=true properly. It complained about a broken header solely because the AWS ELB did not add the required header (e.g. PROXY TCP4 198.51.100.22 203.0.113.7 35646 80) at all, so Nginx saw the encrypted HTTPS payload directly. That's why it printed all the unreadable characters.
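You can even tell the two cases apart from the first few bytes of the stream; this is not AWS-specific, just what each kind of connection looks like on the wire:

```shell
# A TLS handshake starts with the bytes 0x16 0x03 (handshake record + version),
# which is exactly the kind of unreadable garbage nginx dumps into its log:
printf '\026\003\001' | od -An -tx1
# A Proxy Protocol connection instead starts with readable ASCII:
printf 'PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n' | head -c 10
```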

So, why didn't the AWS ELB add the PROXY header? It turned out I had used the wrong ports in the commands that enable the Proxy Protocol policy: the instance ports should be used, not 80 and 443.

The ELB has the following port mapping.

80 -> 30440
443 -> 31772

The commands should be

aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
  --instance-port 30440 \
  --policy-names ars-ProxyProtocol-policy

aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name a19235ee9945011e9ac720a6c9a49806 \
  --instance-port 31772 \
  --policy-names ars-ProxyProtocol-policy

but I used 80 and 443 by mistake.
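To double-check which instance ports the policy actually ended up on (using this example's load balancer name), you can describe the load balancer and look at its backend policies - this assumes the classic-ELB aws elb CLI:

```shell
# Lists backend instance ports with their attached policies; the Proxy Protocol
# policy should show up next to 30440 and 31772, not 80 and 443.
aws elb describe-load-balancers \
  --load-balancer-names a19235ee9945011e9ac720a6c9a49806 \
  --query 'LoadBalancerDescriptions[].BackendServerDescriptions'
```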

Hope this helps somebody.

3
votes

I had a similar situation: nginx had proxy_protocol on, but the corresponding AWS ELB setting was not enabled, so I got a similar message.

The solution was to edit the ELB settings and turn Proxy Protocol on:

(screenshot of the ELB Proxy Protocol setting)

2
votes

I had this error and came across a ticket which ultimately led me to figuring out that I had an unneeded proxy_protocol declaration in my nginx.conf file. I removed that, and everything was working again.

Oddly enough, everything worked fine with nginx version 1.8.0; it was only when I upgraded to nginx version 1.8.1 that I started seeing the error.

1
votes

Stephen Karger's solution above is correct: you must make sure to configure your ELB to support Proxy Protocol. Here are the AWS docs for doing exactly that: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html. The docs are a bit daunting at first, so if you want you can skip to steps 3 and 4 of the Enable Proxy Protocol Using the AWS CLI section; those are the only steps necessary for enabling the proxy channeling. Additionally, as Stephen also suggested, make sure your ELB listeners use TCP instead of HTTP or HTTPS, since neither of those behaves properly with the ELB's Proxy Protocol implementation. I suggest moving your socket channel away from common ports like 80 and 443, so you can still maintain those standardized connections with their default behavior. Of course, making that call depends entirely on how your app stack looks.

If it helps, you can use the npm package wscat to debug your websocket connections like so:

$ npm install -g wscat
$ wscat --connect ws://127.0.0.1:80

If the connection works locally, then your load balancer is the problem for sure. But if it doesn't, the problem is almost certainly with your socket host.

Additionally, a tool like nmap will aid you in discovering open ports. A nice checklist for debugging:

npm install -g wscat
# can you connect to it from within the server?
ssh [email protected]
wscat -c ws://127.0.0.1:80
# can you connect to it from outside the server?
exit
wscat -c ws://69.69.69.69:80
# if not, is your socket port open for business?
nmap -p 80 69.69.69.69

You can also use nmap from within your server to discover open ports. To install nmap on Ubuntu, simply run sudo apt-get install nmap; on OS X, brew install nmap.

Here is a working config that I have, although it does not provide SSL support at the moment. In this configuration, port 80 feeds my Rails app, port 81 feeds a socket connection through my ELB, and port 82 is open for internal socket connections. Hope this helps somebody! Anybody deploying with Rails, Unicorn, and Faye should find this helpful. :) Happy hacking!

# sets up deployed ruby on rails server
upstream unicorn {
  server unix:/path/to/unicorn/unicorn.sock fail_timeout=0;
}

# sets up Faye socket
upstream rack_upstream {
  server 127.0.0.1:9292;
}

# sets port 80 to proxy to rails app
server {
  listen 80 default_server;
  keepalive_timeout 300;
  client_max_body_size 4G;
  root /path/to/rails/public;
  try_files $uri/index.html $uri.html $uri @unicorn;

  location @unicorn {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    proxy_pass http://unicorn;
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
  }

  error_page 500 502 503 504 /500.html;
  location = /500.html {
    root /path/to/rails/public;
  }
}

# open 81 to load balancers (external socket connection)
server {
  listen 81 proxy_protocol;
  server_name _;
  charset UTF-8;
  location / {
    proxy_pass http://rack_upstream;
    proxy_redirect off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}

# open 82 to internal network (internal socket connections)
server {
  listen 82;
  server_name _;
  charset UTF-8;
  location / {
    proxy_pass http://rack_upstream;
    proxy_redirect off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}