How to make ELB pass protocol to node.js process (Elastic Beanstalk) - amazon-elastic-beanstalk

I have ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with HTTP connections. But I still need to know whether the connection is using SSL/TLS so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks

You can configure the proxy protocol for your ELB to get connection-related information. With HTTP, the ELB adds headers carrying client information; with TCP, however, AWS ELB simply passes the bytes through without any modification, so the back-end server loses the client connection information, as is happening in your case.
To enable the proxy protocol for your ELB, you will have to do it via the API; there is currently no way to do it via the console UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this; I don't want to paste the same steps here, as that information might change over time.
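For reference, a rough sketch of the two API calls the linked guide walks through, using the AWS SDK for Node.js (aws-sdk v2). The load balancer name, region, and instance port below are placeholders for your own values; treat the guide as the authoritative source.

const AWS = require('aws-sdk');
const elb = new AWS.ELB({ region: 'us-east-1' });

async function enableProxyProtocol() {
  // 1. Create a policy of type ProxyProtocolPolicyType with ProxyProtocol=true
  await elb.createLoadBalancerPolicy({
    LoadBalancerName: 'my-elb',                 // placeholder
    PolicyName: 'EnableProxyProtocol',
    PolicyTypeName: 'ProxyProtocolPolicyType',
    PolicyAttributes: [{ AttributeName: 'ProxyProtocol', AttributeValue: 'true' }],
  }).promise();

  // 2. Attach the policy to the back-end port your Node process listens on
  await elb.setLoadBalancerPoliciesForBackendServer({
    LoadBalancerName: 'my-elb',                 // placeholder
    InstancePort: 80,                           // placeholder
    PolicyNames: ['EnableProxyProtocol'],
  }).promise();
}

enableProxyProtocol().catch(console.error);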
EDIT:
As it turns out, Amazon implements only Version 1 of the proxy protocol, which does not carry SSL information. It does, however, include the port number the client requested, so you can adopt a convention along the lines of "if the request came in on port 443, it was SSL." I don't like it, as it is indirect and requires hardcoding and coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB starts supporting Version 2 of the proxy protocol, which does carry SSL info, soon.
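As a rough illustration of that convention, here is an untested Node.js sketch: it peels the PROXY protocol v1 line off the incoming socket, remembers the destination port the client hit, and hands the rest of the stream to a normal HTTP server that redirects when the port was not 443. The port numbers and the clientUsedTls property name are just placeholders.

const net = require('net');
const http = require('http');

// the actual HTTP app; redirects when the PROXY line showed a non-443 front-end port
const app = http.createServer((req, res) => {
  if (!req.socket.clientUsedTls) {
    res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
    return res.end();
  }
  res.end('secure\n');
});

// TCP front end that strips the PROXY v1 line before handing the socket to the HTTP parser
net.createServer((socket) => {
  socket.once('data', (chunk) => {
    const end = chunk.indexOf('\r\n');
    // e.g. "PROXY TCP4 <client ip> <proxy ip> <client port> <proxy port>"
    const fields = chunk.slice(0, end).toString('ascii').split(' ');
    socket.clientUsedTls = fields[5] === '443';  // destination port the client requested
    socket.unshift(chunk.slice(end + 2));        // push the remaining bytes back
    app.emit('connection', socket);              // let the HTTP server take over
  });
}).listen(8080);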

Related

haproxy acl to filter tcp request based on client IP taken from PROXY protocol v2 headers

We have an AWS setup with an NLB so that we can attach it to an EIP and thus keep a static address for customer whitelisting.
The NLB sends PROXY protocol v2 headers to haproxy, which passes them on to the backend servers.
The app picks this up and uses it.
The issue I am trying to figure out: since haproxy sees only the internal IP of the NLB and gets the public client IP only through the PROXY protocol header, how can I parse the client IP from the PROXY header and filter it against a whitelisting ACL in haproxy?
I have a listen configuration like this:
listen backoffice
bind 10.2.3.4:443 accept-proxy
mode tcp
default-server inter 3s fall 3 rise 3 error-limit 1 on-error mark-down
option tcplog
acl network_allowed src 5.6.7.8 1.4.6.8 4.6.7.8
tcp-request connection reject if !network_allowed
server backoffice.prod.local 10.1.2.3:443 check-send-proxy send-proxy-v2-ssl-cn port 443
This ^^ breaks haproxy for this backend - it works only if I comment out the tcp-request line and reload.
To me, this looks like quite a useful use case, but so far no online research has turned up a promising haproxy configuration to do just that.
I have less than two days to figure this out, so let's hope the community can lend a hand in short order.
So I ended up figuring it out for my case :)
The working config is this:
listen backoffice
bind 10.2.3.4:443 accept-proxy
mode tcp
default-server inter 3s fall 3 rise 3 error-limit 1 on-error mark-down
option tcplog
acl network_allowed src 5.6.7.8 1.4.6.8 4.6.7.8
tcp-request content reject if !network_allowed
server backoffice.prod.local 10.1.2.3:443 check port 443
I removed the part where the PROXY header is passed on to the backend, as the backend did not need it; I only needed the NLB to pass the PROXY header to haproxy so that we can reliably filter on the source/client IP.
The key change is tcp-request content reject instead of tcp-request connection reject: connection-level rules are evaluated before the PROXY header has been parsed, so src still holds the NLB's address there, whereas content-level rules see the client address taken from the PROXY header.

SSL-encrypted connection to MySQL container not possible over AWS Elastic Load Balancer

I am trying to establish an SSL-encrypted connection to my MySQL Docker service running in an AWS VPC (set up by the Docker for AWS CloudFormation template). The Elastic Load Balancer is configured to forward port 3306. There is no problem connecting to the container (e.g. with MySQL Workbench, mysql-client, ...) as long as SSL is not turned on (i.e. adding AWS's own certificates (ACM) or my custom certificates to the ELB listener). Once SSL is enabled, the client starts hanging / freezing without returning a proper error. I added the CA certs from ACM and generated my own certificates (with and without an additional key / cert for the client), but nothing seems to resolve my problem.
Now I am well aware that this setup is not that usual. I guess the standard way of doing this is to configure SSL on the MySQL server itself. AFAIK, with my setup only the connection between client and ELB would be encrypted, but I do not understand why this causes a problem.
I am grateful for answers!
In MySQL's client/server protocol, the server talks first. It advertises its capabilities (including whether it supports SSL). Then the client requests that the connection switch to SSL mode. Only then does SSL negotiation take place.
For this reason, it is not possible to offload SSL in front of MySQL.
Your connection hangs because the MySQL client is waiting for the initial packet from the server, while the ELB is waiting for the client to start negotiating SSL -- because, unlike the MySQL client/server protocol, the client talks first in standard SSL negotiation.
You have to have a certificate on the MySQL Server, and not on the ELB, for this to work.
An AWS Network Load Balancer is a more appropriate solution for exposing MySQL, but you still need the SSL cert on the MySQL Server itself.
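To see this "server talks first" behaviour concretely, here is a small Node.js sketch (illustrative only; the hostname is a placeholder) that opens a plain TCP connection to MySQL, reads the server's initial handshake packet, and checks whether the CLIENT_SSL capability flag is advertised, all before any TLS could possibly start.

const net = require('net');

const socket = net.connect(3306, 'mysql.example.internal', () => {
  console.log('TCP connected; waiting for the server greeting...');
});

socket.once('data', (buf) => {
  const payload = buf.slice(4);                  // skip 3-byte length + 1-byte sequence id
  const protocolVersion = payload[0];            // usually 10
  const versionEnd = payload.indexOf(0, 1);      // server version string is NUL-terminated
  const serverVersion = payload.slice(1, versionEnd).toString('ascii');
  // after the version: 4-byte thread id, 8 bytes of auth data, 1 filler byte,
  // then the lower 2 bytes of the capability flags
  const capabilitiesLow = payload.readUInt16LE(versionEnd + 1 + 4 + 8 + 1);
  const CLIENT_SSL = 0x0800;
  console.log('protocol', protocolVersion, 'server', serverVersion,
              'advertises SSL:', (capabilitiesLow & CLIENT_SSL) !== 0);
  socket.end();
});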

Nginx reverse proxy DNS configuration

I am looking to deploy an nginx/DNS server on a VPS proxy that maps to the real back-end in a different geographical location. The back-end runs Apache, MySQL, Dovecot, and Postfix. It is a pay-for mail server. Users get entered through Apache, via PHP, into MySQL, and when users set up IMAP, Dovecot/Postfix pulls their accounts from MySQL and delivers mail or handles outbound SMTP.
I read that in nginx.conf I can declare the mail hostname on the proxy like so:
mail {
server_name mail.example.com;
...
}
Is this mail.example.com the actual MX host for example.com listed in DNS? Here is where that came from:
"As you can see, we declared the name of this server at the top of the mail context.
This is because we want each of our mail services to be addressed as mail.example.
com. Even if the actual hostname of the machine on which NGINX runs is different,
and each mail server has its own hostname, we want this proxy to be a single point
of reference for our users. This hostname will in turn be used wherever NGINX
needs to present its own name, for example, in the initial SMTP server greeting."
So, from my understanding, the physical hostname of the proxy should be something other than mail.example.com. So in DNS on the proxy, I can define that as anyhost.example.com? The proxy also proxies back to my Apache on the back-end.
Finally, on the back-end, how do I set up DNS? What hostname do I choose for the actual box running Apache, MySQL, Dovecot, and Postfix? It's all on one box. I understand that at the registrar I point to two nameservers; these should be the two proxies, so that running a dig would only pull up the proxies and the MX, which should be "known" to be on the proxy.
In your case, where all of the services live in one box including the proxy, you can make Apache, MySQL, and the other services accessible only from localhost / 127.0.0.1. Then in nginx you point upstreams at them, roughly:
upstream apache_backend { server 127.0.0.1:80; }
upstream mysql_backend { server 127.0.0.1:3306; }
(Note that proxying MySQL on 3306 goes through nginx's stream context rather than the http context.) nginx then serves the front-end requests and forwards them to the designated services.

html5 WebSocket

I already have a server listening on a port and want to write a web app to get the information from that port. Will this be possible with WebSockets?
The client doesn't even need to talk back to the server, which is the whole point of WebSockets I would imagine, but since I already have the port set up, it might be easier and cleaner to just connect and get the info without having to refresh.
WebSockets are not intended as clear TCP channels over which other existing protocols can be implemented.
WebSockets are designed to allow messages to be sent between a client and server, where an event is raised each time a message is received.
Hence a WebSocket client cannot simply connect to an existing TCP server - that server also has to speak the WebSocket protocol.
You could of course write a WebSocket-based server that does nothing but act as a proxy to existing network services.
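As a bare-bones sketch of such a bridge in Node.js (assuming the "ws" npm package; the target 127.0.0.1:6000 is a placeholder for your existing service):

const net = require('net');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8081 });

wss.on('connection', (ws) => {
  // open a plain TCP connection to the existing service for each browser client
  const tcp = net.connect(6000, '127.0.0.1');

  // TCP -> WebSocket: forward raw bytes as messages
  tcp.on('data', (chunk) => ws.send(chunk));

  // WebSocket -> TCP: forward messages as raw bytes
  ws.on('message', (msg) => tcp.write(msg));

  // tear down whichever side remains when the other closes
  ws.on('close', () => tcp.destroy());
  tcp.on('close', () => ws.close());
  tcp.on('error', () => ws.close());
});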
I think you want websockify which is a WebSocket to plain TCP socket bridge/proxy. It also allows sending and receiving of binary data with the older version of the WebSocket protocol which hadn't yet added direct binary data support.
Disclaimer: I created websockify.

WebSocket won't connect to anything other than 127.0.0.1 / localhost

I have a testapp consisting of an HTML5/WebSocket client and an HTTP/WS server. Both servers are in C#; the HTTP server is my own simple thing and the WS server is also homebrew based on concepts from http://nugget.codeplex.com/. HTTP server is listening on 0.0.0.0:5959 and WS server on 0.0.0.0:5960 (accept connections from any client, but on different ports).
My index.html includes some JavaScript that opens a WebSocket to 'ws://'+document.location.hostname+':5960/' (that is, to the same IP address that the webpage came from, but on port 5960). The WS server sends sample data every 100ms. All in all, it's a pretty straightforward demo.
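Roughly, the connecting JavaScript amounts to this (a sketch of what is described above; the handlers are just illustrative):

var ws = new WebSocket('ws://' + document.location.hostname + ':5960/');
ws.onopen = function () { console.log('connected'); };
ws.onmessage = function (evt) { console.log('sample data:', evt.data); };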
I'm using Chrome 12.0 on Windows7.
I've found that the HTTP server works from any client, either a browser on my machine pointed to 127.0.0.1:5959 or localhost:5959, AND it works when any machine (mine or a remote machine... "remote" being a different PC on my desk :) hits my server machine's work-internal 10-net address 10.122.0.159:5959. Everything works as expected in HTTP land.
However, the WebSocket only works on 127.0.0.1 and localhost; remote machines can successfully fetch HTML from 10.122.0.159:5959 but the WebSocket will NOT connect to 10.122.0.159:5960. In fact, when I point my local browser to its own 10-net address (10.122.0.159:5959) I get the same result - HTML loads but the WebSocket does not connect.
Any ideas as to why this might be happening?
Does CORS require that the WS be using the same port as the HTTP request originated from? If so, is there a special exception to the rule for 127.0.0.1?
Many thanks,
-Dave
Update
It seems to be caused by a proxy server blocking ws:// requests. Our company employs a proxy server for content filtering and all the usual stuff, and our browsers are configured to use it. Chrome uses IE's proxy settings, and IE's default settings are for localhost to not use a proxy server. When I check the box to have local connections also use the proxy server, my ws:// requests to localhost get blocked. Conversely, when I uncheck the "use proxy server" box, my server does receive the WS request. Similarly with the remote machine: if I turn off the proxy on the remote machine, my server does receive the ws:// request.
So it's a proxy thing, not a CORS or socket thing, and now I'm off to explore proxy settings with our IT folks.
There is no WebSocket limitation on cross-origin except what is governed by the CORS security in the handshake.
It sounds like something is wrong with your WebSocket server and it is only listening on localhost for connections. I would add some debug output to the OnClientConnect routine in Nugget (WebSocketServer.cs) so you can see when socket connections happen. If you really think it isn't a problem with the server then I would suggest using wireshark and comparing the localhost connection to the remote connection.
Also, if you are using the SilverLight WebSocket prototype (README) in IE 9, then you are restricted to ports 4502-4534 for WebSocket connections. It's possible that for localhost this restriction is lifted.
It is/was indeed a proxy thing.
Rather than asking our IT folks to make changes (good luck with that, eh?), I simply turned off the proxy for 10.122.0.159 ([Howto for IE/Chrome][1]). I briefly experimented with turning it off for the ws:// protocol but couldn't get it to work, so for now just exempting that one IP address does the trick.