I am trying to establish an SSL-encrypted connection to my MySQL Docker service running in an AWS VPC (set up by the Docker for AWS CloudFormation template). The Elastic Load Balancer is configured to forward port 3306. There is no problem connecting to the container (e.g. with MySQL Workbench, mysql-client, ...) as long as SSL is not turned on (adding AWS's own certificates (ACM) or my custom certificates to the ELB listener). When SSL is enabled, the client starts hanging / freezing without returning a proper error. I added the CA certs from ACM and generated my own certificates (with and without an additional key / cert for the client), but nothing seems to resolve my problem.
Now I am well aware that this setup is not that usual. I guess the standard way of doing this is to configure SSL on the MySQL server itself. AFAIK, with my setup only the connection between the client and the ELB is encrypted, but I do not understand why this causes a problem.
I am grateful for answers!
In MySQL's client/server protocol, the server talks first. It advertises its capabilities (including whether it supports SSL). Then the client requests that the connection switch to SSL mode. Only then does SSL negotiation take place.
For this reason, it is not possible to offload SSL in front of MySQL.
Your connection hangs because the client is waiting for the initial packet from the server, while the ELB is waiting for the client to start negotiating SSL -- because unlike the MySQL client/server protocol, the client talks first in standard SSL negotiation.
You have to have a certificate on the MySQL Server, and not on the ELB, for this to work.
An AWS Network Load Balancer is a more appropriate solution for exposing MySQL, but you still need the SSL cert on the MySQL Server itself.
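To make the ordering concrete, here is a minimal sketch (Node.js with TypeScript; the hostname is a placeholder) showing that a plain TCP client receives MySQL's greeting without sending a single byte -- exactly the packet an SSL-terminating ELB would never forward:

import * as net from "net";

// Connecting directly to MySQL: the server sends its greeting packet
// (protocol version, server version, capability flags) before the client
// has sent anything at all.
const socket = net.connect(3306, "mysql.internal.example");
socket.once("data", (greeting) => {
  // Bytes 0-3 are the packet header; byte 4 is the protocol version (10).
  console.log("server talked first, protocol version:", greeting[4]);
  socket.end();
});

// Behind an SSL-terminating ELB the same connect() hangs forever:
// the ELB waits for a TLS ClientHello, the client waits for this greeting.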
I have set up a Google Cloud Compute instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can ssh in using the external address.
I have installed vncserver and can access it on port 5901 from localhost as well as via the internal IP.
I am trying to access it from the static, external IP address but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
------ After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:
The external IP address can be reached
It is enabled and working for SSH
It is enabled and working for HTTP
It does not seem to allow traffic to vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that the Google firewall and your VM are not blocking packets, you must make sure that the VNC service is listening on an address reachable from outside (e.g. 0.0.0.0), not just on localhost.
You can always use a utility like nmap from outside the Google project to reveal information on the port status.
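For example (a quick check, assuming VNC on port 5901 as in the question; run the second command from a machine outside the project):

$ sudo ss -tlnp | grep 5901      # which address is vncserver actually bound to?
$ nmap -Pn -p 5901 <EXTERNAL-IP> # open, closed, or filtered from outside?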
Enable HTTP/HTTPS traffic from the firewall as per your needs. It will work!
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute instance's external IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TLDR: make sure you are requesting http, not https.
In my case I was following the link from my Compute Engine instance's External IP property, which takes you directly to the https version, and I didn't set up https, so that was causing the 'site not found' error.
Create an entry in your local ssh config file as below, with the local forward port mentioned. In my case it is an example of YARN's IP, which I want to access in a browser.
Host hadoop
HostName <External-IP>
User <Local-machine-username>
IdentityFile ~/.ssh/<private-key-for-above-user>
LocalForward 8089 <Internal-IP>:8088
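With this entry in place, ssh hadoop opens the tunnel, and (in this example) the YARN UI becomes reachable at http://localhost:8089 on the local machine.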
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, connect to the instance using SSH and verify that you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection succeeds, it means that you can access your external URL:
http://<IP-EXTERNAL-VM>
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 and ICMP are exposed. If your server is running on a different port, create a firewall rule for it (see the example below).
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic based on your needs.
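A minimal sketch with the gcloud CLI (the rule name is a placeholder; tcp:5901 would be the vncserver case from the earlier question):

$ gcloud compute firewall-rules create allow-custom-port \
    --allow tcp:5901 \
    --source-ranges 0.0.0.0/0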
For me the problem was that I had set the traffic direction for the firewall rule to 'Egress' instead of 'Ingress'.
If you have already initiated 'https', just disable it and check again.
I have ELB balancing TCP traffic to my Node.js processes. When ELB is balancing TCP connections it does not send the X-Forwarded-Proto header like it does with http connections. But I still need to know if the connection is using SSL/TLS so I can respond with a redirect from my Node process if it is not a secure connection.
Is there a way to make ELB send this header when balancing TCP connections?
Thanks
You can configure the proxy protocol for your ELB to get connection-related information. In the case of HTTP, the ELB adds headers with client information; in the case of TCP, however, the AWS ELB simply passes the client's bytes through without modification. This causes the back-end server to lose client connection information, as is happening in your case.
To enable the proxy protocol for your ELB, you will have to do it via the API; there is currently no way to do it via the UI.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
The above doc is a step-by-step guide on how to do this; I don't want to paste the same here, as that information might change over time.
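For reference, the API calls boil down to two aws CLI commands roughly like the following (the load balancer name, policy name, and instance port are placeholders; check the linked guide for the current form):

$ aws elb create-load-balancer-policy \
    --load-balancer-name my-elb \
    --policy-name EnableProxyProtocol \
    --policy-type-name ProxyProtocolPolicyType \
    --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
$ aws elb set-load-balancer-policies-for-backend-server \
    --load-balancer-name my-elb \
    --instance-port 443 \
    --policy-names EnableProxyProtocol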
EDIT:
As it turns out, Amazon implements only version 1 of the proxy protocol, which does not give away SSL information. It does, however, give the port number the client requested, so a convention can be adopted along the lines of: if the request came in on port 443, it was SSL. I don't like it, as it is indirect, requires hardcoding, and needs coordination between devops and developers... but it seems to be the only way for now. Let's hope AWS ELB soon starts supporting version 2 of the proxy protocol, which does carry SSL info.
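A minimal sketch of that convention on the Node.js side (hand-rolled header parsing; the port mapping and the redirect behaviour are assumptions for illustration):

import * as net from "net";

const server = net.createServer((socket) => {
  socket.once("data", (chunk) => {
    // Proxy protocol v1 header, e.g. "PROXY TCP4 <srcIP> <dstIP> <srcPort> <dstPort>\r\n"
    const end = chunk.indexOf("\r\n");
    const header = chunk.slice(0, end).toString("ascii");
    const dstPort = header.split(" ")[5];
    const secure = dstPort === "443"; // indirect: relies on the ELB listener mapping
    const rest = chunk.slice(end + 2); // the actual application payload
    // ...hand `rest` (and `secure`) to the app, issuing a redirect to
    // https:// when `secure` is false.
  });
});
server.listen(8080);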
I got my Dropwizard application running in an Openshift DIY Cartridge.
The application uses HTTPS and binds to port 8080. I can access the application with curl from within an SSH connection via rhc ssh appname.
What do I have do configure that I can access my Dropwizard application via the appname-username.rhcloud.com domain?
I always get a proxy error 502, "Error reading from remote server".
Any suggestion is greatly appreciated.
In OpenShift your application is deployed behind a proxy server, and this proxy server can only communicate with your application over http.
The OpenShift proxy server accepts both http and https connections and, to communicate which type of connection was used, adds x-forwarded headers to the request it makes to your application.
To configure Dropwizard, you will need to configure the http connector on port 8080 (the default) and set useForwardedHeaders to true (also the default). See http://dropwizard.io/manual/configuration.html#http for more information.
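For a 0.7-style Dropwizard configuration file, that amounts to something like this sketch (both values shown are the defaults, so an empty connector entry behaves the same):

server:
  applicationConnectors:
    - type: http
      port: 8080
      useForwardedHeaders: true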
At this point Dropwizard is aware of whether an http or https connection was used. The thing I did not find is how to mark content as "confidential" so that the Jetty container inside Dropwizard redirects the client to the https connector served by the OpenShift proxy server when the client tries to connect to your application using http.
Using as3crypto's TLSSocket it should be possible to connect to an SSL server. However, my server uses a self-signed certificate. How can I configure the client to accept that certificate?
I'm assuming I need to hard-code the cert's fingerprint in the client somewhere (or get it there some way). That's ok.
If as3crypto doesn't support this, other options are welcome.