I am unable to tunnel to my free hosted instance of a Rails app on Cloud Foundry infrastructure.
When I run 'vmc tunnel mysql-service', I get the following:
1: none
2: mysql
3: mysqldump
Which client would you like to start?> 2
Opening tunnel on port 10000... FAILED
CFoundry::AccountNotEnoughMemory: 600: Not enough memory capacity, you're allowed: 2048M
For more information, see ~/.vmc/crash
Checking the ~/.vmc/crash logs I see:
Time of crash:
2013-03-13 18:16:54 -0400
CFoundry::AccountNotEnoughMemory: 600: Not enough memory capacity, you're allowed: 2048M
<<<
REQUEST: PUT https://api.cloudfoundry.com/apps/caldecott
REQUEST_HEADERS:
Authorization : bearer eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjEzNjM4MTc3OTgsInVzZXJfbmFtZSI6ImhzdWVpbmczQGdtYWlsLmNvbSIsInNjb3BlIjpbImNsb3VkX2NvbnRyb2xsZXIucmVhZCIsIm9wZW5pZCIsInBhc3N3b3JkLndyaXRlIl0sImVtYWlsIjoiaHN1ZWluZzNAZ21haWwuY29tIiwiYXVkIjpbIm9wZW5pZCIsImNsb3VkX2NvbnRyb2xsZXIiLCJwYXNzd29yZCJdLCJqdGkiOiJkMzZjNDI3MS02ZDJkLTRjN2EtOThmYS1kNzc2MjhiZDFiNmMiLCJ1c2VyX2lkIjoiODY0OWZkMzEtY2JiNy00N2YyLTkyNmItODM5Y2MzNWFlMTlmIiwiY2xpZW50X2lkIjoidm1jIn0.Lt1Bw7mBP55Hi9MIPTn90s0RXkJcJwGZXZcqDep4BBnnwjrAOAPQPGlIwBA-Ovy9K5BazMXqnQCOv8kxpK8o4wo3vG6RAJPvF7p76JgZDq0C_n_PUV1LaxGrldnpc2PLawR0FHHChb7tKCJP4cf26lK8A8vg5GEwi8HWO5OJCERI-3CKKiGJB5mVj2rWGmE39-ihAWmT5LpS5jAEZ-XVvo4VDEKknJ8SQC6693FzdCZ2AJBHkAgNxRoCsBtvkxOgKkspI-IkcaMZx884BT24cGbseZ5XY3bj6ZjAb499AfbIFe97Hme4axtpWo8qn1grkrJxyI3gmYAVMHVgo1M1IQ
Content-Length : 310
Content-Type : application/json
REQUEST_BODY: {"name":"caldecott","instances":1,"state":"STARTED","staging":{"model":"sinatra","stack":"ruby19"},"resources":{"memory":64,"disk":2048,"fds":256},"env":["CALDECOTT_AUTH=43ae7176-67f6-41ac-8cff-bf21b4249a49"],"uris":["caldecott-d9149.cloudfoundry.com"],"services":["mysql-service"],"console":null,"debug":null}
RESPONSE: [403]
RESPONSE_HEADERS:
cache-control : no-cache
connection : keep-alive
content-type : application/json; charset=utf-8
date : Wed, 13 Mar 2013 22:16:54 GMT
keep-alive : timeout=20
server : nginx
transfer-encoding : chunked
x-ua-compatible : IE=Edge,chrome=1
RESPONSE_BODY:
{
"code": 600,
"description": "Not enough memory capacity, you're allowed: 2048M"
}
>>>
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:156:in `handle_error_response'
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:135:in `handle_response'
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:85:in `request'
cfoundry-0.5.2/lib/cfoundry/baseclient.rb:74:in `put'
cfoundry-0.5.2/lib/cfoundry/v1/model_magic.rb:55:in `block (2 levels) in define_client_methods'
cfoundry-0.5.2/lib/cfoundry/v1/model.rb:91:in `update!'
cfoundry-0.5.2/lib/cfoundry/v1/app.rb:131:in `update!'
cfoundry-0.5.2/lib/cfoundry/v1/app.rb:121:in `start!'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/tunnel.rb:173:in `start_helper'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/tunnel.rb:89:in `create_helper'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/tunnel.rb:28:in `open!'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/plugin.rb:41:in `block in tunnel'
interact-0.5.2/lib/interact/progress.rb:98:in `with_progress'
tunnel-vmc-plugin-0.2.2/lib/tunnel-vmc-plugin/plugin.rb:40:in `tunnel'
mothership-0.5.1/lib/mothership/base.rb:66:in `run'
mothership-0.5.1/lib/mothership/command.rb:72:in `block in invoke'
What actions should I take to resolve this?
To offer further background, here are a few details about the environment my app is running in:
vmc stats logoff
Using manifest file manifest.yml
Getting stats for logoff... OK
instance cpu memory disk
0 0.1% 74.2K of 2G 63.3M of 2G
vmc env logoff
Using manifest file manifest.yml
Getting env for logoff... OK
vmc services
Getting services... OK
name service version
mysql-service mysql 5.1
This is because you have used all of your allotted 2 GB of RAM: the app's memory reservation is already the full 2 GB (as the vmc stats output above shows). To tunnel to a service, vmc needs to deploy a small Ruby application called Caldecott, and that deployment needs 64 MB of its own. So in short, you need to free up 64 MB!
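One way to do that (a sketch only; the exact subcommand and flags depend on your vmc version, so treat them as assumptions and check vmc help first) is to lower the app's own memory reservation before opening the tunnel:
vmc scale logoff --memory 1G   # newer vmc; older releases use: vmc mem logoff 1G
vmc tunnel mysql-service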
I am trying to open an FTP connection over SSL in my code. I'm able to connect and list a directory using FileZilla or WinSCP, but when listing the directory from .NET code using FtpWebRequest, I get the error
(425) Can't open data connection
Since I'm able to connect using FileZilla from the same computer, I'm not sure how to go about troubleshooting this.
Here's my code
public void FtpStuff()
{
    string url = "ftp://my.server.com";
    FtpWebRequest request = (FtpWebRequest)WebRequest.Create(url);
    request.Credentials = new NetworkCredential("myname", "password");
    request.EnableSsl = true;
    request.Method = WebRequestMethods.Ftp.ListDirectory;
    FtpWebResponse response = (FtpWebResponse)request.GetResponse();
    StreamReader streamReader = new StreamReader(response.GetResponseStream());
    // This is the line that throws the exception
    string line = streamReader.ReadLine();
}
I also tried FluentFTP. Here's my code for that. I get the exception
Unable to build data connection: Operation not permitted.
public void FtpStuff()
{
    FtpClient client = new FtpClient();
    client.Host = "my.server.com";
    client.Credentials = new NetworkCredential("myname", "password");
    client.EncryptionMode = FtpEncryptionMode.Explicit;
    client.Connect();
    // This line gives me an exception.
    var files = client.GetListing();
}
Here is the logging information from FluentFTP. I changed the real user name and IP, but the rest of the data (including the port) is the real data. My FTP service provider specifies that I have to connect on port 21. The problem seems to happen towards the end after the EPSV command is issued and a connection on a new port is established.
# Connect()
The thread 0x5514 has exited with code 0 (0x0).
The thread 0xc80 has exited with code 0 (0x0).
The thread 0x89d4 has exited with code 0 (0x0).
Status: Connecting to 123.123.123.123:21
Response: 220 FTP Server Ready
Command: AUTH TLS
Response: 234 AUTH TLS successful
Status: FTPS Authentication Successful
Status: Time to activate encryption: 0h 0m 0s. Total Seconds: 0.1339995.
Command: USER me@mysite.com
The thread 0x6ddc has exited with code 0 (0x0).
Response: 331 Password required for me@mysite.com
Status: Testing connectivity using Socket.Poll()...
Command: PASS ***
Response: 230-***************************************************************************
Response: NOTICE TO USERS
Response: This computer system is private property. It is for authorized use only.
Response: Users (authorized or unauthorized) have no explicit or implicit
Response: expectation of privacy.
Response:
Response: Any or all uses of this system and all files on this system may be
Response: intercepted, monitored, recorded, copied, audited and inspected by
Response: using this system, the user consents to such interception, monitoring,
Response: recording, copying, auditing, inspection, and disclosure at the
Response: discretion of such personnel or officials. Unauthorized or improper use
Response: of this system may result in civil and criminal penalties and
Response: administrative or disciplinary action, as appropriate. By continuing to
Response: use this system you indicate your awareness of and consent to these terms
Response: and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the
Response: conditions stated in this warning.
Response: ****************************************************************************
Response: 230 User me@mysite.com logged in
Command: PBSZ 0
Response: 200 PBSZ 0 successful
Command: PROT P
Response: 200 Protection set to Private
Command: FEAT
Response: 211-Features:
Response: AUTH TLS
Response: CCC
Response: CLNT
Response: EPRT
Response: EPSV
Response: HOST
Response: MDTM
Response: MFF modify;UNIX.group;UNIX.mode;
Response: MFMT
Response: MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.groupname*;UNIX.mode*;UNIX.owner*;UNIX.ownername*;
Response: PBSZ
Response: PROT
Response: REST STREAM
Response: SIZE
Response: SSCN
Response: TVFS
Response: 211 End
Status: Text encoding: System.Text.ASCIIEncoding
Command: SYST
Response: 215 UNIX Type: L8
# GetListing(null, Auto)
# GetWorkingDirectory()
Command: PWD
Response: 257 "/" is the current directory
Command: TYPE I
Response: 200 Type set to I
# OpenPassiveDataStream(AutoPassive, "MLSD /", 0)
Command: EPSV
Response: 229 Entering Extended Passive Mode (|||50304|)
Status: Connecting to 123.123.123.123:50304
Command: MLSD /
Response: 150 Opening BINARY mode data connection for MLSD
Status: FTPS Authentication Successful
Status: Time to activate encryption: 0h 0m 0s. Total Seconds: 0.1210002.
+---------------------------------------+
-----------------------------------------
Status: Disposing FtpSocketStream...
# CloseDataStream()
Response: 425 Unable to build data connection: Operation not permitted
Status: Disposing FtpSocketStream...
Exception thrown: 'FluentFTP.FtpCommandException' in FluentFTP.dll
Here are my FileZilla logs.
Status: Resolving address of mysite.com
Status: Connecting to 123.123.123.123:21...
Status: Connection established, waiting for welcome message...
Response: 220 FTP Server Ready
Command: AUTH TLS
Response: 234 AUTH TLS successful
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Command: USER me@mysite.com
Response: 331 Password required for me@mysite.com
Command: PASS ************
Response: 230-***************************************************************************
Response: NOTICE TO USERS
Response: This computer system is private property. It is for authorized use only.
Response: Users (authorized or unauthorized) have no explicit or implicit
Response: expectation of privacy.
Response:
Response: Any or all uses of this system and all files on this system may be
Response: intercepted, monitored, recorded, copied, audited and inspected by
Response: using this system, the user consents to such interception, monitoring,
Response: recording, copying, auditing, inspection, and disclosure at the
Response: discretion of such personnel or officials. Unauthorized or improper use
Response: of this system may result in civil and criminal penalties and
Response: administrative or disciplinary action, as appropriate. By continuing to
Response: use this system you indicate your awareness of and consent to these terms
Response: and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the
Response: conditions stated in this warning.
Response: ****************************************************************************
Response: 230 User me@mysite.com logged in
Command: SYST
Response: 215 UNIX Type: L8
Command: FEAT
Response: 211-Features:
Response: AUTH TLS
Response: CCC
Response: CLNT
Response: EPRT
Response: EPSV
Response: HOST
Response: MDTM
Response: MFF modify;UNIX.group;UNIX.mode;
Response: MFMT
Response: MLST modify*;perm*;size*;type*;unique*;UNIX.group*;UNIX.groupname*;UNIX.mode*;UNIX.owner*;UNIX.ownername*;
Response: PBSZ
Response: PROT
Response: REST STREAM
Response: SIZE
Response: SSCN
Response: TVFS
Response: 211 End
Status: Server does not support non-ASCII characters.
Command: PBSZ 0
Response: 200 PBSZ 0 successful
Command: PROT P
Response: 200 Protection set to Private
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is the current directory
Command: TYPE I
Response: 200 Type set to I
Command: PASV
Response: 227 Entering Passive Mode (123,123,123,123,197,68).
Command: MLSD
Response: 150 Opening BINARY mode data connection for MLSD
Response: 226 Transfer complete
Status: Directory listing of "/" successful
I can also connect using WinSCP. As suggested in the comments, I checked whether the TLS/SSL session ID is reused when opening the data connection. It seems that it is.
227 Entering Passive Mode (???)
MLSD
Connecting to ??? ...
Connection pending
Data connection opened
Trying reuse main TLS session ID
Session ID reused
150 Opening data channel for directory listing of "/"
.NET Framework does not support TLS/SSL session reuse. If your server requires it (which it looks like it does, which is quite common nowadays, and which is a good thing for security), you cannot use FtpWebRequest or FluentFTP. Both use the .NET implementation of TLS/SSL.
You will have to use an FTP library that uses its own TLS/SSL implementation.
You can use my WinSCP .NET assembly. Unlike FluentFTP, it is not a native .NET library; it has a dependency on an external binary. But that is what makes it work.
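For illustration, a minimal sketch of listing the directory with the WinSCP .NET assembly; the host name and credentials below are the placeholders from the question:
using System;
using WinSCP;

public class FtpListing
{
    public static void FtpStuff()
    {
        SessionOptions sessionOptions = new SessionOptions
        {
            Protocol = Protocol.Ftp,
            FtpSecure = FtpSecure.Explicit,   // AUTH TLS on port 21, as in the logs above
            HostName = "my.server.com",
            UserName = "myname",
            Password = "password",
            // If the server certificate is not issued by a trusted CA,
            // set TlsHostCertificateFingerprint to the certificate's fingerprint.
        };

        using (Session session = new Session())
        {
            session.Open(sessionOptions);

            // WinSCP reuses the control connection's TLS session on the data
            // connection, so the listing is expected to succeed here.
            RemoteDirectoryInfo directory = session.ListDirectory("/");
            foreach (RemoteFileInfo fileInfo in directory.Files)
            {
                Console.WriteLine(fileInfo.Name);
            }
        }
    }
}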
Some references:
https://github.com/robinrodricks/FluentFTP/issues/347
https://github.com/dotnet/runtime/issues/27916
"Authentication failed because the remote party has closed the transport stream" when transferring to/from FTP server over TLS/SSL using FluentFTP
Upload file to implicit FTPS server in C# with TLS session reuse
Suddenly getting "150 Opening Data channel for file download from server" after the FTP downloads was working for years – According to this post and other references elsewhere, TLS/SSL session reuse used to work with .NET Framework, but some update broke it. In .NET Core it never worked (see also the dotnet GitHub link above).
I've been trying to set up a webserver on localhost which supports HTTP/3. I've successfully run a Caddy server in Docker which answers GET requests with these headers:
alt-svc: h3-27=":443"; ma=2592000
content-encoding: gzip
content-length: 1521
content-type: text/html; charset=utf-8
date: Thu, 07 May 2020 07:27:44 GMT
server: Caddy
status: 200
vary: Accept-Encoding
X-DNS-Prefetch-Control: off
Even though the alt-svc header was received, I couldn't detect any h3-27 requests in the network logs of the developer tools.
I also created a CA, added it to Chrome, and used it to sign the server's certificate, which Chrome accepts. I ran Chrome with the flags --enable-quic --quic-version="h3-27", as suggested in this article. I've tried the same with an nginx server based on this image and couldn't make it work either.
What am I missing?
Caddyfile:
{
    experimental_http3
}

localhost {
    root * /usr/share/caddy/
    encode zstd gzip
    templates
    file_server
    tls /etc/caddy/certs/localhost.crt /etc/caddy/certs/localhost.key
}
Caddy Output:
2020/05/07 07:23:50.939 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2020/05/07 07:23:51.252 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["127.0.0.1:2019", "localhost:2019", "[::1]:2019"]}
2020/05/07 07:23:51 [INFO][cache:0xc00088da90] Started certificate maintenance routine
2020/05/07 07:23:51 [WARNING] Stapling OCSP: no OCSP stapling for [localhost bar.localhost]: no OCSP server specified in certificate
2020/05/07 07:23:51.254 INFO http skipping automatic certificate management because one or more matching certificates are already loaded {"domain": "localhost", "server_name": "srv0"}
2020/05/07 07:23:51.254 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2020/05/07 07:23:51.255 INFO tls cleaned up storage units
2020/05/07 07:23:51.256 INFO http enabling experimental HTTP/3 listener {"addr": ":443"}
2020/05/07 07:23:51.257 INFO autosaved config {"file": "/config/caddy/autosave.json"}
2020/05/07 07:23:51.257 INFO serving initial configuration
Found the reason myself. The current version of Chrome (Version 81.0.4044.138) does not support this version of QUIC (h3-27). It could be fixed by using chrome-dev (Version 84.0.4136.5).
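For reference, these are the flags from the question pointed at the dev-channel binary (the executable name below is an assumption; it differs per platform and install):
google-chrome-unstable --enable-quic --quic-version="h3-27" https://localhost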
I'm trying to set up HAProxy to serve a static JSON file on 504 errors. To test, we've set up the configuration file to time out after 10 seconds and to use the errorfile option:
defaults
log global
mode http
retries 3
timeout client 10s
timeout connect 10s
timeout server 10s
option tcplog
balance roundrobin
frontend https
maxconn 2000
bind 0.0.0.0:9000
errorfile 504 /home/user1/test/error.json
acl employee-api-service path_reg /employee/api.*
use_backend servers-employee-api if employee-api-service
backend servers-employee-api
server www.server.com 127.0.0.1:8000
Effectively, I'm trying to serve JSON instead of HTML on a timeout so that the backend service can fail gracefully. However, in testing we could not get anything back, neither HTML nor JSON. Looking at the response, it simply says the request failed, with no status code. Is my setup correct for errorfile? Does HAProxy 1.5 support this?
According to the documentation of errorfile:
<file> designates a file containing the full HTTP response. It is
recommended to follow the common practice of appending ".http" to
the filename so that people do not confuse the response with HTML
error pages, and to use absolute paths, since files are read
before any chroot is performed.
So, the file should contain a complete HTTP response but you're trying to serve JSON only.
The documentation further says that:
For better HTTP compliance, it is
recommended that all header lines end with CR-LF and not LF alone.
The example configuration line
errorfile 503 /etc/haproxy/errorfiles/503sorry.http
shows the common practice of using a .http extension for the error file.
You can find samples of some default error files here.
Sample (504.http):
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
So, in your scenario, 504.http would be like this:
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: application/json
{
    "message": "Gateway Timeout"
}
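With that file saved, point the errorfile directive in your frontend at it (the path simply reuses the one from your config, renamed to follow the .http convention):
errorfile 504 /home/user1/test/504.http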
Also, you need to keep the file size under the limit, i.e. BUFSIZE (8 or 16 KB), as described in the documentation.
There might also be error logs explaining why your JSON file was not served, so you might want to look through HAProxy's logs again thoroughly, just to be sure.
I have installed Tinyproxy on a CentOS 7 machine and changed the port to 8080 in tinyproxy.conf.
Whenever I send a request, I get the following entries in tinyproxy.log:
CONNECT Mar 15 08:14:42 [22148]: Connect (file descriptor 6): <IP> [<IP>]
NOTICE Mar 15 08:14:42 [22148]: Unauthorized connection from "<IP>" [<IP>].
INFO Mar 15 08:14:42 [22148]: Read request entity of 1200 bytes
My request is reaching the proxy, but the proxy is not forwarding it to the destination.
In the Tinyproxy config file (/etc/tinyproxy/tinyproxy.conf) you can use the Allow directive to explicitly specify the host(s) that are allowed to connect to the proxy. You can also comment out or remove all Allow <host> lines to allow connections from all hosts. See the description below from the config file (here I've commented out Allow 127.0.0.1, and since there are no other entries, all connections will be allowed):
# Allow: Customization of authorization controls. If there are any
# access control keywords then the default action is to DENY. Otherwise,
# the default action is ALLOW.
#
# The order of the controls are important. All incoming connections are
# tested against the controls based on order.
#
#Allow 127.0.0.1
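If you would rather keep access control enabled, add an Allow entry for your client's address or subnet instead (the subnet below is only a placeholder) and then restart the service, which on CentOS 7 is likely systemctl restart tinyproxy (the unit name is an assumption):
Allow 192.168.1.0/24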
I'm attempting to configure HAProxy to serve an RSA or ECC certificate depending on the client's browser. I am initially trying to get ECC certificates configured, and I noticed that the latest version of Chrome does not support them. Is anyone else having this problem? I am using OS X 10.11.4 with the following versions:
Chrome (50.0.2661.94) (64-bit) [doesn't work]
Firefox (46.0) (64-bit) [works]
Safari (9.1 11601.5.17.1) (64-bit) [works]
cURL (7.43.0 (x86_64-apple-darwin15.0) libcurl/7.43.0 SecureTransport zlib/1.2.5) [works]
The cURL command I run is curl --ciphers ecdhe_ecdsa_aes_128_sha --ssl --head --tlsv1.2 https://<url>, and it returns 200 OK.
And I am using Ubuntu Xenial 16.04 LTS on the server side with the following versions:
[root@haproxy-server]: /etc/haproxy # haproxy -vv
HA-Proxy version 1.6.4 2016/03/13
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2
OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2g 1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g-fips 1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.38 2015-11-23
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Here's the screenshot of the exact problem: http://imgur.com/wlmQbIi
Here's the screenshot of the same website with Safari: http://imgur.com/FEwmmj9
And finally, my haproxy.cfg file:
global
log /dev/log local0
log /dev/log local1 notice
user haproxy
group haproxy
chroot /var/lib/haproxy
daemon
stats socket /run/haproxy/admin.sock level admin
maxconn 15000
spread-checks 5
tune.ssl.default-dh-param 2048
tune.ssl.maxrecord 1400
tune.idletimer 1000
ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
defaults
log global
mode http
retries 3
balance roundrobin
hash-type map-based
option httplog
option dontlognull
option forwardfor
option http-server-close
option redispatch
option abortonclose
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 30s
timeout http-keep-alive 10s
timeout check 10s
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend http-frontend
bind *:80 accept-proxy
reqadd X-Forwarded-Proto:\ http
use_backend %[req.hdr(host),lower,map_sub(/etc/haproxy/backend.map,test-backend)]
frontend https-frontend
bind *:443 accept-proxy ssl crt /etc/ssl/pem/ecc alpn http/1.1
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r\ ssl_version:%sslv\ ssl_cipher:%sslc\ %[ssl_fc_sni]\ %[ssl_fc_npn]
rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubdomains;\ preload
rspadd X-Frame-Options:\ DENY
reqadd X-Forwarded-Proto:\ https
use_backend %[req.hdr(host),lower,map_sub(/etc/haproxy/backend.map,test-backend)]
backend test-backend
balance leastconn
redirect scheme https code 301 if !{ ssl_fc }
server test-server 10.10.10.40:80 check
I know this post is not in the right section of StackExchange (sorry!), but I wanted to post a potential solution. I think the problem is the difference in elliptic curve support between Chrome, Firefox, and Safari. From the SSLLabs website:
Safari 9 / OS X 10.11: secp256r1, secp384r1, secp521r1
Firefox 44 / OS X: secp256r1, secp384r1, secp521r1
Chrome 48 / OS X: secp256r1, secp384r1
The problem is that the private key for the ECC certificate I was testing was generated with secp521r1 (http://imgur.com/dbrJQuW), which the latest version of Chrome on OS X 10.11 doesn't support.
See this issue: https://security.stackexchange.com/questions/100991/why-is-secp521r1-no-longer-supported-in-chrome-others
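A hedged fix is to regenerate the key on a curve Chrome does support (prime256v1 is OpenSSL's name for secp256r1; the file names below are placeholders) and then reissue the certificate from the new key:
openssl ecparam -genkey -name prime256v1 -out example.com.ecc.key
openssl req -new -key example.com.ecc.key -out example.com.ecc.csr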
It seems that only the following two cipher suites are supported by your web server:
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
I suppose that the missing cipher suites (at least TLS_RSA_WITH_AES_128_CBC_SHA) are the reason for your problem.
The cipher suite TLS_RSA_WITH_AES_128_CBC_SHA must be supported in TLS 1.2 (see section 9, Mandatory Cipher Suites, of RFC 5246). In the same way, I would recommend that you look forward and also include the suites
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
and the suites
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
are strongly recommended too (see the TLS 1.3 specification). If your server stack supports TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 and TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, include them as well; they are a very good combination of security and performance. I'd recommend including all of these cipher suites.
I'd additionally recommend using, or at least carefully examining, the settings for modern or intermediate web browsers recommended by the Mozilla SSL Configuration Generator (it has profiles for HAProxy as well as Nginx). You can read more about the suites here.
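In HAProxy/OpenSSL terms (a sketch only, abbreviated rather than your full list): OpenSSL's name for TLS_RSA_WITH_AES_128_CBC_SHA is AES128-SHA, so adding the mandatory RSA suite means appending it to the existing ssl-default-bind-ciphers line, for example:
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES128-SHA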