I've been trying to get InfluxDB to accept a self-signed certificate, but so far, no luck. I've been following the instructions from here:
https://docs.influxdata.com/influxdb/v2.3/security/enable-tls/#configure-influxdb-to-use-tls
I created the cert and key with this command:
openssl req -x509 -nodes -newkey rsa:2048 -keyout influxdb-selfsigned.key -out influxdb-selfsigned.crt -days 9999 -config "C:\OpenSSL\openssl.cnf"
The config.yml file is as follows:
http-bind-address: ":8087"
tls-cert: influxdb-selfsigned.crt
tls-key: influxdb-selfsigned.key
Note: I changed the bind port to 8087 to confirm that InfluxDB was actually reading the configuration.
When I start influx from the command line, there are no error messages. Initially there were some TLS handshake errors, but those disappeared, I think when I added the crt and key to the configuration.
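A quick way to check which certificate (if any) the server is presenting is openssl s_client against the bind port from config.yml (just a sanity check, not from the original instructions):
openssl s_client -connect localhost:8087 -servername localhost
# The output should include the subject of influxdb-selfsigned.crt if the
# tls-cert/tls-key settings were picked up; press Ctrl+C to exit.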
However, when I access the URL https://localhost:8087, Chrome shows a "not secure" message and I have to click through warnings to get to the site.
To try to get Chrome to trust the certificate, I followed the instructions from this site:
https://www.pico.net/kb/how-do-you-get-chrome-to-accept-a-self-signed-certificate
I exported the cert, then re-imported it as trusted.
However, I still get the "not secure" message in Chrome.
Also, the InfluxDB console shows this message:
info http: TLS handshake error from [::1]:63065: remote error: tls: unknown certificate {"log_id": "0cKnmWB0000", "service": "http"}
Any ideas how to get the cert working?
Currently it seems there is no easy way or workaround in the open source version. The community has been asking for this feature, but there has been no progress yet. See more details here.
However, in the Enterprise version, you can tell the server that you are using self-signed certificates by setting this option in the influxdb-meta.conf file:
# If using a self-signed certificate:
https-insecure-tls = true
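On the client side, the usual workaround for a self-signed certificate is to skip verification explicitly, for example with curl against the /ping endpoint (host and port taken from the question):
curl -k https://localhost:8087/ping
# -k / --insecure tells curl not to verify the certificate chain.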
Chrome now throws NET::ERR_CERT_INVALID for some certificates that are supported by other browsers.
The only clue I can find is in this list of questions about the new Chrome Root Store that is also blocking enterprise CA installations.
https://chromium.googlesource.com/chromium/src/+/main/net/data/ssl/chrome_root_store/faq.md
In particular,
The Chrome Certificate Verifier will apply standard processing to include checking:
the certificate's key usage and extended key usage are consistent with TLS use-cases.
the certificate validity period is not in the past or future.
key sizes and algorithms are of known and acceptable quality.
whether mismatched or unknown signature algorithms are included.
that the certificate does not chain to or through a blocked CA.
conformance with RFC 5280.
I verified my certificates work as expected in Edge.
Further, I verified the certificate is version "3", has a 2048-bit key, and has the extended key usage for server authentication.
I still don't understand which "standard" this certificate is expected to conform to when the browser only says "invalid". Is there a simple template or policy I can use?
Chrome now rejects TLS certificates containing a field known as pathLenConstraint, sometimes displayed as Path Length Constraint.
I was using certificates issued by Microsoft Active Directory Certificate Services. The Basic Constraints extension was enabled, and in this configuration AD CS incorrectly injects Path Length Constraint=0 into end-entity, non-CA certificates.
The solution is to issue certificates without Basic Constraints. Chrome is equally happy with Basic Constraints on or off, as long as the path length field is not present.
One of the better resources for troubleshooting was this Certificate Linter:
https://crt.sh/lintcert
It found several errors in the server certificate, including the path length set to zero.
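You can also spot the offending extension locally with OpenSSL (server.crt stands in for whatever your server certificate file is called):
openssl x509 -in server.crt -noout -text | grep -A1 "Basic Constraints"
# A "pathlen" value in the output of an end-entity certificate is exactly
# what Chrome objects to; reissue without it (or without Basic Constraints).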
I also found a thread discussing a variety of Certificate Authorities that would issue certificates the same way, so it is a fairly common issue.
https://github.com/pyca/cryptography/issues/3856
Another good resource was the smallstep open source project that I installed as an alternative CA. After generating a generic certificate, the invalid cert error went away and I realized there was something going on between the Microsoft and Google programs.
The best favour you can do yourself is to run Chrome with debug logging to find the exact cause of the issue:
chrome --enable-logging --v=1
This, I believe, will print:
ERROR: Target certificate looks like a CA but does not set all CA properties
Meanwhile it seems they have reverted this check, which, if I'm not mistaken, will be released as Chrome 111 at the beginning of March.
See: https://chromium-review.googlesource.com/c/chromium/src/+/4119124
Following @Robert's answer, I used https://crt.sh/lintcert to fix all the issues I had so that my self-signed certificate keeps working; it had suddenly stopped working and I was getting NET::ERR_CERT_INVALID.
Here's how I did it:
# https://www.openssl.org/docs/manmaster/man5/x509v3_config.html
cat > "$_X509V3_CONFIG_PATH" << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=critical,CA:true
keyUsage=critical,digitalSignature,nonRepudiation,cRLSign,keyCertSign
subjectAltName=@alt_names
issuerAltName=issuer:copy
subjectKeyIdentifier=hash
[alt_names]
DNS.1=somesubdomain.mydomain.com.test
EOF
# Sign the certificate, consuming the extensions file written above
openssl x509 -req \
  -days "$_ROOTCA_CERT_EXPIRE_DAYS" \
  -in "$_ROOTCA_PEM_PATH" \
  -signkey "$_ROOTCA_KEY_PATH" \
  -extfile "$_X509V3_CONFIG_PATH" \
  -out "$_DOMAIN_CRT_PATH"
Following the above, these are my remaining lint errors/issues. Even though there is a single ERROR, my Chrome browser trusts the root CA and the self-signed certificate:
cablint WARNING CA certificates should not include subject alternative names
cablint INFO CA certificate identified
x509lint ERROR AKID without a key identifier
x509lint INFO Checking as root CA certificate
For those of you who wish to generate a self-signed certificate for local development with HTTPS, the following gist does the trick: https://gist.github.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0
Usage:
curl -L --output generate_self_signed_ca_certificate.sh https://gist.githubusercontent.com/unfor19/37d6240c35945b5523c77b8aa3f6eca0/raw/07aaa1035469f1e705fd74d4cf7f45062a23c523/generate_self_signed_ca_certificate.sh && \
chmod +x generate_self_signed_ca_certificate.sh
./generate_self_signed_ca_certificate.sh somesubdomain.mydomain.com
# Will automatically create a self-signed certificate for `somesubdomain.mydomain.com.test`
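To confirm the generated certificate actually carries the expected SAN, something like this should do (the .crt file name is a guess based on the domain above):
openssl x509 -in somesubdomain.mydomain.com.test.crt -noout -text | grep -A1 "Subject Alternative Name"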
I am trying to build an SSL server under Python 3.4. The point is to communicate and exchange data with a program through a defined protocol based on the JSON data format.
So I took a basic "echo server" and client using the SSL protocol and modified them to see if I could exchange data. It worked: sending "hello" from one side arrives as b"hello" on the other, and it works both ways.
I start the server side, connect the program, and it communicates successfully, but:
I am expecting something like LOGIN:n::{"user":"XXXXX","password":"YYYYY","app":"ZZZZZ","app_ver":"zzz","protocol":"xxx","protocol_ver":"xxxx"} arriving from the client (program).
But instead I am getting something like this b"\x16\x03\x03\x00\x8e\x01\x00\x00\x8a\x03\x03^\x9e\xeb\xd8\x8f\xd9 \x05v\xbbF:}\xda\x17\xf7\x13\xff\xa9\xde=5\xfb_\xbco\x16\x96EL#\x00\x00*\xc0,\xc0+\xc00\xc0/\x00\x9f\x00\x9e\xc0$\xc0#\xc0(\xc0'\xc0\n\xc0\t\xc0\x14\xc0\x13\x00\x9d\x00\x9c\x00=\x00<\x005\x00/\x00\n\x01\x00\x007\x00\n\x00\x08\x00\x06\x00\x1d\x00\x17\x00\x18\x00\x0b\x00\x02\x01\x00\x00\r\x00\x14\x00\x12\x06\x01\x06\x03\x04\x01\x05\x01\x02\x01\x04\x03\x05\x03\x02\x03\x02\x02\x00#\x00\x00\x00\x17\x00\x00\xff\x01\x00\x01\x00"
I thought it was simply encoded, so I tried the bytemessage.decode() method with utf-8, cp437, cp1250, cp1252, latin-1, etc. I also tried codecs.decode() with hex. No success; I don't understand what format this is.
I am new to SSL so I suppose I am missing something obvious here, but I have no idea what …
Any help would be greatly appreciated.
Thanks in advance !
---- Edit: here is the code of my server ----
import socket

# Plain TCP server listening on localhost:5000
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('localhost', 5000)
print('starting up on %s port %s' % server_address)
sock.bind(server_address)
sock.listen(1)

while True:
    print('waiting for a connection')
    connection, client_address = sock.accept()
    try:
        print('connection from', client_address)
        while True:
            data = connection.recv(16)
            print('received "%s"' % data)
            if True:
                #data2=b'{"timing":{"liveEvents": {"sector": {"dayTime": 1483523892618,"driver": 1,"isValid": false,"participant": "0","sector": 3,"time": -1}}}}'
                print('sending data to the client')
                #connection.sendall(data2)
            else:
                print('no more data from', client_address)
                break
    finally:
        connection.close()
b"\x16\x03\x03...
This is a TLS message. It looks like your client is trying to speak TLS to your server, but your server cannot handle it properly: instead of treating the data as TLS, it assumes the TLS handshake bytes are the actual application data.
Looking at your server code, the reason is clear: you are not doing any SSL there, i.e. you are using a plain TCP socket. SSL will not magically appear just because a client tries to talk SSL to the server; you need to use the ssl module, properly wrap_socket, and provide the necessary server certificate and key. For a simple example, see the documentation.
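If in doubt, you can also check from the outside whether a port really speaks TLS, e.g. with openssl against the address from the server code above:
openssl s_client -connect localhost:5000
# Against the plain TCP server above the handshake never completes;
# against a server properly wrapped with the ssl module this prints the certificate chain.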
As @Steffen mentioned, I wasn't handling SSL at all, which I now do with ssl.wrap_socket(sock, certfile='certificat.pem', keyfile='cle.pem', server_side=True)
Operation on the server side requires the certificate and key files in PEM format, which I generated with SelfSSL7; I then split the .pfx into two PEM files (key and certificate) with OpenSSL:
openssl pkcs12 -in yourpfxfile.pfx -nocerts -out privatekey.pem -nodes
openssl pkcs12 -in yourpfxfile.pfx -nokeys -out publiccert.pem -nodes
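A quick sanity check that the extracted key and certificate belong together; the two digests should be identical:
openssl x509 -noout -modulus -in publiccert.pem | openssl md5
openssl rsa -noout -modulus -in privatekey.pem | openssl md5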
Maybe not the fastest solution for a self-signed certificate, since I now have OpenSSL installed, but …
Finally, the expected message !!
starting up on localhost port 11000
waiting for a connection
connection from ('127.0.0.1', 60488)
received "b'PING:0::\r\n'"
sending data to the client
received "b'LOGIN:::{"user":"test","password":"test","app":"AppName","app_ver":"1.0.0","protocol":" ","protocol_ver":"1.0.0"}\r\n'"
sending data to the client
Again, thank you very much @SteffenUllrich.
I've deployed Keycloak on WildFly 10 via Docker. SSL support was enabled via cli. Final standalone.xml has:
<security-realm name="UndertowRealm">
<server-identities>
<ssl>
<keystore path="keycloak.jks" relative-to="jboss.server.config.dir" keystore-password="changeit"
alias="mydomain" key-password="changeit"/>
</ssl>
</server-identities>
</security-realm>
Undertow subsystem:
<https-listener name="default-https" security-realm="UndertowRealm"
socket-binding="https"/>
Key was generated and placed in $JBOSS_HOME/standalone/configuration
keytool -genkey -noprompt -alias mydomain \
  -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
  -keystore keycloak.jks -storepass changeit -keypass changeit
Port 8443 is exposed via Docker.
Accessing https://mydomain:8443/ in Chrome results in ERR_CONNECTION_CLOSED. Firefox returns "Secure Connection Failed, the connection was interrupted..."
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
Input:
GET / HTTP/1.1
Host: https://mydomain:8443
This returns the Keycloak welcome page.
So clearly WildFly is working, but I am being blocked by the browsers for whatever reason. What could this reason be? I was under the impression that I should be able to add an exception for a self-signed certificate in either browser. Maybe the generated key length is too short, or maybe I am hitting some other security constraint imposed by Firefox/Chrome?
Using these parameters in keytool solved the problem: -keyalg RSA -keysize 2048
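For reference, the question's keytool command with those parameters added would look roughly like this (same placeholder values as above):
keytool -genkey -noprompt -alias mydomain \
  -keyalg RSA -keysize 2048 \
  -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
  -keystore keycloak.jks -storepass changeit -keypass changeit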
... -dname "CN=mydomain
The certificate is probably malformed. Browsers and other user agents, like cURL and OpenSSL, use different policies to validate an end-entity certificate. Browsers will reject a certificate if the hostname only appears in the Common Name (CN), while other user agents will accept it.
The short answer to this problem is: place DNS names in the Subject Alternative Name (SAN), not the Common Name (CN).
You may still encounter other problems, but getting the names right will help immensely with browsers.
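As a sketch, a self-signed certificate with the hostname in the SAN can be generated in one shot like this (the -addext option needs OpenSSL 1.1.1 or newer; mydomain is a placeholder):
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout mydomain.key -out mydomain.crt -days 365 \
  -subj "/CN=mydomain" \
  -addext "subjectAltName=DNS:mydomain"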
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
OpenSSL prior to 1.1.0 did not perform hostname validation. Prior versions will accept any name.
cURL or Wget would be a better tool to test with in this case.
For reading on the verification you should perform when using OpenSSL, see:
SSL/TLS Client
For reading on the rules for hostnames and where they should appear in a X509 certificate, see:
How do you sign Certificate Signing Request with your Certification Authority?
How to create a self-signed certificate with openssl?
I need Postman to connect to my server, which requires a client certificate. After some research, it seems that Postman does not handle certificates itself but relies on Chrome's certificates instead. My next step was to try to install the certificates in Chrome; my cert structure is like this:
Self-signed root CA => intermediate CA 1 => intermediate CA 2 => client cert
I have a file certchain.pem that contains the client cert followed by the intermediate CA 2 cert and then the intermediate CA 1 cert; I also have a client.key file. I tried to install the chain in Chrome, but it seems that Chrome requires PKCS#12, so I split certchain.pem into client.crt and middle.pem, then converted everything to PKCS#12 with:
openssl pkcs12 -in client.crt -inkey client.key -certfile middle.pem -export -out client.p12
I could install client.p12 into Chrome, but it doesn't seem to retain the intermediate certs; when I choose View, it only shows info about the client cert.
I've tested that client.p12 works by installing it into Firefox, where I can see info about the intermediate certs. I've also tested that my certs work by doing a curl with them. Any other ideas?
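One way to see what actually ended up in the PKCS#12 bundle is to list its contents with OpenSSL (file names as above):
openssl pkcs12 -in client.p12 -info -nokeys
# This prints every certificate in the bundle; the two intermediates should
# appear next to the client certificate if the export included them.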
OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an https page I am writing through the built-in webserver (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3,
tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption. It's for plain HTTP requests. The openssl extension and function support is unrelated. It does not accept requests or send responses over the stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost:443 to reach your webserver.
There should be a cert error and you'll have to click through a browser warning but that gets you to the point where you can hit your localhost with HTTPS requests, for development.
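If you want to double-check which certificate stunnel is presenting (the bundled stunnel.pem in this setup), you can ask openssl:
openssl s_client -connect localhost:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates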
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx, Laravel, and the SSL settings in your operating system all at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult because you have to fight line-ending (carriage return) differences when dealing with SSL certs. Sometimes you can go through the steps correctly and still get ruined by cert validation issues. I find the trick is to make the certs in Ubuntu or macOS and email them to yourself, or use the Windows Subsystem for Linux.
In my case, I kept running into an issue where I had declared HTTPS somewhere, but php artisan serve only works over HTTP.
I just caused this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out that I was using Axios to make a POST request to an https:// URL. Changing it to POST to http:// fixed it.
My recommendation to anyone would be to take a look at where and how HTTP/HTTPS is being used.
The textbook definition is probably something like: php artisan serve only works over HTTP, but the incoming request assumed an underlying SSL layer.
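You can reproduce the server-side message on purpose, which makes it easier to recognize (assuming the dev server is on its default port 8000):
curl -k https://localhost:8000
# curl attempts a TLS handshake against a plain-HTTP server, and the PHP
# dev server logs "Invalid request (Unsupported SSL request)".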
Use Ngrok
Expose your server's port like so:
ngrok http <server port>
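For example, for the php -S localhost:8000 setup from the earlier answers:
ngrok http 8000
# ngrok then prints a forwarding entry with an https:// public URL that
# tunnels to http://localhost:8000.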
Browse with ngrok's secure public address (the one with https).
Note: though it works like a charm, it seems like overkill since it requires an internet connection; I would appreciate better recommendations.