How to access an OpenShift cluster using a .kube/config file

I have to authenticate to an OpenShift cluster via a .kube/config file. For that, I generated an x509 client certificate and key using OpenSSL, and converted the certificate to PEM format using the following command:
openssl x509 -in xyz.crt -out xyz.pem -outform PEM
I then generated a .kube/config file for OpenShift authentication and put the ca.pem, xyz.pem and xyz_key.pem into that .kube/config file. But I am facing an error like:
tls: failed to find any PEM data in certificate input
Kind regards, and thank you for your patience.
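For context, a minimal sketch of the .kube/config entries in question (the server URL and names here are placeholders): the error usually means that one of the referenced files does not actually contain PEM data, so it is worth checking that each file starts with a -----BEGIN line.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /path/to/ca.pem
    server: https://openshift.example.com:8443
  name: mycluster
users:
- name: myuser
  user:
    client-certificate: /path/to/xyz.pem
    client-key: /path/to/xyz_key.pem
contexts:
- context:
    cluster: mycluster
    user: myuser
  name: mycontext
current-context: mycontext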

Related

How to get InfluxDB to accept a self-signed certificate?

I've been trying to get InfluxDB to accept a self-signed certificate, but so far, no luck. I've been following the instructions from here:
https://docs.influxdata.com/influxdb/v2.3/security/enable-tls/#configure-influxdb-to-use-tls
I created the cert and key with this command:
openssl req -x509 -nodes -newkey rsa:2048 -keyout influxdb-selfsigned.key -out influxdb-selfsigned.crt -days 9999 -config "C:\OpenSSL\openssl.cnf"
The config.yml file is as follows:
http-bind-address: ":8087"
tls-cert: influxdb-selfsigned.crt
tls-key: influxdb-selfsigned.key
Note: I made the bind port 8087 to ensure it was reading the configuration.
When I start influx from the command line, there are no error messages. Initially there were some TLS handshake errors, but those disappeared, I think when I added the crt and key to the configuration.
However, when I access the URL https://localhost:8087, chrome shows a "not secure" message and I have to click through warnings to get to the site.
To try to get Chrome to trust the certificate, I followed the instructions from this site:
https://www.pico.net/kb/how-do-you-get-chrome-to-accept-a-self-signed-certificate
I exported the cert, then re-imported it as trusted.
However, I still get the "not secure" message in Chrome.
Also, the InfluxDB console shows this message:
info http: TLS handshake error from [::1]:63065: remote error: tls: unknown certificate {"log_id": "0cKnmWB0000", "service": "http"}
Any ideas how to get the cert working?
Currently it seems there is no easy way or workaround in the open source version. The community has been asking for this feature, but there has been no progress yet. See more details here.
However, in the Enterprise version, you can configure the server to know you are using self-signed certificates by setting this option in the influxdb-meta.conf file:
# If using a self-signed certificate:
https-insecure-tls = true
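As for the "not secure" warning itself: Chrome ignores the Common Name and only trusts hostnames listed in a subjectAltName, so it is worth checking whether the generated certificate has one (look for a Subject Alternative Name entry in the output) and, assuming OpenSSL 1.1.1 or later for the -addext flag, regenerating it with one:
openssl x509 -in influxdb-selfsigned.crt -noout -text
openssl req -x509 -nodes -newkey rsa:2048 -keyout influxdb-selfsigned.key -out influxdb-selfsigned.crt -days 9999 -addext "subjectAltName=DNS:localhost" -config "C:\OpenSSL\openssl.cnf"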

Sending JSON to endpoint with certificate and using private key

I want to send JSON to an endpoint using VB or C# - easy enough to do. I also think I know how to attach the .CRT file to the request. However, I am unsure how to make use of the .pem private key file. Do I attach that to the request too? I'm using .NET Framework 4.
Solution: You need to create a .pfx file:
openssl pkcs12 -in a.crt -inkey a.pem -export -out a.pfx
and then add that to your request as shown here.
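Before wiring this into .NET, one way to sanity-check the .pfx against the endpoint is curl, assuming a curl build whose TLS backend supports PKCS#12 (the URL and password here are placeholders):
curl --cert-type P12 --cert a.pfx:password -H "Content-Type: application/json" -d "{\"test\": 1}" https://example.com/endpoint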

ejabberd: How to use "ldap_tls_certfile"

According to the ejabberd docs, you can use ldap_tls_certfile in order to verify the TLS connection to the LDAP server. But which certificate is expected here?
Quoting the docs:
A path to a file containing PEM encoded certificate along with PEM encoded private key. This certificate will be provided by ejabberd when TLS enabled for LDAP connections. There is no default value, which means no client certificate will be sent.
Sooo.... I tried to use a concatenated PEM file containing first the host certificate of the ejabberd server, followed by the host key. But this leads to the following errors:
<0.471.0>#eldap:connect_bind:1073 LDAP connection to ldap1.example.com:636 failed: received CLIENT ALERT: Fatal - Handshake Failure - {bad_cert,hostname_check_failed}
<0.1975.0> TLS client: In state certify at ssl_handshake.erl:1372 generated CLIENT ALERT: Fatal - Handshake Failure - {bad_cert,hostname_check_failed}
This obviously is not what is expected. Is it the public certificate of the LDAP server? But then, what private key is expected?
I'm a bit lost here. Anyone mind to lend me a hand?
Disclaimer: I never used LDAP TLS.
Looking at the ejabberd source code, the value of ejabberd's option ldap_tls_certfile is copied into eldap's option tls_certfile:
https://github.com/processone/ejabberd/blob/e4d600729396a8539e48ac0cbd97ea1b210941cd/include/eldap.hrl#L72
And later the value of eldap's tls_certfile is copied into ssl's option certfile
https://github.com/processone/ejabberd/blob/e4d600729396a8539e48ac0cbd97ea1b210941cd/src/eldap.erl#L580
That option, among others, is provided as an argument when calling ssl:connect/4
https://github.com/processone/ejabberd/blob/e4d600729396a8539e48ac0cbd97ea1b210941cd/src/eldap.erl#L1140
So, the option that you set in ejabberd is named 'certfile' in ssl:connect; you can see its documentation here:
https://erlang.org/doc/man/ssl.html#connect-4
Searching for certfile in that page, it shows this description:
Path to a file containing the user certificate on PEM format.
Is it the public certificate of the LDAP server?
Try that one and comment here.
But then, what private key is expected?
Try not putting any private key in the file. In any case, when the LDAP certificate was created, a private key file was produced too.
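For experimenting, here is a sketch of the relevant ejabberd.yml options, assuming the option names from the current ejabberd docs and placeholder paths. Note that ldap_tls_cacertfile (the CA bundle used to verify the LDAP server's certificate) is a separate option from ldap_tls_certfile, and the hostname_check_failed alert above appears to come from verifying the server's certificate, not from the client certificate:
ldap_servers:
  - ldap1.example.com
ldap_port: 636
ldap_encrypt: tls
ldap_tls_certfile: /etc/ejabberd/ldap-client.pem
ldap_tls_cacertfile: /etc/ejabberd/ldap-ca.pem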

Enable SSL in MySQL and PostgreSQL server

Currently I am using MySQL 6.5 and PostgreSQL 9.5. I need to enable (configure) SSL on both servers. I have an SSL certificate in .pfx format. Can you please suggest how to configure SSL on both servers? I have searched a lot of documents online, but I didn't get a clear idea from any of them.
OS: Windows 10
For PostgreSQL you will need a crt and a key file.
To get the crt and key files, you can use the following commands:
openssl pkcs12 -in [yourfile.pfx] -nocerts -out [keyfile-encrypted.key]
openssl pkcs12 -in [yourfile.pfx] -clcerts -nokeys -out [certificate.crt]
Note: you may have to use pkcs8/pkcs7 depending on your pfx file.
These files are then used in your postgresql.conf file; refer to this article to identify the settings you need: https://www.postgresql.org/docs/9.5/static/runtime-config-connection.html
The relevant settings are ssl_cert_file and ssl_key_file.
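As a sketch, using the file names from the commands above: the key exported with -nocerts is passphrase-protected, and PostgreSQL cannot prompt for a passphrase when started as a Windows service, so you would typically strip the passphrase first and then point postgresql.conf at the files:
openssl rsa -in keyfile-encrypted.key -out server.key
ssl = on
ssl_cert_file = 'certificate.crt'
ssl_key_file = 'server.key'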

Can't access WildFly over HTTPS with browsers but can with OpenSSL client

I've deployed Keycloak on WildFly 10 via Docker. SSL support was enabled via the CLI. The final standalone.xml has:
<security-realm name="UndertowRealm">
  <server-identities>
    <ssl>
      <keystore path="keycloak.jks" relative-to="jboss.server.config.dir" keystore-password="changeit" alias="mydomain" key-password="changeit"/>
    </ssl>
  </server-identities>
</security-realm>
Undertow subsystem:
<https-listener name="default-https" security-realm="UndertowRealm" socket-binding="https"/>
Key was generated and placed in $JBOSS_HOME/standalone/configuration:
keytool -genkey -noprompt -alias mydomain -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" -keystore keycloak.jks -storepass changeit -keypass changeit
Port 8443 is exposed via Docker.
Accessing https://mydomain:8443/ in chrome results in ERR_CONNECTION_CLOSED. Firefox returns "Secure Connection Failed, the connection was interrupted..."
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
Input:
GET / HTTP/1.1
Host: https://mydomain:8443
This returns the Keycloak welcome page.
So clearly WildFly is working, but I am being blocked by the browsers for whatever reason. What could this reason be? I was under the impression that I should be able to add an exception for a self-signed certificate in either browser. Maybe the generated key length is too short, or maybe I am hitting some other security constraint imposed by Firefox/Chrome?
Using these parameters in keytool solved the problem: -keyalg RSA -keysize 2048
... -dname "CN=mydomain
The certificate is probably malformed. Browsers and other user agents, like cURL and OpenSSL, use different policies to validate an end-entity certificate. The browser will reject a certificate with the hostname only in the Common Name (CN), while other user agents will accept it.
The short answer to this problem is: place DNS names in the Subject Alternative Name (SAN), and not the Common Name (CN).
You may still encounter other problems, but getting the names right will help immensely with browsers.
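Putting the two answers together, a keytool invocation along these lines should generate a certificate that browsers can be told to trust (the -ext SAN option assumes Java 7 or later; the names are taken from the question):
keytool -genkey -noprompt -alias mydomain -keyalg RSA -keysize 2048 -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" -ext "SAN=dns:mydomain" -keystore keycloak.jks -storepass changeit -keypass changeit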
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
OpenSSL prior to 1.1.0 did not perform hostname validation; prior versions will accept any name.
cURL or Wget would be a better tool to test with in this case.
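For example, assuming the keystore from the question, you can export the certificate in PEM form and have curl verify the connection against it:
keytool -exportcert -alias mydomain -keystore keycloak.jks -storepass changeit -rfc -file mydomain.pem
curl -v --cacert mydomain.pem https://mydomain:8443/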
For reading on the verification you should perform when using OpenSSL, see:
SSL/TLS Client
For reading on the rules for hostnames and where they should appear in an X509 certificate, see:
How do you sign Certificate Signing Request with your Certification Authority?
How to create a self-signed certificate with openssl?