OpenShift and Let's Encrypt certificates

Is there any integration for Let's Encrypt in OpenShift (or is this planned)? Let's Encrypt is going to issue certs that expire in 90 days [1], and a big part of their plan is for users to automate renewal so that their certs are always kept up to date. Given this, some integration from OpenShift would be necessary.
Thanks,
[1] https://letsencrypt.org/2015/11/09/why-90-days.html

Currently, automating SSL certificate renewal and installation on OpenShift Online is not possible, because the SSL certificates are stored at the node level and SSL connections are terminated by the node-level proxy (Reference this). If you would like to see it included in future versions, you should vote here and get people to vote on it. You could partially automate it locally (or build a module to do so) using the OpenShift Online API. Another option is to get a free SSL certificate from StartSSL that lasts for a year and install it using either the command line or the web console.
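As a rough sketch of the command-line route (the app name, alias, and file paths here are hypothetical, and this assumes the rhc client from OpenShift Online v2), a renewal script could re-install a renewed certificate on a custom-domain alias:
rhc alias update-cert myapp www.example.com --certificate fullchain.pem --private-key privkey.pem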

Related

Apereo CAS overlay IdP SAML metadata generation on OpenShift

Has anyone come across this scenario?
We have to deploy the IdP/CAS server in a microservice architecture on OpenShift.
We don't have local storage, so the IdP metadata is regenerated every time the pods are deployed.
However, the SP also needs to handle the IdP metadata's X.509 cert. Is there any way to handle this situation?
As a general guideline, always ensure you have enough diagnostics data in your report and report the exact CAS version numbers that exhibit the seemingly-faulty behavior.
Since you don't mention this, I am going to point out a few resources to you:
This link describes strategies where SAML SP metadata can be managed without local storage.
An updated version of the same page for the next CAS version describes strategies where SAML IDP metadata can be managed without local storage.

Certificate Validation on Cloud SQL

I've found that if you connect to a Cloud SQL instance over SSL, the CommonName provided in the server's certificate is my-project-123456:myinstance, which renders the certificate un-validatable, as the client expects the CN to be either the hostname or the IP.
Every solution to this problem seems to amount to "just disable validation", which is not acceptable to me because:
Why has GCP decided to do everything else correctly, providing a CA cert and client certificates, only to drop the ball on identity validation? By disabling validation you're basically saying "I'm OK with being MITMed at some point".
What about projects where we can't play fast and loose with validation because of PIPA/HIPAA?
What about MySQL clients that don't support turning validation off? E.g., all PHP 5.6 MySQL libs using mysqlnd prior to the upcoming 5.6.16 release.
Is there any way to make SSL work correctly on Cloud SQL?
One of the reasons for not having the IP address of the instance in the common name of the server certificate is that these IPs can change. What is the IP address of instance A today can be the IP address of instance B tomorrow, because A was deleted, or A decided that it no longer wants that IP address. So the instance name was chosen as a more stable identifier for the instance.
Also, the MySQL client libraries have hostname verification disabled by default. http://dev.mysql.com/doc/refman/5.7/en/ssl-options.html
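For illustration only (the host, key store paths, and passwords below are hypothetical), a minimal JDBC sketch with MySQL Connector/J 5.1 that pins Cloud SQL's CA and presents the client certificate via Java key stores, while relying on the driver's default of verifying the chain but not the hostname:
import java.sql.Connection;
import java.sql.DriverManager;

public class CloudSqlSsl {
    public static void main(String[] args) throws Exception {
        // verifyServerCertificate=true checks the server's chain against the
        // trust store; Connector/J 5.1 does not match the CN to the hostname.
        String url = "jdbc:mysql://203.0.113.10:3306/mydb"
                + "?useSSL=true&requireSSL=true&verifyServerCertificate=true"
                + "&trustCertificateKeyStoreUrl=file:server-ca.jks"    // holds the Cloud SQL server CA
                + "&trustCertificateKeyStorePassword=changeit"
                + "&clientCertificateKeyStoreUrl=file:client-cert.jks" // holds the client cert/key
                + "&clientCertificateKeyStorePassword=changeit";
        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpass")) {
            System.out.println("TLS connection established: " + conn.isValid(5));
        }
    }
}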
With regards to MITM attacks, it is not possible to MITM a Cloud SQL instance, because the server certificate and each of the client certificates are signed by unique self-signed CAs that are never used to sign more than one cert. The server only ever trusts certificates signed by one of these CAs. The reason for using a unique CA per client cert is that MySQL 5.5 did not support certificate revocation lists, and we did not want to deal with CRLs, but still wanted to support deletion of client certs.
We will look into ways of supporting SSL for clients which cannot turn off hostname validation. But I cannot promise an ETA on this.
Cloud SQL Team.

Issue when trying to connect to the cluster after updating the version of Java SDK

We are experiencing an issue when trying to connect to the cluster after updating the version of the Java SDK.
The setup of the system is as follows:
We have a web application that uses the Java SDK and a Couchbase cluster. In between we have a VIP (Virtual IP Address). We realise that isn't ideal, but we're not able to change it immediately, since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications on the cluster and ensure that when the application starts it can find the cluster, regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the cluster nodes currently in use, and the cluster would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would keep running without issue.
In the last sprint we updated the Java SDK to version 2.1.3. Our application would start and the Java SDK would initiate a request on port 11210 to the VIP. Since this port is not open, the request would fail and the Java SDK would throw an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further request would be made on any port.
It appears the order in which ports are used has changed between versions. Could somebody please confirm, or dispute, that the order of ports used for cluster discovery has changed between versions? Also, could somebody please advise on how we could resolve the issue? We are trying to understand the client's behavior: if we opened all those ports on the VIP, would the client still function correctly and at full performance?
The issue is happening on our production environment which we cannot use for testing out potential solutions since it will interfere with our products.
In v2.x of the Java SDK, it defaults to port 11210 to get the cluster map used to bootstrap the application. This is actually a huge improvement, as the map now comes from the managed cache and not the cluster manager (8091). The SDK should use 8091 as a fallback if it cannot get the map on 11210, though. Regardless, you really want to get that map from 11210, trust me. It cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the Java 2.2.x SDK, get rid of the VIP entirely, and go with a DNS SRV record instead, as sketched below. That gives you one DNS entry for the SDK connection object, and you just manage the node list in DNS. It works great. I say SDK 2.2 because the DNS SRV record solution is fully supported there; in 2.1 it is experimental. VIPs are specifically recommended against by Couchbase these days. In older versions of the SDKs it was fine to do this, and it helped with limiting the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
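As a sketch (the hostname is hypothetical): you publish SRV records under _couchbase._tcp.<name> pointing at the cluster nodes, then enable DNS SRV lookup in the environment so the connection string stays a single stable name:
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class DnsSrvBootstrap {
    public static void main(String[] args) {
        // Resolves _couchbase._tcp.cb.example.com SRV records to find the nodes.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .dnsSrvEnabled(true)
                .build();
        CouchbaseCluster cluster = CouchbaseCluster.create(env, "cb.example.com");
        cluster.openBucket("default");
    }
}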
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrapping (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I can't guarantee that it'll work with a VIP even after that, but it may be worth a try if you're in a hurry.
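A minimal sketch of that workaround (the VIP hostname is hypothetical); disabling carrier bootstrap forces the SDK to fetch the cluster map over HTTP on 8091, the only port the VIP exposes:
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class HttpBootstrap {
    public static void main(String[] args) {
        // Skip the 11210 carrier bootstrap and fall back to HTTP bootstrap on 8091.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .bootstrapCarrierEnabled(false)
                .build();
        CouchbaseCluster cluster = CouchbaseCluster.create(env, "my-vip.example.com");
        cluster.openBucket("default");
    }
}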

Getting Mysql2::Error (SSL connection error: ASN: bad other signature confirmation) on Heroku App with AWS RDS

Mysql2::Error (SSL connection error: ASN: bad other signature confirmation):
I am making an administration site. The environment is Rails 4.2 and Ruby 2.2, connecting to AWS RDS from a Heroku server.
I don't know why I'm getting this error; it suddenly appeared, and I can't find any errors other than this one. My code worked when I pushed it two days ago, and I haven't touched it since.
How can I solve this problem?
For me, this had to do with the RDS SSL Certificate Rotation that happened on April 3rd, 2015.
However, in my case just using the root certificate did not work, and I had to use an intermediate certificate for my region as well. Details:
Go into the AWS RDS console and reboot your RDS instance.
Download the new root certificate https://s3.amazonaws.com/rds-downloads/rds-ca-2015-root.pem. Put it into the config directory of your app.
Download the intermediate certificate for your database region here. I had to use the US East one, but you will have to pick the one for your region.
This is the key step. You need to combine the intermediate certificate and the root certificate into one file so that the intermediate certificate is above the root certificate, forming a certificate chain. Open the intermediate certificate using a text editor, copy its contents, and paste them into config/rds-ca-2015-root.pem, on top, above the root certificate. So, after you are done, config/rds-ca-2015-root.pem should be the intermediate certificate followed by the root certificate, all in this file.
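After combining, config/rds-ca-2015-root.pem should look like this (certificate bodies elided), intermediate first, root second:
-----BEGIN CERTIFICATE-----
(intermediate certificate for your region)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(2015 root certificate)
-----END CERTIFICATE-----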
Get your current database URL:
heroku config
and look for the DATABASE_URL property.
Update your database URL to use the new certificate file. All you should have to change is the name of the certificate (since it's now called rds-ca-2015-root.pem):
heroku config:add DATABASE_URL="mysql2://DB_NAME:DB_PASSWORD@DB_URL/DB_NAME?sslca=config/rds-ca-2015-root.pem"
Commit the changes and redeploy to Heroku.
Four years later (2019), AWS is rotating CA certs again, as expected.
RDS users are recommended to switch from the 2015 cert to the 2019 cert by 2019-11-01, and "no later than" 2020-02-05. The 2015 certificates expire on 2020-03-05.
I used the following procedure, based on RDS' Rotating Your SSL/TLS Certificate guide.
Schedule downtime
Download new certificates, save in config
Only the root cert is needed: rds-ca-2019-root.pem
The instructions mention a 2015+2019 bundle, but I couldn't find it. This file is 2019 only.
Region-specific intermediate certs are not needed
Commit, but don't deploy yet
heroku maintenance:on
In RDS web console, modify server
In the Network & Security section, choose rds-ca-2019
Apply changes immediately
Scale dynos down to 0
heroku config:set DATABASE_URL=mysql2://myuser:mypassword@myhost.rds.amazonaws.com/mydb?sslca=config/rds-ca-2019-root.pem
Deploy
Scale dynos up, watch logs
heroku maintenance:off
There are many reasonable variations on this procedure, this is just what worked for me.
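One way to confirm the server is presenting a chain your new bundle trusts (the host and user here are placeholders) is to connect with a MySQL 5.7+ client and require CA verification:
mysql -h myhost.rds.amazonaws.com -u myuser -p --ssl-ca=config/rds-ca-2019-root.pem --ssl-mode=VERIFY_CA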

MySQL Community Server - Security Patches

I have been running a MySQL Community Server for a couple of years now, and a new client has asked for a report from a vulnerability scanner on our network. I am using OpenVAS, and the network is fine apart from the server: it's returning a high threat stating that a MySQL security patch needs to be applied. I've gone onto the Oracle website and I believe I require a Support Identifier to apply the patch, so I did some Googling, and it's basically a subscription from Oracle. As it's a small company, is there a way to apply this patch to the Community edition without having to fork out a ton of money, or shall I just filter incoming traffic to the MySQL port? (It's not the actual fix, but at least it's something.)
Cheers for the help!
A first measure would be closing the MySQL port through a firewall (iptables), or at least restricting it to the machines in the internal network needing direct access to MySQL.
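For example (the subnet is a placeholder), you could allow only the internal network to reach port 3306 and drop everything else:
iptables -A INPUT -p tcp --dport 3306 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP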
As for the patch: Maybe there are newer pre-built packages for your OS/distro which already contain the bugfix.