Upgraded plan on ClearDB and unable to connect when deployed to Heroku - MySQL

I upgraded my plan from the free tier to a dedicated 25 plan.
When I updated my credentials and tested locally, I was able to connect. Same with MySQL Workbench: I could query my data.
When I updated my env vars in Heroku, though, it fails to establish a connection and gives no useful error. I restarted all the dynos but still no luck. I suspect this may be a networking issue on Heroku's side. Anything helps.
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: Access denied for user 'b3b4204cd59615'@'ip-x-x-x-x.ec2.internal' (using password: YES)] with root cause
java.sql.SQLException: Access denied for user 'b3b4204cd59615'@'ip-x-x-x-x.ec2.internal' (using password: YES)
My MySQL version: 'MySQL Community Server (GPL) version 5.6.50'
And I am using the latest version of 'mysql-connector-java':
implementation 'mysql:mysql-connector-java:8.0.28'

OK, I figured it out! It isn't anything obvious, so you need some backstory.
When you add ClearDB to the project, it automatically creates an env var named 'CLEARDB_DATABASE_URL', a single string that contains the username, password, and URL. Their documentation then shows you how to build a DataSource object from it in a config file, expecting you to follow that exactly, I guess.
Well, I didn't want to write a config class; I'd rather have Spring do the work for me by setting the values in application.properties and injecting them from env vars dedicated to each value, i.e. 'db.url', 'db.password', and 'db.username'. That worked.
So when I upgraded, Heroku gave me a new env var ("CLEARDB_CYAN_CLEARDB_HOSTNAME_1") containing the new URL; the username and password stayed the same.
So I wasn't using 'CLEARDB_DATABASE_URL' anymore, just my own env vars, like this:
spring:
  datasource:
    driverClassName: com.mysql.cj.jdbc.Driver
    username: ${db.username}
    password: ${db.password}
    url: ${db.url}
Well, I'm pretty sure there is some behind-the-scenes networking/permissions logic that looks at that variable's value and grants access based on it, because here is what fixed my issue: even though I no longer use that variable in my code, I took the new URL value and replaced the old one in the 'CLEARDB_DATABASE_URL' env var so it points at my new datasource. Once I restarted my dynos again, the deployed app connected successfully.
For example:
mysql://b3b4204cdxxxxxx:password@us-xxxx-xxxx-03.cleardb.com/heroku_XXXXXXXXXXXXX?reconnect=true
became:
mysql://b3b4204cdxxxxxx:password@us-mm-xxx-xxxxxxxxxx.g5.cleardb.net/heroku_XXXXXXXXXXXXX?reconnect=true
Again: even though my source code makes no use of this variable, IT HAS TO BE UPDATED.
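For anyone in the same spot, here is a rough sketch (my own, not from ClearDB's docs) of how the single CLEARDB_DATABASE_URL value could be split into the three values the Spring config above expects, assuming the usual mysql://user:password@host/db?params format; the class name is just illustrative:
import java.net.URI;

// Illustrative sketch: derive the JDBC URL, username, and password from the single
// CLEARDB_DATABASE_URL env var (assumed format: mysql://user:password@host/db?params).
public class ClearDbEnv {
    public static void main(String[] args) {
        URI uri = URI.create(System.getenv("CLEARDB_DATABASE_URL"));

        String[] userInfo = uri.getUserInfo().split(":", 2);
        String username = userInfo[0];
        String password = userInfo.length > 1 ? userInfo[1] : "";

        String jdbcUrl = "jdbc:mysql://" + uri.getHost() + uri.getPath()
                + (uri.getQuery() != null ? "?" + uri.getQuery() : "");

        // These are the values that would feed spring.datasource.url / username / password.
        System.out.println("url=" + jdbcUrl);
        System.out.println("username=" + username);
        System.out.println("password=" + password);
    }
}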

Related

How do I connect to a MySQL database server running on PlanetScale with SSL from node.js on localhost?

I'm trying to connect to the MySQL server on PlanetScale, but can't as it requires SSL.
Here's their doc for that, but I find it unclear:
https://planetscale.com/docs/concepts/secure-connections
Here's the connection URL: DATABASE_URL='mysql://co30rXXXXXXX:pscale_pw_XXXXXXX@hoqx01444p30.us-east-4.psdb.cloud/restaurant?ssl={"rejectUnauthorized":true}'
Here's what I see from my terminal when I run yarn run migration-run
yarn run v1.22.18
$ npx prisma migrate dev
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": MySQL database "restaurant" at "hoqx0XXXXX.us-east-4.psdb.cloud:3306"
Error: Migration engine error: unknown error: Code: UNAVAILABLE server does not allow insecure connections, client must use SSL/TLS
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Has anyone managed to connect to a PlanetScale DB from Node.js on localhost? I have tried some other suggestions from Stack Overflow, but they don't seem to work.
?ssl={"rejectUnauthorized":false}&sslcert=/etc/ssl/certs/ca-certificates.crt
Adding these params to the end of the connection URL fixed the issue. :)
SSL ISSUE ON WINDOWS
If you're working on a Windows machine and using a .env file for your connection string, here is what worked for me to run locally (Windows does not have a default /etc/ssl/certs/ path like the one referenced in the answer above).
You get your connection string from the PlanetScale console, via "overview" > "connect"
This will look something like:
DATABASE_URL='mysql://xxxxxx:*****@aws-eu-west-1.connect.psdb.cloud/dbName?ssl={"rejectUnauthorized":true}'
When using this as-is, you will most likely get the following error message (as the question states):
Code: UNAVAILABLE server does not allow insecure connections, client must use SSL/TLS
You therefore need to provide a local cert; one can be downloaded from the following trusted location:
https://curl.se/docs/caextract.html
Next, save this file to a sensible location on disk that can be referenced in your connection string, for example c:/temp/cacert.pem.
Once saved, you can append the following to your connection string:
&sslcert=C:\\temp\\cacert.pem
Restart your server and you should be all set! 🎉
The equivalent SSL cert update in Node.js would look as follows:
const fs = require('fs');
const mysql = require('mysql2'); // or the 'mysql' package, whichever client you use

const connection = mysql.createConnection({
  host: 'hostNameHere',
  user: 'userNameHere',
  password: 'passwordHere',
  database: 'dbHere',
  ssl: {
    // point at the locally saved CA bundle instead of /etc/ssl/certs
    ca: fs.readFileSync('C:\\temp\\cacert.pem')
  }
});

Rundeck 3.3.6 Community - Move from H2 DB to MySQL 8.0

My Ansible/Rundeck host is an Ubuntu 20.04 LTS system. I installed Ansible to tinker and then installed Rundeck. Once I was able to get the two talking and working properly (in my mind), I thought it would be best to move Rundeck to a production-level DB engine instead of H2. I installed MySQL on the same host and set up the DB and the DB user as directed in the Rundeck docs. I then modified the Rundeck properties file as the same document instructs, but I keep getting a failure to connect to the database.
First it was this error:
WARN internal.JdbcEnvironmentInitiator - HHH000341: Could not obtain connection metadata : Could not connect to address=(host=10.10.140.23)(port=3306)(type=master) : Socket fail to connect to host:10.10.140.23, port:3306. Connection refused (Connection refused)
I researched the issue, and the suggestions were to validate the user account in MySQL, the grants, access, etc. It all works when testing from the MySQL command line.
I read in one of my searches that some people had luck with removing useSSL=false or setting it to true. That led to my next error:
WARN internal.JdbcEnvironmentInitiator - HHH000341: Could not obtain connection metadata : Could not connect to address=(host=localhost)(port=3306)(type=master) : RSA public key is not available client side (option serverRsaPublicKeyFile)
During my research on this error, I read that I needed to add a property to allow the retrieval of the RSA keys, and I did but it didn't change a thing.
I then downloaded the Oracle MySQL JDBC driver, placed it in the /var/lib/rundeck/lib folder, and changed the driver class name in the properties file; then I received my next error of
WARN internal.JdbcEnvironmentInitiator - HHH000341: Could not obtain connection metadata : Could not connect to address=(host=127.0.0.1)(port=3306)(type=master) : (conn=355) Access denied for user 'sa'@'localhost' (using password: YES)
Current charset is UTF-8. If password has been set using other charset, consider using option 'passwordCharacterEncoding'
when I attempted to run Rundeck.
At this point I am back on H2, and I am too much of a Linux novice to understand what the issue may be. Can anyone kindly point me in a helpful direction? The Rundeck docs for using a MySQL DB seem to be either old or missing content, as many of the search results I've found suggest doing things slightly differently or use commands the Rundeck docs don't even mention.
I fixed the same issue by stopping the Rundeck instance and then adding the following config to the rundeck-config.properties file (at the /etc/rundeck path; check this):
# works with allowPublicKeyRetrieval=true
dataSource.url = jdbc:mysql://mysql_server_ip/rundeck?autoReconnect=true&useSSL=false&allowPublicKeyRetrieval=true
dataSource.username=rundeckuser
dataSource.password=your_password
dataSource.driverClassName=com.mysql.cj.jdbc.Driver
# to store projects on backend
rundeck.projectsStorageType=db
Next, flush the connections on the database side with mysqladmin flush-hosts -u root -p.
Now, after starting your Rundeck service, you can check that it is using MySQL 8 as the data source for your projects.
EDIT: On the MySQL side, make sure that you've created the user properly; I followed these steps:
CREATE DATABASE rundeck;
CREATE USER 'rundeckuser'@'%' IDENTIFIED BY 'P4ssw0rd';
GRANT ALL PRIVILEGES ON rundeck.* TO 'rundeckuser'@'%';
exit;
Also check how MySQL 8 is storing the user's password (MySQL 8 defaults to the caching_sha2_password authentication plugin rather than the older mysql_native_password, which some JDBC drivers handle differently).

Containerized server application failing to connect to MySQL databases

I'm trying to connect my server code running as a Docker container in our Kubernetes cluster (hosted on Google Container Engine) to a Google Cloud SQL managed MySQL 5.7 instance. The issue I'm running into is that every connection is being rejected by the database server with Access denied for user 'USER'@'IP' (using password: YES). The database credentials (username, password, database name, and SSL certificates) are all correct and work when connecting via other MySQL clients or the same application running as a container on a local instance.
I've verified that all credentials are the same on the local and the server-hosted versions of the app and that the user I'm connecting with has the wildcard % host specified. Not really sure what to check next here, to be honest...
An edited version of the connection code is below:
const fs = require('fs');
const MySQL = require('mysql2'); // the app uses the mysql2 library
// Config is the app's own configuration module

let connectionCreds = {
    host: Config.SQL.HOST,
    user: Config.SQL.USER,
    password: Config.SQL.PASSWORD,
    database: Config.SQL.DATABASE,
    charset: 'utf8mb4',
};

if (Config.SQL.SSL_ENABLE) {
    connectionCreds['ssl'] = {
        key: fs.readFileSync(Config.SQL.SSL_CLIENT_KEY_PATH),
        cert: fs.readFileSync(Config.SQL.SSL_CLIENT_CERT_PATH),
        ca: fs.readFileSync(Config.SQL.SSL_SERVER_CA_PATH)
    };
}

this.connection = MySQL.createConnection(connectionCreds);
Additional information: the server application is built in Node using the mysql2 library to connect to the database. There are no special firewall rules in place that are causing network issues, and that's confirmed by the fact that the library IS connecting, but failing to authenticate.
After setting up Cloud SQL Proxy I managed to figure out what the actual error was: somewhere between the secret and the pod configuration an extra newline was being added to the database name, causing any connection attempt to fail. With the proxy set up this was made clear because there was an actual error message to that effect displayed.
(Notably, none of the logging I added around the credentials to validate them explicitly displayed the newline; it was disguised by the fact that the console wrapped long lines to fit the display, and the wrap happened to line up exactly with where the database name ended.)
Have you read the documentation on https://cloud.google.com/sql/docs/mysql/connect-container-engine ?
In Container Engine, you need to run a Cloud SQL Proxy container alongside your application container (in the same pod) and talk to the database through it. The Cloud SQL Proxy then makes the actual call to the Cloud SQL service.
If the container worked locally, I assume you have Application Default Credentials set on your development machine. It could be failing because those credentials are not on your container as a Service Account file. Try configuring a Service Account file, or create your GKE cluster with --scopes argument that gives your instances access to Cloud SQL.

Amazon EC2 Tomcat7 instance unable to access MySQL db on same system

I think I've seen a variety of similar posts on this topic, but am still unable to resolve my issue, so I figured I'd post with my specifics.
I have an Amazon AWS Linux EC2 instance running Tomcat7 web server. On the same machine I am also running a MySQL5 server, but I am unable to get the Tomcat app to talk to the MySQL database.
My Java app on Tomcat tries to connect to MySQL by reading from a properties file:
jdbc.mysql.host.path=jdbc:mysql://localhost/
jdbc.mysql.schema=prod
jdbc.mysql.username=root
jdbc.mysql.password=<password>
I am accessing the app from another system via web browser, but when the app tries to connect to the database I get the following error in catalina.out:
java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: YES)
I'm pretty sure the issue has to do with permissions and communication between Tomcat and MySQL, because I've written a simple java program utilizing the same code to read the same properties file, and the connection is made successfully.
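The standalone test is essentially just plain JDBC, something like the sketch below (the properties path and class name here are illustrative; the keys match the file above):
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

// Minimal standalone check: read the same properties file and open a JDBC connection.
public class DbConnectionCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.load(new FileInputStream("/path/to/db.properties")); // illustrative path

        String url = props.getProperty("jdbc.mysql.host.path")
                + props.getProperty("jdbc.mysql.schema");
        String user = props.getProperty("jdbc.mysql.username");
        String password = props.getProperty("jdbc.mysql.password");

        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        }
    }
}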
Here are some things I have attempted to remedy the issue:
- change the owner of the properties file (currently owned by 'Tomcat')
- ensured that user 'root' has been granted all privileges in MySQL
- ensured that port 3306 (MySQL default port) is accessible by my test server
- updated iptables
- made various modifications to the /etc/my.cnf file (tried to bind the IP, but that didn't work)
I have a hunch that the issue may be related to the fact that I am trying to access the MySQL database using user 'root'. Even though I'm accessing it via localhost, the system may not support this because MySQL treats this as access from a separate host and (maybe?) root access from other hosts isn't allowed?
Any suggestions on things to try would be greatly appreciated...
I believe the issue was a combination of things.
Here are some items to consider that ultimately fixed it for me:
- making sure you are accessing the correct app via the browser (I was using the ROOT app, but trying to connect to another one)
- making sure a user exists in MySQL using 'Create User ....'
- making sure all privileges are granted on the database in question; for some reason granting all privileges on *.* wasn't working for me

symfony database access configuration problem

I just joined a web dev project that uses Symfony 1.4 on CentOS 5.4 with MySQL. The server is down. My first task in the project is to get it back up. I don't know a lot about Symfony.
The Apache server log says
Access denied for user 'root'@'localhost' (using password: NO)
From all I can tell, the database access configuration is stored in
/var/www/html/<project name>/config/databases.yml
and for some reason, there's also some config in
/var/www/html/<project name>/config/propel.ini
There was no password for user root in either of the files, so I thought adding it and restarting Apache would fix the issue. It did not; the error message stays the same. I might be looking at the wrong config files, but I can't find any others.
Any wild guesses how to fix this ?
Cheers,
ssc
OK, I now know what I did wrong: the password indeed has to be added to databases.yml, but whenever the configuration is changed, the Symfony cache needs to be cleared by executing
./symfony cc
in the /var/www/html/<project name>/ folder.
You are right, the database config file is stored at /WEB/project/config/databases.yml
You can also try to run the configure:database command from the symfony command line tool.
php symfony configure:database "mysql:host=DBHOST;dbname=DBNAME" USER PASS
A getting-started guide and much more can be found at: http://www.symfony-project.org/jobeet/1_4/Propel/en/