Google Cloud SQL - Catch bad logins - mysql

I have an existing MySQL (version 5.7) instance hosted (managed) by Google Cloud SQL. I want to get a notification when someone tries to connect to my database with a bad username/password.
My idea was to look for it in the Google Stackdriver logs, but it's not there.
Is there an option to collect this information?
UPDATE 1:
I tried to connect to the instance with gcloud but, unfortunately, it's not working.
$ gcloud sql connect mydb
Whitelisting your IP for incoming connection for 5 minutes...done.
ERROR: (gcloud.sql.connect) It seems your client does not have ipv6 connectivity and the database instance does not have an ipv4 address. Please request an ipv4 address for this database instance.
It's because the database is accessible only inside the internal network. I searched for flags like --internal-ip but didn't find one.
However, I guessed it wouldn't make any difference if I tried to access the database from my DB editor (Workbench) instead. So I did:
I searched for the query that @Christopher advised, but it's not there.
What did I miss?
UPDATE 2:
Screenshot of my Stackdriver:
Even if I remove the (resource.labels.database_id="***") condition, the result is the same.

Is there an option to collect this information?
One of the best options to collect information about who is trying to connect to your Google Cloud SQL instance with wrong credentials is Stackdriver Logging.
Before beginning
To reproduce these steps, I connected to the Cloud SQL instance using the gcloud command:
gcloud sql connect [CLOUD_SQL_INSTANCE]
I am not entirely sure whether anything changes if you use the mysql command line instead, but in case it does, you only need to look for the new log message and update the last boolean expression (the textPayload one) accordingly.
How to collect this information from Stackdriver Logging
Go to the Stackdriver → Logging section.
To get the information we are looking for, we will use advanced log queries. Advanced log queries are expressions that can specify a set of log entries from any number of logs; they can be used in the Logs Viewer, the Logging API, or the gcloud command-line tool, and they are a powerful way to get information out of your logs.
Here you will find how to get and enable advanced log queries in your logs.
Advanced log queries are just boolean expressions that specify a subset of all the log entries in your project. To find out who has entered wrong credentials into your Cloud SQL instance running MySQL, we will use the following query:
resource.type="cloudsql_database"
resource.labels.database_id="[PROJECT_ID]:[CLOUD_SQL_INSTANCE]"
textPayload:"Access denied for user"
Where [PROJECT_ID] corresponds to your project ID and [CLOUD_SQL_INSTANCE] corresponds to the name of the Cloud SQL instance you would like to supervise.
Notice that the last boolean expression, the one on textPayload, uses the : operator.
As described here, the : operator matches any substring of the log entry field, so the filter returns every log entry containing the specified string, which in this case is "Access denied for user".
If now some user enters the wrong credentials, you should see a message like the following appear within your logs:
[TIMESTAMP] [Note] Access denied for user 'USERNAME'@'[IP]' (using password: YES)
From here it is a matter of using one of the GCP products to send you a notification when a user enters wrong credentials.
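One way to do that (a sketch, using a log-based metric plus a Stackdriver Monitoring alerting policy; the metric name bad-mysql-logins is just a placeholder):
gcloud logging metrics create bad-mysql-logins \
    --description="Failed MySQL login attempts" \
    --log-filter='resource.type="cloudsql_database" AND resource.labels.database_id="[PROJECT_ID]:[CLOUD_SQL_INSTANCE]" AND textPayload:"Access denied for user"'
An alerting policy on this metric (created in the Monitoring console or via the API) can then notify you by email or another channel whenever the counter goes above zero.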
I hope it helps.

As stated in the GCP documentation:
Cloud Shell doesn't currently support connecting to a Cloud SQL instance that has only a private IP address.
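One possible workaround (a sketch, assuming the Cloud SQL Auth Proxy is installed on a machine or VM that can reach the instance's private network; the connection-name parts are placeholders):
cloud_sql_proxy -instances=[PROJECT_ID]:[REGION]:[CLOUD_SQL_INSTANCE]=tcp:3306 -ip_address_types=PRIVATE
mysql -h 127.0.0.1 -P 3306 -u root -p
The proxy listens on a local port and forwards the connection over the instance's private IP, so the mysql client (or Workbench) simply points at 127.0.0.1.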

Related

GCP Cloud SQL for MySQL general log generates multiple sub logs

I have a MySQL instance set up on Google Cloud with the following flags:
general_log: on
log_output: FILE
On the client side, I'm connecting via Cloud SQL Proxy authentication with DBeaver. The issue is that when I execute queries containing newlines in DBeaver, the logs shown on the Logs Explorer page are split into multiple sub-logs, each containing one line of that query. Is there some way I can concatenate these logs by reconfiguring the SQL instance's flags, or by using a different GCP plugin for audit logging other than general_log? I need to resolve this issue not on the client side (I'm aware that I can simply reformat the text editor in DBeaver to eliminate newline characters).
I'm aware of the new auditing plugin cloudsql_mysql_audit, but when I install it on my SQL instance I can't see any logs at all.
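For reference, the two flags mentioned above can be set with gcloud (a sketch; the instance name is a placeholder):
gcloud sql instances patch [CLOUD_SQL_INSTANCE] --database-flags=general_log=on,log_output=FILE
Note that --database-flags replaces the whole set of flags on the instance, so include every flag you need in a single call.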

Getting an error while using GCP Data Migration Service

I am trying to move my database from a managed DigitalOcean MySQL database to GCP Cloud SQL, and I thought I'd give the Database Migration Service a try.
Note that I have already tried the one-time MySQL dump method and it works fine. I just wanted to try out the continuous method to minimize downtime.
Before even creating and running the job, I try to "Test the job", but I get the following error:
zoho-tracker is the project name and destination-mysql-8 is the destination profile name. This is something else that confuses me: the button says "Go To Source" while the error string shows the destination profile name.
I have tried reading the docs as much as I could and I have checked the prerequisites too. Here are some points of information:
Source MySQL version is 8.0.20 and the destination value I am setting is 8.
The GTID Mode value that I checked using SHOW VARIABLES LIKE 'gtid_mode' is found to be ON.
The server_id value is 2 (non-zero).
All tables in the relevant DB are InnoDB.
The user on the source has the following privileges: SELECT, EXECUTE, RELOAD, SHOW VIEW, REPLICATION CLIENT, REPLICATION SLAVE.
The user/pass/host combo has been verified many times and is correct.
The user was created as 'username'@'%' and not 'username'@'localhost'.
The user was created with the mysql_native_password plugin (although I have tried different users that use the caching_sha2_password plugin too).
The connectivity method is IP allowlist and all connections while testing are allowed, so I don't think I need to add the destination IP to the allowlist.
The version_comment variable in the source has a value of 'Source Distribution', not 'MariaDB'.
Any pointers would be appreciated.
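For reference, the checks listed above roughly correspond to the following statements run against the source server (a sketch; the host and user names are placeholders):
mysql -h [SOURCE_HOST] -u username -p -e "SHOW VARIABLES LIKE 'gtid_mode'; SHOW VARIABLES LIKE 'server_id'; SHOW GRANTS FOR 'username'@'%';"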

Testing Script - Find open MySQL Ports and check Database

Following problem:
I want to check all open MySQL ports in a network and get a list of them.
After that I want to check whether I can get access to the MySQL databases behind the open ports.
It would just be a security check script to prevent other people from getting access to the databases.
Bash/Perl/PowerShell... maybe someone can give me a hint?
You can use Nmap for all port scanning tasks.
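For the first part (finding hosts with the MySQL port open), a minimal sketch could look like this, assuming the default port 3306 and an example network range:
nmap -p 3306 --open -oG - 192.168.1.0/24
The --open option limits the report to hosts where the port is actually open, and -oG - prints greppable output that is easy to turn into a plain list of IPs.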
EDIT:
Let's assume an example: mysql-vuln-cve2012-2122 (this script tries to access the MySQL server through open ports by bypassing authentication and, if possible, also dumps the MySQL usernames and password hashes).
Prerequisite: you need the 'vulns' library to be installed separately. Please read the documentation to learn how to install it and for other details, since it would be too tedious to explain here.
The script accepts the following arguments:
mysql-vuln-cve2012-2122.pass - MySQL password. Default: nmapFTW.
mysql-vuln-cve2012-2122.user - MySQL username. Default: root.
mysql-vuln-cve2012-2122.iterations - Connection retries. Default: 1500.
mysql-vuln-cve2012-2122.socket_timeout - Socket timeout. Default: 5s.
Please leave the password blank to check for non-password vulnerabilities.
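If you want to override any of these, they are passed with --script-args, for example (a sketch with placeholder values):
nmap -p3306 --script mysql-vuln-cve2012-2122 --script-args "mysql-vuln-cve2012-2122.user=root,mysql-vuln-cve2012-2122.socket_timeout=10s" <target>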
Command to run:
nmap -p3306 --script mysql-vuln-cve2012-2122 <target>
where <target> is your MySQL instance's host or IP.
This will give an output, something like this:
PORT STATE SERVICE REASON
3306/tcp open mysql syn-ack
mysql-vuln-cve2012-2122:
VULNERABLE:
Authentication bypass in MySQL servers.
State: VULNERABLE
IDs: CVE:CVE-2012-2122
Description:
When a user connects to MariaDB/MySQL, a token (SHA over a password and a random scramble string) is calculated and compared with the expected value. Because of incorrect casting, it might've happened that the token and the expected value were considered equal, even if the memcmp() returned a non-zero value. In this case MySQL/MariaDB would think that the password is correct, even while it is not. Because the protocol uses random strings, the probability of hitting this bug is about 1/256.
Which means, if one knows a user name to connect (and "root" almost always exists), she can connect using *any* password by repeating connection attempts. ~300 attempts takes only a fraction of a second, so basically account password protection is as good as nonexistent.
Disclosure date: 2012-06-9
Extra information:
Server granted access at iteration #204
root:*9CFBBC772F3F6C106020035386DA5BBBF1249A11
debian-sys-maint:*BDA9386EE35F7F326239844C185B01E3912749BF
phpmyadmin:*9CFBBC772F3F6C106020035386DA5BBBF1249A11
For more detailed info, please refer to the link above.
Nmap will not only help you get the list of port-related vulnerabilities. It can also be used to search for other vulnerabilities like MySQL injection, DDoS, and brute-force weaknesses, and a lot more, though you need to download separate script libraries for those.

Jawsdb on heroku, new database post migration, (Mysql2::Error: INSERT command denied to user..?)

Deployed a new version of our app on Heroku and migrated the database over from the previous free JawsDB instance. However, now every time a user signs up it gives:
(Mysql2::Error: INSERT command denied to user <username for instance
What have I missed?
Migrated using a dump and re-import using the mysql command line. Eyeballed the exported data and it seems to be there (user emails etc.).
All config vars look OK (DATABASE_URL is mysql2...).
I can log in to the database via the URL.
I have not had to grant access or anything like that before. Has anyone come across this?
thanks
Ben
My guess is they disabled your INSERT grant because you have reached the maximum storage capacity for your plan.
To validate this is a permissions problem, log into a MySQL prompt with the user the app is running as, and enter this query:
SHOW GRANTS;
It will probably list many grants, but no INSERT.
See this link. As explained there, the JawsDB free plan does not give you permission to add a new database. You are provided with one schema with a random name, and you have to work with that only.
Check your migration
e.g. Make sure the database name matches.
I got the same error as the OP when trying to migrate my data. This was a fresh account with only a ~50 KB database, nowhere close to the free plan's 5 MB limit.
In my SQL export, my local database name was being used; however, the remote MySQL (i.e. JawsDB) service auto-generates a database name, which will obviously not be the same. I simply used find-and-replace to change the database name to match the remote one, and everything works.
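One way to do that rename from the command line (a sketch using GNU sed; the database names and the dump file name are placeholders):
sed -i 's/`local_db_name`/`jawsdb_generated_name`/g' dump.sql
After the rename, the dump imports through the mysql command line exactly as before.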

What is the root error behind "Failed to establish a database connection. Check connection string, username and password."

Looking through Google and Stack Overflow, I found a number of questions asking about "Failed to establish a database connection. Check connection string, username and password." However, I cannot find anyone who has worked out what the underlying error is.
I am trying to write my first Google Apps Script with a database connection; I have a MySQL and an Oracle JDBC getConnection, both of which throw this error. I have checked, double-checked, and triple-checked the connection information to no avail. I know the databases are accessible (I can get in through other clients from several different machines, like PHP on a Linux box and SQL Developer on various Windows PCs at home and work). How do I determine what the real error is? The error as presented to me is way too generic and abstract.
Environment:
Using a script in a Google Spreadsheet (thus inheriting whatever environment is established by Google). I am attempting to use the Apps Script Jdbc service and have no further knowledge of the environment variables.
Using the following syntax:
var url = "jdbc:mysql://mysql.cb-pta.com:3306/u4lottery";
var conn = Jdbc.getConnection(url, user, password);
Again, user and password have been verified.
There is a known bug which causes problems with JDBC connections that use hostnames. Try using an IP address instead.
Bizarre, but true.. I lost almost 2 days with this bug....
Here is the link to the bug report....
Just for sake of others finding this older thread. I also had trouble using Apps Script to connect to a Google CloudSQL instance, and had to change from:
Jdbc.getCloudSqlConnection('jdbc:google:rdbms://<IP>/<DB_NAME>', '<USER>', 'PASSWORD');
to standard JDBC MySQL (which worked):
Jdbc.getConnection('jdbc:mysql:// ...same as above...
Also, beware: DO NOT include a '/' after the DB_NAME; this will also cause a failure with the user/password error message!
One more thing to check is that you have proper firewall settings to allow access from the source IP you are coming from (in the case of Apps Script, this is likely Google's servers, not your client browser's IP). You may have to open it up to all (0.0.0.0/0).
It might have something to do with your database being behind a firewall. According to Google documentation (in this case, Google Data Studio, but generally applicable): "If your database is behind a firewall, you will need to open access to all of the following IP addresses. These are used by Data Studio to connect to and query your MySQL database."
IP addresses here among other places: https://support.google.com/datastudio/answer/7088031