Audit log not generating DML statement logs for Cloud SQL - MySQL

I am trying to display audit logs for Cloud SQL in the Stackdriver console. I have already enabled audit logging for Cloud SQL in IAM.
I connect to MySQL or PostgreSQL databases in Cloud SQL, and when I connect, audit logs like the following are displayed in the console:
request: {
  @type: "type.googleapis.com/google.cloud.sql.v1beta4.SqlInstancesGetRequest"
  instance: "testpostgres"
}
But after this, if I perform any operation in that database, such as SELECT, INSERT, or DELETE, no Data Access audit log is generated for either MySQL or PostgreSQL. On the MySQL instance I have set the following database flag:
audit_log = ON (this flag is in beta)
For MySQL, if I also add the **general_log** flag I do get those DML statements, but they appear under a different log, **cloudsql.googleapis.com%2Fmysql-general.log**, and not under the audit logs.
Similarly, for PostgreSQL these statements appear under a different log:
**cloudsql.googleapis.com%2Fpostgres.log**
I am new to Cloud SQL, so I am not familiar with the logging implemented there. Why are no audit logs generated when DML is run against a database in the Cloud SQL instance, and do I need to set any other flag for this purpose?

DML statements are not written to the audit logs by default on Cloud SQL. To see DML logs in Logging, you need to use pgAudit. Note that pgAudit is only available on PostgreSQL instances.
Steps to enable pgaudit:
Enable pgaudit using the gcloud command:
gcloud sql instances patch [INSTANCE_NAME] --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=all
Create the pgaudit extension in your postgres database:
CREATE EXTENSION pgaudit;
Run a simple select statement on your postgres database (see the end-to-end sketch below).
Query the DML statement logs in Logging:
Open Logging -> Logs Explorer
In the query builder apply this filter:
resource.type="cloudsql_database"
logName="projects/<your-project-name>/logs/cloudaudit.googleapis.com%2Fdata_access"
protoPayload.request.@type="type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry"
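Putting those steps together, a minimal end-to-end sketch from a shell (the instance name testpostgres comes from the question; the entries table is from the quickstart and is illustrative):
# 1. enable the pgAudit flags on the instance
gcloud sql instances patch testpostgres \
    --database-flags=cloudsql.enable_pgaudit=on,pgaudit.log=all
# 2. in a psql session (e.g. opened with `gcloud sql connect testpostgres`):
#      CREATE EXTENSION pgaudit;
#      SELECT * FROM entries LIMIT 10;   -- any statement will do
# 3. read the resulting data_access entries back from the command line
gcloud logging read 'resource.type="cloudsql_database"
    protoPayload.request.@type="type.googleapis.com/google.cloud.sql.audit.v1.PgAuditEntry"' \
    --limit=5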
I used the quickstart for PostgreSQL for testing: after querying the postgres database and applying the filter above in Logging, the PgAudit entries for the DML statements showed up.

Related

GCP Cloud SQL for MySQL general log generates multiple sub logs

I have a MySQL instance set up on Google Cloud with the following flags:
general_log: on
log_output: FILE
On the client side, I'm connecting via Cloud SQL Proxy authentication with DBeaver. The issue is that when I execute queries containing newlines in DBeaver, the logs shown on the Logs Explorer page are split into multiple sub-logs, each containing one line of the query. Is there a way I can concatenate these logs by reconfiguring the SQL instance's flags, or by using a different GCP plugin for audit logging other than general_log? I need to resolve this not on the client side (I'm aware that I can simply reformat the text editor in DBeaver to eliminate newline characters).
I'm aware of the new auditing plugin cloudsql_mysql_audit, but when I install it on my SQL instance I can't see any logs at all.

How to activate named time zones

I want to automate the installation and configuration of a MySQL server using the Azure CLI.
The installation works well using azure mysql server create; however, the configuration using azure mysql server configuration set -n time_zone --value Europe/Paris fails with the following error:
Deployment failed. Correlation ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. The value 'Europe/Paris' for configuration 'time_zone' is not valid. The allowed values are '[+|-][0]{0,1}[0-9]:[0-5][0-9]|[+|-][1][0-2]:[0-5][0-9]|SYSTEM'.
As I read in the MySQL docs, I could enable named time zones by executing the SQL statement SET GLOBAL time_zone = timezone;, but unfortunately my user would need the SUPER privilege for this to succeed, and that is impossible in Azure.
The other approach would be to run mysql_tzinfo_to_sql, but this is not available through the Azure CLI.
Is there any other way to activate named time zones?
From the Azure DB for MySQL documentation:
Populating the time zone tables
The time zone tables on your server can be populated by calling the mysql.az_load_timezone stored procedure from a tool like the MySQL command line or MySQL Workbench.
CALL mysql.az_load_timezone();
Also, in the doc you linked to in your question:
Upon initial deployment, an Azure for MySQL server includes system tables for time zone information, but these tables are not populated. The time zone tables can be populated by calling the mysql.az_load_timezone stored procedure from a tool like the MySQL command line or MySQL Workbench.
According to the error message, the format should be one of these three:
[+|-][0]{0,1}[0-9]:[0-5][0-9]
e.g. -04:30
or
[+|-][1][0-2]:[0-5][0-9]
e.g. -12:00
or
SYSTEM
So, have you tried doing it with quotes?
azure mysql server configuration set -n time_zone --value "Europe/Paris"
Call this stored procedure from your session on the server:
CALL mysql.az_load_timezone();
If you are running the mysql.az_load_timezone command from MySQL Workbench, you may need to turn off safe update mode first using SET SQL_SAFE_UPDATES=0 (or via Preferences -> SQL Editor -> Safe Updates), and then reconnect to your MySQL server.
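Putting it together, a minimal sketch of the whole flow (the server and login names are placeholders; the configuration command is the one from the question):
# 1. populate the time zone tables; needs only a regular connection, no SUPER
mysql -h myserver.mysql.database.azure.com -u mylogin@myserver -p \
      -e "CALL mysql.az_load_timezone();"
# 2. named zones can now be resolved, so the parameter change should be accepted
azure mysql server configuration set -n time_zone --value "Europe/Paris"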

Google Cloud SQL - Catch bad logins

I have an existing MySQL (version 5.7) instance hosted (managed) by Google Cloud SQL. I want to get a notification when someone tries to connect to my database with a bad username/password.
My idea was to look for it in the Google Stackdriver logs, but it's not there.
Is there an option to collect this information?
UPDATE 1:
I tried to connect to the instance with gcloud but unfortunately it's not working.
$ gcloud sql connect mydb
Whitelisting your IP for incoming connection for 5 minutes...done.
ERROR: (gcloud.sql.connect) It seems your client does not have ipv6 connectivity and the database instance does not have an ipv4 address. Please request an ipv4 address for this database instance.
It's because the database is accessible only inside the internal network. I searched for flags like --internal-ip but didn't find one.
However, I guessed it wouldn't make any difference if I tried to access the database from my DB editor (Workbench) instead. So I did, searching for the query that @Christopher advised - but it's not there.
What am I missing?
UPDATE 2:
Screenshot of my Stackdriver:
Even if I remove the (resource.labels.database_id="***") condition, the result is the same.
Is there an option to collect this information?
One of the best options to collect information about who is trying to connect to your Google Cloud SQL instance with wrong credentials is Stackdriver Logging.
Before beginning
To reproduce these steps, I connected to the Cloud SQL instance using the gcloud command:
gcloud sql connect [CLOUD_SQL_INSTANCE]
I am not entirely sure whether anything changes if you connect using the mysql command line instead, but if it does, you should just look for the new log message and update the last boolean expression accordingly.
How to collect this information from Stackdriver Logging
Go to the Stackdriver → Logging section.
To get the information we are looking for, we will use advanced log queries. Advanced log queries are expressions that can specify a set of log entries from any number of logs. They can be used in the Logs Viewer, the Logging API, or the gcloud command-line tool, and are a powerful tool for getting information out of logs.
Here you will find how to get and enable advanced log queries in your logs.
Advanced log queries are just boolean expressions that specify a subset of all the log entries in your project. To find out who has tried to enter your Cloud SQL instance running MySQL with wrong credentials, we will use the following query:
resource.type="cloudsql_database"
resource.labels.database_id="[PROJECT_ID]:[CLOUD_SQL_INSTANCE]"
textPayload:"Access denied for user"
Where [PROJECT_ID] corresponds to your project ID and [CLOUD_SQL_INSTANCE] corresponds to the name of the Cloud SQL instance you would like to supervise.
Note that the last boolean expression, on textPayload, uses the : operator.
As described here, the : operator matches any substring of the log entry field, so every log entry containing the specified string - in this case "Access denied for user" - will match.
If some user now enters wrong credentials, you should see a message like the following appear in your logs:
[TIMEFRAME][Note] Access denied for user 'USERNAME'@'[IP]' (using password: YES)
From here it is a matter of using one of the GCP products to send you a notification when a user enters wrong credentials.
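For example, a sketch (the metric name is illustrative): turn the filter above into a log-based metric, then attach a Stackdriver alerting policy to that metric:
# count 'Access denied' log entries in a user-defined metric
gcloud logging metrics create bad_login_attempts \
    --description="Cloud SQL: access denied for user" \
    --log-filter='resource.type="cloudsql_database"
      resource.labels.database_id="[PROJECT_ID]:[CLOUD_SQL_INSTANCE]"
      textPayload:"Access denied for user"'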
I hope it helps.
As said in the GCP documentation:
Cloud Shell doesn't currently support connecting to a Cloud SQL instance that has only a private IP address.
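If your instance has only a private IP, as in UPDATE 1, one workaround is to run the Cloud SQL proxy from a machine inside the same VPC; a sketch, assuming the v1 proxy binary:
# forward a local port to the private-IP instance (run from inside the VPC)
./cloud_sql_proxy -instances=[PROJECT_ID]:[REGION]:[CLOUD_SQL_INSTANCE]=tcp:3306 \
    -ip_address_types=PRIVATE &
mysql -u root -p -h 127.0.0.1 -P 3306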

How to set up/map a remote mysql db in local phpMyAdmin

I am working on a remote development server. I have the MySQL host name, database name, username, and password for that remote server. I want to set up/replicate/map that dev server's MySQL in my local phpMyAdmin, so that I can access the remote server's db locally (e.g. /mylocalip/remote-server-db).
That way I don't have to open an SSH connection and use MySQL in the terminal. How can I do this in phpmyadmin/config.inc.php?
Let me explain again through an example. Let's say the remote server db is accessible through 213.81.203.130/phpmyadmin. I want to access that db from my local IP through an alias by creating a mapping, i.e. 192.168.10.140/remote-db. Basically this can be done by adding some code to phpmyadmin/config.inc.php or config.db.php, but I am not sure how.
If you want to avoid using the terminal, why not try MySQL Workbench to connect to the database?
UPDATE
In light of all the views this question gets, I am adding a solution that more accurately matches it. Please see this link; I believe it will be helpful. It involves editing the phpMyAdmin config.inc.php file to add additional servers, as sketched below. This way you can keep your localhost connection and add any remote db connections. Simply select the server from the drop down on the phpMyAdmin login screen.
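A minimal sketch of that addition to config.inc.php (the host is the one from the question; the label and auth settings are illustrative):
// appended inside config.inc.php, after the existing localhost server block
$i++;                                             // next server slot
$cfg['Servers'][$i]['verbose']   = 'remote-db';   // label shown in the drop down
$cfg['Servers'][$i]['host']      = '213.81.203.130';
$cfg['Servers'][$i]['port']      = 3306;
$cfg['Servers'][$i]['auth_type'] = 'cookie';      // prompt for user/password at login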
There are 3 methods to set this up:
METHOD #1 : MySQL Replication
Set up MySQL replication where the slave has this option:
replicate_do_table=mydb.mytable
Then any DML (INSERT, UPDATE, DELETE) or DDL (ALTER TABLE) you execute will go immediately to serverB. This makes Method #1 the fastest and most granular approach.
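For reference, that option goes in the slave's my.cnf; a minimal sketch with illustrative values:
# my.cnf on serverB (the slave)
[mysqld]
server-id          = 2
replicate_do_table = mydb.mytable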
METHOD #2 : Copying the table to the other server
Rather than rehash, here is an earlier post RolandoMySQLDBA wrote on May 31, 2011 for this method: How do you copy a table from MySqlServer_A to MySqlServer_B?
METHOD #3 : FEDERATED Table (MyISAM Only)
Suppose mytable on serverA looks like this:
CREATE TABLE mydb.mytable ( ... ) ENGINE=MyISAM;
You can create a mapping of the target table from serverB by running this on serverA:
CREATE TABLE mydb.mytable_remote LIKE mytable;
ALTER TABLE mydb.mytable_remote
    ENGINE=FEDERATED
    CONNECTION='mysql://username:password@serverB/mydb/mytable';
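Once the federated table exists, any statement you run against it on serverA is transparently executed against mydb.mytable on serverB, for example:
SELECT COUNT(*) FROM mydb.mytable_remote;  -- reads serverB's data over the wire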

MySQL 5.1 / phpMyAdmin - logging CREATE/ALTER statements

Is it possible to log CREATE / ALTER statements issued on a MySQL server through phpMyAdmin? I heard that it could be done with a trigger, but I can't seem to find suitable code anywhere. I would like to log these statements to a table, preferably with the timestamp of when they were issued. Can someone provide me with a sample trigger that would enable me to accomplish this?
I would like to log these statements so I can easily synchronize the changes with another MySQL server.
There is a patch for phpMyAdmin which provides configurable logging with only some simple code modifications.
We did this at my work, and then I tweaked it further to log into folders by day, log IP addresses, and a couple of other things, and it works great.
Thanks @Unreason for the link; I couldn't recall where I found it.
Here is a script that does what you want with mysql-proxy (check the link in the official docs on how to install the proxy).
To actually log the queries you can use something as simple as:
function string.starts(String, Start)
    -- true if String begins with Start
    return string.sub(String, 1, string.len(Start)) == Start
end

function read_query(packet)
    -- called by mysql-proxy for every client packet; COM_QUERY packets
    -- carry the SQL text starting at the second byte
    if string.byte(packet) == proxy.COM_QUERY then
        local query = string.lower(string.sub(packet, 2))
        if string.starts(query, "alter") or string.starts(query, "create") then
            -- give your logfile a name, an absolute path worked for me
            local log_file = '/var/log/mysql-proxy-ddl.log'
            local fh = io.open(log_file, "a+")
            fh:write(string.format("%s %6d -- %s \n",
                     os.date('%Y-%m-%d %H:%M:%S'),
                     proxy.connection.server["thread_id"],
                     query))
            fh:flush()
        end
    end
end
The script was adapted from here; search for 'simple logging'.
It does not care about results: even if the query returned an error, it would be logged (there is a 'more customized logging' example there, which is a better candidate for production logging).
Also, you might take another approach if it is applicable in your case: define different users in your database and give DDL rights only to a certain user; then you could log everything for that user and not worry about the details (for example, the proxy recognizes the following server commands, of which it inspects only Query).
Installing the proxy is straightforward. When you test it, you can run it with:
mysql-proxy --proxy-lua-script=/path/to/script.lua
It runs on port 4040 by default, so test it with:
mysql -u user -p -h 127.0.0.1 -P 4040
(Make sure you don't bypass the proxy; for example, on my distro mysql -u user -p -h localhost -P 4040 completely ignored the port and connected over the socket, which left me puzzled for a few minutes.)
The answer to your question will fall into one of the log types listed in MySQL Server Logs.
If you just want to get the CREATE/ALTER statements, I would go with the general query log, but you will have to parse the file manually (a sketch follows). Be aware of the security issues this approach raises.
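A rough sketch of that manual parsing (assumes MySQL 5.1+ with log_output=FILE; the log path is illustrative):
# enable the general query log at runtime (needs SUPER)
mysql -u root -p -e "SET GLOBAL general_log_file='/var/log/mysql/general.log';
                     SET GLOBAL general_log='ON';"
# pull the CREATE/ALTER statements out of the file
grep -iE 'Query[[:space:]]+(CREATE|ALTER)[[:space:]]' /var/log/mysql/general.log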
In your scenario, replication seems to be overkill.
Triggers are not a valid option, since they only fire on INSERT, UPDATE, and DELETE, not on DDL statements like CREATE/ALTER.
Edit 1:
The query log would be the best choice, but as you mentioned, on busy servers it would cause a considerable efficiency penalty. The only additional alternative I know of is MySQL Proxy.
I think your best bet would be to look at using stored procedures and functions to make changes to your DB. That way you could handle the logging manually, as in the sketch below.
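A minimal sketch of that idea (all names are illustrative): route schema changes through one procedure that executes the statement and records it with a timestamp.
-- table receiving one row per executed DDL statement
CREATE TABLE ddl_log (
    id     INT AUTO_INCREMENT PRIMARY KEY,
    stmt   TEXT NOT NULL,
    issued TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE PROCEDURE run_ddl(IN ddl_text TEXT)
BEGIN
    -- run the DDL via a prepared statement ...
    SET @s = ddl_text;
    PREPARE stmt FROM @s;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    -- ... and only log it once it succeeded
    INSERT INTO ddl_log (stmt) VALUES (ddl_text);
END //
DELIMITER ;

-- usage:
CALL run_ddl('ALTER TABLE mytable ADD COLUMN note VARCHAR(64)');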