Auditing of MySQL Queries - mysql

I would like to enable auditing of MySQL Server on both Windows and Linux. I am collecting logs from the MySQL server for a log analyzer tool, so I first need to know how to enable auditing of all possible queries: errors, warnings, successes and informational events.
1) How do I enable auditing in MySQL (any version) on Windows and Linux?
2) How do I send the logs to syslog (Unix) and the EventLog (Windows)?
Can anyone share a step-by-step solution for the above questions?
I added the below lines to my.ini, but I could not restart the MySQL server while they were present. If I remove these lines and restart, the server restarts successfully.
log_output="FILE"
general_log=1
general_log_file="E:\Logs\my-sql-general-log.log"
slow-query-log=1
slow_query_log_file="my-sql-slow-log.log"
long_query_time=10
I tried with only the general log and faced the same issue. I also tried both general_log=ON and general_log=1; there was no change.
I tried the above changes on Windows 10 with MySQL Server 5.0.
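For reference, here is a my.ini sketch that uses only option names MySQL 5.0 understands (general_log, slow_query_log and log_output were only added in 5.1, which is likely why the server refuses to start with them; the paths are taken from the question):
[mysqld]
log="E:/Logs/my-sql-general-log.log"
log-slow-queries="E:/Logs/my-sql-slow-log.log"
long_query_time=10
log-error="E:/Logs/my-sql-error-log.log"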
EDIT 1 :
I added the below line in my.ini
log="C:/Program Files/MySQL/MySQL Server 5.0/logs/general-log.log"
After adding the above line, the query logs were written to the file. Errors are still not written, and they are not forwarded to the Event Viewer.
EDIT 2 :
1) I added the below lines to my.ini in my Windows machine
log="C:/Program Files/MySQL/MySQL Server 5.0/logs/my-sql-general-log.log"
log_bin="C:/Program Files/MySQL/MySQL Server 5.0/logs/my-sql-bin-log.log"
log_error="C:/Program Files/MySQL/MySQL Server 5.0/logs/my-sql-error-log.log"
log_slow_queries
long_query_time = 1
In this case, all the queries are written to the general log, but errors are not written to the error log; only the start/stop messages of the MySQL service appear there.
(i) How do I write errors like 'No database selected' or 'Table does not exist' to the error log file?
(ii) If I add log-output=TABLE in my.ini, the MySQL service won't restart. What causes this issue? It works fine on Linux.
(iii) How do I send these logs to the Event Viewer?
2) I added the below lines to my.cnf in my Linux machine
[mysqld_safe]
syslog
[mysqld]
general_log_file = /var/log/mysql/mysql.log
general_log = 1
log_error = /var/logs/mysql/error.log
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2
log-queries-not-using-indexes
log_bin = /var/log/mysql/mysql-bin.log
After adding these lines, the general logs are sent to the syslog server, but the error logs are not.
(i) How do I write errors like 'No database selected' or 'Table does not exist' to the error log file and send them to the syslog server?
(ii) I also tried log-output=TABLE, but when the logs are written to a table they are not sent to the syslog server. How do I send the logs to the syslog server when they are written to a table?
The general log is written like below:
181017 11:46:41 1 Connect root@localhost on
1 Query set autocommit=1
1 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
1 Query SHOW SESSION VARIABLES LIKE 'lower_case_table_names'
1 Query SELECT current_user()
1 Quit
Is there a way to make the logs be written like below?
Time User ThreadID Command Argument
181017 11:46:41 root@localhost 1 Connect root@localhost on
181017 11:46:41 root@localhost 1 Query set autocommit=1
181017 11:46:41 root@localhost 1 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
181017 11:46:41 root@localhost 1 Query SHOW SESSION VARIABLES LIKE 'lower_case_table_names'
181017 11:46:41 root@localhost 1 Query SELECT current_user()
181017 11:46:41 root@localhost 1 Quit
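(For what it's worth, the TABLE destination already stores these columns; on a server that supports log_output=TABLE, a query like the following sketch returns one row per statement with its own timestamp and user:)
SELECT event_time, user_host, thread_id, command_type, argument
FROM mysql.general_log
ORDER BY event_time;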
EDIT 3 :
I was able to send all the logs to syslog by modifying the rsyslog.conf file, but I'm still unable to forward the general and slow query logs to the Windows EventLog.
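The exact rsyslog change is not shown above; as a reference, here is a minimal sketch using rsyslog's legacy imfile syntax to pick up the MySQL error log and forward everything to a remote collector (the file path, tag, state-file name and server address are assumptions):
$ModLoad imfile
$InputFileName /var/log/mysql/error.log
$InputFileTag mysql-error:
$InputFileStateFile stat-mysql-error
$InputFileSeverity error
$InputFileFacility daemon
$InputRunFileMonitor
*.* @@syslog.example.com:514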

Related

LOAD_FILE returns only null

I want to insert an image into the database with this query:
SELECT LOAD_FILE('/Users/juliagaskevich/cool-background.svg');
I checked that:
File size < max_allowed_packet.
The file has read, write and execute permissions for everyone.
The user that I'm using to execute the query has the FILE privilege.
mysql> show variables like "secure_file_priv" returns NULL.
System configuration:
macOS Monterey, version 12.5, chip Apple M1
MySQL Ver 8.0.29 for macos12 on arm64 (MySQL Community Server - GPL)
MySQL Config:
[mysqld]
user = mysql_deamon
slow_query_log_file = my-slow-query.log
slow-query-log
log-queries-not-using-indexes
general_log_file = my-GENERAL.log
max_allowed_packet = 1073741824
Also, I tried to fix my problem with the ideas suggested by others, but none of them worked:
1. vi my.cnf and add secure-file-priv=/mysql/dataload (change /mysql/dataload to your own directory)
2. restart the mysql service
I changed my.cnf both with and without quotes in the secure-file-priv path.
The statement SELECT LOAD_FILE(...) delivers one string. To INSERT, use the statement LOAD DATA INFILE ...
1290 ERROR with null secure_file_priv result
SOLVING
If you are using a Mac, the MySQL config is stored in /etc/my.cnf, so put this line there: secure_file_priv = ""
Run this to apply the config changes: mysqld --defaults-file=/etc/my.cnf --validate-config --log_error_verbosity=2 and, if you get any warnings or errors, fix them, because that can solve your problem.
If you are using a user other than root, that user has to have the FILE privilege: mysql> GRANT privilege ON privilege_level TO account_name;, for example mysql> GRANT FILE ON *.* TO pizzaadmin;
The most important thing: to apply the changes, you have to restart mysqld.
There are a few steps:
stop it (you can do this with mysqladmin shutdown)
start it again (you can do this by just executing mysqld in a terminal)
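A quick way to verify the fix once mysqld is back up (a sketch: the images table is hypothetical and the file path is the one from the question):
SHOW VARIABLES LIKE 'secure_file_priv';   -- should now be empty rather than NULL
CREATE TABLE IF NOT EXISTS images (id INT AUTO_INCREMENT PRIMARY KEY, data LONGBLOB);
INSERT INTO images (data) VALUES (LOAD_FILE('/Users/juliagaskevich/cool-background.svg'));
SELECT id, LENGTH(data) FROM images;      -- a non-zero length means the file was actually read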

MySQL slave gets ruined after a restart

Please help!
I set up a master-slave replication based on the GTID mechanism.
The replication works OK, until a mysqld restart happens on slave. Then the mess begins...
After such a restart, I can not restore the replication.
When issuing a "START SLAVE" command I get the following error message:
ERROR 1794 (HY000) at line 1: Slave is not configured or failed to
initialize properly. You must at least set --server-id to enable
either a master or a slave. Additional error messages can be found in
the MySQL error log.
Needless to say I did set server-id in my.cnf (see below).
In /var/log/mysqld.log file, I found the following error message:
[ERROR] Error creating master info: Multiple replication metadata
repository instances found with data in them. Unable to decide which
is the correct one to choose.
[ERROR] Failed to create or recover replication info repository.
I cannot understand what I have done wrong.
The communication between master and slave is SSL-tunneled through stunnel, but I don't think that is relevant, since everything works fine until a restart.
The only way I have found to re-establish replication (after a mysql restart) is to manually delete the MySQL data files and then load the dump file imported from the master again (I use mysqldump). This is, of course, unreasonable.
Following are the my.cnf files:
On slave:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
server-id=2
log-bin=mysql-bin
binlog_format=ROW
relay_log=relay-log
skip-slave-start
enforce-gtid-consistency
gtid-mode=ON
log-slave-updates
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
On master:
[mysqld]
server-id=1
log-bin=mysql-bin
binlog_format=ROW
gtid-mode=on
enforce-gtid-consistency
log-slave-updates
innodb_buffer_pool_size = 1G
query_cache_size = 32M
Slave machine: CentOS 6.6, MySQL 5.6.24.
Master machine: RHEL 6.6, MySQL 5.6.10.
Any help would be greatly appreciated!
Thanks
Nadav Blum
on master -
mysql> reset master;
[this command will clear the master's binary logs and start new ones, so save them first if you need them.]
when you start the slave mysqld, run the following commands:
mysql> stop slave;
mysql> reset slave;
mysql> change master to master_host='192.168.10.116', master_user='root', master_password='root', master_auto_position=1;
mysql> start slave;
mysql> show slave status \G
Now, if all goes well, you can restart the slave (if it has committed all the transactions then there is no problem; otherwise it will start to execute the transactions from your master's binary log. You can check your relay log file).
Well, mystery solved.
Remember how I wrote that the issue has nothing to do with my usage of stunnel as the means for tunneling communication between master and slave?
Well, I was wrong.
The thing is, I used localhost port 3307 as the endpoint for the slave's communication with the master (stunnel listened on this port and forwarded the data to the master server's IP). So the "change master" was done via:
change master to master_host="localhost", master_port=3307, master_user="XXX", master_password="XXX", MASTER_AUTO_POSITION = 1;
That "localhost" thing caused the mess. I changed it to "127.0.0.1", and now restarts cause no harm!
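For clarity, the working statement is the same one with "localhost" swapped for the loopback address (credentials redacted as in the original):
change master to master_host="127.0.0.1", master_port=3307, master_user="XXX", master_password="XXX", MASTER_AUTO_POSITION = 1;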
Thanks Hitech and Jaydee for your help!
Ran into the same problem yesterday.
Oracle support doc helped.
For people who don't have Oracle support.
CAUSE
The cause is that both TABLE and FILE replication repository metadata exist at the same time, but only one form should.
SOLUTION
Before setting up replication, remove the files specified by the my.cnf variables relay_log_info_file and master_info_file.
By default their names map to relay-log.info and master.info, and they are located in the datadir. (I had to remove the master.info file.)
And remove any residual configuration by executing:
STOP SLAVE;
SET SQL_LOG_BIN=0;
DELETE FROM mysql.slave_master_info ;
DELETE FROM mysql.slave_relay_log_info ;
SET SQL_LOG_BIN=1;
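As a sanity check after the cleanup (my own sketch, not part of the original answer), you can confirm that both TABLE repositories are now empty and see which repository type the server is configured to use; the FILE repositories (master.info / relay-log.info) should no longer be in the datadir:
SELECT COUNT(*) FROM mysql.slave_master_info;
SELECT COUNT(*) FROM mysql.slave_relay_log_info;
SHOW VARIABLES LIKE '%_repository';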

MySQL binary logs will not start

In-Short: My binary logs aren't starting even though log-bin is set and specified. I'm not sure how to fix it.
I have a MariaDB instance running as a service on Windows that I am attempting to replicate to a MariaDB instance on an Ubuntu machine. I am using MySQL Workbench 6.0 as much as I can to manage everything, and following the instructions from Oracle here for setting up master-slave replication: http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
I have made it to the fourth chapter, where I allegedly have the master and slave both configured, and I am about to read-lock the master tables for an initial data dump to the slave before I start up replication. So I flushed the tables with read lock and checked the master status:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
That last line didn't return any binary log information. Checking further, I ran:
SHOW BINARY LOGS;
and an error message confirmed that:
Error Code: 1381. You are not using binary logging
Master Config is like this:
[mysqld]
datadir = "C:/mysql/data"
port=3306
sql_mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
default_storage_engine=innodb
innodb_buffer_pool_size=1535M
innodb_log_file_size=50M
feedback=ON
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1
log-bin-index = "C:/mysql/logs/log-bin.index"
log-bin=mysql-bin
server-id=1
innodb_flush_log_at_trx_commit=1
[client]
port=3306
How do I make sure the binary logs are rolling so I can continue with this?
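One quick sanity check (my own sketch, not from the original post) is to ask the running server whether it actually picked up the log-bin option, which also tells you whether it read the my.ini you edited:
SHOW VARIABLES LIKE 'log_bin';           -- OFF means binary logging was never enabled at startup
SHOW GLOBAL VARIABLES LIKE 'log_bin%';   -- any related binlog settings that were picked up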

MySQL General Log Not Starting on 'Set Global'

I am trying to set up my MySQL general log so that it can be switched on and off by using
SET GLOBAL general_log = 'ON'
SET GLOBAL general_log = 'OFF'
I would like it off by default (i.e. on server startup) but then have the ability to toggle it as above, so that I don't have to keep restarting the server. When I attempt to switch general logging ON as above, MySQL generates the following error:
Table 'mysql.general_log' doesn't exist
This is true - I have purposely not created this table, as I would like logging to occur to a file - NOT to tables. This suggests to me that MySQL is trying to log the general queries to a table even though the relevant global variables are set as below:
log_output = FILE
general_log = OFF
general_log_file = /var/log/mysql-general.log
The relevant part of the my.cnf is as follows:
[mysqld]
general-log = OFF
general-log-file = /var/log/mysql-general.log
I am using MySQL version 5.1.58 on a Linux server.
Thanks in advance,
Andy
Since the table mysql.general_log does not exist, I assume you upgraded from a previous version of MySQL and need to run mysql_upgrade to create it.
Backup all of your databases using mysqldump and do a filesystem backup of /var/lib/mysql, then execute the following commands:
mysql_upgrade -p --force
followed by
service mysql restart
or
/etc/init.d/mysql restart
If the general_log table still does not exist after taking these steps, follow the steps in this post: MySql - I dropped general_log table to manually create it.
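If you prefer to recreate the table by hand, this is a sketch based on the 5.1-era definition in mysql_system_tables.sql (check the copy shipped with your version for the authoritative DDL):
CREATE TABLE IF NOT EXISTS mysql.general_log (
  event_time   TIMESTAMP NOT NULL,        -- first TIMESTAMP column, auto-set on insert in 5.1
  user_host    MEDIUMTEXT NOT NULL,
  thread_id    INTEGER NOT NULL,
  server_id    INTEGER UNSIGNED NOT NULL,
  command_type VARCHAR(64) NOT NULL,
  argument     MEDIUMTEXT NOT NULL
) ENGINE=CSV CHARACTER SET utf8 COMMENT='General log';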

MySQL Error 1153 - Got a packet bigger than 'max_allowed_packet' bytes

I'm importing a MySQL dump and getting the following error.
$ mysql foo < foo.sql
ERROR 1153 (08S01) at line 96: Got a packet bigger than 'max_allowed_packet' bytes
Apparently there are attachments in the database, which makes for very large inserts.
This is on my local machine, a Mac with MySQL 5 installed from the MySQL package.
Where do I change max_allowed_packet to be able to import the dump?
Is there anything else I should set?
Just running mysql --max_allowed_packet=32M … resulted in the same error.
You probably have to change it for both the client (which you are running to do the import) AND the daemon mysqld that is running and accepting the import.
For the client, you can specify it on the command line:
mysql --max_allowed_packet=100M -u root -p database < dump.sql
Also, change the my.cnf or my.ini file (usually found in /etc/mysql/) under the mysqld section and set:
max_allowed_packet=100M
or you could run these commands in a MySQL console connected to that same server:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
(Use a very large value for the packet size.)
As michaelpryor said, you have to change it for both the client and the mysqld daemon on the server.
His solution for the client command-line is good, but the ini files don't always do the trick, depending on configuration.
So, open a terminal, type mysql to get a mysql prompt, and issue these commands:
set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;
Keep the mysql prompt open, and run your command-line SQL execution in a second terminal.
This can be changed in your my.ini file (on Windows, located in \Program Files\MySQL\MySQL Server) under the server section, for example:
[mysqld]
max_allowed_packet = 10M
Re my.cnf on Mac OS X when using MySQL from the mysql.com dmg package distribution
By default, my.cnf is nowhere to be found.
You need to copy one of /usr/local/mysql/support-files/my*.cnf to /etc/my.cnf and restart mysqld. (Which you can do in the MySQL preference pane if you installed it.)
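For example (a sketch; my-medium.cnf is just one of the shipped templates, pick whichever fits your machine):
sudo cp /usr/local/mysql/support-files/my-medium.cnf /etc/my.cnf
sudo /usr/local/mysql/support-files/mysql.server restart   # or restart via the MySQL preference pane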
The fix is to increase the MySQL daemon's max_allowed_packet. You can do this on a running daemon by logging in as a user with the SUPER privilege and running the following commands.
# mysql -u admin -p
mysql> set global net_buffer_length=1000000;
Query OK, 0 rows affected (0.00 sec)
mysql> set global max_allowed_packet=1000000000;
Query OK, 0 rows affected (0.00 sec)
Then to import your dump:
gunzip < dump.sql.gz | mysql -u admin -p database
In /etc/my.cnf, try changing max_allowed_packet and net_buffer_length to
max_allowed_packet=100000000
net_buffer_length=1000000
If this does not work, then try changing them to
max_allowed_packet=100M
net_buffer_length=100K
On CentOS 6, in /etc/my.cnf, under the [mysqld] section, the correct syntax is:
[mysqld]
# added to avoid err "Got a packet bigger than 'max_allowed_packet' bytes"
#
net_buffer_length=1000000
max_allowed_packet=1000000000
#
I resolved my issue with this query
SET GLOBAL max_allowed_packet=1073741824;
and checked max_allowed_packet with this query
SHOW VARIABLES LIKE 'max_allowed_packet';
Use the max_allowed_packet variable by issuing a command like
mysql --max_allowed_packet=32M
-u root -p database < dump.sql
Slightly unrelated to your problem, so here's one for Google.
If you didn't mysqldump the SQL, it might be that your SQL is broken.
I just got this error by accidentally having an unclosed string literal in my code. Sloppy fingers happen.
That's a fantastic error message to get for a runaway string, thanks for that MySQL!
Error:
ERROR 1153 (08S01) at line 6772: Got a packet bigger than
'max_allowed_packet' bytes Operation failed with exitcode 1
QUERY:
SET GLOBAL max_allowed_packet=1073741824;
SHOW VARIABLES LIKE 'max_allowed_packet';
Value range for max_allowed_packet:
Default Value (MySQL >= 8.0.3): 67108864
Default Value (MySQL <= 8.0.2): 4194304
Minimum Value: 1024
Maximum Value: 1073741824
Sometimes setting
max_allowed_packet = 16M
in my.ini does not work.
In that case, try defining it in my.ini as follows:
set-variable = max_allowed_packet = 32M
or
set-variable = max_allowed_packet = 1000000000
Then restart the server:
/etc/init.d/mysql restart
It is a security risk to keep max_allowed_packet at a high value, as an attacker can push bigger packets and crash the system.
So, the optimum value of max_allowed_packet should be tuned and tested.
It is better to change it when required (using set global max_allowed_packet = xxx)
than to have it as part of my.ini or my.cnf.
I am working in a shared hosting environment and I have hosted a website based on Drupal. I cannot edit the my.ini or my.cnf file either.
So I deleted all the tables related to cache, which resolved the issue for me. I am still looking for a proper way to handle this problem.
Edit - Deleting the tables created problems for me, because Drupal expected these tables to exist. So instead I emptied the contents of these tables, which solved the problem.
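Roughly, "emptying the contents" looks like this (a sketch; these are the usual Drupal cache tables, and the exact set depends on your Drupal version and modules):
TRUNCATE TABLE cache;
TRUNCATE TABLE cache_filter;
TRUNCATE TABLE cache_menu;
TRUNCATE TABLE cache_page;
-- ...plus any other cache_* tables your installation has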
Set max_allowed_packet to the same value as (or more than) what it was when you dumped the data with mysqldump. If you can't do that, make the dump again with a smaller value.
That is, assuming you dumped it with mysqldump. If you used some other tool, you're on your own.
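A sketch of re-dumping with explicit limits (the database name foo is the one from the question; --net_buffer_length is what caps the length of the multi-row INSERT statements mysqldump generates):
mysqldump --max_allowed_packet=64M --net_buffer_length=1M foo > foo.sql
mysql --max_allowed_packet=64M foo < foo.sql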