MySQL debian-sys-maint quit and connect - mysql

Hi all, I have recently noticed that my mysql.log file is getting flooded with the messages below.
I am not sure why the debian-sys-maint user keeps making this connection and then quitting; it has all the right credentials, so to my understanding it is not a permissions issue.
Does anyone have any ideas? Thank you.
150903 12:12:17 192 Connect debian-sys-maint@localhost on
192 Quit
150903 12:12:18 193 Connect debian-sys-maint@localhost on
193 Quit
150903 12:12:19 194 Connect debian-sys-maint@localhost on
194 Quit

The log is correct.
A connection to MySQL will quit after connecting and executing a query.
You could keep the connection alive, but if you are running something that starts and terminates with a single HTTP request/response (like a PHP script), then it's normal that the connection to MySQL gets closed at the end of the script.
Seeing that log just means that the global general_log is active (i.e. SET GLOBAL general_log = on; was executed in the MySQL console).
If you don't want those logs, just set general_log to off, as shown below.
Otherwise, I would wonder more why there is no SQL query logged in between the Connect and the Quit.
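For reference, you can check and disable the general query log from the MySQL console like this (the runtime change takes effect immediately, but is lost on restart unless it is also set in the config file):
mysql> SHOW VARIABLES LIKE 'general_log%';
mysql> SET GLOBAL general_log = 'OFF';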

Related

Auditing of Mysql Queries

I would like to enable auditing of MySQL Server on both Windows and Linux. I am working on collecting logs from MySQL Server for a log analyzer tool. So, I first need to know how to enable auditing of all possible queries: error, warning, success, and information.
1) How to enable auditing in MySQL (any version) on Windows and Linux?
2) How to send the logs to syslog (Unix) and EventLog (Windows)?
Can anyone share a step-wise solution for the above questions?
I added the below lines to my.ini, but I could not restart the MySQL server with these lines in place. If I remove these lines and restart, the server restarts successfully.
log_output="FILE"
general_log=1
general_log_file="E:\Logs\my-sql-general-log.log"
slow-query-log=1
slow_query_log_file="my-sql-slow-log.log"
long_query_time=10
I tried with only the general log and faced the same issue. I also tried general_log=on and general_log=1; neither made a difference.
I tried the above change on Windows 10 with MySQL Server 5.0.
EDIT 1:
I added the below line in my.ini:
log="C:/Program Files/MySQL/MySQL Server 5.0/logs/general-log.log"
After adding the above line, the query logs were written to the file. The errors are still not written, and they are not forwarded to Event Viewer.
EDIT 2:
1) I added the below lines to my.ini on my Windows machine:
log="C:/Program Files/MySQL/MySQL Server 5.0/logs/my-sql-general-log.log"
log_bin="C:/Program Files/MySQL/MySQL Server 5.0/logs/my-sql-bin-log.log"
log_error="C:/Program Files/MySQL/MySQL Server 5.0/logs/my-sql-error-log.log"
log_slow_queries
long_query_time = 1
In this case, all the queries are written to the general log, but errors are not written to the error log; only MySQL service start/stop events appear there.
(i) How do I write errors like 'No database selected' or 'Table does not exist' to the error log file?
(ii) If I add log-output=TABLE in my.ini, the MySQL service won't restart. What causes this issue? It works fine on Linux.
(iii) How do I send these logs to Event Viewer?
2) I added the below lines to my.cnf on my Linux machine:
[mysqld_safe]
syslog
[mysqld]
general_log_file = /var/log/mysql/mysql.log
general_log = 1
log_error = /var/log/mysql/error.log
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2
log-queries-not-using-indexes
log_bin = /var/log/mysql/mysql-bin.log
After adding these lines, the general logs are sent to the syslog server, but not the error logs.
(i) How do I write errors like 'No database selected' or 'Table does not exist' to the error log file and send them to the syslog server?
(ii) I also tried log-output=TABLE, but when logs are written to a table, they are not sent to the syslog server. How do I send the logs to the syslog server if the logs are written to a table?
The general log is written like below:
181017 11:46:41 1 Connect root@localhost on
1 Query set autocommit=1
1 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
1 Query SHOW SESSION VARIABLES LIKE 'lower_case_table_names'
1 Query SELECT current_user()
1 Quit
Is there a way to make the logs be written like below?
Time User ThreadID Command Argument
181017 11:46:41 root@localhost 1 Connect root@localhost on
181017 11:46:41 root@localhost 1 Query set autocommit=1
181017 11:46:41 root@localhost 1 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
181017 11:46:41 root@localhost 1 Query SHOW SESSION VARIABLES LIKE 'lower_case_table_names'
181017 11:46:41 root@localhost 1 Query SELECT current_user()
181017 11:46:41 root@localhost 1 Quit
EDIT 3:
I was able to send all the logs to syslog by modifying the rsyslog.conf file, but I'm still unable to forward the general and slow query logs to the Windows EventLog.
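For anyone trying the same rsyslog route, a minimal sketch of the kind of rsyslog.conf addition I mean, assuming rsyslog's imfile module and the log path from the config above (the facility, tag, and target server are placeholders to adapt):
module(load="imfile")
input(type="imfile"
      File="/var/log/mysql/mysql.log"
      Tag="mysql-general:"
      Severity="info"
      Facility="local6")
# forward everything on that facility to the remote syslog server (UDP port 514)
local6.* @syslog.example.com:514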

How can I pass command-line arguments when creating a new Database connection in SQLPro?

When I start my MySQL client from the command-line, I do the following:
$ mysql -u root -p -h 127.0.0.1 --init-command="SET SESSION wait_timeout=300"
I set the session wait_timeout to 300 seconds for security purposes: if there is no database activity for 5 minutes, I want the connection to be killed so that it is not left open for long periods of time, which is a security risk.
However, I really prefer using the Mac desktop application SequelPro to access the database instead of the command-line shell. It's my bread-and-butter. I absolutely love it.
So how can I give SequelPro the same --init-command argument I gave on the command line above? Or is there any other way for me to achieve the security goal I'm trying for?
If you want to make this a global setting for everyone connecting from any tool, add it to the configuration file: my.cnf (if you're running MySQL on a Unix-based OS) or my.ini (if you're running MySQL on Windows).
This is from the MySQL documentation about wait_timeout:
The number of seconds the server waits for activity on a noninteractive connection before closing it.
On thread startup, the session wait_timeout value is initialized from the global wait_timeout value or from the global interactive_timeout value, depending on the type of client (as defined by the CLIENT_INTERACTIVE connect option to mysql_real_connect()). See also interactive_timeout.
So, set these global parameters in the [mysqld] section of your configuration file to keep your security in check:
[mysqld]
interactive_timeout=300
wait_timeout=300
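If you'd rather not restart the server, the same values can also be set at runtime from any MySQL console; note that they only affect connections opened after the change:
mysql> SET GLOBAL wait_timeout = 300;
mysql> SET GLOBAL interactive_timeout = 300;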

mysqldump error 2003 Can't connect to MySQL server ... (110)

I don't think this is a dup question, as I have read other posts about error 2003 and none of them resolves my situation.
I have a bash script that executes mysqldump on a nightly basis against many tables in an Amazon RDS database. It has worked without issue for months, but recently, I've started seeing errors. Example results for a recent week:
Day 1: success
Day 2: success
Day 3:
TBL citygrid_state: export entire table
TBL ci_sessions: export entire table
mysqldump: Got error: 2003: Can't connect to MySQL server on 'blah' (110) when trying to connect
... wrlog crit forced halt
Day 4: success
Day 5:
TBL sparefoot.consumer_lead_action_meta: export entire table
TBL sparefoot.consumer_lead_action_type: export entire table
mysqldump: Got error: 2003: Can't connect to MySQL server on 'blah' (110) when trying to connect
... wrlog crit forced halt
Since the script works completely some nights and the calls to mysqldump work dozens of times before an error occurs, my thought is that I have a timeout problem. But where? Might it be a MySQL setting? Or does the issue lie elsewhere?
Some of the MySQL timeout settings:
connect_timeout 10
interactive_timeout 14400
net_read_timeout 30
net_write_timeout 60
wait_timeout 28800
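Since the failures are intermittent, one pragmatic mitigation (not a root-cause fix) is to have the nightly script retry a failed mysqldump before forcing a halt. A sketch, where the host, credentials, and table names are placeholders:
#!/bin/bash
# Retry a single-table mysqldump up to 3 times, waiting 30s between attempts.
dump_table() {
    local table="$1"
    for attempt in 1 2 3; do
        if mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" mydb "$table" > "$table.sql"; then
            return 0
        fi
        echo "mysqldump of $table failed (attempt $attempt); retrying in 30s" >&2
        sleep 30
    done
    return 1
}

dump_table ci_sessions || { echo "forced halt" >&2; exit 1; }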

ERROR 2006 (HY000): MySQL server has gone away

I get this error when I try to source a large SQL file (a big INSERT query).
mysql> source file.sql
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 2
Current database: *** NONE ***
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 3
Current database: *** NONE ***
Nothing in the table is updated. I've tried deleting and recreating the table/database, as well as restarting MySQL. None of these things resolves the problem.
Here is my max_allowed_packet size:
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+
Here is the file size:
$ ls -s file.sql
79512 file.sql
When I try the other method...
$ ./mysql -u root -p my_db < file.sql
Enter password:
ERROR 2006 (HY000) at line 1: MySQL server has gone away
Adding this line to the my.cnf file solved my problem:
max_allowed_packet=64M
This is useful when the columns have large values, which causes the issue; you can find the explanation here.
On Windows this file is located at: C:\ProgramData\MySQL\MySQL Server 5.6
On Linux (Ubuntu): /etc/mysql
You can also increase max_allowed_packet at runtime:
SET GLOBAL max_allowed_packet=1073741824;
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
The global update and the my.cnf settings didn't work for me for some reason. Passing the max_allowed_packet value directly to the client worked here:
mysql -h <hostname> -u username -p --max_allowed_packet=1073741824 <databasename> < db.sql
In general the error:
Error: 2006 (CR_SERVER_GONE_ERROR) - MySQL server has gone away
means that the client couldn't send a question to the server.
mysql import
In your specific case, while importing the database file via mysql, this most likely means that some of the queries in the SQL file are too large to import and they couldn't be executed on the server, so the client fails on the first error that occurs.
So you have the following possibilities:
Add the force option (-f) for mysql to proceed and execute the rest of the queries. This is useful if the database has some large queries related to cache which aren't relevant anyway.
Increase max_allowed_packet and wait_timeout in your server config (e.g. ~/.my.cnf).
Dump the database using the --skip-extended-insert option to break down the large queries, then import it again (see the sketch after this list).
Try applying the --max-allowed-packet option for mysql.
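For example, combining the re-dump and the client-side options might look like this (a sketch; credentials and the database name are placeholders):
# re-dump with one INSERT per row, so no single statement is huge
mysqldump -u root -p --skip-extended-insert my_db > my_db.sql
# re-import with a raised client-side packet limit, skipping failed statements
mysql -u root -p --force --max-allowed-packet=1G my_db < my_db.sql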
Common reasons
In general this error could mean several things, such as:
a query to the server is incorrect or too large,
Solution: Increase the max_allowed_packet variable.
Make sure the variable is under the [mysqld] section, not [mysql].
Don't be afraid to use large numbers for testing (like 1G).
Don't forget to restart the MySQL/MariaDB server.
Double-check that the value was set properly by:
mysql -sve "SELECT @@max_allowed_packet" # or:
mysql -sve "SHOW VARIABLES LIKE 'max_allowed_packet'"
You got a timeout from the TCP/IP connection on the client side.
Solution: Increase wait_timeout variable.
You tried to run a query after the connection to the server has been closed.
Solution: A logic error in the application should be corrected.
Host name lookups failed (e.g. a DNS server issue), or the server has been started with the --skip-networking option.
Another possibility is that your firewall blocks the MySQL port (3306 by default).
The running thread has been killed, so retry again.
You have encountered a bug where the server died while executing the query.
A client running on a different host does not have the necessary privileges to connect.
And many more, so learn more at: B.5.2.9 MySQL server has gone away.
Debugging
Here are a few expert-level debugging ideas:
Check the logs, e.g.
sudo tail -f $(mysql -Nse "SELECT @@GLOBAL.log_error")
Test your connection via mysql, telnet or ping functions (e.g. mysql_ping in PHP).
Use tcpdump to sniff the MySQL communication (won't work for socket connection), e.g.:
sudo tcpdump -i lo0 -s 1500 -nl -w- port mysql | strings
On Linux, use strace. On BSD/Mac use dtrace/dtruss, e.g.
sudo dtruss -a -fn mysqld 2>&1
See: Getting started with DTracing MySQL
Learn more how to debug MySQL server or client at: 26.5 Debugging and Porting MySQL.
For reference, check the source code in the sql-common/client.c file, which is responsible for throwing the CR_SERVER_GONE_ERROR error for the client command.
MYSQL_TRACE(SEND_COMMAND, mysql, (command, header_length, arg_length, header, arg));
if (net_write_command(net, (uchar) command, header, header_length,
                      arg, arg_length))
{
  /* writing the command to the server failed: report "server has gone away" */
  set_mysql_error(mysql, CR_SERVER_GONE_ERROR, unknown_sqlstate);
  goto end;
}
I solved the error ERROR 2006 (HY000) at line 97: MySQL server has gone away and successfully migrated a >5GB SQL file by performing these two steps in order:
Created /etc/my.cnf, as others have recommended, with the following contents:
[mysql]
connect_timeout = 43200
max_allowed_packet = 2048M
net_buffer_length = 512M
debug-info = TRUE
Appended the flags --force --wait --reconnect to the command (i.e. mysql -u root -p -h localhost my_db < file.sql --verbose --force --wait --reconnect).
Important note: it was necessary to perform both steps, because if I didn't make the changes to the /etc/my.cnf file as well as append those flags, some of the tables were missing after the import.
System used: OSX El Capitan 10.11.5; mysql Ver 14.14 Distrib 5.5.51 for osx10.8 (i386)
Just in case, to check variables you can use
$> mysqladmin variables -u user -p
This will display the current variables, in this case max_allowed_packet, and as someone said in another answer you can set it temporarily with
mysql> SET GLOBAL max_allowed_packet=1072731894
In my case the cnf file was not taken into account and I don't know why, so the SET GLOBAL code really helped.
You can also log into the database as root (or a user with the SUPER privilege) and run
set global max_allowed_packet=64*1024*1024;
which doesn't require a MySQL restart. Note that you should still fix your my.cnf file as outlined in other solutions:
[mysqld]
max_allowed_packet=64M
And confirm the change after you've restarted MySQL:
show variables like 'max_allowed_packet';
You can use the command line as well, but that may require updating the start/stop scripts, which may not survive system updates and patches.
As requested, I'm adding my own answer here. Glad to see it works!
The solution is increasing the values of the wait_timeout and connect_timeout parameters in your options file, under the [mysqld] tag.
I had to restore a 400MB mysql backup and this worked for me (the values I've used below are a bit exaggerated, but you get the point):
[mysqld]
port=3306
explicit_defaults_for_timestamp = TRUE
connect_timeout = 1000000
net_write_timeout = 1000000
wait_timeout = 1000000
max_allowed_packet = 1024M
interactive_timeout = 1000000
net_buffer_length = 200M
net_read_timeout = 1000000
You can also raise delayed_insert_timeout at runtime: set GLOBAL delayed_insert_timeout=100000;
I had the same problem, but changing max_allowed_packet in the my.ini/my.cnf file under [mysqld] did the trick.
Add the line
max_allowed_packet=500M
and restart the MySQL service once you are done.
A couple of things could be happening here:
Your INSERT is running long, and the client is disconnecting. When it reconnects, it's not selecting a database, hence the error. One option here is to run your batch file from the command line and select the database in the arguments, like so:
$ mysql db_name < source.sql
Another is to run your command via PHP or some other language. After each long-running statement, you can close and re-open the connection, ensuring that you're connected at the start of each query.
If you are on a Mac and installed MySQL through brew like me, the following worked.
cp $(brew --prefix mysql)/support-files/my-default.cnf /usr/local/etc/my.cnf
Source: For homebrew mysql installs, where's my.cnf?
Then add max_allowed_packet=1073741824 to /usr/local/etc/my.cnf and restart:
mysql.server restart
I had the same problem in XAMPP.
Method 1: I changed max_allowed_packet in the D:\xampp\mysql\bin\my.ini file as below:
max_allowed_packet=500M
Finally, restart the MySQL service and you're done.
Method 2:
This is the easier way if you are using XAMPP. Open the XAMPP control panel and click on the config button in the MySQL section.
Now click on my.ini and it will open in the editor. Update max_allowed_packet to your required size.
Then restart the MySQL service: click Stop on the MySQL service, then click Start again. Wait for a few minutes.
Then try to run your MySQL query again. Hopefully it will work.
I encountered this error when using MySQL Cluster; I do not know whether this question comes from cluster usage or not, but as the error is exactly the same, I'll give my solution here.
I was getting this error because the data nodes had suddenly crashed. When the nodes crash, you can still get a correct-looking result using the command:
ndb_mgm -e 'ALL REPORT MEMORYUSAGE'
And mysqld also appears to work correctly, so at first I could not understand what was wrong. About 5 minutes later, the ndb_mgm result showed no data node working, and then I realized the problem. So, try restarting all the data nodes (see the command below); then the MySQL server is back and everything is OK.
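A restart of all data nodes can be issued from the management client like this (verify the exact behaviour against your cluster version first):
ndb_mgm -e 'ALL RESTART'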
But one thing was weird to me: after I lost the MySQL server for some queries, when I used a command like show tables, I could still get return info like 33 rows in set (5.57 sec), but no table info was displayed.
This error message also occurs when you have created the SCHEMA with a different COLLATION than the one used in the dump. So, if the dump contains
CREATE TABLE `mytab` (
..
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
you should also reflect this in the SCHEMA collation:
CREATE SCHEMA myschema COLLATE utf8_unicode_ci;
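To double-check which collation an existing schema actually uses, you can query information_schema:
SELECT SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME = 'myschema';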
I had been using utf8mb4_general_ci in the schema, because my script came from a fresh MySQL 8 installation; loading the dump into an old 5.7 server crashed and drove me nearly crazy.
So, maybe this saves you some frustrating hours... :-)
(macOS 10.3, mysql 5.7)
Add max_allowed_packet=64M to the [mysqld] section:
[mysqld]
max_allowed_packet=64M
Restart the MySQL server.
If it's reconnecting and getting connection ID 2, the server has almost definitely just crashed.
Contact the server admin and get them to diagnose the problem. No non-malicious SQL should crash the server, and the output of mysqldump certainly should not.
It is probably the case that the server admin has made some big operational error, such as assigning buffer sizes greater than the architecture's address-space limits, or more than the virtual memory capacity. The MySQL error log will probably have some relevant information; they will be monitoring this if they are competent, anyway.
This is more of a rare issue, but I have seen it when someone has copied the entire /var/lib/mysql directory as a way of migrating their DB to another server. The reason it doesn't work is that the database was running and using log files. It sometimes doesn't work if there are logs in /var/log/mysql. The solution is to copy the /var/log/mysql files as well.
For Amazon RDS (my case), you can change the max_allowed_packet parameter value to any numeric value in bytes that makes sense for the biggest data in any insert you may have (e.g. if you have some 50MB blob values in your inserts, set max_allowed_packet to 64M = 67108864) in a new or existing parameter group. Then apply that parameter group to your MySQL instance (this may require rebooting the instance).
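With the AWS CLI, the parameter change might look like this (a sketch; the parameter group name is a placeholder, and the flags should be verified against your CLI version):
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=max_allowed_packet,ParameterValue=67108864,ApplyMethod=immediate"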
For Drupal 8 users looking for a solution to a DB import failure:
At the end of the SQL dump file there can be commands inserting data into the "webprofiler" table.
That is, I guess, some debug log table, and it is not really important for the site to work, so all of it can be removed. I deleted all those inserts, including LOCK TABLES and UNLOCK TABLES (and everything in between). It's at the very bottom of the SQL file. The issue is described here:
https://www.drupal.org/project/devel/issues/2723437
But there is no solution for it besides truncating that table.
BTW, I tried all the solutions from the answers above and nothing else helped.
I've tried all of the above solutions; all failed.
I ended up using -h 127.0.0.1 instead of the default socket /var/run/mysqld/mysqld.sock.
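That is, forcing a TCP connection instead of the Unix socket, e.g.:
mysql -h 127.0.0.1 -u root -p my_db < file.sql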
If you have tried all these solutions, especially increasing max_allowed_packet up to the maximum supported value of 1GB, and you are still seeing these errors, it might be that your server literally does not have enough free RAM available...
The solution: upgrade your server with more RAM and try again.
Note: I'm surprised this simple solution has not been mentioned after 8+ years of discussion on this thread... sometimes we developers tend to overthink things.
Eliminating the errors which triggered warnings was the final solution for me. I also changed max_allowed_packet, which helped with smaller files that had errors. Eliminating the errors also sped up the process incredibly.
If none of these answers solves the problem for you, I solved it by removing the tables and creating them again automatically, in this way:
When creating the backup, first back up the structure, and be sure to add:
DROP TABLE / VIEW / PROCEDURE / FUNCTION / EVENT
CREATE PROCEDURE / FUNCTION / EVENT
IF NOT EXISTS
AUTO_INCREMENT
Then just restore this backup into your DB; it will remove and recreate the tables you need.
Then back up just the data, restore it the same way, and it will work.
How about using the mysql client like this:
mysql -h <hostname> -u username -p <databasename> < file.sql

How to close existing connection in MySQL

I found that some connections remain open after the execution of a command on the MySQL server.
How can I configure my MySQL server so that I can close them all after executing a command?
If you can get the process ID inside MySQL, you can kill the process. Killing any process should work (though it will create a new connection the next time you send a command).
mysql> SHOW PROCESSLIST; -- or SHOW FULL PROCESSLIST
mysql> KILL process_number;
Alternatively, configure the wait_timeout variable to something short enough, for example 30 seconds, as sketched below.
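From the MySQL console that would be (note that SET GLOBAL only affects connections opened after the change, and interactive clients such as the mysql CLI use interactive_timeout instead):
mysql> SET GLOBAL wait_timeout = 30;
mysql> SET GLOBAL interactive_timeout = 30;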