MySQL General Query Log - mysql

Does restarting the MySQL service on Linux disable general query logging that was already enabled?
There was a log file being generated inside the data folder, and it was growing rapidly. The database handles about 100 requests per minute.
Initially I guessed that some transaction had broken partway through and made the log swell.
So I restarted the service, which stopped additions to that log file.
I checked the global variables and saw that a path was assigned to general_log_file, but general_log was now showing 'OFF'.
Hence my question.

It's a common issue: people set MySQL configuration through dynamic (global) variables and forget to set the same variables in the options file (my.cnf). In that case, when the server is restarted, MySQL reverts to the settings in the file (or to the defaults).
The default setting for general_log is OFF. To enable the general query log immediately, set the global variable to ON. Then set general_log in your my.cnf file so the setting is applied whenever you restart the MySQL server.
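For illustration, a minimal sketch of both steps (general_log and general_log_file are standard MySQL system variables; the file paths below are examples, adjust them to your installation):
-- at runtime, from the mysql client:
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SHOW GLOBAL VARIABLES LIKE 'general_log%';
# to persist across restarts, in my.cnf (e.g. /etc/mysql/my.cnf):
[mysqld]
general_log = 1
general_log_file = /var/log/mysql/general.log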

No, restarting the server doesn't turn the binary logs OFF...
It may be that you changed the configuration file earlier and the restart picked that file up...

MySQL will start with whatever options are in the configuration file. If you changed any of the global server variables before, they will be reverted to what is set in the file.

Related

MySQL cannot enable LOAD DATA LOCAL INFILE on server

First, I am a novice MySQL user, so I would ask that answers keep things very much by the numbers; if steps are skipped I will probably get lost.
I have tried hard to research solutions before asking this; so far I have spent about 3 hours on it. I will explain the steps I have taken to the best of my ability.
Goal: Allow the use of the LOAD DATA LOCAL INFILE
Challenge: Currently the command is not allowed on the server. All attempts to locate some way to modify the server options have proved fruitless. On the client side I was able to enable it.
Things I have found
First, in the official documentation I found
Section 6.1.6, “Security Issues with LOAD DATA LOCAL”
https://dev.mysql.com/doc/refman/8.0/en/load-data-local.html
Unfortunately the explanation is not much help because it skips a huge amount of the how-to. To enable it on the server, they say to take the following action:
On the server side:
The local_infile system variable controls server-side LOCAL capability. Depending on the local_infile setting, the server refuses or permits local data loading by clients that have LOCAL enabled on the client side. By default, local_infile is disabled.
To explicitly cause the server to refuse or permit LOAD DATA LOCAL statements (regardless of how client programs and libraries are configured at build time or runtime), start mysqld with local_infile disabled or enabled, respectively. local_infile can also be set at runtime.
So the following actions are not explained anywhere (a sketch of both appears after this list):
1) How to start mysqld with local_infile disabled or enabled
2) How local_infile can also be set at runtime
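For reference, a minimal sketch of both (local_infile is a standard server system variable; the option-file placement under [mysqld] is the usual convention, adjust paths to your installation):
# 1) at startup: pass the option to mysqld ...
mysqld --local-infile=1
# ... or put it in the options file (my.ini / my.cnf) under the server group:
[mysqld]
local_infile = 1
-- 2) at runtime, from a client session with sufficient privileges:
SET GLOBAL local_infile = 1;
SHOW GLOBAL VARIABLES LIKE 'local_infile';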
So then I looked at this
5.1.1 Configuring the Server
https://dev.mysql.com/doc/refman/8.0/en/server-configuration.html
Again, many skipped steps; they show this:
shell> mysqld --verbose --help
How do you get to this shell? I tried entering that in cmd but I got errors.
Also note that a my.ini was not created, and in MySQL Workbench, looking under Options File, it says
"Location of My SQL configuration file (ie: my.cnf) not specified"
It appears I do not have one and I have no idea how to create one
Finally I am running Windows 10 and MySQL version 8.0
As a side note, I tried shutting down the server and got an Access denied error.
Also I tried just doing
mysql> LOAD DATA INFILE 'C:/Users/User/Desktop/pet.txt' INTO TABLE pet;
ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement
Any help would be much appreciated
Thank you
Whew 4 hours later and I finally figured it out!
So here are the steps
Open Windows Services
Go to MySQL80 and double click
Go to Service Status and click Stop
Under Start parameters insert --local-infile=1
Open MySQL 8.0 Command Line Client
After you log in, execute on the command line: SET GLOBAL local_infile = 'ON';
These steps allowed me to use LOAD DATA LOCAL INFILE
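To verify the setting and keep it across service restarts, a minimal sketch (the my.ini path shown is the usual default for MySQL 8.0 on Windows; adjust it to your installation):
-- in the MySQL 8.0 Command Line Client:
SHOW GLOBAL VARIABLES LIKE 'local_infile';
# in my.ini (e.g. C:\ProgramData\MySQL\MySQL Server 8.0\my.ini), under the server group:
[mysqld]
local_infile = 1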

Openshift MySQL event_scheduler

After restarting or updating my Openshift application, the MySQL cartridge starts with event_scheduler off.
I tried adding an environment variable to have it started in "on" mode (as suggested in the my.cnf file), but it still doesn't work
>rhc env-list app-name
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
OPENSHIFT_MYSQL_EVENT_SCHEDULER=on
So my question is, how to make it so the event_scheduler is always on, even after restart?
The OPENSHIFT_MYSQL_EVENT_SCHEDULER does not seem to be supported. The supported configurable values are listed in the my.cnf file below the # Configurable Values: line.
If you'd like to see that implemented in the MySQL add-on cartridge, you can add the idea at http://openshift.uservoice.com and get others to vote on it.
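In the meantime, a possible sketch of a workaround (assuming you can reach the cartridge's mysql client; event_scheduler is a standard MySQL system variable, but whether the cartridge preserves my.cnf edits across restarts is not guaranteed):
-- for the currently running server:
SET GLOBAL event_scheduler = ON;
SHOW GLOBAL VARIABLES LIKE 'event_scheduler';
# and, if the cartridge lets you edit its my.cnf, under the server group:
[mysqld]
event_scheduler = ON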

Limiting mysql log file size

I have a MySQL log file that regularly goes over 30 GB, which is a pain when you realise your server is full because of this one file. I need a simple solution to limit the file to about 1 GB; I don't need logs that go back that far, and I'd rather avoid this problem in the future.
Any ideas? Thanks
To specify it in the my.cnf file, back up your current my.cnf file (always recommended), stop the slave, stop the MySQL server, and add the following option:
# relay log restrictions
relay-log-space-limit=15G
Then save and quit the file and start MySQL. Unless you configured differently, MySQL will automatically start the slave thread.
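To confirm the limit is in effect after the restart, something along these lines should work (this assumes the growing file is a relay log on a replication slave, as the option above does; relay_log_space_limit is the runtime name of that option, and Relay_Log_Space in SHOW SLAVE STATUS shows current usage):
mysql> SHOW GLOBAL VARIABLES LIKE 'relay_log_space_limit';
mysql> SHOW SLAVE STATUS\G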

MySQL Server has gone away when importing large sql file

I tried to import a large SQL file through phpMyAdmin... but it kept showing the error
'MySql server has gone away'
What to do?
As stated here:
The two most common reasons (and fixes) for "MySQL server has gone away" (error 2006) are:
1. Server timed out and closed the connection. How to fix: check that the wait_timeout variable in your mysqld's my.cnf configuration file is large enough. On Debian: sudo nano /etc/mysql/my.cnf, set wait_timeout = 600 seconds (you can tweak/decrease this value once error 2006 is gone), then sudo /etc/init.d/mysql restart. I didn't check, but the default value for wait_timeout might be around 28800 seconds (8 hours).
2. Server dropped an incorrect or too-large packet. If mysqld gets a packet that is too large or incorrect, it assumes that something has gone wrong with the client and closes the connection. You can increase the maximum packet size limit by raising the value of max_allowed_packet in the my.cnf file. On Debian: sudo nano /etc/mysql/my.cnf, set max_allowed_packet = 64M (you can tweak/decrease this value once error 2006 is gone), then sudo /etc/init.d/mysql restart.
Edit:
Notice that MySQL option files do not list their settings as ready-made comments (the way php.ini does, for instance). So you must type any change/tweak into my.cnf or my.ini yourself and place the file in the mysql/data directory or one of the other recognized paths, under the proper group of options such as [client], [mysqld], etc. For example:
[mysqld]
wait_timeout = 600
max_allowed_packet = 64M
Then restart the server. To get their values, type in the mysql client:
> select @@wait_timeout;
> select @@max_allowed_packet;
For me this solution didn't work out, so I executed
SET GLOBAL max_allowed_packet=1073741824;
in my SQL client.
If you are not able to change this while the MySQL service is running, stop the service and change the variable in the "my.ini" file.
For example:
max_allowed_packet=20M
If you are working with XAMPP, you can fix the MySQL server has gone away issue with the following changes:
Open your my.ini file (located at D:\xampp\mysql\bin\my.ini)
Change the following variable values:
max_allowed_packet = 64M
innodb_lock_wait_timeout = 500
If you are running with default values then you have a lot of room to optimize your mysql configuration.
The first step I recommend is to increase the max_allowed_packet to 128M.
Then download the MySQL Tuning Primer script and run it. It will provide recommendations for several facets of your config for better performance.
Also look into adjusting your timeout values both in MySQL and PHP.
How big (file size) is the file you are importing and are you able to import the file using the mysql command line client instead of PHPMyAdmin?
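For example, a command-line import might look like this (the user, database, and file names are placeholders):
mysql --max_allowed_packet=128M -u root -p your_database < /path/to/dump.sql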
If you are using MAMP on OS X, you will need to change the max_allowed_packet value in the template for MySQL.
You can find it at: File > Edit template > MySQL my.cnf
Then just search for max_allowed_packet, change the value, and save.
I had this error and other related ones when I imported a 16 GB SQL file. For me, the fix was editing my.ini and setting the following (based on several different posts) in the [mysqld] section:
max_allowed_packet = 110M
innodb_buffer_pool_size=511M
innodb_log_file_size=500M
innodb_log_buffer_size = 800M
net_read_timeout = 600
net_write_timeout = 600
If you are running under Windows, go to the control panel, services, and look at the details for MySQL and you will see where my.ini is. Then after you edit and save my.ini, restart the mysql service (or restart the computer).
If you are using HeidiSQL, you can also set some or all of these using that.
I solved my issue with this short /etc/mysql/my.cnf file :
[mysqld]
wait_timeout = 600
max_allowed_packet = 100M
The other reason this can happen is running out of memory. Check /var/log/messages and make sure that your my.cnf is not set up to cause mysqld to allocate more memory than your machine has.
Your mysqld process can actually be killed by the kernel and then re-started by the "safe_mysqld" process without you realizing it.
Use top and watch the memory allocation while it's running to see what your headroom is.
Make a backup of my.cnf before changing it.
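A quick way to check whether the kernel's OOM killer has been terminating mysqld (a sketch; on some distributions the messages end up in /var/log/syslog or the systemd journal instead):
grep -i -E 'killed process|out of memory' /var/log/messages
top    # watch the RES / %MEM columns for mysqld while the import runs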
I got the same issue with
$image_base64 = base64_encode(file_get_contents($_FILES['file']['tmp_name']) );
$image = 'data:image/jpeg;base64,'.$image_base64;
$query = "insert into images(image) values('".$image."')";
mysqli_query($con,$query);
In the \xampp\mysql\bin\my.ini file (the one phpMyAdmin's MySQL server uses) there was only
[mysqldump]
max_allowed_packet=110M
which applies only to mysqldump -u root -p dbname. I resolved my issue by replacing the above with
max_allowed_packet=110M
[mysqldump]
max_allowed_packet=110M
I updated "max_allowed_packet" to 1024M, but it still wasn't working. It turns out my deployment script was running:
mysql --max_allowed_packet=512M --database=mydb -u root < .\db\db.sql
Be sure to explicitly specify a bigger number on the command line if you are doing it this way.
If your data includes BLOB data:
Note that an import of data from the command line seems to choke on BLOB data, resulting in the 'MySQL server has gone away' error.
To avoid this, re-create the mysqldump but with the --hex-blob flag:
http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_hex-blob
which will write out the data file with hex values rather than binary amongst other text.
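For example (the user and database names are placeholders):
mysqldump --hex-blob -u root -p your_database > dump_hexblob.sql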
PhpMyAdmin also has the option "Dump binary columns in hexadecimal notation (for example, "abc" becomes 0x616263)" which works nicely.
Note that there is a long-standing bug (as of December 2015) which means that GEOM columns are not converted:
Back up a table with a GEOMETRY column using mysqldump?
so using a program like PhpMyAdmin seems to be the only workaround (the option noted above does correctly convert GEOM columns).
If it takes a long time to fail, then enlarge the wait_timeout variable.
If it fails right away, enlarge the max_allowed_packet variable; if it still doesn't work, make sure the command is valid SQL. Mine had unescaped quotes which screwed everything up.
Also, if feasible, consider limiting the number of rows in a single INSERT statement to, say, 1000. You can write a script that turns one huge statement into several by reintroducing the INSERT ... part every n rows.
I got a similar error. To solve it, just open the my.ini file; in my case, at line 36, change the value of the maximum allowed packet size, i.e. max_allowed_packet = 20M.
Make sure mysqld process does not restart because of service managers like systemd.
I had this problem in Vagrant with CentOS 7. Configuration tweaks didn't help; it turned out systemd was killing the mysqld service every time it took too much memory.
I had a similar error today when duplicating a database (MySQL server has gone away...), but when I tried mysql.server restart I got the error
ERROR! The server quit without updating PID ...
This is how I solved it:
I opened up Applications/Utilities/ and ran Activity Monitor
quit mysqld
then I was able to solve the problem with
mysql.server restart
I am doing some large calculations which require the MySQL connection to stay open for a long time and handle heavy data. I was facing this "MySQL has gone away" issue. I tried to optimize the queries, but that didn't help; then I increased the MySQL variable limits, which are set to low values by default:
wait_timeout
max_allowed_packet
Set the limits to whatever suits you (max_allowed_packet is in bytes, so multiples of 1024; wait_timeout is in seconds). You can log in to the terminal using the 'mysql -u username -p' command and check and change these variable limits.
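For example, a sketch of checking and raising both variables from the mysql client (the values shown are arbitrary examples):
mysql> SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
mysql> SET GLOBAL wait_timeout = 28800;
mysql> SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;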
For GoDaddy shared hosting
On GoDaddy shared hosting accounts, it is tricky to tweak the PHP.ini etc files. However, there is another way and it just worked perfectly for me. (I just successfully uploaded a 3.8Mb .sql text file, containing 3100 rows and 145 cols. Using the IMPORT command in phpMyAdmin, I was getting the dreaded MySQL server has gone away error, and no further information.)
I found that Matt Butcher had the right answer. Like Matt, I had tried all kinds of tricks, from exporting MySQL databases in bite-sized chunks, to writing scripts that break large imports into smaller ones. But here is what worked:
(1) CPANEL ---> FILES (group) ---> BACKUP
(2a) Under "Partial Backups" heading...
(2b) Under "Download a MySQL Database Backup"
(2c) Choose your database and download a backup (this step optional, but wise)
(3a) Directly to the right of 2b, under heading "Restore a MySQL Database Backup"
(3b) Choose the .SQL import file from your local drive
(3c) True happiness will be yours (shortly....) Mine took about 5 seconds
I was able to use this method to import a single table. Nothing else in my database was affected -- but that is what step (2) above is intended to protect against.
Notes:
a. If you are unsure how to create a .SQL import file, use phpMyAdmin to export a table and modify that file structure.
SOURCE:
Matt Butcher 2010 Article
If increasing max_allowed_packet doesn't help:
I was getting the same error as you when importing a .sql file into my database via Sequel Pro.
The error still persisted after upping the max_allowed_packet to 512M so I ran the import in the command line instead with:
mysql --verbose -u root -p DatabaseName < MySQL.sql
It gave the following error:
ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled
I found a couple helpful StackOverflow questions:
Enable binary mode while restoring a Database from an SQL dump
Mysql ERROR: ASCII '\0' while importing sql file on linux server
In my case, my .sql file was a little corrupt or something. The MySQL dump we get comes in two zip files that need to be concatenated together and then unzipped. I think the unzipping was interrupted initially, leaving the file with some odd characters and encodings. Getting a fresh MySQL dump and unzipping it properly worked for me.
Just wanted to add this here in case others find that increasing the max_allowed_packet variable was not helping.
None of the solutions regarding packet size or timeouts made any difference for me. I needed to disable SSL:
mysql -u -p -hmyhost.com --disable-ssl db < file.sql
https://dev.mysql.com/doc/refman/5.7/en/encrypted-connections.html

com.mysql.jdbc.PacketTooBigException

I am storing images in MySQL.
I have table as
CREATE TABLE myTable (id INT, myImage BLOB);
When I try to insert a 4.7 MB file, I get the following exception:
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (4996552 > 1048576). You can change this value on the server by setting the max_allowed_packet' variable.
I believe this is related to the image size only. Is there another column type I can use?
Update 1
As per an older SO question, I also tried MEDIUMBLOB, but I am still getting the same error.
Adding Image to a database in Java
Update 2
At the start of the project, I execute the query below and everything works now:
SET GLOBAL max_allowed_packet = 1024*1024*14;
As the error says, it has nothing to do with the column type but rather with the max_allowed_packet variable:
You must increase this value if you are using large BLOB columns or long strings. It should be as big as the largest BLOB you want to use. The protocol limit for max_allowed_packet is 1GB. The value should be a multiple of 1024; nonmultiples are rounded down to the nearest multiple.
But, generally speaking, don't store files in your database - store them in your filesystem and record the path to the file in the database.
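For illustration, a sketch of that alternative schema (the column names are just examples):
CREATE TABLE myTable (
  id INT PRIMARY KEY,
  imagePath VARCHAR(255)  -- path to the file on disk instead of the image bytes
);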
For Windows users:
mysql_home points to your mysql/mariadb installation folder.
Open cmd, cd to %mysql_home%\bin and run mysqladmin > temp.txt. This will spit out a lot of information about the usage of the tool. Somewhere among all that output you will find this information:
Default options are read from the following files in the given order:
C:\windows\my.ini C:\windows\my.cnf C:\my.ini C:\my.cnf c:\mariadb-5.5.29-winx64\my.ini c:\mariadb-5.5.29-winx64\my.cnf
This shows that you could have, if you don't have it already, a file called my.ini or my.cnf in the %mysql_home% directory.
create my.ini and add the lines:
[mysqld]
#allow larger BLOBs to be stored
max_allowed_packet = 10M
Make sure to include the settings group header, [mysqld], otherwise it will fail to start (for me it ended up hanging in limbo).
You will now need to restart the MySQL daemon; this is done either by killing and restarting the currently running mysqld process, or by restarting the MySQL service (run services.msc, locate MySQL, press the restart button; or from cmd, net stop MySQL followed by net start MySQL).
The following worked for me:
Edit the my.cnf file (mine was in /etc/mysql)
Then modify the max_allowed_packet value.
I set it to
max_allowed_packet=200M
Make sure you restart MySQL for the change to take effect.
If working with AWS RDS, max_allowed_packet can be modified using DB Parameter Groups
The max_allowed_packet variable has a default value set in the configuration file (my.ini in my case). If your application tries to execute a query whose packet size exceeds this value, this exception is thrown.
My setup is on a Windows 10 machine with MySQL Server 8.0.
I copied the my.ini file with my desired value (64M) for max_allowed_packet to the various possible locations (C:\my.ini, C:\Windows\my.ini). Then restarted the mysql server. It didn't work. When I queried the database for the max_allowed_packet variable, the value remained unchanged.
The following steps worked:
I discovered that there is a my.ini file at a different location.
Open the my.ini file at the following location C:\ProgramData\MySQL\MySQL Server 8.0. This has lots of entries, besides max_allowed_packet.
Locate the entry for max_allowed_packet and set it to the desired value (e.g. 64M).
Save and close the my.ini file.
Restart the MySQL80 service.
Log in to the mysql prompt and run the query:
show variables like 'max_allowed_packet';
You should see the value set to your desired value.
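For reference, a sketch of the relevant my.ini entry and the service restart from an elevated command prompt (the MySQL80 service name matches the default 8.0 install; adjust if yours differs):
# in C:\ProgramData\MySQL\MySQL Server 8.0\my.ini
[mysqld]
max_allowed_packet=64M
REM then, from an administrator cmd prompt:
net stop MySQL80
net start MySQL80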