Best way to get full mysql query of a specific process ID? - mysql

We have a MySQL slow query killer that kills process IDs after a specified number of seconds passes, and that works fine. However, we'd also like our notifications to include the full query that was being so slow.
The problem is that while mysqladmin -v processlist|grep process_id should work, it truncates queries that have newlines in them, e.g.:
SELECT * FROM table WHERE x=y {
... stuff
};
Here stuff will be cut off and the query truncated. (I realize that may not be syntactically correct, as I'm not a DBA; I just wanted to give an example of the kind of query flow we sometimes have to deal with in our applications. Please don't complain about the format; it wasn't my decision, nor is it under my control.)
Doing a query in information_schema would solve this, I believe, but the team does not want to do that because of the performance impact queries against that database often involve. So is there a better way to approach this than grepping mysqladmin?
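For reference, the information_schema query we're trying to avoid would look roughly like this (just a sketch; process_id stands for the ID our killer already has):
SELECT ID, TIME, INFO
FROM information_schema.PROCESSLIST
WHERE ID = process_id; -- INFO holds the full statement, newlines and all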

I would advise activating the slow query log. It records the statement that is taking a long time, how long it took, and extra details, unless it's inside a STORED PROCEDURE.
You first need to activate the slow log in case it isn't already.
To enable the slow query log, start mysqld with the --log-slow-queries[=file_name] option, or change the values in your config file and restart the service; see below.
If the slow query log file is enabled but no name is specified, the default name is host_name-slow.log and the server creates the file in the same directory where it creates the PID file. If a name is given, the server creates the file in the data directory unless an absolute path name is given to specify a different directory.
You can set the output (where the data can be read) to FILE or TABLE.
Change your my.ini file and change/add these settings:
log-output = TABLE
slow-query-log=1
long_query_time = 10
log-queries-not-using-indexes
This will log any query that takes longer than 10 seconds to complete, plus any query that is not using an index.
If you go with log-output=TABLE, then you can simply execute
select * from mysql.slow_log
If you go with log-output=FILE, then you have to open the physical file in the MySQL data directory.
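With log-output=TABLE, for example, pulling the most recent slow statements is just a query against that table (the columns below are standard mysql.slow_log columns):
SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;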

Related

MySql 5.5; possible to exclude a table from logging?

MySQL 5.5 has a few logging options, among them the binary log (with its binlog options), which I do not want to use, and the query log, which I do want to use.
However, one program using one table in that database is filling this log file with 50+ MB per day, so I would like that table to be excluded from the log.
Is that possible, or is the only way to install another MySQL instance and then move this one table?
Thanks,
Alex
There are options for filtering the binlog by table, but not the query logs.
There are no options for filtering the general query log. It is either enabled for all queries, or else it's disabled.
There are options for filtering the slow query log, but not by table. For example, to log only queries that take longer than N seconds, or queries that don't use an index. Percona Server adds some options to filter the slow query log based on sampling.
You can use a session variable to disable either slow query or general query logging for queries run in a given session. This is a dynamic setting, so you can change it at will. But you would need to change your client code to do this every time you query that specific table.
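For what it's worth, that per-session approach is roughly the following (assuming the connecting account has the required privilege; sql_log_off affects the general query log, and raising long_query_time keeps the session's statements out of the slow log):
SET SESSION sql_log_off = ON;        -- this session's statements skip the general query log
SET SESSION long_query_time = 86400; -- effectively keeps this session out of the slow log
-- ... run the queries against the chatty table ...
SET SESSION sql_log_off = OFF;
SET SESSION long_query_time = 10;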
Another option is to implement log rotation for the slow query log, so it never grows too large. See https://www.percona.com/blog/2013/04/18/rotating-mysql-slow-logs-safely/

What is limiting the number of SQL statements PHP executes before timing out?

I'm trying to import a large number of records into my table using SQL statements written like this:
INSERT INTO itemlist(UPC_Case,Pack,Size,Description,Weight_Case,UPC_Retail,TI,HI,MCL,CM,GSC,FL,HAN) VALUES (<values>);
A long series of these is in a text file named itemlist insert.sql.
I decided to use phpMyAdmin to import these, but the whole file wouldn't upload, so I split it and compressed the pieces. When I ran the import, it ran for a few minutes but only got through ~3,850 records (~850,000 positions) before timing out. That definitely struck me as taking too long to do too little: 3,850 records seems like a very small amount of data to process in about five minutes (that must be what, >1 MB?). So I thought the PHP settings related to script execution must be set too low for this kind of import, and I followed this post and changed the settings it mentioned:
In /etc/php/7.0/apache2/php.ini:
post_max_size = 30M (was 8M)
upload_max_filesize = 30M (was 2M)
memory_limit = 1G (was 128M)
max_execution_time = 60 (was 30)
max_input_time = 120 (was 60)
Then I restarted apache: sudo systemctl restart apache2
I know the settings have been applied because the maximum file size did change, which meant I no longer had to compress my files. I thought that, combined with the other changes, would help the SQL statements be processed quicker, and also give the script twice as much time before running out; meaning at least twice as many would get processed before it times out, right?
But there was no improvement at all. phpMyAdmin still only gets through ~3,850 records (~850,000 positions) before timing out.
Why was there no improvement? What is limiting the number of statements that get processed, since it wasn't any of those PHP settings? Does phpMyAdmin have some sort of hidden limit?
phpMyAdmin is not a tool designed for importing big files into MySQL.
Try using the mysql command-line tool.
On Unix/Mac:
mysql -u {username} -p{password} {database_name} < import_file.sql
For all OS:
mysql -u {username} -p{password} -e "\. import_file.sql" {database_name}
Regarding possible reasons for a slow import, it's usually one of: the table already has many records, unique indexes on the table, triggers, a slow server, or a slow connection.
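If indexes or per-row checks turn out to be the bottleneck, a common mitigation for bulk InnoDB loads (a sketch only; adjust to your schema) is to relax those checks for the duration of the import and commit once at the end:
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
-- ... run the INSERT statements from the dump file ...
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;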
It may be worth noting, though I don't know that this is your problem, that MySQL also has a timeout setting.
If you exceed the MySQL timeout then your queries will also fail.
Here's a stack overflow answer that should explain what you need in timeouts.
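If you want to rule that out yourself, these are the usual variables to inspect and, if needed, raise for the session doing the import (standard MySQL variable names; pick values that suit your setup):
SHOW VARIABLES LIKE '%timeout%';
SET SESSION wait_timeout = 28800;   -- how long an idle connection may sit before the server drops it
SET SESSION net_read_timeout = 600; -- how long the server waits while reading data from the client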
Since you have command-line access, and if the exported data is in SQL format, you can import at the command line. This approach may sidestep the problems you're experiencing (basically you run the exported data as a script).
Hope that helps!

MySQL can take more than an hour to start

I have a MySQL (Percona) 5.7 instance with over 1 million tables.
When I start the database, it can take more than an hour to start.
The error log doesn't show anything, but when I traced mysqld_safe, I found out that MySQL is doing a stat on every file in the DB.
Any idea why this may happen?
Also, please no suggestions to fix my schema; this is a black box.
Thanks
This turned out to be 2 issues (other than millions of tables)!
When MySQL starts and crash recovery is needed, as of 5.7.17 it needs to traverse your datadir to build its dictionary. This will be lifted in a future release (8.0), as MySQL will have its own catalog and will not rely on the datadir content anymore. The docs state that this isn't done anymore; that's both true and false: it doesn't read the first page of the .ibd files, but it does do a file stat on each one. Filed Bug
Once (1) is finished, it starts a new step: "Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine." That, of course, opens all the files again. Use disable-partition-engine-check if you think you don't need it. Doc
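If you decide you don't need the check, the option goes in my.cnf like any other setting (available as of 5.7.17; shown here only as a minimal sketch):
[mysqld]
disable-partition-engine-check = ON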
All this can be observed using sysdig, a very powerful and handy dtrace-like tool:
sysdig proc.name=mysqld | grep "open fd="
Ok, now, it's time to reduce the number of files.

MySQL Query Cache Not Working on Certain Tables - Correct Settings

I was having an issue getting the MySQL query cache to work. No matter what settings I tried, I couldn't get queries on certain tables to cache.
Once I investigated, it turned out that MySQL 5.5 won't cache a query that involves a table with a "dash" in its name, like:
Select id FROM `table-name` WHERE `id` = 1;
However, you will see Qcache_queries_in_cache and Qcache_hits work as desired when you rename your table without the dash:
Select id FROM `tablename` WHERE `id` = 1;
Underscores also work.
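Either way, the counters to watch while testing are the status variables mentioned above:
SHOW STATUS LIKE 'Qcache_queries_in_cache';
SHOW STATUS LIKE 'Qcache_hits';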

Database dumping in mysql after certain checkpoints

I want to get a mysqldump after certain checkpoints, e.g. if I take a mysqldump now, then the next time I take the dump it should give me only the commands that executed in the interval between the two. Is there any way to get this using mysqldump?
One more thing: how do I show the DELETE and UPDATE commands in the mysqldump files?
Thanks
I don't think this is possible with mysqldump; however, that feature exists as part of MySQL core - it's called binlogging, or binary logging.
The binary log contains “events” that describe database changes such as table creation operations or changes to table data. It also contains events for statements that potentially could have made changes (for example, a DELETE which matched no rows). The binary log also contains information about how long each statement took that updated data.
Check this out http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
Word of warning: binlogs can slow down the performance of your server.
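Once binary logging is enabled, pulling the statements that ran between two points in time looks roughly like this (the datetimes and log file path are placeholders for your own):
mysqlbinlog --start-datetime="2012-06-01 00:00:00" --stop-datetime="2012-06-02 00:00:00" /var/lib/mysql/mysql-bin.000001 > changes.sql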