I am executing a query remotely using the mysql utility and also through the mysqlsh utility. I see that with mysql, the entire result set is loaded into memory at once, whereas mysqlsh appears to fetch a single row, or a subset of rows, at a time. I want to achieve the same behaviour as mysql in mysqlsh. Are there any configuration options for this in mysqlsh?
mysql -h <host> -p<password> database < query.sql
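For reference, the classic mysql client buffers the whole result set by default, and its --quick option switches it to row-at-a-time retrieval, e.g.:
mysql -h <host> -p<password> --quick database < query.sql
What I'm after is the reverse in mysqlsh: a way to force it to buffer the full result set.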
I've dumped a table on a remote server from one database (MySQL 5.5) to a file. It took the server about 2 seconds to perform the operation. Now I'm trying to load the data from the file into another database (same version) on the server.
The server prints the data being processed to the screen even though I didn't specify the --verbose parameter. How can I prevent this output?
It takes the server some 10 minutes to perform the operation. Is that time acceptable, or can I make it much faster? If so, how?
Loading (undumping) is done via the mysql command-line tool:
mysql -u user -p dbname < mydump.sql
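One approach I've seen suggested for speeding up the load (these are standard session variables, though I'm not sure how much they will help in my case) is to disable some per-statement checks for the duration of the import:
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"; cat mydump.sql; echo "COMMIT;" ) | mysql -u user -p dbname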
I am trying to use pt-upgrade from the Percona Toolkit to test running a load on a MySQL 5.1 and a MySQL 5.6 database server. I want to see if any queries I captured from a MySQL 5.1 slow log will fail on a MySQL 5.6 system. I read over the documentation at https://www.percona.com/doc/percona-toolkit/2.2/pt-upgrade.html and created the following command:
pt-upgrade h=IPADDRESS1 -uUSERNAME -pPASSWORD h=IPADDRESS2 -uUSERNAME -pPASSWORD --type='slowlog' --max-class-size=1 --max-examples=1 --run-time=1m 'slow_log_mysqld.log' 1>report.txt 2>err.txt &
I restored a copy of all the databases where the slow log was taken onto two separate servers.
The command runs fine, and I've set it to run for only 1 minute for testing. The problem is that all I see in the report is queries failing on both hosts, over and over again.
On both hosts:
DBD::mysql::st execute failed: No database selected [for Statement "....
It appears that pt-upgrade is not changing databases.
I've reviewed the slow query log and I clearly see statements like this before each SELECT statement:
4 9640337 Query USE database1
9 9640337 Query USE database2
I have over 100 databases on the server where I got the slow log. Is there some limitation that prevents pt-upgrade from switching between databases? How do I get pt-upgrade to work with multiple databases?
It seems that something is odd with the format of the slow log on my system.
I have to first "massage" my log with pt-query-digest before I can run pt-upgrade. Here is how I massage my slow log using pt-query-digest:
pt-query-digest --filter '$event->{arg} =~ m/^select/i' --sample 5 --no-report --output slowlog mysql_slow.log > massaged_mysql_slow.log
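For reference, the massaged file should contain standard slow-log entries, with a use statement ahead of each query whenever the event carries a database attribute; that is what lets pt-upgrade select the right database. An illustrative (made-up) entry:
# Time: 150826 19:08:45
# User@Host: appuser[appuser] @ localhost []
# Query_time: 2.000000  Lock_time: 0.000000  Rows_sent: 10  Rows_examined: 1000
use database1;
select * from t1 where id > 100;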
Now I can run this:
pt-upgrade h=IPADDRESS1 -uUSERNAME -pPASSWORD h=IPADDRESS2 -uUSERNAME -pPASSWORD --type='slowlog' --max-class-size=1 --max-examples=1 --run-time=1m 'massaged_mysql_slow.log' 1>report.txt 2>err.txt &
I'm trying to run the following query inside a bash script.
When it is executed from the mysql command prompt, the execution time was 0.06 sec:
mysql> delete from assign_history where offer_id not in
->('7','8','9','10','11','12','13','14','32','157','211','240','241','242','273',
->'274','275','310','312','313','314','326','328','329','333','334','335','336',
->'337','342','343','355','362','374','375','376','378','379','383','384','409','411')
->and date(action_date) < "2015-06-25" order by id limit 1000;
Query OK, 1000 rows affected (0.06 sec)
But when I run it inside a bash script, it takes more than 2 minutes.
[root@localhost umap]# cat ./history_del.sh
#! /bin/bash
echo $(date)
mysql -uroot -ppassword db_offers -e "delete from assign_history where offer_id not in ('7','8','9','10','11','12','13','14','32','157','211','240','241','242','273','274','275','310','312','313','314','326','328','329','333','334','335','336','337','342','343','355','362','374','375','376','378','379','383','384','409','411') and date(action_date) < "2015-06-25" limit 1000;"
echo $(date)
[root@localhost umap]# ./history_del.sh
Wed Aug 26 19:08:45 IST 2015
Wed Aug 26 19:10:48 IST 2015
I also tried the "mysql -Bse" option, with no improvement. Any ideas?
First, you need to escape the double quotes inside the query string: \"2015-06-25\" (echo your query and you'll see why). I don't know how your query even works without properly escaped quotes...
Second, it is better to place your long query in a file, so your command line looks like this:
mysql -uroot -ppassword db_offers <YOUR_FILE
The query in YOUR_FILE will be the same as at the mysql prompt (and, of course, you don't need to escape the double quotes there).
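Alternatively, a quoted heredoc keeps everything in the script while still protecting the double quotes. A minimal sketch of the same script (values copied from the question):
#!/bin/bash
date
mysql -uroot -ppassword db_offers <<'EOF'
delete from assign_history
where offer_id not in ('7','8','9','10','11','12','13','14','32','157','211',
'240','241','242','273','274','275','310','312','313','314','326','328',
'329','333','334','335','336','337','342','343','355','362','374','375',
'376','378','379','383','384','409','411')
and date(action_date) < "2015-06-25"
limit 1000;
EOF
date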
And yes, when you invoke the mysql utility it can take an unpredictably long time to connect to the MySQL server, so the 2 minutes include that connection time (while the 0.06 sec at the mysql prompt does not). You therefore can't tell how much time is spent connecting to the server versus sending and executing the query.
To see how long it takes to connect to the MySQL server, execute an empty query (wait several seconds after the previous run of the mysql utility), such as:
time mysql -u user -ppassword -Bs <<<'select null'
Is there some configuration that can be done on the MySQL side to automatically kill or time out queries that are extremely slow, say longer than 100 seconds?
You can list all your MySQL queries with the following command:
$ mysqladmin processlist
so you can run a script that parses that list and kills the specific queries.
For example, you can run a script in any language via cron to periodically check for long-running queries, e.g. in PHP (using the legacy mysql_* API):
// Connect using the legacy mysql_* API, as in the original example
// (host and credentials are placeholders).
mysql_connect("localhost", "user", "password");
$result = mysql_query("SHOW FULL PROCESSLIST");
while ($row = mysql_fetch_array($result)) {
    $process_id = $row["Id"];
    // kill anything that has been running for more than 200 seconds
    if ($row["Time"] > 200) {
        mysql_query("KILL $process_id");
    }
}
Another example:
mysql> select concat('KILL ', id, ';') from information_schema.processlist
    -> where user = 'root' and time > 200 into outfile '/tmp/a.txt';
mysql> source /tmp/a.txt;
Related:
How do I kill all the processes in Mysql "show processlist"?
Read more:
http://www.mysqlperformanceblog.com/
Starting with MySQL 5.1 you can create a stored procedure that queries the information_schema.PROCESSLIST table for all queries matching your criteria ("long running time" in your case). You can then iterate over a cursor and kill each running query that exceeds your timeout.
Take a look at the following example: Procedure to find and terminate all non-SUPER and "system account" queries running longer than N seconds
http://forge.mysql.com/tools/tool.php?id=106
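A minimal sketch of such a procedure (the naming is mine; it skips system accounts and issues KILL through a prepared statement, which MySQL permits in prepared SQL):
DELIMITER //
CREATE PROCEDURE kill_long_queries(IN max_seconds INT)
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE pid BIGINT;
  -- candidate connections: plain queries running longer than the threshold
  DECLARE cur CURSOR FOR
    SELECT id FROM information_schema.PROCESSLIST
    WHERE command = 'Query' AND user <> 'system user' AND time > max_seconds;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  kill_loop: LOOP
    FETCH cur INTO pid;
    IF done THEN LEAVE kill_loop; END IF;
    SET @stmt = CONCAT('KILL ', pid);
    PREPARE s FROM @stmt;
    EXECUTE s;
    DEALLOCATE PREPARE s;
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;
-- for a 100-second timeout, schedule this via cron or an event:
CALL kill_long_queries(100);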
You should check out the pt-kill command from the Percona Toolkit.
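For example (this connects to the local server by default; do a dry run with --print before switching to --kill):
pt-kill --busy-time 100 --interval 30 --print
pt-kill --busy-time 100 --interval 30 --kill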