I've searched and searched, but I wasn't able to find an easy way to get this:
Query OK, 50000 rows affected (0.35 sec)
in milliseconds or microseconds.
How can I achieve it?
I ran into the same problem; I timed my queries from a Linux console using time:
$ time mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table"
+----------+
| count(1) |
+----------+
|      750 |
+----------+
real 0m0.269s
user 0m0.014s
sys 0m0.015s
or
$ /usr/bin/time -f"%e" mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table"
+----------+
| count(1) |
+----------+
|      750 |
+----------+
0.24
It gives different values from mysql itself, but at least it's something you can work with, for example with this script:
#!/bin/bash
temp=1
while [ $temp -le 1000 ]
do
/usr/bin/time -f"%e" -o"/home/admin/benchmark.txt" -a mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table" > /dev/null 2> /dev/null
let temp=$temp+1
done
This executes the query 1000 times; -f prints only the real time, -o sets the output file, -a appends to it, and > /dev/null 2> /dev/null discards the query output so it isn't printed to the console on each run.
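Once the file is populated, a quick way to summarise the timings could be something like this (just a sketch, assuming the benchmark.txt path used above):
# average the per-run elapsed times collected by /usr/bin/time
awk '{ total += $1 } END { printf "runs: %d, average: %.3f s\n", NR, total/NR }' /home/admin/benchmark.txt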
That time is calculated by the mysql monitor application, not by the MySQL server. It's not something you can retrieve programmatically by doing (say) select last_query_execution_time() (which would be nice).
You can approximate it in a coarse way by doing the timing in your application: take the system time before and after calling the query function. Hopefully the client-side overhead is minimal compared to the MySQL portion.
You could time it yourself in the code that runs the query:
Pseudo code:
double StartTime = <now>
Execute SQL Query
double QueryTime = <now> - StartTime
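At the shell level, a rough version of the same idea (just a sketch, using GNU date's nanosecond output and the example query from above; the result includes client startup and connection overhead, not only server-side execution) could be:
#!/bin/bash
# wall-clock timing around the mysql call, reported in milliseconds
start=$(date +%s%N)
mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table" > /dev/null
end=$(date +%s%N)
echo "query took $(( (end - start) / 1000000 )) ms"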
Related
I have a MySQL database running in Amazon RDS, and I want to know how to export an entire table to CSV format.
I currently use MySQL Server on Windows to query the Amazon database, but when I try to run an export I get an error, probably because there's no dedicated file server for Amazon RDS. Is there a solution to this?
Presumably you are trying to export from an Amazon RDS database via a SELECT ... INTO OUTFILE query, which yields this commonly encountered issue; see e.g. export database to CSV. The respective AWS team response confirms your assumption that the lack of server access prevents such an export, and suggests an alternative approach: export your data in CSV format by selecting the data in the MySQL command-line client and piping the output through sed to reformat it as CSV, like so:
mysql -u username -p --database=dbname --host=rdshostname --port=rdsport --batch \
  -e "select * from yourtable" \
  | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > yourlocalfilename
User fpalero provides an alternative and supposedly simpler approach, if you know and specify the fields upfront:
mysql -uroot -ppassword --database=dbtest \
  -e "select concat(field1,',',field2,',',field3) FROM tabletest" > tabletest.csv
First of all, Steffen's answer works in most cases.
I recently encountered some larger and more complex outputs where "sed" was not enough and decided to come up with a simple utility to do exactly that.
I built a module called sql2csv that can parse the output of the MySQL CLI:
$ mysql my_db -e "SELECT * FROM some_mysql_table"
+----+----------+-------------+---------------------+
| id | some_int | some_str | some_date |
+----+----------+-------------+---------------------+
| 1 | 12 | hello world | 2018-12-01 12:23:12 |
| 2 | 15 | hello | 2018-12-05 12:18:12 |
| 3 | 18 | world | 2018-12-08 12:17:12 |
+----+----------+-------------+---------------------+
$ mysql my_db -e "SELECT * FROM some_mysql_table" | sql2csv
id,some_int,some_str,some_date
1,12,hello world,2018-12-01 12:23:12
2,15,hello,2018-12-05 12:18:12
3,18,world,2018-12-08 12:17:12
You can also use the built-in CLI:
sql2csv -u root -p "secret" -d my_db --query "SELECT * FROM some_mysql_table;"
1,12,hello world,2018-12-01 12:23:12
2,15,hello,2018-12-05 12:18:12
3,18,world,2018-12-08 12:17:12
More information is available on sql2csv (GitHub).
Assuming MySQL on RDS, an alternative is to use batch mode, which outputs tab-separated values and escapes newlines, tabs and other special characters. I haven't yet come across a CSV import tool that can't handle tab-separated data. So, for example:
$ mysql -h myhost.rds.amazonaws.com -u user -D my_database -p --batch --quick -e "SELECT * FROM my_table" > output.csv
As noted by Halfgaar, the --quick option flushes immediately, so it avoids out-of-memory errors for large tables. To quote strings (recommended), you'll need to do a bit of extra work in your query:
SELECT id, CONCAT('"', REPLACE(text_column, '"', '""'), '"'), float_column
FROM my_table
The REPLACE escapes any double-quote characters in the text_column values. I would also suggest using ISO 8601 strings for datetime fields, so:
SELECT CONCAT('"', DATE_FORMAT(datetime_column, '%Y%m%dT%T'), '"') FROM my_table
Be aware that CONCAT returns NULL if you have a NULL column value.
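If the column can contain NULLs, one way to guard against that (just a sketch, reusing the host, database and column names from the example above; IFNULL substitutes an empty string before quoting) is:
mysql -h myhost.rds.amazonaws.com -u user -D my_database -p --batch --quick \
  -e "SELECT id, CONCAT('\"', REPLACE(IFNULL(text_column, ''), '\"', '\"\"'), '\"'), float_column FROM my_table" > output.csv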
I've run this on some fairly large tables with reasonable performance. 600M rows and 23 GB data took ~30 minutes when running the MySQL command in the same VPC as the RDS instance.
There is a newer way from AWS to do it: just use their DMS (Database Migration Service).
Here is documentation on how to export table(s) to files on S3 storage: Using Amazon S3 as a target for AWS Database Migration Service - AWS Database Migration Service
It gives you the possibility to export in two formats: CSV or Parquet.
I'm using the Yii framework on EC2, connecting to an RDS MySQL instance. The key is to use fputcsv(). The following works perfectly, both on my localhost and in production.
$file = 'path/to/filename.csv';
$export_csv = "SELECT * FROM table";
$qry = Yii::app()->db->createCommand($export_csv)->queryAll();
$fh = fopen($file, "w+");
foreach ($qry as $row) {
    fputcsv($fh, $row, ',', '"');
}
fclose($fh);
If you use Steffen Opel's solution, you'll notice that it generates a header that includes the 'concat' string literal. Obviously this is not what you want. Most likely you will want the corresponding headers of your data.
This query will work without any modifications, other than substituting column names and table names:
mysql -h xxx.xxx.us-east-2.rds.amazonaws.com \
  --database=mydb -u admin -p \
-e "SELECT 'column1','column2'
UNION ALL SELECT column1,column2
FROM table_name WHERE condition = value" > dataset.csv
I just opened the results in the Numbers OS X app and the output looks perfect.
With a very large table (~500m rows), even with --quick, nothing was being written to my export file and the process never finished (+6 hours). I wrote the following bash script to get around this. Another bonus is you have an indication of progress as each batch file gets written.
This solution works well as long as you have a sequential column of some kind, e.g. an auto incrementing integer PK or a date column. Make sure you have your date column indexed if you have a lot of data!
#!/bin/bash
# Maximum number of rows to export/total rows in table, set a bit higher if live data is being written
MAX=500000000
# Size of each export batch
STEP=1000000
for (( c=0; c<=$MAX; c=c+$STEP ))
do
mysql --port 3306 --protocol=TCP -h <rdshostname> -u <username> -p<password> --quick --database=<db> -e "select column1, column2, column3 from <table> order by <timestamp> ASC limit $STEP offset $c" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > export$c.csv
done
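A possible variant of the same loop (only a sketch, assuming <table> has an auto-incrementing integer primary key named id) filters on id ranges instead of using OFFSET, which avoids rescanning all the skipped rows on every batch:
#!/bin/bash
MAX=500000000
STEP=1000000
for (( c=0; c<=$MAX; c=c+$STEP ))
do
# export rows whose id falls in the current range
mysql --port 3306 --protocol=TCP -h <rdshostname> -u <username> -p<password> --quick --database=<db> -e "select column1, column2, column3 from <table> where id >= $c and id < $((c + STEP)) order by id" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > export$c.csv
done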
A slightly different approach, which may be faster depending on the indexing you have in place, is to step through the data month by month:
#!/bin/bash
START_YEAR=2000
END_YEAR=2022
for (( YEAR=$START_YEAR; YEAR<=$END_YEAR; YEAR++ ))
do
for (( MONTH=1; MONTH<=12; MONTH++ ))
do
NEXT_MONTH=1
let NEXT_YEAR=$YEAR+1
if [ $MONTH -lt 12 ]
then
let NEXT_MONTH=$MONTH+1
NEXT_YEAR=$YEAR
fi
mysql --port 3306 --protocol=TCP -h <rdshostname> -u app -p<password> --quick --database=<database> -e "select column1, column2, column3 from <table> where <dateColumn> >= '$YEAR-$MONTH-01 00:00:00' and <dateColumn> < '$NEXT_YEAR-$NEXT_MONTH-01 00:00:00' order by <dateColumn> ASC" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > export-$YEAR-$MONTH-to-$NEXT_YEAR-$NEXT_MONTH.csv
done
done
Hopefully this helps someone
I'm trying to run the following query inside a bash script.
When it is executed from the mysql command prompt, the execution time is 0.06 sec.
mysql> delete from assign_history where offer_id not in
->('7','8','9','10','11','12','13','14','32','157','211','240','241','242','273',
->'274','275','310','312','313','314','326','328','329','333','334','335','336',
->'337','342','343','355','362','374','375','376','378','379','383','384','409','411')
->and date(action_date) < "2015-06-25" order by id limit 1000;
Query OK, 1000 rows affected (0.06 sec)
But when I run it inside a bash script, it takes more than 2 minutes.
[root@localhost umap]# cat ./history_del.sh
#! /bin/bash
echo $(date)
mysql -uroot -ppassword db_offers -e "delete from assign_history where offer_id not in ('7','8','9','10','11','12','13','14','32','157','211','240','241','242','273','274','275','310','312','313','314','326','328','329','333','334','335','336','337','342','343','355','362','374','375','376','378','379','383','384','409','411') and date(action_date) < "2015-06-25" limit 1000;"
echo $(date)
[root@localhost umap]# ./history_del.sh
Wed Aug 26 19:08:45 IST 2015
Wed Aug 26 19:10:48 IST 2015
I also tried the "mysql -Bse" options. No improvement. Any ideas?
First, you need to escape the double quotes inside the query string: \"2015-06-25\" (try printing your query with echo and you'll see why). I don't know how your query even works without properly specified quotes...
Second, it is better to place your long query in a file, so your command line will look like this:
mysql -uroot -ppassword db_offers <YOUR_FILE
The query in YOUR_FILE is written exactly as at the mysql prompt (and of course, you don't need to escape double quotes there).
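For example, something along these lines (the file name is only an illustration):
# write the query to a file exactly as you would type it at the mysql> prompt
cat > history_del.sql <<'EOF'
delete from assign_history
where offer_id not in ('7','8','9', /* ...same id list as above... */ '409','411')
and date(action_date) < "2015-06-25"
order by id limit 1000;
EOF
mysql -uroot -ppassword db_offers < history_del.sql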
And yes, when you invoke the mysql utility it can take an unpredictably long time to connect to the MySQL server, so the 2 minutes include this connection time (but the 0.06 sec at the mysql prompt doesn't!). You can't tell how much of it is spent connecting to the server and how much is spent sending and executing your query.
To see how long it takes to connect to the MySQL server, try executing a trivial query (wait several seconds after the previous run of the mysql utility), such as:
time mysql -u user -ppassword -Bs <<<'select null'
I have a MySQL table with a large number of rows (10M).
From the mysql client, I want to run a query but not print the results. This is because even though the query runs in 15 seconds, printing the results to the console takes many minutes.
How can I achieve this?
EDIT: My query is the following:
select user_id, count(*) as ct from user_geo_loc group by user_id, lat, lng;
EDIT 2: At the end of the execution, the mysql client prints the following
9950710 rows in set (9.31 sec)
I want to find out this time but not print the results (which takes 15 minutes)
On Linux, you can redirect the output to /dev/null to suppress it, like this:
mysql -u username -p database -e "SELECT * FROM table" > /dev/null
On Windows the equivalent would be:
mysql -u username -p database -e "SELECT * FROM table" > NUL
Please note: the only thing still printed to the console will be errors; to suppress those as well, redirect stderr to the same destination by adding 2>&1 at the end (Linux).
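For instance, to discard both the result set and any error messages on Linux:
mysql -u username -p database -e "SELECT * FROM table" > /dev/null 2>&1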
In the console, you can redirect the output to the null device:
$ mysql -uUSER -pPASSWORD -e"select ..." DATABASE_NAME > /dev/null
or you can redirect it to a file to inspect the result later (this is much faster than printing the output to the console):
$ mysql -uUSER -pPASSWORD -e"select ..." DATABASE_NAME > ./output.txt
It seems like you want a pager?
Run the following (in the MySQL console):
pager less
which will use less and only show the first "screen" of info.
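A minimal session could look like this (nopager restores normal output afterwards):
mysql> pager less
mysql> select user_id, count(*) as ct from user_geo_loc group by user_id, lat, lng;
mysql> nopager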
I have a busy web server with LAMP installed, and I was wondering: is there any way to count how many MySQL queries per second are executed on the server?
Thank you.
SELECT s1.variable_value / s2.variable_value
FROM information_schema.global_status s1, information_schema.global_status s2
WHERE s1.variable_name='queries'
AND s2.variable_name ='uptime';
Try Jeremy Zawodny's excellent utility mytop.
If you have the Perl module Time::HiRes installed, mytop will automatically use it to generate high-resolution queries-per-second information.
There's useful information to be mined from the SHOW GLOBAL STATUS; command, including the number of queries executed (if your MySQL is 5.0.76 or later).
See http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html
You can use:
mysqladmin -u root -p status
which will return output like:
Uptime: 17134 Threads: 2 Questions: 1245 Slow queries: 0 Opens: 49 Flush tables: 1 Open tables: 42 Queries per second avg: 0.072
Here, queries per second is 0.072, which is Questions / Uptime (1245 / 17134).
When you use the "STATUS" command (not SHOW STATUS), MySQL will calculate the queries per second since server start for you.
Tested with MySQL 5.1.63.
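For reference, the queries-per-second figure appears on the last line of the STATUS (or \s) output inside the client; the numbers below simply reuse the mysqladmin example above:
mysql> \s
...
Threads: 2  Questions: 1245  Slow queries: 0  Opens: 49  Flush tables: 1  Open tables: 42  Queries per second avg: 0.072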
We can use a small script for this. It will be something like the one below.
declare -i a
declare -i b
declare -i c
# global Queries counter now
a=`mysql -uroot -pxxxxx -e "show status like 'Queries'" | tail -1 | awk '{print $2}'`
echo "$a"
sleep 1
# global Queries counter one second later
b=`mysql -uroot -pxxxxx -e "show status like 'Queries'" | tail -1 | awk '{print $2}'`
echo "$b"
# declare -i makes the subtraction arithmetic: queries executed during that second
c=$b-$a
echo "Number of Queries per second is: $c"
Is it possible to timeout a query in MySQL?
That is, if any query exceeds the time I specify, it will be killed by MySQL and it will return an error instead of waiting for eternity.
There is a nice Perl script on CPAN to do just this:
http://search.cpan.org/~rsoliv/mysql-genocide-0.03/mysql-genocide
One only needs to schedule it to run with the proper parameters. Create a crontab file /etc/cron.d/mysql_query_timeout to schedule it to run every minute:
* * * * * root /path/to/mysql-genocide -t 7200 -s -K
Where 7200 is the maximum allowed execution time in seconds. The -s switch filters out all except SELECT queries. The -K switch instructs the script to kill the matching processes.
The root user should be able to run local mysql tools without authentication otherwise you will need to provide credentials on the command line.
I just set up the following bash script as a cron job to accomplish this with MySQL 5.0 (kills any query that has been executing for more than 30 seconds). Sharing it here in case it proves useful to anyone (apologies if my bash scripting style is inefficient or atrocious, it is not my primary development language):
#!/bin/bash
linecount=0
processes=$(echo "show processlist" | mysql -uroot -ppassword)
oldIfs=$IFS
IFS='
'
echo "Checking for slow MySQL queries..."
for line in $processes
do
if [ "$linecount" -gt 0 ]
then
pid=$(echo "$line" | cut -f1)
length=$(echo "$line" | cut -f6)
query=$(echo "$line" | cut -f8)
#Id User Host db Command Time State Info
if [ "$length" -gt 30 ]
then
#echo "$pid = $length"
echo "WARNING: Killing query with pid=$pid with total execution time of $length seconds! (query=$query)"
killoutput=$(echo "kill query $pid" | mysql -uroot -ppassword)
echo "Result of killing $pid: $killoutput"
fi
fi
linecount=`expr $linecount + 1`
done
IFS=$oldIfs
I thought it had been around a little longer, but according to this,
MySQL 5.7.4 introduced the ability to set server-side execution time limits, specified in milliseconds, for top-level read-only SELECT statements.
SELECT
MAX_STATEMENT_TIME = 1000 -- in milliseconds
*
FROM table;
Note that this only works for read-only SELECT statements.
Starting with MySQL 5.1 you can create a stored procedure that queries the information_schema.PROCESSLIST table for all queries matching your criteria for "long running", then iterates over a cursor to kill them. Then set up that procedure to execute on a recurring basis in the event scheduler.
See: http://forge.mysql.com/tools/tool.php?id=106
The MySQL forum has some threads about this.
This post details how to set up timeouts on the server using innodb_lock_wait_timeout.
Here's a way to do it programmatically, assuming you're using JDBC.
I think this old question needs an updated answer.
You can set a GLOBAL timeout for all your read-only SELECT queries like this:
SET GLOBAL MAX_EXECUTION_TIME=1000;
The time specified is in milliseconds.
If you want the timeout only for a specific query, you can set it inline like this:
SELECT /*+ MAX_EXECUTION_TIME(1000) */ my_column FROM my_table WHERE ...
MySQL returns an error instead of waiting for eternity.
Note that this method only works for read-only SELECTs. If a SELECT statement is determined not to be read-only, then any timer set for it is cancelled and the following NOTE message is reported to the user:
Note 1908 Select is not a read only statement, disabling timer
For statements with subqueries, it limits the top-level SELECT only. It does not apply to SELECT statements within stored programs; a MAX_EXECUTION_TIME hint used in such statements is ignored.
I don't think the egrep in the script below would find "2000".
Why not try just selecting the id as well, and avoiding all of that posh shell stuff:
mysql -e 'select id from information_schema.processlist where info is not null and time > 30;'
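To go one step further and actually kill what that query returns, a sketch along the same lines (using the same 30-second threshold) could be:
# feed the matching ids straight into KILL QUERY
mysql -Bse 'select id from information_schema.processlist where info is not null and time > 30;' |
while read id
do
mysql -e "kill query $id;"
done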
Since MySQL 5.7.8 there is the max_execution_time option, which defines the execution timeout for SELECT statements.
Here is my script:
# process id 155 (the binary log thread) is excluded below
mysql -e 'show processlist\G' |\
egrep -B5 'Time: [6-9]{3,10}' |\
grep 'Id:' |\
cut -d':' -f2 |\
grep -v '155' |\
sed 's/^ //' |\
while read id
do
mysql -e "kill $id;"
done