I have queries that return thousands of results. Is it possible to show only the query time, without the actual results, in the MySQL console or from the command line?
Use SET profiling = 1; at the mysql prompt.
Refer to the MySQL documentation on SHOW PROFILE for more details.
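A minimal session might look like this (table1 is just a placeholder table; the id passed to SHOW PROFILE is whatever Query_ID SHOW PROFILES reports):
SET profiling = 1;
SELECT * FROM table1;
SHOW PROFILES;            -- lists Query_ID and Duration for each profiled statement
SHOW PROFILE FOR QUERY 1; -- per-stage timing breakdown for that statement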
It's not possible to get the execution time without actually executing the query and fetching the result.
See the explanation of why we cannot get the execution time without actual query execution.
If you're using Linux:
Create a file query.sql and enter the query you want to test into it, e.g. select * from table1.
Run this command:
time mysql your_db_name -u'db_user_name' -p'your_password' < query.sql > /dev/null
The output will look something like this:
real 0m4.383s
user 0m0.022s
sys 0m0.004s
the "real" line is what you're looking at. In the above example, the query took 4.38 seconds.
Obviously, entering your DB password on the command line is not such a great idea, but will do as a quick and dirty workaround.
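If you'd rather keep the password off the command line, one alternative (a sketch; the path and credentials are placeholders) is an option file, which the mysql client reads automatically:
# contents of ~/.my.cnf, made private with: chmod 600 ~/.my.cnf
[client]
user=db_user_name
password=your_password
# then the password can be dropped from the command:
time mysql your_db_name < query.sql > /dev/null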
I need to run a MySQL script that, according to my benchmarking, should take over 14 hours to run. The script is updating every row in a 332715-row table:
UPDATE gene_set SET attribute_fk = (
SELECT id FROM attribute WHERE
gene_set.name_from_dataset <=> attribute.name_from_dataset AND
gene_set.id_from_dataset <=> attribute.id_from_dataset AND
gene_set.description_from_dataset <=> attribute.description_from_dataset AND
gene_set.url_from_dataset <=> attribute.url_from_dataset AND
gene_set.name_from_naming_authority <=> attribute.name_from_naming_authority AND
gene_set.id_from_naming_authority <=> attribute.id_from_naming_authority AND
gene_set.description_from_naming_authority <=> attribute.description_from_naming_authority AND
gene_set.url_from_naming_authority <=> attribute.url_from_naming_authority AND
gene_set.attribute_type_fk <=> attribute.attribute_type_fk AND
gene_set.naming_authority_fk <=> attribute.naming_authority_fk
);
(The script is unfortunate; I need to transfer all the data from gene_set to attribute, but first I must correctly set a foreign key to point to attribute).
I haven't been able to successfully run it using this command:
nohup mysql -h [host] -u [user] -p [database] < my_script.sql
For example, last night it ran for over 10 hours, but then the SSH connection broke:
Write failed: Broken pipe
Is there any way to run this script in a way to better ensure that it finishes? I really don't care how long it takes (1 day vs 2 days doesn't really matter) so long as I know it will finish.
The quickest way might be to run it in a screen or tmux session.
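For example, with screen (the session name long_update is arbitrary; tmux works the same way with tmux new -s / tmux attach -t):
screen -S long_update          # start a named session on the server
mysql -h [host] -u [user] -p [database] < my_script.sql
# detach with Ctrl-a d; the script keeps running even if the ssh connection drops
screen -r long_update          # reattach later to check progress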
Expanding on my comment: you're getting poor performance on a 350k-record update statement because you're setting the value from the result of a subquery instead of updating as a set, so the subquery runs once for each row. Update as such:
UPDATE gene_set g JOIN attribute a ON <all where clauses> SET g.attribute_fk = a.id
This doesn't answer your question per se, but I'll be interested to know how much faster it runs.
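Spelled out with the conditions from the original statement, the set-based version would look roughly like this (untested, so treat it as a sketch and verify it on a copy first):
UPDATE gene_set g
JOIN attribute a
  ON  g.name_from_dataset <=> a.name_from_dataset
  AND g.id_from_dataset <=> a.id_from_dataset
  AND g.description_from_dataset <=> a.description_from_dataset
  AND g.url_from_dataset <=> a.url_from_dataset
  AND g.name_from_naming_authority <=> a.name_from_naming_authority
  AND g.id_from_naming_authority <=> a.id_from_naming_authority
  AND g.description_from_naming_authority <=> a.description_from_naming_authority
  AND g.url_from_naming_authority <=> a.url_from_naming_authority
  AND g.attribute_type_fk <=> a.attribute_type_fk
  AND g.naming_authority_fk <=> a.naming_authority_fk
SET g.attribute_fk = a.id;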
Here is how I did it in the past, when I ran monolithic ALTER queries on a remote server that sometimes took ages:
mysql -h [host] -u [user] -p [database] < my_script.sql > result.log 2>&1 &
This way you don't need to wait for it, as it will finish in its own time. You can also add SELECT NOW() at the start and end of my_script.sql to find out how long it took, if you're interested.
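For example, my_script.sql could be bracketed like this so result.log records the start and end times (a sketch; the middle is whatever your script already does):
SELECT NOW() AS started_at;
-- ... your UPDATE statement(s) here ...
SELECT NOW() AS finished_at;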
Other things to consider, if applicable:
Why does this query take so long? Can we improve it (e.g. disable key checks, prepare the data offline and update via a temp table)?
Can we break the query up to run in batches?
What is the impact on the rest of the DB?
etc.
If you have SSH access to the server, you could copy the script over and run it there with the following lines:
#copy over to tmp dir
scp my_script.sql user@remoteHost:/tmp/
#execute script on remote host
ssh -t user@remoteHost "nohup mysql \
-h localhost -u [user] -p [database] < /tmp/my_script.sql &"
Maybe you can try to do the 300k updates with frequent commits instead of one single huge update. That way, in case anything fails, you keep the changes that have already been applied.
With some dynamic SQL you can generate all the lines in one go, then copy the file to your server...
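A sketch of the batch/commit idea, assuming gene_set has an auto-increment primary key id (it may not); the ON clause is abbreviated to one column here, but the real statement needs all ten <=> comparisons from the original query:
-- process the table in 10000-row windows so each statement commits on its own;
-- repeat with the next window (10001-20000, 20001-30000, ...) until the table is done
UPDATE gene_set g
JOIN attribute a ON g.name_from_dataset <=> a.name_from_dataset  -- ...plus the other nine conditions
SET g.attribute_fk = a.id
WHERE g.id BETWEEN 1 AND 10000;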
I'm having a really hard time believing this question has never been asked before; it MUST have been! I'm working on a batch file that needs to run some SQL commands. All the tutorials explaining this DO NOT WORK (referring to this link: Pass parameters to sql script, which someone will undoubtedly mention)! I've tried other posts on this site verbatim and still nothing works.
The way I see it, there are two ways I can approach this:
1. Either figure out how to call my basic MySQL script and specify a parameter, or..
2. Find an equivalent "USE <database>;" command that works in batch
My Batch file so far:
:START
@ECHO off
:Set_User
set usrCode = 0
mysql -u root SET @usrCode = '0'; \. caller.sql
Simply put, I want to pass 'usrCode' to my MySQL script 'caller.sql', which looks like this:
USE `my_db`;
CALL collect_mismatch(@usrCode);
I know that procedures are a whole other topic to get into, but assume that the procedure is working just fine. I just can't get my parameter from the batch file to MySQL.
Ideally I would like to have the 'USE' & 'CALL' commands in my batch file, but I can't find anything that lets me select a database in batch before CALLing my procedure. That's when I tried the above link, which boasts a simple command-line entry and you're off to the races, but that isn't the case.
Any help would be greatly appreciated.
This will work:
echo SET #usrCode = '0'; > params.sql
type params.sql caller.sql | mysql -u root dbname
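Putting it together, a minimal caller.bat might look like this (my_db and the file locations are assumptions; adjust them to your setup):
@ECHO off
REM write the parameter assignment into a small prelude script
echo SET @usrCode = '0'; > params.sql
REM concatenate the prelude with the real script and pipe both into mysql
type params.sql caller.sql | mysql -u root my_db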
I am performing some MySQL queries that have very large result sets. I would like to see how long they take, but I don't want all the output to be printed on my terminal because it takes up a lot of space and time. I can do this by writing a script that performs and times the query, but I was wondering if there was a way to do this directly through MySQL on the terminal. Thanks.
Change the pager in mysql as indicated here: http://www.mysqlperformanceblog.com/2013/01/21/fun-with-the-mysql-pager-command/
mysql> pager cat > /dev/null will discard the output, and mysql> pager will put it back.
Wrap your query in set @foo = (select count(*) from ( ..... ) foo)
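For example (big_table and the WHERE clause are placeholders for whatever you're timing; the rows never reach the terminal, but mysql still reports the elapsed time for the outer statement):
SET @foo = (SELECT COUNT(*) FROM (
    SELECT * FROM big_table WHERE some_col > 100
) foo);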
Just run the mysql console utility, then enter source file_name (where file_name is a file containing your SQL commands).
In PowerShell, how do I execute my mysql script so that the results are piped into a csv file? The results of this script is just a small set of columns that I would like copied into a csv file.
I can have it go directly to the shell by doing:
mysql> source myscript.sql
And I have tried various little things like:
mysql> source myscript.sql > mysql.out
mysql> source myscript.sql > mysql.csv
in infinite variation, and I just get errors. My db connection is alright because I can do basic table queries from the command line, etc. I haven't been able to find a solution on the web so far either...
Any help would be really appreciated!
You seem not to be running PowerShell, but the mysql command-line tool (perhaps you started it from a PowerShell console, though).
Note also that the mysql command-line tool cannot export directly to CSV.
However, to redirect the output to a file just run
mysql mydb < myscript.sql >mysql.out
or e.g.
echo select * from mytable | mysql mydb >mysql.out
(and whatever arguments to mysql you need, like username, hostname)
Are you looking for SELECT INTO OUTFILE? dev.mysql.com/doc/refman/5.1/en/select.html – Pekka
Yep, SELECT INTO OUTFILE worked! But to make sure you get column names you also need to do something like:
SELECT 'a', 'b', 'c'
UNION ALL
SELECT a, b, c
FROM actual;
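Combined with INTO OUTFILE, the whole thing might look like this (the output path, column names, and table name are placeholders; note that the file is written on the database server and the account needs the FILE privilege):
SELECT 'a', 'b', 'c'
UNION ALL
SELECT a, b, c FROM actual
INTO OUTFILE '/tmp/mysql.csv'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';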
I've run into a problem where I run some query and the mysqld process starts using 100% CPU power, without ending. I want to pinpoint this query. The problem is that log/development.log contains only queries that have finished. Any idea?
I think you have a few options for this. The first is really to take a look at your development.log and see which actions are causing it. Look at the queries you're asking Rails to run and try to pinpoint that particular query. If it's taking a large amount of time, it probably means you're doing something like issuing n+1 queries, missing indexes, or some other performance killer.
You say that the dev log only has queries that have finished. Can't you work out what the next query to run would be?
Your other options involve starting mysqld with a log (I think the names of some of these options have changed in newer versions):
mysqld --log[=file_name] --log-slow-queries[=file_name]
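On newer servers the same logging can be switched on at runtime through system variables instead of startup options (a sketch; the file path and slow-query threshold are up to you):
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';     -- log every statement as it arrives
SET GLOBAL slow_query_log = 'ON';  -- log statements slower than long_query_time
SET GLOBAL long_query_time = 1;    -- threshold in seconds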
Showing the current statement list using processlist from within mysql:
show processlist;
To prevent stuff like this happening again, you could also take some time to look at a Rails performance monitor like RPM from New Relic (http://www.newrelic.com/).
I hope this helps!
You could take a look at running/unfinished statements via the
show processlist;
command.
If you have access to MySQL, consider the SQL query
SHOW PROCESSLIST
Or from the command line:
mysqladmin processlist
Alternatively, the most powerful way is to override the 'execute' method of the ActiveRecord::Base connection instance. This article shows the general approach:
http://www.misuse.org/science/2006/12/12/sql-logging-in-rails/
You put this code into application.rb:
# define SQL_LOG_FILE, SQL_LOG_MAX_LINES
connection = ActiveRecord::Base.connection
class << connection
  alias :original_exec :execute

  def execute(sql, *name)
    # try to log the sql command, but ignore any errors that occur in this block;
    # we log before executing, in case the execution raises an error
    begin
      lines = if File::exists?(SQL_LOG_FILE) then IO::readlines(SQL_LOG_FILE) else [] end
      log = File.new(SQL_LOG_FILE, "w+")
      # keep the log to the specified maximum number of lines
      if lines.length > SQL_LOG_MAX_LINES
        lines.slice!(0..(lines.length - SQL_LOG_MAX_LINES))
      end
      lines << Time.now.strftime("%x %I:%M:%S %p") + ": " + sql + "\n"
      log.write(lines.join)
      log.close
      $sql_log = sql
    rescue Exception => e
      # swallow logging errors so they never interfere with the actual query
    end
    # execute the original statement
    original_exec(sql, *name)
  end # def execute
end # class <<