I am using MySQL Workbench 6.3 CE. I want to take snapshots of the MySQL status variables.
I want to store the values of the status variables every 1 second during the execution of a query.
I can simply show the variables using 'SHOW GLOBAL STATUS', but I want to execute it automatically every 1 second.
You can run a procedure and a query at the same time by having two separate connections. Workbench is a handy tool, but you should learn to use the mysql command-line tool, too.
The query is rather simple. INDEX(l_shipdate) is likely to be the best for it.
The real way to speed up the query (assuming that is your ultimate goal) is to build and maintain a "summary table" of daily or monthly subtotals. Then sum the sums and sum the counts; the average is SUM(sums)/SUM(counts).
More discussion: http://mysql.rjweb.org/doc.php/summarytables
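A minimal sketch of the idea, assuming a TPC-H-style lineitem table (all table and column names here are illustrative, not from the question):

-- one row per day instead of one row per line item
CREATE TABLE daily_summary (
  dy        DATE PRIMARY KEY,
  sum_price DECIMAL(20,2),
  cnt       INT
);

-- refresh periodically (e.g. after each nightly load)
INSERT INTO daily_summary
  SELECT l_shipdate, SUM(l_extendedprice), COUNT(*)
  FROM lineitem
  GROUP BY l_shipdate
ON DUPLICATE KEY UPDATE
  sum_price = VALUES(sum_price),
  cnt       = VALUES(cnt);

-- the average over any range = SUM(sums) / SUM(counts)
SELECT SUM(sum_price) / SUM(cnt) AS avg_price
FROM daily_summary
WHERE dy BETWEEN '1995-01-01' AND '1995-12-31';

The range query then scans one row per day instead of every line item in the range.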
Be cautious about running (via EVENT or cron) any code that might take longer than the interval time. If it gets behind, it is likely to cascade and bring the server down, or at least slow things down severely. For that reason, I much prefer the WHILE loop.
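For the once-a-second snapshots asked about above, a minimal sketch of the WHILE-loop approach (the table and procedure names are made up; performance_schema.global_status is the 5.7+ location, older servers expose information_schema.GLOBAL_STATUS instead):

CREATE TABLE status_snapshots (
  taken_at  DATETIME,
  var_name  VARCHAR(64),
  var_value VARCHAR(1024)
);

DELIMITER //
CREATE PROCEDURE snapshot_status(IN seconds INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < seconds DO
    -- copy the current status variables, stamped with the capture time
    INSERT INTO status_snapshots
      SELECT NOW(), VARIABLE_NAME, VARIABLE_VALUE
      FROM performance_schema.global_status;
    SET i = i + 1;
    DO SLEEP(1);
  END WHILE;
END //
DELIMITER ;

Run CALL snapshot_status(60); in one connection while the query under test runs in the other.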
I have to update a row in a table (InnoDB) and then, right after, select the last row that I updated and make an insert. If the connection is too slow (for the update statement), can the select statement get the wrong row? Assume that I'm using two different queries.
Are you using SQL to run your script, or are you running it somewhere else (e.g. PHP, Python, C#)?
A script in SQL should* always complete one statement before moving on to the next, but if you're unsure you could call something like the SLEEP() function or an equivalent wait/delay function to pause before you run your second statement.
*I say should because I've seen some extremely rare random cases, usually with longer-running queries, that don't. If your first job takes a long time to complete, it may be worth the effort to schedule the first job in a job agent and then schedule the second job for later that day.
MySQL does not keep records of row insertion order. Any algorithm that's based on last registry that I updated must implement its own means to gather the required information. If it doesn't, it will get the wrong row sooner or later. (Network speed is probably not as relevant as concurrent access.)
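A minimal sketch of one such means: identify the row by its key and keep the statements in one transaction, rather than relying on "the last row touched" (table and column names here are purely illustrative):

START TRANSACTION;
UPDATE orders SET status = 'shipped' WHERE id = 42;
-- select by the same key, not by position or recency
SELECT * FROM orders WHERE id = 42;
INSERT INTO order_log (order_id, logged_at)
  SELECT id, NOW() FROM orders WHERE id = 42;
COMMIT;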
I have a MySQL server with many InnoDB tables.
I have a background script that does a LOT of deletes/inserts in one request: it deletes many millions of rows from table 2, then inserts many millions of rows into table 2 using data from table 1:
INSERT INTO table2 (date)
SELECT date FROM table1 GROUP BY date;
(The actual query is more complex, but this shows the kind of request I am running.)
At the same time, I am going to run a second background script that does about a million INSERT or UPDATE requests, but separately (I mean, I execute one UPDATE query, then one INSERT query, and so on) on table 3.
My issue is that each script is fast when it runs alone, let's say 30 minutes each, so 1 hour total. But when the two scripts run at the same time, it is VERY slow, taking something like 5 hours instead of 1.
So first, I would like to know what can cause this. Is it because of I/O performance? (Like MySQL writing to two different tables, so it is slow to switch between the two?)
And how could I fix this? If I could pause the big INSERT query while my second background script is running, that would be great, for example... But I can't find a way to do something like that.
I am not an expert at MySQL administration. If you need more information, please let me know!
Thank you!!
30 minutes for a million INSERTs is not fast. Do you have an index on the date column (or whatever column you are pivoting on)?
Regarding your original question: it's difficult to say much without knowing the details of both your scripts and the table structures, but one possible reason why the scripts run reasonably quickly on their own is that you are doing similar kinds of SELECT queries, which might be getting cached by MySQL and then reused for subsequent queries. But if you run two scripts in parallel, the SELECTs for the corresponding queries might not stay in the cache (because two concurrent processes keep sending new queries all the time).
You might want to explicitly disable the cache for queries which you are sure you only run once (using the SQL_NO_CACHE modifier, as sketched below) and see if it changes anything. But I'd look into indexing and into your table structure first, because 30 minutes seems extremely slow :) E.g. you might also want to introduce partitioning by date for your tables, if you know that you always select entries in a given period (say, by month). The exact tricks depend on your data.
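For illustration, reusing the table1 from above (the partition boundaries are made up, and note that the partitioning column must be part of every unique key on the table):

-- bypass the query cache for a one-off query
SELECT SQL_NO_CACHE date FROM table1 GROUP BY date;

-- partition by date so range queries only touch the relevant partitions
ALTER TABLE table1 PARTITION BY RANGE (TO_DAYS(date)) (
  PARTITION p2015 VALUES LESS THAN (TO_DAYS('2016-01-01')),
  PARTITION p2016 VALUES LESS THAN (TO_DAYS('2017-01-01')),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);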
UPDATE: Another issue might be that both your scripts work with the same table (table 1), and the default transaction isolation level in MySQL is REPEATABLE READ, as far as I recall. So it might be that one query is waiting until the other is done with the table to satisfy the isolation level. You might want to lower the transaction isolation level if you are sure that table 1 is not changed while the scripts are working on it.
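For example, for the session running the second script (READ COMMITTED is one level below REPEATABLE READ):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;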
You can use the event scheduler to make MySQL launch these queries at different hours of the day. Another Stack Overflow question has an example of how to do it: MySQL Event Scheduler on a specific time everyday
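A hedged sketch of that approach, reusing the illustrative table names from above (the event name and start time are made up):

SET GLOBAL event_scheduler = ON;

CREATE EVENT nightly_rebuild
ON SCHEDULE EVERY 1 DAY
STARTS '2016-01-01 03:00:00'
DO
  INSERT INTO table2 (date)
  SELECT date FROM table1 GROUP BY date;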
Another thing to keep in mind is to look at the EXPLAIN plan to see why the query is so slow.
I have a routine in MySQL that is very long and has multiple SELECT, INSERT, and UPDATE statements in it, with some IFs and REPEATs. It had been running fine until lately, when it started hanging and taking over 20 seconds to complete (which is unacceptable considering it used to take 1 second or so).
What is the quickest and easiest way for me to find out where in the routine the bottleneck is? Basically the routine is getting stopped up at some point... how can I find out where that is without breaking the routine apart and testing each section one by one?
If you use Percona Server (a free distribution of MySQL with many enhancements), you can make the slow-query log record times for individual queries, using the log_slow_sp_statements configuration variable. See http://www.percona.com/doc/percona-server/5.5/diagnostics/slow_extended_55.html
If you're using stock MySQL, you can add statements in the stored procedure to set a series of session variables to the value returned by the SYSDATE() function. Use a different session variable at different points in the SP. Then, after a test execution of the SP, inspect the values of these session variables to see which section took the longest.
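A minimal sketch of that trick (the section contents are placeholders; SYSDATE() is used because, unlike NOW(), it returns the time at execution rather than at statement start):

-- inside the stored procedure:
SET @t0 = SYSDATE();
-- ... first section of the routine ...
SET @t1 = SYSDATE();
-- ... second section of the routine ...
SET @t2 = SYSDATE();

-- session variables survive the CALL, so inspect them afterwards:
SELECT TIMESTAMPDIFF(MICROSECOND, @t0, @t1) AS section1_us,
       TIMESTAMPDIFF(MICROSECOND, @t1, @t2) AS section2_us;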
To analyze the query, look at its execution plan. It is not always an easy task, but with a bit of reading you will find the solution. Here are some useful links:
http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html
http://dev.mysql.com/doc/refman/5.0/en/explain.html
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
http://www.lornajane.net/posts/2011/explaining-mysqls-explain
What is the maximum number of queries that can be run on a dedicated server with 4 GB of RAM in one instance?
I am running a cron job that may contain close to one hundred thousand queries. The queries run in a loop, and they are simple queries selecting 3 integer fields.
Please advise.
42, of course. The 43rd query breaks it. No, really :-)
There is no upper limit on the number of queries -- the loop can run all day. Unless there is some form of parallel code (i.e. threads), each query from the cron-job will run in series (sends query, processes result, sends query, processes...) and thus the number of total queries is irrelevant in terms of memory requirements.
There is, however, a potential (if absurdly high) limit with updates/inserts/deletes that run within a single transaction. This is because the transaction needs to be able to be rolled back. (I am not sure whether this limit is bound by storage, main memory, or something else.)
Happy coding.
Since this is a long-running job, take note: If the cron-job "runs into" the next cron-job (does not complete in time), then serious issues can result as the same "job" may be executing multiple times! This ugly situation can quickly spiral out of control if the cron-jobs keep cascading into each other: each concurrently running "job" will place more burden on the database server.
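One hedged way to guard against that overlap from within MySQL itself is a named lock (the lock name is arbitrary):

-- at the start of the cron-job's script:
SELECT GET_LOCK('my_cron_job', 0);   -- returns 1 if acquired, 0 if a previous run still holds it
-- run the job's queries only if the result was 1
-- at the end of the script:
SELECT RELEASE_LOCK('my_cron_job');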
I have MySQL used on a production server for a PHP webshop application.
Sometimes it works very slowly, so I am going to change the indexes on several tables.
But before that, I have to make some kind of "snapshot" of the current performance (several times per day). After that, I will change the indexes and create a new "performance snapshot". Then I will make some more changes in the database and take another "performance snapshot".
How can I make that "performance snapshot"? Is it possible to use some kind of tool, or to check some logs, or...?
Please let me know how to do that.
Thank you in advance!
If you want to buy a commercial product, there is the MySQL Query Analyzer.
Otherwise, you could use the SQL Profiler which is already included with MySQL.
The SQL Profiler is built into the database server and can be dynamically enabled/disabled via the MySQL client utility. To begin profiling one or more SQL queries, simply issue the following command:
mysql> set profiling=1;
Thereafter, you will see the duration of each of your queries as you run them.
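For example (the SELECT is a placeholder for your own query):

mysql> SELECT COUNT(*) FROM some_table;
mysql> SHOW PROFILES;                 -- lists the recent queries with their durations
mysql> SHOW PROFILE FOR QUERY 1;      -- per-stage timing breakdown for one query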
Other things worth checking:
- the slow query log, and queries not using indexes
- the query cache hit rate
- the InnoDB monitor
- and of course your database's hard-disk I/O, memory usage, ...
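For the first item, a quick sketch (the 1-second threshold is an assumption; these are dynamic variables on MySQL 5.1+):

SET GLOBAL slow_query_log = ON;                 -- start logging slow queries
SET GLOBAL long_query_time = 1;                 -- "slow" = anything over 1 second
SET GLOBAL log_queries_not_using_indexes = ON;  -- also log queries that use no index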