A program I inherited runs about 800 individual queries roughly once a minute. I was able to capture all of these queries, and I want to time how long they take to run in sequence, to determine whether it's an issue I need to address or whether it is fine as is.
The SQL queries are all simple SELECT queries with a few where clauses:
SELECT DISTINCT roles.desc FROM descRoles roles, listUsers users, listUsers mapping WHERE mapping.roleId = roles.roleId AND mapping.idx = users.idx AND users.UserName = 'fakeNameHere';
If there's a typo in my select query, please ignore it; they run fine. I want to know whether there is something I can put before and after all 800 queries to time how long it takes to run all of them. Also, I'd like to turn off the result tabs for them, since after about 40 queries I get a message that my maximum number of result tabs has been reached; that also seems necessary.
Workbench is not the tool for timing queries. What you want is mysqlslap: https://dev.mysql.com/doc/refman/8.0/en/mysqlslap.html
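If you just want a quick-and-dirty measurement without mysqlslap, another option is to run the captured queries from a small script and time the whole batch. Here is a minimal sketch; it uses Python's sqlite3 with an in-memory database as a stand-in for the real MySQL connection, and the table, data, and queries are invented for illustration:

```python
import sqlite3
import time

# Stand-in for the real MySQL connection; swap in your MySQL driver of choice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listUsers (idx INTEGER, UserName TEXT)")
conn.executemany("INSERT INTO listUsers VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(100)])

# Pretend these are the 800 captured SELECT statements.
queries = [f"SELECT idx FROM listUsers WHERE UserName = 'user{i % 100}'"
           for i in range(800)]

start = time.perf_counter()
for q in queries:
    # fetchall() forces the full result set to be consumed, so the timing
    # includes retrieving the rows, not just dispatching the statement.
    conn.execute(q).fetchall()
elapsed = time.perf_counter() - start
print(f"Ran {len(queries)} queries in {elapsed:.3f} s")
```

A side benefit: because the results are fetched in code rather than displayed, no result tabs pile up at all.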
I am currently trying to run a JOIN between two tables in a local MySQL database and it's not working. Below is the query; I am even limiting it to 10 rows just to run a test. After running this query for 15-20 minutes, it tells me "Error Code: 2013. Lost connection to MySQL server during query". My computer is not going to sleep, and I'm not doing anything to interrupt the connection.
SELECT rd_allid.CreateDate, rd_allid.SrceId, adobe.Date, adobe.Id
FROM rd_allid JOIN adobe
ON rd_allid.SrceId = adobe.Id
LIMIT 10
The rd_allid table has 17 million rows of data and the adobe table has 10 million. I know this is a lot, but I have a strong computer. My processor is an i7 6700 3.4GHz and I have 32GB of RAM. I'm also running this on a solid-state drive.
Any ideas why I cannot run this query?
"Why I cannot run this query?"
There's not enough information to determine definitively what is happening. We can only make guesses and speculations, and offer some suggestions.
I suspect MySQL is attempting to materialize the entire resultset before the LIMIT 10 clause is applied. For this query, there's no optimization for the LIMIT clause.
And we might guess that there is not a suitable index for the JOIN operation, which is causing MySQL to perform a nested loops join.
We also suspect that MySQL is encountering some resource limitation which is causing the session to be terminated. Possibly it is filling up all the space in /tmp (that usually throws an error, something like "invalid/corrupted myisam table '#tmpNNN'", or something of that ilk). Or it could be some other resource constraint. Without doing an analysis, we're just guessing.
It's possible MySQL wrote something to the error log (hostname.err). I'd check there.
But setting aside whatever condition MySQL is running into (the answer to the question "Why can't I run this query?"), I'm seriously questioning the purpose of the query. Why is that query being run? Why is returning that particular resultset important?
There are several possible queries we could execute. Some of those will run a long time, and some will be much more performant.
One of the best ways to investigate query performance is to use MySQL EXPLAIN. That will show us the query execution plan, revealing the operations that MySQL will perform, in what order, and which indexes will be used.
We can make some suggestions as to possible indexes to add, based on the query shown, e.g. an index on adobe (Id, Date).
And we can make some suggestions about modifications to the query (e.g. adding a WHERE clause, using a LEFT JOIN, incorporating inline views, etc.). But we don't have enough of a specification to recommend a suitable alternative.
You can try something like:
SELECT rd_allidT.CreateDate, rd_allidT.SrceId, adobeT.Date, adobeT.Id
FROM
(SELECT CreateDate, SrceId FROM rd_allid ORDER BY SrceId LIMIT 1000) rd_allidT
INNER JOIN
(SELECT Id, Date FROM adobe ORDER BY Id LIMIT 1000) adobeT ON adobeT.Id = rd_allidT.SrceId;
This may help you get faster response times.
Also, if you are not interested in the full relation, you can add WHERE clauses inside the derived tables; they are applied before the INNER JOIN, which makes the query faster still.
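The EXPLAIN suggestion above, and the effect of a suitable index on the JOIN, can be sketched in miniature. This illustration uses Python's sqlite3 and its EXPLAIN QUERY PLAN as a stand-in for MySQL's EXPLAIN; the table names follow the question, but the schema details are assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rd_allid (CreateDate TEXT, SrceId INTEGER)")
conn.execute("CREATE TABLE adobe (Date TEXT, Id INTEGER)")

query = ("SELECT rd_allid.CreateDate, adobe.Date "
         "FROM rd_allid JOIN adobe ON rd_allid.SrceId = adobe.Id")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # the human-readable step description is in column 3.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan(query)   # plan with no user-created index on the join column

# The kind of index suggested above, e.g. on adobe (Id, Date):
conn.execute("CREATE INDEX idx_adobe_id_date ON adobe (Id, Date)")
after = plan(query)    # now the join can probe the index instead

print(before)
print(after)
```

The same idea applies in MySQL: run EXPLAIN before and after adding the index and compare the join type and the key column.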
I started getting the following errors as the size of my database grew. It's at about 4GB now for this table with millions of rows.
Can't Laravel handle large tables?
$count = DB::table('table1')->distinct('data')->count(["data"]);
$count2 = DB::table('table2')->distinct('data')->count(["data"]);
SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. (SQL: select count(distinct data) as aggregate from data)
I think the table is very large, so the query will take some time to finish. You are running another huge query directly afterwards, which also needs a long time to execute fully. I think you need to run the second query only once the first one finishes.
Please try this:
$count = DB::table('table1')->distinct('data')->count(["data"]);
if($count){
$count2 = DB::table('table2')->distinct('data')->count(["data"]);
}
But this requires a result from the first query.
Try using:
if($count>-1)
or
if(DB::table('table1')->distinct('data')->count(["data"]) >-1){
$count2 = DB::table('table2')->distinct('data')->count(["data"]);
}
For the MySQL statement, this might be faster:
$count = DB::select('SELECT COUNT(DISTINCT data) AS aggregate FROM table1')[0]->aggregate;
And the best solution may be to run it as a raw statement like this, since Laravel allows that too.
Please tell me if you encounter any problems.
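As an aside on the error message itself: it complains that the first statement's result set was not fully consumed before the next statement was issued on the same connection. The fix PDO suggests (fetchAll, or buffered queries) amounts to draining the first result set first. A sketch of that pattern, using Python's sqlite3 in place of PDO, with the table and column names taken from the question and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (data TEXT)")
conn.execute("CREATE TABLE table2 (data TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?)", [("a",), ("a",), ("b",)])
conn.executemany("INSERT INTO table2 VALUES (?)", [("x",), ("y",)])

# Drain the first result set completely (the fetchAll() analogue)
# before issuing the second statement on the same connection.
cur = conn.execute("SELECT COUNT(DISTINCT data) FROM table1")
count = cur.fetchall()[0][0]

cur = conn.execute("SELECT COUNT(DISTINCT data) FROM table2")
count2 = cur.fetchall()[0][0]

print(count, count2)   # 2 2
```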
I read some explanations about the "Sending data" status, but I still don't understand whether the query is running or not. They say "Sending data" means the server is sending some data to the client, but I really don't know which data is being sent.
What does it mean when MySQL is in the state "Sending data"?
I ran a query using MySQL Workbench, and while the query was executing, Workbench timed out (after 10 minutes). Then I ran the "show processlist" command to see whether the query was still executing. It says my query's status is "sending data".
By the way, the logs table has 10 million records, so this query could take 10 hours to finish. I just want to know whether my query is really still executing. The query:
update logs join user
set logs.userid=user.userid
where logs.log_detail LIKE concat("%",user.userID,"%");
When it's in the process list, it is still running. Your query is just running very slowly, I assume, because you're doing a cross join (which means you combine every row of one table with every row of the other table, which can produce quite an enormous amount of data; therefore I further assume that your query does not do what you think it does), and no index can be used for the WHERE clause. You're probably doing a full table scan on a very huge amount of data. You can verify this by doing an explain <your query>;.
To avoid the cross join, specify the join condition in an ON clause, like:
update logs join user ON logs.userid = user.userid
set logs.whatever = user.whatever
where logs.log_detail LIKE concat("%",user.userID,"%");
I have a query running in vBulletin system, that fetches latest threads that have image attachments, along with their first attachment ID.
Here is the query:
SELECT thread.threadid,
thread.title,
thread.postuserid,
thread.postusername,
thread.dateline,
thread.replycount,
post.pagetext,
(
SELECT attachment.attachmentid
FROM `vb_attachment` AS attachment
LEFT JOIN `vb_filedata` AS data
ON data.filedataid=attachment.filedataid
WHERE attachment.contentid=thread.firstpostid
AND attachment.contenttypeid=1
AND data.extension IN('jpg','gif','png')
AND data.thumbnail_filesize>0
ORDER BY attachmentid ASC
LIMIT 1
) AS firstattachmentid
FROM `vb_thread` AS thread
LEFT JOIN `vb_post` AS post
ON post.postid=thread.firstpostid
WHERE thread.forumid IN(331, 318)
HAVING firstattachmentid>0
ORDER BY thread.dateline DESC
LIMIT 0, 5
You can see the EXPLAIN results for the query here:
The problem: the query usually runs in 0.00001 seconds, so almost instantly, as it is a well-optimized query overall. However, after creating a new thread (even if the thread is not in forum IDs 331 or 318), it takes 40+ seconds (executed directly from a MySQL GUI), and even the EXPLAIN query takes 2+ seconds! When the query is slow, EXPLAIN shows the same results regarding index usage.
After running the same query two-three times, it is back to usual speed.
If anyone could explain what happens, and how to fix the problem, I would appreciate the help.
Thanks.
MySQL caches the results of queries so that it can return the results of the same query more quickly later.
Adding a new thread invalidates that cached result, so MySQL has to rebuild the query cache entry the next time the query is run.
I have found MySQL subqueries to perform badly. Some tactics I have used to avoid subqueries:
Restructure the query as a join without subqueries.
Restructure the query as several queries.
Return more data than you need and then do some work with this data in your application.
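The first tactic (restructuring a correlated subquery as a join) can be sketched with a toy version of the thread/attachment schema. This is a minimal illustration using Python's sqlite3 in place of MySQL, with invented data; a correlated subquery is re-evaluated once per outer row, while the join form is evaluated in one pass:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thread (threadid INTEGER, firstpostid INTEGER)")
conn.execute("CREATE TABLE attachment (attachmentid INTEGER, contentid INTEGER)")
conn.executemany("INSERT INTO thread VALUES (?, ?)", [(1, 10), (2, 20)])
conn.executemany("INSERT INTO attachment VALUES (?, ?)",
                 [(100, 10), (101, 10), (200, 20)])

# Correlated subquery: runs once per thread row.
sub = conn.execute("""
    SELECT t.threadid,
           (SELECT MIN(a.attachmentid) FROM attachment a
            WHERE a.contentid = t.firstpostid) AS firstattachmentid
    FROM thread t ORDER BY t.threadid
""").fetchall()

# Same result restructured as a join with GROUP BY.
joined = conn.execute("""
    SELECT t.threadid, MIN(a.attachmentid) AS firstattachmentid
    FROM thread t JOIN attachment a ON a.contentid = t.firstpostid
    GROUP BY t.threadid ORDER BY t.threadid
""").fetchall()

print(sub)      # [(1, 100), (2, 200)]
print(joined)   # [(1, 100), (2, 200)]
```

Note one design difference: the subquery form keeps threads with no attachments (yielding NULL), while the inner-join form drops them, which matches the HAVING firstattachmentid>0 filter in the original query.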
Recently I was pulled into the boss-man's office and told that one of my queries was slowing down the system. I then was told that it was because my WHERE clause began with 1 = 1. In my script I was just appending each of the search terms to the query so I added the 1 = 1 so that I could just append AND before each search term. I was told that this is causing the query to do a full table scan before proceeding to narrow the results down.
I decided to test this. We have a user table with around 14,000 records. The queries were run five times each using both phpMyAdmin and PuTTY. In phpMyAdmin I limited the queries to 500, but in PuTTY there was no limit. I tried a few different basic queries and clocked the times on them. I found that 1 = 1 seemed to make the query faster than a query with no WHERE clause at all. This is on a live database, but the results seemed fairly consistent.
I was hoping to post on here and see if someone could either break down the results for me or explain to me the logic for either side of this.
Well, your boss-man and his information source are both idiots. Adding 1=1 to a query does not cause a full table scan. The only thing it does is make query parsing take a minuscule amount longer. Any decent query plan generator (including the MySQL one) will realize this condition is a no-op and drop it.
I tried this on my own database (solar panel historical data), nothing interesting out of the noise.
mysql> select sum(KWHTODAY) from Samples where Timestamp >= '2010-01-01';
seconds: 5.73, 5.54, 5.65, 5.95, 5.49
mysql> select sum(KWHTODAY) from Samples where Timestamp >= '2010-01-01' and 1=1;
seconds: 6.01, 5.74, 5.83, 5.51, 5.83
Note I used ajreal's query cache disabling.
First of all, did you run SET SESSION query_cache_type=off; during both tests?
Secondly, your test queries in phpMyAdmin and PuTTY (the mysql client) are different, so how can you compare them?
You should run the same query in both places.
Also, you cannot assume that phpMyAdmin has the query cache off. The time displayed in phpMyAdmin includes PHP processing, which you should avoid as well.
Therefore, you should just do the testing in the mysql client instead.
This isn't a really accurate way to determine what's going on inside MySQL. Things like caching and network variations could skew your results.
You should look into using "explain" to find out what query plan MySQL is using for your queries with and without your 1=1. A DBA will be more interested in those results. Also, if your 1=1 is causing a full table scan, you will know for sure.
The explain syntax is here: http://dev.mysql.com/doc/refman/5.0/en/explain.html
How to interpret the results are here: http://dev.mysql.com/doc/refman/5.0/en/explain-output.html
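As a miniature demonstration of that EXPLAIN check, here is a sketch using Python's sqlite3, whose EXPLAIN QUERY PLAN plays the role of MySQL's EXPLAIN. The Samples table mirrors the answer above (schema assumed); the claim being tested is that adding 1=1 leaves the plan unchanged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Samples (Timestamp TEXT, KWHTODAY REAL)")
conn.execute("CREATE INDEX idx_ts ON Samples (Timestamp)")

def plan(sql):
    # Column 3 of EXPLAIN QUERY PLAN output is the step description.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

base = "SELECT SUM(KWHTODAY) FROM Samples WHERE Timestamp >= '2010-01-01'"
p1 = plan(base)
p2 = plan(base + " AND 1=1")

print(p1)
print(p2)
```

If the planner drops the constant condition, as the answers here claim, the two plans come out identical; the same comparison with MySQL's EXPLAIN on the real table settles the boss-man question for sure.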