I am not sure if this is a MySQL question or a Spring Batch one.
I am using Spring Batch to read data from a MySQL database (using JdbcPagingItemReader).
There are close to 1 million records that I am trying to read, with a fetch size and page size of 10000.
The issue is that the read operation for each batch of 10000 records is very slow. I analysed the SQL with EXPLAIN, and even though there are indexes in place, the sort by primary key causes MySQL to use an internal filesort.
Has anyone faced a similar issue before?
Sorry if the details are not sufficient (I haven't provided the query, but it's a simple query with a couple of joins and a group by. All the join ids are indexed and the sorting is based on the primary key).
SELECT ...
GROUP BY ...
ORDER BY ...
LIMIT 10000
MySQL must do the grouping and sorting before imposing the LIMIT.
In some cases the query can be "turned inside out" to locate the first 10000 rows (in a subquery), then do the JOINs, etc. This may run faster.
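A sketch of that pattern, with hypothetical table and column names since the actual query wasn't posted:

-- Locate one page of ids cheaply first (resolved by the PRIMARY KEY,
-- no filesort), then do the joins and grouping only for that page.
-- For subsequent pages, add WHERE id > :last_seen_id to the subquery.
SELECT t.id, SUM(d.amount) AS total
FROM (
    SELECT id
    FROM orders
    ORDER BY id
    LIMIT 10000
) AS page
JOIN orders AS t ON t.id = page.id
JOIN order_details AS d ON d.order_id = t.id
GROUP BY t.id
ORDER BY t.id;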
We need to study the actual query, not talk in generalizations. Also, please provide SHOW CREATE TABLE; there may be a missing composite index and/or some datatype issues. Please provide the generated SQL, not Spring's rendition of it.
I am trying to speed up a simple SELECT query on a table that has around 2 million entries, in a MariaDB (MySQL) database. It took over 1.5s until I created an index for the columns that I need; running it through phpMyAdmin then showed a significant boost in speed (now takes around 0.09s).
The problem is, when I run it through my PHP server (mysqli), the execution time does not change at all. I'm logging my execution time by running microtime() before and after the query, and it takes ~1.5s to run it, regardless of having the index or not (tried removing/readding it to see the difference).
Query example:
SELECT `pair`, `price`, `time`
FROM `live_prices` FORCE INDEX (pairPriceTime)
WHERE `time` = '2022-08-07 03:01:59';
Index created:
ALTER TABLE `live_prices` ADD INDEX pairPriceTime (pair, price, time);
Any thoughts on this? Does PHP PDO ignore indexes? Do I need to restart the server in order for it to "acknowledge" that there is a new index? (Which is a problem since I'm using a shared hosting service...)
If that is really the query, then it needs an INDEX starting with the column tested in the WHERE:
INDEX(time)
Or, to make a "covering index":
INDEX(time, pair, price)
However, I suspect that most of your accesses involve pair? If so, then other queries may need
INDEX(pair, time)
especially if you ask for a range of times.
To discuss various options further, please provide EXPLAIN SELECT ...
PDO, mysqli, phpmyadmin -- These all work the same way. (A possible exception deals with an implicit LIMIT on phpmyadmin.)
Try hard to avoid the use of FORCE INDEX -- what helps on today's query and dataset may hurt on tomorrow's.
When you see puzzling anomalies in timings, run the query twice. Caching may be the explanation.
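On servers where the query cache is enabled (MySQL up to 5.7, and MariaDB), one way to rule it out for a single run is SQL_NO_CACHE. Note this does not bypass InnoDB's buffer pool, which is usually the bigger caching effect:

-- Skips the query cache (where one exists) for this one statement.
SELECT SQL_NO_CACHE `pair`, `price`, `time`
FROM `live_prices`
WHERE `time` = '2022-08-07 03:01:59';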
The MySQL documentation says:
The FORCE INDEX hint acts like USE INDEX (index_list), with the addition that a table scan is assumed to be very expensive. In other words, a table scan is used only if there is no way to use one of the named indexes to find rows in the table.
The MariaDB documentation on FORCE INDEX says this:
FORCE INDEX works by only considering the given indexes (like with USE_INDEX) but in addition, it tells the optimizer to regard a table scan as something very expensive. However, if none of the 'forced' indexes can be used, then a table scan will be used anyway.
Use of the index is not mandatory. Since you have only specified one condition (the time), the optimizer can choose to use some other index for the fetch. I would suggest that you add another condition to the WHERE clause, or add an ORDER BY:
order by pair, price, time
I ended up creating another index (just for the time column) and it did the trick, running at ~0.002s now. Setting the LIMIT clause had no effect since I was always getting 423 rows (for 423 coin pairs).
Bottom line, I probably needed a more specific index. The weird part is that the first index worked great in phpMyAdmin but not through PHP, while the second one now applies to both approaches.
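(The statement for that second index was presumably along these lines; the index name is my guess, since it wasn't posted.)

ALTER TABLE `live_prices` ADD INDEX timeIdx (`time`);  -- hypothetical name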
Thank you all for the kind replies :)
I am currently trying to run a JOIN between two tables in a local MySQL database and it's not working. Below is the query; I am even limiting it to 10 rows just to run a test. After running this query for 15-20 minutes, it tells me "Error Code: 2013. Lost connection to MySQL server during query". My computer is not going to sleep, and I'm not doing anything to interrupt the connection.
SELECT rd_allid.CreateDate, rd_allid.SrceId, adobe.Date, adobe.Id
FROM rd_allid JOIN adobe
ON rd_allid.SrceId = adobe.Id
LIMIT 10
The rd_allid table has 17 million rows of data and the adobe table has 10 million. I know this is a lot, but I have a strong computer. My processor is an i7 6700 3.4GHz and I have 32GB of ram. I'm also running this on a solid state drive.
Any ideas why I cannot run this query?
"Why I cannot run this query?"
There's not enough information to determine definitively what is happening. We can only make guesses and speculations. And offer some suggestions.
I suspect MySQL is attempting to materialize the entire resultset before the LIMIT 10 clause is applied. For this query, there's no optimization for the LIMIT clause.
And we might guess that there is not a suitable index for the JOIN operation, which is causing MySQL to perform a nested loops join.
We also suspect that MySQL is encountering some resource limitation which is causing the session to be terminated. Possibly it is filling up all space in /tmp (that usually throws an error, something like "invalid/corrupted myisam table '#tmpNNN'", something of that ilk). Or it could be some other resource constraint. Without doing an analysis, we're just guessing.
It's possible MySQL wrote something to the error log (hostname.err). I'd check there.
But whatever condition MySQL is running into (the answer to the question "why I cannot run this query"), I'm seriously questioning the purpose of the query. Why is that query being run? Why is returning that particular resultset important?
There are several possible queries we could execute. Some of those will run a long time, and some will be much more performant.
One of the best ways to investigate query performance is to use MySQL EXPLAIN. That will show us the query execution plan, revealing the operations that MySQL will perform, in what order, and which indexes will be used.
We can make some suggestions as to some possible indexes to add, based on the query shown, e.g. on adobe (Id, Date).
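As a sketch (the index name is made up; the column names are taken from the query shown):

-- Covers the join: rows are found by Id, and Date is returned
-- from the index itself without touching the table.
ALTER TABLE adobe ADD INDEX idx_id_date (Id, Date);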
And we can make some suggestions about modifications to the query (e.g. adding a WHERE clause, using a LEFT JOIN, incorporating inline views, etc.). But we don't have enough of a specification to recommend a suitable alternative.
You can try something like:
SELECT rd_allidT.CreateDate, rd_allidT.SrceId, adobeT.Date, adobeT.Id
FROM
(SELECT CreateDate, SrceId FROM rd_allid ORDER BY SrceId LIMIT 1000) rd_allidT
INNER JOIN
(SELECT Id, Date FROM adobe ORDER BY Id LIMIT 1000) adobeT ON adobeT.Id = rd_allidT.SrceId;
This may help you get faster response times.
Also, if you are not interested in the whole relation, you can add WHERE clauses inside the subqueries; they will be applied before the INNER JOIN, making the query faster as well.
I'm running a forum on a VPS, running Percona DB, with PHP 5.5.8, Opcode caching, etc, it's all very speed orientated.
I'm also running New Relic (yes, I have the t-shirt).
As I'm tuning the application, I'm optimising the queries the forum makes to the DB, starting with whatever is at the top of my time-consumed list.
Right now, the most time-consuming query I have (as it's the most frequently used) is a simple hit counter on each topic.
So the query is:
UPDATE topics SET num_views = num_views + 1 WHERE id_topic = ?
I can't think of a simpler way to perform this, and I don't know whether any of the various other ways might be quicker, and why.
Is there a way of writing this query to be even faster, or an index I can add to a field to aid speed?
Thanks.
Assuming id_topic is indexed, you're not going to get better. The only recommendation I would have is to look at the other indexes on this table and make sure you don't have redundant ones that include num_views, since every such index must also be updated when num_views changes, decreasing the speed of this UPDATE.
For example, if you had the following indexes:
( some_column, num_views )
( some_column, num_views, another_column )
Index #1 would be extraneous and would just add to the insert/update overhead.
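A quick way to audit that (a generic sketch; your index names will differ):

-- List every index on the table; any index whose columns are a strict
-- prefix of another index's columns is redundant and can be dropped.
SHOW INDEX FROM topics;
ALTER TABLE topics DROP INDEX idx_some_column_num_views;  -- hypothetical name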
Not sure if that is an improvement, but you could check the following:
How about adding a row to a log table for each page hit, instead of locking and updating the counter row?
And then using a count to get the results, and cache them instead of doing the count each time?
(And maybe compacting the table once per day?)
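A sketch of that idea, with made-up table and column names:

-- Append-only hit log: inserts don't contend on a hot counter row.
CREATE TABLE topic_hits (
    id_topic INT NOT NULL,
    hit_time DATETIME NOT NULL,
    KEY (id_topic)
);

-- On each page view:
INSERT INTO topic_hits (id_topic, hit_time) VALUES (?, NOW());

-- Once per day, fold the hits back into the counter and compact the log.
-- (A real job would only delete the rows it just counted, to avoid
-- losing hits that arrive between these two statements.)
UPDATE topics t
JOIN (SELECT id_topic, COUNT(*) AS n
      FROM topic_hits
      GROUP BY id_topic) h ON h.id_topic = t.id_topic
SET t.num_views = t.num_views + h.n;
TRUNCATE TABLE topic_hits;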
I have a query that is running very slowly. The table I was querying has about 100k records and had no indexes on most of the columns used in the WHERE clause. I just added indexes on those columns, but the query hasn't gotten any faster.
I think this is because when a column is indexed, its value is written to the index at the time of insertion. I just added the indexes now, after all those records were added. So is there a way to "re-run the indexes" on the table?
Edit
Here is the query and EXPLAIN result:
Oddly enough, when I copy the query and run it directly in my SQL manager tool it runs quite fast, so maybe the problem is in my application code and not in the query itself.
MySQL keeps its indexes consistent. It does not matter whether the data is added first, the index is added first, or the data is changed at any time; the same final index will result (assuming the same final data and index type).
Your slow query is not caused by adding the index later. There will be some other reason.
This is an extremely common problem.
Use MySQL EXPLAIN: http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
When you precede a SELECT statement with the keyword EXPLAIN, MySQL displays information from the optimizer about the query execution plan. That is, MySQL explains how it would process the statement, including information about how tables are joined and in which order.
Using these results... verify the index you created is functioning the way you expected.
If not, you will want to tweak your index until you have it working as expected.
You might want to create a new table, create the indexes, then insert all rows from the old table into the new one while testing this. It's easier than dropping and re-adding indices a million times.
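A sketch of that workflow (all names are placeholders):

-- Build an empty copy with the candidate index already in place,
-- then load the data once instead of re-indexing repeatedly.
CREATE TABLE my_table_new LIKE my_table;
ALTER TABLE my_table_new ADD INDEX idx_candidate (some_column, another_column);
INSERT INTO my_table_new SELECT * FROM my_table;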
I have a table of about 800 000 records. It's basically a log which I query often.
I added a condition to query only entries from the last month, in an attempt to reduce the load on the database.
My thinking is:
a) if the database goes only through the last month's entries and then returns them, that's good.
b) if the database goes through the whole table, checking the condition against every single record, it's actually worse than having no condition.
What is your opinion?
How would you go about reducing the load on the DB?
If the field containing the entry date is keyed/indexed, and is used by the DB software to optimize the query, that should reduce the set of rows examined to the rows matching that date range.
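A minimal sketch, assuming the table is called log and the date column entry_date:

-- With this index, the range condition below examines only the
-- rows from the last month instead of scanning all 800k records.
ALTER TABLE log ADD INDEX idx_entry_date (entry_date);

SELECT *
FROM log
WHERE entry_date >= CURDATE() - INTERVAL 1 MONTH;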
That said, it's commonly understood that you are better off optimizing queries, indexes, database server settings and hardware, in that order. Changing how you query your data can reduce the impact of a badly formulated query a millionfold, depending on the dataset.
If there are no obvious areas for speedup in how the query itself is formulated (joins done correctly or no joins needed, and effective use of indexes), adding indexes to help your common queries would be a good next step.
If you want more information about how the database is going to execute your query; you can use the MySQL EXPLAIN command to find out. For example, that will tell you if it's able to use an index for the query.
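For example, using the hypothetical table from the sketch above:

-- "type: range" with "key: idx_entry_date" in the output confirms the
-- index is being used; "type: ALL" would indicate a full table scan.
EXPLAIN SELECT * FROM log
WHERE entry_date >= CURDATE() - INTERVAL 1 MONTH;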