There is a search page in a web application (pagination is used: 10 records per page). Database: MySQL. The table has around 100,000 records. The query is tuned, in that it uses an index (checked with EXPLAIN). The result set fetches around 17,000 rows and takes around 5 seconds. Can anyone please suggest how to optimize the search query? (Note: I tried using LIMIT, but the query time did not improve.)
Example query:
SELECT *
FROM abc
JOIN def ON abc.id = def.id
WHERE date >= '2013-09-03'
  AND date <= '2014-10-01'
  AND def.state = 1
-- id on both tables is indexed
-- the date and state columns cannot be indexed, as they have low selectivity
I'm working on an existing database with millions of inserts per day. The database design itself is pretty bad, and filtering records from it takes a huge amount of time. We are in the process of moving this to an ELK cluster, but in the meantime I have to filter some records for immediate use.
I have two tables like this:
table - log_1
datetime | id | name | ip
2017-01-01 01:01:00 | 12345 | sam | 192.168.100.100
table - log_2
datetime | mobile | id
2017-01-01 01:01:00 | 999999999 | 12345
I need to filter my data using ip from log_1, and datetime on both log_1 and log_2. To do that I use the query below:
SELECT log_1.datetime, log_1.id, log_1.name, log_1.ip, log_2.datetime, log_2.mobile, log_2.id
FROM log_1
INNER JOIN log_2
ON log_1.id = log_2.id AND log_1.datetime = log_2.datetime
WHERE log_1.ip = '192.168.100.100'
LIMIT 100
Needless to say, this takes forever to retrieve results with such a large number of records. Is there any better method to do the same thing without waiting a long time for MySQL to respond? In other words, how can I optimize my query against such a large database?
The database is not production; it's just for analytics.
First of all, your current LIMIT clause is fairly meaningless, because the query has no ORDER BY clause. It is not clear which 100 records you want to retain. So, you might want to use something like this:
SELECT
l1.datetime,
l1.id,
l1.name,
l1.ip,
l2.datetime,
l2.mobile,
l2.id
FROM log_1 l1
INNER JOIN log_2 l2
ON l1.id = l2.id AND l1.datetime = l2.datetime
WHERE
l1.ip = '192.168.100.100'
ORDER BY
l1.datetime DESC
LIMIT 100;
This would return the 100 most recent matching records. As for speeding up this query, one way to at least make the join faster would be to add the following index on the log_2 table:
CREATE INDEX idx ON log_2 (datetime, id, mobile);
Assuming MySQL chooses to use this index, it should make the join much faster, because each id and datetime value can be looked up in a B-tree instead of doing a manual scan of the entire table. Note that the index also covers the mobile column, which is needed in the select.
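To confirm that the optimizer actually picks it, the query can be prefixed with EXPLAIN; the key column of the output row for log_2 should then show idx:
EXPLAIN
SELECT l1.datetime, l1.id, l1.name, l1.ip, l2.datetime, l2.mobile, l2.id
FROM log_1 l1
INNER JOIN log_2 l2
    ON l1.id = l2.id AND l1.datetime = l2.datetime
WHERE l1.ip = '192.168.100.100'
ORDER BY l1.datetime DESC
LIMIT 100;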
Can you try this:
1. Create an index on the id column of both tables, if not already created (this will take time).
2. Create two temp tables, log_1_tmp and log_2_tmp, with data as below:
Query 1 - insert into log_1_tmp select * from log_1 where log_1.ip = '192.168.100.100'
Query 2 - insert into log_2_tmp select l2.* from log_2 l2 join log_1_tmp l1 on l1.id = l2.id and l1.datetime = l2.datetime
(log_2 has no ip column, so it is pre-filtered here through the ids and datetimes already collected into log_1_tmp.)
3. Run your query on those two tables; the where condition on ip can then be dropped, as sketched below.
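A minimal sketch of that final step (temp-table names as above; the ordering and limit are carried over from the first answer so the result is deterministic):
select l1.datetime, l1.id, l1.name, l1.ip, l2.datetime, l2.mobile, l2.id
from log_1_tmp l1
inner join log_2_tmp l2
  on l1.id = l2.id and l1.datetime = l2.datetime
order by l1.datetime desc
limit 100;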
See if this works.
I wonder if anyone could help with a MySQL query I am trying to write to return relevant results.
I have a big table of change log data, and I want to retrieve a number of record 'groups'. For example, in this case a group would be where two or more records are entered with the same timestamp.
Here is a sample table.
==============================================
ID DATA TIMESTAMP
==============================================
1 Some text 1379000000
2 Something 1379011111
3 More data 1379011111
3 Interesting data 1379022222
3 Fascinating text 1379033333
If I wanted the first two grouped sets, I could use LIMIT 0,2 but this would miss the third record. The ideal query would return three rows (as two rows have the same timestamp).
==============================================
ID DATA TIMESTAMP
==============================================
1 Some text 1379000000
2 Something 1379011111
3 More data 1379011111
Currently I've been using PHP to process the entire table, which mostly works, but for a table of 1000+ records this is not very efficient in terms of memory usage!
Many thanks in advance for any help you can give...
Get the timestamps for the filtering using a join. For instance, the following returns every row whose timestamp matches one of the first two timestamps, which makes sure the group containing the second row comes back complete:
select t.*
from t join
(select timestamp
from t
order by timestamp
limit 2
) tt
on t.timestamp = tt.timestamp;
The following would get the first three groups, no matter what their size:
select t.*
from t join
(select distinct timestamp
from t
order by timestamp
limit 3
) tt
on t.timestamp = tt.timestamp;
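On MySQL 8.0 or later, the same result can also be expressed with a window function (a sketch, not part of the original answer; dense_rank() gives rows that share a timestamp the same rank, so the filter keeps each of the first three groups intact):
select id, data, timestamp
from (select t.*,
             dense_rank() over (order by timestamp) as grp
      from t
     ) ranked
where grp <= 3;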
Possible Duplicate:
How can I speed up a MySQL query with a large offset in the LIMIT clause?
In our application we are showing records from MySQL on a web page. Like in most such applications we use paging. So the query looks like this:
select * from sms_message
where account_group_name = 'scott'
and received_on > '2012-10-11' and
received_on < '2012-11-30'
order by received_on desc
limit 200 offset 3000000;
This query takes 104 seconds. If I change the offset to a low value or remove it completely, the query takes only half a second. Why is that?
There is only one compound index; it covers account_group_name, received_on, and two other columns. The table is InnoDB.
UPDATE:
EXPLAIN returns:
id: 1
select_type: SIMPLE
table: sms_message
type: ref
possible_keys: all_filters_index
key: all_filters_index
key_len: 258
ref: const
rows: 2190030
Extra: Using where
all_filters_index is the 4-column index mentioned above.
Yes, this is true: the time increases as the offset value increases. The reason is that OFFSET cannot use an index to jump to a position; to return the rows at offset x, the engine must generate and discard all x rows that come before them.
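A common way around this (a sketch, not part of the answer above; it reuses the question's columns) is keyset pagination: instead of an offset, remember the last received_on value of the previous page and seek past it, so the compound index can jump straight to the next page:
select * from sms_message
where account_group_name = 'scott'
and received_on > '2012-10-11'
and received_on < ?  -- the last received_on value shown on the previous page
order by received_on desc
limit 200;
If received_on is not unique, a tiebreaker column (such as the primary key) has to be added to both the order by and the seek condition, otherwise rows that share a timestamp can be skipped or repeated.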
I am trying to make a faster search query against a MySQL database.
I have 800,000 rows in the BOOKMARK table.
When I run this query:
SELECT * FROM `BOOKMARK` WHERE `topic` = 'Apple'
Showing rows 0 - 29 ( 501 total, Query took 0.0008 sec)
It's damn fast!
Each row also has a total score, and I want to return the best ones first.
SELECT * FROM `BOOKMARK` WHERE `topic` = 'Apple' ORDER BY total DESC
Showing rows 0 - 29 ( 501 total, Query took 0.4770 sec) [b_total: 9.211558703193814 - 1.19674062055568]
It's now 0.5 seconds!!
This is a huge problem for me.
Here is the table information.
* There are 20,000 different topics in this table.
* Total values range between 0 and 10.
* The server recalculates the total points once a day.
I was thinking that if the table were ordered by total within each topic, the search query wouldn't have to include ORDER BY total DESC.
It would save a lot of time if the table refreshed that order once a day.
Is there a way to make this happen?
It was very simple.
I use phpMyAdmin and changed the table-order setting on the Operations tab.
After this,
SELECT * FROM `BOOKMARK` WHERE `topic` = 'Apple'
I ran this query and it showed the results ordered by total DESC.
Perfect!!
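For reference, that phpMyAdmin option corresponds to a plain ALTER TABLE statement along these lines (an assumption based on the description above; note that it re-sorts the rows once, so it has to be re-run after each daily recalculation of total, and InnoDB tables ignore it because their row order follows the clustered index):
ALTER TABLE `BOOKMARK` ORDER BY total DESC;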
I have a query like this in MySQL:
SELECT COUNT(*)
FROM (
    SELECT COUNT(idCustomer)
    FROM customer_details
    WHERE ...
    GROUP BY idCustomer
    HAVING MAX(trx_staus) = -1
) AS temp
So basically I am finding the count of customers that fulfill a certain WHERE condition (one or two) and whose maximum transaction state is -1 (others can be 2, 3, 4). But this query takes about 30 minutes on my local machine and 13 seconds on a high-configuration server (about 20 GB RAM and an 8-core processor). I have about 1.3 million (13 lakh) rows in the table. I know GROUP BY and the MAX aggregate are costly. What can I do to optimize this query? Any suggestions?
The inner query has to inspect all rows to determine the aggregate maximum; if you want to optimize this, add a calculated field that contains the maximum to your customer table and select on that.
The trick is then to keep that field up-to-date :)
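A minimal sketch of that idea, assuming a customer table keyed by idCustomer (the table name, the new column, and the trigger are illustrative guesses; trx_staus is spelled as in the question):
-- One-time: add the denormalized column and backfill it
ALTER TABLE customer ADD COLUMN max_trx_staus INT;

UPDATE customer c
JOIN (SELECT idCustomer, MAX(trx_staus) AS max_staus
      FROM customer_details
      GROUP BY idCustomer) d ON d.idCustomer = c.idCustomer
SET c.max_trx_staus = d.max_staus;

-- Keep it current as new detail rows arrive
CREATE TRIGGER customer_details_ai AFTER INSERT ON customer_details
FOR EACH ROW
  UPDATE customer
  SET max_trx_staus = GREATEST(COALESCE(max_trx_staus, NEW.trx_staus), NEW.trx_staus)
  WHERE idCustomer = NEW.idCustomer;

-- The count then becomes a simple lookup
-- (the elided WHERE conditions from the question still have to be folded in)
SELECT COUNT(*) FROM customer WHERE max_trx_staus = -1;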