Querying a sequence of chained data fast in SQL - MySQL

Problem:
Given is a table with the following structure:
Table: ID, ID_POINTER, DATA
I want to query a chained data sequence in this table.
How can I speed up the following "slow" query:
SELECT * FROM Table 1, ..., Table n
WHERE Table 1.ID = Table 2.ID_POINTER AND ... AND Table n-1.ID = Table n.ID_POINTER
  AND Table 1.DATA = wish data 1 AND ... AND Table n.DATA = wish data n
?
My Question:
Is it more efficient to replace Table, Table with Table INNER JOIN Table?

First, rewrite the query to use proper join syntax. The "," means "cross join" and is a very expensive operation (or, sometimes worse, can result in no rows if one of the tables is empty).
When using ",", it is very easy to make a mistake in the WHERE clause, with bad performance consequences.
Second, the query optimizer should be choosing the best join path. However, to do this it needs up-to-date statistics on the tables, so be sure that your statistics are updated (how to do this varies greatly by database).
Third, you should always mention the database being used, especially for optimization.
And, finally, such queries are usually -- but not always -- faster by having indexes on the join columns. This is especially true when you have a filter that selects a small subset of all the rows.
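For illustration, a minimal sketch of the n = 3 case rewritten with explicit INNER JOIN syntax (the table name chain and the 'wish data' literals are placeholders standing in for your actual table name and filter values):
-- chain is a stand-in for your table; the DATA values are placeholders
SELECT *
FROM chain t1
INNER JOIN chain t2 ON t1.ID = t2.ID_POINTER
INNER JOIN chain t3 ON t2.ID = t3.ID_POINTER
WHERE t1.DATA = 'wish data 1'
  AND t2.DATA = 'wish data 2'
  AND t3.DATA = 'wish data 3';
With indexes on ID_POINTER and DATA, each join step can then typically be resolved by index lookups rather than a scan.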

I think you need something like this SQLFIDDLE:
SELECT ID, @pv := ID_POINTER AS IDP, DATA
FROM test
JOIN (SELECT @pv := 1) tmp
WHERE ID = @pv
CREATE TABLE IF NOT EXISTS `test` (
`ID` int(8) unsigned NOT NULL ,
`ID_POINTER` int(8) unsigned NOT NULL,
`DATA` varchar(128) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ;
INSERT INTO `test` VALUES(1,2,"data1"),
(2,3,"data2"),
(3,4,"data3"),
(4,5,"data4"),
(5,11,"data5"),
(8,9,"data6"),
(9,10,"data7");
Result for ID = 1 (to retrieve the chained data for ID 1, set @pv := 1 in the query):
ID IDP DATA
1 2 data1
2 3 data2
3 4 data3
4 5 data4
5 11 data5
For ID = 8 (to retrieve the chained data for ID 8, set @pv := 8 in the query):
SELECT ID, @pv := ID_POINTER AS IDP, DATA
FROM test
JOIN (SELECT @pv := 8) tmp
WHERE ID = @pv
Result:
ID IDP DATA
8 9 data6
9 10 data7

Related

Mysql join optimize where clause

There are two tables in MySQL 5.7, and each one has 100,000 records.
Each one contains data like this:
id name
-----------
1 name_1
2 name_2
3 name_3
4 name_4
5 name_5
...
The ddl is:
CREATE TABLE `table_a` (
`id` int(11) NOT NULL,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
CREATE TABLE `table_b` (
`id` int(11) NOT NULL,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Now I execute the following two queries to see whether the latter will be better.
select SQL_NO_CACHE *
from table_a a
inner join table_b b on a.name = b.name
where a.id between 50000 and 50100;

select SQL_NO_CACHE *
from (
  select *
  from table_a
  where id between 50000 and 50100
) a
inner join table_b b on a.name = b.name;
I think that in the former query, it would iterate up to 100,000 * 100,000 times and then filter the result by the where clause; in the latter query, it would first filter table_a to get 100 intermediate rows and then iterate up to 100 * 100,000 times to get the final result. So the former should be much slower than the latter.
But the result is that both queries take about 1.5 seconds, and using the EXPLAIN statement I can't find any substantial differences.
Does MySQL optimize the former query so that it executes like the latter?
For INNER JOIN, ON and WHERE are optimized the same. For LEFT/RIGHT JOIN, the semantics are different, so the optimization is different. (Meanwhile, please use ON for stating the relationship and WHERE for filtering -- it helps humans in understanding the query.)
Both queries can start by fetching 100 rows from a because of a.id between 50000 and 50100, then reach into the other table 100 times. But each probe has to do a table scan because of the lack of any useful index, so that is 100 x 100,000 operations ("Nested Loop Join", or "NLJ").
The solution to the slowness is to add
INDEX(name)
Add it at least to b. Or, if this is really a lookup table for mapping "names" to "ids", then UNIQUE(name). With either index, the work should be down to 100 x 100.
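For example, either of these would do (the index names idx_name / uq_name are just illustrative):
ALTER TABLE table_b ADD INDEX idx_name (name);
-- or, if name really is unique in the lookup table:
ALTER TABLE table_b ADD UNIQUE KEY uq_name (name);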
Another technique for analyzing queries is
FLUSH STATUS;
SELECT ...
SHOW SESSION STATUS LIKE 'Handler%';
These counters show the actual number of rows (data or index) touched. A value of 100,000 (or a multiple of it) indicates a full table/index scan in your case.
More: Index Cookbook
Joins are generally faster than sub-queries, so try to use joins instead of sub-queries wherever you can to speed up the process. In this case, though, both queries are equivalent.
Another way to optimize the query would be to use partitioning. With partitions, MySQL can go directly to the partition relevant to your query, which reduces the time spent on unrelated records.
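If you want to experiment with that, here is a rough sketch of range-partitioning table_a on its primary key (the partition names and boundaries are made up):
ALTER TABLE table_a
PARTITION BY RANGE (id) (
  -- boundaries are illustrative only
  PARTITION p0 VALUES LESS THAN (25000),
  PARTITION p1 VALUES LESS THAN (50000),
  PARTITION p2 VALUES LESS THAN (75000),
  PARTITION p3 VALUES LESS THAN MAXVALUE
);
With that in place, a query filtering on id between 50000 and 50100 can be pruned to a single partition.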

how to cache a subset for cascading select queries in mysql

Here's another database problem I stumbled upon.
I have a date-range partitioned MyISAM lookup table with 200M records and ~150 columns.
On this Table I need to perform cascading SELECT-Statements to filter the data.
Output (rows remaining after each cascading filter):
filter 1: 126M
filter 2: 110M
filter 3: 40M
filter 4: 5M
filter 5: 100k
Every single SELECT is highly complex with regex (=no index possible) and multiple comparisons, which is why I want them to query the least amount of rows possible.
There are about 500 unique filters and around 200 constant users. Every filter needs to be run for each user, in total around 100k combinations.
Big question:
Is there a way for each subsequent SELECT statement to query only the previous subset?
Example:
Filter #5 should only have to query the 5M rows left over from filter #4 to get those 100k results. At the moment it has to scan through all 200M records.
EDIT
current approach: cache table
CREATE TABLE IF NOT EXISTS cache (
  filter_id int(11) NOT NULL,
  user_id int(11) NOT NULL,
  lookup_id int(11) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
ALTER TABLE cache ADD PRIMARY KEY (filter_id, user_id, lookup_id);
This would contain the relation between individual data-rows from the lookup table and the filters. PLUS I'd be able to use the primary index to get all of the lookup_ids from the previous filter.
Query for subsequent filters:
SELECT SUM(column), COUNT(*)
FROM cache c
LEFT JOIN lookup_table l ON c.lookup_id = l.id
WHERE c.filter_id = 1
  AND c.user_id = x
  AND l.regex_column = preg_rlike...
Maybe you should save the primary keys of the selected records to some kind of temporary table? In the next step, join that temp table with your main table.
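A minimal sketch of that idea against the lookup table from the question (the filter conditions, the some_column sum and the prev_ids name are hypothetical stand-ins):
-- materialize the ids that survived filter #4 (hypothetical filter condition)
CREATE TEMPORARY TABLE prev_ids (id INT NOT NULL, PRIMARY KEY (id));
INSERT INTO prev_ids
SELECT id FROM lookup_table WHERE regex_column RLIKE 'pattern4';

-- filter #5 now only touches the rows kept by filter #4
SELECT SUM(l.some_column), COUNT(*)
FROM prev_ids p
JOIN lookup_table l ON l.id = p.id
WHERE l.regex_column RLIKE 'pattern5';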

Slow SQL query when grouping by two columns with self join

I have a table rating with slightly less than 300k rows and a SQL query:
SELECT rt1.product_id as id1, rt2.product_id as id2, sum(1), sum(rt1.rate-rt2.rate) as sum
FROM rating as rt1
JOIN rating as rt2 ON rt1.user_id = rt2.user_id AND rt1.product_id != rt2.product_id
group by rt1.product_id, rt2.product_id
LIMIT 1
The problem is.. it's really slow. It takes 36 secs to execute it with limit 1, while I need to execute it without limit.
As I figured out, the slowdown is caused by the GROUP BY part. It works fine when grouping by a single column, from either rt1 or rt2.
I have also tried indexes; I have already created indexes on user_id, product_id, rate and (user_id, product_id).
EXPLAIN doesn't tell me much either.
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE rt1 ALL PRIMARY,user_id,user_product NULL NULL NULL 289700 Using temporary; Using filesort
1 SIMPLE rt2 ref PRIMARY,user_id,user_product user_id 4 mgrshop.rt1.user_id 30 Using where
I need this to execute just once to generate some data, so it's not important to achieve optimal time, but reasonable.
Any ideas?
Edit.
Full table schema
CREATE TABLE IF NOT EXISTS `rating` (
`user_id` int(11) NOT NULL,
`product_id` int(11) NOT NULL,
`rate` int(11) NOT NULL,
PRIMARY KEY (`user_id`,`product_id`),
KEY `user_id` (`user_id`),
KEY `product_id` (`product_id`),
KEY `user_product` (`user_id`,`product_id`),
KEY `rate` (`rate`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Your problem is in the join, specifically AND rt1.product_id != rt2.product_id.
Let's say a user has rated 100 products; for that user, this query will generate 9,900 rows before it does the GROUP BY. For each of the 100 ratings, the table gets joined back to itself 99 times.
What is the question you are trying to answer with this query? Depending on that, there may be some more efficient approaches. It's just hard to tell what you are trying to achieve here.
In addition to what Declan_K mentioned about your cross-join result set that could be 100k rows before you know it, you could cut that down significantly by changing to just
rt1.product_id < rt2.product_id
instead of
rt1.product_id != rt2.product_id
Reason: since they are the same table/records, you only need to cycle through each pair once, with rt1.product_id as the lower of the pair; the higher product_id is already covered as the other side of the comparison. As it stands, if a single user had 5 products (1-5), you would be getting results of
(1,2) (1,3) (1,4) (1,5)
(2,1) (2,3) (2,4) (2,5)
(3,1) (3,2) (3,4) (3,5)
(4,1) (4,2) (4,3) (4,5)
(5,1) (5,2) (5,3) (5,4)
By changing to LESS THAN, you'll eliminate duplications such as (1,2) vs (2,1) and (1,3) vs (3,1):
(1,2) (1,3) (1,4) (1,5)
(2,3) (2,4) (2,5)
(3,4) (3,5)
(4,5)
Just a bit of a smaller result set, and this is with only 5 products for one person.
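Applied to the query from the question, the only change is the join condition (a sketch, shown without the LIMIT):
SELECT rt1.product_id AS id1, rt2.product_id AS id2, SUM(1), SUM(rt1.rate - rt2.rate) AS sum
FROM rating AS rt1
JOIN rating AS rt2 ON rt1.user_id = rt2.user_id AND rt1.product_id < rt2.product_id
GROUP BY rt1.product_id, rt2.product_id;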
My solution is not the easiest, but it should explain a little and speed up your query time.
When you join in MySQL, a temporary table is created. The more rows that are put into that temporary table, the more likely it is to go to disk. Disk is slow. The new temporary table has no indices. Querying without indices is slow.
The first line in your EXPLAIN statement is showing that the query will join first, creating a whole bunch of rows, and sticking that into a temporary table, and grouping by product ids. The key column is empty, showing that it can't use a key.
My solution is to create another table. This other table will consist of all the relevant columns from the JOIN. You'll need a batch job to update the table in the background. This will lead to slightly stale data, but it will run much faster.
CREATE TABLE `rate_tmp` (
userid ...,
id1 ...,
id2 ...,
rate1 ...,
rate2 ...,
PRIMARY KEY (id1, id2, userid)
)
The order on the primary key is very important. Your query then looks like this:
SELECT userid, id1, id2, sum(1), sum(rate1-rate2) as sum
from rate_tmp
group by id1, id2;
It should run very fast at that point, because, while the table is still persisted to disk, MySQL will not have to write the data to disk at query time. It can also, and more importantly, use the pre-defined indices that you have on the temporary table.
First I did it via a temp table.
I first selected the rows without grouping and put them into a table made just for that; I got over 11 million rows. Then I just grouped them from the temp table and put the result into the final table.
Then I also tried to do this without creating any other table and it also worked for me.
SELECT id1, id2, sum(count), sum(sum)
FROM (SELECT rt1.product_id as id1, rt2.product_id as id2, 1 as count, rt1.rate - rt2.rate as sum
FROM rating as rt1
JOIN rating as rt2 ON rt1.user_id = rt2.user_id AND rt1.product_id != rt2.product_id) as temptab
GROUP BY id1, id2
And finally got about 19k rows.
Execution time: 35.8669
Not bad for my case of one-time data generating.

MySQL query takes too long -- what should be the index?

Here is my query:
CREATE TEMPORARY TABLE temptbl (
pibn INT UNSIGNED NOT NULL, page SMALLINT UNSIGNED NOT NULL)
ENGINE=MEMORY;
INSERT INTO temptbl (
SELECT pibn,page FROM mytable
WHERE word1=429907 AND word2=0);
ALTER TABLE temptbl ADD INDEX (pibn,page);
SELECT word1,COUNT(*) AS aaa
FROM mytable a
INNER JOIN temptbl b
ON a.pibn=b.pibn AND a.page=b.page
WHERE word2=0
GROUP BY word1 ORDER BY aaa DESC LIMIT 10;
DROP TABLE temptbl;
The issue is the SELECT word1,COUNT(*) AS aaa, specifically the count. That select statement takes 16 seconds.
EXPLAIN says:
+----+-------------+-------+------+---------------------------------+-------------+---------+-------------------------------------------------------------+-------+---------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------------------------+-------------+---------+-------------------------------------------------------------+-------+---------------------------------+
| 1 | SIMPLE | b | ALL | pibn | NULL | NULL | NULL | 26778 | Using temporary; Using filesort |
| 1 | SIMPLE | a | ref | w2pibnpage1,word21pibn,pibnpage | w2pibnpage1 | 9 | const,db.b.pibn,db.b.page | 4 | Using index |
+----+-------------+-------+------+---------------------------------+-------------+---------+-------------------------------------------------------------+-------+---------------------------------+
The index used (w2pibnpage1) is on:
word2,pibn,page,word1,id
I've been struggling with this for days, trying different combinations of columns for the index (which is annoying as it takes an hour to rebuild - millions of rows).
What should my indexes be, or what should I do to get this query to run in a fraction of a second (as it should)?
Here is a suggestion.
Presumably the temporary table is small. You can remove the index on that table, because a full table scan is fine there. In fact, that is what you want.
You then want indexes used on the big table. First the indexes need to match the join condition, then to match the where condition, and finally the group by condition. So, the suggestion is:
mytable(pibn, page, word2, word1, aaa)
I'm throwing in the order by column, so it doesn't have to fetch the value from the original data.
The query is taking a long time, but the expensive part seems to be accessing mytable (you've not provided the structure of this); however, the optimizer seems to think it only needs to fetch 4 rows from it using an index, which should be very fast. I.e. the data appears to be very skewed - how many rows does the last query examine (tally of the counts)?
Without having a look at the exact distribution of data, it's hard to be definitive - certainly you may need to hint the query to get it to work efficiently. The problem with designing indexes is that they should make all the queries faster - or at least give a reasonable tradeoff.
Looking at the predicates in the queries you've provided...
WHERE word1=429907 AND word2=0
Would be best served by an index on word1,word2,.... or word2,word1,.....
ON a.pibn=b.pibn AND a.page=b.page
WHERE a.word2=0
Would be best served by an index on mytable with word2+pibn+page in the leading columns.
How many distinct values are there for mytable.word1 and for mytable.word2? If word2 has a low number of distinct values (less than 20 or so) then it's not adding much selectivity to the index and can be omitted.
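If you're unsure, a quick way to check the cardinalities is something like this (it may take a while on millions of rows):
SELECT COUNT(DISTINCT word1), COUNT(DISTINCT word2) FROM mytable;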
An index on word2,pibn,page,word1 gives you a covering index for the second query.
If your temptbl is small, you want to first restrict the bigger table (mytable) and then join it (possibly by index) to your temptbl.
Right now, MySQL thinks it is better off by using the index of the bigger table to join.
You can get around this by doing a straight join:
SELECT word1,COUNT(*) AS aaa
FROM mytable a
STRAIGHT_JOIN temptbl b
ON a.pibn=b.pibn AND a.page=b.page
WHERE word2=0
GROUP BY word1
ORDER BY aaa DESC LIMIT 10;
This should use your index in mytable for the where clause and join mytable to temptbl via the index in temptbl.
If MySQL still wants to do it different, you can use FORCE INDEX to make it use the index.
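For instance, a sketch combining both hints, using the w2pibnpage1 index name from the question (verify with EXPLAIN that the plan actually changes):
SELECT word1, COUNT(*) AS aaa
FROM mytable a FORCE INDEX (w2pibnpage1)
STRAIGHT_JOIN temptbl b ON a.pibn = b.pibn AND a.page = b.page
WHERE a.word2 = 0
GROUP BY word1
ORDER BY aaa DESC LIMIT 10;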
With your data volumes it is not going to work fast no matter what you do, not without changing the schema.
If I understand you right, you're looking for the top words which go along with 429907 on the same pages.
Your model as it is now would require counting all those words over and over again each time you run the query.
To speed it up, you would need to create an additional stats table:
CREATE TABLE word_pairs
(
word1_1 INT NOT NULL,
word1_2 INT NOT NULL,
cnt BIGINT NOT NULL,
PRIMARY KEY (word1_1, word1_2),
INDEX (word1_1, cnt),
INDEX (word1_2, cnt)
)
and update it each time you insert a record into the large table (increase cnt for the newly inserted word paired with each of the words that appear on the same page).
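A rough sketch of such a maintenance update for a single pair (the word ids 429907 and 12345 here are just example values):
INSERT INTO word_pairs (word1_1, word1_2, cnt)
VALUES (429907, 12345, 1)            -- example word ids
ON DUPLICATE KEY UPDATE cnt = cnt + 1;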
This would probably be too slow for a single server, as such updates would take some time, so you would also need to shard that table across multiple servers.
If you had such a table you could just run:
SELECT *
FROM word_pairs
WHERE word1_1 = 429907
ORDER BY
cnt DESC
LIMIT 10
which would be instant.
I came up with this:
CREATE TEMPORARY TABLE temp1 (
pibn INT UNSIGNED NOT NULL, page SMALLINT UNSIGNED NOT NULL)
ENGINE=MEMORY;
INSERT INTO temp1 (
SELECT pibn,page FROM mytable
WHERE word1=429907 AND word2=0);
CREATE TEMPORARY TABLE temp2 (
word1 MEDIUMINT UNSIGNED NOT NULL)
ENGINE=MEMORY;
INSERT INTO temp2 (
SELECT a.word1
FROM mytable a, temp1 b
WHERE a.word2=0 AND a.pibn=b.pibn AND a.page=b.page);
DROP TABLE temp1;
CREATE INDEX index1 ON temp2 (word1) USING BTREE;
CREATE TEMPORARY TABLE temp3 (
word1 MEDIUMINT UNSIGNED NOT NULL, num INT UNSIGNED NOT NULL)
ENGINE=MEMORY;
INSERT INTO temp3 (SELECT word1,COUNT(*) AS aaa FROM temp2 USE INDEX (index1) GROUP BY word1);
DROP TABLE temp2;
CREATE INDEX index1 ON temp3 (num) USING BTREE;
SELECT word1,num FROM temp3 USE INDEX (index1) ORDER BY num DESC LIMIT 10;
DROP TABLE temp3;
Takes 5 seconds.

Proper Indexing/Optimization of a MySQL GROUP BY and JOIN Query

I've done a lot of reading and Googling on this and I cannot find any satisfactory answer so I'd appreciate any help. Most answers I find come close to my situation but do not address it (and attempting to follow the solutions has not done me any good).
See Edit #2 below for the best example
[This was the original question but is not a great representation of what I'm asking.]
Say I have 2 tables, each with 4 columns:
key (int, auto increment)
c1 (a date)
c2 (a varchar of length 3)
c3 (also a varchar of length 3)
And I want to perform the following query:
SELECT t.c1, t.c2, COUNT(*)
FROM test1 t
LEFT JOIN test2 t2 ON t2.key = t.key
GROUP BY t.c1, t.c2
Both key fields are indexed as primary keys. I want to get the number of rows returned in each grouping of c1, c2.
When I explain this query I get "using temporary; using filesort". The actual table I'm performing this query on is over 500,000 rows, so that means it's a time consuming query.
So my question is (assuming I'm not doing anything wrong in the query): is there a way to index this table to eliminate the temporary/filesort usage?
Thanks in advance for any help.
Edit
Here is the table definition (in this example both tables are identical - in reality they're not but I'm not sure it makes a difference at this point):
CREATE TABLE `test1` (
`key` int(11) NOT NULL auto_increment,
`c1` date NOT NULL,
`c2` varchar(3) NOT NULL,
`c3` varchar(3) NOT NULL,
PRIMARY KEY (`key`),
UNIQUE KEY `c1` (`c1`,`c2`),
UNIQUE KEY `c2_2` (`c2`,`c1`),
KEY `c2` (`c2`,`c3`)
) ENGINE=MyISAM AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
Full EXPLAIN statement:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t ALL NULL NULL NULL NULL 2 Using temporary; Using filesort
1 SIMPLE t2 eq_ref PRIMARY PRIMARY 4 tracking.t.key 1 Using index
This is just for my sample tables. In my real tables the rows for t says 500,000+ (every row in the table, though that could be related to something else).
Edit #2
Here is a more concrete example to better explain my situation.
Let's say I have data on Little League baseball games. I have two tables. One holds data on the games:
CREATE TABLE `ex_games` (
`game_id` int(11) NOT NULL auto_increment,
`home_team` int(11) NOT NULL,
`date` date NOT NULL,
PRIMARY KEY (`game_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
The other holds data on the at bats in each game:
CREATE TABLE `ex_atbats` (
`ab_id` int(11) NOT NULL auto_increment,
`game` int(11) NOT NULL,
`team` int(11) NOT NULL,
`player` int(11) NOT NULL,
`result` tinyint(1) NOT NULL,
PRIMARY KEY (`ab_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
So I have two questions. Let's start with the simple version: I want to return a list of games with a count of how many at bats are in each game. So I think I would do something like this:
SELECT date, home_team, COUNT(h.ab_id) FROM `ex_atbats` h
LEFT JOIN ex_games g ON g.game_id = h.game
GROUP BY g.game_id
This query uses filesort/temporary. Is there a better way to structure this or to index the tables to get rid of that?
Then, the trickier part: say I now want to not only include a count of the number of at bats, but also include a count of the number of at bats that were preceded by an at bat with the same result by the same team. I assume that would be something like:
SELECT g.date, g.home_team, COUNT(ab.ab_id), COUNT(ab2.ab_id) FROM `ex_atbats` ab
LEFT JOIN ex_games g ON g.game_id = ab.game
LEFT JOIN ex_atbats ab2 ON ab2.ab_id = ab.ab_id - 1 AND ab2.result = ab.result
GROUP BY g.game_id
Is that the correct way to structure that query? This also uses filesort/temporary.
So what is the optimal way to go about accomplishing these tasks?
Thanks again.
The phrases Using temporary and Using filesort usually are not related to the indexes used in the JOIN operation. There are numerous examples where you have all the indexes set (they show up in the key and key_len columns of EXPLAIN) but you still get Using temporary and Using filesort.
Check out what the manual says about Using temporary and Using filesort:
How MySQL Uses Internal Temporary Tables
ORDER BY Optimization
Having a combined index for all columns used in GROUP BY clause may help to get rid of Using filesort in certain circumstances. If you also issue ORDER BY you may need to add more complex indexes.
If you have a huge dataset consider partitioning it using some criteria like date or timestamp by means of actual partitioning or a simple WHERE clause.
First of all, the tables' definitions do matter. It's one thing to join using two primary keys, another to join using a primary key on one side and a non-unique key on the other, etc. It also matters what type of engine the tables use, as InnoDB treats primary keys differently than the MyISAM engine does.
What I notice though is that on table test1, the (c1,c2) combination is Unique and the fields are not nullable. This allows your query to be rewritten as:
SELECT t.c1, t.c2, COUNT(*)
FROM test1 t
LEFT JOIN test2 t2 ON t2.key = t.key
GROUP BY t.key
It will give the same results while using the same field for the JOIN and the GROUP BY. Note that MySQL allows you to use in the SELECT list fields that are not in the GROUP BY list, without having aggregate functions on them. This is not allowed in most other systems and is seen as a bug by some. In this situation though it is a very nice feature. Every row can be either identified by (key) or (c1,c2), so it shouldn't matter which of the two is used for the grouping.
Another thing to note is that when you use LEFT JOIN, it's common to use the joining column from the right side for the counting: COUNT(t2.key) and not COUNT(*). Your original query will give 1 in that column for records in test1 that do not match any record in test2, because it counts rows, while you probably want to count the related records in test2 - and show 0 in those cases.
So, try this query and post the EXPLAIN:
SELECT t.c1, t.c2, COUNT(t2.key)
FROM test1 t
LEFT JOIN test2 t2 ON t2.key = t.key
GROUP BY t.key
The indexes help with the join, but you still need to do a full sort in order to do the group by. Essentially, it still has to process every record in the set.
Adding a where clause and limiting the set would run faster, of course. It just won't get you the results you want.
There may be other options than doing a group by on the entire table. I notice you're doing a SELECT * - What are you trying to get out of the query?
SELECT DISTINCT c1, c2
FROM test t
LEFT JOIN test2 t2 ON t2.key = t.key
may run faster, for instance. (I realize this was just a sample query, but understand that it's hard to optimize when you don't know what the end goal is!)
EDIT - In doing some reading (http://dev.mysql.com/doc/refman/5.0/en/group-by-optimization.html), I learned that, under the correct circumstances, indexes can help significantly with the group by.
What I'm seeing is that it needs to be a sorted index (like BTREE), not a HASH. Perhaps:
CREATE INDEX c1c2 ON t (c1, c2) USING BTREE;
might help.
For InnoDB it will work, as the index carries your primary key by default. For MyISAM you have to make the last column of your index be `key`. That gives the optimizer all keys in the same order, so it can skip the sort. You cannot do any range queries on the index prefix then; that puts you right back into filesort. I'm currently struggling with a similar problem.