Query Optimization for Friends Feed - MySQL

I'm having some weird trouble with a friends feed query - here is the background:
I have 3 tables
checkin - around 13m records
users - around 250k records
friends - around 1.5m records
The checkin table lists activities performed by users. There are numerous indexes on it, but the relevant ones are on user_id, on created_at, and on (user_id, created_at).
The users table is just the basic user information. There is an index on user_id.
The friends table has user_id, friend_id and is_approved fields. There is an index on (user_id, is_approved).
In my query, I am trying to pull down a basic friends feed for any user - so I have been doing this:
SELECT checkin_id, created_at
FROM checkin
WHERE (user_id IN (SELECT friend_id from friends where user_id = 1 and is_approved = 1) OR user_id = 1)
ORDER by created_at DESC
LIMIT 0, 15
The goal of the query is just to pull the checkin_id and created_at for all of the user's friends plus their own activity. It's a pretty simple query, and when a user's friends have tons of recent activity it is very quick. Here is the EXPLAIN:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY checkin index user_id,user_id_2 created_at 8 NULL 15 Using where
2 DEPENDENT SUBQUERY friends eq_ref user_id,friend_id,is_approved,friend_looku... PRIMARY 8 const,func 1 Using where
As an explanation, user_id is a simple index on user_id, while user_id_2 is an index on (user_id, created_at). On the friends table, friend_lookup is the index on (user_id, is_approved).
This is a very simple query and gets completed in: Showing rows 0 - 14 (15 total, Query took 0.0073 sec).
However, when a user's friends' activity is not recent and there isn't a lot of data, the same query takes around 5-7 seconds. It has the same EXPLAIN as the previous query - it just takes longer.
Having more friends doesn't seem to have an effect; the query speeds up with more recent activity.
Does anyone have any tips to optimize these queries so they run at the same speed regardless of activity?
Server Setup
This is a dedicated MySQL server with 16GB of RAM. It is running Ubuntu 10.10 and MySQL 5.1.49.
UPDATE
Most people have suggested removing the IN piece and turning it into an INNER JOIN:
SELECT c.checkin_id, c.created_at
FROM checkin c
INNER JOIN friends f ON c.user_id = f.friend_id
WHERE f.user_id =1
AND f.is_approved =1
ORDER BY c.created_at DESC
LIMIT 0 , 15
This query is 10x worse - as reported in the EXPLAIN:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE f ref PRIMARY,user_id,friend_id,is_approved,friend_looku... friend_lookup 5 const,const 938 Using temporary; Using filesort
1 SIMPLE c ref user_id,user_id_2 user_id 4 untappd_prod.f.friend_id 71 Using where
The goal of this query is to get all the friends' activity plus your own in the same query (instead of having to create two queries, merge the results, and sort by created_at). I also can't remove the plain user_id index, as it's an important piece of another query.
The interesting part is that when I run this query on a user account that doesn't have a lot of activity, I get this EXPLAIN:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE f index_merge PRIMARY,user_id,friend_id,is_approved,friend_looku... user_id,friend_lookup 4,5 NULL 11 Using intersect(user_id,friend_lookup); Using wher...
1 SIMPLE c ref user_id,user_id_2 user_id 4 untappd_prod.f.friend_id 71 Using where
Any advice?

So.. you have a few things going on here..
In the EXPLAIN plan, the optimizer will use what's in "key", not what's in possible_keys. That's why you see it scanning more records when the data is not recent.
On the checkin table, only (user_id, created_at) and created_at are necessary.. you don't need another index on user_id by itself; the optimizer will use (user_id, created_at) since user_id is the leading column.
Try this..
1. Use a join between friends and checkin and remove the IN clause, so that friends becomes the driving table; you should see it first on the execution path of your EXPLAIN plan.
2. With 1 done, make sure that checkin is using the (user_id, created_at) index in the execution path.
3. Write another query for the OR condition where user_id from the checkin table is 1. I think your two data sets should be mutually exclusive; if so it should be OK.. or else you would not have needed the OR condition after the IN clause in the first place.
4. Remove the user_id index that's by itself, as you have the (user_id, created_at) index.
Your goal is that it uses the index under key, not just possible_keys. This should take care of older, non-recent checkins as well as recent ones. A sketch combining points 1 and 3 is below.
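Putting points 1 and 3 together, the rewrite might look something like this (a sketch only - it assumes, per point 3, that a user is never their own friend, so UNION ALL won't produce duplicates):
-- Friends drive the join; the user's own checkins come from a
-- separate UNION branch instead of the OR. Each branch keeps its
-- own ORDER BY/LIMIT so only 15 rows per branch reach the final sort.
( SELECT c.checkin_id, c.created_at
    FROM friends f
    JOIN checkin c ON c.user_id = f.friend_id
   WHERE f.user_id = 1 AND f.is_approved = 1
   ORDER BY c.created_at DESC
   LIMIT 15 )
UNION ALL
( SELECT checkin_id, created_at
    FROM checkin
   WHERE user_id = 1
   ORDER BY created_at DESC
   LIMIT 15 )
ORDER BY created_at DESC
LIMIT 15;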

My first suggestion is to remove the dependent subquery and turn it into a join. I've found that MySQL is not good at processing these types of queries. Try this:
SELECT c.checkin_id, c.created_at
FROM checkin c
INNER JOIN friends f
ON c.user_id = f.friend_id
WHERE f.user_id = 1
AND f.is_approved = 1
ORDER by c.created_at DESC
LIMIT 0, 15
My second suggestion, since you have a dedicated server, is to use the InnoDB storage engine for all your tables. Make sure that you tweak default InnoDB settings, especially for innodb_buffer_pool_size: http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/
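For a quick check of where that setting currently stands (the 12G figure in the comment is an illustrative rule of thumb for a dedicated 16GB box, not a measured recommendation):
-- Shows the current buffer pool size in bytes. On a dedicated 16GB
-- box a common rule of thumb is to give InnoDB most of the RAM,
-- e.g. innodb_buffer_pool_size = 12G under [mysqld] in my.cnf.
-- On MySQL 5.1 this setting is not dynamic; it requires a restart.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';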

Related

Laravel Join tables and group by sum query too slow

I am using the Laravel query builder to get the desired results from the database. The following query is working perfectly but taking too much time to get results. Can you please help me with this?
select
`amz_ads_sp_campaigns`.*,
SUM(attributedUnitsOrdered7d) as order7d,
SUM(attributedUnitsOrdered30d) as order30d,
SUM(attributedSales7d) as sale7d,
SUM(attributedSales30d) as sale30d,
SUM(impressions) as impressions,
SUM(clicks) as clicks,
SUM(cost) as cost,
SUM(attributedConversions7d) as attributedConversions7d,
SUM(attributedConversions30d) as attributedConversions30d
from
`amz_ads_sp_product_targetings`
inner join `amz_ads_sp_report_product_targetings` on `amz_ads_sp_product_targetings`.`campaignId` = `amz_ads_sp_report_product_targetings`.`campaignId`
inner join `amz_ads_sp_campaigns` on `amz_ads_sp_report_product_targetings`.`campaignId` = `amz_ads_sp_campaigns`.`campaignId`
where
(
`amz_ads_sp_product_targetings`.`user_id` = ?
and `amz_ads_sp_product_targetings`.`profileId` = ?
)
group by
`amz_ads_sp_product_targetings`.`campaignId`
Result of Explain SQL
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE amz_ads_sp_report_product_targetings ALL campaignId NULL NULL NULL 50061 Using temporary; Using filesort
1 SIMPLE amz_ads_sp_campaigns ref campaignId campaignId 8 pr-amz-ppc.amz_ads_sp_report_product_targetings.ca... 1
1 SIMPLE amz_ads_sp_product_targetings ref campaignId campaignId 8 pr-amz-ppc.amz_ads_sp_report_product_targetings.ca... 33 Using where
Your query could benefit from several indices to cover the WHERE clause as well as the join conditions:
CREATE INDEX idx1 ON amz_ads_sp_product_targetings (
user_id, profileId, campaignId);
CREATE INDEX idx2 ON amz_ads_sp_report_product_targetings (
campaignId);
CREATE INDEX idx3 ON amz_ads_sp_campaigns (campaignId);
The first index idx1 covers the entire WHERE clause, which might let MySQL throw away many records on the initial scan of the amz_ads_sp_product_targetings table. It also includes the campaignId column, which is needed for the first join. The second and third indices cover the join columns of each respective table. This might let MySQL do a more rapid lookup during the join process.
Note that selecting amz_ads_sp_campaigns.* is not valid unless campaignId is the primary key of that table. Also, there isn't much else we can do to speed up the query, as SUM, by its nature, requires touching every record in order to come up with the sum.
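If that caveat applies (e.g. under ONLY_FULL_GROUP_BY on MySQL 5.7+), one possible variant is to group on the campaigns table's own key, so its columns become functionally dependent on the grouping column. This is only a sketch; it assumes campaignId really is the primary key of amz_ads_sp_campaigns, and some SUM columns are elided for brevity:
-- Group on the assumed primary key so selecting amz_ads_sp_campaigns.*
-- is deterministic under ONLY_FULL_GROUP_BY.
select
    `amz_ads_sp_campaigns`.*,
    SUM(attributedSales7d)  as sale7d,
    SUM(attributedSales30d) as sale30d,
    SUM(impressions)        as impressions,
    SUM(clicks)             as clicks,
    SUM(cost)               as cost
from `amz_ads_sp_product_targetings`
inner join `amz_ads_sp_report_product_targetings`
    on `amz_ads_sp_product_targetings`.`campaignId` = `amz_ads_sp_report_product_targetings`.`campaignId`
inner join `amz_ads_sp_campaigns`
    on `amz_ads_sp_report_product_targetings`.`campaignId` = `amz_ads_sp_campaigns`.`campaignId`
where `amz_ads_sp_product_targetings`.`user_id` = ?
  and `amz_ads_sp_product_targetings`.`profileId` = ?
group by `amz_ads_sp_campaigns`.`campaignId`;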

Optimize query with 1 join, on tables with 10+ millions rows

I am looking at making a request that uses 2 tables faster.
I have the following 2 tables :
Table "logs"
id varchar(36) PK
date timestamp(2)
more varchar fields, and one text field
That table has what the PHP Laravel Framework calls a "polymorphic many to many" relationship with several other objects, so there is a second table "logs_pivot" :
id unsigned int PK
log_id varchar(36) FOREIGN KEY (logs.id)
model_id varchar(40)
model_type varchar(50)
There are one or several entries in logs_pivot per entry in logs. The tables have 20+ million and 10+ million rows, respectively.
We do queries like so :
select * from logs
join logs_pivot on logs.id = logs_pivot.log_id
where model_id = 'some_id' and model_type = 'My\Class'
order by date desc
limit 50;
Obviously we have a compound index on the model_id and model_type fields, but the requests are still slow: several (dozens of) seconds every time.
We also have an index on the date field, but an EXPLAIN shows that it is the model_id_model_type index that gets used.
Explain statement:
+----+-------------+-------------+------------+--------+--------------------------------------------------------------------------------+-----------------------------------------------+---------+-------------------------------------------+------+----------+---------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------------+------------+--------+--------------------------------------------------------------------------------+-----------------------------------------------+---------+-------------------------------------------+------+----------+---------------------------------+
| 1 | SIMPLE | logs_pivot | NULL | ref | logs_pivot_model_id_model_type_index,logs_pivot_log_id_index | logs_pivot_model_id_model_type_index | 364 | const,const | 1 | 100.00 | Using temporary; Using filesort |
| 1 | SIMPLE | logs | NULL | eq_ref | PRIMARY | PRIMARY | 146 | the_db_name.logs_pivot.log_id | 1 | 100.00 | NULL |
+----+-------------+-------------+------------+--------+--------------------------------------------------------------------------------+-----------------------------------------------+---------+-------------------------------------------+------+----------+---------------------------------+
In other tables, I was able to make a similar request much faster by including the date field in the index. But in this case, the date is in a separate table.
When we want to access these data, they are typically a few hours/days old.
Our InnoDB pools are much too small to hold all that data (+ all the other tables) in memory, so the data is most probably always queried on disk.
What would be all the ways we could make that request faster ?
Ideally only with another index, or by changing how it is done.
Thanks a lot !
Edit 17h05 :
Thank you all for your answers so far. I will try something like O Jones suggests, and also try to somehow include the date field in the pivot table, so that I can include it in the index.
Edit 14/10 10h.
Solution :
So I ended up changing how the request is actually done, by sorting on the id field of the pivot table instead, which indeed allows putting it in an index.
Also, the request that counts the total number of rows was changed to run only on the pivot table, when it is not filtered by date.
Thank you all !
Just a suggestion. Using a compound index is obviously a good thing. Another option might be to pre-qualify an ID by date, and extend your logs_pivot index to (model_id, model_type, log_id).
If you're querying data and the entire history is 20+ million records, how far back does the data go when you only need a limit of 50 records per given category of model id/type? Say 3 months, versus a log going back, say, 5 years? (Not listed in the post, just a for-instance.) If you can query the minimum log ID where the date is greater than, say, 3 months back, that one ID can limit what else is scanned from your logs_pivot table.
Something like
select
lp.*,
l.date
from
logs_pivot lp
JOIN Logs l
on lp.log_id = l.id
where
model_id = 'some_id'
and model_type = 'My\Class'
and log_id >= ( select min( id )
from logs
where date >= date_sub( curdate(), interval 3 month ))
order by
l.date desc
limit
50;
So, the WHERE clause for the log_id is computed once and returns just an ID from as far back as 3 months, not from the entire history of logs_pivot. Then you query with the optimized two-part key of model id/type, but also jump to the end of its index with the ID included in the index key, skipping over all the historical rows.
Another thing you MAY want to include is some pre-aggregate tables of record counts, such as per month/year per given model type/id. Use that as a pre-query to present to users, and then as a drill-down to further detail. A pre-aggregate table can be built over all the historical data once, since it is static and does not change. The only piece you would have to constantly update is whatever the current single month period is, such as on a nightly basis. Or, possibly better, via a trigger that either inserts a record every time an add is done, or updates a count for the given model/type based on year/month aggregations. Again, just a suggestion, as there is no other context on how/why the data will be presented to the end-user. A sketch is below.
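To make the idea concrete, here is one hedged sketch; the table, column, and trigger names are all hypothetical:
-- Counts per model/type per month, maintained by an insert trigger.
CREATE TABLE logs_pivot_monthly_counts (
  model_id   varchar(40)  NOT NULL,
  model_type varchar(50)  NOT NULL,
  yr_month   char(7)      NOT NULL,  -- e.g. '2021-10'
  cnt        int unsigned NOT NULL DEFAULT 0,
  PRIMARY KEY (model_id, model_type, yr_month)
);

DELIMITER //
CREATE TRIGGER logs_pivot_count_ai AFTER INSERT ON logs_pivot
FOR EACH ROW
BEGIN
  -- Bump the current month's count for this model/type.
  INSERT INTO logs_pivot_monthly_counts (model_id, model_type, yr_month, cnt)
  VALUES (NEW.model_id, NEW.model_type, DATE_FORMAT(NOW(), '%Y-%m'), 1)
  ON DUPLICATE KEY UPDATE cnt = cnt + 1;
END//
DELIMITER ;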
I see two problems:
UUIDs are costly when tables are huge relative to RAM size.
The LIMIT cannot be handled optimally because the WHERE clauses come from one table, but the ORDER BY column comes from another table. That is, it will do all of the JOIN, then sort and finally peel off a few rows.
SELECT columns FROM big table ORDER BY something LIMIT small number is a notorious query-performance antipattern. Why? The server sorts a whole mess of long rows, then discards almost all of them. It doesn't help that one of your columns is a LOB -- a TEXT column.
Here's an approach that can reduce that overhead: Figure out which rows you want by finding the set of primary keys you want, then fetch the content of only those rows.
What rows do you want? This subquery finds them.
SELECT id
FROM logs
JOIN logs_pivot
ON logs.id = logs_pivot.log_id
WHERE logs_pivot.model_id = 'some_id'
AND logs_pivot.model_type = 'My\Class'
ORDER BY logs.date DESC
LIMIT 50
This does all the heavy lifting of working out the rows you want. So, this is the query you need to optimize.
It can be accelerated by this index on logs (note that descending indexes only take effect on MySQL 8.0+; older versions parse DESC but ignore it):
CREATE INDEX logs_date_desc ON logs (date DESC);
and this three-column compound index on logs_pivot
CREATE INDEX logs_pivot_lookup ON logs_pivot (model_id, model_type, log_id);
This index is likely to be better, since the Optimizer will see the filtering on logs_pivot but not logs. Hence, it will look in logs_pivot first.
Or maybe
CREATE INDEX logs_pivot_lookup ON logs_pivot (log_id, model_id, model_type);
Try one then the other to see which yields faster results. (I'm not sure how the JOIN will use the compound index.) (Or simply add both, and use EXPLAIN to see which one it uses.)
Then, when you're happy -- or satisfied anyway -- with the subquery's performance, use it to grab the rows you need, like this
SELECT *
FROM logs
WHERE id IN (
SELECT id
FROM logs
JOIN logs_pivot
ON logs.id = logs_pivot.log_id
WHERE logs_pivot.model_id = 'some_id'
AND model_type = 'My\Class'
ORDER BY logs.date DESC
LIMIT 50
)
ORDER BY date DESC
This works because it sorts less data. The covering three-column index on logs_pivot will also help.
Notice that both the subquery and the main query have ORDER BY clauses, to make sure the returned detail result set is in the order you need.
Edit Darnit, I've been on MariaDB 10+ and MySQL 8+ so long that I forgot about the old limitation on using LIMIT inside an IN subquery. Try this instead.
SELECT *
FROM logs
JOIN (
SELECT id
FROM logs
JOIN logs_pivot
ON logs.id = logs_pivot.log_id
WHERE logs_pivot.model_id = 'some_id'
AND model_type = 'My\Class'
ORDER BY logs.date DESC
LIMIT 50
) id_set ON logs.id = id_set.id
ORDER BY date DESC
Finally, if you know you only care about rows newer than some certain time you can add something like this to your subquery.
AND logs.date >= NOW() - INTERVAL 5 DAY
This will help a lot if you have tonnage of historical data in your table.

Query speed drops on two "=" comparisons in WHERE clause

I have a music database with a table for releases and the release titles. This "releases_view" gets the title/title_id and the alternative title/alternative title_id of a track. This is the code of the view:
SELECT
t1.`title` AS title,
t1.`id` AS title_id,
t2.`title` AS title_alt,
t2.`id` AS title_alt_id
FROM
releases
LEFT JOIN titles t1 ON t1.`id`=`releases`.`title_id`
LEFT JOIN titles t2 ON t2.`id`=`releases`.`title_alt_id`
The title_id and title_alt_id fields in the joined tables are both int(11), title and title_alt are varchars.
The issue
This query will take less than 1 ms:
SELECT * FROM `releases_view` WHERE title_id=12345
This query will take less than 1 ms, too:
SELECT * FROM `releases_view` WHERE title_id=12345 OR title_alt_id!=54321
BUT: This query will take 0.2 s. It's 200 times slower!
SELECT * FROM `releases_view` WHERE title_id=20956 OR title_alt_id=38849
As soon as I have two "=" comparisons in the WHERE clause, things really get slow (although all of the queries only return a couple of results).
Can you help me to understand what is going on?
EDIT
EXPLAIN shows "Using where" for the title_alt_id, but I do not understand why. How can I avoid this?
EDIT
Here is the EXPLAIN DUMP.
id select_type table partitions type possible_keys key key_len ref rows Extra
1 SIMPLE releases NULL ALL NULL NULL NULL NULL 76802 Using temporary; Using filesort
1 SIMPLE t1 NULL eq_ref PRIMARY PRIMARY 4 db.releases.title_id 1
1 SIMPLE t2 NULL eq_ref PRIMARY PRIMARY 4 db.releases.title_alt_id 1 Using where
The "really slow" is because the Optimizer does not work well with OR.
Plan A (of the Optimizer): Scan the entire table, evaluating the entire OR.
Plan B: "Index Merge Union" could be used for title_id = 20956 OR title_alt_id = 38849 if you have separate indexes on title_id and title_alt_id: use each index to get two lists of PRIMARY KEYs, "merge" the lists, then reach into the table to get *. Multiple steps, not cheap, so Plan B is rarely used. The sketch below shows the indexes it would need.
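For reference, the separate indexes Plan B would need might look like this (index names are illustrative; releases is the view's base table):
-- One single-column index per OR'd column, so Index Merge Union
-- can fetch a list of primary keys from each and combine them.
CREATE INDEX idx_releases_title_id     ON releases (title_id);
CREATE INDEX idx_releases_title_alt_id ON releases (title_alt_id);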
title_id = 12345 OR title_alt_id != 54321 is a mystery, since it should return most of the table. Please provide EXPLAIN SELECT....
LEFT JOIN (as opposed to JOIN) needs to assume that the row may be missing in the 'right' table.

Removing all duplicates except one - optimized queries

I have tried the following two queries:
delete from app where not exists
(select a2.app_package, max(a2.id) from (select * from app) as a2
where a2.app_package = app.app_package having max(a2.id) = app.id);
AND
DELETE FROM app
USING app,
(select app_package, max(id) as ID from app
group by app_package
) as A
where A.ID > app.ID AND
A.app_package = app.app_package;
and am really stuck as to which one would execute faster.
SQLFiddles:
http://sqlfiddle.com/#!2/46498/1
http://sqlfiddle.com/#!2/142593/1
Both execution plans are the same:
ID SELECT_TYPE TABLE TYPE POSSIBLE_KEYS KEY KEY_LEN REF ROWS FILTERED EXTRA
1 SIMPLE app ALL 7 100
Are there further optimizations that could be made?
The execution plan you are showing is not that of the DELETE query but that of the SELECT * FROM app query, which just does a full table scan (as expected, since you aren't filtering on anything).
To see the execution plan, you will need to run EXPLAIN on the delete statements instead (apparently not possible in SQL Fiddle).
I took the liberty of assuming that you have an index on app_package. If you don't, you should definitely add it; a sketch follows.
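In case it is missing, something like this (the index name is illustrative):
-- Index on the duplicate-detection column, so the GROUP BY and the
-- app_package join condition don't force full table scans.
CREATE INDEX idx_app_package ON app (app_package);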
The first example (simply replace DELETE FROM with SELECT * FROM) shows that you are doing full table scans (bad) and using a DEPENDENT SUBQUERY which will be run for almost every record in the outer table (also bad).
1 PRIMARY app ALL 7 Using where
2 DEPENDENT SUBQUERY <derived3> ALL 7 Using where
3 DERIVED app ALL 7
To see that of the second one, you will have to translate the delete into a SELECT statement, something like this
SELECT * FROM app, (
SELECT app_package, MAX( id ) AS ID
FROM app
GROUP BY app_package
) AS A
WHERE A.ID > app.ID
AND A.app_package = app.app_package
which gives
1 PRIMARY <derived2> ALL 4
1 PRIMARY app ref 1 Using where
2 DERIVED app index 7
As you can see, this one isn't using dependent subqueries and isn't doing full table scans. This will definitely run faster as the amount of data in the table grows.

MySQL Query optimization in PHP

I have three tables, all of which can have possibly millions of rows. I have an actions table and a reactions table that holds reactions related to actions. Then there is an emotes table linked to reactions. What I would like to do with this particular query is find the most-clicked emote for a certain action. The difficulty for me is that the query involves three tables instead of only two.
Table actions (postings):
PKY id
...
Table reactions (comments, emotes etc.):
PKY id
INT action_id (related to actions table)
...
Table emotes:
PKY id
INT react_id (related to reactions table)
INT emote_id (related to a hardcoded list of available emotes)
...
The SQL query I came up with basically seems to work, but it takes 12 seconds when the tables contain millions of rows. The SQL query looks like this:
select emote_id, count(*) as cnt from emotes
where react_id in (
select id from reactions where action_id=2942715
)
group by emote_id order by cnt desc limit 1
MySQL EXPLAIN says the following:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY emotes index NULL react_id_2 21 NULL 4358594 Using where; Using index; Using temporary; Using f...
2 DEPENDENT SUBQUERY reactions unique_subquery PRIMARY,action_id PRIMARY 8 func 1 Using where
...I am grateful for any tips on improving the query. Note that I will NOT call this query every time a list of actions is being built, but only when emotes are being added. Therefore it's no problem if the query takes maybe 0.5 seconds to finish. But 12 seconds is too long!
What about this:
SELECT
emote_id,
count(*) as cnt
FROM emotes a
INNER JOIN reactions r
ON r.id = a.react_id
WHERE action_id = 2942715
GROUP BY emote_id
ORDER BY cnt DESC
LIMIT 1
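If the join alone doesn't get it under your 0.5-second budget, composite indexes covering the join and the grouping are a common companion to this query shape (a sketch; the index names are hypothetical, and your existing react_id_2 may already cover the emotes side):
-- Drive the lookup by action_id, then resolve emotes from the index only.
CREATE INDEX idx_reactions_action ON reactions (action_id, id);
CREATE INDEX idx_emotes_react     ON emotes (react_id, emote_id);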