I am struggling to find the right answer for this anywhere on the internet. I have a join and a GROUP BY; when I add a LIMIT at the end, it limits the groups and not the actual rows.
SELECT COUNT(*) AS `numrows`, `people`.`age`
FROM (`events`)
JOIN `people`
ON `events`.`id` = `people`.`id`
WHERE `people`.`priority` = '1'
GROUP BY `people`.`age`
ORDER BY `numrows`
LIMIT 150
The limit always changes, so this needs to be dynamic. The idea is to skip the first 150 (or x) rows from the joined tables, not to limit the groups.
EDIT: I think I have explained this badly. I actually want to start from row 150 (or x), and LIMIT is the only way I know to do this dynamically. The idea is that if the last search retrieved 150 rows, and next time there are 250 results, I want to ignore the first 150 that were found last time. Hope that makes better sense.
The limit or starting point needs to be applied after the WHERE in the join; I think that's the only place it would work.
EDIT SQL =
SELECT COUNT(*) AS `numrows`, `people`.`age`
FROM (
    SELECT `id`, `events`.`pid`
    FROM `events`
    ORDER BY `id`
    LIMIT 1050
) limited
JOIN `people` ON `people`.`age` = limited.id
WHERE `people`.`priority` = '1'
GROUP BY `people`.`age`
ORDER BY `numrows` DESC
Thanks for your help
I suspect you mean something like this?
SELECT COUNT(*) AS numrows, people.age
FROM (
SELECT id FROM events ORDER BY id LIMIT 150
) limited
JOIN people ON people.id = limited.id
GROUP BY people.age
ORDER BY numrows;
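If the goal is to skip the first 150 rows rather than take them, MySQL's LIMIT also accepts an offset. A minimal sketch reusing the subquery above; the large second argument is the documented MySQL idiom for "all remaining rows from this offset":
SELECT COUNT(*) AS numrows, people.age
FROM (
    SELECT id FROM events ORDER BY id LIMIT 150, 18446744073709551615
) limited
JOIN people ON people.id = limited.id
GROUP BY people.age
ORDER BY numrows;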
I did it using WHERE ... BETWEEN on a timestamp; it seems like the only sensible way to do it.
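A sketch of that approach, assuming events has a timestamp column (ts here is a hypothetical name) and @last_seen holds the newest timestamp already processed:
SELECT COUNT(*) AS numrows, people.age
FROM events
JOIN people ON events.id = people.id
WHERE people.priority = '1'
  AND events.ts > @last_seen   -- only rows newer than the previous run
GROUP BY people.age
ORDER BY numrows;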
I am struggling with a MySQL problem. I have two identical queries, where only the item_id at the end differs, but they return different execution plans when I examine them with ANALYZE/EXPLAIN. This results in a huge difference in the time needed to return a result.
The query is something like
explain select `orders`.*,
(select count(*) from `items`
inner join `orders_items` on `items`.`id` = `orders_items`.`item_id`
where `orders`.`id` = `orders_items`.`order_id`
and `items`.`deleted_at` is null) as `items_count`,
(select count(*) from `returns`
where `orders`.`id` = `returns`.`order_id`
and `returns`.`deleted_at` is null) as `returns_count`,
(select count(*) from `shippings`
where `orders`.`id` = `shippings`.`order_id`
and `shippings`.`deleted_at` is null) as `shippings_count`,
(select count(*) from `orders` as `laravel_reserved_2`
where `orders`.`id` = `laravel_reserved_2`.`recurred_from_id`
and `laravel_reserved_2`.`deleted_at` is null) as `recurred_orders_count`,
(select COALESCE(SUM(orders_items.amount), 0) from `items`
inner join `orders_items` on `items`.`id` = `orders_items`.`item_id`
where `orders`.`id` = `orders_items`.`order_id`
and `items`.`deleted_at` is null) as `items_sum_orders_itemsamount`,
`orders`.*,
`orders_items`.`item_id` as `pivot_item_id`,
`orders_items`.`order_id` as `pivot_order_id`,
`orders_items`.`amount` as `pivot_amount`,
`orders_items`.`id` as `pivot_id`
from `orders`
inner join `orders_items` on `orders`.`id` = `orders_items`.`order_id`
where `orders_items`.`item_id` = 497
and `import_finished` = 1
and `orders`.`deleted_at` is null
order by `id` desc limit 50 offset 0;
As you can see it is a laravel/eloquent query.
This is the execution plan for the query above:
But when I change the item_id at the end, it returns the following execution plan:
It seems absolutely random: about 30% of the item_ids get the faster plan and 70% get the slower one, and I have no idea why. The related data is almost the same for every item we have in our database.
I also flushed the query cache to see if it was causing the different execution plans, but with no success.
I have been googling for four hours but can't find anything about this exact problem.
Can someone tell me why this happens?
Thanks in advance!!
Edit 01/21/2023 - 19:04:
It seems MySQL doesn't like ordering by columns that are not covered by the related WHERE clause, in this case the pivot table orders_items.
I just replaced the
order by id
with
order by orders_items.order_id
This results in a 10 times faster query.
There can be several reasons why a query gets different execution plans just because of a different parameter. The simplest explanation would be the position of the given item_id in the relevant index: its position within the index can affect the estimated cost of using that index, which in turn can affect whether it is used at all. (This is just one example.)
It is important to note that EXPLAIN gives you the planned execution plan, but not necessarily the one actually used.
EXPLAIN ANALYZE is the command that outputs the actually used execution plan. It may still yield different results for different parameters.
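For example, on MySQL 8.0.18+ (a trimmed version of the query above, for illustration):
EXPLAIN ANALYZE
SELECT orders.*
FROM orders
INNER JOIN orders_items ON orders.id = orders_items.order_id
WHERE orders_items.item_id = 497
  AND orders.deleted_at IS NULL
ORDER BY orders.id DESC
LIMIT 50;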
ON `orders`.`id` = `orders_items`.`order_id`
where `orders_items`.`item_id` = 497
and ???.`import_finished` = 1
and `orders`.`deleted_at` is null
order by ???.`id` desc
limit 50 offset 0;
One order maps to many order items, correct?
So, ORDER BY orders.id is different than ORDER BY orders_items.item_id.
Does item_id = 497 show up in many different orders?
So, think about which ORDER BY you really want.
Meanwhile, these may help with performance:
orders_items: INDEX(order_id, item_id)
returns: INDEX(order_id, deleted_at)
shippings: INDEX(order_id, deleted_at)
laravel_reserved_2: INDEX(recurred_from_id, deleted_at)
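In DDL form (index names are illustrative; note that laravel_reserved_2 is just an alias for orders, so the last index belongs on orders itself):
ALTER TABLE `orders_items` ADD INDEX idx_order_item (order_id, item_id);
ALTER TABLE `returns` ADD INDEX idx_order_deleted (order_id, deleted_at);
ALTER TABLE `shippings` ADD INDEX idx_order_deleted (order_id, deleted_at);
ALTER TABLE `orders` ADD INDEX idx_recurred_deleted (recurred_from_id, deleted_at);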
Can you please help me? I have a table in the database with a column called "recioient_id".
Its value is repeated across multiple rows. How do I prevent repeated rows while keeping descending order and displaying only the last row per recipient?
I have tried a lot without finding a solution.
The query below was good at preventing the repetition, but it did not display the last row.
The query was:
SELECT *
FROM `messages`
WHERE `sender_id` = 1
GROUP BY `recioient_id` DESC
HAVING COUNT(*) >= 1
Here is a sample table.
I hope the translation is clear, and I appreciate any help.
If you want one row per recioient_id, use filtering. I would recommend a correlated subquery:
SELECT m.*
FROM messages m
WHERE m.sender_id = 1 AND
m.messages_id = (SELECT MAX(m2.messages_id)
FROM messages m2
WHERE m2.sender_id = m.sender_id AND
m2.recioient_id = m.recioient_id
)
ORDER BY m.recioient_id;
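On MySQL 8.0+, a window function gives the same result; a sketch assuming messages_id is the auto-incrementing key:
SELECT t.*
FROM (
    SELECT m.*,
           ROW_NUMBER() OVER (PARTITION BY m.recioient_id
                              ORDER BY m.messages_id DESC) AS rn
    FROM messages m
    WHERE m.sender_id = 1
) t
WHERE t.rn = 1
ORDER BY t.recioient_id;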
I have a relatively basic query that fetches the most recent messages per conversation:
SELECT `message`.`conversation_id`, MAX(`message`.`add_time`) AS `max_add_time`
FROM `message`
LEFT JOIN `conversation` ON `message`.`conversation_id` = `conversation`.`id`
WHERE ((`conversation`.`receiver_user_id` = 1 AND `conversation`.`status` != -2)
OR (`conversation`.`sender_user_id` = 1 AND `conversation`.`status` != -1))
GROUP BY `conversation_id`
ORDER BY `max_add_time` DESC
LIMIT 12
The message table contains more than 911,000 records and the conversation table around 680,000. The execution time for this query varies between 4 and 10 seconds depending on the load on the server, which is far too long.
Below is a screenshot of the EXPLAIN result:
The cause is apparently the MAX and/or the GROUP BY, because the following similar query only takes 10ms:
SELECT COUNT(*)
FROM `message`
LEFT JOIN `conversation` ON `message`.`conversation_id` = `conversation`.`id`
WHERE (`message`.`status`=0)
AND (`message`.`user_id` <> 1)
AND ((`conversation`.`sender_user_id` = 1 OR `conversation`.`receiver_user_id` = 1))
The corresponding EXPLAIN result:
I have tried adding different indexes to both tables without any improvement, for example conv_msg_idx(add_time, conversation_id) on message, which seems to be used according to the first EXPLAIN result; however, the query still takes around 10 seconds to execute.
Any help improving the indices or query to get the execution time down would be greatly appreciated.
EDIT:
I have changed the query to use an INNER JOIN:
SELECT `message`.`conversation_id`, MAX(`message`.`add_time`) AS `max_add_time`
FROM `message`
INNER JOIN `conversation` ON `message`.`conversation_id` = `conversation`.`id`
WHERE ((`conversation`.`receiver_user_id` = 1 AND `conversation`.`status` != -2)
OR (`conversation`.`sender_user_id` = 1 AND `conversation`.`status` != -1))
GROUP BY `conversation_id`
ORDER BY `max_add_time` DESC
LIMIT 12
But the execution time is still ~ 6 seconds.
You should create a multiple-column index on the columns that appear in your WHERE clause and that you want to SELECT (except conversation_id).
conversation_id should be indexed in both tables.
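A hedged sketch in DDL form, with illustrative index names, following the WHERE and JOIN clauses above:
ALTER TABLE conversation
    ADD INDEX idx_receiver_status (receiver_user_id, status),
    ADD INDEX idx_sender_status (sender_user_id, status);
ALTER TABLE message
    ADD INDEX idx_conv_time (conversation_id, add_time);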
Try to avoid OR in a SQL query, as it can make fetching slow. Use UNION or another approach instead:
SELECT message.conversation_id, MAX(message.add_time) AS max_add_time
FROM message
INNER JOIN conversation ON message.conversation_id = conversation.id
WHERE (conversation.sender_user_id = 1 AND conversation.status != -1)
GROUP BY conversation_id
UNION
SELECT message.conversation_id, MAX(message.add_time) AS max_add_time
FROM message
INNER JOIN conversation ON message.conversation_id = conversation.id
WHERE (conversation.receiver_user_id = 1 AND conversation.status != -2)
GROUP BY conversation_id
ORDER BY max_add_time DESC
LIMIT 12
Instead of depending on the single table message, have two tables: one for message, as you have, plus another, thread, that keeps the status of each thread of messages.
Yes, that requires a little more work when adding a new message -- update a column or two in thread.
But it eliminates the GROUP BY and MAX that are causing grief in this query.
While doing this split, see if some other columns would be better off in the new table.
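A minimal sketch of the idea, here folded into the existing conversation table for brevity rather than a separate thread table; the column name last_add_time is hypothetical:
ALTER TABLE conversation
    ADD COLUMN last_add_time DATETIME NULL,
    ADD INDEX idx_last_add_time (last_add_time);

-- maintained by application code (or a trigger) on every new message:
UPDATE conversation SET last_add_time = NOW() WHERE id = 123;

-- the hot query then needs no GROUP BY or MAX:
SELECT id, last_add_time
FROM conversation
WHERE (receiver_user_id = 1 AND status != -2)
   OR (sender_user_id = 1 AND status != -1)
ORDER BY last_add_time DESC
LIMIT 12;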
SELECT `message`.`conversation_id`, MAX(`message`.`add_time`) AS `max_add_time`
FROM `message`
INNER JOIN `conversation` ON `message`.`conversation_id` = `conversation`.`id`
WHERE ((`conversation`.`receiver_user_id` = 1 AND `conversation`.`status` != -2)
OR (`conversation`.`sender_user_id` = 1 AND `conversation`.`status` != -1))
GROUP BY `conversation_id`
ORDER BY `max_add_time` DESC
LIMIT 12
You can try using an INNER JOIN, if it does not affect your logic.
You can also modify this query to avoid MAX() by using a window function:
SELECT *
FROM (
    SELECT conversation_id, add_time,
           ROW_NUMBER() OVER (PARTITION BY conversation_id ORDER BY add_time DESC) AS p1
    FROM message
) t1
WHERE t1.p1 = 1
I have been researching this for hours, and the best code I have come up with is from an example I found on Stack Overflow. I have been through several variations, but the following is the only query that returns the correct data; the problem is that it takes over 139 s (more than 2 minutes) to return only 30 rows. I'm stuck. (life_p is a 'likes' table.)
SELECT
    logos.id,
    logos.in_gallery,
    logos.active,
    logos.pubpriv,
    logos.logo_name,
    logos.logo_image,
    COALESCE(cc.Count, 0) AS CommentCount,
    COALESCE(lc.Count, 0) AS LikeCount
FROM logos
LEFT OUTER JOIN (
    SELECT comments.logo_id, COUNT(*) AS Count FROM comments GROUP BY comments.logo_id
) cc ON cc.logo_id = logos.id
LEFT OUTER JOIN (
    SELECT life_p.logo_id, COUNT(*) AS Count FROM life_p GROUP BY life_p.logo_id
) lc ON lc.logo_id = logos.id
WHERE logos.active = '1'
  AND logos.pubpriv = '0'
GROUP BY logos.id
ORDER BY logos.in_gallery DESC
LIMIT 0, 30
I'm not sure what's wrong. If I run them singly, meaning I remove the COALESCE and one of the joins:
SELECT
    logos.id,
    logos.in_gallery,
    logos.active,
    logos.pubpriv,
    logos.logo_name,
    logos.logo_image,
    COUNT(*) AS lc
FROM logos
LEFT JOIN life_p ON life_p.logo_id = logos.id
WHERE logos.active = '1'
  AND logos.pubpriv = '0'
GROUP BY logos.id
ORDER BY logos.in_gallery DESC
LIMIT 0, 30
that runs in less than half a second (200-300 ms).
Here is a link to the explain: https://logopond.com/img/explain.png
MySQL has a peculiar quirk that allows a group by clause that does not list all non-aggregating columns. This is NOT a good thing and you should always specify ALL non-aggregating columns in the group by clause.
Note that when counting over joined tables, it is useful to know that the COUNT() function ignores NULLs. So for a LEFT JOIN, where NULLs can occur, don't use COUNT(*); instead, count a column from the joined table, and only rows from that table will be counted. From these points I would suggest the following query structure.
SELECT
logos.id
, logos.in_gallery
, logos.active
, logos.pubpriv
, logos.logo_name
, logos.logo_image
, COALESCE(COUNT(cc.logo_id), 0) AS CommentCount
, COALESCE(COUNT(lc.logo_id), 0) AS LikeCount
FROM logos
LEFT OUTER JOIN comments cc ON cc.logo_id = logos.id
LEFT OUTER JOIN life_p lc ON lc.logo_id = logos.id
WHERE logos.active = '1'
AND logos.pubpriv = '0'
GROUP BY
logos.id
, logos.in_gallery
, logos.active
, logos.pubpriv
, logos.logo_name
, logos.logo_image
ORDER BY logos.in_gallery DESC
LIMIT 0, 30
If you continue to have performance issues, then examine the execution plan and consider adding indexes to suit.
You can create some indexes on the joining fields:
ALTER TABLE table ADD INDEX idx__tableName__fieldName (field)
In your case it will be something like (cc and lc are aliases, so the indexes go on the underlying tables comments and life_p):
ALTER TABLE comments ADD INDEX idx__comments__logo_id (logo_id);
ALTER TABLE life_p ADD INDEX idx__life_p__logo_id (logo_id);
I don't really like it, because I've always read that subqueries are bad and that joins perform better under stress, but in this particular case a subquery seems to be the only way to pull the correct data in under half a second consistently. Thanks for the suggestions, everyone.
SELECT
logos.id,
logos.in_gallery,
logos.active,
logos.pubpriv,
logos.logo_name,
logos.logo_image,
(Select COUNT(comments.logo_id) FROM comments
WHERE comments.logo_id = logos.id) AS coms,
(Select COUNT(life_p.logo_id) FROM life_p
WHERE life_p.logo_id = logos.id) AS floats
FROM logos
WHERE logos.active = '1' AND logos.pubpriv = '0'
ORDER BY logos.in_gallery desc
LIMIT ". $start .",". $pageSize ."
Also, you can create a mapping table to speed up your query. Try:
CREATE TABLE mapping_comments AS
SELECT
    comments.logo_id,
    COUNT(*) AS Count
FROM
    comments
GROUP BY
    comments.logo_id;
Then change your code: the
left outer join(
should become
INNER JOIN mapping_comments AS mp ON mp.logo_id = logos.id
Then, each time a new comment is added to the comments table, you need to update your mapping table, or you can create a trigger to do it automatically when the comments table changes.
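A hedged sketch of such a trigger, assuming a unique key on mapping_comments.logo_id (added first, since CREATE TABLE ... AS SELECT does not create one):
ALTER TABLE mapping_comments ADD UNIQUE KEY uq_logo (logo_id);

CREATE TRIGGER trg_comments_count
AFTER INSERT ON comments
FOR EACH ROW
    -- bump the per-logo count, creating the row on first comment
    INSERT INTO mapping_comments (logo_id, `Count`)
    VALUES (NEW.logo_id, 1)
    ON DUPLICATE KEY UPDATE `Count` = `Count` + 1;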
I was thinking of a way to use one query with a subquery instead of two separate queries.
But it turns out the subquery is executed for each row in the result set. Is there a way to make that count subquery run only once within a combined query?
SELECT `ad_general`.`id`,
    ( SELECT COUNT(`ad_general`.`id`) AS count
      FROM (`ad_general`)
      WHERE `city` = 708 ) AS count
FROM (`ad_general`)
WHERE `ad_general`.`city` = '708'
ORDER BY `ad_general`.`id` DESC
LIMIT 15
Maybe using a join can solve the problem, but I don't know how.
SELECT ad_general.id, stats.cnt
FROM ad_general
JOIN (
SELECT count(*) as cnt
FROM ad_general
WHERE city = 708
) AS stats
WHERE ad_general.city = 708
ORDER BY ad_general.id DESC
LIMIT 15;
The explicit table names aren't required, but are used both for clarity and maintainability (the explicit table names will prevent any ambiguities should the schema for ad_general or the generated table ever change).
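On MySQL 8.0+ there is also a window-function form that computes the count once over the filtered rows, with no join at all:
SELECT id, COUNT(*) OVER () AS cnt
FROM ad_general
WHERE city = 708
ORDER BY id DESC
LIMIT 15;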
You can self-join (join the table to itself) and apply the aggregate function to the second copy.
SELECT `adgen`.`id`, COUNT(`adgen_count`.`id`) AS `count`
FROM `ad_general` AS `adgen`
JOIN `ad_general` AS `adgen_count` ON `adgen_count`.city = 708
WHERE `adgen`.`city` = 708
GROUP BY `adgen`.`id`
ORDER BY `adgen`.`id` DESC
LIMIT 15
However, it's impossible to say what the appropriate grouping is without knowing the structure of the table.