How to optimize a GROUP_CONCAT subquery in MySQL?

How can I optimize this query? It's taking a lot of time to execute.
SELECT GROUP_CONCAT(`details_value`), details_key
FROM `correspondence_package_details`
WHERE `correspondence_id` IN (SELECT GROUP_CONCAT(`correspondence_id`)
                              FROM `correspondence_package_header`
                              WHERE create_date BETWEEN '2017-11-01' AND '2017-11-30'
                              GROUP BY `manual_package_id`, account_id)
  AND (`details_key` = 'to' OR `details_key` = 'cc')
GROUP BY details_key;

Query optimization is both an art and a science. By simply looking at the SQL, you are limiting your options.
Run an explain plan on your query. Check for low performing join types and scans.
Add appropriate indexes on all join columns.
For date fields, order the index (ascending or descending) according to whether you normally query the most recent or the oldest records.
Regularly run OPTIMIZE TABLE on your tables so that the indexes and tablespaces get a quick tune-up.
The query plan will be your best tool for this.
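One concrete thing to check here: GROUP_CONCAT inside an IN (...) subquery compares correspondence_id against a single comma-separated string per group rather than a set of IDs, which usually breaks both the matching and any index use. As a hedged sketch (index names are illustrative; verify the semantics against your data), a plain subquery plus supporting indexes is worth trying, with EXPLAIN run before and after:

-- Plain subquery: returns a set of IDs that IN () can actually use,
-- instead of one comma-separated string per group
SELECT GROUP_CONCAT(`details_value`), details_key
FROM `correspondence_package_details`
WHERE `correspondence_id` IN (SELECT `correspondence_id`
                              FROM `correspondence_package_header`
                              WHERE create_date BETWEEN '2017-11-01' AND '2017-11-30')
  AND `details_key` IN ('to', 'cc')
GROUP BY details_key;

-- Supporting indexes (names illustrative)
CREATE INDEX idx_cph_date_id ON correspondence_package_header (create_date, correspondence_id);
CREATE INDEX idx_cpd_key_id  ON correspondence_package_details (details_key, correspondence_id);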

Related

How to make a faster query when joining multiple huge tables?

I have 3 tables. All 3 tables have approximately 2 million rows. Every day, 10,000-100,000 new entries are added. It takes approximately 10 seconds to run the SQL statement below. Is there a way to make it faster?
SELECT customers.name
FROM customers
INNER JOIN hotels ON hotels.cus_id = customers.cus_id
INNER JOIN bookings ON bookings.book_id = customers.book_id
WHERE customers.gender = 0
  AND customers.cus_id = 3
LIMIT 25 OFFSET 1;
Of course this statement works fine, but it's slow. Is there a better way to write this query?
All database servers have some form of optimization engine that determines how best to fetch the data you want. With a simple query such as the SELECT you showed, there isn't going to be any way to greatly improve performance within the SQL itself. As others have said, subqueries won't help, as they get optimized into the same plan as joins.
Reduce the number of columns, add indexes, beef up the server if that's an option.
Consider caching. I'm not a MySQL expert, but I found this article interesting and worth a skim: https://www.percona.com/blog/2011/04/04/mysql-caching-methods-and-tips/
Look at the section on summary tables and consider if that would be appropriate. Does pulling every hotel, customer, and booking need to be up-to-the-minute or would inserting this into a summary table once an hour be fine?
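If hourly freshness is acceptable, a minimal summary-table sketch along those lines (the table and column layout here is an illustrative assumption, not from the question):

-- Illustrative summary table, rebuilt on a schedule (cron job or MySQL event)
CREATE TABLE customer_booking_summary (
  cus_id INT          NOT NULL,
  name   VARCHAR(255) NOT NULL,
  PRIMARY KEY (cus_id)
);

-- Refresh once an hour instead of joining the three big tables per request
REPLACE INTO customer_booking_summary (cus_id, name)
SELECT DISTINCT customers.cus_id, customers.name
FROM customers
INNER JOIN hotels   ON hotels.cus_id    = customers.cus_id
INNER JOIN bookings ON bookings.book_id = customers.book_id
WHERE customers.gender = 0;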
A subquery won't help, but a proper index can improve performance, so make sure you have the right indexes:
create index idx1 on customers(gender, cus_id, book_id, name)
create index idx2 on hotels(cus_id)
create index idx3 on bookings(book_id)
I find it a bit hard to believe that this is related to a real problem. As written, I would expect this to return the same customer name over and over.
I would recommend the following indexes:
customers(cus_id, gender, book_id, name)
hotels(cus_id)
bookings(book_id)
It is really weird that bookings are not tied to a hotel.
First, these indexes cover the query, so the data pages don't need to be accessed. The logic is to start with the WHERE clause and use those columns first, then add additional columns from the ON and SELECT clauses.
Only one column is used for hotels and bookings, so those indexes are trivial.
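As DDL, that recommendation might look like this (index names are illustrative):

CREATE INDEX idx_customers_cover ON customers (cus_id, gender, book_id, name);
CREATE INDEX idx_hotels_cus      ON hotels (cus_id);
CREATE INDEX idx_bookings_book   ON bookings (book_id);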
The use of OFFSET without ORDER BY is quite suspicious. The result set is in indeterminate order anyway, so there is no reason to skip the nominally "first" value.

Optimizing a query to find AVG in MySQL with an INNER JOIN

Currently I am running a query to find averages, joining one more table.
The results are as expected, but it does not perform well and takes a long time to execute. So I need help finding a better query. The current query is:
SELECT AVG(t2.a),
       AVG(t2.b),
       AVG(t2.c),
       t1.column1,
       t1.column2
FROM table1 t1
INNER JOIN table2 t2 ON t1.column = t2.column
GROUP BY t1.column1, t2.column2
In the future, when asking questions related to performance, please always include EXPLAIN output. You basically just write "EXPLAIN SELECT ...;" and it will show you the execution plan of that query, which includes detailed information that may hint at possible optimizations.
Two things:
JOINs on unindexed columns may be very slow.
GROUP BY queries are generally slow, as they require sorting, especially when grouping on multiple columns. GROUP BY can use index scans, but that requires a composite index on the involved columns, which probably doesn't apply in your case since you're selecting columns from different tables.
How many rows do you have? If you're grouping hundreds of millions of rows, you can easily expect query times in the range of hours (and I'm dead serious about hours). Grouping is just a horrifically expensive operation, especially because memory limits mean the sorting takes place on disk, which adds a further slowdown from disk I/O, which is so much slower than memory.
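To address the first point, a minimal sketch of the join-column indexes, assuming t1.column / t2.column from the question are the join keys (column is a reserved word in MySQL, hence the backticks):

CREATE INDEX idx_t1_join ON table1 (`column`);
-- Including the averaged columns lets the aggregation read from the index alone
CREATE INDEX idx_t2_join ON table2 (`column`, a, b, c);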
There are two possible answers.
The query is wrong -- Because the JOIN occurs before the AVERAGE, hence the average is of too many rows.
The query is right -- in which case, there is a lot of work to do, so it takes time. I have to believe that this is the case, since you GROUP BY columns from both tables.
Please provide real column names; it could help our understanding of the query.
But assuming the first case, let's fix the math and speed it up.
Compute the averages in a "derived" table.
Do the JOIN.
I won't attempt to write the code until I have some assurance that the case is worth pursuing.
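For illustration only, a hedged sketch of that derived-table shape using the placeholder names from the question (whether the grouping matches the intended semantics depends on the real schema):

SELECT d.avg_a, d.avg_b, d.avg_c, t1.column1, t1.column2
FROM table1 t1
INNER JOIN (
    -- Averages are computed once per join key, before the join can multiply rows
    SELECT `column`, AVG(a) AS avg_a, AVG(b) AS avg_b, AVG(c) AS avg_c
    FROM table2
    GROUP BY `column`
) d ON t1.`column` = d.`column`;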

Speed of a query using FIND_IN_SET in MySQL

I have several problems with my query on a catalogue of products.
The query is as follows:
SELECT DISTINCT (cc_id) FROM cms_catalogo
JOIN cms_catalogo_lingua ON ccl_id_prod=cc_id
JOIN cms_catalogo_famiglia ON (FIND_IN_SET(ccf_id, cc_famiglia) != 0)
JOIN cms_catalogo_categoria ON (FIND_IN_SET(ccc_id, cc_categoria) != 0)
JOIN cms_catalogo_sottocat ON (FIND_IN_SET(ccs_id, cc_sottocat) != 0)
LEFT JOIN cms_catalogo_order ON cco_id_prod=cc_id AND cco_id_lingua=1 AND cco_id_sottocat=ccs_id
WHERE ccc_nome='Alpine Skiing' AND ccf_nome='Ski'
I noticed that the first run of the query takes 4.5 seconds on average; after that it becomes fast.
I use FIND_IN_SET because in my database, the table "cms_catalogo" has the columns "cc_famiglia", "cc_categoria" and "cc_sottocat" containing IDs separated by commas (I know it's stupid).
Example:
Table cms_catalogo
Column cc_famiglia: 1,2,3,4,5
Table cms_catalogo_famiglia
Column ccf_id: 3
Could the slowdown in the query arise from using FIND_IN_SET that way?
Would it be faster to have a separate table with indexed IDs instead of IDs separated by commas?
I cannot explain, however, why the first execution of the query is very slow and then it speeds up.
It is better to use constrained relationships between tables, i.e. connect them by primary key.
If you just want a quick optimization for this query:
Run EXPLAIN SELECT ... in MySQL to see the performance of your query;
Add indexes on the columns ccc_id, ccf_id, and ccs_id;
Run EXPLAIN SELECT ... again after the indexes are added.
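Longer term, the comma-separated columns are the real blocker: FIND_IN_SET cannot use an index, so each of those joins has to scan every row. A hedged sketch of a junction table for one of the three columns (table and index names are illustrative):

-- Junction table replacing the comma-separated cc_famiglia column
CREATE TABLE cms_catalogo_famiglia_link (
  cc_id  INT NOT NULL,  -- product id
  ccf_id INT NOT NULL,  -- family id
  PRIMARY KEY (cc_id, ccf_id),
  KEY idx_link_famiglia (ccf_id)
);

-- The FIND_IN_SET join then becomes an indexable equi-join:
--   JOIN cms_catalogo_famiglia_link l ON l.cc_id = cc_id
--   JOIN cms_catalogo_famiglia ON ccf_id = l.ccf_id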
The first run of the query takes much more time because it is uncached; the following runs are served from the cache. So you should judge by the first query's time.
If it is not a complicated report, execution time should be less than 50-100 ms; otherwise you can get overall performance problems, since I am quite sure this is not the only query in your application.

Optimize slow SQL query using indexes

I have a problem optimizing a really slow SQL query. I think it is an index problem, but I can't figure out which index I have to apply.
This is the query:
SELECT
cl.ID, cl.title, cl.text, cl.price, cl.URL, cl.ID AS ad_id, cl.cat_id,
pix.file_name, area.area_name, qn.quarter_name
FROM classifieds cl
/*FORCE INDEX (date_created) */
INNER JOIN classifieds_pix pix ON cl.ID = pix.classified_id AND pix.picture_no = 0
INNER JOIN zip_codes zip ON cl.zip_id = zip.zip_id AND zip.area_id = 132
INNER JOIN area_names area ON zip.area_id = area.id
LEFT JOIN quarter_names qn ON zip.quarter_id = qn.id
WHERE
cl.confirmed = 1
AND cl.country = 'DE'
AND cl.date_created <= NOW() - INTERVAL 1 DAY
ORDER BY cl.date_created DESC
LIMIT 7
MySQL takes about 2 seconds to get the result and starts by working on pix.picture_no, but if I force the "date_created" index, the query goes much faster and takes only 0.030 s. The problem is that the "INNER JOIN zip_codes..." is not always in the query, and when it is not, the forced index makes the query slow again.
I've been thinking of working around this with PHP conditionals, but I would like to understand what the problem with the indexes is.
These are several suggestions on how to optimize your query.
NOW() function - You're using the NOW() function in your WHERE clause. Instead, I recommend using a constant date/timestamp, which allows the value to be cached and optimized (for example, queries containing non-deterministic functions such as NOW() cannot be served from the query cache). If you need a dynamic value, calculate the current timestamp in the application and inject it into the query as a constant before executing it.
To test this recommendation before implementing this change, just replace NOW() with a constant timestamp and check for performance improvements.
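For example (the timestamp literal is illustrative; in practice the application would compute it):

SELECT cl.ID
FROM classifieds cl
WHERE cl.confirmed = 1
  AND cl.country = 'DE'
  AND cl.date_created <= '2017-11-29 12:00:00'  -- constant injected by the application
ORDER BY cl.date_created DESC
LIMIT 7;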
Indexes - in general, I would suggest adding an index that contains all columns of your WHERE clause, in this case: confirmed, country, date_created. Start with the column that cuts down the amount of data the most and move on from there. Make sure the index's leading columns match the columns your WHERE clause filters on; otherwise the index won't be used.
I used EverSQL SQL Query Optimizer to get these recommendations (disclaimer: I'm a co-founder of EverSQL and humbly provide these suggestions).
I would actually use a compound index on all elements of your WHERE, such as:
(country, confirmed, date_created)
Having the country first narrows the index to one country, then within that to the confirmed rows, and finally to the date range itself. Don't query on just the date index alone. Since you are ordering by date, the index should be able to optimize that too.
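As DDL (index name illustrative):

-- Covers the WHERE clause and lets ORDER BY date_created DESC read the index in order
CREATE INDEX idx_classifieds_search ON classifieds (country, confirmed, date_created);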
Add EXPLAIN in front of the query and run it again. This will show you which indexes are being used.
See: 13.8.2 EXPLAIN Statement
And for an explanation of EXPLAIN, see MySQL Explain Explained, or: Optimizing MySQL: Queries and Indexes

How can I improve MySQL join query speed?

I have two tables, let's say employees and orders; both tables have millions of records.
SELECT orders.*
FROM orders
INNER JOIN employees ON employees.id = orders.employeeid
WHERE orders.type = 'daily'
  AND orders.date > employees.registerDate
  AND orders.date < employees.registerDate + INTERVAL 60 DAY
The query above is rough and may contain syntax errors; it is just for illustration.
The query takes almost 60 seconds to run. Does anybody know how I can optimize it?
Indexing is one of the best options mentioned; you could also use a stored procedure to optimize data retrieval.
You may also want to use the ANALYZE TABLE statement to help the optimizer; you can read more about it here:
http://dev.mysql.com/doc/refman/5.0/en/analyze-table.html
Set your indexes right (which you probably did?) or create views. You could also consider a temp table, but most of the time keeping it in sync makes it not worth it.
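As a starting point, a hedged index sketch for the query above (column names taken from the question; adjust to the real schema):

-- Filter column first, then the join key, then the date for the range check
CREATE INDEX idx_orders_type_emp_date ON orders (`type`, employeeid, `date`);
-- Join key plus registerDate; if id is already the primary key this may be redundant
CREATE INDEX idx_employees_id_reg ON employees (id, registerDate);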