I have a simple SELECT query that I'm running on a MySQL database, with the data stored in the MyISAM storage engine. However, when I INNER JOIN a large table (~1.3 million rows, ~100 columns), the query fails to complete and I'm left with a hanging process in phpMyAdmin.
Is there any way I can rework this query to pull out the relevant information I need, that uses less resources?
SELECT guidein.exhibitor_id, webmaster.company_name
FROM guidein
INNER JOIN webmaster ON guidein.exhibitor_id = webmaster.source_id
WHERE guidein.db_table = '$db_table' -- i.e. 'PX14EX'
ORDER BY webmaster.company_name;
exhibitor_id can be found in the guidein table.
company_name can be found in the webmaster table.
source_id can also be found in the webmaster table.
db_table is found in both tables.
Your query would probably benefit from indexes. Try these:
create index idx_guidein_db_table_exhibitor_id on guidein(db_table, exhibitor_id);
create index idx_webmaster_source_id_company_name on webmaster(source_id, company_name);
Of course, if your query is pulling hundreds of thousands or millions of rows, that might be the cause of the performance problem. If so, indexes will not be a big help.
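If you do create those indexes, prefixing the query with EXPLAIN will show whether MySQL actually uses them; a quick check using the (qualified) query from the question:
EXPLAIN
SELECT guidein.exhibitor_id, webmaster.company_name
FROM guidein
INNER JOIN webmaster ON guidein.exhibitor_id = webmaster.source_id
WHERE guidein.db_table = 'PX14EX'
ORDER BY webmaster.company_name;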
I have 3 tables. All 3 tables have approximately 2 million rows. Every day, 10,000-100,000 new entries are added. The SQL statement below takes approximately 10 seconds to finish. Is there a way to make it faster?
SELECT customers.name
FROM customers
INNER JOIN hotels ON hotels.cus_id = customers.cus_id
INNER JOIN bookings ON bookings.book_id = customers.book_id
WHERE customers.gender = 0 AND
customers.cus_id = 3
LIMIT 25 OFFSET 1;
Of course this statement works fine, but it's slow. Is there a better way to write this code?
All database servers have some form of optimization engine that determines how best to grab the data you want. With a simple query such as the SELECT you showed, there isn't going to be any way to greatly improve performance within the SQL itself. As others have said, sub-queries won't help, as they will be optimized into the same plan as the joins.
Reduce the number of columns, add indexes, beef up the server if that's an option.
Consider caching. I'm not a mysql expert but found this article interesting and worth a skim. https://www.percona.com/blog/2011/04/04/mysql-caching-methods-and-tips/
Look at the section on summary tables and consider if that would be appropriate. Does pulling every hotel, customer, and booking need to be up-to-the-minute or would inserting this into a summary table once an hour be fine?
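For example, a minimal hourly summary sketch (the table name, event name, and column type here are all made up, and MySQL's event scheduler must be enabled, e.g. with SET GLOBAL event_scheduler = ON):
CREATE TABLE customer_summary (name VARCHAR(255));
DELIMITER //
CREATE EVENT refresh_customer_summary
ON SCHEDULE EVERY 1 HOUR
DO
BEGIN
  -- Rebuild the summary from scratch each hour.
  DELETE FROM customer_summary;
  INSERT INTO customer_summary
  SELECT c.name
  FROM customers c
  INNER JOIN hotels h ON h.cus_id = c.cus_id
  INNER JOIN bookings b ON b.book_id = c.book_id
  WHERE c.gender = 0;
END//
DELIMITER ;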
A subquery won't help, but a proper index can improve the performance, so be sure you have the proper indexes:
create index idx1 on customers(gender, cus_id, book_id, name)
create index idx2 on hotels(cus_id)
create index idx3 on bookings(book_id)
I find it a bit hard to believe that this is related to a real problem. As written, I would expect this to return the same customer name over and over: the WHERE clause pins cus_id to a single customer, so every joined row carries the same name.
I would recommend the following indexes:
customers(cus_id, gender, book_id, name)
hotels(cus_id)
bookings(book_id)
It is really odd that bookings are not tied to a hotel.
First, these indexes cover the query, so the data pages don't need to be accessed. The logic is to start with the where clause and use those columns first. Then add additional columns from the on and select clauses.
Only one column is used for hotels and bookings, so those indexes are trivial.
The use of OFFSET without ORDER BY is quite suspicious. The result set is in indeterminate order anyway, so there is no reason to skip the nominally "first" value.
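If the pagination is intentional, adding an explicit ORDER BY makes the result deterministic; for instance (the sort column here is only an illustration, any stable key will do):
SELECT customers.name
FROM customers
INNER JOIN hotels ON hotels.cus_id = customers.cus_id
INNER JOIN bookings ON bookings.book_id = customers.book_id
WHERE customers.gender = 0 AND
      customers.cus_id = 3
ORDER BY customers.name
LIMIT 25 OFFSET 1;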
I have a MySQL query for showing the count of data on the application listing page.
The query:
SELECT COUNT(*)
FROM cnitranr
LEFT JOIN cnitrand ON cnitrand.tx_no = cnitranr.tx_no;
(The EXPLAIN and profiler output were attached as screenshots, not reproduced here.)
Indexes on cnitranr: tx_no (primary key); approx. 1 crore (10 million) rows; ENGINE=MyISAM.
Index on cnitrand: tx_no (secondary index); approx. 2 crore (20 million) rows; ENGINE=MyISAM.
Can anyone suggest possibilities for optimizing this query, or should I run a cron job to maintain the count? Please help.
You would need to implement a materialized view.
Since MySQL does not support them directly, you would need to create a table like this:
CREATE TABLE totals (cnt INT)
and write triggers on both tables that increment and decrement cnt on INSERT, UPDATE, and DELETE to each of the tables.
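A minimal sketch of that setup, using the table names from the question (only the simplest case, an INSERT into cnitranr, is shown; the cnitrand triggers would additionally have to check whether the parent row was already matched):
CREATE TABLE totals (cnt INT NOT NULL);
-- Seed the counter once with the real value.
INSERT INTO totals (cnt)
SELECT COUNT(*)
FROM cnitranr
LEFT JOIN cnitrand ON cnitrand.tx_no = cnitranr.tx_no;
-- A freshly inserted parent row has no child rows yet, so it adds
-- exactly one NULL-extended row to the joined count.
CREATE TRIGGER cnitranr_ai AFTER INSERT ON cnitranr
FOR EACH ROW UPDATE totals SET cnt = cnt + 1;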
Note that if you have a record with many linked records in either table, the DML affecting such a record would be slow.
On large data volumes, you very rarely need exact counts, especially for pagination. As I said in a comment above, Google, Facebook etc. only show approximate numbers on paginated results.
It's very unlikely that a person would want to browse through 20M+ records on a page that can only show 100 or so.
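If an approximate figure is acceptable, the table statistics are essentially free to read (TABLE_ROWS is exact for MyISAM tables, only an estimate for InnoDB):
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME IN ('cnitranr', 'cnitrand');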
I have a MySQL query:
SELECT DISTINCT
c.id,
c.company_name,
cd.firstname,
cd.surname,
cis.description AS industry_sector
FROM (clients c)
JOIN clients_details cd ON c.id = cd.client_id
LEFT JOIN clients_industry_sectors cis ON cd.industry_sector_id = cis.id
WHERE c.record_type='virgin'
ORDER BY date_action, company_name asc, id desc
LIMIT 30
The clients table has about 60-70k rows and has an index for 'id', 'record_type', 'date_action' and 'company_name' - unfortunately the query still takes 5+ secs to complete. Removing the 'ORDER BY' reduces this to about 30ms since a filesort is not required. Is there any way I can alter this query to improve upon the 5+ sec response time?
See: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html
Especially:
In some cases, MySQL cannot use indexes to resolve the ORDER BY (..). These cases include the following:
(..)
You are joining many tables, and the columns in the ORDER BY are not all from the first nonconstant table that is used to retrieve rows. (This is the first table in the EXPLAIN output that does not have a const join type.)
You have an index for id, record_type, date_action. But if you want to order by date_action, you really need an index that has date_action as the first field in the index, preferably matching the exact fields in the order by. Otherwise yes, it will be a slow query.
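A sketch of such an index, assuming date_action is a column of clients (since record_type is pinned by the WHERE clause, it can lead the index and date_action can still supply the order; note that the mixed asc/desc directions in the original ORDER BY will prevent a pure index-order scan on MySQL versions before 8.0):
CREATE INDEX idx_clients_record_type_date_action
ON clients (record_type, date_action);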
Without seeing all your tables and indexes, it's hard to tell. When asking a question about speeding up a query, the query is just part of the equation.
Does clients have an index on id?
Does clients have an index on record_type?
Does clients_details have an index on client_id?
Does clients_industry_sectors have an index on id?
These are the minimum you need for this query to have any chance of working quickly.
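You can verify them with SHOW INDEX and add whatever is missing; for example, if clients_details.client_id turns out to be unindexed (the index name is made up):
SHOW INDEX FROM clients;
SHOW INDEX FROM clients_details;
SHOW INDEX FROM clients_industry_sectors;
CREATE INDEX idx_cd_client_id ON clients_details (client_id);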
Thanks so much for the input and suggestions. In the end I decided to create a new DB table whose sole purpose is to return results for this page, so no joins are required; I just update it when records are added to or deleted from the master clients table. Not ideal from a data-storage point of view, but it solves the problem and means I'm getting results fantastically fast. :)
I have a table called users with roughly 250,000 records in it. I have another table called staging with around 75,000 records in it. Staging only has one column, msisdn. I want to check to see how many rows in staging are not present in users.
I have the following query, which I have tested on a small data subset, and it seems to work fine:
SELECT
s.*
FROM staging s
LEFT OUTER JOIN users u ON u.msisdn=s.msisdn
WHERE u.msisdn IS NULL
The problem however, is when I try to run this query on the full list of 250k users. It ran for an hour before I stopped it. Is there any way I can optimise this query?
I have started running the query on subsets of the data in staging, but this is horribly manual:
SELECT
s.*
FROM staging s
LEFT OUTER JOIN users u ON u.msisdn=s.msisdn
WHERE u.msisdn IS NULL
LIMIT 0,10000
msisdn is the primary key of the staging table, but it's not the primary key of the table users. I don't know if that is significant however.
First, you can see what indices MySQL is using with the EXPLAIN command. Just write EXPLAIN in front of your query, and the results will show what index (if any) it's using. Presumably if it's that slow on so (relatively) small a data set as 250,000 records, it's not exploiting a very effective index, and you'll be able to see where.
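For example, using the query from the question:
EXPLAIN
SELECT s.*
FROM staging s
LEFT OUTER JOIN users u ON u.msisdn = s.msisdn
WHERE u.msisdn IS NULL;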
It may also help to rewrite the query as a NOT EXISTS like so:
SELECT s.* FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM users WHERE users.msisdn = s.msisdn)
Put indexes on the msisdn columns of each table. Since it's not a PK on users, you'll need to put a non-clustered index on it. That should speed up your query tremendously.
Things you can do to speed up this query:
make sure msisdn is indexed in both tables
optimize your tables
replace * with msisdn
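Taken together, that might look like the following (the index name is made up; staging.msisdn is already covered by its primary key, so only users needs a new index):
CREATE INDEX idx_users_msisdn ON users (msisdn);
OPTIMIZE TABLE staging, users;
SELECT s.msisdn
FROM staging s
LEFT OUTER JOIN users u ON u.msisdn = s.msisdn
WHERE u.msisdn IS NULL;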
I'm not sure how much quicker this will be, but you can try something like:
select msisdn
from staging
where msisdn not in (select msisdn from users)
Also, make sure that an index exists for the msisdn column in both tables. That should speed things up tremendously.
Here's the query (the largest table has about 40,000 rows)
SELECT
Course.CourseID,
Course.Description,
UserCourse.UserID,
UserCourse.TimeAllowed,
UserCourse.CreatedOn,
UserCourse.PassedOn,
UserCourse.IssuedOn,
C.LessonCnt
FROM
UserCourse
INNER JOIN
Course
USING(CourseID)
INNER JOIN
(
SELECT CourseID, COUNT(*) AS LessonCnt FROM CourseSection GROUP BY CourseID
) C
USING(CourseID)
WHERE
UserCourse.UserID = 8810
If I run this, it executes very quickly (.05 seconds roughly). It returns 13 rows.
When I add an ORDER BY clause at the end of the query (ordering by any column) the query takes about 10 seconds.
I'm using this database in production now, and everything is working fine. All my other queries are speedy.
Any ideas of what it could be? I ran the query in MySQL's Query Browser, and from the command line. Both places it was dead slow with the ORDER BY.
EDIT: Tolgahan ALBAYRAK's solution works, but can anyone explain why it works?
maybe this helps:
SELECT * FROM (
SELECT
Course.CourseID,
Course.Description,
UserCourse.UserID,
UserCourse.TimeAllowed,
UserCourse.CreatedOn,
UserCourse.PassedOn,
UserCourse.IssuedOn,
C.LessonCnt
FROM
UserCourse
INNER JOIN
Course
USING(CourseID)
INNER JOIN
(
SELECT CourseID, COUNT(*) AS LessonCnt FROM CourseSection GROUP BY CourseID
) C
USING(CourseID)
WHERE
UserCourse.UserID = 8810
) AS t ORDER BY CourseID
Is the column you're ordering by indexed?
Indexing drastically speeds up ordering and filtering.
You are selecting from "UserCourse" which I assume is a joining table between courses and users (Many to Many).
You should index the column that you need to order by, in the "UserCourse" table.
Suppose you want to order by CourseID; then you need to index it on the UserCourse table.
Ordering by any other column that is not present in the joining table (i.e. UserCourse) may require further denormalization and indexing of the joining table to be fast;
in other words, you need a copy of that column in the joining table, with an index on it.
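For example (the index name is made up; UserID leads because the WHERE clause filters on it, letting the CourseID order come from the index within that user):
CREATE INDEX idx_usercourse_userid_courseid
ON UserCourse (UserID, CourseID);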
P.S.
The answer given by Tolgahan Albayrak, although correct for this question, would not produce the desired result in cases where one is doing a "LIMIT x" query (the inner query would cut the rows off before the outer ORDER BY sorts them).
Have you updated the statistics on your database? I ran into something similar on mine where I had two identical queries whose only difference was a capital letter; one returned in half a second and the other took nearly 5 minutes. Updating the statistics resolved the issue.
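On MySQL that means ANALYZE TABLE; for the tables in this question:
ANALYZE TABLE UserCourse, Course, CourseSection;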
I realise this answer is too late, but I have just had a similar problem: adding ORDER BY increased the query time from seconds to 5 minutes. Having tried most of the other suggestions for speeding it up, I noticed that the /tmp files were getting to be 12G for this query. I changed the query so that a varchar(20000) field being returned was wrapped in TRIM(), and performance improved dramatically (back to seconds). So I guess it's worth checking whether you are returning large varchars as part of your query and, if so, processing them (maybe SUBSTRING(x, 1, LENGTH(x))?) if you don't want to trim them.
The query was returning 500k rows, and the /tmp file indicated that each row was using about 20k of data.
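As a purely hypothetical illustration of that change (all names here are stand-ins, not columns from any query above):
-- Shrink the wide column before the sort writes rows to the
-- on-disk temporary table.
SELECT id, TRIM(big_text) AS big_text
FROM some_table
ORDER BY created_at;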
A similar question was asked before here.
It might help you as well. Basically it describes using composite indexes and how order by works.
Today I ran into the same kind of problem. As soon as I sorted the result set by a field from a joined table, the whole query became horribly slow and took more than a hundred seconds.
The server was running MySQL 5.0.51a, and by chance I noticed that the same query ran as fast as it always should have on a server with MySQL 5.1. Comparing the EXPLAIN output for that query made it obvious that the usage and handling of indexes had changed a lot (at least from 5.0 to 5.1).
So if you encounter such a problem, the resolution may simply be to upgrade your MySQL version.