Running the following works great:
SELECT email FROM User WHERE empNum IN (126,513,74)
However, this one takes a very long time to return (no errors):
SELECT email FROM table1 WHERE empNum IN (
SELECT empNum FROM table2 WHERE accomp = 'onhold' GROUP BY empNum
)
What is causing this?
How about this one?
SELECT DISTINCT table1.email
FROM table1
INNER JOIN table2 USING(empNum)
WHERE table2.accomp = 'onhold'
You should probably create an index on table2.accomp if you run that query often enough:
CREATE INDEX accomp ON table2 (accomp);
or maybe
CREATE INDEX accomp ON table2 (empNum,accomp);
To perform some crude (but decisive) benchmarks (a sketch of the console session follows this list):
log in to the mysql console
clear the query cache(*):
RESET QUERY CACHE;
run the slow query and write down the timing
create an index
clear the query cache
run the slow query and write down the timing
drop the index
create the other index
clear the cache
run the slow query one more time
compare the timings and keep the best index (by dropping the current one and creating the correct one if necessary)
(*) You will need the relevant privileges to run that command
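A sketch of such a console session, reusing the table, column, and index names from the question above (the timings are whatever the client reports after each statement):
-- baseline: no extra index yet
RESET QUERY CACHE;
SELECT email FROM table1
WHERE empNum IN (SELECT empNum FROM table2 WHERE accomp = 'onhold' GROUP BY empNum);
-- first candidate index
CREATE INDEX accomp ON table2 (accomp);
RESET QUERY CACHE;
SELECT email FROM table1
WHERE empNum IN (SELECT empNum FROM table2 WHERE accomp = 'onhold' GROUP BY empNum);
-- second candidate index
DROP INDEX accomp ON table2;
CREATE INDEX accomp ON table2 (empNum, accomp);
RESET QUERY CACHE;
SELECT email FROM table1
WHERE empNum IN (SELECT empNum FROM table2 WHERE accomp = 'onhold' GROUP BY empNum);
-- keep whichever index gave the best timing and drop the other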
I think the join statement you need is:
SELECT email FROM table1
INNER JOIN table2
ON table1.empNum=table2.empNum
AND table2.accomp = 'onhold'
Related
Why isn't MySQL able to consistently optimize queries in the format of WHERE <indexed_field> IN (<subquery>)?
I have a query as follows:
SELECT
*
FROM
t1
WHERE
t1.indexed_field IN (select val from ...)
AND (...)
The subquery select val from ... runs very quickly. The problem is MySQL is doing a full table scan to get the required rows from t1 -- even though t1.indexed_field is indexed.
I've gotten around this by changing the query to an inner join:
SELECT
*
FROM
t1
INNER JOIN
(select val from ...) vals ON (vals.val = t1.indexed_field)
WHERE
(...)
EXPLAIN shows that this works perfectly -- MySQL is now able to use the indexed_field index when joining to the subquery table.
My question is: Why isn't MySQL able to optimize the first query? Intuitively, doing where <indexed_field> IN (<subquery>) seems like quite an easy optimization -- do the subquery, use the index to grab the rows.
No or Yes.
Old versions of MySQL did a very poor job of optimizing IN ( SELECT ... ). It seemed to re-evaluate the subquery repeatedly.
New versions are turning it into EXISTS ( SELECT 1 ... ) or perhaps a LEFT JOIN.
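For illustration, a hand-written EXISTS form of the query above might look like this (a sketch only, not the optimizer's exact rewrite; vals stands in for the subquery's source table, and the extra AND (...) conditions are omitted):
-- vals is a placeholder for whatever the subquery actually reads
SELECT t1.*
FROM t1
WHERE EXISTS (SELECT 1 FROM vals WHERE vals.val = t1.indexed_field);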
Please provide (a sketch of these commands follows the list):
Version
SHOW CREATE TABLE
EXPLAIN SELECT ...
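Something along these lines, where vals again stands in for the real subquery source:
SELECT VERSION();
SHOW CREATE TABLE t1;
EXPLAIN SELECT * FROM t1 WHERE t1.indexed_field IN (SELECT val FROM vals);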
I have an issue with creating a table using SELECT (it runs very slowly). The query is meant to take only the details of the animal with the latest entry date; that query will then be used in an inner join with another query.
SELECT *
FROM amusementPart a
INNER JOIN (
SELECT DISTINCT name, type, cageID, dateOfEntry
FROM bigRegistrations
GROUP BY cageID
) r ON a.type = r.cageID
But because of the slow performance, someone suggested steps to improve it: 1) use a temporary table, 2) store the result and join it to the other statement.
USE myzoo;
CREATE TABLE animalRegistrations AS
SELECT DISTINCT name, type, cageID, MAX(dateOfEntry) as entryDate
FROM bigRegistrations
GROUP BY cageID
Unfortunately, it is still slow. If I run only the SELECT statement, the result is shown in 1-2 seconds, but if I add the CREATE TABLE, the query takes ages (approx. 25 minutes).
Any good approach to improve the query time?
Edit: the size of the bigRegistrations table is around 3.5 million rows.
Please try the query below to get only the details of the animal with the latest entry date (which you can then use in the inner join with the other query). The query you are using does not fetch records as per your requirement, and this one will be faster:
SELECT a.*, b.name, b.type, b.cageID, b.dateOfEntry
FROM amusementPart a
INNER JOIN bigRegistrations b ON a.type = b.cageID
INNER JOIN (SELECT c.cageID, max(c.dateOfEntry) dateofEntry
FROM bigRegistrations c
GROUP BY c.cageID) t ON t.cageID = b.cageID AND t.dateofEntry = b.dateofEntry
I suggest indexing on cageID and dateOfEntry.
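A minimal sketch of that index (the index name is made up):
CREATE INDEX idx_cage_entry ON bigRegistrations (cageID, dateOfEntry);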
This is a multipart question.
Use a temporary table
Don't use DISTINCT - group all columns to make them distinct (don't forget to check for an index)
Check the SQL execution plans
Here you are not creating a temporary table. Try the following...
CREATE TEMPORARY TABLE IF NOT EXISTS animalRegistrations AS
SELECT name, type, cageID, MAX(dateOfEntry) as entryDate
FROM bigRegistrations
GROUP BY cageID
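A possible follow-up join that reuses the temporary table (a sketch; it keeps the a.type = r.cageID join condition from the original query):
SELECT a.*, r.name, r.type, r.cageID, r.entryDate
FROM amusementPart a
INNER JOIN animalRegistrations r ON a.type = r.cageID;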
Have you tried doing an explain to see how the plan is different from one execution to the next?
Also, I have found that there can be locking issues in some DB when doing insert(select) and table creation using select. I ran this in MySQL, and it solved some deadlock issues I was having.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
The reason the query runs so slow is probably that it is creating the temp table based on all 3.5 million rows, when really you only need a subset of those, i.e. the bigRegistrations that match your join to amusementPart. The first single SELECT statement is faster because SQL is smart enough to know it only needs to calculate the bigRegistrations where a.type = r.cageID.
I'd suggest that you don't need a temp table; your first query is quite simple. Rather, you may just need an index. You can determine this manually by studying the estimated execution plan, or by running your query in the database tuning advisor. My guess is you need to create an index similar to the one below. Notice I index by cageID first since that is what you join to amusementPart, so that would help SQL narrow the results down the quickest. But I'm guessing a bit - view the query plan or tuning advisor to be sure.
CREATE NONCLUSTERED INDEX IX_bigRegistrations ON bigRegistrations
(cageId, name, type, dateOfEntry)
Also, if you want the animal with the latest entry date, I think you want this query instead of the one you're using. I'm assuming the PK is all 4 columns.
SELECT name, type, cageID, dateOfEntry
FROM bigRegistrations BR
WHERE BR.dateOfEntry =
(SELECT MAX(BR1.dateOfEntry)
FROM bigRegistrations BR1
WHERE BR1.name = BR.name
AND BR1.type = BR.type
AND BR1.cageID = BR.cageID)
I have two tables that each contain about 500 customer data records. Each record in each of the tables has an email field. Sometimes the same email addresses exist on both tables, sometimes not. I want to retrieve every email address on table1 that doesn't exist on table2. The email field in each table is indexed. I'm doing the select with a sub query that is really slow, 10 to 20 seconds.
select email
from t1
where email not in (select email from t2)
There are actually about 30K rows in each table, but I can knock it down to 500 each very quickly with an additional 'where' to filter by category. It's only when I add that subquery that it slows down dramatically. So I am sure this can be faster, and I know a join should be much faster than the subquery, but I can't figure out how to do that. I found a left outer join explanation here on SO that looked like it should help, but got nowhere with it. Any help is appreciated.
MySQL does not optimize a subquery in the WHERE clause (edit: it re-runs the subquery for every row tested).
To convert it to a JOIN, try something like:
SELECT t1.email FROM t1
LEFT JOIN t2 ON (t1.email = t2.email)
WHERE t2.email IS NULL
This should run very fast: it is a covering-index query. The query optimizer should walk the email index of t1, check the email index of t2, and output those emails that are in t1 but not in t2.
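To check, assuming the email columns are indexed as the question says, EXPLAIN on the rewritten query should show those indexes in use (typically with "Using index" and "Not exists" in the Extra column):
EXPLAIN SELECT t1.email FROM t1
LEFT JOIN t2 ON (t1.email = t2.email)
WHERE t2.email IS NULL;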
Edit: I should add that MySQL does optimize a subquery in the JOIN clause: it runs the subquery and puts the results into a "derived table" (a temporary table without any indexes), and joins the derived table like any other table. The syntax is a bit funny: each derived table must have an alias, i.e. ... JOIN (SELECT ...) AS derived ON ....
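For instance, the same anti-join written with a derived table would look like this (a sketch using the t1/t2 names from the question):
SELECT t1.email
FROM t1
LEFT JOIN (SELECT email FROM t2) AS derived ON derived.email = t1.email
WHERE derived.email IS NULL;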
Subqueries usually do more processing than a plain query. In your case it first fetches all the emails from t2 and compares them with the email list of t1.
You can try it as below, without using a subquery.
SELECT t1.email FROM t1 LEFT JOIN t2 ON t1.email = t2.email WHERE t2.email IS NULL
The best way to improve the performance of SELECT operations is to create indexes on one or more of the columns that are tested in the query. The index entries act like pointers to the table rows, allowing the query to quickly determine which rows match a condition in the WHERE clause, and retrieve the other column values for those rows. All MySQL data types can be indexed.
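For example, hypothetical index definitions for this case would look like the following (the asker says the email columns are already indexed, so this is purely illustrative):
CREATE INDEX idx_t1_email ON t1 (email);
CREATE INDEX idx_t2_email ON t2 (email);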
I think this should work fine
SELECT T1.email FROM T1
LEFT JOIN T2
ON T1.email=T2.email
WHERE T2.email IS NULL
Hi, I have this query but it's giving me the error "Operand should contain 1 column(s)" and I'm not sure why:
Select *,
(Select *
FROM InstrumentModel
WHERE InstrumentModel.InstrumentModelID=Instrument.InstrumentModelID)
FROM Instrument
According to your query, you want to get data from both the Instrument and InstrumentModel tables. A subquery in the select list may return only a single column, so the SELECT * inside it triggers that error. To fetch matching results from both tables you can use a join, or you can select particular fields as tableName.fieldName and put your matching condition in the WHERE clause, like:
select Instrument.x,InstrumentModel.y
from instrument,instrumentModel
where instrument.x=instrumentModel.y
You can use a join to select from 2 connected tables
select *
from Instrument i
join InstrumentModel m on m.InstrumentModelID = i.InstrumentModelID
When you use subqueries in the column list, they need to return exactly one value. You can read more in the documentation.
As a user commented in the documentation, using subqueries like this can ruin your performance:
when the same subquery is used several times, MySQL does not use this fact to optimize the query, so be careful not to run into performance problems.
example:
SELECT
col0,
(SELECT col1 FROM table1 WHERE table1.id = table0.id),
(SELECT col2 FROM table1 WHERE table1.id = table0.id)
FROM
table0
WHERE ...
The join of table0 with table1 is executed once for EACH subquery, leading to very bad performance for this kind of query.
Therefore you should join the tables instead, as described in the other answer.
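For example, a joined version of the sketch above reads table1 only once (the WHERE conditions elided above would be appended unchanged):
SELECT table0.col0, table1.col1, table1.col2
FROM table0
LEFT JOIN table1 ON table1.id = table0.id;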
I am running a complicated and costly query to find the MIN() values of a function grouped by another attribute. But I don't just need the value, I need the entry that produces it + the value.
My current pseudoquery goes something like this:
SELECT MIN(COSTLY_FUNCTION(a.att1,a.att2,$v1,$v2)) FROM (prefiltering) as a GROUP BY a.group_att;
but I want a.* and MIN(COSTLY_FUNCTION(a.att1,a.att2,$v1,$v2)) as my result.
The only way I can think of is using this ugly beast:
SELECT a1.*, COSTLY_FUNCTION(a1.att1,a1.att2,$v1,$v2)
FROM (prefiltering) as a1
WHERE COSTLY_FUNCTION(a1.att1,a1.att2,$v1,$v2) =
(SELECT MIN(COSTLY_FUNCTION(a.att1,a.att2,$v1,$v2)) FROM (prefiltering) as a GROUP BY a.group_att)
But now I am executing the prefiltering_query 2 times and have to run the costly function twice. This is ridiculous and I hope that I am doing something seriously wrong here.
Possible solution?:
Just now I realized that I could create a temporary table containing:
(SELECT a1.*, COSTLY_FUNCTION(a1.att1,a1.att2,$v1,$v2) as complex FROM (prefiltering) as a1)
and then run the MIN() as subquery and compare it at greatly reduced cost. Is that the way to go?
A problem with your temporary table solution is that I can't see any way to avoid using it twice in the same query.
However, if you're willing to use an actual permanent table (perhaps with ENGINE = MEMORY), it should work.
You can also move the subquery into the FROM clause, where it might be more efficient:
CREATE TABLE temptable ENGINE = MEMORY
SELECT a1.*,
COSTLY_FUNCTION(a1.att1,a1.att2,$v1,$v2) AS complex
FROM prefiltering AS a1;
CREATE INDEX group_att_complex USING BTREE
ON temptable (group_att, complex);
SELECT a2.*
FROM temptable AS a2
NATURAL JOIN (
SELECT group_att, MIN(complex) AS complex
FROM temptable GROUP BY group_att
) AS a3;
DROP TABLE temptable;
(You can try it without the index too, but I suspect it'll be faster with it.)
Edit: Of course, if one temporary table won't do, you could always use two:
CREATE TEMPORARY TABLE temp1
SELECT *, COSTLY_FUNCTION(att1,att2,$v1,$v2) AS complex
FROM prefiltering;
CREATE INDEX group_att_complex ON temp1 (group_att, complex);
CREATE TEMPORARY TABLE temp2
SELECT group_att, MIN(complex) AS complex
FROM temp1 GROUP BY group_att;
SELECT temp1.* FROM temp1 NATURAL JOIN temp2;
(Again, you may want to try it with or without the index; when I ran EXPLAIN on it, MySQL didn't seem to want to use the index for the final query at all, although that might be just because my test data set was so small. Anyway, here's a link to SQLize if you want to play with it; I used CONCAT() to stand in for your expensive function.)
You can use the HAVING clause to get columns in addition to that MIN value. For example:
SELECT a.*, COSTLY_FUNCTION(a.att1,a.att2,$v1,$v2)
FROM (prefiltering) as a
GROUP BY a.group_att
HAVING MIN(COSTLY_FUNCTION(a.att1,a.att2,$v1,$v2)) = COSTLY_FUNCTION(a.att1,a.att2,$v1,$v2);