MySQL Query Optimization needed - mysql

I'm trying to run the following query in MySQL:
SELECT *
FROM table_name1
WHERE codregister IN
(SELECT register
FROM tablename2
WHERE city LIKE '%paris%')
ORDER BY date DESC
In table_name1, "codregister" is the primary key, but in tablename2 the "register" column is only an index (the primary key on tablename2 is an auto-increment column).
table_name1 doesn't have matches with tablename2, yet the execution time of the query is still too slow. Can someone recommend how to improve the query?

Check the considerable difference compared to csf's answer:
SELECT *
FROM
table_a a
INNER JOIN
(SELECT register
FROM table_b
WHERE city like '%paris%') b
ON(a.codregister=b.register)
ORDER BY a.date
You're projecting only the field of B that you actually need, and you're filtering B's records before the join.

Use a JOIN instead, and make sure the join columns are indexed (see the index sketch below the query).
SELECT * FROM table_name1 a
JOIN tablename2 b on a.codregister = b.register
where b.city like '%paris%'
ORDER BY a.date DESC
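If the join columns are not indexed yet, you can add indexes first; a minimal sketch, assuming the table and column names from the question and skipping anything that is already a primary key (the index names are just placeholders):
ALTER TABLE table_name1 ADD INDEX idx_codregister (codregister);
ALTER TABLE tablename2 ADD INDEX idx_register (register);
-- note: an index on city will not help a LIKE '%paris%' filter,
-- because the leading wildcard prevents index use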
Also, for any query you write, use EXPLAIN to find out more about how it executes.
Ref: http://dev.mysql.com/doc/refman/5.0/en/explain.html
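For example, a sketch against the JOIN query above (the exact plan output depends on your MySQL version and data):
EXPLAIN
SELECT * FROM table_name1 a
JOIN tablename2 b ON a.codregister = b.register
WHERE b.city LIKE '%paris%'
ORDER BY a.date DESC;
-- look at the type, key and rows columns: type = ALL with a large rows
-- value means a full table scan, i.e. a missing or unused index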

I agree with 0R10N's answer; filtering before the join can make the join faster. Thanks!

Related

MySql SELECT * FROM Table A and JOIN Column FROM Table B WHERE LOG_ID Match

I need an SQL query; I mentioned the details in a sample image, I hope you understand.
Thank you
Try this query and let us know if it works for you:
SELECT a.*, b.degree
FROM table_a AS a
INNER JOIN table_b AS b ON b.log_id = a.log_id
ORDER BY a.log_id ASC
In this query we perform an inner join between the two tables, match the rows on log_id, and finally order the result in ascending order by log_id.
This should work:
SELECT a.*, b.degree
FROM a
LEFT JOIN b ON a.s_id = b.n_id
WHERE a.log_id = 102
ORDER BY a.log_id ASC;
Remember to change the table names a and b to the correct table names. More on this topic here.
This code works:
SELECT a.*, b.degree
FROM table_a AS a
INNER JOIN table_b AS b ON b.log_id = a.log_id
WHERE a.log_id = '102'
ORDER BY a.log_id ASC;
Thank you all

Rewriting a slow SQL (sub) query in JOIN

So I've got a massively slow SQL query and I've narrowed it down to a slow sub-query, so I want to rewrite it as a JOIN. But I'm stuck... (due to the MAX and GROUP BY)
SELECT *
FROM local.advice AS aa
LEFT JOIN webdb.account AS oa ON oa.shortname = aa.shortname
WHERE aa.aa_id = ANY (SELECT MAX(dup.aa_id)
FROM local.advice AS dup
GROUP BY dup.shortname)
AND oa.cat LIKE '111'
ORDER BY aa.ram, aa.cpu DESC
LIMIT 0, 30
Here is a different version of your query where the subquery is converted to a join clause:
SELECT *
FROM local.advice aa
JOIN webdb.account oa ON oa.shortname = aa.shortname
JOIN (
    SELECT MAX(aa_id) AS aa_id, shortname
    FROM local.advice
    GROUP BY shortname
) x ON x.aa_id = aa.aa_id
WHERE oa.cat = '111'
ORDER BY aa.ram, aa.cpu DESC
LIMIT 0, 30
Also, you may need to add indexes if they are not present already:
alter table local.advice add index shortname_idx(shortname);
alter table webdb.account add index cat_shortname_idx(cat,shortname);
alter table local.advice add index ram_idx(ram);
alter table local.advice add index cpu_idx(cpu);
I am assuming aa_id is the primary key, so I did not add an index for it.
Make sure to take a backup of the tables before applying the indexes.
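One quick way to take that backup from within MySQL (a sketch; the *_backup names are just placeholders, and mysqldump works just as well):
CREATE TABLE local.advice_backup LIKE local.advice;
INSERT INTO local.advice_backup SELECT * FROM local.advice;
CREATE TABLE webdb.account_backup LIKE webdb.account;
INSERT INTO webdb.account_backup SELECT * FROM webdb.account;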

Optimizing mysql query to find all duplicate entries

I am running a query like this:
SELECT DISTINCT `tableA`.`field1`,
`tableA`.`field2` AS field2Alias,
`tableA`.`field3`,
`tableB`.`field4` AS field4Alias,
`tableA`.`field6` AS field6Alias
FROM (`tableC`)
RIGHT JOIN `tableA` ON `tableC`.`idfield` = `tableA`.`idfield`
JOIN `tableB` ON `tableB`.`idfield` = `tableA`.`idfield`
AND tableA.field2 IN
(SELECT field2
FROM tableA
GROUP BY tableA.field2 HAVING count(*) > 1)
ORDER BY tableA.field2
This is to find all the duplicate entries, but it's taking a lot of time to execute. Any suggestions for optimization?
It looks like you are trying to find all duplicates on field2 in tableA. The first step would be to move the IN subquery to the FROM clause:
SELECT DISTINCT a.`field1`, a.`field2` AS field2Alias,
       a.`field3`, b.`field4` AS field4Alias, a.`field6` AS field6Alias
FROM tableA a
LEFT JOIN tableC c ON c.`idfield` = a.`idfield`
JOIN tableB b ON b.`idfield` = a.`idfield`
JOIN (SELECT field2
      FROM tableA
      GROUP BY field2
      HAVING count(*) > 1
     ) asum ON asum.field2 = a.field2
ORDER BY a.field2
There may be additional optimizations, but it is very hard to tell. Your question ("find duplicates") and your query ("join a bunch of tables together and filter them") don't quite match. It would also be helpful to know which tables have which indexes and unique/primary keys.
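To share that, you could run something like the following for each table (a sketch; SHOW CREATE TABLE also shows the primary key and any unique constraints):
SHOW INDEX FROM tableA;
SHOW CREATE TABLE tableA;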

nested query on the same table

Do you think a query like this will create problems in the execution of my software?
I need to delete everything in the table except the last 2 groups of entries, grouped by the same insert time.
delete from tableA WHERE time not in
(
SELECT time FROM
(select distinct time from tableA order by time desc limit 2
) AS tmptable
);
Do you have a better solution? I'm using MySQL 5.5.
I don't see anything wrong with your query, but I prefer using an OUTER JOIN/NULL check (plus it alleviates the need for one of the nested subqueries):
delete a
from tableA a
left join
(
select distinct time
from tableA
order by time desc
limit 2
) b on a.time = b.time
where b.time is null
SQL Fiddle Demo
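If you want to try it locally without the fiddle, here is a minimal sketch with made-up data (the table definition and values are just for illustration):
CREATE TABLE tableA (id INT AUTO_INCREMENT PRIMARY KEY, time DATETIME);
INSERT INTO tableA (time) VALUES
('2013-01-01 10:00:00'), ('2013-01-01 10:00:00'),
('2013-01-02 10:00:00'), ('2013-01-03 10:00:00');
-- after running the delete above, only the rows from the two most recent
-- insert times ('2013-01-02' and '2013-01-03') remain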

Eliminating duplicates from SQL query

What would be the best way to return one item for each id instead of all of the other items within the table? Currently the query below returns all manufacturers:
SELECT m.name
FROM `default_ps_products` p
INNER JOIN `default_ps_products_manufacturers` m ON p.manufacturer_id = m.id
I have solved my question by using DISTINCT in my query:
SELECT DISTINCT m.name, m.id
FROM `default_ps_products` p
INNER JOIN `default_ps_products_manufacturers` m ON p.manufacturer_id = m.id
ORDER BY m.name
There are 4 main ways I can think of to delete duplicate rows:
method 1
Delete all rows whose rowid is bigger than the smallest (or smaller than the greatest) rowid value for the same key. Example:
delete from tableName a where rowid> (select min(rowid) from tableName b where a.key=b.key and a.key2=b.key2)
method 2
Usually faster, but you must recreate all indexes, constraints and triggers afterward (see the sketch after the example).
Pull all rows as DISTINCT into a new table, then drop the first table and rename the new table to the old table name.
Example:
create table t2 as select distinct * from t1; drop table t1; rename t2 to t1;
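After the swap, indexes (and any constraints or triggers) have to be added back by hand, for example (a sketch with placeholder names):
create index key1_key2_idx on t1 (key1, key2);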
method 3
Delete using WHERE EXISTS based on rowid. Example:
delete from tableName a where exists (select 'x' from tableName b where a.key1=b.key1 and a.key2=b.key2 and b.rowid > a.rowid)
Note: if a column can contain nulls, use nvl on the column name.
method 4
Collect the first row for each key value and delete the rows not in this set. Example:
delete from tableName a where rowid not in(select min(rowid) from tableName b group by key1, key2)
Note that you don't have to use nvl for method 4.
Using DISTINCT is often a bad practice. It may be a sign that there is something wrong with your SELECT statement, or that your data structure is not normalized.
In your case I would use this (assuming that default_ps_products_manufacturers has unique records).
SELECT m.id, m.name
FROM default_ps_products_manufacturers m
WHERE EXISTS (SELECT 1 FROM default_ps_products p WHERE p.manufacturer_id = m.id)
Or an equivalent query with IN:
SELECT m.id, m.name
FROM default_ps_products_manufacturers m
WHERE m.id IN (SELECT p.manufacturer_id FROM default_ps_products p)
The only thing is that, among all the possible queries, it is better to pick the one with the better execution plan, which may depend on your vendor and/or the physical structure, statistics, etc. of your database.
I think in most cases EXISTS will work better.
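One way to compare them is to look at the plans side by side, e.g. (a sketch; the useful columns are type, key and rows):
EXPLAIN SELECT m.id, m.name
FROM default_ps_products_manufacturers m
WHERE EXISTS (SELECT 1 FROM default_ps_products p WHERE p.manufacturer_id = m.id);
EXPLAIN SELECT m.id, m.name
FROM default_ps_products_manufacturers m
WHERE m.id IN (SELECT p.manufacturer_id FROM default_ps_products p);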