I'm not an expert in SQL. I have this SQL statement:
SELECT * FROM articles WHERE article_id IN
(SELECT distinct(content_id) FROM contents_by_cats WHERE cat_id='$cat')
AND permission='true' AND date <= '$now_date_time' ORDER BY date DESC;
Table contents_by_cats has 11000 rows.
Table articles has 2700 rows.
$now_date_time and $cat are PHP variables.
This query takes about 10 seconds to return its results (I think because of the nested SELECT), and 10 seconds is far too long.
How can I achieve this in another way (a view or a JOIN)?
I think a JOIN will help me here, but I don't know how to use it properly for the SQL statement above.
Thanks in advance.
A JOIN is exactly what you are looking for. Try something like this:
SELECT DISTINCT articles.*
FROM articles
JOIN contents_by_cats ON articles.article_id = contents_by_cats.content_id
WHERE contents_by_cats.cat_id='$cat'
AND articles.permission='true'
AND articles.date <= '$now_date_time'
ORDER BY articles.date DESC;
If your query is still not as fast as you would like, check that you have indexes on articles.article_id, contents_by_cats.content_id, and contents_by_cats.cat_id. Depending on the data, you may want an index on articles.date as well.
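For example, the DDL could look something like this (index names are arbitrary, and a single composite index on contents_by_cats covers both the filter and the join; adjust to your actual schema):
CREATE INDEX idx_cbc_cat_content ON contents_by_cats (cat_id, content_id);
CREATE INDEX idx_articles_article_id ON articles (article_id); -- skip this one if article_id is already the primary key
CREATE INDEX idx_articles_date ON articles (date);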
Do note that if the $cat and $now_date_time values come from a user, you should really prepare the query and bind these values rather than dumping them straight into the SQL (which leaves you open to SQL injection).
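A minimal sketch of that binding idea, using MySQL's own prepared-statement syntax (in PHP you would use the equivalent mysqli or PDO prepare/bind/execute calls; the values in the SET line are made-up examples):
PREPARE stmt FROM
  'SELECT DISTINCT articles.*
   FROM articles
   JOIN contents_by_cats ON articles.article_id = contents_by_cats.content_id
   WHERE contents_by_cats.cat_id = ?
     AND articles.permission = ''true''
     AND articles.date <= ?
   ORDER BY articles.date DESC';
SET @cat = '3', @now = NOW();  -- example placeholder values
EXECUTE stmt USING @cat, @now;
DEALLOCATE PREPARE stmt;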
This is the query we are starting with:
SELECT a.*
FROM articles a
WHERE article_id IN (SELECT distinct(content_id)
FROM contents_by_cats
WHERE cat_id ='$cat'
) AND
permission ='true' AND
date <= '$now_date_time'
ORDER BY date DESC;
Two things will help this query. The first is to rewrite it using exists rather than in and to simplify the subquery:
SELECT a.*
FROM articles a
WHERE EXISTS (SELECT 1
FROM contents_by_cats cbc
WHERE cbc.content_id = a.article_id and cat_id = '$cat'
) AND
permission ='true' AND
date <= '$now_date_time'
ORDER BY date DESC;
Second, you want indexes on both articles and contents_by_cats:
create index idx_articles_3 on articles(permission, date, article_id);
create index idx_contents_by_cats_2 on contents_by_cats(content_id, cat_id);
By the way, instead of $now_date_time, you can just use the now() function in MySQL.
How can I optimize this SQL query?
CREATE TABLE table1 AS
SELECT * FROM temp
WHERE Birth_Place IN
(SELECT c.DES_COM
FROM tableCom AS c
WHERE c.COD_PROV IS NULL)
ORDER BY Cod, Birth_Date
I think the problem is the IN clause.
First of all, it's not quite valid SQL, since you are selecting and sorting by columns that are not part of the group. What you want to do is called "select top N in group"; check out Select first row in each GROUP BY group?
Your query doesn't make sense, because you have SELECT * with GROUP BY. Ignoring that, I would recommend writing the query as:
SELECT t.*
FROM temp t
WHERE EXISTS (SELECT 1
FROM tableCom c
WHERE t.Birth_Place = c.DES_COM AND
c.COD_PROV IS NULL
)
ORDER BY Cod, Birth_Date;
For this, I recommend an index on tableCom(DES_COM, COD_PROV). Your database might also be able to use an index on temp(Cod, Birth_Date, Birth_Place).
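The corresponding DDL would be something like this (index names are arbitrary, assuming the column names match your schema):
CREATE INDEX idx_tablecom_des_com ON tableCom (DES_COM, COD_PROV);
CREATE INDEX idx_temp_cod_birth ON temp (Cod, Birth_Date, Birth_Place);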
This query takes more than 40 seconds to execute on a table that has 200k rows
SELECT
my_robots.*,
(
SELECT count(id)
FROM hpsi_trading
WHERE estado <= 1 and idRobot = my_robots.id
) as openorders,
apikeys.apikey,
apikeys.apisecret
FROM my_robots, apikeys
WHERE estado <= 1
and idRobot = '2'
and ready = '1'
and apikeys.id = my_robots.idApiKey
and (my_robots.id LIKE '%0'
OR my_robots.id LIKE '%1'
OR my_robots.id LIKE '%2')
I know it is because of the COUNT subquery inside the query, but how could I fix this efficiently?
Edit: added the EXPLAIN plan (screenshot).
Thanks.
Use GROUP BY instead
SELECT my_robots.*,
count(hpsi_trading.id) as openorders,
apikeys.apikey,
apikeys.apisecret
FROM my_robots
JOIN apikeys ON apikeys.id = my_robots.idApiKey
LEFT JOIN hpsi_trading ON hpsi_trading.idRobot = my_robots.id and hpsi_trading.estado <= 1
WHERE estado <= 1 and
idRobot = '2' and
ready = '1' and
(
my_robots.id LIKE '%0' OR
my_robots.id LIKE '%1' OR
my_robots.id LIKE '%2'
)
GROUP BY my_robots.id, apikeys.apikey, apikeys.apisecret
Use explicit JOIN syntax. Some indexes will be needed to make it fast; however, the database structure is not clear from your post (or from your query).
The explain plan shows that the largest pain is selecting the data from the table hpsi_trading.
The challenge from the database's point of view is that the query contains a correlated subquery in the SELECT clause, which needs to be executed once for each result of the outer query (after filtering).
Replacing this subquery with a JOIN + GROUP BY will require MySQL to join all of these records (inflating the row count) and only then collapse them again with GROUP BY, which might take time.
Instead, I would extract the subquery into a temporary table that is grouped during creation, index it, and join to it. That way the subquery runs once, using a quick covering index; the data is already grouped, and only then is it joined to the other tables.
So far, it's all pros. The con is that extracting a subquery into a temporary table requires a bit more effort on the development side.
Please try this version and let me know if it helped (if not, please provide a fresh EXPLAIN plan screenshot):
Creating the temp table:
CREATE TEMPORARY TABLE IF NOT EXISTS temp1 AS
SELECT idRobot, COUNT(id) as openorders
FROM hpsi_trading
WHERE estado <= 1
GROUP BY idRobot;
The modified query:
SELECT
my_robots.*,
temp1.openorders,
apikeys.apikey,
apikeys.apisecret
FROM my_robots
JOIN apikeys ON apikeys.id = my_robots.idApiKey
LEFT JOIN temp1 ON temp1.idRobot = my_robots.id
WHERE
estado <= 1 AND idRobot = '2'
AND ready = '1'
AND (my_robots.id LIKE '%0'
OR my_robots.id LIKE '%1'
OR my_robots.id LIKE '%2')
The indexes to add for this solution (I assumed from logic that estado, idRobot and ready are from the apikeys table. If that's not the case, let me know and I'll adjust the indexes):
ALTER TABLE `temp1` ADD INDEX `temp1_index_1` (idRobot);
ALTER TABLE `hpsi_trading` ADD INDEX `hpsi_trading_index_1` (idRobot, estado, id);
ALTER TABLE `apikeys` ADD INDEX `apikeys_index_1` (`idRobot`, `ready`, `id`, `estado`);
ALTER TABLE `my_robots` ADD INDEX `my_robots_index_1` (`idApiKey`);
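If the connection is reused (for example under connection pooling), you may also want to drop the temporary table explicitly once you are done with it; otherwise MySQL drops it automatically when the session closes:
DROP TEMPORARY TABLE IF EXISTS temp1;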
I have a specific query:
SELECT *
FROM stats
WHERE mod_name = 'module'
GROUP BY domain
ORDER BY addition_date DESC
I want to retrieve the newest value for every domain. I know there is a row for domain x.com with date 2014-02-19.
However, this query returns the row with date 2014-01-06.
That's quite a simple query... why doesn't it group by domain and take only the newest value?
Am I missing something?
The ORDER BY takes place after the GROUP BY, and the non-grouped columns come from an arbitrary row in each group; that is why your query does not work. Here is a way to get what you want:
SELECT s.*
FROM stats s
WHERE mod_name = 'module' and
not exists (select 1
from stats s2
where s2.mod_name = s.mod_name and
      s2.domain = s.domain and
      s2.addition_date > s.addition_date
)
ORDER BY addition_date DESC;
To get the best performance, create an index on stats(mod_name, addition_date).
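The DDL for that would be something like this (the index name is arbitrary; given the domain correlation in the subquery, appending domain to the index may help further):
CREATE INDEX idx_stats_mod_date ON stats (mod_name, addition_date);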
I have this query:
select *
from transaction_batch
where id IN
(
select MAX(id) as id
from transaction_batch
where status_id IN (1,2)
group by status_id
);
The inner query runs very fast (less than 0.1 seconds) and returns two IDs, one for status 1 and one for status 2; the outer query then selects on the primary key, so it is indexed. EXPLAIN says it is examining 135k rows using WHERE only, and I cannot for the life of me figure out why this is so slow.
The inner query is run separately for every row of your table, over and over again.
As there is no reference to the outer query in the inner query, I suggest you split the two queries and simply insert the results of the inner query into the WHERE clause, or join against it as a derived table:
select b.*
from transaction_batch b
inner join (
select max(id) as id
from transaction_batch
where status_id in (1, 2)
group by status_id
) bm on b.id = bm.id
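If you would rather literally split it into two statements, as suggested above, it would look roughly like this (the ids in the second statement are placeholders for whatever the first one returns):
select max(id) as id
from transaction_batch
where status_id in (1, 2)
group by status_id;

-- then plug the returned ids in by hand:
select *
from transaction_batch
where id in (123, 456);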
My first post here... sorry about the lack of formatting.
I had a performance problem shown below:
90sec: WHERE [Column] LIKE (SELECT [Value] FROM [Table]) -- dynamic, slow
1sec: WHERE [Column] LIKE ('A','B','C') -- hardcoded, fast
1sec: WHERE @CSV LIKE CONCAT('%',[Column],'%') -- solution, below
I had tried joining rather than subquerying.
I had also tried a hardcoded CTE.
I had lastly tried a temp table.
None of these standard options worked, and I was not willing to use the sp_execute option.
The only solution that worked was:
DECLARE @CSV nvarchar(max) = (SELECT STRING_AGG([Value], ',') FROM [Table]);
-- This yields @CSV = 'A,B,C'
...
WHERE @CSV LIKE CONCAT('%',[Column],'%')
I would like to count the total number of rows returned by the following query:
SELECT table1.*, COUNT(table2.fk) * (100/18) AS 'number'
FROM table1 INNER JOIN table2 ON table1.pk = table2.fk
WHERE table1.Street LIKE '$Street%'
AND table1.City LIKE '$City%'
AND table1.Zip LIKE '%$Zip'
AND table1.DOBY LIKE '%$DOBY'
AND table1.DOBM LIKE '%$DOBM'
AND table1.DOBD LIKE '%$DOBD'
AND table1.Gender LIKE '$gender%'
AND table2.year>= 2004
AND table2.type IN ('AA', 'AB', 'AC')
GROUP BY table2.fk
HAVING (COUNT(table2.fk) * (100/18)) >= '$activity'
ORDER BY DOBY, DOBM, DOBD ASC
The query counts the number of times the primary key of table1 occurs as the foreign key of table2, and calculates a percentage ('number') based on a fixed amount. It works well enough, but I'm having trouble getting the total number of records found for my pagination script.
I would appreciate it if anyone can offer some suggestions or solutions.
You can use SQL_CALC_FOUND_ROWS (see the MySQL docs for the exact syntax).
Then use SELECT FOUND_ROWS() AS total.
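A sketch of how that fits your query (the LIMIT 0, 20 is just a placeholder for your pagination parameters):
SELECT SQL_CALC_FOUND_ROWS table1.*, COUNT(table2.fk) * (100/18) AS 'number'
FROM table1 INNER JOIN table2 ON table1.pk = table2.fk
WHERE table1.Street LIKE '$Street%'
AND table1.City LIKE '$City%'
AND table1.Zip LIKE '%$Zip'
AND table1.DOBY LIKE '%$DOBY'
AND table1.DOBM LIKE '%$DOBM'
AND table1.DOBD LIKE '%$DOBD'
AND table1.Gender LIKE '$gender%'
AND table2.year >= 2004
AND table2.type IN ('AA', 'AB', 'AC')
GROUP BY table2.fk
HAVING (COUNT(table2.fk) * (100/18)) >= '$activity'
ORDER BY DOBY, DOBM, DOBD ASC
LIMIT 0, 20;

-- Immediately afterwards, on the same connection:
SELECT FOUND_ROWS() AS total;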
Going with what Itay Moav says, your language's database library may expose FOUND_ROWS() as a function. As per the documentation, it returns the number of rows a SELECT statement with a LIMIT clause would have returned had the LIMIT not been there.
If it doesn't, you can just issue another query: SELECT FOUND_ROWS();. It will return the same information.