MySQL View 20x slower than Select

I have a query that selects ~8,000 rows. When I execute the query it takes 0.1 sec.
When I copy the query into a view and execute the view, it takes about 2 seconds. In the first row of the view's EXPLAIN it selects ~570K rows, and I don't know why.
I don't understand that first row or why it only shows up in the EXPLAIN of the view:
1 PRIMARY ALL NULL NULL NULL NULL
This is the query (yes, I know I'm not a MySQL pro and the query is not that efficient, but it works and 0.1 sec would be fine for me). Does anyone know why it is so slow in a view?
MariaDB 10.5.9
select
`xxxxxxx`.`auftraege`.`Zustandigkeit` AS `Zustandigkeit`,
`xxxxxxx`.`auftraege`.`cms` AS `cms`,
`xxxxxxx`.`auftraege`.`auftrag_id` AS `auftrag_id`,
`xxxxxxx`.`angebot`.`angebot_id` AS `angebot_id`,
`xxxxxxx`.`kunden`.`kunde_id` AS `kid`,
`xxxxxxx`.`angebot`.`kunde_id` AS `kunde_id`,
`xxxxxxx`.`kunden`.`firma` AS `firma`,
`xxxxxxx`.`auftraege`.`gekuendigt` AS `gekuendigt`,
`xxxxxxx`.`kunden`.`ansprechpartnerVorname` AS `ansprechpartnerVorname`,
`xxxxxxx`.`kunden`.`ansprechpartner` AS `ansprechpartner`,
`xxxxxxx`.`auftraege`.`ampstatus` AS `ampstatus`,
`xxxxxxx`.`auftraege`.`autoMahnungen` AS `autoMahnungen`,
`xxxxxxx`.`kunden`.`mail` AS `mail`,
`xxxxxxx`.`kunden`.`ansprechpartnerAnrede` AS `ansprechpartnerAnrede`,
case
`xxxxxxx`.`kunden`.`ansprechpartnerAnrede`
when
'm'
then
concat('Herr ', ifnull(`xxxxxxx`.`kunden`.`ansprechpartnerVorname`, ''), ifnull(`xxxxxxx`.`kunden`.`ansprechpartner`, ''))
else
concat('Frau ', ifnull(`xxxxxxx`.`kunden`.`ansprechpartnerVorname`, ''), ifnull(`xxxxxxx`.`kunden`.`ansprechpartner`, ''))
end
AS `ansprechpartnerfullName`,
`xxxxxxx`.`kunden`.`website` AS `website`,
`xxxxxxx`.`personal`.`name_betrieb` AS `name_betrieb`,
`xxxxxxx`.`kunden`.`prioritaet` AS `prioritaet`,
`xxxxxxx`.`auftraege`.`infoemail` AS `infoemail`,
`xxxxxxx`.`auftraege`.`keywords` AS `keywords`,
`xxxxxxx`.`auftraege`.`ftp_h` AS `ftp_h`,
`xxxxxxx`.`auftraege`.`ftp_u` AS `ftp_u`,
`xxxxxxx`.`auftraege`.`ftp_pw` AS `ftp_pw`,
`xxxxxxx`.`auftraege`.`lgi_h` AS `lgi_h`,
`xxxxxxx`.`auftraege`.`lgi_u` AS `lgi_u`,
`xxxxxxx`.`auftraege`.`lgi_pw` AS `lgi_pw`,
`xxxxxxx`.`auftraege`.`autoRemind` AS `autoRemind`,
`xxxxxxx`.`kunden`.`telefon` AS `telefon`,
`xxxxxxx`.`kunden`.`mobilfunk` AS `mobilfunk`,
`xxxxxxx`.`auftraege`.`kommentar` AS `kommentar`,
`xxxxxxx`.`auftraege`.`phase` AS `phase`,
`xxxxxxx`.`auftraege`.`datum` AS `datum`,
`xxxxxxx`.`angebot`.`typ` AS `typ`,
case
`xxxxxxx`.`auftraege`.`gekuendigt`
when
'1'
then
'Ja'
else
'Nein'
end
AS `Gekuendigt ? `,
(
select
count(`xxxxxxx`.`status`.`aenderung`)
from
`xxxxxxx`.`status`
where
`xxxxxxx`.`status`.`auftrag_id` = `xxxxxxx`.`auftraege`.`auftrag_id`
)
AS `aenderungen`,
`xxxxxxx`.`auftraege`.`vertragStart` AS `vertragStart`,
`xxxxxxx`.`auftraege`.`vertragEnde` AS `vertragEnde`,
case
`xxxxxxx`.`auftraege`.`zahlungsart`
when
'U'
then
'Überweisung'
when
'L'
then
'Lastschrift'
else
'Unbekannt'
end
AS `Zahlungsart`,
`xxxxxxx`.`kunden`.`yyyyy_piwik` AS `yyyyy_piwik`,
(
select
max(`xxxxxxx`.`status`.`datum`) AS `mxDTst`
from
`xxxxxxx`.`status`
where
`xxxxxxx`.`status`.`auftrag_id` = `xxxxxxx`.`auftraege`.`auftrag_id`
and `xxxxxxx`.`status`.`typ` = 'SEO'
)
AS `mxDTst`,
(
select
case
`xxxxxxx`.`rechnungen`.`beglichen`
when
'YES'
then
'isOk'
else
'isAffe'
end
AS `neuUwe`
from
(
`xxxxxxx`.`zahlungsplanneu`
join
`xxxxxxx`.`rechnungen`
on(`xxxxxxx`.`zahlungsplanneu`.`rechnungsnummer` = `xxxxxxx`.`rechnungen`.`rechnungsnummer`)
)
where
`xxxxxxx`.`zahlungsplanneu`.`auftrag_id` = `xxxxxxx`.`auftraege`.`auftrag_id`
and `xxxxxxx`.`rechnungen`.`beglichen` <> 'STO' limit 1
)
AS `neuer`,
(
select
group_concat(`xxxxxxx`.`kunden_keywords`.`keyword` separator ',')
from
`xxxxxxx`.`kunden_keywords`
where
`xxxxxxx`.`kunden_keywords`.`kunde_id` = `xxxxxxx`.`kunden`.`kunde_id`
)
AS `keyword`,
(
select
case
count(0)
when
0
then
'Cool'
else
'Uncool'
end
AS `AusfallVor`
from
`xxxxxxx`.`rechnungen`
where
`xxxxxxx`.`rechnungen`.`rechnung_tag` < current_timestamp() - interval 15 day
and `xxxxxxx`.`rechnungen`.`kunde_id` = `xxxxxxx`.`kunden`.`kunde_id`
and `xxxxxxx`.`rechnungen`.`beglichen` = 'NO' limit 1
)
AS `Liquidiert`
from
(
((((`xxxxxxx`.`auftraege`
join
`xxxxxxx`.`angebot`
on(`xxxxxxx`.`auftraege`.`angebot_id` = `xxxxxxx`.`angebot`.`angebot_id`))
join
`xxxxxxx`.`kunden`
on(`xxxxxxx`.`angebot`.`kunde_id` = `xxxxxxx`.`kunden`.`kunde_id`))
left join
`xxxxxxx`.`kunden_keywords`
on(`xxxxxxx`.`angebot`.`kunde_id` = `xxxxxxx`.`kunden_keywords`.`kunde_id`))
join
`xxxxxxx`.`personal`
on(`xxxxxxx`.`kunden`.`bearbeiter` = `xxxxxxx`.`personal`.`personal_id`))
left join
`xxxxxxx`.`status`
on(`xxxxxxx`.`auftraege`.`auftrag_id` = `xxxxxxx`.`status`.`auftrag_id`)
)
group by
`xxxxxxx`.`auftraege`.`auftrag_id`
order by
NULL
Update 1
1. The View Itself (Duration 1.83 sec)
1.1 Create the View: This is the view I created; it only contains the query from above (see the command sketch after this list).
1.2 Executing the View: It takes 1.83 sec to execute the view.
1.3 Analyze the View: This is the EXPLAIN of the view.
2. The View with an added WHERE clause (Duration 1.86 sec)
2.1 Analyze the View with the added WHERE clause: Rick wanted me to add a WHERE clause to the view, if I understood him correctly. This is the EXPLAIN of the view with the WHERE clause added; it takes 1.86 sec.
3. The Query that is the source of the View (Duration: 0.1 sec)
3.1 Execute the query directly: This is the query that is the source of the view, executed directly against the server. It takes ~0.1 - 0.2 seconds.
3.2 Analyze the direct query: And this is the EXPLAIN of the pure query.
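For reference, steps 1.1-3.2 boil down to roughly the following commands (the view name vw_auftraege is an assumption, since the actual CREATE VIEW statement was only posted as a screenshot):
-- 1.1: create the view over the query above (name assumed)
CREATE VIEW vw_auftraege AS
SELECT ... ;                                  -- the full query from above
-- 1.2 / 1.3: execute and analyze the view
SELECT * FROM vw_auftraege;
EXPLAIN SELECT * FROM vw_auftraege;
-- 2.1: the same with an added WHERE clause (filter column chosen arbitrarily)
EXPLAIN SELECT * FROM vw_auftraege WHERE auftrag_id = 12345;
-- 3.1 / 3.2: execute and analyze the underlying query directly
EXPLAIN
SELECT ... ;                                  -- the full query from above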
Why is the view so much slower when it does nothing more than encapsulate the query?
Update 2
These are the indexes I have set
ALTER TABLE angebot ADD INDEX angebot_idx_angebot_id (angebot_id);
ALTER TABLE auftraege ADD INDEX auftraege_idx_auftrag_id (auftrag_id);
ALTER TABLE kunden ADD INDEX kunden_idx_kunde_id (kunde_id);
ALTER TABLE kunden_keywords ADD INDEX kunden_keywords_idx_kunde_id (kunde_id);
ALTER TABLE personal ADD INDEX personal_idx_personal_id (personal_id);
ALTER TABLE rechnungen ADD INDEX rechnungen_idx_rechnungsnummer_beglichen (rechnungsnummer,beglichen);
ALTER TABLE rechnungen ADD INDEX rechnungen_idx_beglichen_kunde_id_rechnung (beglichen,kunde_id,rechnung_tag);
ALTER TABLE status ADD INDEX status_idx_auftrag_id (auftrag_id);
ALTER TABLE status ADD INDEX status_idx_typ_auftrag_id_datum (typ,auftrag_id,datum);
ALTER TABLE zahlungsplanneu ADD INDEX zahlungsplanneu_idx_auftrag_id (auftrag_id);
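To check whether these indexes are actually picked up, it can help to refresh the statistics and compare the plans again; a minimal sketch, reusing the assumed view name from above:
ANALYZE TABLE auftraege, angebot, kunden, kunden_keywords, personal,
              rechnungen, status, zahlungsplanneu;
EXPLAIN SELECT * FROM vw_auftraege;   -- compare against EXPLAIN of the raw query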

Be consistent between tables. kunde_id, for example, seems to be declared differently between tables. This may be preventing some obvious optimizations. (There are 6 JOINs that say `func` in EXPLAIN.)
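For example, a hypothetical check and fix (the mismatched type here is only an illustration; the real definitions come from SHOW CREATE TABLE):
SHOW CREATE TABLE kunden;
SHOW CREATE TABLE angebot;
-- If, say, angebot.kunde_id turned out to be VARCHAR while kunden.kunde_id is INT,
-- aligning the types would let the join use the index directly:
ALTER TABLE angebot MODIFY kunde_id INT UNSIGNED NOT NULL;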
Remove the extra parentheses in JOINs. They may be preventing what the Optimizer is happy to do -- rearrange the tables in a JOIN.
Turn the query inside out. By this, I mean to do the minimum amount of work to do the main JOIN. Collect mostly id(s). Then do the dependent subqueries in an outer select. Something like:
SELECT ... ( SELECT ... ), ...
    FROM ( SELECT a1.id
             FROM a AS a1
             JOIN b ON ..
             JOIN c ON .. ) AS ids
    JOIN a AS a2 ON a2.id = ids.id
    JOIN d ON ...
The "inside-out" kludge may eliminate the need for the GROUP BY. (Your query is too complex for me to see for sure.) If so, then I call the problem "explode-implode" -- Your query first JOINs, producing a temp table with lots of rows ("explodes"). Then it does a GROUP BY ("implodes").
More
These indexes will probably help:
status: (auftrag_id, typ, datum, aenderung)
rechnungen: (beglichen, kunde_id, rechnung_tag)
rechnungen: (rechnungsnummer, beglichen)
zahlungsplanneu: (auftrag_id, rechnungsnummer)
kunden_keywords: (kunde_id, keyword) -- (unless `kunde_id` is the PK)
(I see from all 3 EXPLAINs that you probably have sufficient indexes on kunden_keywords and status. Show me what indexes you have, so I can see if the existing indexes are as good as my suggestions.) "Using index" == "covering index".
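Written out as DDL, the suggestions above would be something like this (index names are made up):
ALTER TABLE status          ADD INDEX idx_status_auftr_typ_dat_aend (auftrag_id, typ, datum, aenderung);
ALTER TABLE rechnungen      ADD INDEX idx_rechn_begl_kunde_tag      (beglichen, kunde_id, rechnung_tag);
ALTER TABLE rechnungen      ADD INDEX idx_rechn_nr_begl             (rechnungsnummer, beglichen);
ALTER TABLE zahlungsplanneu ADD INDEX idx_zpn_auftrag_rechnr        (auftrag_id, rechnungsnummer);
ALTER TABLE kunden_keywords ADD INDEX idx_kk_kunde_keyword          (kunde_id, keyword);  -- unless kunde_id is the PK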
Near the end is this LEFT JOIN, but I did not spot any use for the table; perhaps it can be removed?
left join `kunden_keywords` on(`angebot`.`kunde_id` = `kunden_keywords`.`kunde_id`))

Related

MySQL update query where in subquery slow

I'm having issues with a MySQL query running extremely slow. It takes about 2 minutes for each UPDATE to process.
This is the query:
UPDATE msn
SET is_disable = 1
WHERE mid IN
(
SELECT mid from link
WHERE rid = ${param.rid}
);
So my question is: how will the performance of the UPDATE statement be affected if the subquery returns no rows or NULL? I suspect the process is slow because the result of the subquery is empty or NULL.
Thanks a lot in advance.
The issue here is that the subquery following IN has to execute, whether or not it returns any records. I would probably express your update using exists logic:
UPDATE msn m
SET is_disable = 1
WHERE EXISTS (SELECT 1 FROM link l WHERE m.mid = l.mid AND l.rid = ${param.rid});
Then, add the following index to the link table:
CREATE INDEX idx ON link (mid, rid);
You could also try and compare against this version of the index:
CREATE INDEX idx ON link (rid, mid);
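One way to compare the two index variants is to look at which one the optimizer actually chooses (EXPLAIN works on UPDATE statements in MySQL 5.6+; the literal rid value below is just a placeholder):
EXPLAIN UPDATE msn m
SET is_disable = 1
WHERE EXISTS (SELECT 1 FROM link l WHERE m.mid = l.mid AND l.rid = 123);
-- Check the `key` column of the plan, then drop whichever index goes unused.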

MYSQL OR query problem (scans full table even when using indexes)

I am using EXPLAIN to get performance analysis of my below query:
SELECT `wf_cart_items`.`id`
FROM `wf_cart_items`
WHERE (`wf_cart_items`.`docket_number` = '405-2844' OR
       MATCH(`wf_cart_items`.`multi_docket_number`) AGAINST ('405-2844')
      )
The problem is that it shows 597,151 rows to be searched, while each of the OR conditions on its own examines only 1 row. How is it possible that when I use OR it does a full table scan?
P.S.: I have a FULLTEXT index on multi_docket_number and a BTREE index on docket_number.
OR is quite tricky for SQL optimizers -- both in the WHERE clause and in ON clauses.
The recommendation is to switch this to union all:
SELECT ci.id
FROM wf_cart_items ci
WHERE ci.docket_number = '405-2844'
UNION ALL
SELECT ci.id
FROM wf_cart_items ci
WHERE MATCH(ci.multi_docket_number) AGAINST ( '405-2844' ) AND
ci.docket_number <> '405-2844';
Based on the naming of your columns, I fear that multi_docket_number actually contains multiple docket numbers. If that is the case, you probably want to fix the data model, but that is another conversation.

Query takes more than 40 seconds to execute

This query takes more than 40 seconds to execute on a table that has 200k rows
SELECT
my_robots.*,
(
SELECT count(id)
FROM hpsi_trading
WHERE estado <= 1 and idRobot = my_robots.id
) as openorders,
apikeys.apikey,
apikeys.apisecret
FROM my_robots, apikeys
WHERE estado <= 1
and idRobot = '2'
and ready = '1'
and apikeys.id = my_robots.idApiKey
and (my_robots.id LIKE '%0'
OR my_robots.id LIKE '%1'
OR my_robots.id LIKE '%2')
I know it is because of the COUNT inside the query, but how could I fix this efficiently?
Edit: Explain
Thanks.
Use GROUP BY instead
SELECT my_robots.*,
count(id) as openorders,
apikeys.apikey,
apikeys.apisecret
FROM my_robots
JOIN apikeys ON apikeys.id = my_robots.idApiKey
LEFT JOIN hpsi_trading ON hpsi_trading.idRobot = my_robots.id and estado <= 1
WHERE estado <= 1 and
idRobot = '2' and
ready = '1' and
(
my_robots.id LIKE '%0' OR
my_robots.id LIKE '%1' OR
my_robots.id LIKE '%2'
)
GROUP BY my_robots.id, apikeys.apikey, apikeys.apisecret
Use explicit JOIN syntax. Some indexes will be needed to run it fast; however, the database structure is not clear from your post (or from your query).
The explain plan shows that the largest pain is selecting the data from the table hpsi_trading.
The challenge from the database's point of view is that the query contains a correlated subquery in the SELECT clause, which needs to be executed once for each result of the outer query (after filtering).
Replacing this subquery with a JOIN + GROUP BY will require MySQL to join between all these records (inflate) and only then deflate the data using GROUP BY, which might take time.
Instead, I would extract the subquery to a temporary table, which is grouped during creation, index it and join to it. That way, the subquery will run once, using a quick covering index, it will already group the data and only then join it to the other table.
So far, it's all pros. The con is that extracting a subquery into a temporary table requires a bit more effort on the development side.
Please try this version and let me know if it helped (if not, please provide a fresh EXPLAIN plan screenshot):
Creating the temp table:
CREATE TEMPORARY TABLE IF NOT EXISTS temp1 AS
SELECT idRobot, COUNT(id) as openorders
FROM hpsi_trading
WHERE estado <= 1
GROUP BY idRobot;
The modified query:
SELECT
    my_robots.*,
    temp1.openorders,
    apikeys.apikey,
    apikeys.apisecret
FROM my_robots
-- explicit JOINs, so my_robots is visible in the LEFT JOIN's ON clause
JOIN apikeys ON apikeys.id = my_robots.idApiKey
LEFT JOIN temp1 ON temp1.idRobot = my_robots.id
WHERE
    estado <= 1
    AND idRobot = '2'
    AND ready = '1'
    AND (my_robots.id LIKE '%0'
        OR my_robots.id LIKE '%1'
        OR my_robots.id LIKE '%2')
The indexes to add for this solution (I assumed from logic that estado, idRobot and ready are from the apikeys table. If that's not the case, let me know and I'll adjust the indexes):
ALTER TABLE `temp1` ADD INDEX `temp1_index_1` (idRobot);
ALTER TABLE `hpsi_trading` ADD INDEX `hpsi_trading_index_1` (idRobot, estado, id);
ALTER TABLE `apikeys` ADD INDEX `apikeys_index_1` (`idRobot`, `ready`, `id`, `estado`);
ALTER TABLE `my_robots` ADD INDEX `my_robots_index_1` (`idApiKey`);

Retrieving rows that have 2 columns matching and 1 different

Below is my table, called 'datapoints'. I am trying to retrieve instances where there are different values of 'sensorValue' for the same 'timeOfReading' and 'sensorNumber'.
For example:
sensorNumber  sensorValue  timeOfReading
5             5            6
5             5            6
5             6            10   <-- same time/sensor, different value!
5             7            10   <-- same time/sensor, different value!
Should output: sensorNumber:5, timeOfReading: 10 as a result.
I understand this is a duplicate question; in fact, I have included one of the related links below for reference. However, none of the solutions are working, as my query simply never ends.
Below is my SQL code:
SELECT table1.sensorNumber, table1.timeOfReading
FROM datapoints table1
WHERE (SELECT COUNT(*)
FROM datapoints table2
WHERE table1.sensorNumber = table2.sensorNumber
AND table1.timeOfReading = table1.timeOfReading
AND table1.sensorValue != table2.sensorValue) > 1
AND table1.timeOfReading < 20;
Notice I have placed a bound on timeOfReading as low as 20. I also tried setting a bound for both table1 and table2, but the query just runs until timeout without displaying results, no matter what I put...
The database contains about 700 MB of data, so I do not think I can just run this on the entire DB in a reasonable amount of time. I am wondering if this is the culprit?
If so, how could I properly limit my query to run the search efficiently? If not, what am I doing wrong that this is not working?
Select rows having 2 columns equal value
EDIT:
Error Code: 2013. Lost connection to MySQL server during query 600.000 sec
When I try to run the query again I get this error unless I restart
Error Code: 2006. MySQL server has gone away 0.000 sec
You can use a self-JOIN to match related rows in the same table.
SELECT DISTINCT t1.sensorNumber, t1.timeOfReading
FROM datapoints AS t1
JOIN datapoints AS t2
ON t1.sensorNumber = t2.sensorNumber
AND t1.timeOfReading = t2.timeOfReading
AND t1.sensorValue != t2.sensorValue
WHERE t1.timeOfReading < 20
To improve performance, make sure you have a composite index on sensorNumber and timeOfReading:
CREATE INDEX ix_sn_tr on datapoints (sensorNumber, timeOfReading);
I think you have missed a condition: add a not-equal condition as well, so that only instances with different values are retrieved.
SELECT *
FROM new_table a
WHERE EXISTS (SELECT * FROM new_table b
WHERE a.num = b.num
AND a.timeRead = b.timeRead
AND a.value != b.value);
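Mapped onto the column names from the question's sample data, that would be roughly:
SELECT *
FROM datapoints a
WHERE EXISTS (SELECT *
              FROM datapoints b
              WHERE a.sensorNumber  = b.sensorNumber
                AND a.timeOfReading = b.timeOfReading
                AND a.sensorValue  != b.sensorValue);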
You can try this query:
SELECT testTable.*
FROM testTable
INNER JOIN (
    SELECT sensorNumber, timeOfReading
    FROM testTable
    GROUP BY sensorNumber, timeOfReading
    HAVING COUNT(DISTINCT sensorValue) > 1
) t ON t.sensorNumber = testTable.sensorNumber
   AND t.timeOfReading = testTable.timeOfReading;
Here is a SQLFiddle.
This query will return the sensorNumber and the timeOfReading where there are different values of sensorValue:
select sensorNumber, timeOfReading
from tablename
group by sensorNumber, timeOfReading
having count(distinct sensorValue)>1
and this will return the actual records:
select t.*
from
tablename t inner join (
select sensorNumber, timeOfReading
from tablename
group by sensorNumber, timeOfReading
having count(distinct sensorValue)>1
) d on t.sensorNumber=d.sensorNumber and t.timeOfReading=d.timeOfReading
I would suggest adding an index on (sensorNumber, timeOfReading):
alter table tablename add index idx_sensor_time (sensorNumber, timeOfReading)

MYSQL how to speed up requests using EXISTS

I have an SQL request:
SELECT DISTINCT id_tr
FROM planning_requests a
WHERE EXISTS(
SELECT 1 FROM planning_requests b
WHERE a.id_tr = b.id_tr
AND trainer IS NOT NULL
AND trainer != 'FREE'
)
AND EXISTS(
SELECT 1 FROM planning_requests c
WHERE a.id_tr = c.id_tr
AND trainer IS NULL
)
This request takes 168.9490 sec to execute and returns 23,162 of 2,545,088 rows.
Should I use LEFT JOIN or NOT IN, and how can I rewrite it? Thanks.
You can speed this up by adding indexes. I would suggest: planning_requests(id_tr, trainer).
You can do this as:
create index planning_requests_id_trainer on planning_requests(id_tr, trainer);
Also, I think you are missing an = in the first subquery.
EDIT:
If you have a lot of duplicate values of id_tr, then in addition to the above indexes, it might make sense to phrase the query as:
select id_tr
from (select distinct id_tr
from planning_requests
) a
where . . .
The WHERE conditions are being run on every row of the original table; the DISTINCT is processed after the WHERE.
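Spelled out with the conditions from the original query, the "where . . ." part would be roughly this (an untested sketch):
SELECT id_tr
FROM (SELECT DISTINCT id_tr
      FROM planning_requests
     ) a
WHERE EXISTS (SELECT 1 FROM planning_requests b
              WHERE a.id_tr = b.id_tr
                AND b.trainer IS NOT NULL
                AND b.trainer != 'FREE')
  AND EXISTS (SELECT 1 FROM planning_requests c
              WHERE a.id_tr = c.id_tr
                AND c.trainer IS NULL);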
I think your query can be simplified to this:
SELECT DISTINCT a.id_tr
FROM planning_requests a
JOIN planning_requests b
ON b.id_tr = a.id_tr
AND b.trainer IS NULL
WHERE a.trainer < 'FREE'
If you index planning_requests(trainer), then MySQL can utilize an index range to get all the rows that aren't FREE or NULL. All numeric strings will meet the < 'FREE' criteria, and it also won't return NULL values.
Then, use JOIN to make sure each record from that much smaller result set has a matching NULL record.
For the JOIN, index planning_requests(id_tr, trainer).
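As DDL, those two suggestions would be something like this (index names made up):
CREATE INDEX idx_pr_trainer      ON planning_requests (trainer);
CREATE INDEX idx_pr_idtr_trainer ON planning_requests (id_tr, trainer);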
It might be simpler if you don't mix types in a column like FREE and 1.