Convert MySQL query to Laravel query builder code

I am working on an agricultural product management system and have a question regarding a MySQL query: I would like to know how to build the same query using the Laravel query builder:
SELECT
vegitables.name, vegitables.image, vegitables.catagory,
AVG(price_wholesale),
SUM(CASE WHEN rank = 1 THEN price_wholesale ELSE 0 END) today,
SUM(CASE WHEN rank = 2 THEN price_wholesale ELSE 0 END) yesterday
FROM (
SELECT
veg_id, price_wholesale, price_date,
RANK() OVER (PARTITION BY veg_id ORDER BY price_date DESC) as rank
FROM old_veg_prices
) p
INNER JOIN vegitables ON p.veg_id = vegitables.id
WHERE rank in (1,2)
GROUP BY veg_id
Running the query directly against the database gives the expected result.
The following two tables are used to get today's price, yesterday's price, and the average price for each product.
CREATE TABLE `vegitables` (
`id` bigint(20) UNSIGNED NOT NULL,
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`image` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`catagory` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`total_area` int(11) NOT NULL COMMENT 'Total area of culativate in Sri Lanka (Ha)',
`total_producation` int(11) NOT NULL COMMENT 'Total production particular product(mt)',
`annual_crop_count` int(11) NOT NULL COMMENT 'how many time can crop pre year',
`short_dis` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
ALTER TABLE `vegitables`
ADD PRIMARY KEY (`id`);
ALTER TABLE `vegitables`
MODIFY `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=3;
COMMIT;
CREATE TABLE `old_veg_prices` (
`id` bigint(20) UNSIGNED NOT NULL,
`veg_id` int(11) NOT NULL,
`price_wholesale` double(8,2) NOT NULL,
`price_retial` double(8,2) NOT NULL,
`price_location` int(11) NOT NULL,
`price_date` date NOT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
ALTER TABLE `old_veg_prices`
ADD PRIMARY KEY (`id`);
ALTER TABLE `old_veg_prices`
MODIFY `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=6;
COMMIT;
I tried an online converter site to turn the MySQL query into query builder code, but it showed some errors I could not figure out. Either way, I want to run this query in Laravel by any method.

Your query will not return the data for yesterday and today; it will return the data for the two most recent dates (e.g. if today is 2021-11-01 and the two most recent dates for carrots are 2021-10-25 and 2021-10-20, it will use those two dates). Using RANK() ... IN (1, 2) is also incorrect because RANK() gives tied rows the same rank and then skips values, so you can get ranks 1 and 3 with no rank 2; DENSE_RANK() or ROW_NUMBER() would be the right tool if "the two most recent dates" is really what you want.
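If the intent really is "the two most recent price dates per vegetable" rather than literal today/yesterday, here is a minimal sketch of that variant using DENSE_RANK() so ties cannot skip rank 2; the aliases latest and previous are just illustrative names, the table and column names come from the question:
SELECT v.name, v.image, v.catagory,
       AVG(p.price_wholesale) AS avgwholesale,
       SUM(CASE WHEN p.rnk = 1 THEN p.price_wholesale END) AS latest,
       SUM(CASE WHEN p.rnk = 2 THEN p.price_wholesale END) AS previous
FROM (
    SELECT veg_id, price_wholesale,
           DENSE_RANK() OVER (PARTITION BY veg_id ORDER BY price_date DESC) AS rnk
    FROM old_veg_prices
) p
INNER JOIN vegitables v ON p.veg_id = v.id
WHERE p.rnk IN (1, 2)
GROUP BY v.id, v.name, v.image, v.catagory;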
To get today's and yesterday's prices you don't need window functions. Just use an appropriate WHERE clause and conditional aggregation:
SELECT vegitables.name
, vegitables.image
, vegitables.catagory
, AVG(old_veg_prices.price_wholesale) AS avgwholesale
, SUM(CASE WHEN old_veg_prices.price_date = CURRENT_DATE - INTERVAL 1 DAY THEN old_veg_prices.price_wholesale END) AS yesterday
, SUM(CASE WHEN old_veg_prices.price_date = CURRENT_DATE THEN old_veg_prices.price_wholesale END) AS today
FROM vegitables
INNER JOIN old_veg_prices ON vegitables.id = old_veg_prices.veg_id
WHERE old_veg_prices.price_date IN (CURRENT_DATE - INTERVAL 1 DAY, CURRENT_DATE)
GROUP BY vegitables.id -- other columns from vegitables table are functionally dependent on primary key
The Laravel equivalent would be:
DB::table('vegitables')
->join('old_veg_prices', 'old_veg_prices.veg_id', '=', 'vegitables.id')
->whereRaw('old_veg_prices.price_date IN (CURRENT_DATE - INTERVAL 1 DAY, CURRENT_DATE)')
->select(
'vegitables.name',
'vegitables.image',
'vegitables.catagory',
DB::raw('AVG(old_veg_prices.price_wholesale) AS avgwholesale'),
DB::raw('SUM(CASE WHEN old_veg_prices.price_date = CURRENT_DATE - INTERVAL 1 DAY THEN old_veg_prices.price_wholesale END) AS yesterday'),
DB::raw('SUM(CASE WHEN old_veg_prices.price_date = CURRENT_DATE THEN old_veg_prices.price_wholesale END) AS today')
)
->groupBy(
'vegitables.id',
'vegitables.name',
'vegitables.image',
'vegitables.catagory'
)
->get();

"Query builder" features of abstraction products often leave out some possible SQL constructs. I recommend you abandon the goal of reverse engineering SQL back to Laravel and simply perform the "raw" query.
Also...
rank() OVER (PARTITION BY veg_id ORDER BY price_date DESC) as rank
requires MySQL 8.0 (MariaDB 10.2).
And I suggest you avoid the alias "rank", since it is identical to the name of the window function and is a reserved word in MySQL 8.0; a renamed sketch follows below.
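As a sketch of that advice, here is the raw query with only the alias renamed to rnk (an arbitrary name) and the GROUP BY spelled out so it also passes ONLY_FULL_GROUP_BY; in Laravel it could be passed to DB::select() as-is:
SELECT vegitables.name, vegitables.image, vegitables.catagory,
       AVG(p.price_wholesale) AS avgwholesale,
       SUM(CASE WHEN p.rnk = 1 THEN p.price_wholesale ELSE 0 END) AS today,
       SUM(CASE WHEN p.rnk = 2 THEN p.price_wholesale ELSE 0 END) AS yesterday
FROM (
    SELECT veg_id, price_wholesale, price_date,
           RANK() OVER (PARTITION BY veg_id ORDER BY price_date DESC) AS rnk
    FROM old_veg_prices
) p
INNER JOIN vegitables ON p.veg_id = vegitables.id
WHERE p.rnk IN (1, 2)
GROUP BY vegitables.id, vegitables.name, vegitables.image, vegitables.catagory;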

Related

Mysql SUM by value in column

I have a problem getting the proper result.
I have a table with time entries registered by date and user.
I also have a date table that only consists of dates.
CREATE TABLE `jobbile_job_record` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`note` text,
`time_type` int(11) DEFAULT NULL,
`created_by` varchar(255) DEFAULT NULL,
`created` date DEFAULT NULL,
`jobbile_job_id` int(11) DEFAULT NULL,
`inserted` datetime DEFAULT CURRENT_TIMESTAMP,
`time_registered` decimal(11,2) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1145 DEFAULT CHARSET=latin1;
SET FOREIGN_KEY_CHECKS = 1;
I would like to get a result of
- date
- total time registered
- total time by user (registered)
I use the following query:
SELECT
date.date,
SUM(jobbile_job_record.time_registered) as 'total time',
SUM(jobbile_job_record.time_registered AND `jobbile_job_record`.`created_by` = '5713') as 'User 5713',
SUM(jobbile_job_record.time_registered AND `jobbile_job_record`.`created_by` = '5714') as 'User 5714'
FROM
date
LEFT JOIN jobbile_job_record
ON date.date = jobbile_job_record.created
WHERE
date.date BETWEEN '2019-11-01' AND '2019-11-30'
GROUP BY
date.date
ORDER BY
date.date ASC
The total time works fine, but the two per-user SUMs are not summed; they come back as counts instead.
Can't I use this method? Thanks!
I guess you need a CASE expression here:
SELECT
date.date,
SUM(jobbile_job_record.time_registered) as 'total time',
SUM(CASE WHEN `jobbile_job_record`.`created_by` = '5713' THEN jobbile_job_record.time_registered END) as 'User 5713',
SUM(CASE WHEN `jobbile_job_record`.`created_by` = '5714' THEN jobbile_job_record.time_registered END ) as 'User 5714'
FROM date
LEFT JOIN jobbile_job_record ON date.date = jobbile_job_record.created
WHERE date.date BETWEEN '2019-11-01' AND '2019-11-30'
GROUP BY date.date
ORDER BY date.date ASC
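For context on why the original version counted instead of summing: SUM(time_registered AND created_by = '5713') sums the boolean result of the AND, which is 0 or 1 per row. As a sketch, an equivalent shorthand for the CASE form using MySQL's IF() (SUM ignores the NULLs):
SELECT date.date,
       SUM(jobbile_job_record.time_registered) AS 'total time',
       SUM(IF(jobbile_job_record.created_by = '5713', jobbile_job_record.time_registered, NULL)) AS 'User 5713',
       SUM(IF(jobbile_job_record.created_by = '5714', jobbile_job_record.time_registered, NULL)) AS 'User 5714'
FROM date
LEFT JOIN jobbile_job_record ON date.date = jobbile_job_record.created
WHERE date.date BETWEEN '2019-11-01' AND '2019-11-30'
GROUP BY date.date
ORDER BY date.date ASC;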

How to improve query speed in mysql query

I'm trying to optimize my query as much as possible. A side problem is that I cannot see the exact query time, because it is rounded to a whole second. The query does return the expected result and takes about 1 second. The final query will be extended even further, and for this reason I am trying to improve it. How can this query be improved?
The database models an electricity utility company. The query should eventually calculate an invoice. I basically have 4 tables: apxprice, powerdeals, powerload, and eans_power.
apxprice holds an hourly price, and powerload holds a quarter-hourly volume. The first step is joining these two together for each quarter of an hour.
The second step is that I currently select the EAN indicated in the table eans_power.
Finally I join powerdeals, which currently consists of only a single row and indicates from which hour until which hour, and on which weekdays, it should be applicable. It consists of an hourly volume and a price. Currently it is joined only on the hours, but it will be extended to weekdays as well.
MYSQL Query:
SELECT l.DATE, l.PERIOD_FROM, a.PRICE, l.POWERLOAD,
SUM(a.PRICE*l.POWERLOAD), SUM(d.hourly_volume/4)
FROM timeseries.powerload l
INNER JOIN timeseries.apxprice a ON l.DATE = a.DATE
INNER JOIN contracts.eans_power c ON l.ean = c.ean
LEFT OUTER JOIN timeseries.powerdeals d ON d.period_from <= l.period_from
AND d.period_until >= l.period_until
WHERE l.PERIOD_FROM >= a.PERIOD_FROM
AND l.PERIOD_FROM < a.PERIOD_UNTIL
AND l.DATE >= '2018-01-01'
AND l.DATE <= '2018-12-31'
GROUP BY l.date
Explain:
1 SIMPLE c NULL system PRIMARY,ean NULL NULL NULL 1 100.00 Using temporary; Using filesort
1 SIMPLE l NULL ref EAN EAN 21 const 35481 11.11 Using index condition
1 SIMPLE d NULL ALL NULL NULL NULL NULL 1 100.00 Using where; Using join buffer (Block Nested Loop)
1 SIMPLE a NULL ref DATE DATE 4 timeseries.l.date 24 11.11 Using index condition
Create table queries:
apxprice
CREATE TABLE `apxprice` (
 `apx_id` int(11) NOT NULL AUTO_INCREMENT,
 `date` date DEFAULT NULL,
 `period_from` time DEFAULT NULL,
 `period_until` time DEFAULT NULL,
 `price` decimal(10,2) DEFAULT NULL,
 PRIMARY KEY (`apx_id`),
 KEY `DATE` (`date`,`period_from`,`period_until`)
) ENGINE=MyISAM AUTO_INCREMENT=29664 DEFAULT CHARSET=latin1
powerdeals
CREATE TABLE `powerdeals` (
 `deal_id` int(11) NOT NULL AUTO_INCREMENT,
 `date_deal` date NOT NULL,
 `start_date` date NOT NULL,
 `end_date` date NOT NULL,
 `weekday_from` int(11) NOT NULL,
 `weekday_until` int(11) NOT NULL,
 `period_from` time NOT NULL,
 `period_until` time NOT NULL,
 `hourly_volume` int(11) NOT NULL,
 `price` int(11) NOT NULL,
 `type_deal_id` int(11) NOT NULL,
 `contract_id` int(11) NOT NULL,
 PRIMARY KEY (`deal_id`)
) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1
powerload
CREATE TABLE `powerload` (
 `powerload_id` int(11) NOT NULL AUTO_INCREMENT,
 `ean` varchar(18) DEFAULT NULL,
 `date` date DEFAULT NULL,
 `period_from` time DEFAULT NULL,
 `period_until` time DEFAULT NULL,
 `powerload` int(11) DEFAULT NULL,
 PRIMARY KEY (`powerload_id`),
 KEY `EAN` (`ean`,`date`,`period_from`,`period_until`)
) ENGINE=MyISAM AUTO_INCREMENT=61039 DEFAULT CHARSET=latin1
eans_power
CREATE TABLE `eans_power` (
 `ean` char(19) NOT NULL,
 `contract_id` int(11) NOT NULL,
 `invoicing_id` int(11) NOT NULL,
 `street` varchar(255) NOT NULL,
 `number` int(11) NOT NULL,
 `affix` char(11) NOT NULL,
 `postal` char(6) NOT NULL,
 `city` varchar(255) NOT NULL,
 PRIMARY KEY (`ean`),
 KEY `ean` (`ean`,`contract_id`,`invoicing_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
Sample data tables
apx_prices
apx_id,date,period_from,period_until,price
1,2016-01-01,00:00:00,01:00:00,23.86
2,2016-01-01,01:00:00,02:00:00,22.39
powerdeals
deal_id,date_deal,start_date,end_date,weekday_from,weekday_until,period_from,period_until,hourly_volume,price,type_deal_id,contract_id
1,2019-05-15,2018-01-01,2018-12-31,1,5,08:00:00,20:00:00,1000,50,3,1
powerload
powerload_id,ean,date,period_from,period_until,powerload
1,871688520000xxxxxx,2018-01-01,00:00:00,00:15:00,9
2,871688520000xxxxxx,2018-01-01,00:15:00,00:30:00,11
eans_power
ean,contract_id,invoicing_id,street,number,affix,postal,city
871688520000xxxxxx,1,1,road,14,postal,city
Result, without sum() and group by:
DATE,PERIOD_FROM,PRICE,POWERLOAD,a.PRICE*l.POWERLOAD,d.hourly_volume/4,
2018-01-01,00:00:00,27.20,9,244.80,NULL
2018-01-01,00:15:00,27.20,11,299.20,NULL
Result, with sum() and group by:
DATE, PERIOD_FROM, PRICE, POWERLOAD, SUM(a.PRICE*l.POWERLOAD), SUM(d.hourly_volume/4)
2018-01-01,08:00:00,26.33,21,46193.84,12250.0000
2018-01-02, 08:00:00,47.95,43,90623.98,12250.0000
Preliminary optimizations:
Use InnoDB, not MyISAM.
Use CHAR only for constant-length strings.
Use consistent datatypes (see ean, for example).
For an alternative to timings that are rounded to whole seconds, check out the Handler counts, as sketched below.
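A minimal sketch of that measurement approach; the comment marks where the query under test goes:
FLUSH STATUS;                          -- reset the session status counters
-- ... run the query under test here ...
SHOW SESSION STATUS LIKE 'Handler%';   -- rows touched by the query, not rounded to whole seconds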
Because range tests (such as l.PERIOD_FROM >= a.PERIOD_FROM AND l.PERIOD_FROM < a.PERIOD_UNTIL) are essentially impossible to optimize, I recommend you expand the table to have one entry per hour (or one per quarter hour, if necessary). Looking up a row via a key is much faster than scanning "ALL" of the table. 9K rows for an entire year is trivial.
When you get past these recommendations (and the Comments), I will have more tips on optimizing the indexes, especially InnoDB's PRIMARY KEY.
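If apxprice were expanded to one row per quarter hour as recommended, the range test could become an exact-match join. A rough sketch mirroring the shape of the original query (the expanded price table and the cost/deal_volume aliases are assumptions):
SELECT l.date, l.period_from, a.price, l.powerload,
       SUM(a.price * l.powerload) AS cost,
       SUM(d.hourly_volume / 4) AS deal_volume
FROM timeseries.powerload l
INNER JOIN timeseries.apxprice a
        ON a.date = l.date
       AND a.period_from = l.period_from      -- exact match instead of a range test
INNER JOIN contracts.eans_power c ON l.ean = c.ean
LEFT OUTER JOIN timeseries.powerdeals d
        ON d.period_from <= l.period_from
       AND d.period_until >= l.period_until
WHERE l.date >= '2018-01-01'
  AND l.date <= '2018-12-31'
GROUP BY l.date;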

Trying to get 2 sums within one table with addition of a join

I've got a submissions table, and each submission in it has a type of either tip or request.
I'm trying to grab all the submissions of a particular user (to display as an aggregation of all their activity on their dashboard).
E.g.
You have submitted: 5 requests and 1 tip.
My submissions create table looks like this:
Table: submissions
Create Table: CREATE TABLE `submissions` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(255) NOT NULL,
`slug` varchar(255) NOT NULL,
`description` mediumtext NOT NULL,
`user_id` int(11) NOT NULL,
`created` datetime NOT NULL,
`type` enum('tip','request') NOT NULL,
`thumbnail` varchar(64) CHARACTER SET latin1 DEFAULT NULL,
`removed` tinyint(1) unsigned NOT NULL DEFAULT '0',
`keywords` varchar(255) NOT NULL,
`ip` int(10) unsigned NOT NULL,
PRIMARY KEY (`id`),
KEY `user_id` (`user_id`),
FULLTEXT KEY `search` (`title`,`description`,`keywords`)
) ENGINE=InnoDB AUTO_INCREMENT=22 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
I came up with one query that works and gives me the count of a user's submissions, but each submission (each row that comes back) stores the type as either tip or request, so now I'm trying to figure out how to aggregate that info.
My query which returns the user with all tips. I'm trying to do one for requests as well.
SELECT users.*, count(submissions.id)
AS "tipsCount"
FROM users
LEFT JOIN submissions on users.id = submissions.user_id
WHERE username = 'blahbster'
AND submissions.type = 'tip'
ORDER BY submissions.created DESC
LIMIT 1;
Perhaps I could use a sum here? My attempt:
SELECT users.*,
SUM(case when type = 'tip' then 1 else 0 end) as "tipsCount"
SUM(case when type = 'request' then 1 else 0 end) as "requestsCount"
FROM users
LEFT JOIN submissions on users.id = submissions.user_id
WHERE username = 'blahbster'
ORDER BY submissions.created DESC
LIMIT 1;
SELECT a.username, b.type,
SUM(case when b.type = 'tip' then 1 else 0 end) as "tipsCount",
SUM(case when b.type = 'request' then 1 else 0 end) as "requestsCount"
FROM users as a
LEFT JOIN submissions as b
ON a.id = b.user_id
GROUP BY a.username, b.type;
The second query you had was close, but it wasn't aggregating a particular user's total tips and total requests. The SUM didn't compute across anything, i.e., there was no GROUP BY. The query above should help. You can obviously add the WHERE filter back in if you need it, as sketched below.
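A sketch of that filtered form, dropping b.type from the SELECT and GROUP BY so you get a single row per user while the aggregation still works:
SELECT a.username,
       SUM(CASE WHEN b.type = 'tip' THEN 1 ELSE 0 END) AS tipsCount,
       SUM(CASE WHEN b.type = 'request' THEN 1 ELSE 0 END) AS requestsCount
FROM users AS a
LEFT JOIN submissions AS b ON a.id = b.user_id
WHERE a.username = 'blahbster'
GROUP BY a.username;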

MySQL Query to integrate information from 3 tables (with plenty of obstacles)

Background: In an experiment, bees have number tags glued on their backs and their choices in a lab are recorded. There are not enough number tags (2 digits and a few color options), so tags need to be reused. However, a tag is only reused after the bee carrying it dies. Therefore, in the data we occasionally see repeated bee identifiers, and the only way to know whether two records belong to the same bee is to look at another table to see whether the bee died in between.
The Tables:
The choices bees make
CREATE TABLE `exp8` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`bee_id` varchar(255) DEFAULT NULL,
`date_time` datetime DEFAULT NULL,
`choice` varchar(255) DEFAULT NULL,
`hover_duration` int(11) DEFAULT NULL,
`antennate_duration` int(11) DEFAULT NULL,
`land_duration` int(11) DEFAULT NULL,
`landing_position` varchar(255) DEFAULT NULL,
`remarks` longtext,
`validity` int(11) DEFAULT '1',
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=264;
LOCK TABLES `exp8` WRITE;
/*!40000 ALTER TABLE `exp8` DISABLE KEYS */;
INSERT INTO `exp8` (`id`, `bee_id`, `date_time`, `choice`, `hover_duration`, `antennate_duration`, `land_duration`, `landing_position`, `remarks`, `validity`)
VALUES
(1,NULL,'2013-05-14 15:38:31','right',1,0,0,NULL,NULL,1),
(2,NULL,'2013-05-18 10:27:15','left',1,0,0,NULL,NULL,1),
(3,'G5','2013-05-18 11:44:44','left',0,0,4,'yellow',NULL,1),
(4,'G5','2013-06-01 10:00:00','left',0,0,4,'yellow',NULL,1);
The time of birth and death tags
CREATE TABLE `tags` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`bee_id` varchar(255) DEFAULT NULL,
`tag_date` date DEFAULT NULL,
`colony_id` int(11) DEFAULT NULL,
`events` varchar(255) DEFAULT NULL,
`worker_age` varchar(255) DEFAULT NULL,
`tagged_by` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=406;
LOCK TABLES `tags` WRITE;
/*!40000 ALTER TABLE `tags` DISABLE KEYS */;
INSERT INTO `tags` (`id`, `bee_id`, `tag_date`, `colony_id`, `events`, `worker_age`, `tagged_by`)
VALUES
(1,'G5','2013-05-08',1,'birth','Adult','ET'),
(2,'G5','2013-05-20',NULL,'death','Adult','ET'),
(3,'G5','2013-05-29',1,'birth','Adult','ET');
And the stimuli that are being displayed in the lab
CREATE TABLE `stimuli_schedule` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`left_side` varchar(255) DEFAULT NULL,
`right_side` varchar(255) DEFAULT NULL,
`start_datetime` datetime DEFAULT NULL,
`scheduled` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=50;
LOCK TABLES `stimuli_schedule` WRITE;
/*!40000 ALTER TABLE `stimuli_schedule` DISABLE KEYS */;
INSERT INTO `stimuli_schedule` (`id`, `left_side`, `right_side`, `start_datetime`, `scheduled`)
VALUES
(1,'LS1','LS2','2013-05-14 12:00:00',NULL),
(2,'LS2','LS1','2013-05-15 11:44:00',NULL),
(3,'LS1','LS2','2013-05-30 11:09:00',NULL);
The desired output is something like this:
bee_id CHOICE_DATETIME LEFT_SIDE RIGHT_SIDE CHOICE
===================================================================
NULL 2013-05-14 15:38:31 LS1 LS2 right
G5 2013-05-18 10:27:15 LS2 LS1 left
G5 2013-06-01 10:00:00 LS1 LS2 left
Thanks to the generous help of #GordonLinoff and #jcsanyi there are two related MySQL queries that achieve part of the solution:
This bit shows each individual bee's choice, assuming that a bee's ID is unique:
select bee_id, count(case when choice="left" then 1 else NULL end) as leftCount, count(case when choice="right" then 1 else NULL end) as rightCount
from exp8 e
left join stimuli_schedule ss on ss.start_datetime <= e.date_time
left join stimuli_schedule ss2 on ss2.start_datetime <= e.date_time
where (bee_id IS NOT NULL) AND (ss2.left_side IN ('LA1','HS1') AND ss2.right_side IN('HS1','LA1'))
group by bee_id
This bit is capable of showing a bee's length of life, and it distinguishes between reused tags:
select t.bee_id, (case when t.death_date is null then 'Alive' else 'Dead' end) as status,
t.tag_date, t.death_date, (case when t.death_date is not null then timediff(t.death_date,t.tag_date) else timediff(NOW(),t.tag_date) end) as age
from (select t.*,
(select t2.tag_date
from tags t2
where t2.bee_id = t.bee_id and
t2.events = 'death' and
t2.tag_date >= t.tag_date
limit 1
) as death_date
from tags t
where t.events = 'birth'
) t
group by t.bee_id, t.tag_date;
I am having trouble combining the two queries to produce the desired output. Here is my attempt:
select t.bee_id, count(case when choice="left" then 1 else NULL end) as leftCount,
count(case when choice="right" then 1 else NULL end) as rightCount,
(case when t.death_date is null then 'Alive' else 'Dead' end) as status,
t.tag_date, t.death_date,
(case when t.death_date is not null
then timediff(t.death_date,t.tag_date)
else timediff(NOW(),t.tag_date) end) as "age (hours)"
from exp8 e, (select t.*,
(select t2.tag_date
from tags t2
where t2.bee_id = t.bee_id and
t2.events = 'death' and
t2.tag_date >= t.tag_date
limit 1
) as death_date
from tags t
where t.events = 'birth'
) t
left join stimuli_schedule ss on ss.start_datetime <= e.date_time
left join stimuli_schedule ss2 on ss2.start_datetime <= e.date_time
where (e.bee_id IS NOT NULL)
group by t.bee_id, t.tag_date;
For reasons beyond my understanding, the e.date_time reference in the LEFT JOIN conditions is causing an "unknown column" error.
Any help would be much appreciated!
The way it stands now, the JOIN operators relate to the derived table t, not to exp8 as you apparently intended; that's what you get by mixing the comma join syntax with explicit JOINs. You would also want to join t to exp8 on bee_id, I presume.
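A rough sketch of that rewrite with explicit JOINs throughout, joining t to exp8 on bee_id and (as an assumption about the intent) bounding each choice to the tag's birth/death window; the stimuli_schedule joins are left out for brevity and would attach to e in the same way:
select t.bee_id,
       count(case when e.choice = 'left' then 1 end) as leftCount,
       count(case when e.choice = 'right' then 1 end) as rightCount,
       (case when t.death_date is null then 'Alive' else 'Dead' end) as status,
       t.tag_date,
       t.death_date
from (select t.*,
             (select t2.tag_date
              from tags t2
              where t2.bee_id = t.bee_id
                and t2.events = 'death'
                and t2.tag_date >= t.tag_date
              limit 1) as death_date
      from tags t
      where t.events = 'birth') t
inner join exp8 e
        on e.bee_id = t.bee_id                                   -- explicit join between t and exp8
       and e.date_time >= t.tag_date                             -- bound each choice to this tag's lifetime,
       and (t.death_date is null or e.date_time < t.death_date)  -- presumably how reused tags are told apart
group by t.bee_id, t.tag_date;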
Your problem is more in the database design itself. Behaviour is attributed to a bee, and that bee needs to be uniquely identified. As such, a primary key for the bee is needed, and you can then record the behaviour against that bee id.
The trick is, when you process a tag, you need to determine which bee currently carries that tag. That is easily done with a table listing the tags currently deployed. When a bee dies and its tag is re-assigned or retired, the active tags list is updated accordingly.
If you can see where I'm going with this: the SELECTs you're doing in the data analysis phase are overly complex because they're trying to imitate the missing primary key and needlessly apply it to your behaviour entries. Correct the design and your data analysis will be many times faster and your queries far simpler.
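A minimal sketch of the kind of structure described above; every table and column name here is hypothetical:
-- each physical bee gets its own surrogate key
CREATE TABLE bees (
  bee_pk INT UNSIGNED NOT NULL AUTO_INCREMENT,
  birth_date DATE NOT NULL,
  death_date DATE DEFAULT NULL,
  PRIMARY KEY (bee_pk)
) ENGINE=InnoDB;
-- which bee currently carries which tag; updated when a bee dies
CREATE TABLE active_tags (
  tag_code VARCHAR(255) NOT NULL,      -- e.g. 'G5'
  bee_pk INT UNSIGNED NOT NULL,
  PRIMARY KEY (tag_code),
  FOREIGN KEY (bee_pk) REFERENCES bees (bee_pk)
) ENGINE=InnoDB;
-- behaviour rows would then reference the bee directly (exp8.bee_pk)
-- instead of the reusable tag string in exp8.bee_id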

Using Filesort - Can't avoid it, even using index

I have a table with 40,000,000 rows, and I'm trying to optimize my query because it takes too long.
First, this is my table:
CREATE TABLE resume (
yearMonth char(6) DEFAULT NULL,
type char(1) DEFAULT NULL,
agen_o char(5) DEFAULT NULL,
tar char(2) DEFAULT NULL,
cve_ent char(1) DEFAULT NULL,
cve_mun char(3) DEFAULT NULL,
cve_reg int(1) DEFAULT NULL,
id_ope char(1) DEFAULT NULL,
ope_tip char(2) DEFAULT NULL,
ope_cve char(3) DEFAULT NULL,
cob_u int(9) DEFAULT NULL,
tot_imp bigint(15) DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
This is my query:
SELECT m.name_ope AS cve,
SUBSTRING(r.yearMonth,5,2) AS period,
COUNT(DISTINCT(CONCAT(r.agen_ope,r.cve_ope))) AS num,
SUM(CASE WHEN r.type='A' THEN r.cob_u ELSE 0 END) AS tot_u
FROM resume r, media m
WHERE CONCAT(r.id_ope,SUBSTRING(r.ope_cve,3,1))=m.ope_cve AND
r.type IN ('C','D','E') AND
SUBSTRING(r.yearMonth,1,4)='2012' AND
r.id_ope='X' AND
SUBSTRING(r.ope_cve,1,2) IN (SELECT cve_med FROM catNac WHERE numero='0')
GROUP BY SUBSTRING(r.yearMonth,5,2),SUBSTRING(r.ope_cve,3,1)
ORDER BY SUBSTRING(r.yearMonth,5,2),SUBSTRING(r.ope_cve,3,1)
So I added an index on these fields: id_ope, yearMonth, agen_o, because I have other queries that use these fields in WHERE, in this order.
Now my explain output:
1 PRIMARY r ref indice indice 2 const 14774607 Using where; Using filesort
So I added another index on yearMonth, ope_cve, but I still get "Using filesort". How can I optimize this?
Thanks
Without modifying your table structure, if you have an index on yearMonth, you can try this:
SELECT m.name_ope AS cve,
SUBSTRING(r.yearMonth,5,2) AS period,
COUNT(DISTINCT(CONCAT(r.agen_ope,r.cve_ope))) AS num,
SUM(CASE WHEN r.type='A' THEN r.cob_u ELSE 0 END) AS tot_u
FROM resume r, media m
WHERE CONCAT(r.id_ope,SUBSTRING(r.ope_cve,3,1))=m.ope_cve AND
r.type IN ('C','D','E') AND
r.yearMonth LIKE '2012%' AND
r.id_ope='X' AND
SUBSTRING(r.ope_cve,1,2) IN (SELECT cve_med FROM catNac WHERE numero='0')
GROUP BY r.yearMonth,SUBSTRING(r.ope_cve,3,1)
The changes:
Using r.yearMonth LIKE '2012%' should allow an index to be used for that part of your where clause.
Since you're already filtering out every year but 2012, you can use GROUP BY r.yearMonth alone.
The ORDER BY clause is not needed: before MySQL 8.0, GROUP BY implicitly sorts by the grouping columns anyway, unless you add ORDER BY NULL to suppress that sort, as illustrated below.
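A minimal illustration of that last point, using the resume table from the question; whether the filesort disappears entirely still depends on how the grouping itself is executed:
-- ORDER BY NULL suppresses the implicit GROUP BY sort in MySQL 5.x
SELECT r.yearMonth, COUNT(*) AS cnt
FROM resume r
GROUP BY r.yearMonth
ORDER BY NULL;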