Enhancing performance when applying multiple joins in MySQL

I have 3 tables:
Nutrition_facts: a static table that holds all nutrition facts, with the columns nutrition_id and nutrition_ename.
Products_info: holds product info, with the columns product_id, product_ename, brand.
product_nutrition_facts: a mediator table that links each product to its nutrition facts and their values. Its structure is product_id, nutrition_id, nutrition_value; each product has one or more rows depending on how many nutrition facts it has.
Here is a real test example:
Nutrition_facts table
nutrition_id |nutrition_ename
1 | calories
2 | fat
3 | sugar
4 | salt
Products_info table
product_id| product_ename | brand
1 | Nutella Hazelnut Cocoa | Nutella
2 | Nutella Jar | Nutella
product_nutrition_facts table
product_id | nutrition_id | nutrition_value
1 | 1 | 200
1 | 2 | 15
1 | 3 | 2
1 | 4 | 11
2 | 1 | 200
2 | 2 | 15
2 | 3 | 12
2 | 4 | 11
I need to write a query that returns the names of products whose sugar value is less than or equal to 15 and whose salt value is less than or equal to 140.
I built a query that returns the correct values, but it takes a long time to run. Can someone suggest edits to improve its performance, please?
SELECT DISTINCT p.product_id, p.brand, p.e_name, p.image_low
FROM products_info p
JOIN product_nutrition_facts pn ON p.product_id = pn.product_id
WHERE p.brand = 'AL MARAI'
  AND p.product_id IN (
        SELECT product_id
        FROM product_nutrition_facts pn
        WHERE pn.nutrition_id = 3
          AND pn.nutrition_value <= 15
      )
  AND p.product_id IN (
        SELECT product_id
        FROM product_nutrition_facts pn
        WHERE pn.nutrition_id = 4
          AND pn.nutrition_value <= 140
      )
  AND (pn.nutrition_id = 3 OR pn.nutrition_id = 4)
EDITS
CREATE TABLE `products_info` (
`product_id` int(11) NOT NULL AUTO_INCREMENT,
`image_low` varchar(400) DEFAULT NULL,
`e_name` varchar(200) DEFAULT NULL,
PRIMARY KEY (`product_id`),
UNIQUE KEY `product_id_UNIQUE` (`product_id`)
) ENGINE=InnoDB AUTO_INCREMENT=249292 DEFAULT CHARSET=utf8
CREATE TABLE `product_nutrition_facts` (
`prod_nut_id` int(11) NOT NULL AUTO_INCREMENT,
`product_id` int(11) DEFAULT NULL,
`nutrition_id` int(11) DEFAULT NULL,
`nutrition_value` varchar(25) DEFAULT NULL,
`unit_id` int(11) DEFAULT NULL,
`parent` int(11) DEFAULT NULL,
`serving_size` varchar(145) DEFAULT NULL,
`serving_size_unit` int(11) DEFAULT NULL,
`no_nutrition_facts` int(11) NOT NULL,
`added_by` varchar(145) DEFAULT NULL,
`last_update` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`inserted_time` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_by` varchar(150) NOT NULL,
PRIMARY KEY (`prod_nut_id`),
UNIQUE KEY `prod_nut_id_UNIQUE` (`prod_nut_id`),
KEY `nutrition_id_fk_idx` (`nutrition_id`),
KEY `unit_Fk_idx` (`unit_id`),
KEY `unit_fk1_idx` (`serving_size_unit`),
KEY `product_idFK` (`product_id`)
) ENGINE=InnoDB AUTO_INCREMENT=580809 DEFAULT CHARSET=utf8
CREATE TABLE `nutrition_facts` (
`nutrition_id` int(11) NOT NULL AUTO_INCREMENT,
`nutrition_aname` varchar(145) DEFAULT NULL,
`nutrition_ename` varchar(145) DEFAULT NULL,
`alternative_name` varchar(145) DEFAULT NULL,
`unit` varchar(8) NOT NULL,
`daily_value` int(11) NOT NULL,
`nut_order` int(2) NOT NULL,
`is_child` int(1) NOT NULL,
`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`nutrition_id`)
) ENGINE=InnoDB AUTO_INCREMENT=53 DEFAULT CHARSET=utf8

Try adding these indexes: product_nutrition_facts (nutrition_id, nutrition_value, product_id), product_nutrition_facts (product_id, nutrition_id, nutrition_value), and products_info (brand).
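A sketch of the corresponding DDL (the index names here are placeholders, and it assumes products_info really has the brand column used in the query, which is not shown in the posted CREATE TABLE):
ALTER TABLE product_nutrition_facts
  ADD INDEX idx_nut_val_prod (nutrition_id, nutrition_value, product_id),
  ADD INDEX idx_prod_nut_val (product_id, nutrition_id, nutrition_value);
ALTER TABLE products_info
  ADD INDEX idx_brand (brand);
Then run the query: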
SELECT p.*
FROM products_info p
JOIN product_nutrition_facts pn1
  ON pn1.product_id = p.product_id
 AND pn1.nutrition_id = 3
 AND pn1.nutrition_value <= 15
JOIN product_nutrition_facts pn2
  ON pn2.product_id = p.product_id
 AND pn2.nutrition_id = 4
 AND pn2.nutrition_value <= 140
WHERE p.brand = 'AL MARAI'

IN ( SELECT ... ) -- Does not optimize well; turn into JOIN (as Max suggests)
"Overnormalization" -- Nutrition_facts can be eliminated; simply use the nutrition_names in place of nutrition_id.

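If you follow the denormalization suggestion above, a minimal sketch of what the mediator table and query could look like (illustrative only; the table name, column types, and index are assumptions, and storing the value numerically also avoids the string comparison forced by the current varchar(25) nutrition_value):
CREATE TABLE product_nutrition_facts_denorm (
  product_id      INT NOT NULL,
  nutrition_ename VARCHAR(145) NOT NULL,  -- 'sugar', 'salt', ... stored directly
  nutrition_value DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (product_id, nutrition_ename),
  KEY idx_name_value_prod (nutrition_ename, nutrition_value, product_id)
);

SELECT p.*
FROM products_info p
JOIN product_nutrition_facts_denorm sugar
  ON sugar.product_id = p.product_id
 AND sugar.nutrition_ename = 'sugar'
 AND sugar.nutrition_value <= 15
JOIN product_nutrition_facts_denorm salt
  ON salt.product_id = p.product_id
 AND salt.nutrition_ename = 'salt'
 AND salt.nutrition_value <= 140
WHERE p.brand = 'AL MARAI';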
Related

MySQL group rows that share at least one value in common on multiple columns

Consider the following table:
CREATE TABLE `customer_identifiers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`order_id` varchar(45) DEFAULT NULL,
`email` varchar(45) DEFAULT NULL,
`phone` varchar(45) DEFAULT NULL,
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
UNIQUE KEY `uniqueness` (`order_id`,`email`,`phone`),
KEY `email` (`email`),
KEY `order` (`order_id`),
KEY `phone` (`phone`),
KEY `CA` (`created_at`),
KEY `UA` (`updated_at`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=latin1;
insert into dev.customer_identifiers(order_id,email,phone) values (1,'test#gmail.com','07444226373'),
(2,'test#gmail.com','0744422633'),
(3,'test2#gmail.com','07444226373'),
(4,'test3#gmail.com','07453456373'),
(5,'test4#gmail.com','07955226373');
How could I group all order_ids that share either the same email or the same phone number?
desired output:
+----------+------------------------+--------------------------------+
| order_id | phone | mail |
+----------+------------------------+--------------------------------+
| 1,2,3 | 07444226373,0744422633 | test2#gmail.com,test#gmail.com |
+----------+------------------------+--------------------------------+
| 4 | 07453456373 | test3#gmail.com |
+----------+------------------------+--------------------------------+
| 5 | 07955226373 | test4#gmail.com |
+----------+------------------------+--------------------------------+
SELECT * FROM
(
SELECT ci2.`order_id`,
       GROUP_CONCAT(ci2.`order_id`) AS `concats`,
       GROUP_CONCAT(DISTINCT ci2.`phone`) AS phones,
       GROUP_CONCAT(DISTINCT ci2.`email`) AS mails
FROM `customer_identifiers` ci1
INNER JOIN `customer_identifiers` ci2 ON ci1.`email` = ci2.`email` OR ci1.`phone` = ci2.`phone`
GROUP BY ci1.`order_id`
) AS tbl1
GROUP BY tbl1.`order_id`;
What you should do is count the number of duplicated rows:
SELECT
email,
phone,
COUNT(email) AS count_email,
COUNT(phone) AS count_phone
FROM customer_identifiers
GROUP BY email,phone
HAVING
COUNT(email)>1 OR COUNT(phone) > 1
You can adapt it to return the columns you need to identify which ids are involved in the duplication.
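For example, a sketch (not from the original answer, but using the columns from the schema above) that also lists which ids and order_ids share each duplicated email/phone pair:
SELECT
  email,
  phone,
  GROUP_CONCAT(id)       AS ids,        -- the rows that share this email/phone pair
  GROUP_CONCAT(order_id) AS order_ids,
  COUNT(*)               AS cnt
FROM customer_identifiers
GROUP BY email, phone
HAVING COUNT(*) > 1;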
I hope it helps...

How can I optimize this MySQL query on a social table structure?

Scheme
CREATE TABLE IF NOT EXISTS `content` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`entity_uid` int(11) NOT NULL,
....
PRIMARY KEY (`uid`),
UNIQUE KEY `insert_at` (`insert_at`),
KEY `fk_entity` (`entity_uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `entity` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `entity_comment` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`entity_uid` int(11) NOT NULL,
`user_uid` int(11) NOT NULL,
....
PRIMARY KEY (`uid`),
KEY `fk_entity` (`entity_uid`),
KEY `fk_user` (`user_uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `entity_like` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`entity_uid` int(11) NOT NULL,
`user_uid` int(11) NOT NULL,
....
PRIMARY KEY (`uid`),
KEY `fk_entity` (`entity_uid`),
KEY `fk_user` (`user_uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `entity_share` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`entity_uid` int(11) NOT NULL,
`user_uid` int(11) NOT NULL,
`share_type` int(2) NOT NULL,
....
PRIMARY KEY (`uid`),
KEY `fk_entity` (`entity_uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `entity_view` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`entity_uid` int(11) NOT NULL,
`user_uid` int(11) NOT NULL,
....
PRIMARY KEY (`uid`),
KEY `fk_entity` (`entity_uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `user` (
`uid` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(30) NOT NULL,
....
PRIMARY KEY (`uid`),
UNIQUE KEY `email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query 1 - using Left Join [Query took 16.3032 sec]
SELECT c . * , COUNT(DISTINCT ev.uid) AS view_count, COUNT( DISTINCT el.uid ) AS like_count, COUNT( DISTINCT ec.uid ) AS reply_count, COUNT( DISTINCT es.uid ) AS share_count
FROM content AS c
LEFT JOIN entity_view AS ev ON ev.entity_uid = c.entity_uid
LEFT JOIN entity_like AS el ON el.entity_uid = c.entity_uid
LEFT JOIN entity_share AS es ON es.entity_uid = c.entity_uid
LEFT JOIN entity_comment AS ec ON ec.entity_uid = c.entity_uid
GROUP BY c.uid
EDIT - Explain
Query 2 - using Sub query [Query took 0.0069 sec]
SELECT c.*,
(SELECT COUNT(*) FROM entity_view WHERE entity_uid = c.entity_uid) AS view_count ,
(SELECT COUNT(*) FROM entity_like WHERE entity_uid = c.entity_uid) AS like_count ,
(SELECT COUNT(*) FROM entity_comment WHERE entity_uid = c.entity_uid) AS reply_count ,
(SELECT COUNT(*) FROM entity_share WHERE entity_uid = c.entity_uid) AS share_count
FROM content AS c
EDIT - Explain
Result
uid | data of content | view_count | like_count | reply_count | share_count |
-----------------------------------------------------------------------------
1 | ..... | 100 | 10 | 5 | 6 |
-----------------------------------------------------------------------------
2 | ..... | 200 | 20 | 20 | 3 |
-----------------------------------------------------------------------------
3 | ..... | 300 | 10 | 10 | 2 |
-----------------------------------------------------------------------------
Explain
Storage Engine : InnoDB
entity_{action}: a row is inserted when a user performs {action}, e.g. a row is inserted into entity_view when a user views the content.
Question
How can I optimize the above MySQL query further?
I ran the query in the two ways shown and got the results above.
This showed that the subquery version is much faster.
Is there a way to get better performance than the subquery version with this table structure? And why is the LEFT JOIN version so slow?
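As a side note on why the LEFT JOIN version is so slow: joining all four entity_* tables before grouping multiplies the matching rows per entity (views × likes × comments × shares), and the COUNT(DISTINCT ...) then has to deduplicate that cross product. A common alternative, sketched here (not from the original thread, untested), is to pre-aggregate each table in a derived table and join the counts:
SELECT c.*,
       COALESCE(v.cnt, 0)  AS view_count,
       COALESCE(l.cnt, 0)  AS like_count,
       COALESCE(cm.cnt, 0) AS reply_count,
       COALESCE(s.cnt, 0)  AS share_count
FROM content AS c
LEFT JOIN (SELECT entity_uid, COUNT(*) AS cnt FROM entity_view    GROUP BY entity_uid) v  ON v.entity_uid  = c.entity_uid
LEFT JOIN (SELECT entity_uid, COUNT(*) AS cnt FROM entity_like    GROUP BY entity_uid) l  ON l.entity_uid  = c.entity_uid
LEFT JOIN (SELECT entity_uid, COUNT(*) AS cnt FROM entity_comment GROUP BY entity_uid) cm ON cm.entity_uid = c.entity_uid
LEFT JOIN (SELECT entity_uid, COUNT(*) AS cnt FROM entity_share   GROUP BY entity_uid) s  ON s.entity_uid  = c.entity_uid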

MySQL query using the wrong indexes

I have some optimization problems with some of my queries in a MySQL database. After building my application, I am trying to optimize it using mysqltuner and EXPLAIN to find non-indexed queries. This query runs often and is reported as not using the index:
SELECT count(*) AS rangedandselling
FROM
( SELECT DISTINCT `store_formats`.`Store Name`
FROM (`eds_sales`
JOIN `store_formats`
ON (`eds_sales`.`Store Nbr` = `store_formats`.`Store Nbr`)
)
WHERE `eds_sales`.`Prime Item Nbr` = '4'
AND `eds_sales`.`Date` BETWEEN CAST('2016-07-14' AS DATETIME)
AND CAST('2016-07-21' AS DATETIME)
AND `store_formats`.`Format Name` IN ('format1','format2')
AND `store_formats`.`Store Name` IN (
SELECT DISTINCT `store_formats`.`Store Name`
FROM (`eds_stock`
JOIN `store_formats`
ON (`eds_stock`.`Store Nbr` = `store_formats`.`Store Nbr`)
)
WHERE `eds_stock`.`Prime Item Nbr` = '4'
AND `eds_stock`.`Date` BETWEEN CAST('2016-07-14' AS DATETIME)
AND CAST('2016-07-21' AS DATETIME)
AND `store_formats`.`Format Name` IN ('format1','format2')
AND `eds_stock`.`Curr Traited Store/Item Comb.` = '1' )
) t
This is the EXPLAIN output: https://tools.mariadb.org/ea/pyb3h
Although I have indexed the columns involved in the joins and lookups, it looks like it is picking another index. This other index is called uniqness and is composed of 6 different columns from the source data I use for inserts (the combination of those columns is the only thing that makes a row unique, hence the name). I then made sure I have indexes on the other columns, and I can see them in the EXPLAIN. I am not sure why this happens; can someone help?
Any ideas on optimizing this query?
Here is the EXPLAIN, in case the link above does not work:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+---+---+---+---+---+---+---+---+---+---+
| 1 | PRIMARY | <derived2> | ALL | NULL | NULL | NULL | NULL | 167048 | |
| 2 | DERIVED | eds_sales | ref | uniqness,Prime Item Nbr,Store Nbr | uniqness | 4 | const | 23864 | Using where; Using index; Using temporary |
| 2 | DERIVED | store_formats | ref | Store Nbr,Store Name,Format Name | Store Nbr | 5 | equidata.eds_sales.Store Nbr | 1 | Using where |
| 2 | DERIVED | <subquery3> | eq_ref | distinct_key | distinct_key | 84 | func | 1 | Distinct |
| 3 | MATERIALIZED | store_formats | ALL | Store Nbr,Store Name,Format Name | NULL | NULL | NULL | 634 | Using where; Distinct |
| 3 | MATERIALIZED | eds_stock | ref | uniqness,Prime Item Nbr,Store Nbr | uniqness | 8 | const,equidata.store_formats.Store Nbr | 7 | Using where; Distinct |
+---+---+---+---+---+---+---+---+---+---+
I am also posting the related tables structure :
--
-- Table structure for table `eds_sales`
--
CREATE TABLE `eds_sales` (
`id` int(12) NOT NULL,
`Prime Item Nbr` int(12) NOT NULL,
`Prime Item Desc` varchar(255) NOT NULL,
`Prime Size Desc` varchar(255) NOT NULL,
`Variety` varchar(255) NOT NULL,
`WHPK Qty` int(5) NOT NULL,
`SUPPK Qty` int(5) NOT NULL,
`Depot Nbr` int(5) NOT NULL,
`Depot Name` varchar(255) NOT NULL,
`Store Nbr` int(5) NOT NULL,
`Store Name` varchar(255) NOT NULL,
`EPOS Quantity` int(5) NOT NULL,
`EPOS Sales` float(4,2) NOT NULL,
`Date` date NOT NULL,
`Client` varchar(255) NOT NULL,
`Retailer` varchar(255) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
ALTER TABLE `eds_sales`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `uniqness` (`Prime Item Nbr`,`Prime Item Desc`,`Prime Size Desc`,`Variety`,`WHPK Qty`,`SUPPK Qty`,`Depot Nbr`,`Depot Name`,`Store Nbr`,`Store Name`,`Date`,`Client`) USING BTREE,
ADD KEY `Prime Item Nbr` (`Prime Item Nbr`),
ADD KEY `Store Nbr` (`Store Nbr`);
Table structure for table eds_stock
CREATE TABLE `eds_stock` (
`Prime Item Nbr` int(12) NOT NULL,
`Prime Item Desc` varchar(255) NOT NULL,
`Prime Size Desc` varchar(255) NOT NULL,
`Variety` varchar(255) NOT NULL,
`Curr Valid Store/Item Comb.` int(12) NOT NULL,
`Curr Traited Store/Item Comb.` int(12) NOT NULL,
`Store Nbr` int(12) NOT NULL,
`Store Name` varchar(255) NOT NULL,
`Curr Str On Hand Qty` int(12) NOT NULL,
`Curr Str In Transit Qty` int(12) NOT NULL,
`Curr Str On Order Qty` int(12) NOT NULL,
`Curr Str In Depot Qty` int(12) NOT NULL,
`Curr Instock %` int(12) NOT NULL,
`Max Shelf Qty` int(12) NOT NULL,
`On Hand Qty` int(12) NOT NULL,
`Date` date NOT NULL,
`Client` varchar(255) NOT NULL,
`Retailer` varchar(255) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `eds_stock`
ADD UNIQUE KEY `uniqness` (`Prime Item Nbr`,`Store Nbr`,`Date`,`Client`,`Retailer`),
ADD KEY `Prime Item Nbr` (`Prime Item Nbr`),
ADD KEY `Store Nbr` (`Store Nbr`),
ADD KEY `Curr Valid Store/Item Comb.` (`Curr Valid Store/Item Comb.`);
Table structure for table store_formats
CREATE TABLE `store_formats` (
`id` int(12) NOT NULL,
`Store Nbr` int(4) DEFAULT NULL,
`Store Name` varchar(27) DEFAULT NULL,
`City` varchar(19) DEFAULT NULL,
`Post Code` varchar(9) DEFAULT NULL,
`Region #` int(2) DEFAULT NULL,
`Region Name` varchar(10) DEFAULT NULL,
`Distr #` int(3) DEFAULT NULL,
`Dist Name` varchar(26) DEFAULT NULL,
`Square Footage` varchar(7) DEFAULT NULL,
`Format` int(1) DEFAULT NULL,
`Format Name` varchar(23) DEFAULT NULL,
`Store Type` varchar(20) DEFAULT NULL,
`TV Region` varchar(12) DEFAULT NULL,
`Pharmacy` varchar(3) DEFAULT NULL,
`Optician` varchar(3) DEFAULT NULL,
`Home Shopping` varchar(3) DEFAULT NULL,
`Retailer` varchar(15) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
ALTER TABLE `store_formats`
ADD PRIMARY KEY (`id`),
ADD KEY `Store Nbr` (`Store Nbr`),
ADD KEY `Store Name` (`Store Name`),
ADD KEY `Format Name` (`Format Name`);
CAST('2016-07-14' AS DATETIME) -- the CAST is not needed; '2016-07-14' works fine. (Especially since you are comparing against a DATE.)
IN ( SELECT ... ) is inefficient. Change to a JOIN.
On eds_stock, instead of
INDEX(`Prime Item Nbr`)
have these two:
INDEX(`Prime Item Nbr`, `Date`)
INDEX(`Prime Item Nbr`, `Curr Traited Store/Item Comb.`, `Date`)
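In ALTER TABLE form (a sketch; the new index names are placeholders):
ALTER TABLE `eds_stock`
  DROP INDEX `Prime Item Nbr`,
  ADD INDEX `prime_date` (`Prime Item Nbr`, `Date`),
  ADD INDEX `prime_traited_date` (`Prime Item Nbr`, `Curr Traited Store/Item Comb.`, `Date`);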
INT is always a 4-byte number, even if you say int(2). Consider switching to TINYINT UNSIGNED (and other sizes of INT).
float(4,2) -- Do not use (m,n); it causes an extra rounding and may cause undesired truncation. Either use DECIMAL(4,2) (for money), or plain FLOAT.
Bug?? Did you really want 8 days, not just a week in
AND `Date` BETWEEN CAST('2016-07-14' AS DATETIME) AND CAST('2016-07-21' AS DATETIME)
I like this pattern:
AND `Date` >= '2016-07-14'
AND `Date` < '2016-07-14' + INTERVAL 1 WEEK
Instead of two selects
SELECT count(*) AS rangedandselling
FROM ( SELECT DISTINCT `store_formats`.`Store Name` ...
One select will probably work (and be faster):
SELECT COUNT(DISTINCT `store_formats`.`Store Name`) AS rangedandselling ...
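Putting those suggestions together (IN turned into a JOIN, no CAST, the half-open date range, and a single SELECT), here is a rough, untested sketch of the whole query; it assumes Store Nbr and Store Name map 1:1 in store_formats, as the original query implies:
SELECT COUNT(DISTINCT sf.`Store Name`) AS rangedandselling
FROM store_formats sf
JOIN eds_sales sa
  ON sa.`Store Nbr` = sf.`Store Nbr`
 AND sa.`Prime Item Nbr` = 4
 AND sa.`Date` >= '2016-07-14'
 AND sa.`Date` <  '2016-07-14' + INTERVAL 1 WEEK
JOIN eds_stock st
  ON st.`Store Nbr` = sf.`Store Nbr`
 AND st.`Prime Item Nbr` = 4
 AND st.`Curr Traited Store/Item Comb.` = 1
 AND st.`Date` >= '2016-07-14'
 AND st.`Date` <  '2016-07-14' + INTERVAL 1 WEEK
WHERE sf.`Format Name` IN ('format1', 'format2');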
Once you have cleaned up most of that, we can get back to your question about 'wrong index', if there is still an issue. (Please start a new Question if you need further help.)

Unique records in MySQL one-to-many join without DISTINCT or GROUP BY

Here's the basic query:
SELECT
some_columns
FROM
d
JOIN
m ON d.id = m.d_id
JOIN
s ON s.id = m.s_id
JOIN
p ON p.id = s.p_id
WHERE
some_criteria
ORDER BY
d.date DESC
LIMIT 25
Table m can contain multiple s_ids per each d_id. Here's a super simple example:
+--------+--------+------+
| id | d_id | s_id |
+--------+--------+------+
| 317354 | 291220 | 642 |
| 317355 | 291220 | 32 |
+--------+--------+------+
2 rows in set (0.00 sec)
Which we want. But, obviously, it's producing duplicate d records in this particular query.
These tables have lots of columns, and I need to edit these down due to the sensitive nature of the data, but here's the basic structure as it pertains to this query:
| d | CREATE TABLE `d` (
`id` bigint(22) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
KEY `date` (`date`)
) ENGINE=InnoDB |
| m | CREATE TABLE `m` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`d_id` bigint(20) NOT NULL,
`s_id` bigint(20) NOT NULL,
`is_king` binary(1) DEFAULT '0',
PRIMARY KEY (`id`),
KEY `d_id` (`d_id`),
KEY `is_king` (`is_king`),
KEY `s_id` (`s_id`)
) ENGINE=InnoDB |
| s | CREATE TABLE `s` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`p_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `p_id` (`p_id`)
) ENGINE=InnoDB |
| p | CREATE TABLE `p` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`)
) ENGINE=InnoDB |
Now, previously, we had a GROUP BY d.id in place to grab uniques. The data here are now huge, so we can no longer realistically do that. SELECT DISTINCT d.id is even slower.
Any ideas? Everything I come up with creates a problem elsewhere.
Does changing "JOIN m ON d.id = m.d_id" to "LEFT JOIN m ON d.id = m.d_id" accomplish what you're looking for here?
I'm not sure I understand your goal clearly, but "table m contains many rows per each d" immediately has me wondering if you should be using some other type of join to accomplish your ends.
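If the selected columns all come from d, one pattern that avoids the duplicate d rows without DISTINCT or GROUP BY is a semi-join via EXISTS, so each d row is emitted at most once. A sketch (not from the thread; some_columns and some_criteria stand in for the real ones, as in the question):
SELECT some_columns
FROM d
WHERE some_criteria          -- criteria on d itself
  AND EXISTS (
        SELECT 1
        FROM m
        JOIN s ON s.id = m.s_id
        JOIN p ON p.id = s.p_id
        WHERE m.d_id = d.id
          AND some_criteria  -- criteria that involve m/s/p
      )
ORDER BY d.date DESC
LIMIT 25;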

MySQL update column based on multiple rows of another table

I have two tables, dma_projects and projectsteps:
dma_projects has the following fields:
projectID
projectName
projectInstructions
CREATE TABLE IF NOT EXISTS `dma_projects` (
`projectID` int(11) NOT NULL DEFAULT '0',
`projectName` varchar(100) DEFAULT NULL,
`projectDescription` text,
`projectImage` varchar(255) DEFAULT NULL,
`projectThumb` varchar(255) DEFAULT NULL,
`projectCategory` varchar(50) DEFAULT NULL,
`projectTheme` varchar(50) DEFAULT NULL,
`projectInstructions` text,
`projectAuthorID` int(11) DEFAULT NULL,
`projectViews` int(11) DEFAULT NULL,
`projectDifficulty` varchar(20) DEFAULT NULL,
`projectTimeNeeded` varchar(40) DEFAULT NULL,
`projectDateAdded` int(11) DEFAULT NULL,
`projectStatus` varchar(20) DEFAULT NULL,
`projectVisible` varchar(1) DEFAULT NULL,
PRIMARY KEY (`projectID`),
KEY `user_id` (`projectAuthorID`),
KEY `date` (`projectDateAdded`),
FULLTEXT KEY `image` (`projectImage`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
projectsteps has:
stepID
stepno
stepdesc
project_id
CREATE TABLE IF NOT EXISTS `projectsteps` (
`projectStep_id` int(11) NOT NULL AUTO_INCREMENT,
`stepno` int(11) DEFAULT '0',
`stepdesc` text CHARACTER SET utf8 COLLATE utf8_bin,
`project_id` int(11) NOT NULL,
PRIMARY KEY (`projectStep_id`)
)
I want to update dma_projects.projectInstructions with the values of any rows in the projectsteps table that have the same projectID.
I.e.
if projectID 1 in dma_projects has 5 records in projectsteps, the stepdesc values of those five records should be joined together (separated by a <br/>) and written to the projectInstructions field of the dma_projects table.
I'm scratching my head as to how to write the query. Here is where I am so far, but I can't get it working. The error it gives is:
Unknown column 'projectsteps.stepno' in 'field list'
Here is the query:
UPDATE `dma_projects` t1
SET t1.`projectInstructions` =
(
SELECT
`projectsteps`.`stepno`,
group_concat(`projectsteps`.`stepdesc` separator '<br/>')
FROM `projectsteps`
AS somevar
INNER JOIN `projectsteps` t2
ON t1.projectID=t2.project_id
ORDER BY t2.stepno ASC
)
UPDATED
UPDATE dma_projects p JOIN
(
SELECT project_id, GROUP_CONCAT(CONCAT('<li>', stepdesc, '</li>') SEPARATOR '') instructions
FROM
(
SELECT project_id, stepdesc
FROM projectsteps
ORDER BY project_id, stepdesc
) a
GROUP BY project_id
) d ON d.project_id = p.projectid
SET p.projectInstructions = d.instructions
Sample output:
| PROJECTID | PROJECTNAME | PROJECTINSTRUCTIONS |
------------------------------------------------------------------------------------------
| 1 | project1 | <li>step11</li><li>step12</li><li>step13</li><li>step14</li> |
| 2 | project1 | <li>step21</li><li>step22</li><li>step23</li> |
Here is SQLFiddle demo