I am sure this question has already been answered; I just don't know what to search for.
What I've got are price lists that update every month, and I am looking for a query that lists all items (tnr) whose price increases by over 20%.
In this case I'd like to get these "tnr" values:
136234194430
832124069830
183078059150
I could loop through all items, but I know there is a smarter, faster, more elegant way to do it.
The dummy table to test stuff on
CREATE TABLE `pricelist` (
`tnr` bigint(64) NOT NULL,
`price` double NOT NULL,
`discount` int(8) NOT NULL,
`date` date NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
INSERT INTO `pricelist` (`tnr`, `price`, `discount`, `date`) VALUES
(183078059150, 33.89, 26, '2018-08-01'),
(514780535750, 78.73, 19, '2018-08-01'),
(121475122010, 521.54, 16, '2018-08-01'),
(726576581300, 168.36, 10, '2018-08-01'),
(832124069830, 22.69, 50, '2018-08-01'),
(342122275110, 131.5, 26, '2018-08-01'),
(345067567690, 6.34, 26, '2018-08-01'),
(121113618790, 195.5, 16, '2018-08-01'),
(511681969780, 291.74, 23, '2018-08-01'),
(411372385450, 129.75, 23, '2018-08-01'),
(15097806600, 46.68, 19, '2018-08-01'),
(613592995940, 259.47, 19, '2018-08-01'),
(135414163780, 17, 19, '2018-08-01'),
(726076671410, 68.91, 11, '2018-08-01'),
(136234194430, 36.86, 23, '2018-08-01'),
(541122685800, 10.25, 16, '2018-08-01'),
(514722202230, 83.19, 23, '2018-08-01'),
(125177976530, 257.12, 26, '2018-08-01'),
(114377922120, 19.18, 23, '2018-08-01'),
(642169317400, 2.54, 26, '2018-08-01'),
(14085256200, 16.44, 14, '2018-08-01'),
(114313045460, 22.46, 16, '2018-08-01'),
(331014284930, 1042.02, 19, '2018-08-01'),
(183078059150, 53.89, 26, '2018-09-01'),
(514780535750, 78.73, 19, '2018-09-01'),
(121475122010, 521.54, 16, '2018-09-01'),
(726576581300, 168.36, 10, '2018-09-01'),
(832124069830, 42.69, 50, '2018-09-01'),
(342122275110, 131.5, 26, '2018-09-01'),
(345067567690, 6.34, 26, '2018-09-01'),
(121113618790, 195.5, 16, '2018-09-01'),
(511681969780, 291.74, 23, '2018-09-01'),
(411372385450, 129.75, 23, '2018-09-01'),
(15097806600, 46.68, 19, '2018-09-01'),
(613592995940, 259.47, 19, '2018-09-01'),
(135414163780, 17, 19, '2018-09-01'),
(726076671410, 68.91, 11, '2018-09-01'),
(136234194430, 66.86, 23, '2018-09-01'),
(541122685800, 10.25, 16, '2018-09-01'),
(514722202230, 83.19, 23, '2018-09-01'),
(125177976530, 257.12, 26, '2018-09-01'),
(114377922120, 19.18, 23, '2018-09-01'),
(642169317400, 2.54, 26, '2018-09-01'),
(14085256200, 16.44, 14, '2018-09-01'),
(114313045460, 22.46, 16, '2018-09-01'),
(331014284930, 1042.02, 19, '2018-09-01');
Thanks a lot
Determine a "forward price" for each tnr and date (the new price on the next date), using a correlated subquery.
Then, using a derived table and grouping on tnr, keep only (with a HAVING clause) the tnr values whose "growth" is greater than 20%.
Use the following query (SQL Fiddle Demo):
SELECT inner_nest.tnr,
       MAX(100 * (inner_nest.forward_price - inner_nest.price) / inner_nest.price) AS growth
FROM
(
SELECT t1.*, (SELECT t2.price
FROM pricelist AS t2
WHERE t2.tnr = t1.tnr
AND t2.date > t1.date
ORDER BY t2.date ASC LIMIT 1) AS forward_price
FROM pricelist AS t1
) AS inner_nest
GROUP BY inner_nest.tnr
HAVING growth > 20
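If you are on MySQL 8+ (or another engine with window functions), the same "forward price" can be computed with LEAD() instead of a correlated subquery. Below is a sketch of that idea; it runs on SQLite through Python with a reduced copy of the dummy data, purely for illustration, not the original schema or engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pricelist (tnr INTEGER, price REAL, discount INTEGER, date TEXT)")
# A reduced copy of the dummy data: two months per tnr.
con.executemany("INSERT INTO pricelist VALUES (?, ?, ?, ?)", [
    (183078059150, 33.89, 26, '2018-08-01'), (183078059150, 53.89, 26, '2018-09-01'),
    (136234194430, 36.86, 23, '2018-08-01'), (136234194430, 66.86, 23, '2018-09-01'),
    (514780535750, 78.73, 19, '2018-08-01'), (514780535750, 78.73, 19, '2018-09-01'),
])

# LEAD() pairs each row with the price on the next date for the same tnr,
# replacing the correlated subquery of the original answer.
query = """
SELECT tnr, growth
FROM (
    SELECT tnr,
           100.0 * (LEAD(price) OVER (PARTITION BY tnr ORDER BY date) - price)
               / price AS growth
    FROM pricelist
)
WHERE growth > 20
"""
result = sorted(row[0] for row in con.execute(query))
# -> [136234194430, 183078059150]
```

The correlated-subquery version above remains the portable choice for MySQL 5.x, which has no window functions.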
I am dealing with the post-processing of CSV logs arranged in a multi-column format in the following order: the first column corresponds to the line number (ID), the second contains its population (POP, the number of samples that fell into this ID), and the third column (dG) represents some inherent value of this ID (which is always negative):
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
27, 1, -6.6500
28, 5, -6.5700
29, 3, -6.5500
30, 2, -6.4600
31, 2, -6.4500
32, 1, -6.3000
33, 7, -6.2900
34, 1, -6.2100
35, 1, -6.2000
36, 3, -6.1800
37, 1, -6.1700
38, 4, -6.1300
39, 1, -6.1000
40, 2, -6.0600
41, 3, -6.0600
42, 8, -6.0200
43, 2, -6.0100
44, 1, -6.0100
45, 1, -5.9800
46, 2, -5.9700
47, 1, -5.9300
48, 6, -5.8800
49, 4, -5.8300
50, 4, -5.8000
51, 2, -5.7800
52, 3, -5.7200
53, 1, -5.6600
54, 1, -5.6500
55, 4, -5.6400
56, 2, -5.6300
57, 1, -5.5700
58, 1, -5.5600
59, 1, -5.5200
60, 1, -5.5000
61, 3, -5.4200
62, 4, -5.3600
63, 1, -5.3100
64, 5, -5.2500
65, 5, -5.1600
66, 1, -5.1100
67, 1, -5.0300
68, 1, -4.9700
69, 1, -4.7700
70, 2, -4.6600
In order to reduce the number of lines, I filtered this CSV searching for the line with the highest number in the second column (POP), using the following AWK expression:
# search CSV for the line with the highest POP and save all lines before it, while keeping a minimal number of lines (3) in case this line is found at the beginning of the CSV.
awk -v min_lines=3 -F ", " 'a < $2 {for(idx=0; idx < i; idx++) {print arr[idx]} print $0; a=int($2); i=0; printed=NR} a > $2 && NR > 1 {arr[i]=$0; i++}END{if(printed <= min_lines) {for(idx = 0; idx <= min_lines - printed; idx++){print arr[idx]}}}' input.csv > output.csv
thus obtaining the following reduced output CSV, which still has many lines since the matching line (with the highest POP) is located on the 26th line:
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
How would it be possible to further customize my filter by modifying my AWK expression (or piping it to something else) so that it additionally considers only the lines whose negative value in the third column, dG, differs little from the first line (which has the most negative value)? For example, to consider only the lines that differ by no more than 20% in dG from the first line, while keeping all the other conditions the same:
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
Both tasks can be done in a single awk:
awk -F ', ' 'NR==1 {next} FNR==NR {if (max < $2) {max=$2; n=FNR}; if (FNR==2) dg = $3 * .8; next} $3+0 == $3 && (FNR == n+1 || $3 > dg) {exit} 1' file file
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
To make it more readable:
awk -F ', ' '
NR == 1 {
next
}
FNR == NR {
if (max < $2) {
max = $2
n = FNR
}
if (FNR == 2)
dg = $3 * .8
next
}
$3 + 0 == $3 && (FNR == n+1 || $3 > dg) {
exit
}
1' file file
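For readers more comfortable with Python than with awk one-liners, the same two stop conditions (stop once the max-POP line has been printed, or once dG drifts more than 20% away from the first data row's dG) can be sketched as a small function. The helper name is made up for illustration:

```python
def filter_rows(lines):
    """Keep the header, then data rows up to (and including) the max-POP row,
    but stop earlier once dG deviates more than 20% from the first row's dG."""
    header, data = lines[0], lines[1:]
    rows = [(int(i), int(p), float(g)) for i, p, g in
            (line.split(", ") for line in data)]
    max_idx = max(range(len(rows)), key=lambda k: rows[k][1])
    cutoff = rows[0][2] * 0.8          # dG is negative, so 0.8 * dG is the 20% bound
    out = [header]
    for k, (i, pop, dg) in enumerate(rows):
        if k > max_idx or dg > cutoff:  # past the max-POP row, or dG too far from line 1
            break
        out.append(f"{i}, {pop}, {dg:.4f}")
    return out
```

Unlike the awk version, this reads the whole file into memory, which is fine for CSVs of this size.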
I am dealing with the post-processing of a multi-column CSV arranged in a fixed format: the first column corresponds to the line number (ID), the second contains its population (POP, the number of samples that fell into this ID), and the third column (dG) represents some inherent value of this ID (always negative):
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
27, 1, -6.6500
28, 5, -6.5700
29, 3, -6.5500
30, 2, -6.4600
31, 2, -6.4500
32, 1, -6.3000
33, 7, -6.2900
34, 1, -6.2100
35, 1, -6.2000
36, 3, -6.1800
37, 1, -6.1700
38, 4, -6.1300
39, 1, -6.1000
40, 2, -6.0600
41, 3, -6.0600
42, 8, -6.0200
43, 2, -6.0100
44, 1, -6.0100
45, 1, -5.9800
46, 2, -5.9700
47, 1, -5.9300
48, 6, -5.8800
49, 4, -5.8300
50, 4, -5.8000
51, 2, -5.7800
52, 3, -5.7200
53, 1, -5.6600
54, 1, -5.6500
55, 4, -5.6400
56, 2, -5.6300
57, 1, -5.5700
58, 1, -5.5600
59, 1, -5.5200
60, 1, -5.5000
61, 3, -5.4200
62, 4, -5.3600
63, 1, -5.3100
64, 5, -5.2500
65, 5, -5.1600
66, 1, -5.1100
67, 1, -5.0300
68, 1, -4.9700
69, 1, -4.7700
70, 2, -4.6600
In order to reduce the number of lines, I filtered this CSV searching for the line with the highest number in the second column (POP), using the following AWK expression:
# search CSV for the line with the highest POP and save all lines before it, while keeping a minimal number of lines (3) in case this line is found at the beginning of the CSV.
awk -v min_lines=3 -F ", " 'a < $2 {for(idx=0; idx < i; idx++) {print arr[idx]} print $0; a=int($2); i=0; printed=NR} a > $2 && NR > 1 {arr[i]=$0; i++}END{if(printed <= min_lines) {for(idx = 0; idx <= min_lines - printed; idx++){print arr[idx]}}}' input.csv > output.csv
For the simple case, when the line with the maximum POP is located on the first line, the script saves that line (POP max) plus the 2 lines after it (= min_lines = 3).
For the more complicated case, when the line with the maximum POP is located in the middle of the CSV, the script detects that line plus all the preceding lines from the beginning of the CSV and lists them in the new CSV, keeping the original order. However, in that case output.csv would contain too many lines, since the matching line (with the highest POP) is located on the 26th line:
ID, POP, dG
1, 7, -9.6000
2, 3, -8.7700
3, 6, -8.6200
4, 4, -8.2700
5, 6, -8.0800
6, 10, -8.0100
7, 9, -7.9700
8, 8, -7.8400
9, 16, -7.8100
10, 2, -7.7000
11, 1, -7.5600
12, 2, -7.5200
13, 9, -7.5100
14, 1, -7.5000
15, 2, -7.4200
16, 1, -7.3300
17, 1, -7.1700
18, 4, -7.1300
19, 3, -6.9200
20, 1, -6.9200
21, 2, -6.9100
22, 2, -6.8500
23, 10, -6.6900
24, 2, -6.6800
25, 1, -6.6600
26, 20, -6.6500
In order to reduce the output CSV to about 3-5 lines, how would it be possible to customize my filter so that it saves only the lines with a minor difference, e.g. the values in the POP column should satisfy POP > 0.5 * max(POP), comparing each line with the line having the biggest value in the POP column? Finally, I always need to keep the first line as well as the line with the maximal value in the output. So the AWK solution should filter the multi-line CSV in the following manner (please ignore the comments after #):
ID, POP, dG
1, 7, -9.6000
9, 16, -7.8100
26, 20, -6.6500 # this is POP max detected over all lines
This two-phase awk should work for you:
awk -F ', ' -v n=2 'NR == 1 {next}
FNR==NR { if (max < $2) {max=$2; if (FNR==n) n++} next}
FNR <= n || $2 > (.5 * max)' file file
ID, POP, dG
1, 7, -9.6000
9, 16, -7.8100
26, 20, -6.6500
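The selection rule (keep the header, always the first data row, and every row whose POP exceeds half the maximum POP) can also be sketched in Python; the helper name is hypothetical, and like the awk it makes two passes over the data:

```python
def filter_by_pop(lines):
    """Keep the header, always the first data row, and every row whose POP
    exceeds half of the maximum POP (two passes, like the awk version)."""
    header, data = lines[0], lines[1:]
    # First pass: find the maximum POP (second column).
    pops = [int(line.split(", ")[1]) for line in data]
    threshold = 0.5 * max(pops)
    # Second pass: keep the first data row plus every row above the threshold.
    return [header] + [line for k, line in enumerate(data)
                       if k == 0 or pops[k] > threshold]
```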
I need to find the sum of a specific column for rows where the values of several other columns match in a database table.
Please check the MySQL table that I use:
CREATE TABLE IF NOT EXISTS `plant_production_items` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`plant_production_id` int(11) NOT NULL,
`materialid` int(11) NOT NULL,
`packaging_id` int(11) NOT NULL,
`grade_id` int(11) NOT NULL,
`slabs` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `material_purchase_id` (`plant_production_id`),
KEY `grade_id` (`grade_id`),
KEY `packaging_id` (`packaging_id`),
KEY `slabs` (`slabs`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=104 ;
The values are as follows:
INSERT INTO `plant_production_items` (`id`, `plant_production_id`, `materialid`, `packaging_id`, `grade_id`, `slabs`) VALUES
(5, 4, 22, 85, 29, 4444),
(6, 5, 22, 14, 25, 3234),
(8, 6, 27, 21, 60, 4444),
(11, 8, 22, 85, 29, 44444),
(19, 7, 22, 84, 29, 434),
(75, 10, 26, 0, 51, 1233),
(76, 10, 24, 17, 34, 251),
(78, 10, 26, 0, 46, 3234),
(91, 9, 27, 21, 57, 1000),
(92, 9, 27, 21, 57, 2000),
(93, 3, 23, 16, 32, 5000),
(94, 3, 27, 21, 54, 3233),
(101, 3, 27, 21, 0, 700),
(103, 3, 29, 27, 0, 6666);
I want the total sum of the slabs column for each unique combination of values found in the following columns:
plant_production_id
materialid
packaging_id
grade_id
In short, we need to find each combination of the above 4 values and show the total sum of the slabs column for it.
For example, there are two records which are the same:
(91, 9, 27, 21, 57, 1000),
(92, 9, 27, 21, 57, 2000),
so here I want to get the total sum, i.e. 1000 + 2000 = 3000.
The output should contain all the columns together with the total slabs. Rows that have no duplicates should still appear, with their own slabs value as the total; we need the total slabs for every group of records that share the same values in the above 4 columns.
If still not clear then let me know.
You can use GROUP BY like this:
SELECT
    plant_production_id,
    materialid,
    packaging_id,
    grade_id,
    SUM(`slabs`) AS slabs_sum
FROM plant_production_items
GROUP BY plant_production_id, materialid, packaging_id, grade_id;
This gives the sum of slabs for each group of rows sharing the same values in the grouped-by columns.
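To see the grouping collapse the two matching rows from the question into one sum, here is a minimal runnable check; it uses SQLite through Python in place of MySQL, purely for illustration, with a reduced copy of the data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE plant_production_items
               (id INTEGER, plant_production_id INTEGER, materialid INTEGER,
                packaging_id INTEGER, grade_id INTEGER, slabs INTEGER)""")
# Rows 91 and 92 share the same (plant_production_id, materialid,
# packaging_id, grade_id) combination; row 93 does not.
con.executemany("INSERT INTO plant_production_items VALUES (?,?,?,?,?,?)", [
    (91, 9, 27, 21, 57, 1000),
    (92, 9, 27, 21, 57, 2000),
    (93, 3, 23, 16, 32, 5000),
])
rows = con.execute("""
    SELECT plant_production_id, materialid, packaging_id, grade_id,
           SUM(slabs) AS slabs_sum
    FROM plant_production_items
    GROUP BY plant_production_id, materialid, packaging_id, grade_id
    ORDER BY plant_production_id
""").fetchall()
# The two rows sharing (9, 27, 21, 57) collapse into one row with slabs_sum 3000.
```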
I have a query that tries to find all shopping carts containing a set of given packages.
For each package I join the corresponding cartitem table once, because I am only interested in carts containing all given packages.
When I reach more than 15 packages (joins), the query performance drops rapidly.
I have two indexes on the corresponding foreign-key columns and am aware that MySQL uses only one of them per table. When I add a composite index over the two columns (cartitem_package_id, cartitem_cart_id) it works, but is this the only way to solve the situation?
I would like to know why MySQL suddenly gets stuck in this situation and what the internal problem may be, because I do not see any deeper problem with this schema and query. Could this be an issue with the query optimizer, and can I do something (e.g. adding brackets) to support or force a specific execution plan? Or does anyone have a different approach, using another query?
The query looks something like this:
SELECT cart_id
FROM cart
INNER JOIN cartitem as c1 ON cart_id=c1.cartitem_cart_id AND c1.cartitem_package_id= 7
INNER JOIN cartitem as c2 ON cart_id=c2.cartitem_cart_id AND c2.cartitem_package_id= 8
INNER JOIN cartitem as c3 ON cart_id=c3.cartitem_cart_id AND c3.cartitem_package_id= 9
INNER JOIN cartitem as c4 ON cart_id=c4.cartitem_cart_id AND c4.cartitem_package_id= 10
INNER JOIN cartitem as c5 ON cart_id=c5.cartitem_cart_id AND c5.cartitem_package_id= 11
INNER JOIN cartitem as c6 ON cart_id=c6.cartitem_cart_id AND c6.cartitem_package_id= 12
INNER JOIN cartitem as c7 ON cart_id=c7.cartitem_cart_id AND c7.cartitem_package_id= 13
INNER JOIN cartitem as c8 ON cart_id=c8.cartitem_cart_id AND c8.cartitem_package_id= 14
INNER JOIN cartitem as c9 ON cart_id=c9.cartitem_cart_id AND c9.cartitem_package_id= 15
INNER JOIN cartitem as c10 ON cart_id=c10.cartitem_cart_id AND c10.cartitem_package_id= 16
INNER JOIN cartitem as c11 ON cart_id=c11.cartitem_cart_id AND c11.cartitem_package_id= 17
INNER JOIN cartitem as c12 ON cart_id=c12.cartitem_cart_id AND c12.cartitem_package_id= 18
INNER JOIN cartitem as c13 ON cart_id=c13.cartitem_cart_id AND c13.cartitem_package_id= 19
INNER JOIN cartitem as c14 ON cart_id=c14.cartitem_cart_id AND c14.cartitem_package_id= 20
INNER JOIN cartitem as c15 ON cart_id=c15.cartitem_cart_id AND c15.cartitem_package_id= 21
INNER JOIN cartitem as c16 ON cart_id=c16.cartitem_cart_id AND c16.cartitem_package_id= 22
INNER JOIN cartitem as c17 ON cart_id=c17.cartitem_cart_id AND c17.cartitem_package_id= 23
Output:
No result.
Consider the following sample structure:
CREATE TABLE IF NOT EXISTS `cart` (
`cart_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`cart_state` smallint(20) DEFAULT NULL,
PRIMARY KEY (`cart_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=80 ;
INSERT INTO `cart` (`cart_id`, `cart_state`) VALUES
(1, 0),(2, 5),(3, 0),(4, 0),(5, 0),(6, 0),(7, 0),(8, 0),(9, 0),(10, 0),(11, 0),(12, 0),(13, 0),(14, 5),(15, 5),(16, 10),(17, 0),(18, 10),(19, 40),(20, 10),(21, 5),(22, 0),(23, 10),(24, 10),(25, 0),(26, 10),(27, 5),(28, 5),(29, 0),(30, 5),(31, 0),(32, 0),(33, 0),(34, 0),(35, 0),(36, 0),(37, 0),(38, 0),(39, 0),(40, 0),(41, 0),(42, 0),(43, 0),(44, 0),(45, 40),(46, 0),(47, 0),(48, 1),(49, 0),(50, 5),(51, 0),(52, 0),(53, 5),(54, 5),(55, 0),(56, 0),(57, 10),(58, 0),(59, 0),(60, 5),(61, 0),(62, 0),(63, 10),(64, 0),(65, 5),(66, 5),(67, 10),(68, 10),(69, 0),(70, 0),(71, 10),(72, 0),(73, 10),(74, 0),(75, 10),(76, 0),(77, 10),(78, 0),(79, 10);
CREATE TABLE IF NOT EXISTS `cartitem` (
`cartitem_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`cartitem_package_id` int(10) unsigned DEFAULT NULL,
`cartitem_cart_id` int(10) unsigned DEFAULT NULL,
`cartitem_price` decimal(7,2) NOT NULL DEFAULT '0.00',
PRIMARY KEY (`cartitem_id`),
KEY `cartitem_package_id` (`cartitem_package_id`),
KEY `cartitem_cart_id` (`cartitem_cart_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=89 ;
INSERT INTO `cartitem` (`cartitem_id`, `cartitem_package_id`, `cartitem_cart_id`, `cartitem_price`) VALUES
(1, 4, 2, 200.00),(2, 7, 3, 30.00),(3, 14, 9, 255.00),(4, 14, 9, 255.00),(5, 22, 9, 120.00),(6, 22, 9, 120.00),(7, 13, 13, 300.00),(8, 13, 13, 300.00),(9, 7, 14, 450.00),(10, 13, 14, 250.00),(11, 17, 14, 150.00),(12, 7, 15, 450.00),(13, 13, 15, 250.00),(14, 18, 15, 127.50),(15, 7, 16, 450.00),(16, 17, 16, 150.00),(17, 7, 18, 450.00),(18, 7, 19, 450.00),(19, 17, 19, 150.00),(20, 21, 19, 25.00),(21, 13, 20, 300.00),(22, 7, 21, 550.00),(23, 19, 21, 105.00),(24, 22, 21, 120.00),(25, 17, 22, 150.00),(26, 7, 23, 550.00),(27, 11, 24, 245.00),(31, 7, 26, 450.00),(32, 21, 26, 25.00),(33, 21, 26, 25.00),(34, 22, 26, 120.00),(35, 23, 26, 120.00),(36, 10, 27, 382.50),(37, 22, 27, 120.00),(38, 13, 27, 250.00),(39, 10, 28, 297.50),(43, 7, 29, 550.00),(41, 20, 28, 82.50),(42, 22, 28, 120.00),(44, 7, 30, 550.00),(46, 22, 30, 120.00),(47, 23, 30, 120.00),(48, 21, 18, 25.00),(49, 21, 19, 25.00),(50, 17, 37, 150.00),(51, 17, 37, 150.00),(52, 21, 37, 25.00),(53, 21, 37, 25.00),(54, 4, 45, 1.20),(55, 6, 45, 0.00),(56, 7, 47, 450.00),(57, 4, 50, 200.00),(58, 13, 52, 250.00),(59, 13, 19, 300.00),(60, 9, 19, 0.00),(61, 17, 53, 150.00),(62, 7, 53, 450.00),(63, 22, 18, 120.00),(64, 7, 16, 450.00),(65, 7, 54, 450.00),(66, 7, 57, 450.00),(67, 17, 57, 150.00),(68, 7, 56, 450.00),(69, 17, 59, 150.00),(70, 7, 60, 450.00),(71, 17, 61, 150.00),(72, 17, 63, 150.00),(73, 21, 65, 25.00),(74, 7, 66, 450.00),(75, 7, 67, 450.00),(76, 11, 68, 385.00),(77, 7, 71, 450.00),(78, 11, 73, 385.00),(79, 13, 73, 300.00),(80, 4, 75, 200.00),(82, 7, 73, 30.00),(83, 18, 73, 127.50),(84, 23, 73, 120.00),(85, 7, 73, 30.00),(86, 10, 77, 382.50),(87, 7, 79, 550.00),(88, 17, 79, 150.00);
The query given above was a possible edge case that happens to return no results with this example data; a smaller variant does return rows:
SELECT cart_id
FROM cart
INNER JOIN cartitem as c1 ON cart_id=c1.cartitem_cart_id AND c1.cartitem_package_id= 7
INNER JOIN cartitem as c3 ON cart_id=c3.cartitem_cart_id AND c3.cartitem_package_id= 9
INNER JOIN cartitem as c4 ON cart_id=c4.cartitem_cart_id AND c4.cartitem_package_id= 13
INNER JOIN cartitem as c5 ON cart_id=c5.cartitem_cart_id AND c5.cartitem_package_id= 17
INNER JOIN cartitem as c6 ON cart_id=c6.cartitem_cart_id AND c6.cartitem_package_id= 21
Output:
cart_id
-------------
19
19
The query should return all carts containing items connected to the packages (7, 9, 13, 17, 21) in this case.
My approach to your problem would be:
SELECT
cart_id
FROM
cart
INNER JOIN
cartitem
ON
cart_id = cartitem_cart_id
WHERE
cartitem_package_id IN (7,9,13,17,21) -- items that got to be in the cart
GROUP BY
cart_id
HAVING
count(distinct cartitem_package_id) = 5 -- number of different packages
;
DEMO with your data
Explanation
The principle is to filter first with the list of the desired values, here your packages. Then count the distinct packages per cart (GROUP BY cart_id). If this count matches the number of values in your filter list, every single package must be in that cart.
You can replace the value list of the IN clause with a subselect, if you get those values from a subquery.
As you can see, this approach is easy to adapt to similar needs.
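The counting trick above is sometimes called relational division: filter to the wanted packages, then demand that a cart covers all of them. This sketch verifies it on a reduced, made-up data set using SQLite through Python, and also shows building the IN list from bound placeholders:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cartitem (cartitem_cart_id INTEGER, cartitem_package_id INTEGER)")
# Made-up rows: cart 19 holds all five packages, cart 14 only three of them.
con.executemany("INSERT INTO cartitem VALUES (?, ?)", [
    (19, 7), (19, 9), (19, 13), (19, 17), (19, 21),
    (14, 7), (14, 13), (14, 17),
])

wanted = (7, 9, 13, 17, 21)
placeholders = ",".join("?" * len(wanted))      # "?,?,?,?,?"
carts = [r[0] for r in con.execute(f"""
    SELECT cartitem_cart_id
    FROM cartitem
    WHERE cartitem_package_id IN ({placeholders})
    GROUP BY cartitem_cart_id
    HAVING COUNT(DISTINCT cartitem_package_id) = ?
""", wanted + (len(wanted),))]
# Only cart 19 covers every wanted package.
```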
I have a problem: I have a query that simply displays the user ids in a set. To retrieve the user ids I call a function, and it gives me the list below as a single string.
SELECT u.user_id
FROM user u
WHERE u.user_id
IN (
'2, 3, 4, 5, 6, 7, 22, 33, 44, 55, 66, 77, 13, 23, 43, 53, 63, 73'
)
but when I execute this query it displays only the first user_id, i.e. 2, even though all the user ids are present in the database.
So any help is deeply appreciated
Your code:
SELECT u.user_id FROM user u WHERE u.user_id IN ( '2, 3, 4, 5, 6, 7, 22, 33, 44, 55, 66, 77, 13, 23, 43, 53, 63, 73' )
http://forums.mysql.com/read.php?10,217174
I'm almost surprised you had any match: MySQL casts the whole string to a number, which yields 2, hence the single row. Each of the numbers in your IN list needs to be an individual value, e.g. '1','2','3' etc.
Remove the single quotes like this and try the code again:
SELECT u.user_id FROM user u WHERE u.user_id IN ( 2, 3, 4, 5, 6, 7, 22, 33, 44, 55, 66, 77, 13, 23, 43, 53, 63, 73);
Remove the single quotes:
IN ( 2, 3, 4, 5, 6, 7, 22, 33, 44, 55, 66, 77, 13, 23, 43, 53, 63, 73 )
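Since the function hands you one single string, the robust fix in application code is to split it into real values and bind them as individual parameters rather than pasting the string into the SQL. A sketch (SQLite through Python, with an illustrative shortened id list):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (user_id INTEGER)")
con.executemany("INSERT INTO user VALUES (?)", [(i,) for i in (2, 3, 4, 99)])

id_string = '2, 3, 4, 5'                       # what the function returns: one string
ids = [int(s) for s in id_string.split(',')]   # turn it into real numbers
placeholders = ', '.join('?' * len(ids))       # one ? per id, safely bound
found = [r[0] for r in con.execute(
    f"SELECT user_id FROM user WHERE user_id IN ({placeholders}) ORDER BY user_id",
    ids)]
# With individual values bound, 2, 3 and 4 all match, not just the leading 2.
```

In MySQL itself, FIND_IN_SET(u.user_id, '2,3,4') can also match a user_id against a comma-separated string (note: no spaces after the commas), though it cannot use an index on user_id.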