I have a MySQL database with:
- a table of parcels which need to be sent to people (here 16,000 records), with indexes on account_no and service
- a table of price rates (500,000 records) - the rate depends on delivery area, customer price rate and type of service (e.g. next day), with indexes on area, price rate and service
- a table of the first part of the postcode (or zip code), which gives the area (3,000 records)
- a table of customer accounts, containing the price rate (1,600 records), with an index on price rate
The query finds the price it will cost to send each parcel and updates the customer price for that parcel, matched on its unique id.
It is taking 70 seconds for the 16,000 parcel records to be updated with the price to send each parcel.
UPDATE
tbl_parcel AS t20, (
SELECT
id, service, rate_group, area,
(
SELECT
rate
FROM
tbl_rates_all t4
WHERE
t4.service = t10.service
AND t4.area = t10.area
AND t4.rate_group = t10.rate_group
)
AS price
FROM
(
SELECT
id,
t1.service,
rate_group,
area
FROM
tbl_parcel t1
JOIN
tbl_account t2
ON t1.account_no = t2.account_no
JOIN
tbl_pr_postcode t3
ON LEFT(full_pcode, locate(' ', full_pcode) - 1) = t3.postcode
) t10
) AS src
SET
t20.customer_price = src.price
WHERE
t20.id = src.id
This takes 70 seconds for the 16,000 parcel records.
Ultimately it is this part that is killing the efficiency:
FROM
tbl_rates_all t4
WHERE
t4.service = t10.service
AND t4.area = t10.area
AND t4.rate_group = t10.rate_group
I could have a separate rates table for each price rate, as this was the original design; a variable would then select e.g. tbl_rates001, which might only have 3,000 records instead of 500,000. The problem with doing this in MySQL was that building a table name on the fly is not possible without using a prepared statement, so I thought this method was no good. It is a shame you can't use a user variable to hold the price rate number and append it to the rates table name.
I'm quite new to databases and queries, so if something is screaming out at you that would help, thanks for any input.
Regards
ADDITION: SCHEMA AS REQUESTED
CREATE TABLE `tbl_x_rate_all` (
`id` bigint(20) NOT NULL,
`service` varchar(4) NOT NULL,
`chargetype` char(1) NOT NULL,
`area` smallint(6) NOT NULL,
`rate` float(7,2) NOT NULL,
`rate_group` smallint(6) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
ALTER TABLE `tbl_x_rate_all` ADD PRIMARY KEY (`id`), ADD KEY `rate_group` (`rate_group`), ADD KEY `area` (`area`), ADD KEY `service` (`service`),
ADD KEY `chargetype` (`chargetype`);
Assuming id, rate_group and area come from t1 inside t10, your query is a (slower?) version of the one below:
UPDATE
tbl_parcel AS t20
INNER JOIN (
SELECT
t1.id,
t4.rate as price
FROM tbl_parcel t1
JOIN tbl_account t2 ON t1.account_no = t2.account_no
JOIN tbl_pr_postcode t3 ON LEFT(full_pcode, locate(' ', full_pcode) - 1) = t3.postcode
LEFT JOIN tbl_rates_all t4 ON t1.service = t4.service AND t1.area = t4.area
AND t1.rate_group = t4.rate_group
) src ON t20.id = src.id
SET
t20.customer_price = src.price
WHERE
t20.id = src.id
I am guessing you can further lose the subquery, which tends to be cumbersome:
UPDATE
tbl_parcel AS t20
INNER JOIN tbl_parcel t1 ON t20.id = t1.id
INNER JOIN tbl_account t2 ON t1.account_no = t2.account_no
INNER JOIN tbl_pr_postcode t3 ON LEFT(full_pcode, locate(' ', full_pcode) - 1) = t3.postcode
LEFT JOIN tbl_rates_all t4 ON t1.service = t4.service AND t1.area = t4.area
AND t1.rate_group = t4.rate_group
SET
t20.customer_price = t4.rate
WHERE
t20.id = t1.id
-- this WHERE is redundant given the INNER JOIN;
-- it can be replaced with TRUE or dropped altogether
;
You could try adding indexes on the join columns of t1 and t4 if you have reason to believe that the join is the bottleneck:
create index tbl_rates_all_service_area_rate_group_index
on tbl_rates_all (service, area, rate_group);
create index tbl_parcel_service_area_rate_group_index
on tbl_parcel (service, area, rate_group);
@Rich --
Nothing's jumping out. You might be able to get some marginal improvements by building some temp tables, handling the SET outside your main query, and using some OUTER APPLY instead of the nested queries.
If you're fairly new to MySQL / databases in general, the EXPLAIN statement can be very useful for optimization:
https://dev.mysql.com/doc/refman/5.5/en/using-explain.html
Related
I have 2 tables, tbl_issued and tbl_transaction.
tbl_issued has the columns ItemID, Item, Serial, Quantity and Size, while tbl_transaction has the columns Released, Received, Approved and Department.
My problem is that I want to get their columns in one query. This is the MySQL query:
SELECT `ItemID`,`Item`,`Serial`,`Quantity`,`Size`,`Class`,`Unit`,(SELECT `Released` FROM `tbl_transaction` WHERE `TransactionID` = 12458952) AS `Released`,
(SELECT `Received` FROM `tbl_transaction` WHERE `TransactionID` = 12458952) AS `Received`,
(SELECT `Approved` FROM `tbl_transaction` WHERE `TransactionID` = 12458952) AS `Approved`,
(SELECT `Department` FROM `tbl_transaction` WHERE `TransactionID` = 12458952) AS `Department`
FROM `tbl_issued` WHERE `TransactionID` = 12458952
but running this from VB.NET does not return any output.
Any ideas how I can translate this query to VB.NET? Thanks in advance for any help!
I don't know exactly what you are trying to do, but if you want it simplified, have you tried an INNER JOIN? It's like this:
SELECT ItemID, Item, Serial, Quantity, Size, Class, Unit, Released, Received,
Approved, Department
FROM tbl_issued a INNER JOIN tbl_transaction b ON a.TransactionID = b.TransactionID
WHERE a.TransactionID = 12458952
I assume that both tables have TransactionID based on your query.
I have the following table in my database. Its purpose is to hold colour sets. I.e. [red + black], [blue + green + yellow], etc.
CREATE TABLE `df_productcolours`
(
`id` int(11) NOT NULL AUTO_INCREMENT,
`id_colourSet` int(11) NOT NULL,
`id_colour` int(11) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `UNIQUE` (`id_colourSet`,`id_colour`),
KEY `idx_colourSet` (`id_colourSet`),
KEY `idx_colour_id` (`id_colour`),
CONSTRAINT `fk_colourid` FOREIGN KEY (`id_colour`) REFERENCES `df_lu_color` (`id`)
ON DELETE NO ACTION ON UPDATE NO ACTION
)
I made a stored proc that takes an array of id_colour integers as input, and returns a colour set id. What it's meant to do is return the set that contains those colours, and ONLY those colours that are provided as input. What it's actually doing is returning sets that contain the colours requested plus some others.
This is the code that I have so far:
SET @count = (SELECT COUNT(*) FROM tempTable_inputColours);
SELECT A.id_colourSet
FROM df_productcolours AS A
INNER JOIN tempTable_inputColours AS B
ON A.id_colour = B.id_colour
GROUP BY A.id_colourSet
HAVING COUNT(A.id_colour) = @count
AND COUNT(B.id_colour) = @count;
I have a feeling the issue may be with the way I'm joining, but I just can't seem to get it. Any help would be appreciated. Thanks.
You can try this:
SELECT A.id_colourSet
FROM df_productcolours AS A
INNER JOIN tempTable_inputColours AS B
ON A.id_colour = B.id_colour
WHERE A.id_colourSet IN (SELECT id_colour FROM tempTable_inputColours)
AND A.id_colour IN (SELECT id_colour FROM tempTable_inputColours)
EDIT
SELECT A.id_colourSet
FROM df_productcolours AS A
INNER JOIN tempTable_inputColours AS B
ON A.id_colour = B.id_colour
WHERE A.id_colourSet =(SELECT SUM(id_colour) FROM tempTable_inputColours)
I think I solved it myself after a few days of punishment. Here's the code:
SET clrCount = (SELECT COUNT(*) FROM _tmp_ColourSet);
-- The first half of the query does an inner join,
-- it will return all sets that have ANY of our requested colours.
-- But the HAVING condition will make it return sets that have AT LEAST all of the colours we are requesting.
-- So at this point we have all the super-sets, if you will.
-- Then, the second half of the query will restrict that further,
-- to only sets that have the same number of colours as we are requesting.
-- And voila :)
-- FIND ALL COLOUR SETS THAT HAVE ALL REQUESTED COLOURS
SET colourSetId = (SELECT A.id_colourSet
FROM df_productcolours AS A
INNER JOIN _tmp_colourset AS B
ON A.id_colour = B.id_colour
GROUP BY A.id_colourSet
HAVING COUNT(A.id_colour) = clrCount
-- FIND ALL COLOUR SETS THAT HAVE EXACTLY N COLOURS
AND A.id_colourSet IN (SELECT A.id_colourSet
FROM df_productcolours AS A
GROUP BY A.id_colourSet
HAVING COUNT(A.id_colour) = clrCount));
Hope it saves someone pulling their hair out.
I have 4 tables:
Table talks
table talks_fan
table talks_follow
table talks_comments
What I'm trying to achieve is counting all comments, fans and followers for every single talk.
I came up with this so far.
All tables have a talk_id column, but only in the talks table is it the primary key.
SELECT
g. *,
COUNT( m.talk_id ) AS num_of_comments,
COUNT( f.talk_id ) AS num_of_followers
FROM
talks AS g
LEFT JOIN talks_comments AS m
USING ( talk_id )
LEFT JOIN talks_follow AS f
USING ( talk_id )
WHERE g.privacy = 'public'
GROUP BY g.talk_id
ORDER BY g.created_date DESC
LIMIT 30;
I also tried using this method
SELECT
t.*,
COUNT(b.talk_id) AS comments,
COUNT(bt.talk_id) AS followers
FROM
talks t
LEFT JOIN talks_follow bt
ON bt.talk_id = t.talk_id
LEFT JOIN talks_comments b
ON b.talk_id = t.talk_id
GROUP BY t.talk_id;
Both give me the same results ....?!
Update: Create Statements
CREATE TABLE IF NOT EXISTS `talks` (
`talk_id` bigint(20) NOT NULL AUTO_INCREMENT,
`user_id` mediumint(9) NOT NULL,
`title` varchar(255) NOT NULL,
`content` text NOT NULL,
`created_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`privacy` enum('public','private') NOT NULL DEFAULT 'private',
PRIMARY KEY (`talk_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=7 ;
CREATE TABLE IF NOT EXISTS `talks_comments` (
`comment_id` bigint(20) NOT NULL AUTO_INCREMENT,
`talk_id` bigint(20) NOT NULL,
`user_id` mediumint(9) NOT NULL,
`comment` text NOT NULL,
`date_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`status` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`comment_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=8 ;
CREATE TABLE IF NOT EXISTS `talks_fan` (
`fan_id` bigint(20) NOT NULL AUTO_INCREMENT,
`talk_id` bigint(20) NOT NULL,
`user_id` bigint(20) NOT NULL,
`created_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`status` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`fan_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=4 ;
CREATE TABLE IF NOT EXISTS `talks_follow` (
`follow_id` bigint(20) NOT NULL AUTO_INCREMENT,
`talk_id` bigint(20) NOT NULL,
`user_id` mediumint(9) NOT NULL,
`date_created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`follow_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5 ;
The final query that works
SELECT t.* , COUNT( DISTINCT b.comment_id ) AS comments,
COUNT( DISTINCT bt.follow_id ) AS followers,
COUNT( DISTINCT c.fan_id ) AS fans
FROM talks t
LEFT JOIN talks_follow bt ON bt.talk_id = t.talk_id
LEFT JOIN talks_comments b ON b.talk_id = t.talk_id
LEFT JOIN talks_fan c ON c.talk_id = t.talk_id
WHERE t.privacy = 'public'
GROUP BY t.talk_id
ORDER BY t.created_date DESC
LIMIT 30
EDIT: Final answer to the whole issue...
I have modified the query and written some PHP (CodeIgniter) code to solve my issue, upon the recommendation of @Bill Karwin.
$sql="
SELECT t.*,
COUNT( DISTINCT b.comment_id ) AS comments,
COUNT( DISTINCT bt.follow_id ) AS followers,
COUNT( DISTINCT c.fan_id ) AS fans,
GROUP_CONCAT( DISTINCT c.user_id ) AS list_of_fans,
GROUP_CONCAT( DISTINCT bt.user_id ) AS list_of_follower -- needed by the PHP loop below
FROM talks t
LEFT JOIN talks_follow bt ON bt.talk_id = t.talk_id
LEFT JOIN talks_comments b ON b.talk_id = t.talk_id
LEFT JOIN talks_fan c ON c.talk_id = t.talk_id
WHERE t.privacy = 'public'
GROUP BY t.talk_id
ORDER BY t.created_date DESC
LIMIT 30
";
$query = $this->db->query($sql);
if($query->num_rows() > 0)
{
$results = array();
foreach($query->result_array() AS $talk){
$fan_user_id = explode(",", $talk['list_of_fans']);
foreach($fan_user_id AS $user){
if($user == 1 /* this supposed to be user id or session*/){
$talk['list_of_fans'] = 'yes';
}
}
$follower_user_id = explode(",", $talk['list_of_follower']);
foreach($follower_user_id AS $user){
if($user == 1 /* this supposed to be user id or session*/){
$talk['list_of_follower'] = 'yes';
}
}
$results[] = array(
'talk_id' => $talk['talk_id'],
'user_id' => $talk['user_id'],
'title' => $talk['title'],
'created_date' => $talk['created_date'],
'comments' => $talk['comments'],
'followers' => $talk['followers'],
'fans' => $talk['fans'],
'list_of_fans' => $talk['list_of_fans'],
'list_of_follower' => $talk['list_of_follower']
);
}
}
I still believe it could be optimized in the DB so I could just use the result...
I'm thinking that if there are 1,000 followers and 2,000 fans for every single talk, the result will take much longer to load, and what if you multiply those numbers by 10? Or am I mistaken here?
EDIT: adding a benchmark for the query test...
I used the CodeIgniter profiler to see how long the query takes to finish executing.
That being said, I also started adding data to the tables gradually; the results are as follows.
Testing the DB after inserting data into it
Query result times
Table talks
---------------
Table rows: 50 rows
Time: 0.0173 seconds
Table rows: 644 rows
Time: 0.0535 seconds
Table rows: 1250 rows
Time: 0.0856 seconds
Adding data to other tables
--------------------------
Talks = 1250 rows
talks_follow = 4115
talks_fan = 10 rows
Time: 2.656 seconds
Adding data to other tables
--------------------------
Talks = 1250 rows
talks_follow = 4115
talks_fan = 10 rows
talks_comments = 3650 rows
Time: 10.156 seconds
After replacing LEFT JOIN with STRAIGHT_JOIN
Time: 6.675 seconds
It seems that it's extremely heavy on the DB...
Now I'm on to another dilemma: how to enhance its performance.
Edited: using @leonardo_assumpcao's suggestion
After rebuilding the DB using @leonardo_assumpcao's suggestion
to index a few fields...
Adding data to other tables
--------------------------
Talks = 6000 Rows
talks_follow = 10000 Rows
talks_fan = 10000 Rows
talks_comments = 10000 Rows
Time: 17.940 seconds
Is this normal for a DB with this much data...?
I can say this is (at least) one of the coolest select statements I improved today.
SELECT STRAIGHT_JOIN
t.* ,
COUNT( DISTINCT b.comment_id ) AS comments,
COUNT( DISTINCT bt.follow_id ) AS followers,
COUNT( DISTINCT c.fan_id ) AS fans
FROM
(
SELECT * FROM talks
WHERE privacy = 'public'
ORDER BY created_date DESC
LIMIT 0, 30
) AS t
LEFT JOIN talks_follow bt ON (bt.talk_id = t.talk_id)
LEFT JOIN talks_comments b ON (b.talk_id = t.talk_id)
LEFT JOIN talks_fan c ON (c.talk_id = t.talk_id)
GROUP BY t.talk_id ;
But it seems to me that your problem resides in your tables; a first step towards efficient queries is to index every field involved in your desired joins.
I've made some modifications to the tables you showed above; you can see the code here (updated).
Quite interesting, isn't it? Since we're here, here is your EER model as well:
First try it using a MySQL test database. Hopefully it will solve your performance troubles.
(Forgive my English, it's my second language.)
You can force this into one query like so:
SELECT COUNT(*) num, 'talks' item FROM talks
UNION
SELECT COUNT(*) num, 'talks_fan' item FROM talks_fan
UNION
SELECT COUNT(*) num, 'talks_follow' item FROM talks_follow
UNION
SELECT COUNT(*) num, 'talks_comment' item FROM talks_comments
This will give you a four-row resultset with one row per table. Each row is the count in a particular table.
If you must get it all into a single row you can do a pivot like so.
SELECT
SUM( CASE item WHEN 'talks' THEN num ELSE 0 END ) AS 'talks',
SUM( CASE item WHEN 'talks_fan' THEN num ELSE 0 END ) AS 'talks_fan',
SUM( CASE item WHEN 'talks_follow' THEN num ELSE 0 END ) AS 'talks_follow',
SUM( CASE item WHEN 'talks_comment' THEN num ELSE 0 END ) AS 'talks_comment'
FROM
( SELECT COUNT(*) num, 'talks' item FROM talks
UNION
SELECT COUNT(*) num, 'talks_fan' item FROM talks_fan
UNION
SELECT COUNT(*) num, 'talks_follow' item FROM talks_follow
UNION
SELECT COUNT(*) num, 'talks_comment' item FROM talks_comments
) counts
(This doesn't take into account your WHERE g.privacy = 'public' clause because I don't understand that. But you could add a WHERE clause to any one of the four queries in the UNION to handle that.)
Notice that this truly is four queries on four separate tables coerced into a single query.
And, by the way, there is no difference in value between COUNT(*) and COUNT(id) when id is the primary key of the table. COUNT(id) doesn't count the rows for which the id is NULL, but if id is the primary key, then it is NOT NULL. But COUNT(*) is faster, so use it.
Edit if you need the number of fan, follow, and comment rows for each distinct talk, do this. It's the same idea of doing a union and a pivot, but with an extra parameter.
SELECT
talk_id,
SUM( CASE item WHEN 'talks_fan' THEN num ELSE 0 END ) AS 'talks_fan',
SUM( CASE item WHEN 'talks_follow' THEN num ELSE 0 END ) AS 'talks_follow',
SUM( CASE item WHEN 'talks_comment' THEN num ELSE 0 END ) AS 'talks_comment'
FROM
(
SELECT talk_id, COUNT(*) num, 'talks_fan' item
FROM talks_fan
GROUP BY talk_id
UNION
SELECT talk_id, COUNT(*) num, 'talks_follow' item
FROM talks_follow
GROUP BY talk_id
UNION
SELECT talk_id, COUNT(*) num, 'talks_comment' item
FROM talks_comments
GROUP BY talk_id
) counts
GROUP BY talk_id
After doing this for (too) many years, I've discovered that the best way to describe a query you need is to say to yourself "I need a result set with one row for each xxx, with columns for yyy, zzz, and qqq."
The reason the counts are the same is that it's counting rows after the joins have combined the tables. By joining to multiple tables, you're creating a Cartesian product.
Basically, you're counting not only how many comments per talk, but how many comments * followers per talk. Then you count the followers as how many followers * comments per talk. Thus the counts are the same, and they're all way too high.
Here's a simpler way to write a query to count each distinct comment, follower, etc. only once:
SELECT t.*,
COUNT(DISTINCT b.comment_id) AS comments,
COUNT(DISTINCT bt.follow_id) AS followers
FROM talks t
LEFT JOIN talks_follow bt ON bt.talk_id = t.talk_id
LEFT JOIN talks_comments b ON b.talk_id = t.talk_id
GROUP BY t.talk_id;
Re your comment: I wouldn't fetch all the followers in the same query. You could do it this way:
SELECT t.*,
COUNT(DISTINCT b.comment_id) AS comments,
COUNT(DISTINCT bt.follow_id) AS followers,
GROUP_CONCAT(DISTINCT bt.follower_name) AS list_of_followers
FROM talks t
LEFT JOIN talks_follow bt ON bt.talk_id = t.talk_id
LEFT JOIN talks_comments b ON b.talk_id = t.talk_id
GROUP BY t.talk_id;
But what you'd get back is a single string with the follower names separated by commas. Now you have to write application code to split the string on commas, you have to worry if some follower names actually contain commas already, and so on.
I'd do a second query, fetching the followers for a given talk. It's likely you want to display the followers only for a specific talk anyway.
SELECT follower_name
FROM talks_follow
WHERE talk_id = ?
I have a database full of Pokemon Cards, and their attacks. I want to do a query to find the Pokemon that has the strongest attack by each type. I want the view to show just the name, type, and damage of the attack.
SELECT p2.MaxD, p2.Type, p1.name
FROM Pokemon p1
INNER JOIN ( SELECT type, MAX(damage) MaxD, pokemon_name FROM Attack GROUP BY Type )
p2 ON p1.type = p2.type AND p2.pokemon_name = p1.name
I have this code. It returns the highest damage but not the correct Pokemon. The Pokemon table doesn't have a damage field. I'm trying to get a grasp of joins.
Here is the structure:
Attack table has 4 fields: pokemon_name (the pokemon this attack belongs to), damage, name (name of the attack), and type (the type of pokemon this attack belongs to).
The Pokemon table has 3: HP, type (of the pokemon), and name (of the pokemon).
First of all you have to build a select that returns the maximal damage for each type (you already have that):
SELECT type, MAX(damage) MaxD FROM Attack GROUP BY Type
Now, this won't perform well unless:
- type is INT (or ENUM or another numeric type)
- there's an index on type, or on (type, damage)
You cannot select pokemon_name because MySQL doesn't guarantee that you'll get pokemon_name matching MaxD (here's a nice answer on stackoverflow which already covers this issue).
Now you can select the attack rows that match those maxima:
SELECT p1.pokemon_name, p1.type, p1.damage
FROM Attack p1
INNER JOIN (
SELECT type, MAX(damage) MaxD FROM Attack GROUP BY Type
) p2 ON p1.type = p2.type
AND p1.damage = p2.MaxD
GROUP BY p1.type, p1.damage
The last GROUP BY clause makes sure that multiple pokemons with the same attack damage won't produce multiple records for one (type, damage) pair.
Again, you will achieve good performance by replacing pokemon_name with pokemon_id. Maybe you should google database normalization for a while [wikipedia], [first tutorial]. You may also want to check this Q&A out; it provides a nice overview of what "relation table" means.
Now that you have the correct pokemon_name (for your program's sake, I hope you'll replace it with pokemon_id), you may put it all together:
SELECT p1.pokemon_name, p1.type, p1.damage, p.*
FROM Attack p1
INNER JOIN (
SELECT type, MAX(damage) MaxD FROM Attack GROUP BY Type
) p2 ON p1.type = p2.type
AND p1.damage = p2.MaxD
INNER JOIN Pokemon p
ON p.name = p1.pokemon_name
GROUP BY p1.type, p1.damage
Ideal example
In a perfect world your database would look like this:
-- Table with pokemons
CREATE TABLE `pokemons` (
`id` INT NOT NULL AUTO_INCREMENT,
`name` VARCHAR(255),
-- More fields
PRIMARY KEY (`id`)
)
-- This contains pairs such as (1, 'Water'), (2, 'Flame'), ...
CREATE TABLE `AttackTypes` (
`id`,
`name` VARCHAR(255)
)
-- Create records like (1, 2, 3, 152)
-- 1 = automatically generated keys
-- 2 = id of pokemon (let say it's Pikachu :P)
-- 3 = type of attack (this say it's Electric)
-- 152 = damage
-- This way each pokemon may have multiple attack types (Charizard flame + wind)
CREATE TABLE `Attacks` (
`id`,
`pokemonID` INT NOT NULL, -- Represents pokemons.id
`typeID` INT NOT NULL, -- Represents AttackTypes.id
`damage` INT
)
ID fields are ALWAYS PRIMARY KEY, NOT NULL and AUTO_INCREMENT in this example
And the select from it; again, get the per-type maximum first:
SELECT MAX(attack.damage) AS mDmg, attack.typeID
FROM attack
GROUP BY attack.typeID
Then get the pokemon ID:
SELECT a.pokemonID, a.damage, a.typeID
FROM attack AS a
INNER JOIN (
SELECT MAX(a.damage) AS mDmg, a.typeID
FROM attack AS a
GROUP BY a.typeID
) AS maxA
ON a.typeID = maxA.typeID
AND a.damage = mDmg
GROUP BY (a.typeID)
And once you've covered all that, you may actually select the pokemon data:
SELECT aMax.pokemonID as id,
aMax.damage,
p.name AS pokemonName,
aMax.typeID AS attackTypeID,
t.name AS attackType
FROM (
SELECT a.pokemonID, a.damage, a.typeID
FROM attack AS a
INNER JOIN (
SELECT MAX(a.damage) AS mDmg, a.typeID
FROM attack AS a
GROUP BY a.typeID
) AS maxA
ON a.typeID = maxA.typeID
AND a.damage = mDmg
GROUP BY (a.typeID)
) AS aMax
INNER JOIN pokemons AS p
ON p.id = aMax.pokemonID
INNER JOIN AttackTypes AS t
ON t.id = aMax.typeID
Performance hints (a DDL sketch of the suggested indexes follows this list):
- you may add a MaxDamage field to AttackTypes (calculated by a stored procedure), which saves you one level of nested query
- all ID fields should be PRIMARY KEYs
- an index on Attacks.typeID lets you quickly get all pokemons capable of that type of attack
- an index on Attacks.damage lets you quickly find the strongest attack
- an index on (Attacks.typeID, Attacks.damage) (two fields) helps when finding the max value for each type
- an index on Attacks.pokemonID makes the pokemon -> attack -> attack type name lookup faster
I'm not really sure about your schema but I assumed that the pokemon_name of your attack table is really the name of the pokemon.
SELECT a.*, c.*
FROM Attack a
INNER JOIN
(
SELECT type, MAX(damage) MaxD
FROM Attack
GROUP BY Type
) b ON a.Type = b.Type AND
a.damage = b.MaxD
INNER JOIN Pokemon c
ON c.Name = a.pokemon_name AND
c.Type = a.Type
The above query displays all fields from the attack table and the pokemon table, but if you are really only interested in the name, damage and type, then you only need to query the attack table:
SELECT a.*
FROM Attack a
INNER JOIN
(
SELECT type, MAX(damage) MaxD
FROM Attack
GROUP BY Type
) b ON a.Type = b.Type AND
a.damage = b.MaxD
I need to check (from the same table) if there is an association between two events based on date-time.
One set of data will contain the ending date-time of certain events and the other set of data will contain the starting date-time for other events.
If the first event completes before the second event then I would like to link them up.
What I have so far is:
SELECT name as name_A, date-time as end_DTS, id as id_A
FROM tableA WHERE criteria = 1
SELECT name as name_B, date-time as start_DTS, id as id_B
FROM tableA WHERE criteria = 2
Then I join them:
SELECT name_A, name_B, id_A, id_B,
if(start_DTS > end_DTS,'VALID','') as validation_check
FROM tableA
LEFT JOIN tableB ON name_A = name_B
Can I then, based on my validation_check field, run an UPDATE query with the SELECT nested?
You can actually do this one of two ways:
MySQL update join syntax:
UPDATE tableA a
INNER JOIN tableB b ON a.name_a = b.name_b
SET validation_check = if(start_dts > end_dts, 'VALID', '')
-- where clause can go here
ANSI SQL syntax:
UPDATE tableA SET validation_check =
(SELECT if(start_DTS > end_DTS, 'VALID', '') AS validation_check
FROM tableA
INNER JOIN tableB ON name_A = name_B
WHERE id_A = tableA.id_A)
Pick whichever one seems most natural to you.
UPDATE
`table1` AS `dest`,
(
SELECT
*
FROM
`table2`
WHERE
`id` = x
) AS `src`
SET
`dest`.`col1` = `src`.`col1`
WHERE
`dest`.`id` = x
;
Hope this works for you.
Easy in MySQL:
UPDATE users AS U1, users AS U2
SET U1.name_one = U2.name_colX
WHERE U2.user_id = U1.user_id
If somebody is seeking to update data from one database to another, no matter which tables they are targeting, there must be some criteria to match on.
This one is better and cleaner for all levels:
UPDATE dbname1.content targetTable
LEFT JOIN dbname2.someothertable sourceTable ON
targetTable.compare_field= sourceTable.compare_field
SET
targetTable.col1 = sourceTable.cola,
targetTable.col2 = sourceTable.colb,
targetTable.col3 = sourceTable.colc,
targetTable.col4 = sourceTable.cold
Traaa! It works great!
With the above understanding, you can modify the set fields and "on" criteria to do your work. You can also perform the checks, then pull the data into the temp table(s) and then run the update using the above syntax replacing your table and column names.
Hope it works; if not, let me know and I will write an exact query for you.
UPDATE
receipt_invoices dest,
(
SELECT
`receipt_id`,
CAST((net * 100) / 112 AS DECIMAL (11, 2)) witoutvat
FROM
receipt
WHERE CAST((net * 100) / 112 AS DECIMAL (11, 2)) != total
AND vat_percentage = 12
) src
SET
dest.price = src.witoutvat,
dest.amount = src.witoutvat
WHERE col_tobefixed = 1
AND dest.`receipt_id` = src.receipt_id ;
Hope this will help you in a case where you have to match and update between two tables.
I found this question in looking for my own solution to a very complex join. This is an alternative solution, to a more complex version of the problem, which I thought might be useful.
I needed to populate the product_id field in the activities table, where activities are numbered in a unit, and units are numbered in a level (identified using a string ??N), such that one can identify activities using an SKU ie L1U1A1. Those SKUs are then stored in a different table.
I identified the following to get a list of activity_id vs product_id:-
SELECT a.activity_id, w.product_id
FROM activities a
JOIN units USING(unit_id)
JOIN product_types USING(product_type_id)
JOIN web_products w
ON sku=CONCAT('L',SUBSTR(product_type_code,3), 'U',unit_index, 'A',activity_index)
I found that that was too complex to incorporate into a SELECT within mysql, so I created a temporary table, and joined that with the update statement:-
CREATE TEMPORARY TABLE activity_product_ids AS (<the above select statement>);
UPDATE activities a
JOIN activity_product_ids b
ON a.activity_id=b.activity_id
SET a.product_id=b.product_id;
I hope someone finds this useful
UPDATE [table_name] AS T1,
(SELECT [column_name]
FROM [table_name]
WHERE [column_name] = [value]) AS T2
SET T1.[column_name]=T2.[column_name] + 1
WHERE T1.[column_name] = [value];
You can update values from another table using an inner join like this:
UPDATE [table1_name] AS t1
INNER JOIN [table2_name] AS t2 ON t1.[column1_name] = t2.[column1_name]
SET t1.[column2_name] = t2.[column2_name];
The query is explained in detail here: http://www.voidtricks.com/mysql-inner-join-update/
Or you can use a SELECT as a subquery to do this:
UPDATE [table_name] SET [column_name] = (SELECT [column_name] FROM [table_name] WHERE [column_name] = [value]) WHERE [column_name] = [value];
The query is explained in detail here: http://www.voidtricks.com/mysql-update-from-select/
You can use:
UPDATE Station AS st1, StationOld AS st2
SET st1.already_used = 1
WHERE st1.code = st2.code
For the same table:
UPDATE PHA_BILL_SEGMENT AS PHA,
(SELECT BILL_ID, COUNT(REGISTRATION_NUMBER) AS REG
FROM PHA_BILL_SEGMENT
GROUP BY REGISTRATION_NUMBER, BILL_DATE, BILL_AMOUNT
HAVING REG > 1) T
SET PHA.BILL_DATE = PHA.BILL_DATE + 2
WHERE PHA.BILL_ID = T.BILL_ID;
I had an issue with duplicate entries in one table itself. Below is the approach that worked for me; it has also been advocated by @sibaz.
Finally I solved it using the queries below.
The select query is saved into a temp table:
IF OBJECT_ID(N'tempdb..#New_format_donor_temp', N'U') IS NOT NULL
DROP TABLE #New_format_donor_temp;
select *
into #New_format_donor_temp
from DONOR_EMPLOYMENTS
where DONOR_ID IN (
1, 2
)
-- Test New_format_donor_temp
-- SELECT *
-- FROM #New_format_donor_temp;
The temp table is joined in the update query.
UPDATE de
SET STATUS_CD=de_new.STATUS_CD, STATUS_REASON_CD=de_new.STATUS_REASON_CD, TYPE_CD=de_new.TYPE_CD
FROM DONOR_EMPLOYMENTS AS de
INNER JOIN #New_format_donor_temp AS de_new ON de_new.EMP_NO = de.EMP_NO
WHERE
de.DONOR_ID IN (
3, 4
)
I'm not very experienced with SQL, so please advise if you know a better approach.
The above queries use SQL Server (T-SQL) syntax (tempdb temporary tables and UPDATE ... FROM), not MySQL.
If you are updating from a complex query, the best thing is to create a temporary table from the query, then use the temporary table to update in one query.
DROP TABLE IF EXISTS cash_sales_sums;
CREATE TEMPORARY TABLE cash_sales_sums as
SELECT tbl_cash_sales_documents.batch_key, COUNT(DISTINCT tbl_cash_sales_documents.cash_sale_number) no_of_docs,
SUM(tbl_cash_sales_documents.paid_amount) paid_amount, SUM(A.amount - tbl_cash_sales_documents.bonus_amount - tbl_cash_sales_documents.discount_given) amount,
SUM(A.recs) no_of_entries FROM
tbl_cash_sales_documents
RIGHT JOIN(
SELECT
SUM(
tbl_cash_sales_transactions.amount
)amount,
tbl_cash_sales_transactions.cash_sale_document_id,
COUNT(transaction_id)recs
FROM
tbl_cash_sales_transactions
GROUP BY
tbl_cash_sales_transactions.cash_sale_document_id
)A ON A.cash_sale_document_id = tbl_cash_sales_documents.cash_sale_id
GROUP BY
tbl_cash_sales_documents.batch_key
ORDER BY batch_key;
UPDATE tbl_cash_sales_batches SET control_totals = (SELECT amount FROM cash_sales_sums WHERE cash_sales_sums.batch_key = tbl_cash_sales_batches.batch_key LIMIT 1),
expected_number_of_documents = (SELECT no_of_docs FROM cash_sales_sums WHERE cash_sales_sums.batch_key = tbl_cash_sales_batches.batch_key),
computer_number_of_documents = expected_number_of_documents, computer_total_amount = control_totals
WHERE batch_key IN (SELECT batch_key FROM cash_sales_sums);
INSERT INTO all_table
SELECT Orders.OrderID,
Orders.CustomerID,
Orders.Amount,
Orders.ProductID,
Orders.Date,
Customer.CustomerName,
Customer.Address
FROM Orders
JOIN Customer ON Orders.CustomerID=Customer.CustomerID
WHERE Orders.OrderID not in (SELECT OrderID FROM all_table)