I have a table in my database with two columns, Level and Experience.
CREATE TABLE `player_xp_for_level` (
`Level` TINYINT(3) UNSIGNED NOT NULL,
`Experience` INT(10) UNSIGNED NOT NULL,
PRIMARY KEY (`Level`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM
;
The experience values up to level 80 have already been filled in from a predefined list.
However, I would like the experience for level 81 onwards to be based on the experience of the previous level, multiplied by 1.0115.
Basically I'm looking for a query that inserts one row at a time:
it checks the previous row's Experience, multiplies it by 1.0115, and inserts the result.
Workflow: check the previous row, compute the new value (previous experience * 1.0115), and insert it.
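A single statement along these lines should cover one step of that workflow; this is a sketch assuming you want each new level's Experience rounded to a whole number (adjust or drop ROUND as needed):

```sql
-- Insert the next level, derived from the current highest level's Experience.
-- Run once per new level; assumes the player_xp_for_level table defined above.
INSERT INTO player_xp_for_level (Level, Experience)
SELECT Level + 1, ROUND(Experience * 1.0115)
FROM player_xp_for_level
ORDER BY Level DESC
LIMIT 1;
```

Running it once inserts level 81 from level 80; running it again inserts level 82, and so on.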
If you only want to show the higher score without affecting the data in the database, you could use a CASE expression in the SELECT statement:
SELECT player, lvl,
CASE
WHEN lvl BETWEEN 80 AND 255 THEN score * 1.0115
ELSE score
END as score
FROM player_xp_for_level
As you have posted additional info, I've updated my answer with an INSERT statement. There, too, you could use a CASE expression, as follows:
INSERT INTO player_xp_for_level (lvl, score)
VALUES (#lvl, CASE WHEN #lvl BETWEEN 80 AND 255 THEN #score * 1.0115 ELSE #score END);
Assuming the structure of your table, which isn't clear from the question, then something like this?
UPDATE
player_xp_for_level
SET
xp = xp * 1.0115
WHERE
player_level BETWEEN 80 AND 255;
From the minimal code you provided, I think this is what you want:
UPDATE player_xp_for_level
SET name_of_value_column = name_of_value_column * 1.0115
WHERE name_of_level_column BETWEEN 80 AND 255;
If I have a table like this:
CREATE TABLE `Suppression` (
`SuppressionId` int(11) NOT NULL AUTO_INCREMENT,
`Address` varchar(255) DEFAULT NULL,
`BooleanOne` bit(1) NOT NULL DEFAULT '0',
`BooleanTwo` bit(1) NOT NULL DEFAULT '0',
`BooleanThree` bit(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`SuppressionId`)
)
Is there a set-based way in which I can select all records which have exactly one of the three bit fields = 1 without writing out the field names?
For example given:
SuppressionId  Address            One  Two  Three
1              10 Pretend Street  1    1    1
2              11 Pretend Street  0    0    0
3              12 Pretend Street  1    1    0
4              13 Pretend Street  0    1    0
5              14 Pretend Street  1    0    1
6              14 Pretend Street  1    0    0
I want to return records 4 and 6.
You could "add them up":
where cast(booleanone as unsigned) + cast(booleantwo as unsigned) + cast(booleanthree as unsigned) = 1
Or, use tuples:
where (booleanone, booleantwo, booleanthree) in ( (0b1, 0b0, 0b0), (0b0, 0b1, 0b0), (0b0, 0b0, 0b1) )
I'm not sure what you mean by "set-based".
If your number of booleans can vary over time and you don't want to update your code, I suggest you store them as rows rather than columns.
For example:
CREATE TABLE `Suppression` (
`SuppressionId` int(11) NOT NULL AUTO_INCREMENT,
`Address` varchar(255) DEFAULT NULL,
`BooleanId` int(11) NOT NULL,
`BooleanValue` bit(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`SuppressionId`,`BooleanId`)
)
So with one query and a GROUP BY you can check all the values of your booleans, however numerous they are. Of course, this makes your table bigger.
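As a sketch of that GROUP BY check against the row-per-flag schema above (returning the ids that have exactly one flag set):

```sql
-- Ids having exactly one flag set to 1, counting flags per SuppressionId.
-- The CAST is defensive: BIT values don't always behave as plain integers.
SELECT SuppressionId
FROM Suppression
GROUP BY SuppressionId
HAVING SUM(CAST(BooleanValue AS UNSIGNED)) = 1;
```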
EDIT: I just came up with another idea: why not add a checksum column whose value is the sum of all your bits? You would update it on every write to the table, and then just check that one column in your SELECT.
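On MySQL 5.7+ that checksum could even be a generated column, so it never goes stale; a sketch against the original three-column table (the column name FlagSum is my own):

```sql
-- Keep a derived sum of the three flags; MySQL maintains it automatically.
ALTER TABLE Suppression
  ADD COLUMN FlagSum TINYINT UNSIGNED
    AS (CAST(BooleanOne AS UNSIGNED)
      + CAST(BooleanTwo AS UNSIGNED)
      + CAST(BooleanThree AS UNSIGNED)) STORED;

-- The "exactly one flag" check then becomes a simple (and indexable) equality:
SELECT * FROM Suppression WHERE FlagSum = 1;
```

The downside is the same as the author notes for the row-based schema: adding a fourth flag still means an ALTER TABLE to redefine the expression.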
If you
must use this denormalized way of representing these flags, and you
must be able to add new flag columns to your table in production, and you
cannot rewrite your queries by hand when you add columns,
then you must figure out how to write a program to write your queries.
You can use this query to retrieve a result set of boolean-valued columns, then you can use that result set in a program to write a query involving all those columns.
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_NAME = 'Suppression'
AND COLUMN_NAME LIKE 'Boolean%'
AND DATA_TYPE = 'bit'
AND NUMERIC_PRECISION=1
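If you'd rather build the query inside MySQL itself instead of an external program, you could fold that column list into a prepared statement with GROUP_CONCAT; a sketch under the same assumptions (flag columns named Boolean%, of type bit):

```sql
-- Build "CAST(c1 AS UNSIGNED) + CAST(c2 AS UNSIGNED) + ... = 1" from the
-- catalog, then execute it as a prepared statement.
SELECT CONCAT(
         'SELECT * FROM Suppression WHERE ',
         GROUP_CONCAT(CONCAT('CAST(`', COLUMN_NAME, '` AS UNSIGNED)')
                      SEPARATOR ' + '),
         ' = 1')
INTO @sql
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'Suppression'
  AND COLUMN_NAME LIKE 'Boolean%'
  AND DATA_TYPE = 'bit'
  AND NUMERIC_PRECISION = 1;

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```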
The approach you have proposed here will work exponentially more poorly as you add columns, unfortunately. Any time a software engineer says "exponential" it's time to run away screaming. Seriously.
A much more scalable approach is to build a one-to-many relationship between your Suppression rows and your flags. Add this table.
CREATE TABLE SuppressionFlags (
SuppressionId int(11) NOT NULL,
FlagName varchar(31) NOT NULL,
Value bit(1) NOT NULL DEFAULT '0',
PRIMARY KEY (SuppressionID, FlagName)
)
Then, when you want to insert a row with some flag variables, do this sequence of queries.
INSERT INTO Suppression (Address) VALUES ('some address');
SET @SuppressionId := LAST_INSERT_ID();
INSERT INTO SuppressionFlags (SuppressionId, FlagName, Value)
VALUES (@SuppressionId, 'BooleanOne', 1);
INSERT INTO SuppressionFlags (SuppressionId, FlagName, Value)
VALUES (@SuppressionId, 'BooleanTwo', 0);
INSERT INTO SuppressionFlags (SuppressionId, FlagName, Value)
VALUES (@SuppressionId, 'BooleanThree', 0);
This gives you one Suppression row with three flags set in the SuppressionFlags table. Note the use of @SuppressionId to set the Id values in the second table.
Then to find all rows with just one flag set, do this.
SELECT Suppression.SuppressionId, Suppression.Address
FROM Suppression
JOIN SuppressionFlags ON Suppression.SuppressionId = SuppressionFlags.SuppressionId
GROUP BY Suppression.SuppressionId, Suppression.Address
HAVING SUM(SuppressionFlags.Value) = 1
It gets a little trickier if you want more elaborate combinations. For example, if you want all rows with BooleanOne and either BooleanTwo or BooleanThree set, you need to do something like this.
SELECT S.SuppressionId, S.Address
FROM Suppression S
JOIN SuppressionFlags A ON S.SuppressionId=A.SuppressionId AND A.FlagName='BooleanOne'
JOIN SuppressionFlags B ON S.SuppressionId=B.SuppressionId AND B.FlagName='BooleanTwo'
JOIN SuppressionFlags C ON S.SuppressionId=C.SuppressionId AND C.FlagName='BooleanThree'
WHERE A.Value = 1 AND (B.Value = 1 OR C.Value = 1)
This common database pattern is called the attribute/value (or entity-attribute-value) pattern. Because SQL doesn't easily let you use variables for column names (it has no real reflection), naming your attributes as data like this is your best path to extensibility.
It's a little more SQL. But you can add as many new flags as you need, in production, without rewriting queries or getting a combinatorial explosion of flag-matching. And SQL is built to handle this kind of query.
I have a table with 15 million records containing names, email addresses and IPs. I need to update another column in the same table with the country code derived from the IP address. I downloaded a small database (IP2Location LITE - https://lite.ip2location.com/) containing all IP ranges and their associated countries. The ip2location table has the following structure:
CREATE TABLE `ip2location_db1` (
`ip_from` int(10) unsigned DEFAULT NULL,
`ip_to` int(10) unsigned DEFAULT NULL,
`country_code` char(2) COLLATE utf8_bin DEFAULT NULL,
`country_name` varchar(64) COLLATE utf8_bin DEFAULT NULL,
KEY `idx_ip_from` (`ip_from`),
KEY `idx_ip_to` (`ip_to`),
KEY `idx_ip_from_to` (`ip_from`,`ip_to`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin
I'm using the following function to retrieve the country code from an ip address;
DELIMITER $$
CREATE DEFINER=`root`@`localhost` FUNCTION `get_country_code`(
ipAddress varchar(30)
) RETURNS VARCHAR(2)
DETERMINISTIC
BEGIN
DECLARE ipNumber INT UNSIGNED;
DECLARE countryCode varchar(2);
SET ipNumber = SUBSTRING_INDEX(ipAddress, '.', 1) * 16777216;
SET ipNumber = ipNumber + (SUBSTRING_INDEX(SUBSTRING_INDEX(ipAddress, '.', 2 ),'.',-1) * 65536);
SET ipNumber = ipNumber + (SUBSTRING_INDEX(SUBSTRING_INDEX(ipAddress, '.', -2 ),'.',1) * 256);
SET ipNumber = ipNumber + SUBSTRING_INDEX(ipAddress, '.', -1 );
SET countryCode =
(SELECT country_code
FROM ip2location.ip2location_db1
USE INDEX (idx_ip_from_to)
WHERE ipNumber >= ip2location.ip2location_db1.ip_from AND ipNumber <= ip2location.ip2location_db1.ip_to
LIMIT 1);
RETURN countryCode;
END$$
DELIMITER ;
I've run an EXPLAIN statement, and this is the output:
'1', 'SIMPLE', 'ip2location_db1', NULL, 'range', 'idx_ip_from_to', 'idx_ip_from_to', '5', NULL, '1', '33.33', 'Using index condition'
My problem is that the query on 1000 records takes ~15s to execute, which means running it over the whole database would take more than 2 days to complete. Is there a way to improve this query?
PS - If I remove the USE INDEX (idx_ip_from_to), the query takes twice as long. Can you explain why?
Also, I'm not a database expert, so bear with me :)
This can be quite tricky. I think the issue is that only the ip_from part of the condition can be used. See if this gets the performance you want:
SET countryCode =
(SELECT country_code
FROM ip2location.ip2location_db1 l
WHERE ipNumber >= l.ip_from
ORDER BY ip_to
LIMIT 1
);
I know I'm leaving off the ip_to. If this works, then you can do the full check in two parts. First get the ip_from using a similar query. Then use an equality query to get the rest of the information in the row.
The reason USE INDEX helps is because MySQL wasn't planning to use that index. Its optimizer chose a different one, but it guessed wrong. Sometimes this happens.
Also, I'm not sure if this will affect performance a ton, but you should just use INET_ATON to change the IP address string into an integer. You don't need that SUBSTRING_INDEX business, and it may be slower.
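For example, the four SUBSTRING_INDEX assignments in the function collapse to one line (INET_ATON handles IPv4 dotted-quad strings only, and returns NULL on malformed input):

```sql
-- INET_ATON('a.b.c.d') computes a*16777216 + b*65536 + c*256 + d,
-- which is exactly what the four SET statements were doing by hand.
SET ipNumber = INET_ATON(ipAddress);

-- Quick sanity check:
SELECT INET_ATON('192.168.0.1');  -- 3232235521
```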
What I would do here is measure the maximum distance between from and to:
SELECT MAX(ip_to - ip_from) AS distance
FROM ip2location_db1;
Assuming this is not a silly number, you will then be able to use the ip_from index properly. The check becomes:
WHERE ipNumber BETWEEN ip_from AND ip_from + distance
AND ipNumber <= ip_to
The goal here is to make all of the information to find a narrow set of rows come from a limited range of one column's value: ip_from. Then ip_to is just an accuracy check.
The reason you want to do this is because the ip_to value (second part of the index) can't be used until the corresponding ip_from value is found. So it still has to scan most of the index records for low values of ip_from without an upper bound.
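Putting that together, the lookup inside the function would become something like this sketch (with 65535 standing in for whatever the MAX query above actually returns on your data):

```sql
SET countryCode =
  (SELECT country_code
   FROM ip2location.ip2location_db1
   WHERE ipNumber BETWEEN ip_from AND ip_from + 65535  -- narrow scan on the indexed column
     AND ipNumber <= ip_to                             -- accuracy check
   LIMIT 1);
```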
Otherwise, you might consider measuring how unique the IP addresses are in your 15 million records. For example, if there are only 5 million unique IPs, it could be better to extract a unique list, map those to country codes, and then use that mapping (either at runtime, or to update the original table.) Depends.
If the values are very unique, but potentially in localized clusters, you could try removing the irrelevant rows from ip2location_db1, or even horizontal partitioning to improve the range checks. I'm not sure this would win anything, but if you can use some index on the original table to consult specific partitions only, you might be able to win some performance.
A JobID goes as follows: ALC-YYYYMMDD-001. The first three characters are a company's initials; the last three are an incrementing number that resets daily and increments throughout the day as jobs are added, for a maximum of 999 jobs in a day. It is these last three that I am trying to work with.
I am trying to get a before-insert trigger to look for the max JobID of the day, and add one so I can have the trigger derive the proper JobID. For the first job, it will of course return null. So here is what I have so far.
Through the following I can get a result of '000'.
set @maxjobID =
(select SUBSTRING(
(Select MAX(
SUBSTRING((Select JobID FROM jobs WHERE SUBSTRING(JobID,5,8)=date_format(curdate(), '%Y%m%d')),4,12)
)
),14,3)
);
select lpad((select ifnull(@maxjobID,0)),3,'0')
But I really need to add one to this, keeping the leading zeros, to increment the first and subsequent jobs of the day. My problem is that as soon as I try to add 1, I get a return of 'BLOB'. That is:
select lpad((select ifnull(@maxjobID,0)+1),3,'0')
returns 'BLOB'
I need it to return '001' so I can concatenate that result with the CO initials and the current date.
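For context, once the increment yields '001' the full JobID can be assembled in one expression; a sketch using 'ALC' as a stand-in for the company initials:

```sql
-- Assemble ALC-YYYYMMDD-NNN from the incremented daily counter.
SELECT CONCAT('ALC', '-',
              DATE_FORMAT(CURDATE(), '%Y%m%d'), '-',
              LPAD(COALESCE(@maxjobID, 0) + 1, 3, '0')) AS JobID;
```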
Try casting the VARCHAR back to INTEGER:
SELECT LPAD(COALESCE(CAST(@maxjobID AS SIGNED), 0) + 1, 3, '0');
If you're using the MyISAM storage engine, you can implement exactly this with AUTO_INCREMENT, without denormalising your data into a delimited string:
For MyISAM tables, you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
In your case:
Normalise your schema:
ALTER TABLE jobs
ADD initials CHAR(3) NOT NULL FIRST,
ADD date DATE NOT NULL AFTER initials,
ADD seq SMALLINT(3) UNSIGNED NOT NULL AFTER date,
;
Normalise your existing data:
UPDATE jobs SET
initials = SUBSTRING_INDEX(JobID, '-', 1),
date = STR_TO_DATE(SUBSTRING(JobID, 5, 8), '%Y%m%d'),
seq = SUBSTRING_INDEX(JobID, '-', -1)
;
Set up the AUTO_INCREMENT:
ALTER TABLE jobs
DROP PRIMARY KEY,
DROP JobID,
MODIFY seq SMALLINT(3) UNSIGNED NOT NULL AUTO_INCREMENT,
ADD PRIMARY KEY(initials, date, seq)
;
You can then recreate your JobID as required on SELECT (or even create a view from such a query):
SELECT CONCAT_WS(
'-',
initials,
DATE_FORMAT(date, '%Y%m%d'),
LPAD(seq, 3, '0')
) AS JobID,
-- etc.
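The view mentioned above might look like this sketch (the view name jobs_with_jobid is my own):

```sql
-- Expose the reconstructed JobID alongside the normalised columns.
CREATE VIEW jobs_with_jobid AS
SELECT CONCAT_WS(
         '-',
         initials,
         DATE_FORMAT(date, '%Y%m%d'),
         LPAD(seq, 3, '0')
       ) AS JobID,
       jobs.*
FROM jobs;
```

There is no name clash with the original JobID column because the ALTER in step 3 dropped it.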
If you're using InnoDB, whilst you can't generate sequence numbers in this fashion I'd still recommend normalising your data as above.
So, I found a query that works (thus far).
Declare maxjobID VARCHAR(16);
Declare jobincrement SMALLINT;
SET maxjobID =
(Select MAX(
ifnull(SUBSTRING(
(Select JobID FROM jobs WHERE SUBSTRING(JobID,5,8)=date_format(curdate(), '%Y%m%d')),
5,
12),0)
)
);
if maxjobID=0
then set jobincrement=1;
else set jobincrement=(select substring(maxjobID,10,3))+1;
end if;
Set NEW.JobID=concat
(New.AssignedCompany,'-',date_format(curdate(), '%Y%m%d'),'-',(select lpad(jobincrement,3,'0')));
Thanks for the responses! Especially eggyal for pointing out the auto_increment capabilities in MyISAM.
The base query works as intended, but when I try to sum the first column, it's supposed to be 5, but instead I get 4. Why?
base query:
SET @last_task = 0;
SELECT
IF(@last_task = RobotShortestPath, 0, 1) AS new_task,
@last_task := RobotShortestPath
FROM rob_log
ORDER BY rog_log_id;
1 1456
0 1456
0 1456
1 1234
0 1234
1 1456
1 2556
1 1456
sum query
SET @last_task = 0;
SELECT SUM(new_task) AS tasks_performed
FROM (
SELECT
IF(@last_task = RobotShortestPath, 0, 1) AS new_task,
@last_task := RobotShortestPath
FROM rob_log
ORDER BY rog_log_id
) AS tmp
4
table structure
CREATE TABLE rob_log (
rog_log_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
# RobotPosX FLOAT NOT NULL,
# RobotPosY FLOAT NOT NULL,
# RobotPosDir TINYINT UNSIGNED NOT NULL,
RobotShortestPath MEDIUMINT UNSIGNED NOT NULL,
PRIMARY KEY(rog_log_id),
KEY (rog_log_id, RobotShortestPath)
);
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 1234;
INSERT INTO rob_log(RobotShortestPath) SELECT 1234;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
INSERT INTO rob_log(RobotShortestPath) SELECT 2556;
INSERT INTO rob_log(RobotShortestPath) SELECT 1456;
I'm testing it at SQL Fiddle: http://sqlfiddle.com/#!2/e80f5/3
as an answer for Counting changes in timeline with MySQL,
but got really confused.
Here's the reason (as discussed on Twitter):
The variable @last_task was defined in a separate query "batch". I break up the queries on SQL Fiddle into individual batches, executed separately. I do this so you can see the output from each batch as a distinct result set below. In your Fiddle, you can see that there are two sets of output: http://sqlfiddle.com/#!2/e80f5/3/0 and http://sqlfiddle.com/#!2/e80f5/3/1. These map to the two statements you are running (the SET and the SELECT). The problem is, your SET statement defines a variable that only exists in the first batch; when the SELECT statement runs, it is a separate batch and your variable isn't defined within that context.
To correct this problem, all you have to do is define a different query terminator. Note the dropdown box/button under both the schema and the query panels ( [ ; ] ) - click on that, and you can choose something other than semicolon (the default). Then your two statements will be included together as part of the same batch, and you'll get the result you want. For example:
http://sqlfiddle.com/#!2/e80f5/9
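An alternative that sidesteps the batch issue entirely is to initialise the variable inside the query itself via a derived table, so everything runs as one statement; a sketch:

```sql
SELECT SUM(new_task) AS tasks_performed
FROM (
  SELECT
    IF(@last_task = RobotShortestPath, 0, 1) AS new_task,
    @last_task := RobotShortestPath
  FROM rob_log
  CROSS JOIN (SELECT @last_task := 0) AS init  -- seeds the variable in-query
  ORDER BY rog_log_id
) AS tmp;
```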
It's probably a bug in an older version of MySQL.
I have tried it on MySQL 5.5 and it works perfectly.
I've got this table
CREATE TABLE `subevents` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(150) DEFAULT NULL,
`content` text,
`class` tinyint(4) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`)
) ENGINE=MyISAM
Each row can have a different value in the 'class' field.
I'd like to select any number of rows, ordered randomly, as long as the sum of the values in the 'class' field is equal to 100.
How could I accomplish it directly in the MySQL query without doing it later in PHP?
Thanks everybody!
By "ordered randomly" I assume you mean that the order of the rows doesn't matter but no row can be used more than once. So you are looking for a combination of rows in which the sum of class equals 100. Use the brute force method. Randomly generate possible solutions until you find one that works.
delimiter //
CREATE PROCEDURE subsetsum(IN total INTEGER)
BEGIN
DECLARE sum INTEGER;
REPEAT
CREATE OR REPLACE VIEW `solution`
AS SELECT * FROM `subevents`
WHERE 0.5 <= RAND();
SELECT SUM(`class`) INTO sum FROM `solution`;
UNTIL sum = total END REPEAT;
END
//
delimiter ;
CALL subsetsum(100); /* For example */
SELECT * FROM `solution`;
I have tested this with tables having a TINYINT column of random values and it is actually reasonably fast. The only problem is that there is no guarantee that subsetsum() will ever return.
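If the non-termination risk matters, a bounded variant is straightforward; this sketch (the procedure name, attempts cap, and temporary table are my own additions) gives up after a fixed number of attempts, and materialises the candidate rows so the set whose sum is checked is exactly the set returned (a view over RAND() re-samples on every read):

```sql
delimiter //
CREATE PROCEDURE subsetsum_bounded(IN total INTEGER, IN max_attempts INTEGER)
BEGIN
  DECLARE s INTEGER DEFAULT NULL;
  DECLARE attempts INTEGER DEFAULT 0;
  REPEAT
    -- Materialise one random candidate subset.
    DROP TEMPORARY TABLE IF EXISTS solution_tmp;
    CREATE TEMPORARY TABLE solution_tmp
      AS SELECT * FROM `subevents` WHERE 0.5 <= RAND();
    SELECT SUM(`class`) INTO s FROM solution_tmp;
    SET attempts = attempts + 1;
  UNTIL s = total OR attempts >= max_attempts END REPEAT;
END
//
delimiter ;

CALL subsetsum_bounded(100, 1000);
-- The temporary table survives the call within the same session; verify
-- SUM(class) = 100 before trusting it, in case the attempts cap was hit.
SELECT * FROM solution_tmp;
```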
I don't think this is possible with only SQL... The only thing that comes to my mind is to redo the SQL query as long as the sum isn't 100.
But I have no clue how to select a random number of rows at once.