Assuming I have a table like the one below:
create table filetype_filestatus (
  id integer(11) not null auto_increment,
  file_type_id integer(11) not null,
  file_status_id integer(11) not null,
  primary key (id)
);
I want to add a sequence column like so:
alter table filetype_filestatus add column sequence integer(11) not null;
alter table filetype_filestatus add unique key idx1 (file_type_id, file_status_id, sequence);
Now I want to add the column, which is straightforward, and populate it with some default values that satisfy the unique key.
The sequence column is to allow the user to arbitrarily order the display of file_status for a particular file_type. I'm not too concerned by the initial order since that can be revised in the application.
Ideally I would end up with something like:
FileType  FileStatus  Sequence
1         1           1
1         2           2
1         3           3
2         1           1
2         2           2
The best I can think of is something like:
update filetype_filestatus set sequence = file_type_id * 1000 + file_status_id;
Are there better approaches?
Hmm, I believe this should work:
UPDATE filetype_filestatus as a
SET sequence = (SELECT COALESCE(MAX(b.sequence), 0)
FROM filetype_filestatus as b
WHERE b.file_type_id = a.file_type_id) + 1
WHERE sequence = 0
I'd recommend this order: run the ALTER TABLE to add the new column (letting it default to 0), run the update statement, and only then add the unique constraint (which you have to do anyway). Anything that gets touched ends up with a sequence greater than 0, so the update can safely be run multiple times, too.
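A minimal sketch of that order of operations (the explicit DEFAULT 0 is an assumption, there to keep the NOT NULL column happy until it is populated):

-- 1. Add the column, temporarily defaulting to 0
alter table filetype_filestatus add column sequence integer(11) not null default 0;

-- 2. Backfill it (e.g. with the update statement above)

-- 3. Only now add the unique key
alter table filetype_filestatus add unique key idx1 (file_type_id, file_status_id, sequence);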
EDIT:
As @Dems has pointed out, the subquery is evaluated before the update, so the above doesn't actually work for this purpose. It does work for single-row inserts (which doesn't help at all here).
EDIT:
Gah, you have an id column, so this works just fine (and yes, I tested this one first):
UPDATE filetype_filestatus as a
SET sequence = (SELECT COALESCE(COUNT(*), 0)
FROM filetype_filestatus as b
WHERE b.file_type_id = a.file_type_id
AND b.id < a.id) + 1
WHERE sequence = 0
Don't know about the performance implications, though.
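If the correlated COUNT(*) turns out to be slow on a big table, an index covering the correlation columns should help; a sketch (the index name is made up):

ALTER TABLE filetype_filestatus ADD KEY idx_ftype_id (file_type_id, id);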
If all you need are "some values that conform to idx1", why not just copy the id field? It is, after all, unique...
UPDATE
filetype_filestatus
SET
sequence = id;
EDIT
How to get sequential values, based on the OP's changes to the question being asked.
ROW_NUMBER() is not available in MySQL (prior to 8.0 — see the sketch after Update Option 2 below), and it is also my understanding that you can't reference the table being updated in the source query.
create temporary table temp_filetype_filestatus (
  id integer(11) not null auto_increment,
  file_type_id integer(11) not null,
  file_status_id integer(11) not null,
  PRIMARY KEY (file_type_id, file_status_id),
  KEY (id)   -- the auto_increment column must be part of some key
);
INSERT INTO temp_filetype_filestatus (
file_type_id,
file_status_id
)
SELECT
file_type_id,
file_status_id
FROM
filetype_filestatus
ORDER BY
file_type_id,
file_status_id
-- Update Option 1
-- (note: MySQL won't let you refer to a TEMPORARY table more than once in
--  the same query, so this variant needs the work table to be non-TEMPORARY)
------------------
UPDATE
filetype_filestatus
SET
sequence
=
(SELECT id FROM temp_filetype_filestatus
WHERE file_type_id = filetype_filestatus.file_type_id
AND file_status_id = filetype_filestatus.file_status_id)
-
(SELECT id FROM temp_filetype_filestatus
WHERE file_type_id = filetype_filestatus.file_type_id
ORDER BY file_status_id ASC LIMIT 1)
+
1
-- Update Option 2
------------------
UPDATE
filetype_filestatus
SET
sequence
=
(SELECT COUNT(*) FROM temp_filetype_filestatus
WHERE file_type_id = filetype_filestatus.file_type_id
AND file_status_id <= filetype_filestatus.file_status_id)
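For anyone on MySQL 8.0 or later (not an option when this was written), ROW_NUMBER() does exist and the whole backfill collapses into one statement. A sketch, materialising the numbering in a derived table so the "can't reuse the target table" restriction doesn't bite:

UPDATE filetype_filestatus f
JOIN (SELECT id,
             ROW_NUMBER() OVER (PARTITION BY file_type_id
                                ORDER BY file_status_id) AS rn
      FROM filetype_filestatus) x ON x.id = f.id
SET f.sequence = x.rn;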
Problem description
I have a table, say trans_flow:
CREATE TABLE trans_flow (
id BIGINT(20) AUTO_INCREMENT PRIMARY KEY,
card_no VARCHAR(50) DEFAULT NULL,
money INT(20) DEFAULT NULL
)
New data is inserted into this table constantly.
Now, I want to fetch only the rows that were not fetched by the previous query. For example, at 5:00 the ids range from 1 to 100, and I read rows 80 - 100 and do some processing. Then, at 5:01, the ids have reached 150, and I want to get exactly rows 101 - 150; otherwise the processing program would read in old, already-processed data. Note that such queries are issued continuously. From a certain perspective, I want to implement "stream processing" on top of MySQL.
A tentative idea
I have a simple but maybe ugly solution. I create an auxiliary table query_cursor which stores the beginning and end ids of one query:
CREATE TABLE query_cursor (
task_id VARCHAR(20) PRIMARY KEY COMMENT 'Specify which task is reading this table',
first_row_id BIGINT(20) DEFAULT NULL,
last_row_id BIGINT(20) DEFAULT NULL
)
During each query, I first update the query range stored in this table by:
UPDATE query_cursor
SET first_row_id = last_row_id + 1,   -- assignments run left to right, so this sees the old last_row_id
    last_row_id  = (SELECT MAX(id) FROM trans_flow)
WHERE task_id = 'xxx'
And then query trans_flow using the stored range:
SELECT * FROM trans_flow
WHERE id BETWEEN (SELECT first_row_id FROM query_cursor WHERE task_id = 'xxx')
AND (SELECT last_row_id FROM query_cursor WHERE task_id = 'xxx')
Question for help
Is there a simpler and more elegant implementation that can achieve the same effect (the best if no need to use an auxiliary table)? The version of MySQL is 5.7.
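For what it's worth, a minimal sketch of the no-auxiliary-table variant, assuming the reading task can simply remember the last id it saw (in application memory, or its own config):

SET @last_seen_id = 0;   -- the application keeps this value between runs; 0 on the very first run

SELECT *
FROM trans_flow
WHERE id > @last_seen_id
ORDER BY id;

-- afterwards, store MAX(id) of the fetched rows as the new @last_seen_id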
Once upon a time, I had a table like this:
CREATE TABLE `Events` (
`EvtId` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`AlarmId` INT UNSIGNED,
-- Other fields omitted for brevity
PRIMARY KEY (`EvtId`)
);
AlarmId was permitted to be NULL.
Now, because I want to expand from zero-or-one alarm per event to zero-or-more alarms per event, in a software update I'm changing instances of my database to have this instead:
CREATE TABLE `Events` (
`EvtId` INT UNSIGNED NOT NULL AUTO_INCREMENT,
-- Other fields omitted for brevity
PRIMARY KEY (`EvtId`)
);
CREATE TABLE `EventAlarms` (
`EvtId` INT UNSIGNED NOT NULL,
`AlarmId` INT UNSIGNED NOT NULL,
PRIMARY KEY (`EvtId`, `AlarmId`),
CONSTRAINT `fk_evt` FOREIGN KEY (`EvtId`) REFERENCES `Events` (`EvtId`)
ON DELETE CASCADE ON UPDATE CASCADE
);
So far so good.
The data is easy to migrate, too:
INSERT INTO `EventAlarms`
SELECT `EvtId`, `AlarmId` FROM `Events` WHERE `AlarmId` IS NOT NULL;
ALTER TABLE `Events` DROP COLUMN `AlarmId`;
Thing is, my system requires that a downgrade also be possible. I accept that downgrades will sometimes be lossy in terms of data, and that's okay. However, they do need to work where possible, and result in the older database structure while making a best effort to keep as much original data as is reasonably possible.
In this case, that means going from zero-or-more alarms per event, to zero-or-one alarm per event. I could do it like this:
ALTER TABLE `Events` ADD COLUMN `AlarmId` INT UNSIGNED;
UPDATE `Events`
LEFT JOIN `EventAlarms` USING(`EvtId`)
SET `Events`.`AlarmId` = `EventAlarms`.`AlarmId`;
DROP TABLE `EventAlarms`;
… which is kind of fine, since I don't really care which one gets kept (it's best-effort, remember). However, as warned, this is not good for replication as the result may be unpredictable:
> SHOW WARNINGS;
Unsafe statement written to the binary log using statement format since
BINLOG_FORMAT = STATEMENT. Statements writing to a table with an auto-
increment column after selecting from another table are unsafe because the
order in which rows are retrieved determines what (if any) rows will be
written. This order cannot be predicted and may differ on master and the
slave.
Is there a way to somehow "order" or "limit" the join in the update, or shall I just skip this whole enterprise and stop trying to be clever? If the latter, how can I leave the downgraded AlarmId as NULL iff there were multiple rows in the new table between which we cannot safely distinguish? I do want to migrate the AlarmId if there is only one.
As a downgrade is a "one-time" maintenance operation, it doesn't have to be exactly real-time, but speed would be nice. Both tables could potentially have thousands of rows.
(MariaDB 5.5.56 on CentOS 7, but must also work on whatever ships with CentOS 6.)
First, we can perform a bit of analysis, with a self-join:
SELECT `A`.`EvtId`, COUNT(`B`.`EvtId`) AS `N`
FROM `EventAlarms` AS `A`
LEFT JOIN `EventAlarms` AS `B` ON (`A`.`EvtId` = `B`.`EvtId`)
GROUP BY `A`.`EvtId`
The result will look something like this:
EvtId N
--------------
370 1
371 1
372 4
379 1
380 1
382 16
383 1
384 1
Now you can, if you like, drop all the rows representing events that map to more than one alarm (which you suggest as a fallback solution; I think this makes sense, though you could modify the below to leave one of them in place if you really wanted).
Instead of actually DELETEing anything, though, it's easier to introduce a new table, populated using the self-joining query shown above:
CREATE TEMPORARY TABLE `_migrate` (
`EvtId` INT UNSIGNED,
`n` INT UNSIGNED,
PRIMARY KEY (`EvtId`),
KEY `idx_n` (`n`)
);
INSERT INTO `_migrate`
SELECT `A`.`EvtId`, COUNT(`B`.`EvtId`) AS `n`
FROM `EventAlarms` AS `A`
LEFT JOIN `EventAlarms` AS `B` ON (`A`.`EvtId` = `B`.`EvtId`)
GROUP BY `A`.`EvtId`;
Then your update becomes:
UPDATE `Events`
LEFT JOIN `_migrate` ON (`Events`.`EvtId` = `_migrate`.`EvtId` AND `_migrate`.`n` = 1)
LEFT JOIN `EventAlarms` ON (`_migrate`.`EvtId` = `EventAlarms`.`EvtId`)
SET `Events`.`AlarmId` = `EventAlarms`.`AlarmId`
WHERE `EventAlarms`.`AlarmId` IS NOT NULL
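Before dropping anything, it's easy to sanity-check the migration; a quick sketch:

-- events that still had multiple alarms (these intentionally keep AlarmId = NULL)
SELECT `EvtId`, `n` FROM `_migrate` WHERE `n` > 1;

-- events that did get an alarm copied across
SELECT COUNT(*) FROM `Events` WHERE `AlarmId` IS NOT NULL;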
And, finally, clean up after yourself:
DROP TABLE `_migrate`;
DROP TABLE `EventAlarms`;
MySQL still kicks out the same warning as before, but since we know that at most one value will be pulled from the source tables, we can basically just ignore it.
It should even be reasonably efficient, as we can tell from the equivalent EXPLAIN SELECT:
EXPLAIN SELECT `Events`.`EvtId` FROM `Events`
LEFT JOIN `_migrate` ON (`Events`.`EvtId` = `_migrate`.`EvtId` AND `_migrate`.`n` = 1)
LEFT JOIN `EventAlarms` ON (`_migrate`.`EvtId` = `EventAlarms`.`EvtId`)
WHERE `EventAlarms`.`AlarmId` IS NOT NULL
id select_type table type possible_keys key key_len ref rows Extra
---------------------------------------------------------------------------------------------------------------------
1 SIMPLE _migrate ref PRIMARY,idx_n idx_n 5 const 6 Using index
1 SIMPLE EventAlarms ref PRIMARY,fk_AlarmId PRIMARY 8 db._migrate.EvtId 1 Using where; Using index
1 SIMPLE Events eq_ref PRIMARY PRIMARY 8 db._migrate.EvtId 1 Using where; Using index
Use a subquery and user variables to select just one row per EvtId from EventAlarms.
In your UPDATE, instead of joining EventAlarms directly, use:
( SELECT `EvtId`, `AlarmId`
  FROM ( SELECT `EvtId`, `AlarmId`,
                @rn := IF ( @EvtId = `EvtId`,
                            @rn + 1,
                            IF ( @EvtId := `EvtId`, 1, 1)
                          ) AS rn
         FROM `EventAlarms`
         CROSS JOIN ( SELECT @EvtId := 0, @rn := 0 ) AS vars
         ORDER BY EvtId, AlarmId
       ) AS t
  WHERE rn = 1
) AS SingleEventAlarms
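So the downgrade UPDATE would look roughly like this (a sketch; SingleEventAlarms is just the derived table above, repeated inline):

UPDATE `Events`
LEFT JOIN (
    SELECT `EvtId`, `AlarmId`
    FROM ( SELECT `EvtId`, `AlarmId`,
                  @rn := IF ( @EvtId = `EvtId`,
                              @rn + 1,
                              IF ( @EvtId := `EvtId`, 1, 1)
                            ) AS rn
           FROM `EventAlarms`
           CROSS JOIN ( SELECT @EvtId := 0, @rn := 0 ) AS vars
           ORDER BY EvtId, AlarmId
         ) AS t
    WHERE rn = 1
) AS `SingleEventAlarms` USING (`EvtId`)
SET `Events`.`AlarmId` = `SingleEventAlarms`.`AlarmId`;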
There are a few sections in the webpage layout:
Latest news
Recommend news
Followed news
History news
Most Viewed news
The user can select the order of the layout, e.g. they can move the Most Viewed news to the top.
So I am considering how to store this choice in a table in a way that is convenient for development.
The number of choices is fixed: only these 5.
The user will frequently update the order
I was thinking of:
create a user_choice table
user_id
latest (nullable, integer)
recommend (nullable, integer)
follow (nullable, integer)
history (nullable, integer)
most_view (nullable, integer)
So, whenever a user registers, create a record in the table, and whenever they update their preference, change that row. This approach seems feasible, but it is not straightforward to re-order the layout in the program.
So, are there any better structure ideas?
Thanks for helping
I would create a table layout_order with user_id, layout_id and order_id. This way it is easy to add more layouts without needing to add more columns to your table.
When you create a new user you assign a default order, as sketched below the table.
user_id  layout_id  order_id
1        1          1
1        2          2
1        3          3
1        4          4
1        5          5
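Seeding that default order for a new user is then a single insert; a sketch (user_id 2 is made up):

INSERT INTO layout_order (user_id, layout_id, order_id)
VALUES (2, 1, 1), (2, 2, 2), (2, 3, 3), (2, 4, 4), (2, 5, 5);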
Here is an example of an UPDATE. You need @user_id, @layout_id and @order_id for that layout.
Here I use a variable to create a new rank with a special ORDER BY.
SqlFiddle Demo: you can check what just the SELECT inside the JOIN returns.
SET @layout_id = 5;
SET @order_id = 2;
SET @user_id = 1;

UPDATE layout_order L
JOIN (SELECT l.*, @rownumber := @rownumber + 1 AS rank
      FROM layout_order l
      CROSS JOIN (SELECT @rownumber := 0) r
      WHERE user_id = @user_id
      ORDER BY CASE
                 WHEN layout_id = @layout_id THEN @order_id  -- here is the variable
                 WHEN order_id < @order_id THEN order_id     -- order doesn't change
                 WHEN order_id >= @order_id THEN order_id + 1
               END
     ) t
  ON L.user_id = t.user_id
 AND L.layout_id = t.layout_id
SET L.order_id = t.rank;
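Afterwards, the new order for that user can be verified with a quick check (a sketch):

SELECT layout_id, order_id
FROM layout_order
WHERE user_id = @user_id
ORDER BY order_id;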
I would move in this direction:
create table user
( -- your pre-existing user table, this is a stub
id int auto_increment primary key
-- the rest
);
create table sortChoices
( -- codes and descriptions of them
code int auto_increment primary key,
description varchar(100) not null
);
create table user_sortChoices_Junction
( -- intersect or junction table for user / sortChoices
userId int not null,
code int not null,
theOrder int not null,
primary key (userId,code), -- prevents dupes
constraint `fk_2user` foreign key (userId) references user(id),
constraint `fk_2sc` foreign key (code) references sortChoices(code)
);
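Rendering a user's layout is then a simple join ordered by theOrder; a sketch, assuming userId 1:

SELECT sc.description
FROM user_sortChoices_Junction j
JOIN sortChoices sc ON sc.code = j.code
WHERE j.userId = 1
ORDER BY j.theOrder;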
The junction table drives it and is flexible, and those who follow won't lock themselves into the same thinking that "there will only ever be 5".
Plus, there is the data normalization issue, for those who prefer CSV values; I did a write-up on that, and it ties in to junction tables.
So this is as much for those who follow as it is for the OP's question.
Given the following table structure, how can I change the value of primary to 0 when a duplicate unique index is found?
CREATE TABLE `ncur` (
`user_id` INT NOT NULL,
`rank_id` INT NOT NULL,
`primary` TINYINT DEFAULT NULL,
PRIMARY KEY (`user_id`, `rank_id`),
UNIQUE (`user_id`, `primary`)
);
So, when I run a query like this:
UPDATE `ncur` SET `primary` = 1 WHERE `user_id` = 4 AND `rank_id` = 5;
When the unique constraint on (user_id, primary) would be violated, I want it to set all primary values for that user_id to NULL, and then complete the update by updating the row it had found.
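In other words, the desired behaviour is roughly what these two plain statements would do (a sketch of the intent, not the single-statement solution being asked for):

-- clear any existing primary flag for the user, then flag the target row
UPDATE `ncur` SET `primary` = NULL WHERE `user_id` = 4;
UPDATE `ncur` SET `primary` = 1 WHERE `user_id` = 4 AND `rank_id` = 5;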
I am not as familiar with MySQL as I am with Oracle; however, I think this query should work for you:
UPDATE `ncur` a
SET `primary` = (
/* 1st Subquery */
SELECT 1 FROM (SELECT * FROM `ncur`) b
WHERE b.`user_id` = a.`user_id` AND b.`rank_id` = a.`rank_id`
AND a.`rank_id` = 5
UNION ALL
/* 2nd Subquery */
SELECT 0 FROM (SELECT * FROM `ncur`) b
WHERE b.`user_id` = a.`user_id` AND b.`rank_id` <> 5 AND a.`rank_id` <> 5
GROUP BY `user_id`
HAVING COUNT(*) = 1
)
WHERE `user_id` = 4
Justification:
The query updates all the records that have user_id = 4.
For each of such records, primary is set to a different value of 1, 0, or NULL, depending on the value of rank_id in this record as well as the information regarding how many other records with the same user_id exists in the table.
The subquery that returns the value for primary consists of two subqueries combined with UNION ALL, at most one of which returns a value depending on the circumstances.
1st Subquery: This subquery returns 1 for the record with rank_id = 5; Otherwise it returns NULL.
2nd Subquery: This subquery returns 0 for the records with rank_id != 5 if there is only one such record in the table; otherwise it returns NULL.
Please note: if the query is run while there are no records with rank_id = 5, it will still update the other records according to the rules specified above. If this is not desired, the condition in the parent query must be changed from:
WHERE `user_id` = 4
to:
WHERE `user_id` = 4 AND
      EXISTS(SELECT * FROM (SELECT * FROM `ncur`) b WHERE b.`rank_id` = 5)
I have a table with fields (id, brand, model, os):
id is the primary key
the table has ~ 6000 rows
Now I want to insert a new row with id = 4012 (which already exists) and increment the id of every row with id >= 4012 by one.
The silliest way:
make table backup
remove entries with id >= 4012
insert new entry with id = 4012
restore table from backup
stupid, but works ))
Looking for a more beautiful solution.
Thx
Table structure:
CREATE TABLE IF NOT EXISTS `mobileslist` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`brand` text NOT NULL,
`model` text NOT NULL,
`os` text NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=14823 ;
I tried:
UPDATE mobileslist SET id = id + 1 WHERE id IN (SELECT id FROM
mobileslist WHERE id >= 4822 ORDER BY id);
but got the error:
1093 - You can't specify target table 'mobileslist' for update in FROM clause
1) Create a temporary table, with descending order by ID.
2) Perform an UPDATE query on the temporary table which sets ID = ID + 1 WHERE ID >= 4012
3) Drop the temporary table
4) Perform your insert operation on the original table.
Hope I understood it right: you want to insert a new entry at position 4012, moving and reassigning all the entries currently at id = 4012 or more to a new id incremented by 1.
Hope this helps.
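The descending-order idea can also be applied to the original table directly, since a single-table UPDATE accepts ORDER BY; a sketch (the brand/model/os values are placeholders):

-- shift the ids up starting from the top, so no row ever lands on an occupied id
UPDATE mobileslist
SET id = id + 1
WHERE id >= 4012
ORDER BY id DESC;

-- the slot 4012 is now free
INSERT INTO mobileslist (id, brand, model, os)
VALUES (4012, 'SomeBrand', 'SomeModel', 'SomeOS');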
Try this:
UPDATE <TableName>
SET
id = id + 1
WHERE id IN (SELECT id FROM <TableName> WHERE id >= 4012 ORDER BY id)
INSERT INTO <TableName>
(id , brand, model , os)
VALUE
(4012, "<BrandName>", "<Model>", "<OS>")
Updated Answer:
SET @MaxId := (SELECT MAX(id) FROM mobileslist);
SET @Difference := @MaxId - 4012 + 1;

-- move the affected rows well above the current maximum so nothing collides
UPDATE mobileslist
SET id = id + @Difference
WHERE id >= 4012;

-- the slot 4012 is now free
INSERT INTO mobileslist
    (id, brand, model, os)
VALUES
    (4012, 'TestBrand', 'TestModel', 'TestOS');

-- bring the moved rows back, one position above where they started;
-- ascending order frees each target id before it is needed
UPDATE mobileslist
SET id = id - @Difference + 1
WHERE id > 4012
ORDER BY id;