I am using MySQL 5.5. I need to add a trigger to my table using the MySQL trigger syntax: http://dev.mysql.com/doc/refman/5.0/en/trigger-syntax.html
The example given there doesn't explain how I can go about doing this.
I have a table - table(a INT, b INT, c INT). Fields a and b are numbers, while field c should be a + b. Now I'm sure you are wondering why I don't just slap this in a view and be done with it, or put it in my code. The reason is that I am working with a client that needs the convenience of an auto-calculated field, with the ability to modify the value in case it needs variation. They are an auditing company, and massaging the numbers is often required because of companies missing audit dates, etc.
So how can I create a trigger that will:
on insert:
make `c` the value of `a` + `b`.
on update:
if the value of NEW.`c`==OLD.`c` THEN
make `c` the value of `a` + `b`.
ELSE
no change
The reason the update should not change the value when the new value differs from the old one is that a difference means they want the number to be slightly different from the actual sum.
Please feel free to change my logic - my aim is to preserve the value of c if it has been entered manually, and to overwrite it if it hasn't been touched.
Thanks!
I know this is an old question, but if the answer is still needed here it is.
First of all, an id column has been added to the table for the example's sake, to make the updates more direct.
CREATE TABLE table1
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
a INT, b INT, c INT
);
Now, in the INSERT trigger, the logic is changed to allow inserting a pre-calculated value into the c column:
CREATE TRIGGER tg_table1_before_insert
BEFORE INSERT ON table1
FOR EACH ROW
SET NEW.c = IF(NEW.c IS NULL, NEW.a + NEW.b, NEW.c);
An UPDATE trigger implements the logic per your requirements:
CREATE TRIGGER tg_table1_before_update
BEFORE UPDATE ON table1
FOR EACH ROW
SET NEW.c = IF(NEW.c <=> OLD.c, NEW.a + NEW.b, NEW.c);
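The `<=>` here is MySQL's NULL-safe equality operator: unlike `=`, it treats two NULLs as equal, so the trigger also recalculates when c was NULL both before and after the update. A quick check, for reference:

SELECT NULL <=> NULL, NULL = NULL;
-- returns 1 and NULL respectively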
Now let's do some inserts and updates:
INSERT INTO table1 (a, b) VALUES (1, 2), (3, 4);
INSERT INTO table1 (a, b, c) VALUES (5, 6, 0), (7, 8, 100);
UPDATE table1 SET c = 25 WHERE id = 2;
UPDATE table1 SET c = c WHERE id = 3;
As a result, we have:
| ID | A | B | C   |
|----|---|---|-----|
| 1  | 1 | 2 | 3   | -- calculated on insert
| 2  | 3 | 4 | 25  | -- explicitly set on update
| 3  | 5 | 6 | 11  | -- re-calculated on update
| 4  | 7 | 8 | 100 | -- explicitly set on insert
Here is a SQLFiddle demo.
I am new to MySQL and would like to create a table where a constant letter depicting the department is added to an auto-increment number. This way I would be able to identify the category of the worker upon viewing the ID.
Ex. Dept A and employee 135. The ID I am imagining should read A135 or something similar. I have created the table, the auto-increment works fine, and the constant letter has been declared and appears in the table. However, I would like to concatenate the two in order to use A135 as a primary key.
Any help, please?
This is quite tricky, and you would probably be better off doing the concatenation manually in a select query.
But since you asked for it...
In normal usage you would use a computed column for this, but computed columns do not support auto-incremented columns in their declaration. So you need to use triggers:
on insert, query information_schema.tables to retrieve the autoincremented id that is about to be assigned and use it to generate the custom id
on update, reset the custom id
Consider the following table structure:
create table workers (
id int auto_increment primary key,
name varchar(50) not null,
dept varchar(1) not null,
custom_id varchar(12)
);
Here is the trigger for insert:
delimiter //
create trigger trg_workers_insert before insert ON workers
for each row
begin
if new.custom_id is null then
select auto_increment into @nextid
from information_schema.tables
where table_name = 'workers' and table_schema = database();
set new.custom_id = CONCAT(new.dept, lpad(@nextid, 11, 0));
end if;
end
//
delimiter ;
And the trigger for update:
delimiter //
create trigger trg_workers_update before update ON workers
for each row
begin
if new.dept is not null then
set new.custom_id = CONCAT(new.dept, lpad(old.id, 11, 0));
end if;
end
//
delimiter ;
Let's run a couple of inserts for testing:
insert into workers (dept, name) values ('A', 'John');
insert into workers (dept, name) values ('B', 'Jim');
select * from workers;
| id | name | dept | custom_id |
| --- | ---- | ---- | ------------ |
| 1 | John | A | A00000000001 |
| 2 | Jim | B | B00000000002 |
And let's test the update trigger
update workers set dept = 'C' where name = 'Jim';
select * from workers;
| id | name | dept | custom_id |
| --- | ---- | ---- | ------------ |
| 1 | John | A | A00000000001 |
| 2 | Jim | C | C00000000002 |
Demo on DB Fiddle
Sorry, my answer does not fit in a comment.
I agree with @GMB.
This is a tricky situation, and in some cases (mainly selects) it will carry a performance risk, because you'll have to split the PK in WHERE statements, which is not recommended.
Having a column for the department and another for the auto-increment is more logical. The only drawback is that to know the number of employees per department you'll have to do a COUNT grouped by dept, instead of a MAX() over the split concatenated PK; the latter would come at a high performance cost.
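For instance, a hedged comparison, reusing the workers table from the trigger example above:

-- straightforward: aggregate over the atomic dept column
SELECT dept, COUNT(*) FROM workers GROUP BY dept;

-- costly alternative: splitting the concatenated id first
SELECT LEFT(custom_id, 1) AS dept,
       MAX(CAST(SUBSTRING(custom_id, 2) AS UNSIGNED)) AS last_num
FROM workers
GROUP BY LEFT(custom_id, 1);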
Let atomic and logical data remain in separate columns. I would suggest creating a third column with the concatenated value.
If, for some company reason, you need B1 and A1 values for employees of different departments, I'd suggest having 3 columns:
Col1 - letter (NOT NULL)
Col2 - ID (not auto-increment, but calculated as in @GMB's solution) (NOT NULL)
Col3 - concatenation of Col1 and Col2 (NOT NULL)
PK (Col1, Col2)
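A minimal sketch of that layout (table and column names are illustrative):

CREATE TABLE employees (
dept CHAR(1) NOT NULL, -- Col1: department letter
emp_no INT NOT NULL, -- Col2: per-department number, calculated as in @GMB's answer
full_id VARCHAR(12) NOT NULL, -- Col3: concatenation, e.g. 'A135'
PRIMARY KEY (dept, emp_no)
);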
I have a table of app settings that looks like this:
| Code      | Value    |
|-----------|----------|
| MAC_ADDR  | 'SAMPLE' |
| PC_OPT    | 0        |
| SHOW_ADDR | 1        |
Then I'm receiving a JSON object in my trigger function, like this:
{MAC_ADDR: 'NEWADDR', PC_OPT: 1, SHOW_ADDR: 0}
How do I perform an update based on all the keys from my JSON?
You can just use json_populate_record, e.g.:
t=# create table tj("MAC_ADDR" text, "PC_OPT" int, "SHOW_ADDR" int);
CREATE TABLE
t=# insert into tj select 'SAMPLE',0,1;
INSERT 0 1
t=# select * from tj;
MAC_ADDR | PC_OPT | SHOW_ADDR
----------+--------+-----------
SAMPLE | 0 | 1
(1 row)
t=# update tj set "MAC_ADDR"=j."MAC_ADDR", "PC_OPT"=j."PC_OPT", "SHOW_ADDR"=j."SHOW_ADDR"
from json_populate_record(null::tj,'{"MAC_ADDR": "NEWADDR", "PC_OPT": 1, "SHOW_ADDR": 0}') j
where true;
UPDATE 1
t=# select * from tj;
MAC_ADDR | PC_OPT | SHOW_ADDR
----------+--------+-----------
NEWADDR | 1 | 0
(1 row)
Keep in mind: you did not specify a PK or another column to match rows on, so all rows will be updated in the example above. That suits your data sample, but would not in the case of more data.
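For example, if tj had a hypothetical id column, you would anchor the update to a single row:

update tj set "MAC_ADDR"=j."MAC_ADDR", "PC_OPT"=j."PC_OPT", "SHOW_ADDR"=j."SHOW_ADDR"
from json_populate_record(null::tj,'{"MAC_ADDR": "NEWADDR", "PC_OPT": 1, "SHOW_ADDR": 0}') j
where tj.id = 1; -- id does not exist in the tj above; it is only an assumption here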
Update
I misunderstood the question; with a (code, value) table it's even easier, e.g.:
update some_tbl
set "Value" = '{"MAC_ADDR": "NEWADDR", "PC_OPT": 1, "SHOW_ADDR": 0}'::json->'MAC_ADDR'
where "Code"='MAC_ADDR'
So again, using the code above, you can map the update onto the JSON keys...
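For instance, a sketch using json_each_text to update every (Code, Value) row whose Code matches a JSON key in one statement (assuming the same some_tbl):

update some_tbl t
set "Value" = j.value
from json_each_text('{"MAC_ADDR": "NEWADDR", "PC_OPT": 1, "SHOW_ADDR": 0}'::json) as j(key, value)
where t."Code" = j.key;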
Consider two tables like this:
TABLE: current
| id | dept | value |
|----|------|-------|
|  4 | A    |    20 |
|  5 | B    |    15 |
|  6 | A    |    25 |
TABLE: history
| id | dept | value |
|----|------|-------|
|  1 | A    |    10 |
|  2 | C    |    10 |
|  3 | B    |    20 |
These are just simple examples... in the actual system both tables have considerably more columns and considerably more rows (10k+ rows in current and 1M+ rows in history).
A client application is continuously (several times a second) inserting new rows into the current table, and 'moving' older existing rows from current to history (delete/insert within a single transaction).
Without blocking the client in this activity we need to take a consistent sum of values per dept across the two tables.
With transaction isolation level set to REPEATABLE READ we could just do:
SELECT dept, sum(value) FROM current GROUP BY dept;
followed by
SELECT dept, sum(value) FROM history GROUP BY dept;
and add the two sets of results together. BUT each query would block inserts on its respective table.
Changing the isolation level to READ COMMITTED and doing the same two SQLs would avoid blocking inserts, but now there is a risk of entries being double counted if moved from current to history while we are querying (since each SELECT creates its own snapshot).
Here's the question, then: what happens with isolation level READ COMMITTED if I do a UNION:
SELECT dept, sum(value) FROM current GROUP BY dept
UNION ALL
SELECT dept, sum(value) FROM history GROUP BY dept;
Will MySQL generate a consistent snapshot of both tables at the same time (thereby removing the risk of double counting), or will it still take a snapshot of one table first and then, some time later, a snapshot of the second?
I have not yet found any conclusive documentation to answer my question, so I went about trying to prove it instead. Although not proof in the scientific sense, my findings suggest a consistent snapshot is created for all tables in a UNION query.
Here's what I did.
Create the tables
DROP TABLE IF EXISTS `current`;
CREATE TABLE IF NOT EXISTS `current` (
`id` BIGINT NOT NULL COMMENT 'Unique numerical ID.',
`dept` BIGINT NOT NULL COMMENT 'Department',
`value` BIGINT NOT NULL COMMENT 'Value',
PRIMARY KEY (`id`));
DROP TABLE IF EXISTS `history`;
CREATE TABLE IF NOT EXISTS `history` (
`id` BIGINT NOT NULL COMMENT 'Unique numerical ID.',
`dept` BIGINT NOT NULL COMMENT 'Department',
`value` BIGINT NOT NULL COMMENT 'Value',
PRIMARY KEY (`id`));
Create a procedure that sets up 10 entries in the current table (id = 0 .. 9), then sits in a tight loop inserting one new row into current and 'moving' the oldest row from current to history. Each iteration is performed in a transaction; as a result, the current table remains at a steady 10 rows, while the history table grows quickly. At any point in time, min(current.id) = max(history.id) + 1.
DROP PROCEDURE IF EXISTS `idLoop`;
DELIMITER $$
CREATE PROCEDURE `idLoop`()
BEGIN
DECLARE n bigint;
-- Populate initial 10 rows in current table if not already there
SELECT IFNULL(MAX(id), -1) + 1 INTO n from current;
START TRANSACTION;
WHILE n < 10 DO
INSERT INTO current VALUES (n, n % 10, n % 1000);
SET n = n + 1;
END WHILE;
COMMIT;
-- In tight loop, insert new row and 'move' oldest current row to history
WHILE n < 10000000 DO
START TRANSACTION;
-- Insert new row to current
INSERT INTO current values(n, n % 10, n % 1000);
-- Move oldest row from current to history
INSERT INTO history SELECT * FROM current WHERE id = (n - 10);
DELETE FROM current where id = (n - 10);
COMMIT;
SET n = n + 1;
END WHILE;
END$$
DELIMITER ;
Start this procedure running (this call won't return for some time - which is intentional)
call idLoop();
In another session on the same database, we can now try out a variation of the UNION ALL query from my original posting.
I have modified it to (a) slow down execution, and (b) return a simple result set (two rows) that indicates whether any entries 'moved' whilst the query was running have been missed or double counted.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT 'HST' AS src, MAX(id) AS idx, COUNT(*) AS cnt, SUM(value) FROM history WHERE dept IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
UNION ALL
SELECT 'CRT' AS src, MIN(id) AS idx, COUNT(*) AS cnt, SUM(value) FROM current WHERE dept IN (0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
The sum(value) and where dept in (...) are just there to add work to the query and slow it down.
A positive outcome is indicated by the two idx values being adjacent, like this:
+-----+--------+--------+------------+
| src | idx | cnt | SUM(value) |
+-----+--------+--------+------------+
| HST | 625874 | 625875 | 312569875 |
| CRT | 625875 | 10 | 8795 |
+-----+--------+--------+------------+
2 rows in set (1.43 sec)
I'd still be happy to hear any authoritative information on this.
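For what it's worth, one documented way to get a single snapshot across both tables without relying on UNION behaviour is REPEATABLE READ plus an explicit consistent-snapshot transaction; whether its reads actually block your inserts is worth testing in your setup. A sketch:

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT; -- one snapshot covering all InnoDB tables
SELECT dept, SUM(value) FROM current GROUP BY dept;
SELECT dept, SUM(value) FROM history GROUP BY dept;
COMMIT;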
If I have a table structure like so:
CREATE TABLE a (
aid INT AUTO_INCREMENT,
acol1 INT,
acol2 INT,
PRIMARY KEY(aid)
);
CREATE TABLE b (
bid INT AUTO_INCREMENT,
bcol INT,
PRIMARY KEY(bid)
);
and run the statement:
INSERT INTO a SET acol1 = (SELECT MAX(acol1) + 1 as newMax FROM a WHERE id = ?)
Is there any way for me to retrieve the value of newMax after the query is executed? I am looking for something similar to last_insert_id() in PHP, but for temporary values in the query.
Obviously I am trying to avoid querying the database again if possible.
EDIT:
Actual situation:
CREATE TABLE `group` (
group_id INT AUTO_INCREMENT,
PRIMARY KEY(group_id)
) ENGINE = MyISAM;
CREATE TABLE item (
group_refid INT, -- references group.group_id
group_pos INT, -- represents this item's position in its group
text VARCHAR(4096), -- data
PRIMARY KEY(group_refid, group_pos)
) ENGINE = MyISAM;
So the issue is that when I add a new item to a group, I need to make its
group_pos = MAX(group_pos) WHERE group_refid = ?
which would require a query with something like:
INSERT INTO item (group_refid, group_pos) SET group_refid = 1, group_pos = (SELECT MAX(group_pos) + 1 FROM item WHERE group_refid = 1);
As you know, this query does not work. There is the added complexity that there may not be an item entry yet for a particular group_id.
I am trying to get this all into one atomic statement to prevent race conditions.
INSERT INTO item (group_refid,group_pos)
SELECT 1, (
SELECT IFNULL(MAX(group_pos),0) + 1
FROM item
WHERE group_refid=1
);
However, if we're talking MyISAM tables explicitly, not another engine, this would work:
mysql> CREATE TABLE items (group_refid INT, group_pos INT AUTO_INCREMENT, PRIMARY KEY(group_refid,group_pos)) ENGINE=MyISAM;
Query OK, 0 rows affected (0.12 sec)
mysql> INSERT INTO items (group_refid) VALUES (1),(2),(1),(1),(2),(4),(2),(1);
Query OK, 8 rows affected (0.02 sec)
Records: 8 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM items ORDER BY group_refid, group_pos;
+-------------+-----------+
| group_refid | group_pos |
+-------------+-----------+
| 1 | 1 |
| 1 | 2 |
| 1 | 3 |
| 1 | 4 |
| 2 | 1 |
| 2 | 2 |
| 2 | 3 |
| 4 | 1 |
+-------------+-----------+
However, that AUTO_INCREMENT on a second column in the PK is not portable to another database engine.
You can't. An INSERT query is for inserting, not selecting.
You must run another query, like:
SELECT MAX(acol1) + 1 as newMax FROM a WHERE acol2 = ?
For more, see the documentation.
I think you can do:
INSERT INTO b
SET bcol = (SELECT @acol := MAX(acol1) + 1 as newMax FROM a WHERE acol2 = ?);
Then you can use the variable @acol to get the value you want.
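For example, in the same session:

SELECT @acol AS newMax;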
EDIT:
Is this what you want?
INSERT INTO item (group_refid, group_pos)
SELECT 1, MAX(group_pos) + 1
FROM item
WHERE group_refid = 1;
Not directly in the statement, no. You'll need a separate statement to retrieve values.
But, you could "capture" the value from the SELECT into a user-defined variable, and then retrieve that with a SELECT (in the same database session), if you needed to "know" the value returned from the SELECT.
For example:
INSERT INTO b (bcol)
SELECT @bcol := (MAX(a.acol1) + 1) AS newMax
FROM a WHERE a.acol2 = ?;
SELECT @bcol + 0 AS new_bcol;
NOTE:
Note that a user-defined variable assigned in the SELECT is subject to modification elsewhere in the session; for example, it could be overwritten by the execution of a trigger defined on the target table of the INSERT.
As an edge case, not that anyone would do this, but it's also possible there might be a BEFORE INSERT trigger that modifies the value of bcol, before it gets inserted. So, if you need to "know" the value that was actually inserted, that would be available in an AFTER INSERT trigger. You could capture that in a user-defined variable in that trigger.
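A sketch of that capture (trigger and variable names are illustrative, table b as above):

CREATE TRIGGER tg_b_after_insert
AFTER INSERT ON b
FOR EACH ROW
SET @last_bcol = NEW.bcol; -- the value actually inserted, after any BEFORE INSERT changes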
Running a second, separate query against the a table is subject to a race condition: there is a small window of opportunity for another session to insert, update, or delete a row in table a, so the second query could return a different value than the first one did. Unless, of course, you are within the context of an InnoDB transaction using the REPEATABLE READ isolation level, or you've implemented some concurrency-killing locking strategy.
My problem is: I have a table with an auto_increment column. When I insert some values, all is right.
Insert first row : ID 1
Insert second row : ID 2
Now I want to insert a row at ID 10.
My problem is that after this, new rows are only inserted after ID 10 (which is the normal behaviour).
But I want the database to first fill up IDs 3-9 before doing that.
Any suggestions?
EDIT:
To clarify: this is for a URL shortener I want to build for myself.
I convert the id to a word (a-zA-Z0-9) for searching, and for saving in the database I convert it back to a number, which is the ID of the table.
The problem is now:
I shorten the first link (without a name) -> the ID is 1 and the automatically generated name is 1 converted to a-zA-Z0-9, which is a.
Next, the same happens -> the ID is 2 and the name is b, which is 2 converted.
Next it gets interesting: somebody wants to name the link test -> the ID is 4597691, which is the converted test.
Now if somebody adds another link with no name -> the ID is 4597692, which would be tesu, because the number is converted.
I want new rows to be inserted automatically at the last gap that was made (here, 3).
You could have another integer column for URL IDs.
Your process then might look like this:
If a default name is generated for a link, then you simply insert a new row, fill the URL ID column with the auto-increment value, and convert that value to the corresponding name.
If a custom name is specified for a URL, then, after inserting a row, the URL ID column is filled with the number obtained from converting the chosen name to an integer.
And so on. When looking up integer IDs, you would then use the URL ID column, not the table's auto-increment column.
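A sketch of that approach (table and column names are illustrative):

CREATE TABLE links (
id INT AUTO_INCREMENT PRIMARY KEY, -- table key; gaps here do not matter
url_id INT UNIQUE, -- the number the short name encodes
target VARCHAR(2048) NOT NULL
);

-- default name: copy the auto-increment value into url_id, then encode it as the name
INSERT INTO links (target) VALUES ('http://example.com/a');
UPDATE links SET url_id = id WHERE id = LAST_INSERT_ID();

-- custom name 'test': store its decoded number directly
INSERT INTO links (url_id, target) VALUES (4597691, 'http://example.com/b');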
If I'm missing something, please let me know.
You could do 6 dummy inserts and delete/update them later as needed. The concept of auto-increment is, by design, meant to limit the application's or user's control over the number, to ensure a unique value for every single record entered into the table.
ALTER TABLE MY_TABLE AUTO_INCREMENT = 3;
You would have to find the first unused id, store it in a user variable, and use it as the id for the insert.
SELECT @id := t1.id + 1
FROM sometable t1 LEFT JOIN sometable t2
ON t2.id = t1.id + 1
WHERE t2.id IS NULL
ORDER BY t1.id -- ensures the lowest gap is found first
LIMIT 1;
INSERT INTO sometable(id, col1, col2, ... ) VALUES(@id, 'aaa', 'bbb', ... );
You will have to run both queries for every insert while you still have gaps; it's up to you to decide whether it is worth doing.
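If the window between the two statements is a concern, they can also be collapsed into a single INSERT ... SELECT, which MySQL permits even when selecting from the target table (a sketch; column values are illustrative):

INSERT INTO sometable (id, col1, col2)
SELECT t1.id + 1, 'aaa', 'bbb'
FROM sometable t1 LEFT JOIN sometable t2 ON t2.id = t1.id + 1
WHERE t2.id IS NULL
ORDER BY t1.id
LIMIT 1;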
Not 100% sure what you're trying to achieve, but something like this might work:
drop table if exists foo;
create table foo
(
id int unsigned not null auto_increment primary key,
row_id tinyint unsigned unique not null default 0
)
engine=innodb;
insert into foo (row_id) values (1),(2),(10),(3),(7),(5);
select * from foo order by row_id;
+----+--------+
| id | row_id |
+----+--------+
| 1 | 1 |
| 2 | 2 |
| 4 | 3 |
| 6 | 5 |
| 5 | 7 |
| 3 | 10 |
+----+--------+
6 rows in set (0.00 sec)