MySQL trigger will not pass last_insert_id() to connection - mysql

This is my schema:
I am trying to have an insert into "desktops" or "laptops" insert an id generated automatically from "computers". That works.
My issue is that when I insert into either table, I cannot SELECT last_insert_id().
Is there something I am doing wrong? I am trying to pass the id all the way forward to my application for further processing. Selecting MAX(id) is not a valid solution. My SQL connection makes one insert statement, and the trigger should not break that functionality...
USE test;
CREATE TABLE `laptops` (
`id` int(11) NOT NULL,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=innodb DEFAULT CHARSET=utf8;
CREATE TABLE `desktops` (
`id` int(11) NOT NULL,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=innodb DEFAULT CHARSET=utf8;
CREATE TABLE `computers` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=innodb DEFAULT CHARSET=utf8;
DELIMITER $$
CREATE TRIGGER `laptops_BINS` BEFORE INSERT ON `laptops` FOR EACH ROW
BEGIN
IF (EXISTS(SELECT id FROM laptops WHERE name = NEW.name)) THEN
SET NEW.id = NULL;
ELSE
INSERT INTO computers (type) VALUES ('laptop');
SET NEW.id = LAST_INSERT_ID();
SET NEW.id = LAST_INSERT_ID(NEW.id);
END IF;
END$$
CREATE TRIGGER `desktop_BINS` BEFORE INSERT ON `desktops` FOR EACH ROW
BEGIN
IF (EXISTS(SELECT id FROM desktops WHERE name = NEW.name)) THEN
SET NEW.id = NULL;
ELSE
INSERT INTO computers (type) VALUES ('desktop');
SET NEW.id = LAST_INSERT_ID();
SET NEW.id = LAST_INSERT_ID(NEW.id);
END IF;
END$$
DELIMITER ;
INSERT INTO laptops (name) VALUES ('laptop1');
INSERT INTO desktops (name) VALUES ('desktop1');
INSERT INTO laptops (name) VALUES ('laptop2');
INSERT INTO desktops (name) VALUES ('desktop2');
SELECT last_insert_id();
Expecting 4; actually it's 0.
Any thoughts as to how I can fix the trigger? Maybe someone can help me format an AFTER INSERT statement to fix last_insert_id()?
I tried making the id columns auto-increment and unique in the laptops and desktops tables; neither fixes the issue.

Rather than trying to deal with the 'confusion' of last_insert_id(), I decided to change the table structure to a more 'common' format.
That is, the 'laptops' and 'desktops' tables now carry the auto_increment keys, and the 'computers' table has a primary key made of the 'computer_id' coming from 'laptops' or 'desktops' plus a 'computer_type'.
Here are the table structures and triggers.
It has been tested on MySQL 5.5.16 on Windows XP.
CREATE TABLE `laptops` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
CREATE TABLE `desktops` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(45) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;
CREATE TABLE `computers` (
`computer_id` int(11) NOT NULL,
`computer_type` varchar(45) NOT NULL,
PRIMARY KEY (`computer_id`,`computer_type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
DELIMITER $$
USE `testmysql`$$
DROP TRIGGER /*!50032 IF EXISTS */ `laptop_bins`$$
CREATE
/*!50017 DEFINER = 'test'@'localhost' */
TRIGGER `laptop_bins` AFTER INSERT ON `laptops`
FOR EACH ROW BEGIN
INSERT INTO computers (computer_id, computer_type ) VALUES (new.id, 'laptop');
END;
$$
DELIMITER ;
DELIMITER $$
USE `testmysql`$$
DROP TRIGGER /*!50032 IF EXISTS */ `desktop_bins`$$
CREATE
/*!50017 DEFINER = 'test'@'localhost' */
TRIGGER `desktop_bins` AFTER INSERT ON `desktops`
FOR EACH ROW BEGIN
INSERT INTO computers (computer_id, computer_type ) VALUES (new.id, 'desktop');
END;
$$
DELIMITER ;
Sample Queries and Output:
INSERT INTO laptops (NAME) VALUES ('laptop1');
INSERT INTO desktops (NAME) VALUES ('desktop1');
INSERT INTO laptops (NAME) VALUES ('laptop2');
INSERT INTO desktops (NAME) VALUES ('desktop2');
Laptops:
id name
------ ---------
1 laptop1
2 laptop2
Desktops:
id name
------ ----------
1 desktop1
2 desktop2
Computers:
computer_id computer_type
----------- ---------------
1 desktop
1 laptop
2 desktop
2 laptop
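With the auto-increment keys moved to 'laptops' and 'desktops', the id the application needs comes straight back on the same connection. A minimal check, assuming the tables and triggers above are already in place (the exact number returned depends on what has been inserted so far):
INSERT INTO laptops (name) VALUES ('laptop3');
-- The value was generated by the insert into `laptops` itself; the trigger's
-- insert into `computers` has no auto-increment column and does not change it.
SELECT LAST_INSERT_ID();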

This is more a possible approach to the requirement than an answer.
I can create the code if required. It is not a lot of code on top of what is here.
The problem is to maintain tables in another database, in sync, without doing lots of repeat work.
My suggestion:
In the 'computers' database, have a 'computers_new' table that is inserted into by the 'after insert' trigger and holds the relevant key information, including an 'unprocessed' column.
I would then run a script at regular intervals, or one triggered whenever the 'computers_new' table changes. It would (see the sketch below):
1) transfer the 'unprocessed' details to the 'laptops' and 'desktops' tables in the other database;
2) mark the transferred records as processed.
Advantages:
Lots of small chunks of work.
By using transactions it is reliable.
Drawbacks:
Ensuring the tables stay in sync.
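A rough sketch of those pieces follows; the staging-table layout and the 'otherdb' schema name are assumptions for illustration, not taken from the post above.
-- Hypothetical staging table in the 'computers' database, filled by the
-- AFTER INSERT triggers on 'laptops' and 'desktops'.
CREATE TABLE computers_new (
  computer_id   INT NOT NULL,
  computer_type VARCHAR(45) NOT NULL,
  name          VARCHAR(45) DEFAULT NULL,
  processed     TINYINT(1) NOT NULL DEFAULT 0,   -- 0 = unprocessed
  PRIMARY KEY (computer_id, computer_type)
) ENGINE=InnoDB;
-- Periodic sync job, run in one transaction so a failure leaves nothing half done:
START TRANSACTION;
INSERT INTO otherdb.laptops (id, name)
  SELECT computer_id, name FROM computers_new
  WHERE processed = 0 AND computer_type = 'laptop';
INSERT INTO otherdb.desktops (id, name)
  SELECT computer_id, name FROM computers_new
  WHERE processed = 0 AND computer_type = 'desktop';
UPDATE computers_new SET processed = 1 WHERE processed = 0;
COMMIT;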


Problem updating a table after insert using trigger in MySQL

I'll start by explaining how the DB should work:
In this example I have a table that stores work orders; this table has 5 fields in total: ID, Number, Worker, Temperature, Humidity.
And another table that stores sensor data with 4 fields: ID, Device ID, Temp, Hum.
We built an app that allows workers to submit work-order data. My problem comes here: the app generates the ID, Number and Worker fields, and we want to add the sensor data (Temperature and Humidity) to that table every time an insert is made. I tried doing this with a trigger but I get "Error Code: 1442. Can't update table 'ordenes' in stored function/trigger because it is already used by statement which invoked this stored function/trigger."
I tried multiple ways of doing it but I either get no change on the table or that error message.
I'm looking for a way to do this:
trigger after insert
> insert into "new created line"(temperature, humidity) values
(select temp,humidity from sensors order by id desc limit 1)
Thanks in advance
EDIT:
Create schema and tables:
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION';
CREATE SCHEMA IF NOT EXISTS `Cegasa` DEFAULT CHARACTER SET utf8 ;
USE `Cegasa` ;
DROP TABLE IF EXISTS `Cegasa`.`ORDENES` ;
CREATE TABLE IF NOT EXISTS `Cegasa`.`ORDENES` (
`idORDENES` INT NOT NULL AUTO_INCREMENT,
`NumOrden` VARCHAR(45) NULL,
`Empleado` VARCHAR(45) NULL,
`Temperatura` VARCHAR(45) NULL,
`Humedad` VARCHAR(45) NULL,
PRIMARY KEY (`idORDENES`))
ENGINE = InnoDB;
DROP TABLE IF EXISTS `Cegasa`.`sensores` ;
CREATE TABLE IF NOT EXISTS `Cegasa`.`sensores` (
`id` INT NOT NULL AUTO_INCREMENT,
`EUI` VARCHAR(45) NULL,
`Temp` VARCHAR(45) NULL,
`Hum` VARCHAR(45) NULL,
PRIMARY KEY (`id`))
ENGINE = InnoDB;
USE `Cegasa`;
DELIMITER $$
USE `Cegasa`$$
DROP TRIGGER IF EXISTS `Cegasa`.`ORDENES_AFTER_INSERT` $$
USE `Cegasa`$$
CREATE DEFINER = CURRENT_USER TRIGGER `Cegasa`.`ORDENES_AFTER_INSERT` AFTER INSERT ON `ORDENES` FOR EACH ROW
BEGIN
insert into `cegasa`.`Ordenes` (
`temp`,
`hum`
) SELECT temp,hum FROM sensores ORDER BY ID DESC LIMIT 1;
END$$
DELIMITER ;
SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
Example sensor-data insert:
INSERT INTO `cegasa`.`sensores`
(`id`,
`EUI`,
`Temp`,
`Hum`)
VALUES
(default,
"th312322aa",
"10",
"33"),(
default,
"daedaf12392",
"30",
"70"
);
An insert similar to the one the app makes:
INSERT INTO `cegasa`.`ordenes`
(`idORDENES`,
`NumOrden`,
`Empleado`)
VALUES
(default,
1,
"123a");
Desired outcome after this insert
CREATE TABLE IF NOT EXISTS `sensores` (
`id` INT NOT NULL AUTO_INCREMENT,
`EUI` VARCHAR(45) NULL,
`Temp` VARCHAR(45) NULL,
`Hum` VARCHAR(45) NULL,
PRIMARY KEY (`id`))
ENGINE = InnoDB;
INSERT INTO `sensores` (`id`,`EUI`,`Temp`,`Hum`) VALUES
(default, "th312322aa", "10", "33"),
(default, "daedaf12392", "30", "70");
SELECT * FROM sensores;
id      EUI          Temp    Hum
------  -----------  ------  ------
1       th312322aa   10      33
2       daedaf12392  30      70
CREATE TABLE IF NOT EXISTS `ordenes` (
`idORDENES` INT NOT NULL AUTO_INCREMENT,
`NumOrden` VARCHAR(45) NULL,
`Empleado` VARCHAR(45) NULL,
`Temperatura` VARCHAR(45) NULL,
`Humedad` VARCHAR(45) NULL,
PRIMARY KEY (`idORDENES`))
ENGINE = InnoDB;
DELIMITER $$
CREATE TRIGGER get_last_Temp_Hum
BEFORE INSERT ON ordenes
FOR EACH ROW
BEGIN
DECLARE new_temp VARCHAR(45); -- declare intermediate variables
DECLARE new_hum VARCHAR(45);
SELECT Temp, Hum INTO new_temp, new_hum -- select the latest sensor values into them
FROM sensores
ORDER BY id DESC LIMIT 1;
SET NEW.Temperatura = new_temp, -- set column values in the newly inserted row
NEW.Humedad = new_hum; -- to the values stored in the variables
END$$
DELIMITER ;
INSERT INTO `ordenes` (`idORDENES`,`NumOrden`,`Empleado`) VALUES
(default, 1, "123a");
SELECT * FROM ordenes;
idORDENES  NumOrden  Empleado  Temperatura  Humedad
---------  --------  --------  -----------  -------
1          1         123a      30           70
fiddle
The trigger fires on the INSERT statement but before the values are inserted into the table (i.e. at that point the insertion is still only an intention). The query in the trigger retrieves the needed values into the variables, then the SET statement copies them into the columns of the row that is about to be inserted. After the trigger finishes, the row contains the needed values in those columns, and they are saved into the table.

insert ignore when exception raised by trigger

I'm trying to insert multiple rows with one INSERT query into a table that has a BEFORE INSERT trigger that raises a duplicate exception if a condition is true.
Table structure:
CREATE TABLE `users` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`first_name` varchar(50) DEFAULT NULL,
`last_name` varchar(50) DEFAULT NULL,
`date_registration` date DEFAULT NULL,
`email` varchar(255) NOT NULL,
`password` varchar(128) NOT NULL,
PRIMARY KEY (`id`),
KEY `email` (`email`),
KEY `date_registration` (`date_registration`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (id)
(PARTITION id1k VALUES LESS THAN (1000) ENGINE = MyISAM,
PARTITION id3k VALUES LESS THAN (3000) ENGINE = MyISAM,
PARTITION id7k VALUES LESS THAN (7000) ENGINE = MyISAM,
PARTITION id10k VALUES LESS THAN (10000) ENGINE = MyISAM,
PARTITION id13k VALUES LESS THAN (13000) ENGINE = MyISAM,
.........
Trigger code
delimiter //
drop trigger if exists users_before_insert //
create trigger users_before_insert before insert on users
for each row
begin
set @found := false;
select true into @found from users u where u.email = NEW.email;
if @found then
signal sqlstate '23000' set message_text = 'Email alread exists !';
end if;
end //
delimiter ;
When I try to insert duplicate records, the statement fails even though the query uses IGNORE.
Example:
insert ignore into users (first_name,last_name,date_registration,email,password) values
('aaaa','zzzz','2016-08-20','aaaa@mywebsite.com','strongpwd1'),
('bbbb','yyyy','2016-08-21','bbbb@mywebsite.com','strongpwd2'),
('cccc','xxxx','2016-08-22','aaaa@mywebsite.com','strongpwd3'),
('dddd','wwww','2016-08-23','dddd@mywebsite.com','strongpwd4');
ERROR 1644 (23000): Email alread exists !
The query is aborted when it reaches the first duplicate, 'aaaa@mywebsite.com'.
Is there a solution to ignore exception raised by trigger ?
No, while this trigger is active you cannot insert duplicate rows.
Trying to work around a trigger is not good practice; instead, think about changing the set of conditions that cause the trigger to fire.
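As one illustrative sketch of 'changing the conditions' (an assumption, not part of the original answer): the duplicate check can be moved out of the trigger and into the INSERT itself, so rows whose email already exists are filtered out instead of aborting the whole statement.
-- Sketch only: rows whose email is already in `users` are skipped up front,
-- so the trigger never sees a duplicate and never signals. Two identical
-- emails inside the same batch would still need de-duplicating beforehand.
INSERT INTO users (first_name, last_name, date_registration, email, password)
SELECT t.first_name, t.last_name, t.date_registration, t.email, t.password
FROM (
  SELECT 'bbbb' AS first_name, 'yyyy' AS last_name,
         '2016-08-21' AS date_registration,
         'bbbb@mywebsite.com' AS email, 'strongpwd2' AS password
  UNION ALL
  SELECT 'dddd', 'wwww', '2016-08-23', 'dddd@mywebsite.com', 'strongpwd4'
) AS t
WHERE NOT EXISTS (SELECT 1 FROM users u WHERE u.email = t.email);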

replace only fires one trigger in mariadb

I have the following tables
CREATE TABLE `trigger_root` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`p` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
);
CREATE TABLE `trigger_test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`p` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
);
and the following triggers
DELIMITER ||
CREATE TRIGGER tit
BEFORE INSERT ON trigger_root
FOR EACH ROW
BEGIN
INSERT INTO trigger_test (p) values (NEW.p);
END ||
CREATE TRIGGER tdt
BEFORE delete ON trigger_root
FOR EACH ROW
BEGIN
delete from trigger_test where p=OLD.p;
END ||
DELIMITER ;
However, if I use the following statement
replace into trigger_root(id,p) select id,p from trigger_root;
only the delete trigger is called. If I remove the delete trigger, the insert trigger is called.
So it seems REPLACE fires only one of the two triggers, not both.
Is that a general restriction, or am I doing something wrong?
I found the error: the insert trigger needs to be AFTER rather than BEFORE.
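Based on that finding, the insert trigger would end up something like this (a sketch inferred from the comment above, not code posted in the thread):
DELIMITER ||
DROP TRIGGER IF EXISTS tit ||
-- AFTER INSERT instead of BEFORE INSERT, per the fix described above.
CREATE TRIGGER tit
AFTER INSERT ON trigger_root
FOR EACH ROW
BEGIN
INSERT INTO trigger_test (p) values (NEW.p);
END ||
DELIMITER ;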

mysql trigger not working on insert

Table: items
Create Table:
CREATE TABLE `items` (
`ite_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`itemName` varchar(40) DEFAULT NULL,
`itemNumber` int(10) unsigned NOT NULL,
PRIMARY KEY (`ite_id`),
UNIQUE KEY `itemName` (`itemName`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1
delimiter |
create trigger item_beforeinsert before insert on items
for each row begin
if new.itemNumber < 50 then
set new.ite_id = null;
end if;
end;
|
Now the following command doesn't fire the trigger:
insert items( itemname, itemnumber) values ( 'xyz', 1 );
Any help would be very much appreciated, thanks!
Your ite_id is NOT NULL and your trigger tries to set it to NULL; besides that, it is AUTO_INCREMENT, so you won't be able to 'control' all the values assigned to that field, i.e. it won't overwrite values.
It should be
insert INTO items( itemname, itemnumber) values ( 'xyz', 1 );
Also, since you have declared ite_id as NOT NULL, you can't use set new.ite_id = null;
For auto-incremented primary key fields you can pass a NULL value while inserting; MySQL automatically assigns the auto-generated value. Setting it to NULL BEFORE insert is not an error, and hence the trigger did not raise one.
Example:
insert into items( ite_id, ... ) values ( null, ... );
The above statement is valid and works, since the ite_id field is a primary key with auto-increment.
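A complete statement against the `items` table above, as a quick check (illustrative values; itemNumber is above 50 so the trigger's IF branch is not taken):
-- NULL in the auto-increment column is replaced by the next generated value.
INSERT INTO items (ite_id, itemName, itemNumber) VALUES (NULL, 'xyz2', 75);
SELECT LAST_INSERT_ID();  -- returns the id assigned to 'xyz2'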

SELECT returning no rows using declared variable inside MySQL trigger

I am trying to perform an INSERT...SELECT to create rows in a 'tasks' table once a row is created in a 'workflow' table. (The process_index used in the workflow creation looks up the required tasks in the 'process_tasks' table and then creates them in the tasks table.)
The problem, however, is that after performing an insert with the process_index of 'process_one' on the workflow table, the SELECT in the trigger finds no rows. I considered that @process_index was not being set properly, but the alternative insert I've commented out in the trigger demonstrates that @process_index is being set correctly. Can anyone advise?
Here is some simplified code to demonstrate the problem:
DROP TABLE IF EXISTS workflow;
CREATE TABLE workflow (
id INT(10) PRIMARY KEY AUTO_INCREMENT,
process_index VARCHAR(12)
) ENGINE=INNODB DEFAULT CHARSET=utf8;
DROP TABLE IF EXISTS tasks;
CREATE TABLE tasks (
id INT(10) PRIMARY KEY AUTO_INCREMENT,
process_index_used VARCHAR(12),
target_field VARCHAR(12)
) ENGINE=INNODB DEFAULT CHARSET=utf8;
DROP TABLE IF EXISTS process_tasks;
CREATE TABLE process_tasks (
id INT(10) PRIMARY KEY AUTO_INCREMENT,
process_index VARCHAR(12),
source_field VARCHAR(12)
) ENGINE=INNODB DEFAULT CHARSET=utf8;
INSERT INTO process_tasks SET process_index = 'process_one', source_field = 'alpha';
INSERT INTO process_tasks SET process_index = 'process_one', source_field = 'beta';
DROP TRIGGER IF EXISTS workflow_tasks;
DELIMITER //
CREATE TRIGGER workflow_tasks AFTER INSERT ON workflow
FOR EACH ROW BEGIN
DECLARE process_index VARCHAR(12);
SET @process_index = NEW.process_index;
-- INSERT INTO tasks (process_index_used) VALUES (@process_index);
INSERT INTO tasks (target_field) SELECT source_field FROM process_tasks WHERE process_index = @process_index;
END//
DELIMITER ;