Create a Trigger in MySQL

I want to create a trigger on one of my tables, let's say AssetDataTable, whose values get populated via a Windows service. Basically, what I want to do is:
SELECT * FROM AssetDataTable Where AssetID = 105 ORDER by 1 DESC;
I get this one row back:
AssetID Column1 Column2 Column3 Column4 Column5
105 18.8 19.9 13.0 18.7 0
Now, if any of the column values is zero, it should update a row in another table, StatusTable. So the row in my StatusTable should become:
AssetID Status
105 0
I really don't have any clue how to do this. Any ideas?

Try this example as a guideline for your desired result; I hope it helps you.
DELIMITER //
CREATE TRIGGER contacts_after_update
AFTER UPDATE
ON contacts FOR EACH ROW
BEGIN
DECLARE vUser varchar(50);
-- Find username of person performing the INSERT into table
SELECT USER() INTO vUser;
-- Insert record into audit table
INSERT INTO contacts_audit
( contact_id,
updated_date,
updated_by)
VALUES
( NEW.contact_id,
SYSDATE(),
vUser );
END; //
DELIMITER ;
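Adapted to the tables in the question, a minimal sketch could look like the following. It assumes the value columns really are named Column1 through Column5, that StatusTable has a unique key on AssetID, and that the check should run when the Windows service inserts a new row:
DELIMITER //
CREATE TRIGGER assetdata_after_insert
AFTER INSERT ON AssetDataTable
FOR EACH ROW
BEGIN
    -- If any of the five value columns is zero, record status 0 for this asset
    IF NEW.Column1 = 0 OR NEW.Column2 = 0 OR NEW.Column3 = 0
       OR NEW.Column4 = 0 OR NEW.Column5 = 0 THEN
        INSERT INTO StatusTable (AssetID, Status)
        VALUES (NEW.AssetID, 0)
        ON DUPLICATE KEY UPDATE Status = 0;   -- assumes AssetID is a unique key in StatusTable
    END IF;
END//
DELIMITER ;
If the service updates existing rows instead of inserting new ones, the same body would go in an AFTER UPDATE trigger.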

Related

MySQL create trigger for logging table

Hello, I am new to MySQL. I have 2 tables, a data table and a log/history table. I would like to make a trigger that puts the old data from Data into Data_log if any column changes on Data. I made a trigger, but I don't know how to check whether the value of any column changed. Later I would also like to create a procedure/view which can return one line of data for a specific date, e.g. return all fields for ID 1 on 2022-03-27.
Data:
ID name data price
1 thing1 desc of t1 100
2 thing2 desc of t2 300
Data_log:
log_id data_id column_name old_data date
1 1 data desc t1 2022-03-28 06:49:14
2 2 price 600 2022-03-28 11:34:46
3 1 price 4400 2022-03-28 09:15:54
Trigger (only check price column):
DELIMITER //
CREATE TRIGGER `log_old_data` BEFORE UPDATE ON `data`
INSERT INTO data_log
(
data_id,
old_data
)
VALUES
(
OLD.id,
OLD.price <- I need here a Select I think
);
END//
Since you have few columns, it may be simpler to do it "by hand" for every column:
DELIMITER //
CREATE TRIGGER `log_old_data` BEFORE UPDATE ON `data`
FOR EACH ROW
BEGIN
IF NEW.name != OLD.name THEN
    INSERT INTO data_log (data_id, old_data) VALUES (OLD.id, OLD.name);
END IF;
IF NEW.data != OLD.data THEN
    INSERT INTO data_log (data_id, old_data) VALUES (OLD.id, OLD.data);
END IF;
IF NEW.price != OLD.price THEN
    INSERT INTO data_log (data_id, old_data) VALUES (OLD.id, OLD.price);
END IF;
END //
DELIMITER ;
PS: I did not test it, but it should work. If it doesn't, leave your MySQL version so I can test on your version.
For the SELECT part, since you record every change in a separate table, you only have to query it:
SELECT * FROM data_log WHERE `data_id` = 1 AND DATE(`date`) = '2022-03-27';
PS: Careful, DATE() in a WHERE condition may not be the perfect choice, since it will not use indexes. I use generated columns to add an index on the date for this kind of case.
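As a sketch of that generated-column approach (assuming the data_log table and `date` column from the question, and MySQL 5.7+):
ALTER TABLE data_log
    ADD COLUMN log_day DATE GENERATED ALWAYS AS (DATE(`date`)) STORED,
    ADD INDEX idx_log_day (log_day);

-- The lookup can then use the index on log_day
SELECT * FROM data_log WHERE data_id = 1 AND log_day = '2022-03-27';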

MySQL Trigger After Insert, Action JOIN 2 tables

I need to create a trigger (after insert on one table) in MySQL, but the action needs to join 2 tables to insert into a third table. My script below returns no error, but the row is not inserted into the third table.
The first table (on which the after-insert trigger should work):
Z_TAXO
ID term_ID taxo_name
1 1 dept
2 2 staff
3 4 course
4 5 dept
The second table to be joined in the trigger:
Z_TERM
term_ID name
1 Engineering
2 Andy
4 Metallurgy
5 Business
6 Arts
The third table. If the Z_TAXO table has a new row with taxo_name = "dept", the row (joined with table Z_TERM) needs to be inserted into this table:
Z_DEPTS
ID dept_name
1 Engineering
4 Business
I created a trigger:
delimiter //
CREATE TRIGGER TRG_NEW_DEPT
AFTER INSERT ON Z_TAXO
FOR EACH ROW
BEGIN
DECLARE _dept_ID bigint(20);
DECLARE _dept_name varchar(200);
IF Z_TAXO.taxo_name = "DEPT" THEN
BEGIN
SELECT Z_TAXO.ID INTO _dept_ID FROM Z_TAXO, Z_TERM
WHERE Z_TAXO.ID = new.Z_TAXO.ID AND Z_TAXO.term_ID = Z_TERM.term_ID;
SELECT Z_TERM.name INTO _dept_name FROM Z_TERM, Z_TAXO
WHERE Z_TAXO.term_ID = Z_TERM.term_ID AND Z_TAXO.ID = new.Z_TAXO.ID;
INSERT INTO Z_DEPTS (ID, dept_name) VALUES (_dept_ID, _dept_name);
END;
END IF;
END//
delimiter ;
Then inserted a row to the Z_TAXO table:
INSERT INTO Z_TAXO (ID, term_ID, taxo_name) VALUES (5, 6, "dept");
Expecting to have this new row in table Z_DEPTS:
ID dept_name
5 Arts
But when I select * from Z_DEPTS, the result is still:
ID dept_name
1 Engineering
4 Business
What can be wrong? I can't modify the design of the tables, because they came from a wordpress Plugin. Thanks in advance!
A couple of comments about your code. 1) When using NEW. qualifiers you don't further qualify with the table name, so new.z_taxo.id is invalid and should simply be new.id. 2) You don't need a BEGIN..END block in a MySQL IF statement. 3) It just doesn't make sense to refer to the table z_taxo in your SELECT statements - a simple INSERT ... SELECT will do.
try
drop trigger if exists trg_new_dept;
delimiter //
CREATE TRIGGER TRG_NEW_DEPT
AFTER INSERT ON Z_TAXO
FOR EACH ROW
BEGIN
IF new.taxo_name = 'dept' THEN
    INSERT INTO Z_DEPTS (ID, dept_name)
    select new.ID, name
    from z_term
    where term_id = new.term_id;
END IF;
END//
delimiter ;
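With that in place, repeating the insert from the question should produce the expected row (a quick check against the sample data above):
INSERT INTO Z_TAXO (ID, term_ID, taxo_name) VALUES (5, 6, 'dept');
SELECT * FROM Z_DEPTS;  -- should now also contain the row (5, 'Arts')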

Updating other table column after insert in MySQL

I have these two tables, "cases" (four columns) and "attendance":
cases-
id empid reaction date_t
1 EMP12654 interested 2017-09-22
attendance-
id empid logintime logouttime date_t flag workinghours call_att
1 EMP12654 00:14:49 05:14:49 2017-09-18 set 6 1
What I want to do is create a trigger on the cases table that updates the call_att column of the attendance table with the number of entries in the reaction column of the cases table. This is what I have tried so far:
CREATE DEFINER=`root`@`localhost` TRIGGER `number_call`
AFTER INSERT ON `cases` FOR EACH ROW
BEGIN UPDATE attendance set call_att=call_att +1
WHERE empid=new.empid AND date_t=new.date_t; END
But that doesn't seem to work. I am quite new to triggers.
try this
DELIMITER //
CREATE TRIGGER number_call
AFTER INSERT ON cases
FOR EACH ROW
BEGIN
UPDATE attendance
SET call_att = (SELECT COUNT(*) FROM cases
                WHERE empid = NEW.empid AND date_t = NEW.date_t)
WHERE empid = NEW.empid AND date_t = NEW.date_t;
END//
DELIMITER ;
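A quick way to check the trigger, using the sample rows from the question (this assumes cases.id is auto-generated, and the date has to match the attendance row for the UPDATE to find it):
INSERT INTO cases (empid, reaction, date_t) VALUES ('EMP12654', 'interested', '2017-09-18');
SELECT call_att FROM attendance WHERE empid = 'EMP12654' AND date_t = '2017-09-18';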

Performance of MySQL very bad when stored procedure iterates over large table of 15M rows

I have a stored procedure that opens a CURSOR on a select statement that iterates over a table of 15M rows (this table is a simple import of a large CSV).
I need to normalize that data by inserting various pieces of each row into 3 different tables (capturing auto-increment IDs, using them in foreign key constraints, and such).
So I wrote a simple stored procedure that opens the CURSOR, FETCHes the fields into variables, and does the 3 insert statements.
I'm on a small DB server with a default MySQL installation (1 CPU, 1.7 GB RAM), and I had hoped this task would take a few hours. I'm at 24+ hours and top shows the CPU at about 85% I/O wait.
I think I have some kind of terrible inefficiency. Any ideas on improving the efficiency of the task? Or just determining where the bottleneck is?
root#devapp1:/mnt/david_tmp# vmstat 10
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 1 256 13992 36888 1466584 0 0 9 61 1 1 0 0 98 1
1 2 256 15216 35800 1466312 0 0 57 7282 416 847 2 1 12 85
0 1 256 14720 35984 1466768 0 0 42 6154 387 811 2 1 10 87
0 1 256 13736 36160 1467344 0 0 51 6979 439 934 2 1 9 89
DROP PROCEDURE IF EXISTS InsertItemData;
DELIMITER $$
CREATE PROCEDURE InsertItemData() BEGIN
DECLARE spd TEXT;
DECLARE lpd TEXT;
DECLARE pid INT;
DECLARE iurl TEXT;
DECLARE last_id INT UNSIGNED;
DECLARE done INT DEFAULT FALSE;
DECLARE raw CURSOR FOR select t.shortProductDescription, t.longProductDescription, t.productID, t.productImageURL
from frugg.temp_input t;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN raw;
read_loop: LOOP
FETCH raw INTO spd, lpd, pid, iurl;
IF done THEN
LEAVE read_loop;
END IF;
INSERT INTO item (short_description, long_description) VALUES (spd, lpd);
SET last_id = LAST_INSERT_ID();
INSERT INTO item_catalog_map (catalog_id, catalog_unique_item_id, item_id) VALUES (1, CAST(pid AS CHAR), last_id);
INSERT INTO item_images (item_id, original_url) VALUES (last_id, iurl);
END LOOP;
CLOSE raw;
END$$
DELIMITER ;
MySQL will almost always perform better executing straight SQL statements than looping inside a stored procedure.
That said, if you are using InnoDB tables, your procedure will run faster inside a START TRANSACTION / COMMIT block.
Even better would be to add an AUTO_INCREMENT column to the records in frugg.temp_input and query against that table:
DROP TABLE IF EXISTS temp_input2;
CREATE TABLE temp_input2 (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
shortProductDescription TEXT,
longProductDescription TEXT,
productID INT,
productImageURL TEXT,
PRIMARY KEY (id)
);
START TRANSACTION;
INSERT INTO
temp_input2
SELECT
NULL AS id,
shortProductDescription,
longProductDescription,
productID,
productImageURL
FROM
frugg.temp_input;
INSERT
INTO item
(
id,
short_description,
long_description
)
SELECT
id,
shortProductDescription AS short_description,
longProductDescription AS long_description
FROM
temp_input2
ORDER BY
id;
INSERT INTO
item_catalog_map
(
catalog_id,
catalog_unique_item_id,
item_id
)
SELECT
1 AS catalog_id,
CAST(productID AS CHAR) AS catalog_unique_item_id,
id AS item_id
FROM
temp_input2
ORDER BY
id;
INSERT INTO
item_images
(
item_id,
original_url
)
SELECT
id AS item_id,
productImageURL AS original_url
FROM
temp_input2
ORDER BY
id;
COMMIT;
Even better than the above: before loading the .CSV file into frugg.temp_input, add an AUTO_INCREMENT field to it, saving you the extra step of creating/loading temp_input2 shown above.
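A minimal sketch of that approach (the column name id is an assumption; existing rows are numbered automatically when the column is added, and a subsequent LOAD DATA that lists only the CSV columns lets it keep auto-filling):
ALTER TABLE frugg.temp_input
    ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;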
I'm of a similar mind to what Ross offered, but without knowing more about your tables, indexes, and what the auto-increment column names are, I would just do direct inserts... However, you'll have an issue if you encounter any duplicates, which I didn't see any checking for. I would insert as follows and have appropriate indexes to help the re-join (based on the short and long product descriptions); see the index sketch at the end of this answer.
I would insert from one SELECT, then insert from the next SELECT, and so on... such as:
INSERT INTO item
( short_description,
long_description )
SELECT
t.ShortProductDescription,
t.LongProductDescription
from
frugg.temp_input t;
Done, 15 million rows inserted into the item table... Now, add to the catalog map table:
INSERT INTO item_catalog_map
( catalog_id,
catalog_unique_item_id,
item_id )
SELECT
1 as Catalog_id,
CAST( t.productID as CHAR) as catalog_unique_item_id,
item.AutoIncrementIDColumn as item_id
from
frugg.temp_input t
JOIN item on t.ShortProductDescription = item.short_description
AND t.LongProductDescription = item.long_description
Done, all catalog map entries inserted with their corresponding item IDs. Finally, the image URLs:
INSERT INTO item_images
( item_id,
original_url )
SELECT
item.AutoIncrementIDColumn as item_id,
t.productImageURL as original_url
from
frugg.temp_input t
JOIN item on t.ShortProductDescription = item.short_description
AND t.LongProductDescription = item.long_description
Done with the image URLs.
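The "appropriate indexes to help the re-join" mentioned above would be prefix indexes on the description columns, since TEXT columns can only be indexed with a prefix length (a sketch; the prefix lengths are assumptions):
ALTER TABLE item
    ADD INDEX idx_item_desc (short_description(100), long_description(100));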

MySQL insert from a table assigning auto-increment ID and update FK in third table

I have 3 tables
old_customers
id name
5 Mario
13 John
.. ...
new_customers
id name address
7 Luigi Roma
.. ... ...
orders
id customer_id
1 5
2 7
3 13
.. ..
I want to copy old_customers to new_customers assigning them a new auto-increment id and updating the orders foreign key customer_id.
How to perform this simultaneous INSERT and UPDATE in one simple MySQL query?
A basic pseudo-SQL idea:
INSERT INTO new_customers (name) SELECT name FROM old_customers
AND
UPDATE orders SET customer_id=LAST_INSERT_ID() WHERE customer_id=old_customers.id
A week later ...
Thanks to the help received, this is the MySQL solution I developed:
create a PROCEDURE that declares a CURSOR and INSERTs + UPDATEs the fetched results in a LOOP.
DELIMITER //
CREATE PROCEDURE move_costumers()
BEGIN
DECLARE fetched_id INT(3);
DECLARE fetched_name VARCHAR(50);
DECLARE my_cursor CURSOR FOR SELECT id,name FROM old_customers;
OPEN my_cursor;
BEGIN
DECLARE EXIT HANDLER FOR NOT FOUND BEGIN END;
LOOP
FETCH my_cursor INTO fetched_id,fetched_name;
INSERT INTO new_customers (name) VALUES (fetched_name);
UPDATE orders SET orders.customer_id = LAST_INSERT_ID()
WHERE orders.customer_id = fetched_id;
END LOOP;
END;
CLOSE my_cursor;
END//
DELIMITER ;
It's a loop without a control variable and without a label, as I found in Simple Cursor Traversal 2.
Why don't you write a UDF? It will help you achieve your requirement.
The procedure for this purpose will be something like this.
Follow these steps:
1) Get the largest id used in the new_customers table, e.g.
SELECT MAX(id) INTO v_curr_id FROM new_customers;
and store it in the variable v_curr_id.
2) Create a cursor which iterates over every row of old_customers and each time stores the values into the variables v_old_cust_id and v_old_custname.
3) Inside the cursor:
increment v_curr_id and insert a new row into the new_customers table with the id as v_curr_id and the name as v_old_custname, e.g.
insert into new_customers (id, name) values (v_curr_id, v_old_custname);
Then update the orders table, like
update orders set customer_id = v_curr_id where customer_id = v_old_cust_id;
4) After creating it, you just have to call the procedure:
call my_proc();
For syntax reference, visit cursor_example
UDF? come on...
Just copy the old customer data into the new_customers table, adding old_id as a column, so you can update this way:
INSERT INTO new_customers (name, old_id) SELECT name, id FROM old_customers;

UPDATE orders o
SET customer_id = (SELECT nc.id FROM new_customers nc WHERE nc.old_id = o.customer_id)
WHERE EXISTS (SELECT 1 FROM new_customers nc WHERE nc.old_id = o.customer_id);
Proc with cursor will be soooo slow...
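For completeness, the helper column this assumes can be added first and dropped when done (a sketch; old_id matches the queries above):
ALTER TABLE new_customers ADD COLUMN old_id INT NULL;
-- ... run the INSERT ... SELECT and the UPDATE above ...
ALTER TABLE new_customers DROP COLUMN old_id;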