Say you are running two MySQL servers: one the master, the other the slave. The master has triggers that update columns with the COUNT of rows in other tables. For instance, you have a news table and a comments table. News contains an INT column called "total_comments" which is incremented via trigger every time a new row is inserted into "comments." Does the slave need this trigger as well (to keep "news.total_comments" up to date), or will it be told to update the appropriate "news.total_comments" directly?
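For concreteness, a minimal sketch of such a trigger (the comments.news_id column and the trigger name are assumptions; the question does not give the exact schema):
DELIMITER |
CREATE TRIGGER trg_comments_count AFTER INSERT ON comments
FOR EACH ROW BEGIN
  -- keep the denormalized counter in sync with the comments table
  UPDATE news
     SET total_comments = total_comments + 1
   WHERE id = NEW.news_id;
END|
DELIMITER ;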
From the docs http://dev.mysql.com/doc/refman/5.0/en/faqs-triggers.html:
22.5.4: How are actions carried out through triggers on a master replicated to a slave?
First, the triggers that exist on a master must be re-created on the slave server. Once this is done, the replication flow works as any other standard DML statement that participates in replication. For example, consider a table EMP that has an AFTER insert trigger, which exists on a master MySQL server. The same EMP table and AFTER insert trigger exist on the slave server as well. The replication flow would be:
An INSERT statement is made to EMP.
The AFTER trigger on EMP activates.
The INSERT statement is written to the binary log.
The replication slave picks up the INSERT statement to EMP and executes it.
The AFTER trigger on EMP that exists on the slave activates.
And
22.5.4: Actions carried out through triggers on a master are not replicated to a slave server.
Thus, you DO need the triggers on the slave.
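For example, you can copy each trigger definition from the master and re-create it on the slave (the trigger name here is just an illustration; mysqldump --triggers exports them in bulk):
-- on the master
SHOW CREATE TRIGGER trg_comments_count\G
-- run the resulting CREATE TRIGGER statement on the slave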
It depends on the replication format you're using. If you use statement-based replication, then you must have matching triggers on the master and the slave. If you use row-based replication, then you must not include the triggers on the slave.
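A quick way to check which format a given server uses (binlog_format is the controlling variable):
SHOW VARIABLES LIKE 'binlog_format';  -- STATEMENT, ROW or MIXED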
You can get the writes made by triggers into the binary log with FEDERATED tables (MySQL 5) by having the trigger write to a FEDERATED copy of the table that points back to the same server over a local connection.
---------------------------------------------------------------------------------------
-- EXAMPLE:
-- We want to replicate the table test_table, which will be populated by the Trg_Update trigger.
---------------------------------------------------------------------------------------
Create database TEST;
USE TEST;
CREATE TABLE test_trigger (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
PRIMARY KEY (id),
INDEX name (name)
) ;
DELIMITER |
CREATE TRIGGER Trg_Update AFTER INSERT ON test_trigger
FOR EACH ROW BEGIN
INSERT INTO federated_table (name, other) VALUES (NEW.name, 'test trigger on federated table -> OK');
END|
DELIMITER ;
CREATE TABLE test_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other VARCHAR(32) NOT NULL DEFAULT '',
PRIMARY KEY (id),
INDEX name (name),
INDEX other_key (other)
) ;
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other VARCHAR(32) NOT NULL DEFAULT '',
PRIMARY KEY (id),
INDEX name (name),
INDEX other_key (other)
)
ENGINE=FEDERATED
CONNECTION='mysql://root@localhost/TEST/test_table';
---------------------------------------------------------------------------------------
Related
I recently encountered an error in my application with concurrent transactions. Previously, auto-incrementing for a compound key was implemented in the application itself, in PHP. However, as I mentioned, the id got duplicated, and all sorts of issues happened which I painstakingly fixed manually afterward.
Now I have read about related issues and found suggestions to use a trigger.
So I am planning on implementing a trigger somewhat like this:
DELIMITER $$
CREATE TRIGGER auto_increment_my_table
BEFORE INSERT ON my_table FOR EACH ROW
BEGIN
SET NEW.id = (SELECT MAX(id) + 1 FROM my_table WHERE type = NEW.type);
END $$
DELIMITER ;
But my doubt regarding concurrency still remains. What if this trigger is executed concurrently and both executions get the same MAX(id) when querying?
Is this the correct way to handle my issue or is there any better way?
An example of how to solve auto-incrementing in a compound index.
CREATE TABLE test ( id INT,
type VARCHAR(192),
value INT,
PRIMARY KEY (id, type) );
-- create additional service table which will help
CREATE TABLE test_service ( type VARCHAR(192),
id INT AUTO_INCREMENT,
PRIMARY KEY (type, id) ) ENGINE = MyISAM;
-- create trigger which will generate the id value for a new row
DELIMITER $$
CREATE TRIGGER tr_bi_test_autoincrement
BEFORE INSERT
ON test
FOR EACH ROW
BEGIN
INSERT INTO test_service (type) VALUES (NEW.type);
SET NEW.id = LAST_INSERT_ID();
END $$
DELIMITER ;
db<>fiddle here
creating a service table just to auto increment a value seems less than ideal for me. – Mohamed Mufeed
This table is extremely tiny; you may delete all records except the one with the largest auto-incremented value in each group at any time. – Akina
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=61f0dc36db25dd5f0cf4647d8970cdee
You may schedule removal of the excess rows (for example, daily) in a service event procedure.
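A sketch of that cleanup, assuming the test_service table above (the event name and schedule are arbitrary, and the event scheduler must be enabled):
CREATE EVENT ev_trim_test_service
ON SCHEDULE EVERY 1 DAY
DO
  -- keep only the row with the largest id per type
  DELETE s FROM test_service s
  JOIN (SELECT type, MAX(id) AS max_id
        FROM test_service
        GROUP BY type) m ON m.type = s.type
  WHERE s.id < m.max_id;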
I have managed to solve this issue.
The answer was somewhat in the direction of Akina's answer, but not exactly.
The way I solved it did indeed involve an additional table, but not in the way he suggested.
I created an additional table to store metadata about transactions.
E.g., I had a journals table like this:
CREATE TABLE `journals` (
`id` bigint NOT NULL AUTO_INCREMENT,
`type` smallint NOT NULL DEFAULT '0',
`trans_no` bigint NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `transaction` (`type`,`trans_no`)
)
So I created a meta_journals table like this
CREATE TABLE `meta_journals` (
`type` smallint NOT NULL,
`next_trans_no` bigint NOT NULL,
PRIMARY KEY (`type`)
)
and seeded it with all the different types of journals and the next sequence number.
And whenever I insert a new transaction into the journals, I made sure to increment the next_trans_no of the corresponding type in the meta_journals table. This increment operation is issued inside the same database transaction, i.e. between BEGIN and COMMIT.
This allowed me to use the exclusive lock acquired by the UPDATE statement on the row of the meta_journals table. So when two insert statements are issued for the journals concurrently, one has to wait until the lock acquired by the other transaction is released by its COMMIT.
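A sketch of that pattern, assuming the journals and meta_journals tables above (the type value 1 is just an illustration):
START TRANSACTION;
-- The UPDATE takes an exclusive row lock on this type's row, so a concurrent
-- transaction doing the same UPDATE blocks here until we COMMIT.
UPDATE meta_journals
   SET next_trans_no = next_trans_no + 1
 WHERE type = 1;
-- Read the number we just reserved while the lock is still held.
SELECT next_trans_no INTO @trans_no
  FROM meta_journals
 WHERE type = 1;
INSERT INTO journals (type, trans_no) VALUES (1, @trans_no);
COMMIT;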
I am trying to run a trigger on slave for RBR (https://mariadb.com/kb/en/library/running-triggers-on-the-slave-for-row-based-events/).
I have created a table on master like this:
CREATE TABLE `t1` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`mobile` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
On slave, I had this trigger:
CREATE TRIGGER t1_obfus AFTER INSERT ON `t1`
FOR EACH ROW
UPDATE `t1`
SET `mobile` = LEFT(MD5(NEW.`mobile`), 20);
which did not work. I got the following error on SHOW SLAVE STATUS:
Last_SQL_Error: Could not execute Write_rows_v1 event on table d1.t1; Can't update table 't1' in stored function/trigger because it is already used by statement which invoked this stored function/trigger., Error_code: 1442; handler error HA_ERR_GENERIC; the event's master log mariadb-bin.000004, end_log_pos 454
Then I modified the trigger to:
CREATE TRIGGER t1_obfus BEFORE INSERT ON `t1`
FOR EACH ROW
UPDATE `t1`
SET NEW.`mobile` = LEFT(MD5(NEW.`mobile`), 20)
WHERE id = NEW.id;
but it still did not work. Then I created a new table on slave:
CREATE TABLE t1_2 (
id BIGINT UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
mobile VARCHAR(20)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
and modified my trigger to:
CREATE TRIGGER t1_obfus BEFORE INSERT ON `t1_2`
FOR EACH ROW
INSERT INTO `t1_2` (id, mobile)
VALUES (NEW.id,LEFT(MD5(NEW.`mobile`), 20));
Now the replication started working but there is no data in the table t1_2. How do I fix this?
If you use row-based replication, triggers on the slave won't work.
As described in the official documentation:
If you want triggers to execute on both the master and the slave—perhaps because you have different triggers on the master and slave—you must use statement-based replication. However, to enable slave-side triggers, it is not necessary to use statement-based replication exclusively. It is sufficient to switch to statement-based replication only for those statements where you want this effect, and to use row-based replication the rest of the time.
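A sketch of that per-statement switch, using the t1 table from the question (changing the session binlog format requires elevated privileges):
SET SESSION binlog_format = 'STATEMENT';  -- replicate this statement as SQL so the slave-side trigger fires
INSERT INTO t1 (mobile) VALUES ('5551234567');
SET SESSION binlog_format = 'ROW';        -- back to row-based for everything else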
That's a strange issue.
There is a source MySQL DB (MASTER) and its replica (SLAVE). It's statement-based replication (as you understand, it's MASTER-SLAVE), because I need triggers to run on the SLAVE side.
The original triggers were replaced by new ones.
Every table has three triggers: on INSERT, on UPDATE and on DELETE.
The triggers were generated from a single pattern and differ only in their parameters.
Every trigger does a single INSERT query to a table (CHANGES) on the SLAVE.
This table is not replicated and exists only on the SLAVE.
There is an auto-incremented column (ID, bigint) in this table.
None of the triggers set or modify the value of the ID column; the DB assigns the default value for it.
About 20 inserts are executed on CHANGES per minute.
I see errors about duplicated values.
How is this possible?
Once again, the flow is:
An INSERT / UPDATE / DELETE query is executed on MASTER.
This change is replicated to SLAVE.
A trigger is called and inserts a row to the CHANGES.
A duplicated-value error is generated.
And as I said before, none of the triggers set or modify the value of the auto-incremented field (ID), and only the triggers work with the CHANGES table.
I understand that two or more triggers can be called at the same time and try to INSERT at the same time, but I think the DB should handle this easily. Otherwise it's a very bad DB.
UPD:
CREATE TABLE `CHANGES` (
`id` BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
`field1` ENUM(...) NOT NULL,
`field2` BOOL NOT NULL DEFAULT FALSE,
`field3` VARCHAR(64) NOT NULL,
`field4` VARCHAR(255) NOT NULL,
`field5` TEXT,
PRIMARY KEY (`id`)
) ENGINE=INNODB DEFAULT CHARSET=utf8;
DELIMITER |
CREATE TRIGGER `tr_TABLE_insert` AFTER INSERT ON `TABLE`
FOR EACH ROW BEGIN
INSERT INTO `CHANGES` (`field1`, `field3`, `field4`, `field5`)
VALUES ('value1', 'value3', 'value4', 'value5');
END|
DELIMITER ;
UPD 2:
I found a temporary and dirtier method: I added a BEFORE INSERT trigger on CHANGES to set ID manually.
It works, but I still can't figure out why the native AUTO_INCREMENT mechanism generates duplicated ids.
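A hypothetical sketch of that workaround (the trigger name is assumed; the MAX(id)+1 read is what makes it dirty):
DELIMITER |
CREATE TRIGGER tr_CHANGES_bi BEFORE INSERT ON `CHANGES`
FOR EACH ROW BEGIN
  -- assign the id manually instead of relying on AUTO_INCREMENT
  SET NEW.id = (SELECT COALESCE(MAX(id), 0) + 1 FROM `CHANGES`);
END|
DELIMITER ;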
I've created a log system based on a trigger.
Every time a row is inserted or updated, the trigger stores a new row in another table.
The trigger works fine, but after some time I found this message in the logs:
[Warning] Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statement is unsafe because it invokes a trigger or a stored function that inserts into an AUTO_INCREMENT column. Inserted values cannot be logged correctly. Statement: update `gl_item` set `is_shown` = '0', `updated_at` = '2016-03-21 16:56:28' where `list_id` = '1' and `is_shown` = '1'
I've already read some posts related to this issue, like:
- MySQL Replication & Triggers
But I don't understand the nature of the problem.
What does this warning mean?
Should I not insert into an auto-increment column with triggers?
What is the best way to create a log system in order to avoid this warning?
Update
Output of SHOW CREATE TABLE; this is the table where the trigger inserts the rows.
gl_item_log | CREATE TABLE `gl_item_log` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Item log unique id',
`item_id` bigint(20) unsigned DEFAULT NULL ,
`updated_by` bigint(20) unsigned DEFAULT NULL ,
`switch_shown` tinyint(4) DEFAULT NULL ,
`switch_checked` tinyint(4) DEFAULT NULL ,
`logged_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`logged_at_microtime` decimal(6,6) unsigned NOT NULL ,
`logged_at_microtime_int` mediumint(8) unsigned NOT NULL DEFAULT '0' ,
PRIMARY KEY (`id`),
KEY `gl_item_log_updated_by_foreign` (`updated_by`),
KEY `gl_item_log_item_id_updated_by_switch_shown_switch_checked_index`
(`item_id`,`updated_by`,`switch_shown`,`switch_checked`),
CONSTRAINT `gl_item_log_item_id_foreign`
FOREIGN KEY (`item_id`) REFERENCES `gl_item` (`id`),
CONSTRAINT `gl_item_log_updated_by_foreign`
FOREIGN KEY (`updated_by`) REFERENCES `gl_general_user` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Would it be a good idea to drop the id column with the auto-increment field, so that the log entries have no unique identifier?
Thanks
What does this warning mean?
According to the MySQL documentation:
A statement invoking a trigger (or function) that causes an update to an AUTO_INCREMENT column is not replicated correctly using statement-based replication. MySQL 5.7 marks such statements as unsafe. (Bug #45677)
With statement-based replication, the exact SQL which is run on your master database is also run on your slave(s). When your trigger is fired, if it exists on every one of your databases, it is run on each database and inserts into your log. This can make it tricky for your databases to remain in sync.
Should I not insert into an auto-increment column with triggers?
Correct. It's never necessary to insert your own values into an auto-increment column.
What is the best way to create a log system in order to avoid this warning?
Either keep the trigger on every database and turn off replication for your log table, or have the trigger only on your master database and let replication copy the log table inserts to the other databases.
To work around this specific warning, configure your log table to have no auto-increment column. Your trigger can then insert into it and it shouldn't cause any replication warnings.
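A hypothetical sketch of the same log table with the id column (and therefore the AUTO_INCREMENT) removed; the secondary index name is made up, and InnoDB will fall back to its own hidden clustered index since there is no primary key:
CREATE TABLE `gl_item_log` (
  `item_id` bigint(20) unsigned DEFAULT NULL,
  `updated_by` bigint(20) unsigned DEFAULT NULL,
  `switch_shown` tinyint(4) DEFAULT NULL,
  `switch_checked` tinyint(4) DEFAULT NULL,
  `logged_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `logged_at_microtime` decimal(6,6) unsigned NOT NULL,
  `logged_at_microtime_int` mediumint(8) unsigned NOT NULL DEFAULT '0',
  KEY `gl_item_log_updated_by_foreign` (`updated_by`),
  KEY `gl_item_log_item_idx` (`item_id`,`updated_by`,`switch_shown`,`switch_checked`),
  CONSTRAINT `gl_item_log_item_id_foreign`
    FOREIGN KEY (`item_id`) REFERENCES `gl_item` (`id`),
  CONSTRAINT `gl_item_log_updated_by_foreign`
    FOREIGN KEY (`updated_by`) REFERENCES `gl_general_user` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;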
Another option is to switch to row-based replication. Then the trigger will only be fired automatically on the master and the auto-increment values will always replicate without issue.
I have two databases with the same schema (dev/prod) hosted on different machines (and different hosts).
Is there any mechanism or tool whereby I can do a select against specific rows in one db and insert them into the other?
You can use MySQL's FEDERATED storage engine:
The FEDERATED storage engine lets you access data from a remote MySQL database without using replication or cluster technology. Querying a local FEDERATED table automatically pulls the data from the remote (federated) tables. No data is stored on the local tables.
So, to create a connection:
CREATE TABLE federated_table (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
other INT(20) NOT NULL DEFAULT '0',
PRIMARY KEY (id),
INDEX name (name),
INDEX other_key (other)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://fed_user@remote_host:9306/federated/test_table';
Having defined such a table, you could then perform INSERT ... SELECT as you see fit:
INSERT INTO federated_table SELECT * FROM local_table WHERE ...
Or
INSERT INTO local_table SELECT * FROM federated_table WHERE ...
If you are federating multiple tables from the same server, you may wish to use CREATE SERVER instead.
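A sketch of that approach (the server name fedlink and the connection details are placeholders):
CREATE SERVER fedlink
  FOREIGN DATA WRAPPER mysql
  OPTIONS (USER 'fed_user', HOST 'remote_host', PORT 9306, DATABASE 'federated');
CREATE TABLE federated_table (
  id INT(20) NOT NULL AUTO_INCREMENT,
  name VARCHAR(32) NOT NULL DEFAULT '',
  other INT(20) NOT NULL DEFAULT '0',
  PRIMARY KEY (id)
)
ENGINE=FEDERATED
CONNECTION='fedlink/test_table';  -- server_name/remote_table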