How to avoid sym_ tables in the slave-side database in a SymmetricDS configuration? - mysql
I have a master node and a slave node running on the same system, replicating from the master to the slave. When I start the system, it generates sym_ tables on both the master and the slave. Are these tables really required on both sides for one-way replication? I tried adding the following property on the slave side:
auto.config.database=false
but that stopped synchronization entirely. My entries in the master's sym_ tables are as follows:
delete from sym_trigger_router;
delete from sym_trigger;
delete from sym_router;
delete from sym_channel where channel_id in ('item');
delete from sym_node_group_link;
delete from sym_node_group;
delete from sym_node_host;
delete from sym_node_identity;
delete from sym_node_security;
delete from sym_node;
insert into sym_channel
(channel_id, processing_order, max_batch_size, enabled, description)
values('item', 1, 100000, 1, 'Item and pricing data');
insert into sym_node_group (node_group_id) values ('corp');
insert into sym_node_group (node_group_id) values ('store');
insert into sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action) values ('corp', 'store', 'W');
insert into sym_node_group_link (source_node_group_id, target_node_group_id, data_event_action) values ('store', 'corp', 'P');
insert into sym_trigger
(trigger_id,source_table_name,channel_id,last_update_time,create_time)
values('item','item','item',current_timestamp,current_timestamp);
insert into sym_router
(router_id,source_node_group_id,target_node_group_id,router_type,create_time,last_update_time)
values('corp_2_store', 'corp', 'store', 'default',current_timestamp, current_timestamp);
insert into sym_router
(router_id,source_node_group_id,target_node_group_id,router_type,create_time,last_update_time)
values('store_2_corp', 'store', 'corp', 'default',current_timestamp, current_timestamp);
insert into sym_trigger_router
(trigger_id,router_id,initial_load_order,last_update_time,create_time)
values('item','corp_2_store', 100, current_timestamp, current_timestamp);
insert into sym_node (node_id,node_group_id,external_id,sync_enabled,sync_url,schema_version,symmetric_version,database_type,database_version,heartbeat_time,timezone_offset,batch_to_send_count,batch_in_error_count,created_at_node_id)
values ('000','corp','000',1,null,null,null,null,null,current_timestamp,null,0,0,'000');
insert into sym_node (node_id,node_group_id,external_id,sync_enabled,sync_url,schema_version,symmetric_version,database_type,database_version,heartbeat_time,timezone_offset,batch_to_send_count,batch_in_error_count,created_at_node_id)
values ('001','store','001',1,null,null,null,null,null,current_timestamp,null,0,0,'000');
insert into sym_node_security (node_id,node_password,registration_enabled,registration_time,initial_load_enabled,initial_load_time,created_at_node_id)
values ('000','5d1c92bbacbe2edb9e1ca5dbb0e481',0,current_timestamp,0,current_timestamp,'000');
insert into sym_node_security (node_id,node_password,registration_enabled,registration_time,initial_load_enabled,initial_load_time,created_at_node_id)
values ('001','5d1c92bbacbe2edb9e1ca5dbb0e481',1,null,1,null,'000');
insert into sym_node_identity values ('000');
If the sym_ tables are not required on the slave side, please help me avoid creating them there.
Thanks in advance.
You're talking about sym_ not sys_ tables, aren't you?
Yes, you need the sym_ tables at the destination node. For example, without sym_node your destination node won't be able to register and hold its registration with the source node, and sym_incoming_batch holds all batches of data synced to the destination node, etc.
auto.config.database just tells SymmetricDS that you are going to manage the creation and maintenance of the SymmetricDS tables yourself; they are still required. However, you can put the SymmetricDS tables in a different catalog (database) than the target tables.
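For example, a minimal sketch of keeping the runtime tables in a dedicated catalog while the application tables live elsewhere (the catalog names symds and app_db here are assumptions, not from the question): point the engine's JDBC URL at the dedicated catalog, and qualify the source table in the trigger so it still fires on the application table:

-- engine.properties connects to the symds catalog, so all sym_ tables
-- are created there; the trigger below still watches app_db.item
insert into sym_trigger
(trigger_id, source_catalog_name, source_table_name, channel_id, last_update_time, create_time)
values ('item', 'app_db', 'item', 'item', current_timestamp, current_timestamp);

This keeps the slave's application schema free of sym_ tables, even though the tables themselves still exist on that server.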
There is also an as-yet-undocumented feature to set up a generic JDBC data loader and use a small local H2 database as the SymmetricDS database. That option only works for one-way synchronization to the client.
Related
Renaming a Table in SymmetricDS with Transform
I would like to use SymmetricDS to copy a table from a client node to a master node, but with a different table name at the master node. I use a transform to rename the table. It works fine if my renamed table is located in the master node schema (where all of the master sym_ tables are also located). But I have two issues: a) At the moment I always need to create the whole target table layout before syncing. Is it possible for SymmetricDS to create the renamed target table automatically? b) Renaming a table no longer works if I locate my renamed table in a different database (called master_db), even though I specify target_catalog_name everywhere it is required. I'm thankful for any help regarding this issue. Below is the code I use for setting up the master and the client nodes.
-- config master node:
INSERT INTO `symmetricds_master`.`sym_node_group` (`node_group_id`) VALUES ('master_node');
INSERT INTO `symmetricds_master`.`sym_node_group_link` (`source_node_group_id`,`target_node_group_id`,`data_event_action`) VALUES ('client_node', 'master_node', 'P');
INSERT INTO `symmetricds_master`.`sym_node_group_link` (`source_node_group_id`, `target_node_group_id`, `data_event_action`) VALUES ('master_node', 'client_node', 'W');
-- config client node:
insert into symmetricds.`SYM_ROUTER` (router_id,source_node_group_id,target_catalog_name,target_node_group_id,create_time,last_update_time) values ('client2master','client_node','master_db','master_node',current_timestamp, current_timestamp);
insert into symmetricds.sym_trigger (trigger_id,source_catalog_name,source_table_name,channel_id,last_update_time,create_time) values ('TriggerA','source_db','ATable','default',current_timestamp,current_timestamp);
insert into symmetricds.sym_trigger_router (trigger_id, router_id, initial_load_order, create_time, last_update_time) values ('TriggerA', 'client2master', 1, current_timestamp, current_timestamp);
insert into symmetricds.sym_transform_table (transform_id,source_node_group_id,target_node_group_id,transform_point,source_catalog_name,source_table_name,target_catalog_name,target_table_name,delete_action,column_policy) values ('TransfAtoB', 'client_node', 'master_node', 'Load','source_db','ATable','master_db','BTable', 'DEL_ROW', 'IMPLIED');
My first observation is that there are two different SymmetricDS configuration schemas. That's not needed: it's enough to have one master SymmetricDS configuration database at the central node, and to let all clients download the relevant configuration upon registration and apply it to their local SymmetricDS schemas.
a) At the moment I always need to create the whole target table layout before syncing. Is it possible for SymmetricDS to create the renamed target table automatically?
Use the DDL admin commands to create a missing table on the destination node.
b) Renaming a table no longer works if I locate my renamed table in a different database (called master_db), even though I specify target_catalog_name everywhere it is required.
You should have one master_db. Why would you locate the table in another database?
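If the goal is simply to have target tables created on first sync, SymmetricDS can also create them during the initial load. A minimal engine.properties sketch (assuming the standard parameter name; verify it against your SymmetricDS version's documentation):

# Create missing target tables on the destination before the initial load runs
initial.load.create.first=true

Note that tables created this way follow the source table layout, so a transform-renamed table may still need manual DDL on the target.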
Cancel DELETE on MySQL REPLICATE
I'm looking for a way to prevent DELETE statements on a MySQL master/slave replication setup. In my case, the master is a live database with fresh entries (none older than a week), and the slave is an archive database which must contain all entries. I have several problems with my tests: If I raise an exception in a slave-side BEFORE DELETE trigger, like
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'DELETE canceled';
the SQL exception raises an error and stops my slave. If I raise a warning instead, the slave keeps running but the delete is not canceled. I can't modify MySQL's my.cnf. I can't use a boolean attribute to hide rows on the master and show them on the slave (the master database must stay as small as possible). I've been racking my brain over this for a few days and I'm running out of ideas...
You would be better off writing the deletes to an audit table. The problem with preventing deletes on slaves is this: if you try to insert a row whose primary key was already deleted on the master, and you have somehow prevented the delete on the slave, the insert will fail on the slave. Instead, you can track deleted rows in a different table with the same structure (see http://www.techonthenet.com/mysql/triggers/before_delete.php):
CREATE TRIGGER audit_before_delete
BEFORE DELETE ON yourtable
FOR EACH ROW
BEGIN
  -- Find the deleted row and insert a record into the audit table
  insert into yourtable_audit values (old.id, old.name, old.description);
END;
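To complete the picture, a minimal sketch of the audit table itself (the columns id, name, description mirror the hypothetical yourtable above and are assumptions, not from the question):

-- Same columns as the source table, plus a deletion timestamp
CREATE TABLE yourtable_audit LIKE yourtable;
ALTER TABLE yourtable_audit ADD COLUMN deleted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;

With the extra column added, the trigger should name its columns explicitly, e.g. insert into yourtable_audit (id, name, description) values (old.id, old.name, old.description);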
MySQL Queries from the MySQL Workbench won't replicate
I am preparing to make some changes to a database I manage and was unsure whether what I wanted to do would replicate properly, so I ran some tests in a test environment. It turns out the changes will replicate, but only as long as I do not run the commands from MySQL Workbench. For example, if I have a database named db_test with a table named test_a that has only a single column, id, and I try to execute this from the Workbench:
INSERT INTO db_test.test_a (id) VALUES (114);
I get the expected row in the master database, but it never replicates to the slave. When I perform a SHOW SLAVE STATUS, it shows everything is fine and current. If I then use a different SQL client such as Sequel Pro and insert another row the same way (but obviously with a different id), it shows in the master and replicates to the slave. This has me baffled and concerned, as I want to understand the difference so I can avoid performing actions that never replicate.
If you have set --replicate-do-db on the slave to filter replication for database db_test, replication is filtered based on the default database, so make sure that you issue USE db_test. Your client may behave differently in this regard, or you may be issuing different statements between clients. With --replicate-do-db set to db_test on the slave, this will replicate:
USE db_test;
INSERT INTO test_a (id) VALUES (114);
but this will not:
USE other_db;
INSERT INTO db_test.test_a (id) VALUES (114);
To get replication to work regardless of the current default database, use --replicate-wild-do-table to configure the database and table to replicate, or don't filter at all. Also, make sure that you are connected to the master database server.
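A sketch of the slave-side my.cnf alternative mentioned above (using the question's db_test as the database name; the option can be repeated once per table or pattern):

[mysqld]
# Match db_test.* regardless of the statement's default database
replicate-wild-do-table=db_test.%

Unlike --replicate-do-db, wild-table filtering matches the database name actually used in the statement, so fully qualified inserts from any client replicate consistently.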
Merge MySQL Dump with Existing Table
A user of a WordPress site with a form plugin accidentally deleted ALL of the entries for a specific form. I went into the daily backup and I have a .sql file which holds all of the data for the table where the form info is stored. Now I need to merge that back into the database, but the dump uses INSERT INTO and stops immediately with an error because most of the entries already exist. I tried using "ON DUPLICATE KEY UPDATE id=id", but it ignored everything. I've been searching here and on Google for a couple of hours without finding any kind of solution. The basics of the dump:
LOCK TABLES `wp_frm_items` WRITE;
INSERT INTO `wp_frm_items` (`id`, `item_key`, `name`, `description`, `ip`, `form_id`, `post_id`, `user_id`, `parent_item_id`, `updated_by`, `created_at`, `updated_at`) VALUES (2737,'jb7x3c','Brad Pitt','a:2:{s:7:\"browser\";s:135:\"Mozilla/5.0 (iPhone; CPU iPhone OS 6_1_3 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25\";s:8:\"referrer\";s:38:\"http://mysite/myform/\r\n\";}','192.168.1.1',6,0,NULL,NULL,NULL,'2013-06-30 15:09:20','2013-06-30 15:09:20');
UNLOCK TABLES;
ID #2737 exists, so I either want to ignore it or just update the existing row. It seems like there should be an easy way to import data from a MySQL dump into an existing database. P.S. I'm trying to do this in phpMyAdmin.
If the data has not changed for those rows, you can use REPLACE instead of INSERT. If you want to skip rows instead, one possibility is to use a temporary table: load the rows there and DELETE those whose id already exists in the old table. Note that MySQL's single-table DELETE does not accept an alias in older versions, so use the multi-table form:
DELETE temp FROM my_new_temptable AS temp WHERE temp.id IN (SELECT id FROM wp_frm_items);
Then just insert the remaining rows into wp_frm_items. Or you can move the new rows to a temporary table before restoring from the dump and copy them from there back into the original table. There are many possibilities, and many SQL tools have table-merging capabilities.
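Putting the staging-table approach together, a sketch (the staging table name wp_frm_items_restore is an assumption; it also presumes the dump's table name is edited to target the staging table before import):

-- 1. Stage the dump in a table with the same structure and keys
CREATE TABLE wp_frm_items_restore LIKE wp_frm_items;
-- 2. Import the dump's INSERTs into wp_frm_items_restore here
--    (e.g. edit the table name in the dump, then import via phpMyAdmin)
-- 3. Copy rows across, skipping any whose primary key already exists
INSERT IGNORE INTO wp_frm_items SELECT * FROM wp_frm_items_restore;
-- 4. Clean up
DROP TABLE wp_frm_items_restore;

INSERT IGNORE silently skips duplicate-key rows, which is exactly the "ignore existing entries" behavior the question asks for.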
Sync two MySQL tables
I have two separate databases. The master holds the user information (username, password, address, etc.). The slave database has only one table, holding the username and password. What I would like to happen is that when a new user is created in the master DB, the username and password are also added to the slave DB.
You can do this with either a TRIGGER or a STORED PROCEDURE. In your case I guess you could use something like this (not tested):
DELIMITER $$
CREATE TRIGGER `user_update` AFTER INSERT ON `User`
FOR EACH ROW
BEGIN
  INSERT INTO `mydb`.`UserLogin` (`id`, `UserName`, `Pass`)
  VALUES (new.UserId, new.UserName, new.Password);
END$$
DELIMITER ;
We face a similar situation. We use Percona Toolkit's pt-table-sync tool. It's rather simple to use.