Renaming a Table in SymmetricDS with Transform - mysql

I would like to use SymmetricDS to copy a table from a client node to a master node, but with a different table name at the master node. I use a transform to rename the table. It works fine if my renamed table is located in the master node schema (where all of the master sym tables are also located).
But I have two issues:
a) At the moment I always need to create the whole target table layout before syncing. Is it possible for SymmetricDS to create the renamed target table automatically?
b) Renaming the table no longer works if I locate my renamed table in a different database (called master_db), even though I specify target_catalog_name everywhere it is required.
I'm thankful for any help regarding this issue.
Below is the code I use for setting up the master and the client nodes.
-- config master node:
INSERT INTO `symmetricds_master`.`sym_node_group` (`node_group_id`)
VALUES ('master_node');
INSERT INTO `symmetricds_master`.`sym_node_group_link`
(`source_node_group_id`,`target_node_group_id`,`data_event_action`)
VALUES ('client_node', 'master_node', 'P');
INSERT INTO `symmetricds_master`.`sym_node_group_link`
(`source_node_group_id`, `target_node_group_id`, `data_event_action`)
VALUES ('master_node', 'client_node', 'W');
-- config client node:
INSERT INTO symmetricds.SYM_ROUTER
(router_id, source_node_group_id, target_catalog_name, target_node_group_id, create_time, last_update_time)
VALUES ('client2master', 'client_node', 'master_db', 'master_node', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);
INSERT INTO symmetricds.sym_trigger
(trigger_id, source_catalog_name, source_table_name, channel_id, last_update_time, create_time)
VALUES ('TriggerA', 'source_db', 'ATable', 'default', CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);
INSERT INTO symmetricds.sym_trigger_router
(trigger_id, router_id, initial_load_order, create_time, last_update_time)
VALUES ('TriggerA', 'client2master', 1, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);
INSERT INTO symmetricds.sym_transform_table
(transform_id, source_node_group_id, target_node_group_id, transform_point, source_catalog_name, source_table_name, target_catalog_name, target_table_name, delete_action, column_policy)
VALUES ('TransfAtoB', 'client_node', 'master_node', 'Load', 'source_db', 'ATable', 'master_db', 'BTable', 'DEL_ROW', 'IMPLIED');

My first observation is that there are two different SymmetricDS configuration schemas. That's not needed. It's enough to have one master SymmetricDS configuration database at the central node; upon registration, let all clients download the relevant configuration and apply it to their local SymmetricDS schemas.
a) At the moment I always need to create the whole target table layout
before syncing. Is it possible for SymmetricDS to create the renamed
target table automatically?
Use DDL admin commands to create the missing table on the destination node.
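If you want SymmetricDS to create target tables for you during the initial load, one option to try is the initial.load.create.first parameter, which makes the source node send a table-creation script before the data. A minimal sketch, assuming the stock sym_parameter layout (verify the column list against your SymmetricDS version); note that the auto-created table follows the source definition, so a load transform that renames the table may still leave you creating BTable by hand:
-- Sketch: ask SymmetricDS to create missing target tables before the
-- initial load. The parameter name is from the SymmetricDS docs; the
-- sym_parameter column list is an assumption to verify.
INSERT INTO symmetricds_master.sym_parameter
(external_id, node_group_id, param_key, param_value, create_time, last_update_time)
VALUES ('ALL', 'ALL', 'initial.load.create.first', 'true',
CURRENT_TIMESTAMP, CURRENT_TIMESTAMP);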
b) Renaming the table no longer works if I locate my renamed table
in a different database (called master_db), even though I specify
target_catalog_name everywhere it is required.
You should have a single master_db. Why would you locate the table in another database?

Related

Update a table (that has relationships) using another table in SSIS

I want to be able to update a specific column of a table using data from another table. Here's what the two tables look like, along with the DB type and the SSIS components used to get each table's data (by the way, both ID and Code are unique).
Table1(ID, Code, Description) [T-SQL DB accessed using ADO NET Source component]
Table2(..., Code, Description,...) [MySQL DB accessed using ODBC Source component]
I want to update the column Table1.Description using Table2.Description, by first matching rows on Code (because Table1.Code is the same as Table2.Code).
What I tried:
Doing a Merge Join transformation using the Code column, but I couldn't figure out how to reinsert the table, because Table1 has relationships and I can't simply drop the table and replace it with the new one
Using a Lookup transformation, but since the two tables are not of the same type it didn't allow me to create the lookup table's connection manager (which in my case would be for MySQL)
I'm still new to SSIS, but any ideas or help would be greatly appreciated.
My solution is based on #Akina's comments. Although a linked server would definitely have fit, my requirement is to make an SSIS package to take care of migrating some old data.
The first and last steps are SQL tasks, while Migrate ICDDx is the Data Flow Task (DFT) that transfers the data to a staging table created during the first SQL task.
Here are the SQL commands executed during Create Staging Table:
DROP TABLE IF EXISTS [tempdb].[##stagedICDDx];
CREATE TABLE ##stagedICDDx (
ID INT NOT NULL,
Code VARCHAR(15) NOT NULL,
Description NVARCHAR(500) NOT NULL,
........
);
And here's the SQL command (based on #Akina's comment) for transferring from the staging table to the final one (inside Transfer Staged):
UPDATE [MyDB].[dbo].[ICDDx]
SET [ICDDx].[Description] = [##stagedICDDx].[Description]
FROM [dbo].[##stagedICDDx]
WHERE [ICDDx].[Code]=[##stagedICDDx].[Code]
GO
Here's the DFT used (both the T-SQL and MySQL sources return sorted output using ORDER BY Code, so I didn't have to insert Sort components before the Merge Join):
Note: you have to set up the connection manager to retain/reuse the same connection (RetainSameConnection = True) so that the temporary table doesn't get dropped before we transfer data into it. If all goes well, then after the Transfer Staged SQL task the connection is closed and the global temporary table is dropped.

MySQL Queries from the MySQL Workbench won't replicate

I am preparing to make some changes to a database I manage and was unsure whether what I wanted to do would replicate properly, so I ran some tests in a test environment. It turns out the changes will replicate, but only as long as I do not run the commands from MySQL Workbench.
For example, if I have a database named db_test with a table test_a having only a single column id, and I try to execute this from Workbench: INSERT INTO db_test.test_a (id) VALUES (114);
I get the expected row in the master database, but it never replicates to the slave.
When I perform a SHOW SLAVE STATUS, it shows everything is fine and current. If I then use a different SQL client, such as Sequel Pro, and insert another row the same way (but obviously with a different id), it shows in the master and replicates to the slave.
This has me baffled, and concerned as I want to understand what the difference is so I can avoid performing actions that never replicate.
If you have set --replicate-do-db on the slave to filter replication to database db_test, replication is filtered based on the default database, so make sure you issue USE db_test first. Your clients may differ in this respect, or you may be issuing different statements from the two clients.
Using --replicate-do-db set to db_test on the slave, this will replicate:
USE db_test;
INSERT INTO test_a (id) VALUES (114);
but this will not:
USE other_db;
INSERT INTO db_test.test_a (id) VALUES (114);
To get replication to work regardless of the current default database, use --replicate-wild-do-table to configure the database and table to replicate or don't filter at all.
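On MySQL 5.7+ you can also switch to a wildcard table filter at runtime instead of editing the server configuration; a sketch (the replication SQL thread has to be stopped first):
-- Replicate db_test tables regardless of the current default database.
STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER REPLICATE_DO_DB = (), REPLICATE_WILD_DO_TABLE = ('db_test.%');
START SLAVE SQL_THREAD;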
Also, make sure that you are connected to the Master database server.

INSERT even though column does not exist in MySQL

Let's say I have an old .SQL dump and since it was created, I have changed the table schema.
I could be running:
INSERT INTO `ec_product_campaign_relations` (`campaign_id`, `product_id`, `product_qty`) VALUES (30,28,1),(30,27,0),(30,31,0),(30,30,0);
But if the column product_qty no longer exists, the row will not get inserted.
How can I force the row to get inserted anyway and ignore the column that does not exist?
EDIT: I should mention I'm working in PHP and this is a script used to sync table schemas... So no "manual" control over this.
Since editing your whole SQL dump won't be trivial, I suggest you add the column back to your table, run the import, then drop the column.
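A minimal sketch of that workaround, using the table and column from the question (the column type and default are assumptions; match them to the old schema):
-- Recreate the dropped column so the old dump inserts cleanly
ALTER TABLE ec_product_campaign_relations ADD COLUMN product_qty INT NOT NULL DEFAULT 0;
-- ... import the old dump here ...
-- Then remove the column again
ALTER TABLE ec_product_campaign_relations DROP COLUMN product_qty;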
You might want to create a new database for this import and restore the dump as-is. Then, once you've got a handle on what changes have been made by comparing the old schema to the new one, create a series of ALTER TABLE statements that bring the two in sync.
I tend to record these in a text file in case I need to replay them later, and also keep them as a list of what's changed. You may have to do this more than once, so notes help.
Then, once you've cleaned them up to be column-compatible, dump this database table-by-table, and restore into the other as required.

Update all rows of a single column from one table to the same table in another database

Ok, so I have a database in my testing environment called 'Food'. In this database, there is a table called 'recipe', with a column called 'source'.
This same database exists in my local environment. However, I just received an updated database (in my local environment) where all the column values (for 'source') have changed.
Is there any way I can migrate the 'source' column from my local to my test environment without changing the values of any other column? There are 1186 rows in the 'recipe' table of the 'Food' database in my test environment that need to be updated, and ONLY in the 'source' column.
You need some way to uniquely identify your Recipes. If both tables have a surrogate key that remained constant, use that. Otherwise figure out some way to match up the new data with your test data: you might already have a unique index in mind or you might need to decide on a combination of fields that uniquely identify your Recipes.
On a side note, why can't you just overwrite all the columns? It is just test data, right?
If only one column has changed and your rows have IDs (or keys), you could follow these steps (sketched in SQL after the list):
create an intermediate table locally
insert keys and new source values there (either those which have changed or all)
use mysqldump to selectively export the table from the local database
copy the dumped table to the remote database server
import it there
join it with the production table in an update statement to replace the values
drop the intermediate table on the server
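A sketch of steps 1, 6 and 7 with hypothetical names (recipe_source_new as the intermediate table, id as the key):
-- 1) intermediate table, filled locally and shipped over with mysqldump
CREATE TABLE recipe_source_new (
id INT NOT NULL PRIMARY KEY,
source VARCHAR(255) NOT NULL
);
-- 6) on the test server: copy the new values across by key
UPDATE recipe r
JOIN recipe_source_new n ON n.id = r.id
SET r.source = n.source;
-- 7) clean up
DROP TABLE recipe_source_new;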

Using MySQL without any procedures or functions

Is it possible to use any sort of logic in MySQL without using any procedures? My web hosting does not let me create any procedures so I'm looking for a workaround.
The type of thing I want to do is add a row to a table only if it doesn't already exist, or add a column to a table if it's not already there. Some operations have such conveniences, such as CREATE TABLE IF NOT EXISTS, but some of the operations I require do not :(
I realised late on that my lovely procs won't work, so I tried writing IF/ELSE logic as top-level queries; but in MySQL, IF/ELSE blocks only work inside functions/procedures, not at the top level.
Any workarounds gratefully received - I've already asked the host to grant me privileges to create procedures, but no reply as yet...
I suppose you don't have access to the INFORMATION_SCHEMA either. You can possibly find workarounds, but it would be better, in my opinion, to:
Change your hosting provider. Seriously. Pay more, if needed, for a MySQL instance that you can configure to your needs. You only have a crippled DBMS if you are not allowed to create procedures and functions.
Possible workarounds for the specific task (you want to add a column if it doesn't exist):
1) Just ALTER TABLE and add the column. If it already exists, you'll get an error, which you can catch in your application.
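A sketch of option 1, with hypothetical table and column names:
-- Fails with error 1060 (ER_DUP_FIELDNAME) if the column already exists;
-- catch that error code in the application and carry on.
ALTER TABLE my_table ADD COLUMN new_col INT NULL;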
2) (If you have no access to the INFORMATION_SCHEMA) maintain a version of the schema, for your database.
The best solution that I can think of would be to use an additional language with SQL. For example, you can run a query for a specific record, and based on the response that you get, you can conditionally run an INSERT statement.
To create a table only if it doesn't exist, try using the SHOW TABLES statement and testing whether the name appears in the result set.
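For example (hypothetical table name):
-- Returns one row when the table exists and an empty set otherwise;
-- the application branches on the result.
SHOW TABLES LIKE 'my_table';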
MySQL supports INSERT IGNORE and INSERT ... ON DUPLICATE KEY UPDATE.
The following will insert a new row, but only if there is no existing row with id=10. (This assumes that id is defined as a unique or primary key).
INSERT IGNORE INTO my_table (id, col1, col2) values (10, "abc", "def");
The following will insert a new row, but if there is an existing row with id=10 (again, assuming id is unique or primary), the existing row will be updated to hold the new values, instead of inserting a new row.
INSERT INTO my_table (id, col1, col2) values (10, "abc", "def")
ON DUPLICATE KEY UPDATE col1=VALUES(col1), col2=VALUES(col2)
Also, CREATE TABLE supports the IF NOT EXISTS modifier. So you can do something like:
CREATE TABLE IF NOT EXISTS my_table ...
There are many other similar options and modifiers available in MySQL. Check the docs for more.
Originally I created a big script to create or update the database schema, to make it easier to deploy database changes from my local machine to the server.
My script was doing a lot of "if table 'abc' exists and it doesn't have a FK constraint called 'blah'" then create an FK constraint called 'blah' on table 'abc'... and so on.
I now realise it's not actually necessary to check whether a table has a certain column or constraint, because I can just maintain a schema-versioning system and query the DB's schema-version when my app starts, or when I navigate to a certain page.
e.g. let's say I want to add a new column to a table. It works like this:
Add a new migration script to the app code, containing the SQL required to add the column to the existing table
Increment the app's schema-version by 1
On app startup, the app queries the DB for the DB's schema-version
If the DB schema-version < the app schema-version, execute the SQL migration scripts between the two schema-versions, then update the DB schema-version to match the app's
e.g. if the DB's schema-version is 5 and the app's is 8, the app will apply migration scripts 5-6, 6-7 and 7-8 to the DB. These can simply be run without having to check anything on the DB side.
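A minimal sketch of the moving parts, with hypothetical names (a one-row schema_version table plus the 7-to-8 migration script):
-- One-row table holding the DB's current schema-version
CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL);
INSERT INTO schema_version (version)
SELECT 0 FROM DUAL WHERE NOT EXISTS (SELECT 1 FROM schema_version);
-- Migration script 7 -> 8: add the new column, then bump the version
ALTER TABLE my_table ADD COLUMN new_col VARCHAR(50) NULL;
UPDATE schema_version SET version = 8;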
The app is therefore solely responsible for updating the DB schema and there's no need for me to ever have to execute schema change scripts on the local or remote DB.
I think it's a better system than the one I was trying to implement for my question.