Creating a trigger that executes a SQL statement over SSH in MySQL - mysql

I have two servers (th and ct) that are completely separate, each with its own database. I want to sync a table (et) in th with ct.
When new rows are inserted into table et on th, I want a trigger to open an SSH connection to the ct server and insert the new rows there. I think the script should look something like the following, but I can't figure out the syntax:
DROP TRIGGER IF EXISTS `et-sync`;
CREATE TRIGGER `et-sync` AFTER INSERT ON th.et
FOR EACH ROW BEGIN
ssh user@11.11.2.11 "mysql -uroot -ppassword -e \"INSERT INTO db_testplus.user SET t = NEW.t;\""
END;
Should I go down this route, or just use Percona Toolkit for MySQL (pt-table-sync)? I don't think adding a tool to control database sync at that scale is worth the added complexity.
I know that setting up replicas is probably the best solution, but considering the current system design I thought of postponing the redesign of the ct database for some time, as rebuilding it from scratch will take a while and it's an important part of the business.
Any suggestions?

For security reasons, MySQL does not allow launching processes from within itself.
Usually the alternative is to set up a cron job that orchestrates the actions, reaching into the database as needed to communicate and coordinate.
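For example, a minimal sketch of such a job, assuming key-based SSH access to ct, an auto-increment id column on et, and a table of the same name and structure in db_testplus on the other side (the address and credentials are taken from the question; the paths are placeholders):
#!/bin/sh
# Hypothetical sync job, run from cron on the th server.
# It remembers the last row already pushed to ct in a small marker file.
LAST_ID=$(cat /var/run/et_last_id 2>/dev/null || echo 0)
NEW_MAX=$(mysql -N -uroot -ppassword th -e "SELECT COALESCE(MAX(id), $LAST_ID) FROM et")
if [ "$NEW_MAX" -gt "$LAST_ID" ]; then
    # Dump only the new rows and replay them on ct over SSH.
    mysqldump -uroot -ppassword --no-create-info \
        --where="id > $LAST_ID AND id <= $NEW_MAX" th et \
        | ssh user@11.11.2.11 "mysql -uroot -ppassword db_testplus" \
        && echo "$NEW_MAX" > /var/run/et_last_id
fi
If the SSH connection fails, the marker file is not advanced, so the same rows are simply retried on the next run.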

Related

How do I update a MySQL database with the data from another database?

Disclaimer: this is a bit of a "best practices" question, but I'm not sure how else to phrase it to be more concrete or evoke a more objective answer.
I have two databases for a small Rails project: a dev database and a prod database that live on the same server.
What happens is that every couple of weeks, I make some data changes (mostly inserts) through the ActiveAdmin gem to the dev database, via a dev Rails environment. I poke around the Rails app making sure the new data looks good. When I'm ready to deploy, I run a script that:
Dumps the dev db
Drops and recreates the prod db to delete all data
Imports the dev db dump into the prod db
My intuition tells me that this is not a great way of going about this, and it's also a bit slow, but I can't seem to find the standard way of doing data deployments from one database to another. What is the standard way, if there is one?
Things I've considered:
Setting up an additional database that is the replica of the dev database; in deployment, somehow switch the Rails app over to use the replica as "prod", update the old "prod" db to be the replica, etc. I can barely even keep this idea in my head, and it seems like such a mess.
Doing data changes directly on prod, which completely invalidates the need for a dev database (feels very gross)
Doing data changes in dev via SQL script with transactions, and applying them to prod when deploying (it'd be really, really annoying to write these scripts by hand)
Some additional notes:
The only data changes that are ever made are the ones I make
Schema changes are done via Rails migrations
The databases are relatively small (the biggest table is ~1000 rows)
If you have two databases on the same server, you can compare and insert into tables. First, for deleted rows:
START TRANSACTION;
DELETE prod.tbl1
FROM dev.tbl1
RIGHT JOIN prod.tbl1 ON dev.tbl1.id = prod.tbl1.id
WHERE dev.tbl1.id IS NULL;
COMMIT;
Second, for new rows:
START TRANSACTION;
INSERT INTO prod.tbl1
SELECT dev.tbl1.*
FROM dev.tbl1
LEFT JOIN prod.tbl1 ON dev.tbl1.id = prod.tbl1.id
WHERE prod.tbl1.id IS NULL;
COMMIT;
Now, a trigger on your dev database to manage updates:
CREATE DEFINER=`root`@`localhost` TRIGGER `dev`.`tbl1_update`
BEFORE UPDATE ON `dev`.`tbl1`
FOR EACH ROW
BEGIN
SET NEW.`update` = '1';
END
You need an `update` flag field on the dev table (since update is a reserved word, it must be quoted with backticks in queries). When an UPDATE query runs on the table, the trigger sets the `update` field to 1 automatically. Then, use this query:
START TRANSACTION;
UPDATE prod.tbl1
LEFT JOIN dev.tbl1
ON prod.tbl1.id = dev.tbl1.id
SET prod.tbl1.fld1 = dev.tbl1.fld1, prod.tbl1.fld2 = dev.tbl1.fld2
WHERE prod.tbl1.id IN (SELECT id FROM dev.tbl1 WHERE `update` = '1');
UPDATE dev.tbl1 SET `update` = '0';
COMMIT;
You can run queries like this on all tables. You can put them in a .sql file and run it with a cron job (mysql -h host -u user -p -D dbname < myscript.sql).
These queries compare the tables, get the IDs present on dev but not on production, select the complete rows (only those IDs), and insert them into prod.
(Replace the id field with your own unique identifier for each table.)
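A possible crontab entry for such a job might look like this (the schedule, credentials, and path are all placeholders):
# Hypothetical entry: apply the sync script every five minutes.
*/5 * * * * mysql -h localhost -u syncuser -psecret -D prod < /home/syncuser/myscript.sql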
This seems like a pretty strange approach. Usually the data in development is regarded as disposable. You want just enough data so that you can do styling and troubleshooting - usually with pseudorandom data. Building the "finished" app data in development seems error prone, and you would need to sync work if there is more than one developer.
Plus, if the data set is significantly large, Rails will be very slow in development due to the lack of caching.
What you want is a staging environment which runs with the same settings as the intended production. The key here is that it should be as close to production as possible. It can run on a remote server or a server on an intranet.
You can also use the staging environment to display new features or progress to clients/stakeholders and let them preview new features or be looped in on the progress in development.
You can create it by copying config/environments/production.rb -> staging.rb and by setting the RAILS_ENV env var to staging on the intended staging server.
You should also create an additional section in config/database.yml or use ENV['DATABASE_URL'].
Depending on the project staging can be flushed daily with mirrored data from production or be fully synced.
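Once the staging.rb environment file and the database entry exist, running the app under that environment is just a matter of setting the variable; a generic sketch, not specific to your deploy setup:
shell> RAILS_ENV=staging bundle exec rake db:create db:migrate
shell> RAILS_ENV=staging bundle exec rails server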

SQL Server does not queue queries to a linked server

I have a table in SQL Server and a table with the same name and fields on a MySQL server. I connected them through a linked server using the following trigger:
CREATE TRIGGER items_insert ON [prueba]
FOR INSERT
AS
BEGIN
declare @tmp table (a int, b varchar(10))
insert @tmp (a, b) select ID, Name from inserted
COMMIT
SET XACT_ABORT ON
INSERT INTO OPENQUERY(WEBDB, 'SELECT * FROM prueba')
SELECT a, b FROM @tmp
begin tran
end
My problem is that when I take the MySQL server offline and insert a record in SQL Server, it obviously does not get inserted into MySQL, but when I bring the MySQL server back online it does not either. I want a queue of sorts, so that when the connection between the servers drops, any new records from that period are inserted into MySQL once the connection is restored. How could I achieve this? I am new to SQL Server and triggers.
NOTE: the trigger has the @tmp declarations according to this tutorial, because I was getting a weird error about transactions.
Triggers will never queue, and using linked servers inside a trigger is a bad idea. You will find hundreds of people who have burnt their fingers with this one; I did too.
For any queue-type system you will need to implement Service Broker or, as Nilesh pointed out, use a job that processes a queue table.
Your current setup is going to be very problematic; I used the same approach several years ago in an attempt to get data from SQL 2005 to a MySQL server. Incidentally, in SQL 2000 you could actually replicate the data from MSSQL to any other ODBC data source. Microsoft discontinued this in SQL 2005.
So you have a few choices here.
Learn Service Broker: Service Broker is an awesome but little-used piece of SQL Server. It is an asynchronous queuing technology that allows you to send messages to other remote systems; check this link for much more information. However, this is going to take time and effort to implement, as you will have to learn quite a bit, i.e. a steep learning curve.
Create a queue table and process it on a schedule: Create a table that holds the data you want to insert into MySQL, with a processed flag. In the trigger, insert the data into this table. Create a SQL Server job that runs every minute and inserts the data from the queue table into the MySQL database, marking rows as processed on successful insertion (see the sketch after this list).
Add a processed flag to the original table: Create a job that uses the flag to find all rows that have not yet been inserted and inserts them on a schedule. This is like option 2, but you don't create an additional table.
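A rough sketch of option 2, reusing the table and linked-server names from the question and replacing the original trigger (the queue table, its columns, and the job body are assumptions, not a drop-in implementation):
-- Local queue table; the trigger only ever writes here, so inserts on [prueba]
-- keep working even while the MySQL linked server is unreachable.
CREATE TABLE prueba_queue (
    queue_id  INT IDENTITY(1,1) PRIMARY KEY,
    a         INT,
    b         VARCHAR(10),
    processed BIT NOT NULL DEFAULT 0
);
GO
CREATE TRIGGER items_insert ON [prueba]
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO prueba_queue (a, b)
    SELECT ID, Name FROM inserted;
END
GO
-- Body of a SQL Server Agent job scheduled every minute: push unprocessed rows
-- to MySQL, then mark only the rows that were actually pushed.
DECLARE @max_id INT;
SELECT @max_id = MAX(queue_id) FROM prueba_queue WHERE processed = 0;
IF @max_id IS NOT NULL
BEGIN
    INSERT INTO OPENQUERY(WEBDB, 'SELECT a, b FROM prueba')
    SELECT a, b FROM prueba_queue
    WHERE processed = 0 AND queue_id <= @max_id;

    UPDATE prueba_queue
    SET processed = 1
    WHERE processed = 0 AND queue_id <= @max_id;
END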

SQL 2008 - Alternative to trigger

I am looking for a solution to the following:
Database: A
Table: InvoiceLines
Database: B
Table: MyLog
Every time lines are added to InvoiceLines in database A, I want to run a query that updates the table MyLog in database B. And I want it instantly.
Normally I would create a trigger in database A on INSERT in InvoiceLines. The problem is that database A belongs to an ERP program where I don't want to make any changes at all (updates, unknown functionality in the 3-layer program, etc.).
Any hints to help me in the right direction...?
You can use transactional replication to send changes from your table in database A to a copy in DB B, then create your triggers on the copy. It's not "instant," but it's usually considered "near real time."
You might be able to use DB mirroring to do this somehow, but you'd have to do some testing to see if you could get it to work right (maybe set up triggers in the mirror that don't exist in the original?)
One possible solution that replicates the trigger's functionality without touching database A is to poll the table from an external application (e.g. Java), which, on finding new inserts, fires the required query.
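A minimal sketch of the query such a poller could run (the bookmark column and the MyLog layout are assumptions about your schema):
-- Copy invoice lines that appeared since the last poll into the log,
-- using the highest id already logged as the bookmark.
DECLARE @last_id INT;
SELECT @last_id = ISNULL(MAX(SourceLineID), 0) FROM B.dbo.MyLog;

INSERT INTO B.dbo.MyLog (SourceLineID, LoggedAt)
SELECT il.InvoiceLineID, GETDATE()
FROM A.dbo.InvoiceLines AS il
WHERE il.InvoiceLineID > @last_id;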
In SQL Server 2008, something similar can be done via a C# (CLR) assembly, but again this needs to be installed, which requires changing the database.

SQL Server to MySQL data transfer

I am trying to transfer bulk data on a constant and continuous basis from a SQL Server database to a MySQL database. I wanted to use SQL Server replication (via SSMS), but that apparently only supports SQL Server to Oracle or IBM DB2 connections. Currently we are using SSIS to transform the data and push it to a temporary location on the MySQL side, where it is copied over. I would like the fastest way to transfer data and am comparing several methods.
I have a new way I plan on transforming the data which I am sure will solve most of the current time issues, but I want to make sure we do not run into time problems in the future. I have set up a linked server that uses a MySQL ODBC driver to talk between SQL Server and MySQL. This seems VERY slow. I have some code that also uses Microsoft's ODBC driver, but it is used so little that I cannot gauge the performance. Does anyone know of lightning-fast ways to communicate between these two databases? I have been researching MySQL data providers that seem to communicate with an OLE DB layer. I'm not too sure what to believe and which way to steer towards, any ideas?
I used the jdbc-odbc bridge in Java to do just this in the past, but performance through ODBC is not great. I would suggest looking at something like http://jtds.sourceforge.net/ which is a pure Java driver that you can drop into a simple Groovy script like the following:
import groovy.sql.Sql
sql = Sql.newInstance( 'jdbc:jtds:sqlserver://serverName/dbName-CLASS;domain=domainName',
'username', 'password', 'net.sourceforge.jtds.jdbc.Driver' )
sql.eachRow( 'select * from tableName' ) {
println "$it.id -- ${it.firstName} --"
// probably write to mysql connection here or write to file, compress, transfer, load
}
The following performance numbers give you a feel for how it might perform:
http://jtds.sourceforge.net/benchTest.html
You may find some performance advantages in dumping the data to MySQL's dump-file format and using LOAD DATA INFILE instead of writing row by row. MySQL has some significant performance improvements for large data sets if you load from an infile and do things like atomic table swaps.
We use something like the following to quickly load large data files into MySQL when moving data from one system to another; it is the fastest mechanism we have found for loading data into MySQL. For real-time row-by-row transfer, a simple loop in Groovy plus a table to keep track of which rows have been moved might be all you need.
mysql> select * from tablename into outfile 'tablename.dat';
shell> myisamchk --keys-used=0 -rq '/data/mysql/schema_name/tablename'
mysql> load data infile 'tablename.dat' into table tablename;
shell> myisamchk -rq /data/mysql/schema_name/tablename
mysql> flush tables;
mysql> exit;
shell> rm tablename.dat
The best way I have found to transfer SQL data (if you have the space) is a SQL dump in one dialect, and then using a conversion tool (or a Perl script; both are prevalent) to convert the dump from MSSQL to MySQL. See my answer to this question about which converter you may be interested in :).
We've used the ADO.NET driver for MySQL in SSIS with quite a bit of success. Basically, install the driver on the machine with Integration Services installed, restart BIDS, and it should show up in the driver list when you create an ADO.NET connection manager.
As for replication, what exactly are you trying to accomplish?
If you are monitoring changes, treat it as a type 1 slowly changing dimension (data warehouse terminology, but the same principle applies). Insert new records, update changed records.
If you are only interested in new records and have no plans to update previously loaded data, try an incremental load strategy: insert records where source.id > max(destination.id) (see the sketch below).
After you've tested the package, schedule a job in SQL Server Agent to run the package every x minutes.
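As a rough illustration of the incremental strategy, the source query in the SSIS data flow could look like this (table and column names are placeholders; the bookmark value would be looked up from the MySQL destination first):
-- Hypothetical incremental-load source query: pull only rows newer than
-- what the destination already has.
DECLARE @max_dest_id INT;
SET @max_dest_id = 0;  -- in practice, fetched from MAX(id) on the MySQL side
SELECT s.id, s.col1, s.col2
FROM dbo.SourceTable AS s
WHERE s.id > @max_dest_id;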
You can also try the following:
http://kofler.info/english/mssql2mysql/
I tried this a while ago and it worked for me, but I wouldn't recommend it to you.
What is the real problem? What are you trying to do?
Can't you get an MSSQL DB connection, for example from Linux?

Question about MySQL database migration

I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here also involves some schema changes, for example adding some new columns to several tables, adding some new tables, etc.
Now, the only method I can think of is to use a PHP or Python script (two languages I know) to connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column will have default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database.
Using mysqldump etc. won't work. Here is the detail. For example: I have FOUR old databases, which I can name 'DB_a', 'DB_b', 'DB_c', 'DB_d'. The old table A has 28 columns, and I want to add each row of table A into the new database with a new column 'DB_x' (x indicating which database the row comes from). If I can't differentiate the source database by the row's content, the only way I can identify it is through some user input parameters.
Is there any tool, or a better method than writing a script yourself? Here, I don't need to worry about concurrent-write problems etc.; the old database will be down for a while (not open to public usage, only for the upgrade).
Thanks!!
I don't entirely understand your situation with the columns (wouldn't it be more sensible to add any new columns after migration?), but one of the arguably fastest methods to copy a database across servers is mysqlhotcopy. It can copy MyISAM tables only and has a number of other requirements, but it's awfully fast because it skips the create dump / import dump step completely.
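Typical usage is roughly the following, run on the old server (the database name and paths are placeholders); the copied files are then moved into the new server's data directory:
shell> mysqlhotcopy --user=root --password=secret old_db /tmp/hotcopy
shell> scp -r /tmp/hotcopy/old_db newserver:/var/lib/mysql/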
Generally when you migrate a database to new servers, you don't apply a bunch of schema changes at the same time, for the reasons that you're running into right now.
MySQL has a dump tool called mysqldump that can be used to easily take a snapshot/backup of a database. The snapshot can then be copied to a new server and installed.
You should figure out all the changes that have been done to your "new" database, and write out a script of all the SQL commands needed to "upgrade" the old database to the new version that you're using (e.g. ALTER TABLE a ADD COLUMN x, etc). After you're sure it's working, take a dump of the old one, copy it over, install it, and then apply your change script.
Use mysqldump to dump the data, then load the dump on the new server with mysql < output.sql. Now the old data is on the new server. Manipulate as necessary.
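In concrete terms that is roughly (database names and credentials are placeholders):
shell> mysqldump -u root -p old_db > output.sql
shell> mysql -u root -p new_db < output.sql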
Sure, there are tools that can help you achieve what you're trying to do. mysqldump is a prime example. Just take a glance here:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
What you could do is:
1) You make a dump of the current db, using mysqldump with the --no-data option, to fetch the schema only
2) You alter the dumped schema, adding the new columns
3) You create your new schema from it (mysql < dump.sql - just google for "mysql backup restore" for more help on the syntax)
4) You dump your data using mysqldump's --complete-insert option (see the link above)
5) You import your data, using mysql < data.sql (a consolidated sketch of these steps follows below)
This should do the job for you, good luck!
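Putting those steps together as shell commands might look roughly like this (old_db, new_db, and the file names are placeholders):
shell> mysqldump -u root -p --no-data old_db > schema.sql
# ... edit schema.sql by hand to add the new columns and tables ...
shell> mysql -u root -p new_db < schema.sql
shell> mysqldump -u root -p --no-create-info --complete-insert old_db > data.sql
shell> mysql -u root -p new_db < data.sql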
Adding extra columns can be done on a live database:
ALTER TABLE `table_name` ADD `column_name` MEDIUMINT(8) DEFAULT 0;
MySQL will set the new column to the default value for all existing rows.
So here is what I would do:
Make a copy of your old database with the mysqldump command.
Run the resulting SQL file against your new database; now you have an exact copy.
Write a migration.sql file that will modify your database with ALTER TABLE commands and, for complex conversions, some temporary MySQL procedures.
Test your script (if it fails, go to (2)).
If all is OK, go to (1) and go live with your new database.
These are all valid approaches, but I believe what you really want is a SQL statement that generates the INSERT statements carrying the new columns you have.
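For instance, a hedged sketch of generating such statements for one of the old databases, tagging every row with its source (the new column name source_db and the column list are assumptions based on the question):
-- Run against DB_a (repeat for DB_b, DB_c, DB_d); the output is a list of
-- INSERT statements that can be replayed against the new server.
SELECT CONCAT(
    'INSERT INTO new_db.A (col1, col2, source_db) VALUES (',
    QUOTE(col1), ', ', QUOTE(col2), ', ''DB_a'');'
) AS insert_stmt
FROM DB_a.A;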