I have a table in SQL Server and a table with the same name and fields on a MySQL server. I connected them through a linked server using the following trigger:
CREATE TRIGGER items_insert ON [prueba]
FOR INSERT
AS
BEGIN
    DECLARE @tmp TABLE (a int, b varchar(10))
    INSERT @tmp (a, b) SELECT ID, Name FROM inserted
    COMMIT
    SET XACT_ABORT ON
    INSERT INTO OPENQUERY(WEBDB, 'SELECT * FROM prueba')
    SELECT a, b FROM @tmp
    BEGIN TRAN
END
My problem is that when I take the MySQL server offline and insert a record in SQL Server, it obviously does not get inserted into MySQL; but when I bring the MySQL server back up, it does not get inserted either. I want a queue of sorts, so that when the connection between the servers drops, any records added during that time are inserted into MySQL once the connection is restored. How could I achieve this? I am new to SQL Server and triggers.
NOTE: the trigger has the @tmp declaration, per this tutorial, because I was getting a weird error about transactions.
Triggers will never queue, and using linked servers inside a trigger is a bad idea. You will find hundreds of people who have burned their fingers with this one; I did too.
For any queue-type system you will need to implement Service Broker or, as Nilesh pointed out, use a job that works through a queue table.
Your current setup is going to be very problematic; I used the same approach several years ago in an attempt to get data from SQL 2005 to a MySQL server. Incidentally, in SQL 2000 you could actually replicate the data from MSSQL to any other ODBC data source, but Microsoft discontinued this in SQL 2005.
So you have a few choices here:

1. Learn Service Broker. Service Broker is an awesome but little-used piece of SQL Server. It is an asynchronous queuing technology that lets you send messages to remote systems; check this link for much more information. However, it is going to take time and effort to implement, as you will have to learn quite a bit, i.e. a steep learning curve.

2. Create a queue table and process it on a schedule. Create a table that holds the data you want to insert into MySQL, plus a processed flag. In the trigger, insert the data into this queue table. Create a SQL Server Agent job that runs every minute and inserts the data from the queue table into the MySQL database, marking rows as processed on successful insertion (see the sketch after this list).

3. Add a processed flag to the original table. Create a job that uses the flag to find all rows that have not yet been inserted and inserts them on a schedule. This is like option 2, but you don't create an additional table.
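A minimal sketch of option 2, reusing the table and columns from the question; the queue table name, the IDENTITY key, and the error handling are illustrative assumptions, not from the original post:

-- Local queue table: the trigger only ever writes locally,
-- so it keeps working while MySQL is offline.
CREATE TABLE dbo.prueba_queue (
    QueueID   int IDENTITY(1,1) PRIMARY KEY,
    a         int,
    b         varchar(10),
    processed bit NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER items_insert ON [prueba]
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.prueba_queue (a, b)
    SELECT ID, Name FROM inserted;
END
GO

-- Body of a SQL Server Agent job step scheduled every minute.
DECLARE @maxId int = (SELECT MAX(QueueID) FROM dbo.prueba_queue WHERE processed = 0);

BEGIN TRY
    INSERT INTO OPENQUERY(WEBDB, 'SELECT * FROM prueba')
    SELECT a, b FROM dbo.prueba_queue
    WHERE processed = 0 AND QueueID <= @maxId;

    -- Flag only the rows that were pushed; anything inserted meanwhile
    -- has a higher QueueID and waits for the next run.
    UPDATE dbo.prueba_queue
    SET processed = 1
    WHERE processed = 0 AND QueueID <= @maxId;
END TRY
BEGIN CATCH
    PRINT 'MySQL unreachable; rows stay queued for the next run.';
END CATCH

If the MySQL server is down, the INSERT throws, the CATCH block swallows the error, and the unprocessed rows simply wait: that is exactly the queue behavior you are after.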
Related
I have 2 servers (th and ct) that are completely separated, each with its own database. I want to sync a table (et) in th with ct.
I want that, if new rows are inserted into table et on th, a trigger fires an SSH connection to the ct server and inserts the new rows there. I think the script should look something like the following, but I can't figure out the syntax:
DROP TRIGGER IF EXISTS `et-sync`;
CREATE TRIGGER `et-sync` AFTER INSERT ON th.et
FOR EACH ROW BEGIN
    ssh user@11.11.2.11 "mysql -uroot -ppassword -e \"INSERT INTO db_testplus.user SET t = NEW.t;\""
END;
And should I use this, or just use Percona Toolkit for MySQL (pt-table-sync)? I don't think adding a tool to control database sync at that scale is worth it (added complexity).
I know that adding replicas is probably the best solution, but considering the current system design, I thought of postponing the redesign of the ct database for a while, since rebuilding it from scratch will take some time and it's an important part of the business.
Any suggestions?
For security reasons, MySQL does not allow launching processes from within itself.
Usually the alternative is a cron job that orchestrates the actions, reaching into the database as needed to communicate and coordinate.
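A minimal sketch of that pattern, assuming a hypothetical synced flag column and an id primary key on th.et (neither is in the original schema): the cron script connects to each server in turn and runs something like

-- On th: fetch the rows not yet copied to ct.
SELECT id, t FROM th.et WHERE synced = 0;

-- On ct: insert each fetched row (a literal value shown for illustration).
INSERT INTO db_testplus.user SET t = 'fetched value';

-- Back on th: mark the copied rows so the next run skips them.
UPDATE th.et SET synced = 1 WHERE id IN (101, 102);

If the script dies partway through, the rows keep synced = 0 and are simply retried on the next run.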
I am looking for a solution to the following:
Database: A
Table: InvoiceLines
Database: B
Table: MyLog
Every time lines are added to InvoiceLines in database A, I want to run a query that updates the table MyLog in database B. And I want it instantly.
Normally I would create a trigger in database A on INSERT in InvoiceLines. The problem is that database A belongs to an ERP program where I don't want to make any changes at all (updates, unknown functionality in the 3-layer program, etc.).
Any hints to point me in the right direction?
You can use transactional replication to send changes from your table in database A to a copy in DB B, then create your triggers on the copy. It's not "instant," but it's usually considered "near real time."
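A sketch of what the trigger on the replicated copy could look like, assuming the copy lives in database B next to MyLog and that both tables have the columns shown (the names are illustrative guesses, since the ERP schema isn't given):

-- In database B, on the subscriber copy of InvoiceLines.
CREATE TRIGGER trg_InvoiceLinesCopy_Log ON dbo.InvoiceLines
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.MyLog (InvoiceLineID, LoggedAt)
    SELECT i.InvoiceLineID, GETDATE()
    FROM inserted AS i;
END

Because the distribution agent applies replicated changes as ordinary INSERTs, this trigger fires on the copy without touching database A at all.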
You might be able to use DB mirroring to do this somehow, but you'd have to do some testing to see if you could get it to work right (maybe set up triggers in the mirror that don't exist in the original?)
One possible solution that replicates the trigger's functionality without updating the database is to poll the table from an external application (e.g. Java), which, on finding a new insert, fires the required query.
In SQL Server 2008, something similar can be done via a C# assembly, but again this needs to be installed, which requires a database update.
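A minimal sketch of the polling approach, assuming InvoiceLines has an ever-increasing InvoiceLineID key (an assumption; any monotonic column or rowversion would do): the external application persists a watermark and repeatedly runs

-- @lastSeen is the watermark the polling application saves between runs.
DECLARE @lastSeen int = 41780;  -- illustrative value

SELECT InvoiceLineID  -- plus whatever columns MyLog needs
FROM A.dbo.InvoiceLines
WHERE InvoiceLineID > @lastSeen
ORDER BY InvoiceLineID;

-- For each row returned, the application performs its MyLog update in
-- database B and advances the watermark to the highest key processed.

This only reads database A, which satisfies the constraint of not changing it.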
I have a Service Broker (MSSQL 2008) queue with many thousands of messages. To do some forensics on the messages, I have selected the top 10,000 messages into a ##temp table. I have successfully BCP'd out the global temp table into a file. Now I need to BCP it into a local MSSQL instance, into a new table. The table has to have the same schema as the queue.
However, I can't seem to figure out what the structure of the new table should be.
I did this:
exec tempdb..sp_columns '##x'
And then tried to make a new table with a CREATE TABLE statement, but BCP-in does not seem to work.
I figure that the schema of a queue must be in MSDB somewhere, or there has to be a way to cleanly get the column types of a Service Broker queue.
Can anyone help?
Thanks.
If you already have the data in a temp table and have appropriate permissions:

SELECT TOP 0 * INTO NewTable FROM ##x

Then use SSMS to script NewTable; TOP 0 gives you the queue's column definitions without copying any rows.
I want to know if I can capture all the INSERTs that are executed against my database, all in one place.
I want something that shows me every INSERT at the moment it happens: not just the data, but the INSERT statements themselves, with the data and everything that was inserted. Is this possible?
I'm using SQL Server 2008.
I can only think of two methods:
1) You could try using the new auditing features of SQL Server 2008: How to: Create a Server Audit and Database Audit Specification.
2) Write an INSERT trigger for each table and send the INSERTED table to a common log (a sketch follows).
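A sketch of method 2 for a single table; dbo.Orders and the InsertLog layout are hypothetical, and FOR XML is just one convenient way to let one log table hold rows from tables with different columns:

CREATE TABLE dbo.InsertLog (
    LogID      int IDENTITY(1,1) PRIMARY KEY,
    TableName  sysname NOT NULL,
    InsertedAt datetime NOT NULL DEFAULT GETDATE(),
    RowData    xml NULL
);
GO

CREATE TRIGGER trg_Orders_LogInserts ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Logs the whole inserted batch as a single XML document.
    INSERT INTO dbo.InsertLog (TableName, RowData)
    SELECT 'dbo.Orders',
           (SELECT * FROM inserted FOR XML PATH('row'), ROOT('rows'), TYPE);
END

Note that a trigger captures the inserted data, not the literal INSERT statement text; if you need the statements themselves, the auditing route in method 1 is the one to use.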
Is it possible for a MySQL database to invoke an external exe file when a new row is added to one of the tables in the database?
I need to monitor the changes in the database, so when a relevant change is made, I need to do some batch jobs outside the database.
Chad Birch has a good idea with using MySQL triggers and a user-defined function. You can find out more in the MySQL CREATE TRIGGER Syntax reference.
But are you sure that you need to call an executable right away when the row is inserted? That method seems prone to failure, because MySQL might spawn multiple instances of the executable at the same time. If your executable fails, there will be no record of which rows have been processed and which have not. If MySQL waits for your executable to finish, inserting rows might be very slow. Also, if Chad Birch is right, you will have to recompile MySQL, so it sounds difficult.
Instead of calling the executable directly from MySQL, I would use triggers to simply record the fact that a row got INSERTed or UPDATEd: record that information in the database, either with new columns in your existing tables or with a brand-new table called, say, database_changes. Then make an external program that regularly reads the information from the database, processes it, and marks it as done.
Your specific solution will depend on what parameters the external program actually needs.
If your external program needs to know which row was inserted, then your solution could be like this: Make a new table called database_changes with fields date, table_name, and row_id, and for all the other tables, make a trigger like this:
DELIMITER $$
CREATE TRIGGER `my_trigger`
AFTER INSERT ON `table_name`
FOR EACH ROW BEGIN
    INSERT INTO `database_changes` (`date`, `table_name`, `row_id`)
    VALUES (NOW(), 'table_name', NEW.id);
END$$
DELIMITER ;
Then your batch script can do something like this:
Select the first row in the database_changes table.
Process it.
Remove it.
Repeat 1-3 until database_changes is empty.
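In SQL terms, one iteration of that loop might look like this (the WHERE values stand in for whatever the SELECT just returned):

-- 1. Fetch the oldest unprocessed change.
SELECT `date`, `table_name`, `row_id`
FROM `database_changes`
ORDER BY `date`
LIMIT 1;

-- 2. ... process that row in the batch script ...

-- 3. Remove exactly the entry that was processed.
DELETE FROM `database_changes`
WHERE `table_name` = 'table_name' AND `row_id` = 42
LIMIT 1;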
With this approach, you can have more control over when and how the data gets processed, and you can easily check to see whether the data actually got processed (just check to see if the database_changes table is empty).
You could do what replication does: hang on the binary log. Set up your server as a 'master server' and, instead of adding a 'slave server', run mysqlbinlog. You'll get a stream of every command that modifies your database.
Or step in between the client and server: check out MySQL Proxy. You point it at your server and point your client(s) at the proxy. It lets you interpose Lua scripts to monitor, analyze, or transform any SQL command.
I think it's going to require adding a User-Defined Function, which I believe requires recompilation:
MySQL FAQ - Triggers: Can triggers call an external application through a UDF?
I think it's really a MUCH better idea to have some external process poll for changes to the table and execute the external program. You could also have a column containing the status of each run (e.g. "pending", "failed", "success") and just select the rows where that column is "pending".
It depends how soon the batch job needs to run. If it's something that needs to run "sooner or later" and can fail and be retried, definitely have an app polling the table and running the jobs as necessary.