MySQL trigger performance vs PHP manual insert

I want to create a notification after insertion on some tables. For example whenever a user inserts a comment I create a notification for the administrators that this user has created a comment.
I used to do it manually in PHP. It wasn't that bad; it was something like this:
// after the comment is created
Notification::create(....);
Not bad, but sometimes I give the user the ability to add images, posts, etc., so I have to remember every time to insert a notification.
So I am thinking of using a MySQL trigger instead. But I am worried about how that will affect performance.
One last thing: is it possible to create a trigger after insert on multiple tables?
Thanks,

Is it possible to create a trigger after insert on multiple tables?
No, it is not possible. You have to create a separate trigger for each table.
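For example, a minimal sketch (the comments, images and notifications tables and their columns are assumptions, not your actual schema) could look like this:
DELIMITER $$
CREATE TRIGGER comments_after_insert
AFTER INSERT ON comments
FOR EACH ROW
BEGIN
    -- hypothetical notifications(user_id, message, created_at) table
    INSERT INTO notifications (user_id, message, created_at)
    VALUES (NEW.user_id, 'New comment created', NOW());
END$$

CREATE TRIGGER images_after_insert
AFTER INSERT ON images
FOR EACH ROW
BEGIN
    INSERT INTO notifications (user_id, message, created_at)
    VALUES (NEW.user_id, 'New image uploaded', NOW());
END$$
DELIMITER ;
Each table needs its own trigger, but the trigger bodies can stay almost identical.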
I am worried about how that will affect performance.
Performance-wise it shouldn't be a disaster, although by introducing triggers you artificially prolong insert/update operations on your tables (images, posts, ...), effectively increasing locking time.
But performance is not the only factor to consider. Maintainability should be important too. By creating triggers you scatter your application logic between your app and the database. It's harder to test. Triggers are often forgotten, e.g. when you transfer the schema or produce a dump. Sometimes you don't want them to fire while you do some maintenance DML on your tables, but MySQL lacks that capability, so you'll have to use workarounds.
Bottom line: consider not using triggers unless you have to.

Related

MySQL trigger vs application insert for history

I have a main table in MySQL and need a history table for tracking changes to it.
I have two approaches:
Trigger: create a trigger for the main table which inserts into the history table for any change in the main table.
Application: insert into the history table from the application while inserting or updating the main table.
I am checking which approach is best for performance.
Assuming your trigger performs exactly the same operation as the separate logging query (e.g. both insert a row into your history table whenever you modify your table), there is no significant performance difference between your two options, as both do the same amount of work.
The decision is usually design driven - or the preference of whoever makes the guidelines you have to follow.
Some advantages of using a trigger for your history log:
You cannot forget to log, e.g. by coding mistakes in your app, and don't have to take care of it in every quick and dirty maintenance script. MySQL does it for you.
You have direct access to all column values in the trigger including their previous values, and specifically the primary key (new.id). This makes logging trivial.
If you e.g. do batch modifications, it might be complicated to write an individual logging query. delete from tablename where xyz? You will probably do an insert into historytable ... select ... where xyz first, and if xyz is a slow condition that ends up not deleting anything, you may just double your execution time this way. So much for performance. update tablename set a = rand() where a > 0.5? Good luck writing a proper separate logging query for this.
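As an illustration, a minimal MySQL sketch of a history trigger (the accounts table, its balance column and the accounts_history table are made-up names) might be:
DELIMITER $$
CREATE TRIGGER accounts_after_update
AFTER UPDATE ON accounts
FOR EACH ROW
BEGIN
    -- both OLD and NEW values are directly available here
    INSERT INTO accounts_history (account_id, old_balance, new_balance, changed_at)
    VALUES (NEW.id, OLD.balance, NEW.balance, NOW());
END$$
DELIMITER ;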
Some advantages not using a trigger to log:
You have control over when and what you log. E.g. if you want to log only specific changes done by end users in your application, but not those made by batch scripts or automatic processes, it might be easier (and faster) to just log explicitly what you want to log.
You may want to log additional information not available to the trigger (and that you don't want to store in the main table), e.g. the Windows login or the last button the user pressed to access the function that modified this data.
It might be more convenient to write a general logging function in a programming language, where you can use metadata to e.g. dynamically generate the logging query or compare old and new values in a loop over all columns, than to maintain 3 triggers for every table, where you usually have to list every column explicitly.
Since you are especially interested in performance: although it's probably more a theoretical than a practical advantage, if you do a lot of batch modifications, it might be faster to write the log in batches too (e.g. inserting 1000 history rows at once will be faster than inserting 1000 rows individually using a trigger). But you will have to design your logging query properly, and the query itself cannot be slow.
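A hedged sketch of that batch-logging idea (orders and order_history are placeholder names):
-- log all affected rows in one statement, then perform the batch change
INSERT INTO order_history (order_id, status, logged_at)
SELECT id, status, NOW() FROM orders WHERE status = 'expired';
DELETE FROM orders WHERE status = 'expired';
Running both statements inside one transaction keeps the log consistent with the actual change.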

Trigger postgres update table mysql

I have a system made with a MySQL DB and another system made with PostgreSQL. I want to create a trigger in Postgres that inserts rows into MySQL, but I don't know how to do this. Is it possible?
The reason is that I need to synchronize the users of both databases without knowing when a user is created.
You'd have to use mysql_fdw for that.
But I think that it would be a seriously bad idea to do that — if the MySQL database goes down, the trigger will throw an error, and the transaction is undone. Basically, you cannot modify the table any more. Moreover, the latency of the PostgreSQL-MySQL round trip would be added to each transaction.
I think you would be better off with some sort of log table in PostgreSQL where you store the changes. An asynchronous worker can then read the changes and apply them to MySQL.
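A minimal PostgreSQL sketch of that log-table approach (app_users, users_changelog and their columns are assumptions):
-- queue table that an external worker polls and replays against MySQL
CREATE TABLE users_changelog (
    id         bigserial PRIMARY KEY,
    username   text NOT NULL,
    operation  text NOT NULL,                      -- 'INSERT', 'UPDATE', ...
    created_at timestamptz NOT NULL DEFAULT now(),
    processed  boolean NOT NULL DEFAULT false
);

CREATE OR REPLACE FUNCTION log_user_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO users_changelog (username, operation) VALUES (NEW.username, TG_OP);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER app_users_log
AFTER INSERT OR UPDATE ON app_users
FOR EACH ROW EXECUTE PROCEDURE log_user_change();
The worker then reads unprocessed rows, applies them to MySQL, and marks them processed, so a MySQL outage never blocks a PostgreSQL transaction.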
One more thought: You are not considering replicating database users, right? Because you cannot have triggers on system tables.

MySQL history or transaction log table updated by triggers

My question is about a database history or transaction log table which is currently updated by a MySQL procedure. The procedure is called by a MySQL trigger every time we keep a history of the appropriate table during insert, update or delete actions. Since we have lots of tables, we need to create a separate trigger for each of them, e.g. for the "accounts" table we need to create "accounts_insert", "accounts_update" and "accounts_delete" triggers.
The problem is that every time we alter the "accounts" table, we have to modify the appropriate triggers as well.
Is there any way to avoid that manual work? Would it be better to implement it in the application layer/code?
There are no 'global' triggers if that's what you're thinking about.
Application side logging is one possible solution. You'll want to do this within transactions whenever possible.
Other possible approaches:
Create a script that will update your triggers for you. This can be fairly easy if your triggers are generally similar to each other; the information_schema database can be helpful here (see the sketch after this list).
Parse the general query log (careful: enabling this log can have a large negative impact on the server's performance).
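For the script approach, a hedged starting point could be a query like this against information_schema (the schema and table names are placeholders); the script would loop over the result and regenerate each table's trigger DDL:
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'your_database'
  AND table_name IN ('accounts', 'orders')
ORDER BY table_name, ordinal_position;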

What to consider if using triggers on tables in a sql-server merge replication

I have been running a SQL Server 2000 merge replication across three locations for some years. Triggers do a lot of work in this database, and I have had no troubles.
Now, migrating this database to a brand new SQL Server 2008, I have some issues with the triggers: they are firing even when the merge agent does its work.
Is there anybody who has experience with this kind of thing on SQL Server 2008?
Can anybody confirm this different behaviour compared to SQL 2000?
Peace
Ice
Give this a read: Controlling Constraints, Identities, and Triggers with NOT FOR REPLICATION
In most cases the default settings are appropriate, but they can be changed if an application requires different behavior. The main area to consider is triggers. For example, if you define an insert trigger with the NOT FOR REPLICATION option set, all user inserts fire the trigger, but inserts from replication agents do not. Consider a trigger that inserts data into a tracking table: when the user inserts the row originally, it is appropriate for the trigger to fire and enter a row into the tracking table, but the trigger should not fire when that data is replicated to the Subscriber, because it would result in an unnecessary row being inserted in the tracking table.
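A hedged T-SQL sketch of such a trigger (the orders and orders_tracking tables are invented for the example):
CREATE TRIGGER trg_orders_tracking
ON dbo.orders
AFTER INSERT
NOT FOR REPLICATION   -- inserts performed by replication agents will not fire this trigger
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.orders_tracking (order_id, tracked_at)
    SELECT id, GETDATE() FROM inserted;
END;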

What is the best way to update (or replace) an entire database table on a live machine?

I'm being given a data source weekly that I'm going to parse and put into a database. The data will not change much from week to week, but I should be updating the database on a regular basis. Besides this weekly update, the data is static.
For now rebuilding the entire database isn't a problem, but eventually this database will be live and people could be querying the database while I'm rebuilding it. The amount of data isn't small (couple hundred megabytes), so it won't load that instantaneously, and personally I want a bit more of a foolproof system than "I hope no one queries while the database is in disarray."
I've thought of a few different ways of solving this problem, and was wondering what the best method would be. Here's my ideas so far:
Instead of replacing entire tables, query for the difference between my current database and what I want to place in the database. This seems like it could be an unnecessary amount of work, though.
Creating dummy data tables, then doing a table rename (or having the server code point towards the new data tables).
Just telling users that the site is going through maintenance and put the system offline for a few minutes. (This is not preferable for obvious reasons, but if it's far and away the best answer I'm willing to accept that.)
Thoughts?
I can't speak for MySQL, but PostgreSQL has transactional DDL. This is a wonderful feature, and means that your second option, loading new data into a dummy table and then executing a table rename, should work great. If you want to replace the table foo with foo_new, you only have to load the new data into foo_new and run a script to do the rename. This script should execute in its own transaction, so if something about the rename goes bad, both foo and foo_new will be left untouched when it rolls back.
The main problem with that approach is that it can get a little messy to handle foreign keys from other tables that key on foo. But at least you're guaranteed that your data will remain consistent.
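A minimal sketch of that swap, using the foo/foo_new names from above:
BEGIN;
ALTER TABLE foo RENAME TO foo_old;   -- keep the previous version around, just in case
ALTER TABLE foo_new RENAME TO foo;
COMMIT;                              -- if anything fails before this point, both tables are left untouched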
A better approach in the long term, I think, is just to perform the updates on the data directly (your first option). Once again, you can stick all the updating in a single transaction, so you're guaranteed all-or-nothing semantics. Even better would be online updates, just updating the data directly as new information becomes available. This may not be an option for you if you need the results of someone else's batch job, but if you can do it, it's the best option.
BEGIN;
DELETE FROM my_table;                      -- wipe the old contents (my_table is a placeholder name)
INSERT INTO my_table SELECT * FROM ...;    -- load the new data from wherever you staged it
COMMIT;
Users will see the changeover instantly when you commit. Any queries started before the commit will run on the old data; anything started afterwards will run on the new data. The database will actually clean up the old row versions once the last user is done with them. Because everything is "static" (you're the only one who ever changes it, and only once a week), you don't have to worry about lock issues or timeouts. For MySQL, this depends on InnoDB. PostgreSQL does it, and SQL Server calls it "snapshotting"; I can't remember the details off the top of my head since I rarely use it.
If you Google "transaction isolation" + the name of whatever database you're using, you'll find appropriate information.
We solved this problem by using PostgreSQL's table inheritance/constraints mechanism.
You create a trigger that auto-creates sub-tables partitioned based on a date field.
This article was the source I used.
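A rough, hedged sketch of that pattern (all names are invented, and a real setup needs more care with indexes and concurrent table creation):
CREATE TABLE measurements (logdate date NOT NULL, value int);

CREATE OR REPLACE FUNCTION measurements_partition() RETURNS trigger AS $$
DECLARE
    child text := 'measurements_' || to_char(NEW.logdate, 'IYYY_IW');   -- e.g. measurements_2009_32
BEGIN
    -- create the weekly child table on first use; it inherits from the parent
    EXECUTE format('CREATE TABLE IF NOT EXISTS %I () INHERITS (measurements)', child);
    -- route the incoming row into the child table
    EXECUTE format('INSERT INTO %I SELECT ($1).*', child) USING NEW;
    RETURN NULL;   -- keep the row out of the parent table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_route
BEFORE INSERT ON measurements
FOR EACH ROW EXECUTE PROCEDURE measurements_partition();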
Which database server are you using? SQL Server 2005 and above provide an isolation level called "snapshot". It allows you to open a transaction, do all of your updates, and then commit, all while users of the database continue to view the pre-transaction data. Normally, your transaction would lock your tables and block their queries, but snapshot isolation would be perfect in your case.
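A hedged T-SQL sketch (my_db and dbo.weekly_data are placeholder names):
-- one-time setup: allow snapshot isolation in the database
ALTER DATABASE my_db SET ALLOW_SNAPSHOT_ISOLATION ON;

-- writer session: replace the data inside a single transaction
BEGIN TRANSACTION;
DELETE FROM dbo.weekly_data;
-- ... bulk insert the new rows here ...
COMMIT;

-- reader sessions running under snapshot isolation keep seeing the
-- pre-transaction rows until the writer commits
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT COUNT(*) FROM dbo.weekly_data;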
More info here: http://blogs.msdn.com/craigfr/archive/2007/05/16/serializable-vs-snapshot-isolation-level.aspx
But it requires SQL Server, so if you're using something else....
Several database systems (since you didn't specify yours, I'll keep this general) do offer the SQL:2003 Standard statement called MERGE which will basically allow you to
insert new rows into a target table from a source which don't exist there yet
update existing rows in the target table based on new values from the source
optionally even delete rows from the target that don't show up in the import table anymore
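A hedged sketch in SQL Server's dialect (weekly_data and weekly_data_staging are placeholder names):
MERGE INTO weekly_data AS t
USING weekly_data_staging AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.value = s.value
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, value) VALUES (s.id, s.value)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;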
SQL Server 2008 is the first Microsoft offering to have this statement - check out the documentation for more details.
Other database systems will probably have similar implementations - it's a SQL:2003 standard statement, after all.
Marc
Use different table names (mytable_[yyyy]_[wk]) and a view to provide a constant name (mytable). Once a new table is completely imported, update your view so that it uses that table.
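For example (MySQL syntax; the week-specific table name is hypothetical):
-- once mytable_2009_32 is fully imported, repoint the view in one statement
CREATE OR REPLACE VIEW mytable AS
SELECT * FROM mytable_2009_32;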