Put trigger in MySQL database to update Oracle database?

I want to create an insert trigger in MySQL which will automatically insert the record into an Oracle database. I would like to know if there are people who have experience to share on this topic.
Cheers

Invoke a script, as is done in this example, that calls the Oracle code.
Note: you lose support for transactions (there will be no built-in rollback for the Oracle database) when you perform this type of cascading, and you will also likely take a very large performance hit in doing so. The script could turn around and simply call Java code or some other executable that invokes some generic code of yours to insert into Oracle, or it could be a raw query that gets passed arguments from the script.
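As a rough illustration, here is a minimal sketch of such a trigger, assuming the sys_exec() UDF from the lib_mysqludf_sys library is installed; the script path, table, and column names are all hypothetical:

DELIMITER $$

CREATE TRIGGER ai_orders_push_to_oracle AFTER INSERT ON orders
FOR EACH ROW
BEGIN
  DECLARE result INT;
  -- Hand the new row off to an external script that performs the Oracle insert.
  -- sys_exec() comes from lib_mysqludf_sys; everything else here is made up.
  SET result = sys_exec(CONCAT('/usr/local/bin/push_to_oracle.sh ',
                               NEW.id, ' ', QUOTE(NEW.customer_name)));
END $$

DELIMITER ;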
This is almost certainly a bad idea because of the odd side-effect behavior, but it's one that can be implemented. I think that you would be much better off having the code to do this against two different DataSources (in Java/.NET speak) rather than have a hidden script in a MySQL trigger that screams unmaintainable, as well as hidden failure for future developers.

How to Get Rid of UNUSED Queries in MS ACCESS

I have reviewed the previous questions and haven't found the answer to the following question:
Is there a database tool available in MS Access to run and identify the queries that are NOT being used as part of my database? We have lots of queries that are no longer used, and I need to clean the database and get rid of these queries.
Access does have a built-in “dependency” feature. The result is a VERY nice tree view of those dependencies, and you can even launch such objects from that tree view to “navigate” the application, so to speak.
The option is found under Database Tools and is appropriately called Object Dependencies.
While you don't want auto-correct actually renaming things, this feature does require Track Name AutoCorrect to be turned on. If this is a large application, a significant delay will occur on first run; after that, the results can be viewed instantly. Most developers turn off Track Name AutoCorrect (often referred to as "track auto destroy"), but it must be enabled for this feature to work.
And, unfortunately, you have to go query by query, but at least it will display the dependencies for each query (forms, reports, other queries). However, VBA code that creates SQL on the fly and uses such queries? Well, it will not catch that case. So, at the end of the day, a query you delete may well still be used in code, and if that code creates SQL on the fly (as a LOT of VBA code does), then you can never really be sure that the query is not used some place in the application.
So, the dependency checker can easily determine if another query, another form/subform, or a report uses a given query. The dependency checker does a rather nice job there.
However, VBA code is a different matter; what VBA code runs and does cannot be determined until the code is actually run. In effect, a dependency checker would have to actually run the VBA code, and even then the code will sometimes choose among several queries to run or use, and that choice is made at run time. I suppose you could do a quick "search", since a search is global for VBA (all code in modules, reports, and forms can be searched). This would find most uses of the query, but not all, since as noted VBA code often can and does create SQL on the fly.
I have a vague recollection that part of Access Analyzer from FMS Inc has this functionality built in.
Failing that, I can see 2 options that may work.
Firstly, you could use the built-in Database Documenter. This creates a report that you can export to Excel. You would then need to import this into the database and write some code that loops over the queries to see if they appear in this table.
Alternatively, you could use the undocumented SaveAsText feature to loop over all Forms/Reports/Macros/Modules in your database, as well as looping over the QueryDefs and saving their SQL into a text file. You would then write some VBA to loop over the queries, open each of the text files, and check for the existence of the query.
Either way, rather than just deleting any unused queries, rename them to something like "old_Query" and leave them in the database for a month or so, just in case!

Vertical deployment pros and cons in JBoss/MySQL

I'm working on a project which has a single WAR file for each application, it is like an app store.
So 10 apps have 10 different WAR files deployed. Usually there's a DAO and a BL as separate jars inside the WAR file, which exposes web services.
However, there are a few cases where we refer to a library, usually the DAO/BL, from another WAR file.
I'm not sure if this is the right approach. We seem to face difficulties when deploying, such as figuring out which versions of the deployed JARs are in use. Another approach would be not to talk to another app's JAR (DAO) but to talk to the deployed web service from the client if need be.
The DAO's have a mysql-ds.xml for a database in MySQL.
We could have one single data source for all the features, but I'm not sure if that helps.
As you can tell from the previous paragraph, I'm a bit confused, and also concerned that if we have 100 different apps, maintaining all 100 of them with their dependencies would be really hard. Also, how can connection pooling be used effectively from JBoss? Would it be better to have a single database for all apps or multiple databases, in terms of maintenance? Our stack is
JBoss
Apache CXF
Dozer
DAO (Hibernate)
Entity (POJO)
Hibernate
MySQL
And Maven as the build tool. I know my questions are a bit general; please let me know if you need some more info.
Complex infrastructures like this are always difficult to manage.
There are three main approaches you can take, and each has pros and cons:
Web services to encapsulate all business layer/data access into an API. This minimizes the proliferation of versions of jars in various apps, but forces you to be more rigorous about API changes.
Creation of libraries that can be shared amongst multiple projects. I'm not clear on what you mean by referring to a library from another WAR file; perhaps you mean that you're including the relevant jars in your newly deployed WAR. This does lead to the version-compatibility concerns you mention, but can make modifying existing APIs more flexible, in that you don't have to immediately modify all existing apps.
Encapsulate all data logic in the database. In my experience, this is the most problematic, as it separates the dev from knowledge of how the business logic is working, and can be the most fragile - one stored procedure change can be harder to detect when it starts breaking other apps than the other approaches.
In my experience, it comes down to having more established processes and agreements among the team about how changes will be made. You really have to look at your business layer/data access layer as APIs and be very conservative about making changes. If you aren't already using a continuous build system, I'd highly recommend it, as it can help you catch changes that break existing applications early on and allow you to keep things in sync.
It's perfectly fine to have all your applications use the same database.
However, you run the risk that different apps use the database in different ways.
For this reason I would recommend that you put as much logic as possible in MySQL.
I cannot tell you how to do this, because I don't know what your apps do or need, but I can give you some general ideas and pointers.
General ideas and pointers
You can use stored procedures/functions to do stuff
If your apps use a stored procedure to make stuff happen in the single database, all apps will work in the same manner.
Use stored procedures to make things happen (e.g. use a stored procedure to book a transaction), and use stored functions to do calculations on fields, e.g.:
price_per_sales_line = price * quantity * (1 + tax%) * (1 - discount%)
If you put this logic in a MySQL function, then you don't need to worry about debugging this in app A or B, because all apps will work the same way.
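A minimal sketch of what such a function might look like (the function name, parameters, and DECIMAL precision are assumptions):

DELIMITER $$

CREATE FUNCTION price_per_sales_line(
    price DECIMAL(10,2),
    quantity INT,
    tax_pct DECIMAL(5,2),
    discount_pct DECIMAL(5,2)
) RETURNS DECIMAL(10,2)
DETERMINISTIC
BEGIN
  -- One shared definition of the calculation for every app.
  RETURN price * quantity * (1 + tax_pct / 100) * (1 - discount_pct / 100);
END $$

DELIMITER ;

Every app then computes line totals the same way, e.g. SELECT price_per_sales_line(9.99, 3, 21, 5);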
And my personal favorite
Use triggers to make sure stuff happens properly.
E.g. if you have a transaction where you need to add a new item for sale, you can put this in a stored proc, but you can also do something like:
Pseudo code

CREATE TABLE blackhole_new_sales_item (
  name varchar(45) NOT NULL,
  price decimal(10,2) NOT NULL,
  category_id integer NOT NULL
) ENGINE = BLACKHOLE;

DELIMITER $$

CREATE TRIGGER ai_bh_new_sales_item AFTER INSERT ON blackhole_new_sales_item
FOR EACH ROW
BEGIN
  /* all statements inside a trigger happen in a single transaction */
  DECLARE new_item_id INTEGER;
  INSERT IGNORE INTO items (name) VALUES (NEW.name);
  SELECT id INTO new_item_id FROM items WHERE name = NEW.name;
  INSERT IGNORE INTO item_categories (item_id, cat_id)
    VALUES (new_item_id, NEW.category_id);
  INSERT INTO price (item_id, price, valid_from, valid_until)
    VALUES (new_item_id, NEW.price, NOW(), '2038-12-31');
END $$

DELIMITER ;
In your apps you can just do a single:
INSERT INTO blackhole_new_sales_item VALUES ('test', 0.99, 2);
And the trigger will take care of everything and if you change the structure of your database, you need only change the inside of the trigger and all your apps will work without change.
If you add extra fields to the blackhole table, you need to only change the single call in each app.
You can even create an extra blackhole table and create a separate trigger for that, and fill your old-blackhole-table-trigger with fall-back code to support the older apps.
So this approach gives you a single point to put all your DB logic into so all apps will behave in the same way, whilst still being flexible enough to support upgrades.
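For example, if a later version of the schema adds a tax percentage to each sales item, a second blackhole table can serve the newer apps while a fall-back version of the old trigger keeps the older apps working. The v2 names and the default tax rate below are made up, and the v2 table would get its own trigger containing the full insert logic:

CREATE TABLE blackhole_new_sales_item_v2 (
  name varchar(45) NOT NULL,
  price decimal(10,2) NOT NULL,
  category_id integer NOT NULL,
  tax_pct decimal(5,2) NOT NULL
) ENGINE = BLACKHOLE;

DELIMITER $$

/* Old apps keep inserting into the original blackhole table;
   this fall-back trigger forwards their rows with a default tax rate. */
DROP TRIGGER IF EXISTS ai_bh_new_sales_item $$

CREATE TRIGGER ai_bh_new_sales_item AFTER INSERT ON blackhole_new_sales_item
FOR EACH ROW
BEGIN
  INSERT INTO blackhole_new_sales_item_v2
    VALUES (NEW.name, NEW.price, NEW.category_id, 21.00);
END $$

DELIMITER ;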
Hope this helps.

Django code or MySQL triggers

I'm making a web service with Django that uses a MySQL database. Clients interface with our database through URLs, handled by Django. Right now I'm trying to create a behavior that automatically does some checking/logging whenever a certain table is modified, which naturally suggests MySQL triggers. However, I can also do this in Django, in the request handler that does the table modification. I don't think Django has trigger support yet, so I'm not sure which is better: doing it through Django code or through a MySQL trigger.
Anybody with knowledge on the performance of these options care to shed some light? Thanks in advance!
There are a lot of ways to solve the problem you've described:
Application Logic
View-specific logic -- If the behavior is specific to a single view, then put the changes in the view.
Model-specific logic -- If the behavior is specific to a single model, then override the save() method for the model.
Middleware Logic -- If the behavior relates to multiple models OR needs to be wrapped around an existing application, you can use Django's pre-save/post-save signals to add additional behaviors without changing the application itself.
Database Stored Procedures -- Normally a possibility, but Django's ORM doesn't use them. Not portable across databases.
Database Triggers -- Not portable from one database to another (or even one version of a database to the next), but allow you to control shared behavior across multiple (possibly non-Django) applications.
Personally, I prefer either overriding the save() method or using a Django signal. Using view-specific logic can catch you out in large applications with multiple views of the same model(s).
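If you do go the trigger route, a minimal sketch of a change-logging trigger might look like this (the orders table, its columns, and the audit table are all hypothetical):

CREATE TABLE orders_audit (
  order_id   INT NOT NULL,
  old_status VARCHAR(20),
  new_status VARCHAR(20),
  changed_at DATETIME NOT NULL
);

DELIMITER $$

CREATE TRIGGER au_orders AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
  -- Log every status change, regardless of which application made it.
  INSERT INTO orders_audit (order_id, old_status, new_status, changed_at)
  VALUES (OLD.id, OLD.status, NEW.status, NOW());
END $$

DELIMITER ;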
What you're describing sounds like "change data capture" to me.
I think the trade-offs might go like this:
Django pros: Middle tier code can be shared by multiple apps; portable if database changes
Django cons: Logically not part of the business transaction
MySQL pros: Natural to do it in a database
MySQL cons: Triggers are very database-specific; if you change vendors you have to rewrite
This might be helpful.

Is it possible to get the user who last edited a stored proc, function, table or view in SQL Server?

I'm not even sure SQL Server stores this kind of information, but, is it possible to get the username of the person who last modified a particular stored procedure, function, table or view?
Nothing critical, just wondering. Thanks!
If you are using SQL Server 2008, you can use DDL triggers, which fire on schema changes. You can then track, based on the authenticated user, who made the change.
These triggers are not actually new to SQL Server 2008; DDL triggers have been available since SQL Server 2005.
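A minimal sketch of such a DDL trigger (the audit table and trigger names are made up):

CREATE TABLE ddl_audit (
  event_time DATETIME NOT NULL DEFAULT GETDATE(),
  login_name SYSNAME NOT NULL,
  event_data XML
);
GO

CREATE TRIGGER trg_ddl_audit ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, CREATE_TABLE, ALTER_TABLE,
    CREATE_VIEW, ALTER_VIEW, CREATE_FUNCTION, ALTER_FUNCTION
AS
BEGIN
  -- EVENTDATA() returns the details of the DDL event as XML.
  INSERT INTO ddl_audit (login_name, event_data)
  VALUES (ORIGINAL_LOGIN(), EVENTDATA());
END
GO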
Having said this, ideally you should have your database schema under source control, using a tool like Visual Studio Database Professional. Then you'd have a complete history of who did what and when.
It doesn't store this information out of the box.
You can use SQL Trace and Event Notifications (see the corresponding MSDN article) to log this kind of information yourself.
I have no experience with these technologies though ...
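For reference, the moving parts look roughly like this; it's a sketch under the assumption that Service Broker is enabled, and the queue/service/notification names are made up:

CREATE QUEUE ddl_audit_queue;
GO

CREATE SERVICE DDLAuditService ON QUEUE ddl_audit_queue
  ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO

CREATE EVENT NOTIFICATION ddl_audit_notification
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
TO SERVICE 'DDLAuditService', 'current database';
GO

-- DDL events now arrive as XML messages on ddl_audit_queue, e.g.:
-- RECEIVE TOP (1) CAST(message_body AS XML) FROM ddl_audit_queue;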
Definitely put DDL triggers in place. Even if you don't end up using them, or if you end up putting a decent source control system in place, still have the DDL triggers in place so that you can be sure about what's going on.

Executing shell command from MySQL

I know what I'm looking for is probably a security hole, but since I managed to do it in Oracle and SQL Server, I'll give it a shot:
I'm looking for a way to execute a shell command from a SQL script on MySQL. It is possible to create and use a new stored procedure if necessary.
Notice: I'm not looking for the SYSTEM command which the mysql command line tool offers. Instead I'm looking for something like this:
BEGIN
  IF COND1 THEN
    EXEC_OS cmd1;
  ELSE
    EXEC_OS cmd2;
  END IF;
END;
where EXEC_OS is the method to invoke my code.
This isn't so much an answer to the question as it is justification for this sort of functionality - hence negating those who would say "you should do something else" or "why would you want to".
I have a database on which I am trying to keep strict rules; I don't want orphans anywhere. Referential integrity checks help me with this at the table level, but I have to keep some of the data as files within the filesystem (this is the result of a direct order from my boss not to store any binary data in the database itself).
The obvious solution here is to have a trigger which fires on deletion of a record, which then automatically deletes the associated external file.
Now, I do realise that UDFs may provide a solution, but that seems like a lot of C/C++ work simply to delete a file. Surely the database permissions themselves would provide at least some security from would-be assailants.
Now, I do realise that I could write a shell script or some such that could delete the table record and then go and delete the related file, but again, that's outside the domain of the database itself. As an old instructor once told me, "the rules of the business should be reflected in the rules of the database". As one can clearly see, I cannot enforce this using MySQL.
You might want to consider writing your scripts in a more featureful scripting language, like Perl, Python, PHP, or Ruby. All of these languages have libraries to run SQL queries.
There is no built-in method in the stored procedure language for running shell commands. This is considered a bad idea, not only because it's a security hole, but because any effects of shell commands do not obey transaction isolation or rollback, as do the effects of any SQL operations you do in the stored procedure:
START TRANSACTION;
CALL MyProcedure();
ROLLBACK;
If MyProcedure did anything like create or edit a file, or send an email, etc., those operations would not roll back.
I would recommend doing your SQL work in the stored procedure, and do other work in the application that calls the stored procedure.
see do_system() in http://www.databasesecurity.com/mysql/HackproofingMySQL.pdf
According to this post at forums.mysql.com, the solution is to use MySQL Proxy.