I am using Kohana 3 and want to log the MySQL queries executed by an application. The goal is to identify the INSERT, UPDATE, and DELETE queries executed during a process and store them in another MySQL table with a date-time stamp for future reference.
Can anybody tell me how I can achieve this?
An alternative is to enable profiling for the database module, which will log the queries made to a file.
This will log ALL queries, not just the last one ;)
It shouldn't be too hard to parse the file, or to extend the profiling/logging/caching classes to save it to a database.
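For what it's worth, profiling in Kohana 3 is switched on per connection in the database config. A minimal sketch, assuming the stock config layout (the credentials are placeholders, and the 'type' value varies between 3.x releases):

    // application/config/database.php
    return array(
        'default' => array(
            'type'       => 'mysql',
            'connection' => array(
                'hostname' => 'localhost',
                'database' => 'mydb',
                'username' => 'app',
                'password' => 'secret',
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'profiling'    => TRUE, // every query is recorded by the Profiler class
        ),
    );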
Sorry, because of the Kohana tag I approached the problem from the wrong angle. You want the MySQL server to log the commands directly, so you get ALL of the commands, not just the last one.
See the MySQL server docs on logging:
http://dev.mysql.com/doc/refman/5.0/en/server-logs.html
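For reference, on MySQL 5.1 and later the general query log can be switched on at runtime (on 5.0 it has to be enabled at startup with the --log option), and routing it to a table makes it easy to filter for data-modifying statements:

    -- Enable the general query log and route it to the mysql.general_log table
    SET GLOBAL general_log = 'ON';
    SET GLOBAL log_output = 'TABLE';

    -- Later: pull out the INSERT/UPDATE/DELETE statements with their timestamps
    SELECT event_time, argument
    FROM mysql.general_log
    WHERE argument LIKE 'INSERT%'
       OR argument LIKE 'UPDATE%'
       OR argument LIKE 'DELETE%';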
I did this using the controller's after() method. After each controller action executes, after() runs; there I wrote the logic to capture the last executed query and store it in my DB for future reference.
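A rough sketch of that idea in a base controller; query_log is a made-up table with query and created_at columns, and Kohana's Database object exposes the last executed SQL via its last_query property:

    public function after()
    {
        $last = Database::instance()->last_query;

        // Only record data-modifying statements, per the original requirement.
        if (preg_match('/^\s*(INSERT|UPDATE|DELETE)/i', $last))
        {
            DB::insert('query_log', array('query', 'created_at'))
                ->values(array($last, date('Y-m-d H:i:s')))
                ->execute();
        }

        parent::after();
    }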
I'm currently writing the main assignment of my final semester (IT Engineering with Networking) and am working with MySQL.
My question is: Is it possible to execute a Shell script/Command from within a MySQL Trigger/Procedure? Or can it be done from a CASE statement?
I've been searching around the internet and read that it's inadvisable to do it.
But I need a script to check a table in a database for alerts and then warn people if there are any.
If there is anyway else this could be done, then I'm open for ideas.
Any input will be appreciated :)
You can read this blog for triggering a shell script from MySQL:
https://patternbuffer.wordpress.com/2012/09/14/triggering-shell-script-from-mysql/. To summarize, two options are presented:
Polling. To improve performance, a trigger could record the change in another table which you poll instead (see the sketch after this list).
MySQL UDF. Write your own plugin, and beware of security implications!
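A minimal sketch of the polling option; the alerts table, queue table, and column names are all made up:

    -- Queue table that an external cron job will poll
    CREATE TABLE alert_queue (
        id         INT AUTO_INCREMENT PRIMARY KEY,
        alert_id   INT NOT NULL,
        created_at DATETIME NOT NULL,
        processed  TINYINT(1) NOT NULL DEFAULT 0
    );

    -- Record every new alert in the queue instead of shelling out
    DELIMITER //
    CREATE TRIGGER alerts_after_insert
    AFTER INSERT ON alerts
    FOR EACH ROW
    BEGIN
        INSERT INTO alert_queue (alert_id, created_at)
        VALUES (NEW.id, NOW());
    END//
    DELIMITER ;

The cron script then selects the rows with processed = 0, sends the warnings, and flags them as processed.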
For your requirement, I think you can just write a Python/PHP/Perl script that connects to your MySQL DB, queries the alert table for any alerts, and accordingly shows a warning message on screen or sends an email/SMS warning.
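A minimal Python sketch of that idea, assuming a table named alerts with an acknowledged flag; credentials and addresses are placeholders:

    import smtplib
    from email.message import EmailMessage

    import pymysql

    # Fetch unacknowledged alerts
    conn = pymysql.connect(host="localhost", user="app", password="secret", database="mydb")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, message FROM alerts WHERE acknowledged = 0")
            rows = cur.fetchall()
    finally:
        conn.close()

    # Warn by email if there are any
    if rows:
        msg = EmailMessage()
        msg["Subject"] = f"{len(rows)} new alert(s)"
        msg["From"] = "alerts@example.com"
        msg["To"] = "oncall@example.com"
        msg.set_content("\n".join(f"#{alert_id}: {text}" for alert_id, text in rows))
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

Run it from cron at whatever interval counts as warning people in time.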
I am using MySQL and have a very large result set (15,000+ rows). This takes... well... time. But I could start to process the first rows right away. Can I set up a stream somehow with Sequelize? If so, how?
There is a related GitHub issue for this. It doesn't look like it will be implemented in the near future.
For now I am monitoring my console, saving the query that Sequelize generates, and then refactoring my code to run the query with node-mysql.
connection.query(mySavedQuery).stream().pipe(...)
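Spelled out, node-mysql's row streaming looks something like this; the connection details and processRow are placeholders:

    var mysql = require('mysql');
    var connection = mysql.createConnection({
        host: 'localhost', user: 'app', password: 'secret', database: 'mydb'
    });

    var query = connection.query(mySavedQuery);
    query
        .on('error', function (err) { console.error(err); })
        .on('result', function (row) {
            // Pause the connection while this row is handled so rows
            // don't pile up in memory, then resume for the next one.
            connection.pause();
            processRow(row, function () {
                connection.resume();
            });
        })
        .on('end', function () { connection.end(); });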
You can also use the node-sequelize-stream library.
I'm planning to debug a Joomla site by logging each query and its execution time to a database table. I have more than 10 models which run different queries. I'm pretty sure that all the queries go through a single place/class before executing, but I have no idea where or what that place/class is.
My question is: is there a central place I can edit, even a core file, to log every SQL query and its execution time?
How can I get it done?
Have you considered using Joomla's built-in System Debug?
Rather than trying to do this programmatically with brute force, it seems it would be far easier and less intrusive to use a proper SQL benchmarking tool such as the MySQL Benchmark Suite. Another possible non-brute-force option might be Toad World.
If you wanted to stay away from third-party tools, a slow query log might be the place to start.
If you really want to do it via Joomla (a hack):
Go to Joomla's database driver; for 3.3 that is libraries/joomla/database/driver.php
Remove the setDebug() function (in case some component sets it to 0)
Near the top of the class, change the default of the $debug property from false to true
Now every query gets logged together with profiling information.
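With debug on, the driver keeps each executed query in memory, so you can read them back at the end of a request. A sketch for Joomla 3.x (getLog() is on JDatabaseDriver; where you ship the output is up to you):

    $db = JFactory::getDbo();

    // Dump every query run so far into Joomla's log
    foreach ($db->getLog() as $sql)
    {
        JLog::add($sql, JLog::DEBUG, 'queries');
    }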
From time to time we receive the following database connection error from PetaPoco in an ASP.NET MVC 4 app:
There is already an open DataReader associated with this Command which must be closed first.
Source: System.Data
at System.Data.SqlClient.SqlInternalConnectionTds.ValidateConnectionForExecute(SqlCommand command)...
It seems like this happens as we get more load to the system.
Some suggestions we found as we researched were:
Do a PetaPoco Fetch instead of a Query
Add MultipleActiveResultSets=True to our connection string
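For reference, the second suggestion is just a connection-string change in web.config; the server and database names below are placeholders:

    <connectionStrings>
      <add name="defaultConnection"
           connectionString="Server=.;Database=MyAppDb;Integrated Security=True;MultipleActiveResultSets=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>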
Can someone with PetaPoco experience verify that these suggestions would help?
Any other suggestions to avoid the Exception would be appreciated.
Update 06/10/2013: We changed the Query to a Fetch and have seen some improvement; however, we still sometimes see the error.
Does anyone know what drawbacks changing the connection string to MultipleActiveResultSets=True might have?
Be sure that you are creating the PetaPoco DB per request (not as a static).
See: how to create a DAL using petapoco
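A minimal sketch of the per-request pattern in a base controller; the connection string name is a placeholder, and an IoC container with a per-request lifetime achieves the same thing:

    using System.Web.Mvc;

    public abstract class BaseController : Controller
    {
        protected PetaPoco.Database Db { get; private set; }

        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // One Database (and thus one connection) per request, never shared.
            Db = new PetaPoco.Database("defaultConnection");
            base.OnActionExecuting(filterContext);
        }

        protected override void OnActionExecuted(ActionExecutedContext filterContext)
        {
            base.OnActionExecuted(filterContext);
            Db.Dispose(); // returns the underlying connection to the pool
        }
    }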
Update 06/10/2013: All Fetch methods call the Query method (see the source), so changing one for the other has no effect on the error.
The drawbacks are listed on MSDN and include warnings about:
Statement Interleaving
Session Cache
Thread Safety
Connection Pooling
Parallel Execution
I have tried it personally and didn't hit any drawbacks (it depends on your app), but it didn't get rid of the errors either.
The only thing you can do to remove the error is to trace your request code, find where a statement is executed while another is still open, and then use a different DB connection in that function.
Also, you can catch the error and then create a new db connection and try with that new one.
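A sketch of that catch-and-retry idea; User and the connection string name are placeholders:

    using System;
    using System.Collections.Generic;

    public class UserRepository
    {
        // User is your POCO mapped to the Users table
        public List<User> GetUsers(PetaPoco.Database db)
        {
            try
            {
                return db.Fetch<User>("SELECT * FROM Users");
            }
            catch (InvalidOperationException) // "There is already an open DataReader..."
            {
                // Fall back to a fresh connection for this one call.
                using (var retryDb = new PetaPoco.Database("defaultConnection"))
                {
                    return retryDb.Fetch<User>("SELECT * FROM Users");
                }
            }
        }
    }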
Sorry, but there's no magic bullet here.
We have a single SQL Log for storing errors from multiple applications. We have disabled the elmah.axd page for each one of our applications and would like to have a new application that specifically displays errors from all of the apps that report errors to the common SQL log.
As of now, even though the new application uses the common SQL log, it only displays errors from the current application. Has anyone done this before? What in the ELMAH code might need to be tweaked?
I assume by "SQL Log" you mean MSSQL Server... If so, probably the easiest way of accomplishing what you want would be to edit the stored procedures created in the SQL Server database that holds your errors.
To get the error list, the ELMAH dll calls the ELMAH_GetErrorsXML proc with the application name as a parameter, then the proc filters the return with a WHERE [Application] = @Application clause.
Just remove the WHERE clause from the ELMAH_GetErrorsXML proc, and all errors should be returned regardless of application.
To get a single error record properly, you'll have to do the same with the ELMAH_GetErrorXML proc, as it also filters by application.
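In sketch form, the change amounts to the following; the real procs are paged and return XML, so treat this as illustrative and edit the procs that actually exist in your database:

    -- What ELMAH_GetErrorsXML effectively runs (simplified):
    SELECT [ErrorId], [Application], [Host], [Type], [Source],
           [Message], [User], [StatusCode], [TimeUtc]
    FROM [ELMAH_Error]
    -- WHERE [Application] = @Application   -- drop this filter to see every app
    ORDER BY [TimeUtc] DESC, [Sequence] DESC;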
This, of course, will affect any application retrieving errors out of this particular database, but I assume in your case you'll only ever have the one, so this should be good.
CAVEAT: I have not tried this, so I can't guarantee the results...
It's not a problem to override the default Elmah handler factory so that it filters Elmah logs by application. I wrote a sample app that shows how to do it with MySQL: http://diagnettoolkit.codeplex.com/releases/view/103931. You can also check the post on my blog where I explain how it works.
Yes, it works easily. However, you can't see the app name in Elmah/Default.aspx. I haven't found whether it is configurable; it would just mean displaying one more column.