I am writing a PyQt6 QSqlTableModel application. I'm using manual submits and submitAll(). I want to capture the SQL statements that were successfully executed. The executedQuery() method reports only the last query, but several may have been executed by submitAll(). There doesn't seem to be a way to step through the executed queries one at a time.
I can use brute force by tailing the database system log files, but I'd rather use a PyQt6 method.
Is there any way I can do this?
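One workaround, since Qt doesn't expose the statements it builds internally: submitAll() calls the protected virtuals updateRowInTable(), insertRowInTable() and deleteRowInTable() once per changed row, so overriding them in a subclass gives you one hook per executed statement. A minimal sketch (the logged text is reconstructed from the record values, an approximation rather than the exact SQL Qt ran):

    from PyQt6.QtSql import QSqlTableModel

    class LoggingTableModel(QSqlTableModel):
        """Logs one entry per row-level statement issued by submitAll()."""

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.change_log = []

        def _describe(self, op, record):
            # Rebuild an approximate statement from the record; the exact
            # SQL string Qt generated is not accessible from here.
            fields = {record.fieldName(i): record.value(i)
                      for i in range(record.count())}
            return f"{op} {self.tableName()} {fields}"

        def updateRowInTable(self, row, values):
            ok = super().updateRowInTable(row, values)
            if ok:
                self.change_log.append(self._describe("UPDATE", values))
            return ok

        def insertRowInTable(self, values):
            ok = super().insertRowInTable(values)
            if ok:
                self.change_log.append(self._describe("INSERT", values))
            return ok

        def deleteRowInTable(self, row):
            ok = super().deleteRowInTable(row)
            if ok:
                # record(row) still holds the cached values at this point
                self.change_log.append(self._describe("DELETE", self.record(row)))
            return ok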
I am using SnappyData and SQL to run some analysis; however, the job is slow and involves join operations on very large input data.
I am considering partitioning the input data first, then running the jobs on different partitions at the same time to speed up the process. But in the embedded mode I am using, my code gets the SnappySession passed in, and I can use bin/snappy-sql to query the tables. So I assume all SnappyData jobs share the same SnappySession (or the same table namespace, like a single database in PostgreSQL, in my understanding).
So I assume that if I submit my job using the same jar with different input arguments, the table namespace would be the same for the different jobs, thus causing errors.
So my question is: is it possible to have multiple SnappySessions (or multiple namespaces, like database names) that run a series of operations independently, preferably within one SnappyData job, to avoid managing many jobs at the same time?
I am not sure I follow the question. Maybe this will help:
When queries are submitted using snappy-sql, the shell uses JDBC to connect and run the query. Internally, SnappyData will start a job and run concurrent tasks on each partition, depending on the query. And yes, this SQL session is internally associated with a unique SnappySession (Spark session).
Or maybe you are trying to partition the data across many tables and start processing these tables independently but in parallel?
My situation:
MySQL 5.5, but possible to migrate to 5.7
Legacy app executes a single MySQL query to get some data (1-10 rows, 20 columns)
Query can be modified via application configuration
Query is a very complex SELECT with multiple JOINs and conditions; it's about 20KB of code
Query is well profiled, with index usage fine-tuned; I spent much time on this and see no room for improvement without splitting it into smaller queries
With a traditional app I would split this large query into several smaller ones and use caching to avoid many JOINs, but my legacy app does not allow that. I can use only one query to return results
My plan to improve performance is:
Reduce parsing time. Parsing 20KB of SQL on every request, when only parameter values change, seems inefficient
I'd like to turn this query into a prepared statement and only fill the placeholders with data
The query would be parsed once and executed multiple times, which should be much faster
Problems/questions:
First of all: does the above solution make sense?
MySQL prepared statements seem to be session-scoped. I can't use them, since I cannot execute any additional code ("init code") to create the statements for each session
The other solution I see is a prepared statement generated inside a procedure or function. But the examples I've seen rely on dynamically generating the query with CONCAT() and executing the prepared statement locally inside the procedure. It seems that such a statement will be re-prepared on every procedure call, so it will not save any processing time
Is there any way to declare a server-wide, rather than session-scoped, prepared statement in MySQL, so that it survives application restarts and server restarts?
If not, is it possible to cache prepared statements declared in functions/procedures?
I think the following will achieve your goal...
Put the monster in a Stored Routine.
Arrange to always execute that Stored Routine from the same connection. (This may involve restructuring your client and/or inserting a "web service" in the middle.)
The logic here is that Stored Routines are compiled once per connection. I don't know whether that includes caching the "prepare", nor whether you should leave the query naked or artificially prepare & execute it.
I suggest you try some timings, plus some profiling. The latter may give you clues about the points I am uncertain of.
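To make the "same connection" part concrete, here is a minimal sketch in Python with mysql-connector. The procedure name get_report and the connection details are hypothetical; the point is simply that one long-lived connection reuses whatever per-session compilation the routine gets:

    import mysql.connector

    # One long-lived connection, so the routine is compiled once for this
    # session and reused on every call (assuming a stored procedure named
    # get_report that wraps the big query).
    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="app")

    def fetch_report(customer_id):
        cur = conn.cursor()
        cur.callproc("get_report", (customer_id,))
        rows = []
        for result in cur.stored_results():
            rows.extend(result.fetchall())
        cur.close()
        return rows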
I will quote some text from another question here:
The PreparedStatement is a slightly more powerful version of a Statement, and should always be at least as quick and easy to handle as a Statement.
The Prepared Statement may be parameterized
Most relational databases handle a JDBC / SQL query in four steps:
Parse the incoming SQL query
Compile the SQL query
Plan/optimize the data acquisition path
Execute the optimized query / acquire and return data
A Statement will always proceed through the four steps above for each SQL query sent to the database. A Prepared Statement pre-executes steps (1) - (3) in the execution process above. Thus, when creating a Prepared Statement some pre-optimization is performed immediately. The effect is to lessen the load on the database engine at execution time.
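To make the prepare-once / execute-many pattern concrete, here is a minimal sketch (in Python with mysql-connector rather than JDBC, with hypothetical table and connection details; the four steps above apply the same way):

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="shop")
    # prepared=True uses the binary protocol: the statement is parsed,
    # compiled and planned on the first execute() only.
    cur = conn.cursor(prepared=True)
    sql = "SELECT name, price FROM products WHERE category = %s"
    for category in ("books", "games", "music"):
        cur.execute(sql, (category,))  # later calls reuse the prepared plan
        print(cur.fetchall())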
Now here is my question:
If I use hundreds or thousands of Statements, will it cause performance problems in the database? (I don't mean that they will perform slower because of having more work to do every time.) Will all those statements be cached in the database, or will they be lost in space as soon as they are executed?
Since there are no restrictions on using prepared statements, you should work carefully with them.
As you said you need hundreds of prepared statements, think twice; maybe you are using them wrong.
The pattern they are meant for is an application doing heavy inserts/updates/selects hundreds or thousands of times a second that differ only in the variables. In the real world that looks like: connecting, creating a session, sending the statement once, then sending bunches of variables to that statement (sketched after this answer).
But if your plan is to create a prepared statement for each single operation, it's better to just use common queries.
On your questions:
Hundreds of statements will not kill MySQL or lead to performance degradation
Prepared statements are stored in memory while the client session is up and running. As soon as you close the session, the prepared statements die.
To be sure you need them:
Your app is able to execute statements fast, so you get the speed benefit of using them
Your query does not have a variable number of arguments; otherwise you can kill your app by creating objects and storing them in memory on every statement
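A minimal sketch of that intended pattern, one prepared statement reused across a batch (table name and connection details are hypothetical):

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="metrics")
    cur = conn.cursor(prepared=True)  # prepared once, executed many times
    sql = "INSERT INTO measurements (sensor_id, value) VALUES (%s, %s)"
    readings = [(1, 20.5), (1, 21.0), (2, 19.8)]
    for sensor_id, value in readings:
        cur.execute(sql, (sensor_id, value))  # only the variables change
    conn.commit()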
Simply said, I have to write an application to synchronise several database tables. Because of the requirements, the changes should be put into a queue (in the form of SQL statements), and here lies the problem: I'm not able to change the existing application that uses the database so that it adds the executed query to the queue itself. Therefore I need to catch all data-changing SQL queries against specific tables (> 20 tables) in the database.
I thought about the following solutions:
Catching the MySQL query directly with triggers, as described in Can a trigger access the query string (the best answer for this case I could find!). But I couldn't get the query that activates the trigger, only the query that I used within it.
Activating the General Query Log. But I've read about its heavy performance cost, so it isn't a viable solution; it would also log the tables I don't need (> 120 tables) and the many simple queries run on the database.
Using a history table filled by triggers (see the sketch below). With this solution I wouldn't get the SQL statement of the queries (which would slow down my current concept of synchronisation), but it would be possible to realise.
Does someone know any other solution, or how I could do the impossible by accessing the query within a trigger?
I'm grateful for any suggestion!
Related questions:
Can a trigger access the query string
Log mysql db changing queries and users
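For reference, a minimal sketch of the history-table option mentioned above (in Python with mysql-connector; the orders/orders_history tables and connection details are hypothetical). Note that the trigger can record which row changed and how, but MySQL does not expose the originating SQL text to it:

    import mysql.connector

    DDL = """
    CREATE TRIGGER orders_audit_upd
    AFTER UPDATE ON orders
    FOR EACH ROW
      INSERT INTO orders_history (order_id, op, changed_at)
      VALUES (OLD.id, 'UPDATE', NOW())
    """

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="shop")
    cur = conn.cursor()
    cur.execute(DDL)  # one such trigger per table and per INSERT/UPDATE/DELETE
    conn.commit()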
You could set up MySQL Proxy (https://launchpad.net/mysql-proxy) between the existing application and the MySQL server, and intercept/modify/add any queries in the proxy.
Is there a way that, if there's a change in records, the query that changed the data (UPDATE, DELETE, INSERT) can be added to a "history" table transparently?
For example, if MySQL detects a change in a record or set of records, is there a way for MySQL to add that query statement to a separate table so that we can track the changes? That would make "rollback" possible, since every query (other than SELECT) could be replayed to reconstruct the database from its first row. Right?
I use PHP to interact with MySQL.
You need to enable the MySQL binlog. This automatically logs all altering statements to a binary log, which can be replayed as needed.
The alternative is to implement an auditing function through triggers.
Read about transaction logging in MySQL. This is built into MySQL.
MySQL has logging functionality that can be used to log all queries. I usually leave this turned off, since these logs can grow very rapidly, but it is useful to turn on when debugging.
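For example, the general query log can be toggled at runtime and routed to a table, which is handy for short debugging sessions (a sketch; assumes a user with admin/SUPER privileges and hypothetical connection details):

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="root",
                                   password="secret")
    cur = conn.cursor()
    cur.execute("SET GLOBAL log_output = 'TABLE'")  # log to mysql.general_log
    cur.execute("SET GLOBAL general_log = 'ON'")    # start logging all queries
    # ... run the statements you want to inspect, then:
    cur.execute("SELECT event_time, argument FROM mysql.general_log "
                "ORDER BY event_time DESC LIMIT 10")
    for event_time, argument in cur.fetchall():
        print(event_time, argument)
    cur.execute("SET GLOBAL general_log = 'OFF'")   # don't leave it running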
If you are looking to track changes to records so that you can "roll back" a sequence of queries if some error condition presents itself, then you may want to look into MySQL's native support for transactions.
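A minimal sketch of that approach (assuming InnoDB tables; the accounts table and connection details are hypothetical), where the whole sequence either commits or rolls back:

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="bank")
    cur = conn.cursor()
    try:
        conn.start_transaction()
        cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
        cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
        conn.commit()      # both changes become visible together
    except mysql.connector.Error:
        conn.rollback()    # any failure undoes the whole sequence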