I am writing a PyQt6 QSqlTableModel application. I'm using the manual submit strategy (OnManualSubmit) and submitAll(). I want to capture the SQL statements which were successfully executed. The executedQuery() method reports only the last query, but several may have been executed by submitAll(). There doesn't seem to be a way to step through the executed queries one at a time.
I can use brute force by tailing the database system log files, but I'd rather use a PyQt6 method.
Is there any way I can do this?
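I don't know of a PyQt6 call that replays each statement, but one workaround might be to subclass QSqlTableModel and override the protected per-row hooks that submitAll() uses, logging each change as it is written. A sketch (untested; LoggingTableModel and _record_to_dict are invented names, and this captures the row changes rather than the literal SQL text):

from PyQt6.QtSql import QSqlTableModel

class LoggingTableModel(QSqlTableModel):
    """QSqlTableModel that records every row change submitAll() writes."""

    def __init__(self, parent=None):
        super().__init__(parent)
        self.change_log = []  # one entry per row successfully written

    def insertRowIntoTable(self, values):
        ok = super().insertRowIntoTable(values)
        if ok:
            self.change_log.append(("INSERT", self.tableName(), _record_to_dict(values)))
        return ok

    def updateRowInTable(self, row, values):
        ok = super().updateRowInTable(row, values)
        if ok:
            self.change_log.append(("UPDATE", self.tableName(), _record_to_dict(values)))
        return ok

    def deleteRowFromTable(self, row):
        ok = super().deleteRowFromTable(row)
        if ok:
            self.change_log.append(("DELETE", self.tableName(), row))
        return ok

def _record_to_dict(record):
    # Turn a QSqlRecord into a plain dict for logging.
    return {record.fieldName(i): record.value(i) for i in range(record.count())}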
I have a database, let's say in MySQL, that logs runs of client programs that connect to the database. When doing a run, the client program will connect to the database, insert a "Run" record with the start timestamp into the "Runs" table, enter its data into other tables for that run, and then update the same record in the "Runs" table with the end timestamp of the run. The end timestamp is NULL until the end of the run.
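To illustrate, the lifecycle looks roughly like this (a sketch only; the table, columns and connection details are made-up placeholders, shown with mysql-connector-python):

import mysql.connector

conn = mysql.connector.connect(host="db.example.com", user="client",
                               password="...", database="telemetry")
cur = conn.cursor()

# Start of run: insert a Runs record whose end timestamp stays NULL for now.
cur.execute("INSERT INTO Runs (start_ts, end_ts) VALUES (NOW(), NULL)")
run_id = cur.lastrowid
conn.commit()

# ... many smaller transactions uploading this run's data into other tables ...

# End of run: fill in the end timestamp. If the client dies before this point,
# end_ts stays NULL forever.
cur.execute("UPDATE Runs SET end_ts = NOW() WHERE id = %s", (run_id,))
conn.commit()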
The problem is that the client program can be interrupted -- someone can hit Ctrl+C, the system can crash, etc. This would leave the end timestamp as NULL; i.e. I couldn't tell the difference between a run that's still ongoing and one that terminated ungracefully at some point.
I wouldn't want to wrap the entire run in a transaction because the runs can take a long time and upload a lot of data, and all of the data from a partial run would be desired. (There will be lots of smaller transactions during the run, however.) I also need to be able to view the data in real-time in another SQL connection as it's being uploaded by a client, so a mega-transaction for the entire run would not be good for that purpose.
During a run, the client will have a continuous session with the SQL server, so it would be nice if there could be a "trigger" or similar functionality on the connection closing that would update the Run record with the ending timestamp. It would also be nice if such a "trigger" could add a status like "completed successfully" vs. "terminated ungracefully" to boot.
Is there a solution for this in MySQL? How about PostgreSQL or any other popular relational database system?
I currently have a PostgreSQL database, because one of the pieces of software we're using only supports this particular database engine. I then have a query which summarizes and splits the data from the app into a more useful format.
In my MySQL database, I have a table whose schema is identical to the output of the query described above.
What I would like to develop is an hourly cron job which will run the query against the PostgreSQL database, then insert the results into the MySQL database. During any given hour, I don't expect to ever see more than 10,000 new rows (and that's a stretch) that would need to be transferred.
Both databases are on separate physical servers, continents apart from one another. The MySQL instance runs on Amazon RDS - so we don't have a lot of control over the machine itself. The PostgreSQL instance runs on a VM on one of our servers, giving us complete control.
The duplication is, unfortunately, necessary because the PostgreSQL database only acts as a collector for the information, while the MySQL database has an application running on it which needs the data. For simplicity, we're wanting to do the move/merge and delete from PostgreSQL hourly to keep things clean.
To be clear - I'm a network/sysadmin guy - not a DBA. I don't really understand all of the intricacies involved in converting one format to the other. What I do know is that the data being transferred consists of 1xVARCHAR, 1xDATETIME and 6xBIGINT columns.
The closest guess I have for an approach is to use some scripting language to make the query, convert results into an internal data structure, then split it back out to MySQL again.
In doing so, are there any particular good or bad practices I should be wary of when writing the script? Or - any documentation that I should look at which might be useful for doing this kind of conversion? I've found plenty of material on scheduling jobs, which looks very manageable and well-documented, but the ongoing nature of this script (an hourly run) seems less common and/or less well documented.
Open to any suggestions.
Use the same database system on both ends and use replication
If your remote end was also PostgreSQL, you could use streaming replication with hot standby to keep the remote end in sync with the local one transparently and automatically.
If the local end and remote end were both MySQL, you could do something similar using MySQL's various replication features like binlog replication.
Sync using an external script
There's nothing wrong with using an external script. In fact, even if you use DBI-Link or similar (see below), you probably have to use an external script (or psql) from a cron job to initiate replication, unless you're going to use PgAgent to do it.
Either accumulate rows in a queue table maintained by a trigger procedure, or make sure you can write a query that always reliably selects only the new rows. Then connect to the target database and INSERT the new rows.
If the rows to be copied are too big to comfortably fit in memory, you can use a cursor and read them with FETCH.
I'd do the work in this order (a rough sketch in Python follows the list):
Connect to PostgreSQL
Connect to MySQL
Begin a PostgreSQL transaction
Begin a MySQL transaction. If your MySQL is using MyISAM, go and fix it now.
Read the rows from PostgreSQL, possibly via a cursor or with DELETE FROM queue_table RETURNING *
Insert them into MySQL
DELETE any rows from the queue table in PostgreSQL if you haven't already.
COMMIT the MySQL transaction.
If the MySQL COMMIT succeeded, COMMIT the PostgreSQL transaction. If it failed, ROLLBACK the PostgreSQL transaction and try the whole thing again.
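A rough sketch of those steps in Python; psycopg2 and mysql-connector-python are my assumptions (any pair of client libraries would do), and the table and column names are invented to match the 1xVARCHAR, 1xDATETIME and 6xBIGINT layout mentioned in the question:

import psycopg2
import mysql.connector

pg = psycopg2.connect("dbname=collector user=sync")            # 1. connect to PostgreSQL
my = mysql.connector.connect(host="rds.example.com", user="sync",
                             password="...", database="app")   # 2. connect to MySQL
pg_cur = pg.cursor()    # 3. psycopg2 opens a PostgreSQL transaction implicitly on first execute
my.start_transaction()  # 4. begin the MySQL transaction (requires InnoDB, not MyISAM)
my_cur = my.cursor()

# 5./7. read the pending rows and remove them from the queue in one statement
pg_cur.execute("DELETE FROM queue_table "
               "RETURNING name, logged_at, c1, c2, c3, c4, c5, c6")
rows = pg_cur.fetchall()

# 6. insert them into MySQL
my_cur.executemany(
    "INSERT INTO target_table (name, logged_at, c1, c2, c3, c4, c5, c6) "
    "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
    rows,
)

# 8./9. commit MySQL first; only commit PostgreSQL if that succeeded
try:
    my.commit()
except Exception:
    my.rollback()
    pg.rollback()   # the queue rows reappear and can be retried on the next run
    raise
else:
    pg.commit()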
The PostgreSQL COMMIT is incredibly unlikely to fail because it's a local database, but if you need perfect reliability you can use two-phase commit on the PostgreSQL side (sketched after the list), where you:
PREPARE TRANSACTION in PostgreSQL
COMMIT in MySQL
then either COMMIT PREPARED or ROLLBACK PREPARED in PostgreSQL depending on the outcome of the MySQL commit.
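Continuing the sketch above (the transaction id 'sync_batch_1' is arbitrary, and max_prepared_transactions must be set above zero on the PostgreSQL server), the commit dance becomes roughly:

# COMMIT PREPARED / ROLLBACK PREPARED cannot run inside another transaction, so
# switch psycopg2 to autocommit (right after connecting) and issue BEGIN by hand.
pg.autocommit = True
pg_cur.execute("BEGIN")
# ... DELETE FROM queue_table RETURNING ..., fetch the rows and run the MySQL
# INSERTs inside the MySQL transaction, exactly as in the sketch above ...
pg_cur.execute("PREPARE TRANSACTION 'sync_batch_1'")  # durable on disk, not yet visible

try:
    my.commit()
except Exception:
    pg_cur.execute("ROLLBACK PREPARED 'sync_batch_1'")
    raise
else:
    pg_cur.execute("COMMIT PREPARED 'sync_batch_1'")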
This is likely too complicated for your needs, but is the only way to be totally sure the change happens on both databases or neither, never just one.
BTW, seriously, if your MySQL is using MyISAM table storage, you should probably remedy that. It's vulnerable to data loss on crash, and it can't be transactionally updated. Convert to InnoDB.
Use DBI-Link in PostgreSQL
Maybe it's because I'm comfortable with PostgreSQL, but I'd do this using a PostgreSQL function that uses DBI-Link via PL/PerlU to do the job.
When replication should take place, I'd run a PL/pgSQL or PL/Perl procedure that uses DBI-Link to connect to the MySQL database and insert the data from the queue table.
Many examples exist for DBI-Link, so I won't repeat them here. This is a common use case.
Use a trigger to queue changes and DBI-link to sync
If you only want to copy new rows and your table is append-only, you could write a trigger procedure that appends all newly INSERTed rows into a separate queue table with the same definition as the main table. When you want to sync, your sync procedure can then in a single transaction LOCK TABLE the_queue_table IN EXCLUSIVE MODE;, copy the data, and DELETE FROM the_queue_table;. This guarantees that no rows will be lost, though it only works for INSERT-only tables. Handling UPDATE and DELETE on the target table is possible, but much more complicated.
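A rough sketch of that setup, with invented table, function and trigger names, expressed as the DDL a one-off Python/psycopg2 setup script might run:

import psycopg2

with psycopg2.connect("dbname=collector user=sync") as conn:
    with conn.cursor() as cur:
        # Queue table with the same definition as the main table
        cur.execute("CREATE TABLE IF NOT EXISTS queue_table (LIKE main_table)")

        # Trigger function that copies every newly inserted row into the queue
        cur.execute("""
            CREATE OR REPLACE FUNCTION enqueue_new_row() RETURNS trigger AS $$
            BEGIN
                INSERT INTO queue_table VALUES (NEW.*);
                RETURN NEW;
            END;
            $$ LANGUAGE plpgsql
        """)
        cur.execute("""
            CREATE TRIGGER main_table_enqueue
            AFTER INSERT ON main_table
            FOR EACH ROW EXECUTE PROCEDURE enqueue_new_row()
        """)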
Add MySQL to PostgreSQL with a foreign data wrapper
Alternately, for PostgreSQL 9.1 and above, I might consider using the MySQL Foreign Data Wrapper, ODBC FDW or JDBC FDW to allow PostgreSQL to see the remote MySQL table as if it were a local table. Then I could just use a writable CTE to copy the data.
WITH moved_rows AS (
DELETE FROM queue_table RETURNING *
)
INSERT INTO mysql_table
SELECT * FROM moved_rows;
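Once the foreign table exists, the hourly cron job reduces to running that one statement; a sketch with psycopg2 (connection details assumed):

import psycopg2

with psycopg2.connect("dbname=collector user=sync") as conn:
    with conn.cursor() as cur:
        cur.execute("""
            WITH moved_rows AS (
                DELETE FROM queue_table RETURNING *
            )
            INSERT INTO mysql_table
            SELECT * FROM moved_rows
        """)
# psycopg2's connection context manager commits on success and rolls back on error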
In short you have two scenarios:
1) Make destination pull the data from source into its own structure
2) Make source push out the data from its structure to destination
I'd rather try the second one: look around and find a way to create a PostgreSQL trigger, some special "virtual" table, or maybe a PL/pgSQL function. Then, instead of an external script, you'll be able to run the procedure by executing a query from cron, or possibly from inside PostgreSQL itself, since there are some options for scheduling operations there.
I'd choose the second scenario because PostgreSQL is much more flexible, and when manipulating data in special, DIY ways you will simply have more possibilities.
An external script probably isn't a good solution, e.g. because you will need to treat binary data with special care, or convert dates from DATE to VARCHAR and then back to DATE again. Inside an external script, various text-stored data will probably be plain strings, and you will need to quote them as well.
Simply put, I have to write an application to synchronise several database tables. Because of the requirements, the changes should be put into a queue (in the form of a SQL statement), and here lies the problem: I'm not able to change the existing application which uses the database so that it adds the executed query directly to the queue. Therefore I need to catch all data-changing SQL queries against specific tables (> 20 tables) in the database.
I thought about the following solutions:
To catch the MySQL query directly with triggers, as described in Can a trigger access the query string (the best answer for this case I could find!) - but I couldn't get the query that activates the trigger, only the query that I used within it.
To activate the General Query Log. But I've read about its heavy performance impact, so it isn't a workable solution; it would also log the tables I don't need (> 120 tables) and the many simple queries run against the database.
To use a history table filled by triggers. With this solution I wouldn't save the SQL statement of the queries (which would slow down my current concept of synchronisation), but it would be possible to realise (sketched below).
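For reference, a rough sketch of that third option, with invented table, column and trigger names (only the UPDATE trigger is shown; INSERT and DELETE triggers look the same):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="sync",
                               password="...", database="app")
cur = conn.cursor()

# History table holding one row per change, without the original SQL text.
cur.execute("""
    CREATE TABLE IF NOT EXISTS orders_history (
        id         BIGINT AUTO_INCREMENT PRIMARY KEY,
        action     ENUM('INSERT', 'UPDATE', 'DELETE') NOT NULL,
        order_id   BIGINT NOT NULL,
        changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    )
""")

# Trigger that records every update to the watched table.
cur.execute("""
    CREATE TRIGGER orders_after_update
    AFTER UPDATE ON orders
    FOR EACH ROW
        INSERT INTO orders_history (action, order_id) VALUES ('UPDATE', NEW.id)
""")
conn.commit()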
Does anyone know of any other solution, or how I could do the impossible and access the query within a trigger?
I'm grateful about any suggestion!
Related questions:
Can a trigger access the query string
Log mysql db changing queries and users
You could set up MySQL Proxy (https://launchpad.net/mysql-proxy) between the existing application and the MySQL server, and intercept/modify/add any queries in the proxy.
Is there a way that, if there's a change in records, the query that changed the data (UPDATE, DELETE, INSERT) can be added to a "history" table transparently?
For example, if MySQL detects a change in a record or set of records, is there a way for MySQL to add that query statement to a separate table so that we can track the changes? That would make "rollback" possible, since every query (other than SELECT) would allow the database to be reconstructed from its first row. Right?
I use PHP to interact with MySQL.
You need to enable the MySQL binlog. This automatically logs all of the altering statements to a binary log, which can be replayed as needed.
The alternative is to implement an auditing function through triggers.
Read about transaction logging in MySQL. This is built in to MySQL.
MySQL has logging functionality that can be used to log all queries. I usually leave this turned off since these logs can grow very rapidly, but it is useful to turn on when debugging.
If you are looking to track changes to records so that you can "roll back" a sequence of queries if some error condition presents itself, then you may want to look into MySQL's native support of transactions.
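The transactional route means a group of statements either all take effect or none do. Sketched in Python for brevity (the same START TRANSACTION / COMMIT / ROLLBACK flow applies from PHP, it requires a transactional engine such as InnoDB, and the table names are invented):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="app",
                               password="...", database="shop")
cur = conn.cursor()

try:
    conn.start_transaction()
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
    conn.commit()          # both changes become permanent together
except Exception:
    conn.rollback()        # neither change is applied
    raise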