Simulate MySQL queries - mysql

Is there a way to test MySQL queries without modifying my database?
For example, can I run DELETE FROM my_table WHERE id=101 without actually deleting anything? What about INSERT and UPDATE?
TY

You could start a transaction before running your queries and then roll back afterwards. Note that to do this you'll need InnoDB or XtraDB tables (it won't work on MyISAM).
To start a transaction send to MySQL the following statement:
START TRANSACTION;
And at the end of your queries run the following statement:
ROLLBACK;
Your database will never be modified from the point of view of other connections. Your current connection will see the changes until the ROLLBACK; after that, the original state is restored.
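A minimal sketch of the whole pattern, reusing the my_table/id=101 example from the question (the SELECTs are only there to confirm what happened):

START TRANSACTION;
DELETE FROM my_table WHERE id = 101;
-- Inside this connection the row is now gone; other connections still see it.
SELECT COUNT(*) FROM my_table WHERE id = 101;
ROLLBACK;
-- The DELETE has been undone and the row is visible again.
SELECT COUNT(*) FROM my_table WHERE id = 101;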

If you just want to see how many rows would be deleted, why not substitute the DELETE with a SELECT COUNT(*)?
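With the query from the question, the substitution keeps the WHERE clause unchanged, so it counts exactly the rows the DELETE would remove:

SELECT COUNT(*) FROM my_table WHERE id = 101;
-- instead of: DELETE FROM my_table WHERE id = 101;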

Copy your database to a test database, or auto-generate an import script. Then you can test whatever you want on the test database and restore it again using the script or import.
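For a single table this can be done in plain SQL (my_db, my_db_test and my_table are placeholder names); for a whole database, mysqldump is the usual tool for generating such an import script:

CREATE DATABASE IF NOT EXISTS my_db_test;
CREATE TABLE my_db_test.my_table LIKE my_db.my_table;          -- copies structure and indexes
INSERT INTO my_db_test.my_table SELECT * FROM my_db.my_table;  -- copies the data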


MySQL: Strategy to move data to another server

So my situation is as follows:
There is a single master-slave replication setup based on MySQL 5.5.
The master uses a small SSD as its data partition.
Therefore I want to clean up a certain InnoDB table (let's call this table MasterA) and move old rows (datediff < -2) to another database on the slave (SlaveA), which has more space on its SATA HDD.
The problem gets interesting as in some cases I need to access data from SlaveA.
So I think it would be best if an event triggers a transaction like this:
INSERT INTO SlaveA SELECT * FROM MasterA WHERE datediff(created, now()) < -2;
DELETE FROM MasterA WHERE datediff(created, now()) < -2;
But how could I access SlaveA from the master? I already tried the FEDERATED engine, but it gets stuck with the read_only option activated on the slave and the SUPER privilege required for the user accessing the federated table.
Maybe the event should only call the copy query on the slave, but how to delete the rows on the master afterwards?
There should be other options than installing MySQL 5.6 and using another partition for the SlaveA table on the master.
Thanks in advance!
An external daemon process (with handles to both databases) could accomplish what you are looking for, but it is not a very clean solution.
If you did have a single handle with access to both databases, a trigger would be a viable solution. I would change your code to use a MySQL user-defined variable, setting it in the first statement and reading it in the second, as sketched below.
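A minimal sketch of that idea, assuming one connection that can reach both tables (pinning NOW() in a variable guarantees the INSERT and the DELETE match exactly the same rows even if the clock ticks over between the two statements):

SET @cutoff := NOW();  -- snapshot the current time once
INSERT INTO SlaveA SELECT * FROM MasterA WHERE DATEDIFF(created, @cutoff) < -2;
DELETE FROM MasterA WHERE DATEDIFF(created, @cutoff) < -2;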
On the other hand, I would question why you think you need the write master on an SSD. INSERT queries are normally a lot cheaper than DELETE queries. If you make sure all the reads go against the slaves, the master should have very minimal latency. I would recommend putting it on a SATA HDD and not running DELETE queries against it. Then you don't have to create a custom trigger; MySQL's built-in replication should work just fine.

mysql - do locks propagate over replication?

I have a MySQL master-slave(s) replication setup with MyISAM tables. All updates are done on the master, and selects are done on either the master or the slaves.
It appears that we might need to manually lock a few tables when we do certain updates. While this write lock is on the tables, no selects can happen on the locked table. But what about on the slaves? Does the lock propagate out?
Say I have table_A and table_B. I initiate a lock on table_A and table_B on the master and start performing the update. At this time, no other connection can read table_A or table_B off the master, right? But what if at this time another connection tries to read the tables off a slave: can they do so?
Everything that MySQL replicates can be found in the binary logs.
You can run the following command to see the details.
show global variables like 'log_bin%';
log_bin_basename will tell you the path to your binary logs with base file name.
and run
show binary logs
to find the binary files that are currently present on your server.
You can check the actual commands that are written to the file by using mysqlbinlog command together with the file name or by running show binlog events ... from the MySQL CLI.
Also, check which binlog_format you are using.
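For example (the binlog file name below is just whatever show binary logs listed on your server; yours will differ):

SHOW VARIABLES LIKE 'binlog_format';
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 10;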
Basically, the table lock is not directly propagated to the slaves, but when the slaves execute the replicated updates, they will themselves lock the updated table if needed.
As far as I know, write locks do not propagate into the binlog; you can verify that by doing a quick test and looking at the binlog. If you want to avoid issues on the master as well and for some reason cannot migrate to InnoDB, consider integrating something like GET_LOCK() into your application instead of completely locking a table. MyISAM is quite iffy when it comes to concurrency.
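A minimal sketch of the GET_LOCK() approach (the lock name is an arbitrary application-chosen string, not tied to the table, so every writer has to cooperate by taking the same lock; readers that don't call it are not blocked):

SELECT GET_LOCK('table_a_writes', 10);   -- 1 = acquired, 0 = timed out after 10 seconds
-- ... perform the updates on table_A here ...
SELECT RELEASE_LOCK('table_a_writes');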

Retrieving deleted rows from a MySQL database

I have accidentally deleted all rows of the database. Is there a way to retrieve them? Since I am working on a department server, I am not able to take backups. I know that in Oracle I could roll back DML commands, so I tried to use ROLLBACK, but it's not working.
Or do I have to create the whole database again?
Sorry, but if the transaction with the DELETE statement has already been committed, then I don't think you can recover the lost data unless you have a backup.
To avoid this kind of accident, I'd advise always testing your WHERE clause with a SELECT query first, before running the DELETE statement. Then you will notice if it would delete rows that you didn't intend to delete.
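For example (table and condition are hypothetical, just to show the pattern):

SELECT * FROM orders WHERE status = 'cancelled';  -- inspect exactly which rows match
DELETE FROM orders WHERE status = 'cancelled';    -- run only once the SELECT looks right

On InnoDB you can additionally wrap the DELETE in START TRANSACTION ... ROLLBACK/COMMIT as an extra safety net (see the first question above).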

Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table; this was done thousands of times, and the column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads use the following command in mysql command line interface:
SHOW PROCESSLIST;
It can also be run from phpMyAdmin if you don't have access to the MySQL command line interface.
This will display a list of threads with their corresponding ids and execution times, so you can KILL the threads that are taking too long to execute.
In phpMyAdmin you will have a button for stopping threads via KILL; if you are using the command line interface, just use the KILL command followed by the thread id, as in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
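If you prefer a query you can filter and sort, the same information is exposed through the information_schema (available since MySQL 5.1):

SELECT id, user, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;   -- longest-running statements first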
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
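To see directly what blocks what, the two tables above can be joined (assuming MySQL 5.x, where these views live in information_schema; in MySQL 8.0 they moved to performance_schema):

SELECT r.trx_mysql_thread_id AS waiting_thread,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;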
The root cause of the issue might be in your code too: check the related functions, especially the annotations, if you use a JPA implementation like Hibernate.
For example, as described here, misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
Truth is there is probably some way to optimize either your queries or your DB, but try these two queries as a workaround fix.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
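These set the lock wait timeout (in seconds) at the global and the session level respectively; you can verify what is currently in effect with:

SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

Note that this only makes waiting statements wait longer before erroring out; it does not release whatever is holding the lock.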
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you are not able to acquire the lock, you retry for some time. If the lock still cannot be obtained, the "lock wait timeout exceeded" error is thrown. The reason you cannot acquire the lock is that you are not closing the connection: when you try to take the lock a second time, your previous connection is still open and holding it.
Solution: close the connection, or use setAutoCommit(true) (according to your design) to release the lock.
Restarting MySQL also works. BUT beware that if such a query gets stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
a very large number of records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is no long-lived solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see if something can be done there to speed it up (indexes, complex tests, ...).
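For example, using the table and column names from the question (plain EXPLAIN only accepts SELECT before MySQL 5.6.3, so the UPDATE is rewritten as a SELECT, with a hypothetical WHERE clause added for illustration):

EXPLAIN SELECT * FROM some_table WHERE some_column = 'some_value';
-- a NULL key together with a large rows estimate means a full table scan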
Go to the process list in MySQL, so you can see whether a task is still running, and kill that particular process or wait until it completes.
I ran into the same problem with an UPDATE statement. My solution was simply to run through the operations available in phpMyAdmin for the table: I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force-close the SQL process from Task Manager. If that doesn't fix it, restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (cascade-deleting several related records streamlines removing all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running the delete query for this group of records via the cPanel MySQL access on the web. Got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script using the ODBC Access-to-MySQL connection.
Issue in my case: some updates were made to some rows within a transaction, and before that transaction was committed, the same rows were being updated elsewhere, outside the transaction. Ensuring that all updates to those rows are made within the same transaction resolved my issue.
In my case the issue was resolved by changing DELETE to TRUNCATE.
The failing query:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
The fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and from control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to reboot all access points to the database: close them all and restart.
Fixed it: make sure you don't have a mismatched data type in your insert query.
I had an issue where I was trying to insert "user browser agent data" into a VARCHAR(255) column and kept hitting this lock; when I changed it to TEXT(255), that fixed it.
So most likely it is a mismatch of data types.
I solved the problem by dropping the table and restoring it from backup.

MySQL triggers + replication with multiple databases

I am running a couple of databases on MySQL 5.0.45 and am trying to get my legacy database to sync with a revised schema, so I can run both side by side. I am doing this by adding triggers to the new database, but I am running into problems with replication. My setup is as follows.
Server "master"
Database "legacydb", replicates to server "slave".
Database "newdb", has triggers which update "legacydb" and no replication.
Server "slave"
Database "legacydb"
My updates to "newdb" run fine and set off my triggers. They update "legacydb" on the "master" server. However, the changes are not replicated down to the slaves. The MySQL docs say that, for simplicity, replication looks at the current database context (e.g. SELECT DATABASE();) when deciding which queries to replicate, rather than looking at the product of the query. My trigger runs in the context of database "newdb", so replication ignores the updates.
I have tried moving the update statement to a stored procedure in "legacydb". This works fine (i.e. data replicates to slave) when I connect to "master" and manually run "USE newdb; CALL legacydb.do_update('Foobar', 1, 2, 3, 4);". However, when this procedure is called from a trigger it does not replicate.
So far my thinking on how to fix this has been one of the following.
Force the trigger to set a new current database. This would be easiest, but I don't think it is possible. This is what I hoped to achieve with the stored procedure.
Replicate both databases, and have triggers in both master and slave. This would be possible, but a pain to set up.
Force the replication to pick up all changes to "legacydb", regardless of the current database context.
If replication runs at too high a level, it will never even see any updates run by my trigger, in which case no amount of hacking is going to achieve what I want.
Any help on how to achieve this would be greatly appreciated.
This may have something to do with it:
A stored function acquires table locks before executing, to avoid inconsistency in the binary log due to mismatch of the order in which statements execute and when they appear in the log. Statements that invoke a function are recorded rather than the statements executed within the function. Consequently, stored functions that update the same underlying tables do not execute in parallel.
In contrast, stored procedures do not acquire table-level locks. All statements executed within stored procedures are written to the binary log.
Additionally, there is a whole list of issues with triggers:
http://dev.mysql.com/doc/refman/5.0/en/routine-restrictions.html