Interrupted bulk insert statement while on table lock - mysql

On a production server we have a huge MyISAM table that collects visits, and roughly once a month we run a routine that moves old data to an archive table, keeping the huge table lighter for backups and so on. The problem is that if the server crashes, or MySQL simply has to restart, while the routine is running, it may interrupt the INSERT into the archive table or the DELETE from the live table. Since we cannot use transactions (the archive table uses the ARCHIVE engine), is locking the tables a solution, or do we have to plan integrity checks manually?
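One sketch of the move under table locks (table and column names here are hypothetical, not from the question). Note that locking protects against concurrent writers, not against a crash: the ARCHIVE engine supports INSERT and SELECT but not DELETE, so a crash between the two statements below still calls for a manual integrity check before re-running.

```sql
-- Hypothetical names: visits (MyISAM, live), visits_archive (ARCHIVE).
-- Lock both tables so the copy and the delete see the same rows.
LOCK TABLES visits WRITE, visits_archive WRITE;

INSERT INTO visits_archive
SELECT * FROM visits
WHERE visit_date < '2013-01-01';

DELETE FROM visits
WHERE visit_date < '2013-01-01';

UNLOCK TABLES;

-- If MySQL dies between the INSERT and the DELETE, the rows exist in both
-- tables; compare counts over the same WHERE predicate before retrying.
```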

Related

Table Renaming in an Explicit Transaction

I am extracting a subset of data from a backend system to load into a SQL table for querying by a number of local systems. I do not expect the dataset to ever be very large - no more than a few thousand records. The extract will run every two minutes on a SQL2008 server. The local systems are in use 24 x 7.
In my prototype, I extract the data into a staging table, then drop the live table and rename the staging table to become the live table in an explicit transaction.
IF OBJECT_ID('dbo.Temp_MyTable_Staging') IS NOT NULL
    DROP TABLE Temp_MyTable_Staging;

SELECT fieldlist
INTO Temp_MyTable_Staging
FROM FOOBAR;

BEGIN TRANSACTION;
IF OBJECT_ID('dbo.MyTable') IS NOT NULL
    DROP TABLE MyTable;
EXECUTE sp_rename N'dbo.Temp_MyTable_Staging', N'MyTable';
COMMIT;
I have found lots of posts on the theory of transactions and locks, but none that explain what actually happens if a scheduled job tries to query the table in the few milliseconds while the drop/rename executes. Does the scheduled job just wait a few moments, or does it terminate?
Conversely, what happens if the rename starts while a scheduled job is selecting from the live table? Does the transaction fail to get a lock and therefore terminate?

Why is a mysqldump with --single-transaction more consistent than one without?

I have gone through the manual, and it mentions that --single-transaction issues a BEGIN statement before it starts taking the dump. Can someone elaborate on this in a more understandable manner?
Here is what I read:
This option issues a BEGIN SQL statement before dumping data from the server. It is useful only with transactional tables such as InnoDB and BDB, because then it dumps the consistent state of the database at the time when BEGIN was issued without blocking any applications.
Can someone elaborate on this?
Since the dump runs in one transaction, you get a consistent view of all the tables in the database. This is probably best explained by a counterexample. Say you dump a database with two tables, Orders and OrderLines:
1. You start the dump without a single transaction.
2. Another process inserts a row into the Orders table.
3. Another process inserts a row into the OrderLines table.
4. The dump processes the OrderLines table.
5. Another process deletes the Orders and OrderLines records.
6. The dump processes the Orders table.
In this example, your dump would have the rows for OrderLines, but not Orders. The data would be in an inconsistent state and would fail on restore if there were a foreign key between Orders and OrderLines.
If you had done it in a single transaction, the dump would have neither the order nor the lines (but it would be consistent), since both were inserted and then deleted after the transaction began.
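The restore failure in that counterexample comes from the foreign key. A minimal sketch (the schema below is assumed for illustration, not taken from the question):

```sql
-- Hypothetical schema for the Orders / OrderLines counterexample.
CREATE TABLE Orders (
    order_id INT PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE OrderLines (
    line_id  INT PRIMARY KEY,
    order_id INT NOT NULL,
    FOREIGN KEY (order_id) REFERENCES Orders (order_id)
) ENGINE=InnoDB;

-- A dump that captured the OrderLines row but not its parent Orders row
-- fails on this statement at restore time: the referenced order_id
-- does not exist, so the foreign key check rejects the insert.
INSERT INTO OrderLines (line_id, order_id) VALUES (1, 42);
```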
I used to run into problems where mysqldump without the --single-transaction parameter would consistently fail due to data being changed during the dump. As far as I can figure, when you run it within a single transaction, it is preventing any changes that occur during the dump from causing a problem. Essentially, when you issue the --single-transaction, it is taking a snapshot of the database at that time and dumping it rather than dumping data that could be changing while the utility is running.
This can be important for backups because it means you get all the data, exactly as it is at one point in time.
So for example, imagine a simple blog database, where a typical bit of activity might be:
1. Create a new user.
2. Create a new post by the user.
3. Delete a user, which deletes the post.
Now when you backup your database, the backup may backup the tables in this order
Posts
Users
What happens if someone deletes a User, which is required by the Posts, just after your backup reaches #1?
When you restore your data, you'll find that you have a Post, but the user doesn't exist in the backup.
Putting a transaction around the whole thing means that all the updates, inserts and deletes that happen on the database during the backup aren't seen by the backup.
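For InnoDB tables, the statements mysqldump --single-transaction issues before dumping look roughly like this (a sketch based on the manual; exact statements vary by version):

```sql
-- Approximately what mysqldump --single-transaction runs at startup:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION WITH CONSISTENT SNAPSHOT;

-- Every SELECT the dump issues now sees the database exactly as it was
-- at the instant the snapshot was taken; concurrent inserts, updates,
-- and deletes by other sessions are invisible to it.
SELECT * FROM Users;
SELECT * FROM Posts;

COMMIT;
```

This only gives a consistent snapshot for transactional engines such as InnoDB; MyISAM tables are still read live.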

How to temporarily lock a database in MySQL

I am using Engine InnoDB on my MySQL server.
I have a patch script to upgrade my tables like add new columns and fill in default data.
I want to make sure there is no other session using the database, so I need a way to lock it:
The lock shouldn't kick out an existing session. If there is any other existing session, the lock should just fail and report an error.
The lock needs to prevent other sessions from reading, writing, or changing the database.
Thanks a lot everyone!
You don't need to worry about locking tables yourself. As the MySQL documentation (http://dev.mysql.com/doc/refman/5.1/en/alter-table.html) says:
In most cases, ALTER TABLE makes a temporary copy of the original
table. MySQL waits for other operations that are modifying the table,
then proceeds. It incorporates the alteration into the copy, deletes
the original table, and renames the new one. While ALTER TABLE is
executing, the original table is readable by other sessions. Updates
and writes to the table that begin after the ALTER TABLE operation
begins are stalled until the new table is ready, then are
automatically redirected to the new table without any failed updates.
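If you still want the fail-fast behavior the question asks for, one sketch is an advisory lock. The lock name and the zero timeout below are assumptions, and GET_LOCK only guards cooperating clients: every session that touches the database must also acquire the lock, or it protects nothing.

```sql
-- Advisory-lock sketch for the patch script.
-- Timeout 0 means: fail immediately instead of waiting.
SELECT GET_LOCK('db_patch', 0);
-- Returns 1 if acquired; 0 means another session holds it, so the
-- patch script should abort and report an error, as required.

-- ... run the ALTER TABLE / default-data statements here ...

SELECT RELEASE_LOCK('db_patch');
```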

How does table locking affect table engine change from MyISAM to InnoDB?

So I have been asked to change the engine of a few tables in a production database from MyISAM to InnoDB. I am trying to figure out how that will affect usage in production (as the server can afford no downtime).
I have read some conflicting information. Some sources state that the tables are locked and will not receive updates until after the conversion completes (i.e., updates are not queued, just discarded until it completes).
In other places, I have read that while the table is locked, the inserts and updates will be queued until the operation is complete, and THEN the write actions are performed.
So what exactly is the story here?
This is directly from the manual:
In most cases, ALTER TABLE makes a temporary copy of the original
table. MySQL waits for other operations that are modifying the table,
then proceeds. It incorporates the alteration into the copy, deletes
the original table, and renames the new one. While ALTER TABLE is
executing, the original table is readable by other sessions. Updates
and writes to the table that begin after the ALTER TABLE operation
begins are stalled until the new table is ready, then are
automatically redirected to the new table without any failed updates.
So, number two wins. They're not "failed", they're "stalled".
The latter is correct. All queries against a table that's being altered are blocked until the alter completes, and are processed once the alter finishes. Note that this includes read queries (SELECT) as well as write queries (INSERT, UPDATE, DELETE).
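The conversion itself is a single statement (the table name below is hypothetical). While it runs, reads and writes against the table queue up behind the copy-and-rename described in the manual excerpt above, then resume against the new table.

```sql
-- Rebuild the table under the new engine; blocks queries until done.
ALTER TABLE visits ENGINE=InnoDB;

-- Verify afterwards: the Engine column should now read InnoDB.
SHOW TABLE STATUS LIKE 'visits';
```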

Optimize mysql table to avoid locking

How do I optimize MySQL tables so that they are not locked? Can I alter a table to 'turn off' locking entirely?
Situation:
I have an app that uses a database of 15M records. Once a week, scripts do some work (insert/update/delete) for about 20 hours while app servers keep feeding data to the front end (web server), and that is fine; I see only a very small performance loss during that time.
Problem:
Once a month I need to optimize the table. Since there is a huge number of records, it takes 1-2 hours to finish this task (starting OPTIMIZE from the mysql command line or phpMyAdmin, same result), and during that period MySQL DOESN'T SERVE data to the front end (I suppose it is due to locking the table for OPTIMIZE).
Question:
So how do I optimize tables while avoiding locking? Since only reads happen at that point (no inserts or updates), I suppose 'unlocking' during OPTIMIZE can't do any damage in this case?
If your table engine is InnoDB and your MySQL version is 5.6.17 or later, the lock won't happen. Actually there will be a lock, but only for a VERY short period.
Prior to Mysql 5.6.17, OPTIMIZE TABLE does not use online DDL.
Consequently, concurrent DML (INSERT, UPDATE, DELETE) is not permitted
on a table while OPTIMIZE TABLE is running, and secondary indexes are
not created as efficiently.
As of MySQL 5.6.17, OPTIMIZE TABLE uses online DDL for regular and
partitioned InnoDB tables. The table rebuild, triggered by OPTIMIZE
TABLE and performed under the cover by ALTER TABLE ... FORCE, is
performed in place and only locks the table for a brief interval,
which reduces downtime for concurrent DML operations.
Optimize Tables Official Ref.
Just make sure you have free space greater than the space currently occupied by your table, because a whole-table copy can happen for the index rebuild.
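Putting it together (the table name is hypothetical): on InnoDB with MySQL 5.6.17 or later, OPTIMIZE TABLE is performed as an online rebuild, so concurrent reads and writes are blocked only briefly.

```sql
-- Rebuilds the table and defragments it; on InnoDB >= 5.6.17 this uses
-- online DDL and locks the table only for a brief interval.
OPTIMIZE TABLE visits;

-- Per the manual, for InnoDB this is carried out internally as an
-- ALTER TABLE ... FORCE rebuild, equivalent in effect to:
ALTER TABLE visits FORCE;
```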