After consulting the MySQL manual [1,2], it seems like performing a rename of a table needs a global table lock. Some quick experiments confirm this. This means that if any SELECT statements are executing against this table, the RENAME operation will block until the SELECT statements finish executing.
I was wondering whether it is possible, with InnoDB or any other database engine (PostgreSQL maybe?), for read-only operations not to block the database from performing a table RENAME.
I would imagine that the SELECT continues to read from the new table name if the two operations happen concurrently.
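A quick experiment along these lines reproduces the blocking (t1 is a hypothetical InnoDB table here, and the exact "waiting" state shown may vary by version):
-- Session 1: a deliberately slow read that keeps t1 in use
SELECT COUNT(*), SLEEP(60) FROM t1;

-- Session 2: issued while session 1 is still running; this blocks
-- until the SELECT above finishes
RENAME TABLE t1 TO t1_old;

-- Session 3 (optional): the RENAME shows up waiting on the table lock
SHOW PROCESSLIST;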
[1] https://dev.mysql.com/doc/refman/5.0/en/rename-table.html
[2] https://dev.mysql.com/doc/refman/5.0/en/select.html
So, we are trying to run a Report going to screen, which will not change any stored data.
However, it is complex, so needs to go through a couple of (TEMPORARY*) tables.
It pulls data from live tables, which are replicated.
The nasty bit comes when it is time to take the "eligible" records from temp_PreCalc and populate them from the live data to create the next (TEMPORARY*) table, resulting in effectively:
INSERT INTO temp_PostCalc (...)
SELECT ...
FROM temp_PreCalc
JOIN live_Tab1 ON ...
JOIN live_Tab2 ON ...
JOIN live_Tab3 ON ...
The report is not a "definitive" answer; the expectation is that it is merely a "snapshot" report and will be out-of-date as soon as it appears on screen.
There is no order or reproducibility issue.
So ideally, I would turn my TRANSACTION ISOLATION LEVEL down to READ COMMITTED...
However, I can't, because live_Tab1, 2 and 3 are replicated with binlog_format=STATEMENT...
The statement is lovely and quick - it takes hardly any time to run, so the resource load is now less than it used to be (the old version did separate SELECTs and INSERTs), but (as I understand it) it waits because the SELECT has to take a repeatable/syncable lock on the live_Tab's so that any result could be replicated safely.
In fact it now takes more time because of that wait.
I'd like to SEE that performance benefit in response time!
Except the data is written to (TEMPORARY*) tables and then thrown away.
There are no live_ table destinations - only sources...
*These tables are actually not TEMPORARY TABLES but dynamically created and dropped InnoDB tables, because the report calculation requires self-joins and deletes... but they are temporary in every other sense.
I now seem to be going around in circles finding an answer.
I don't have the SUPER privilege and don't want it...
So I can't SET sql_log_bin = 0 for this connection session (why is SUPER a requirement for that?)
So...
If I have a scratch Database or table wildcard, which excludes all my temp_ "Temporary" tables from replication...
(I am awaiting for this change to go through at my host centre)
Will MySQL allow me to
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO temp_PostCalc (...)
SELECT ...
FROM temp_PreCalc
JOIN live_Tab1 ON ...
JOIN live_Tab2 ON ...
JOIN live_Tab3 ON ...
;
Or will I still get my
"Cannot Execute statement: impossible to write to binary log since
BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine
limited to row-based logging..."
Even though it's not technically true?
I am expecting it to, as I presume that replication will kick in simply because it sees the INSERT statement and does a simple check on whether any of the tables involved are replication eligible, even though none of the destinations actually are....
or will it pleasantly surprise me?
I really can't face using an unpleasant solution like
SELECT ... INTO OUTFILE
LOAD DATA INFILE
In fact I don't think I could even use that - how would I get unique filenames? How would I clean them up?
The reports are run on-demand directly by end users, and I only have MySQL interface access to the server.
or streaming it through the PHP client, just to separate the INSERT from the SELECT so that MySQL doesn't get upset about which tables are replication eligible....
So, it looks like the only way appears to be:
We create a second Schema "ScratchTemp"...
Set the dreaded replication --replicate-ignore-db=ScratchTemp
My "local" query code opens a new mysql connection, and performs a USE ScratchTemp;
Because I have selected the ignored database as the default, none of my queries will be replicated.
So I need to take huge care not to perform ANY real queries here.
Reference my scratch_ tables and actual data tables by prefixing them all in my queries with the schema-qualified name...
e.g.
INSERT INTO LiveSchema.temp_PostCalc (...) SELECT ... FROM LiveSchema.temp_PreCalc JOIN LiveSchema.live_Tab1 etc etc as above.
And then close this connection just as soon as I can, as it is frankly dangerous to have a non-replicated connection open....
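Putting those steps together, the scratch-connection sequence would look something like the sketch below (the columns id, val1 and val2 are hypothetical placeholders for the real report columns):
USE ScratchTemp;

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Every object reference is schema-qualified; ScratchTemp is selected as
-- the default database only so that the replication filter
-- (--replicate-ignore-db) applies to statements on this connection
INSERT INTO LiveSchema.temp_PostCalc (id, val1, val2)
SELECT p.id, t1.val1, t2.val2
FROM LiveSchema.temp_PreCalc AS p
JOIN LiveSchema.live_Tab1 AS t1 ON t1.id = p.id
JOIN LiveSchema.live_Tab2 AS t2 ON t2.id = p.id
JOIN LiveSchema.live_Tab3 AS t3 ON t3.id = p.id;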
Sigh...?
I have a script that runs 24-7 on a table to perform necessary functions on it. However, when it is running, it is almost impossible to do an ALTER TABLE ADD INDEX statement, as it seems like it just hangs indefinitely. Is there any way around this? How should I go about adding this index?
The ALTER TABLE statement is blocked by a metadata lock. You cannot perform your ALTER statement while another transaction is in process on the same table. Since your script runs 24-7, it is not possible to do what you are asking.
To ensure transaction serializability, the server must not permit one session to perform a data definition language (DDL) statement on a table that is used in an uncompleted explicitly or implicitly started transaction in another session. The server achieves this by acquiring metadata locks on tables used within a transaction and deferring release of those locks until the transaction ends. A metadata lock on a table prevents changes to the table's structure. This locking approach has the implication that a table that is being used by a transaction within one session cannot be used in DDL statements by other sessions until the transaction ends.
You can read more about this in the metadata locking section of the MySQL manual at dev.mysql.com.
Version 5.6 has ALTER TABLE ... ALGORITHM=INPLACE ... to do ADD INDEX and several other ALTERs without blocking everything.
pt-online-schema-change (from Percona) can do it in older versions of MySQL. It uses triggers.
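For example, on 5.6+ something like the sketch below should add an index without blocking reads or writes (table, column and index names are made up):
ALTER TABLE my_table
    ADD INDEX idx_my_col (my_col),
    ALGORITHM=INPLACE, LOCK=NONE;
Note that even ALGORITHM=INPLACE still needs a brief exclusive metadata lock at the start and end of the operation, so it can still end up queued behind a long-running transaction that has the table open.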
I have an SP with a set of 3 temp tables created within the SP with no indexes. All three are inserted into using INSERT INTO ... WITH (TABLOCK). The database recovery model is SIMPLE for userDB as well as tempDB.
This SP is generating and inserting new data and a Transaction Commit/Rollback is good enough to maintain data integrity. So I want it to do minimal logging which I think I have enabled by using the TABLOCK hint.
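For reference, the insert pattern inside the SP is along these lines (the table and column names below are hypothetical placeholders):
CREATE TABLE #stage1 (
    id int NOT NULL,
    amount money NOT NULL
);

-- TABLOCK on the target table, which is what should allow minimal logging
INSERT INTO #stage1 WITH (TABLOCK) (id, amount)
SELECT s.id, s.amount
FROM dbo.SourceTable AS s;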
I am checking the log generation before and after execution of the SP using the query below, and I see no difference in log generation after adding the TABLOCK hint. (I check in tempdb, as the tables are temp tables.)
SELECT COUNT(1) AS NumLogEntries,
       SUM([Log Record Length]) AS TotalLengthWritten
FROM fn_dblog(NULL, NULL);
Is there anything else I need to do in order to enable minimal logging?
Note: I am able to see reduced logging if I use this hint to do the same INSERT INTO separately in Management Studio, but not if I do the same within the SP.
I have also tried turning trace flag 610 on before the insert statement, but to no effect.
This type of question has been posted a few times, but the solutions offered are not ideal in the following situation. In the first query, I'm selecting table names that I know exist when this first query is executed. Then while looping through them, I want to query the number of records in the selected tables, but only if they still exist. The problem is, during the loop, some of the tables are dropped by another script. For example:
SELECT tablename FROM table
-- returns say 100 tables
while (%tables){
SELECT COUNT(*) FROM $table
-- by the time it gets to the umpteenth table, it's been dropped
-- so the SELECT COUNT(*) fails
}
And I guess because it's run by cron, it fails fatally, and I get sent an email from cron stating it failed.
DBD::mysql::st execute failed: Table 'xxx' doesn't exist at
/usr/local/lib/perl/5.10.1/Mysql.pm line 175.
The script is using the deprecated Mysql.pm Perl module.
Obviously you need to secure the table to make sure it won't get dropped before you execute your query. Keep in mind that if you begin with some kind of table lock to avoid a possible drop, a DROP TABLE query issued from some other place will fail with a lock error, or at least wait until your SELECT finishes. Dropping a table isn't a frequently used operation, so in most cases the schema design persists while the server is running; what you observe is really rare behaviour. In general, preventing a table from being dropped during another query just isn't supported; however, in the comments on the document below you may find a trick that uses semaphore tables to achieve it.
http://dev.mysql.com/doc/refman/5.1/en/lock-tables.html
"A table lock protects only against inappropriate reads or writes by other sessions. The session holding the lock, even a read lock, can perform table-level operations such as DROP TABLE. Truncate operations are not transaction-safe, so an error occurs if the session attempts one during an active transaction or while holding a table lock."
"If you need to do things with tables not normally supported by read or write locks (like dropping or truncating a table), and you're able to cooperate, you can try this: Use a semaphore table, and create two sessions per process. In the first session, get a read or write lock on the semaphore table, as appropriate. In the second session, do all the stuff you need to do with all the other tables."
You should be able to protect your Perl code from failing by putting it into an eval block. Something like this:
eval {
    # try doing something with DBD::mysql
};
if ($@) {
    # oops, the mysql code failed
    # probably need to try it again
}
Or you could even put this in a "while" loop.
If you used a better server like Postgres, the right solution would be to enclose everything in a transaction. But in MySQL, dropping a table is not protected by transactions.
To select information related to a list of hundreds of IDs... rather than make a huge select statement, I create a temp table, insert the IDs into it, join it with a table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables.
I do this in a transaction to ensure the temp table is deleted when I'm finished. My question is... what happens when I commit such a transaction vs. let it roll back?
Performance-wise... does the DB engine have to do more work to roll back the transaction vs committing it? Is there even a difference since the only modifications are done to a temp table?
Related question here, but doesn't answer my specific case involving temp tables: Should I commit or rollback a read transaction?
EDIT (Clarification of Question):
Not looking for advice up to the point of commit/rollback. The transaction is absolutely necessary. Assume no errors occur. Assume I have created a temp table, assume I know real "work" writing to tempdb has occurred, assume I perform read-only (select) operations in the transaction, and assume I issue a delete statement on the temp table. After all that... which is cheaper, commit or rollback, and why? What OTHER work might the db engine do at THAT POINT for a commit vs. a rollback, based on this specific scenario involving temp tables and otherwise read-only operations?
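For concreteness, the pattern in question is roughly the following sketch (table and column names are placeholders):
BEGIN TRANSACTION;

CREATE TABLE #ids ( id int PRIMARY KEY );

INSERT INTO #ids (id)
VALUES (101), (102), (103);   -- hundreds of IDs in practice

SELECT t.*
FROM dbo.SomeTable AS t
JOIN #ids AS i ON i.id = t.id;   -- read-only against the persistent table

DROP TABLE #ids;

COMMIT TRANSACTION;   -- or ROLLBACK TRANSACTION: which is cheaper here?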
If we are talking about a local temporary table (i.e. the name is prefixed with a single #), the moment you close your connection SQL Server will drop the table. Thus, assuming your data layer is well designed to keep connections open as short a time as possible, I would not worry about wrapping the creation of temp tables in a transaction.
I suppose there could be a slight performance difference from wrapping the work in a transaction, but I would bet it is so small as to be inconsequential compared to the cost of keeping a transaction open longer due to the time needed to create and populate the temp table.
A simpler way to ensure that the temp table is deleted is to create it using the # sign.
CREATE TABLE #mytable (
    rowID int,
    rowName char(30)
);
The # tells SQL Server that this table is a local temporary table. This table is only visible to this session of SQL Server. When the session is closed, the table will be automatically dropped. You can treat this table just like any other table with a few exceptions. The only real major one is that you can't have foreign key constraints on a temporary table. The others are covered in Books Online.
Temporary tables are created in tempdb.
If you do this, you won't have to wrap it in a transaction.