SQL drop a column that was created during the same transaction? - mysql

I'm using a transaction to make alterations to my database; this way, if anything fails during the alterations, I can roll back without any harm having been done.
However, since I run my queries from a list of queries that puts my database in its final state (any time I need to make changes, I simply add a new rule to the list), it fails when I try to drop a column that was added during the same transaction.
Example:
START TRANSACTION;
ALTER TABLE "servers" ADD COLUMN "test" INTEGER NOT NULL;
ALTER TABLE "servers" DROP COLUMN "test";
COMMIT;
When I run this, I get something along the lines of
column "test" in the middle of being added, try again later
I understand why this is happening: since the transaction hasn't been committed yet, the column doesn't exist to be dropped. However, is there a way around this so that I can drop the column in the same transaction? Or at least queue it to be deleted once the transaction is committed?
It is worth noting that the queries being run within the transaction are generated using Eloquent ORM Blueprints.

MySQL doesn't support transactional DDL; schema changes cannot be rolled back. From the documentation:
Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, those that create, drop, or alter tables or stored routines.
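In practice this means every ALTER TABLE implicitly commits the transaction around it, so there is nothing left to roll back. A minimal demonstration of the behavior (assuming the servers table already exists):
START TRANSACTION;
ALTER TABLE `servers` ADD COLUMN `test` INTEGER NOT NULL; -- causes an implicit COMMIT
ROLLBACK; -- too late, the ALTER is already committed
SHOW COLUMNS FROM `servers` LIKE 'test'; -- the column still exists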

Related

How to create a changelog for a table?

I need to create a change history of table rows when a certain field is changed. So what I wanted to do is create a trigger on table update. When the field txta changes, I want the whole row to be copied over to debug_history, which is a cloned version of msser_210 with an added datetime column at the end, without data. I would like to add NOW() on change so I have a timestamp. This is what I have tried so far:
DELIMITER $$
CREATE TRIGGER history_trigger
BEFORE UPDATE ON msser_210
FOR EACH ROW
BEGIN
    IF OLD.txta != NEW.txta THEN
        INSERT INTO `debug_history` (`idpm`,`posn`,`prnb`,`doid`,`ofcr`,`pidm`,`hitm`,`sitm`,`item`,`dsca`,`igid`,`kitm`,`leng`,`widt`,`hght`,`thik`,`radi`,`quas`,`wght`,`effc`,`colr`,`bdat`,`edat`,`back`,`cuid`,`intb`,`aggr`,`unqu`,`oqua`,`unsq`,`stoc`,`allo`,`hall`,`tqan`,`bqan`,`pkey`,`pric`,`cvqs`,`unsp`,`disc`,`dart`,`ksid`,`anhg`,`txta`,`txti`,`mndn`,`changedate`)
        VALUES (OLD.idpm,OLD.posn,OLD.prnb,OLD.doid,OLD.ofcr,OLD.pidm,OLD.hitm,OLD.sitm,OLD.item,OLD.dsca,OLD.igid,OLD.kitm,OLD.leng,OLD.widt,OLD.hght,OLD.thik,OLD.radi,OLD.quas,OLD.wght,OLD.effc,OLD.colr,OLD.bdat,OLD.edat,OLD.back,OLD.cuid,OLD.intb,OLD.aggr,OLD.unqu,OLD.oqua,OLD.unsq,OLD.stoc,OLD.allo,OLD.hall,OLD.tqan,OLD.bqan,OLD.pkey,OLD.pric,OLD.cvqs,OLD.unsp,OLD.disc,OLD.dart,OLD.ksid,OLD.anhg,OLD.txta,OLD.txti,OLD.mndn,NOW());
    END IF;
END$$
DELIMITER ;
The reason I want to do this: we probably have a PHP script with a bug that writes the same text string into every field of the database, but we don't know when or why it happens, nor which script does it. Is there maybe a more elegant solution?
UPDATE: I found the option to "Track Changes" in phpMyAdmin, but apparently it does not track the UPDATE queries issued by our PHP programs; the DROP and CREATE TABLE statements from PHP are tracked, and so is an UPDATE issued via phpMyAdmin itself. Long story short, I went back to my original plan with the trigger.
UPDATE 2: I found the answer myself (see below).
Update: As per the OP's comment, clearly the context is very specific: an infrastructure team without access to the development team's code (or the ability to feed back to and direct that team) needs a mechanism by which to log table changes on a production database.
Warnings about using triggers:
Triggers can be tricky to debug, not least because they are invisible to the caller and it is never obvious to someone new looking at your code that a trigger is performing some action behind the scenes. (I speak from experience.) They can also cause issues on replicated, multi-master and clustered installations. (Again, I speak from experience.) Also, if they fail for some unrelated reason (e.g. the table they write to is broken), the entire transaction can/will fail (InnoDB) - which might not be what you want, especially for non-essential "debug" functionality.
Otherwise, triggers are a perfectly valid tool. And in your specific scenario, probably the best bet available to you.
There are several other options available to you, two of which I would highlight:
Stored procedures as an access layer to data
If you're very data-centric and you already have business logic inside the database (a hotly debated topic; I'm not arguing here that you should or should not have business logic in the database), then reading and writing to the database through stored procedures has a clear advantage.
Any transactionally tied logic can be put into these stored procedures, so that the transactionally unsafe caller (PHP being a common example) only needs to issue one query (CALL sp_insert_tablename(123, 'abc')) and transactional safety can be enforced by the database.
Temporary debug logic can be added to these stored procedures and enabled/disabled by a flag in a settings table, a session variable, a final argument, whatever you please; a sketch follows below.
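A minimal sketch of that idea; the procedure, tables and debug flag (sp_insert_tablename, tablename, settings.debug_enabled, debug_log) are all hypothetical names, not from the question:
DELIMITER $$
CREATE PROCEDURE sp_insert_tablename(IN p_id INT, IN p_txta TEXT)
BEGIN
    START TRANSACTION;
    -- hypothetical debug switch held in a settings table
    IF (SELECT debug_enabled FROM settings LIMIT 1) = 1 THEN
        INSERT INTO debug_log (ref_id, txta, changedate) VALUES (p_id, p_txta, NOW());
    END IF;
    INSERT INTO tablename (id, txta) VALUES (p_id, p_txta);
    COMMIT;
END$$
DELIMITER ;
The caller then needs only CALL sp_insert_tablename(123, 'abc'); and the logging can be switched on and off without touching the PHP side.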
Data abstraction layer/library
Similar principle. Find a data abstraction layer for your client (assuming you have access to alter its internals). For a PHP or .NET web app there are several popular choices, all of which allow you to override (extend through inheritance) the save/delete operations to perform any additional actions you want - exactly as with stored procedures, but with the logic maintained inside models in the client.
If you want a specific example, you'll need to give us more information on what stack/language/framework(s) you're using.
With both options, make sure you appropriately handle error scenarios.
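For the stored procedure option, one common way to handle errors is an exit handler that rolls back and re-raises; again a sketch with hypothetical names:
DELIMITER $$
CREATE PROCEDURE sp_insert_tablename_safe(IN p_id INT, IN p_txta TEXT)
BEGIN
    -- on any SQL error: undo the transaction, then re-raise the error to the caller
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        RESIGNAL;
    END;
    START TRANSACTION;
    INSERT INTO tablename (id, txta) VALUES (p_id, p_txta);
    COMMIT;
END$$
DELIMITER ;
(RESIGNAL requires MySQL 5.5 or later.)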
debug_history was cloned via phpMyAdmin from the original table; it got an additional changedate column appended manually:
ALTER TABLE debug_history ADD COLUMN changedate DATETIME DEFAULT NULL;
Since there was no other way, I decided I would have to type all the column names myself. Because I am lazy, I took a recent SQL dump, copied an INSERT INTO statement from the file used to rebuild msser_210, and altered the values.
I also added an extra AUTO_INCREMENT column, dropped the old primary key and made the new column the primary key.
ALTER TABLE debug_history DROP PRIMARY KEY;
ALTER TABLE debug_history ADD COLUMN changenumber INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
I now have a working changelog, triggered on a change to the txta field (see the question for the trigger in its original form). I renamed the txta column in debug_history to txta_old and created a new column txta_new.
ALTER TABLE debug_history CHANGE txta txta_old TEXT NOT NULL $$
ALTER TABLE debug_history ADD COLUMN txta_new TEXT NOT NULL AFTER txta_old $$
Afterwards I had to modify the trigger to match, which again meant copying all the column names manually:
DROP TRIGGER history_trigger;
DELIMITER $$
CREATE TRIGGER history_trigger
BEFORE UPDATE ON msser_210
FOR EACH ROW
BEGIN
    IF OLD.txta != NEW.txta THEN
        INSERT INTO `debug_history` (`idpm`,`posn`,`prnb`,`doid`,`ofcr`,`pidm`,`hitm`,`sitm`,`item`,`dsca`,`igid`,`kitm`,`leng`,`widt`,`hght`,`thik`,`radi`,`quas`,`wght`,`effc`,`colr`,`bdat`,`edat`,`back`,`cuid`,`intb`,`aggr`,`unqu`,`oqua`,`unsq`,`stoc`,`allo`,`hall`,`tqan`,`bqan`,`pkey`,`pric`,`cvqs`,`unsp`,`disc`,`dart`,`ksid`,`anhg`,`txta_old`,`txta_new`,`txti`,`mndn`,`changedate`)
        VALUES (OLD.idpm,OLD.posn,OLD.prnb,OLD.doid,OLD.ofcr,OLD.pidm,OLD.hitm,OLD.sitm,OLD.item,OLD.dsca,OLD.igid,OLD.kitm,OLD.leng,OLD.widt,OLD.hght,OLD.thik,OLD.radi,OLD.quas,OLD.wght,OLD.effc,OLD.colr,OLD.bdat,OLD.edat,OLD.back,OLD.cuid,OLD.intb,OLD.aggr,OLD.unqu,OLD.oqua,OLD.unsq,OLD.stoc,OLD.allo,OLD.hall,OLD.tqan,OLD.bqan,OLD.pkey,OLD.pric,OLD.cvqs,OLD.unsp,OLD.disc,OLD.dart,OLD.ksid,OLD.anhg,OLD.txta,NEW.txta,OLD.txti,OLD.mndn,NOW());
    END IF;
END$$
DELIMITER ;
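A quick way to verify the trigger fires (idpm = 1 is a placeholder row identifier; any WHERE clause that matches a row will do):
UPDATE msser_210 SET txta = 'new text' WHERE idpm = 1;
SELECT txta_old, txta_new, changedate FROM debug_history ORDER BY changenumber DESC LIMIT 1;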

How to rename two tables in one atomic operation in MySQL

I need to rename two tables in one atomic operation so that a user will never be able to see the database in an intermediate state.
I'm using MySQL and noticed that this case is perfectly described in the documentation:
13.3.3 Statements That Cause an Implicit Commit
The statements listed in this section (and any synonyms for them)
implicitly end any transaction active in the current session, as if
you had done a COMMIT before executing the statement
[...]
Data definition language (DDL) statements that define or modify
database objects. ALTER DATABASE ... UPGRADE DATA DIRECTORY NAME,
ALTER EVENT, ALTER PROCEDURE, ALTER SERVER, ALTER TABLE, ALTER VIEW,
CREATE DATABASE, CREATE EVENT, CREATE INDEX, CREATE PROCEDURE, CREATE
SERVER, CREATE TABLE, CREATE TRIGGER, CREATE VIEW, DROP DATABASE, DROP
EVENT, DROP INDEX, DROP PROCEDURE, DROP SERVER, DROP TABLE, DROP
TRIGGER, DROP VIEW, INSTALL PLUGIN (as of MySQL 5.7.6), RENAME TABLE,
TRUNCATE TABLE, UNINSTALL PLUGIN (as of MySQL 5.7.6).
But maybe there's some kind of workaround or something like this?
My situation looks like this:
I have a current data set in the table named current
I gathered a new data set in the table named next
I need to rename the current table to current_%current_date_time% and the next table to current in one atomic operation
Well, easy...
RENAME TABLE current TO current_20151221, next TO current;
as stated in the manual. There it says that it's an atomic operation. Just to clear this up: implicit commits have nothing to do with it; that's a different story. That note only says that those statements end an open transaction.
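If the %current_date_time% suffix needs to be generated on the server side, one possible sketch is to build the RENAME with a prepared statement (RENAME TABLE is permitted in PREPARE):
SET @sql := CONCAT('RENAME TABLE current TO current_', DATE_FORMAT(NOW(), '%Y%m%d%H%i%s'), ', next TO current');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
The swap itself is still the single atomic RENAME statement.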

MySQL/Percona 5.6: INSERT INTO a table after a table is ALTERed

I have recently installed a new computer with Percona Server 5.6 instead of MySQL 5.6, using InnoDB/XtraDB mostly, FWIW. The database I'm working on is merely a testing ground, but I have one issue: after I add a column to a table (or even remove one), I usually forget to INSERT or otherwise update another table's data, which keeps track of which column names are in which table; each table has an ASCII name along with a number, and for simplicity this number is the only difference between table names. So, is there a way to auto-update that "relation" table so that the column name and table number are added or changed, instead of using a cronjob?
Now that I think about it, I could DROP that table and use information_schema instead ...
EDIT 0: Don't let the above realization stop you; it's just good to know whether this is possible before going another way.
Yes, relying on INFORMATION_SCHEMA.COLUMNS may be best.
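For example, something along these lines gives you the table/column mapping your hand-maintained relation table was tracking ('your_db' is a placeholder):
SELECT TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'your_db'
ORDER BY TABLE_NAME, ORDINAL_POSITION;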
Unfortunately MySQL does not support DDL trigger events, which would be exactly what you are looking for.
Triggers allow you to perform many SQL and procedural operations before insertion, update or deletion of rows in a specific table. However, to the best of my knowledge (and I would be stoked to be wrong), you can't set trigger events on DDL statements like ALTER TABLE or DROP TABLE...
Still, take the time to learn about triggers; they save a lot of time by eliminating the need for cronjobs and external updates for things like aggregate values.
https://dev.mysql.com/doc/refman/5.6/en/trigger-syntax.html
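To illustrate that last point, a minimal sketch of a trigger maintaining an aggregate; the orders and order_stats tables are hypothetical:
DELIMITER $$
CREATE TRIGGER orders_count_after_insert
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    -- keep a per-customer order count current without a cronjob
    UPDATE order_stats
    SET order_count = order_count + 1
    WHERE customer_id = NEW.customer_id;
END$$
DELIMITER ;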

Transactional ALTER statements in MySQL

I'm applying an update to a MySQL database which includes scripts with ALTER TABLE statements as well as DML statements (DELETE, INSERT, UPDATE).
The idea is to make the update transactional, so that if a statement fails a rollback is made. But if the script contains ALTER TABLE statements, or the others listed in http://dev.mysql.com/doc/refman/5.0/en/implicit-commit.html, an implicit commit is made, so I can't do a complete rollback: the indicated operations remain committed.
I tried using mysqldump to take a backup to restore in case of error (when mysql returns a non-zero exit code), but it is too slow and can fail too.
What can I do? I need future updates to be safe and not too slow, because the databases contain between 30-100 GB of data.
Dump and reload might be your best option instead of ALTER TABLE.
From the mysql prompt or from the database script:
SELECT * FROM mydb.myt INTO OUTFILE '/var/lib/mysql/mydb.myt.out';
DROP TABLE mydb.myt;
CREATE TABLE mydb.myt (...your new table DDL here...);
LOAD DATA INFILE '/var/lib/mysql/mydb.myt.out' INTO TABLE mydb.myt;
Check this out:
http://everythingmysql.ning.com/profiles/blogs/whats-faster-than-alter
I think it offers good guidance on "alternatives to alter".
Look at pt-online-schema-change.
You can configure it to leave the 'old' table around after the online ALTER completes. The old table will have an underscore prefix. If bad things happen, drop the tables you altered and rename the OLD tables back to the original names. If everything is OK, just drop the OLD tables.
http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html
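A sketch of an invocation; the DSN and ALTER clause are placeholders, and --no-drop-old-table keeps the renamed original around as the fallback described above:
pt-online-schema-change --alter "ADD COLUMN test INT NOT NULL" --no-drop-old-table --execute D=mydb,t=servers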

Restore DB from SQL script with Foreign Key Constraints

I am trying to restore a DB using an SQL script, but foreign key constraints get in the way.
I am taking a MySQL DB and bringing it over to PostgreSQL.
Since the MySQL CREATE TABLE syntax ended up being quite different, I took another PostgreSQL DB with the same schema but different data, and restored the schema only from that.
In other words, I now have a database with tables, constraints, sequences and all that shnaz but no data inside.
So, it is time to restore the data.
I take a backup of the MySQL DB with phpMyAdmin (data only) as an SQL script (pgAdmin does not seem to accept zip or gzip files for some reason) and run the SQL script.
Now, this is where the problems start to happen, it's only natural, I am going from MySQL to PostgreSQL, so syntax errors are bound to happen.
But there are other, non-syntax-related problems too, like this one:
ERROR: insert or update on table "_account" violates foreign key constraint "fk_1_account"
DETAIL: Key (accountid)=(2) is not present in table "_entity".
So, yeah, basically: a foreign key constraint exists, the query is trying to insert data into the _account table, but the corresponding data has not been inserted into the _entity table yet.
How do I get around that? Is there a way to make pgAdmin3/PostgreSQL disable ALL OF the constraints, insert the data, and then re-enable the constraints?
A syntax related error I encountered, was this one:
INSERT INTO _accounttype_seq (id) VALUES (11);
The PostgreSQL equivalent of that statement (if I am correct) is setting the sequence's current value:
SELECT setval('_accounttype_seq', 11);
But it's a bit of a pain to run through the whole script and change all 200+ sequence INSERT statements. I am being lazy here, but is there an easier way to deal with the sequences as well?
Or, do you guys have any suggestions for a different set of tools to make this easier?
Thanks for your time, have a good day.
Do not try to get around the foreign key constraints. That is the way to make sure the data is bad.
First look at the constraints and make sure you are inserting into the tables in the correct order. If _entity is the parent of _account, then it should be populated first.
Next, you need to have the script move any failing records to an exception table. Then you can look at them, see what the data integrity issue is, and decide whether to throw the records away permanently or try to figure out what the missing parent value should be. If it is critical data, such as orders where the customer no longer exists (possible in any system that didn't have correct FKs to begin with), and you must keep the record but cannot determine what the parent value should have been, you can create an 'Unknown' record in the customer table and assign all bad orders to that customer id.
And manually changing the sequence statements shouldn't take long, even if it is boring. There will be plenty of other things you need to handle manually in a conversion of this type.
I would try to find a data import tool for PostgreSQL - I live in the SQL Server world, where I would use SSIS, but you need the equivalent of SSIS for the PostgreSQL world.
Apparently the foreign keys weren't actually enforced in MySQL (maybe because of MyISAM), or the generated SQL simply inserts in the wrong order.
If it's "only" the wrong order, I see two possible solutions:
edit the generated script and either move all FK definitions to the end of the script
Edit the definition of each FK constraint and set them all to initially deferred. Then run the script as one single transaction with only on commit at the very end.
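For the second option, each constraint has to be recreated as deferrable first; a sketch using the constraint from the error message (the referenced column of _entity is an assumption, as it isn't shown in the question):
ALTER TABLE _account DROP CONSTRAINT fk_1_account;
ALTER TABLE _account ADD CONSTRAINT fk_1_account
    FOREIGN KEY (accountid) REFERENCES _entity (accountid)
    DEFERRABLE INITIALLY DEFERRED;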
Edit (because this is too much to be put as a comment)
Using SET CONSTRAINTS ALL DEFERRED will only work if the constraints have been created with the option DEFERRABLE.
To run everything in one single transaction, you have to make sure autocommit is turned off. Then simply run the INSERTs and issue a COMMIT at the very end. (A ; only commits when autocommit is on.)
If you want to be independent of the autocommit setting, start your script with BEGIN and make sure there is only a single COMMIT at the very end:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO table_one ... ;
INSERT INTO table_two ... ;
.....
COMMIT;