When was my table last ALTERed?

I'm using mysql 5.1.41-3ubuntu12.10 and would like to know when my table was last ALTERed (or CREATEd, if it was never ALTERed).
SELECT * FROM information_schema.TABLES WHERE TABLE_SCHEMA = SCHEMA();
gives the CREATE and last UPDATE time, but not AFAICT the last ALTER time.

The answer depends somewhat on the storage engine. The most reliable indicator of when the table was last altered is to look at the modified time on the .frm file in the data directory. That file should be updated every time you alter the table, even for changes like updating a column default that don't require a table rebuild.
information_schema.tables.create_time is a bit of a misnomer, since that value actually changes most of the time when you alter a table. However, this is one area where the storage engine matters. If you do an ALTER that doesn't require a rebuild (like changing a column default) on an InnoDB table, information_schema.tables.create_time is updated; do the same on a MyISAM table and it is not. In both cases the .frm file should be updated, so I'd recommend checking the file timestamp for the most accurate answer if you have access to it.
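For comparison, here is how to see what information_schema records for a single table (the table name is a placeholder):
SELECT TABLE_NAME, ENGINE, CREATE_TIME, UPDATE_TIME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = SCHEMA()
  AND TABLE_NAME = 'mytable';
If CREATE_TIME is newer than you expect, an ALTER has probably touched the table since it was created.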

Related

Updating multiple DB tables while keeping them permanently accessible

I have a set of tables in a MySQL database which contain a set of related data (50,000 rows total, so low volume), which are accessed all the time (7 million/day). Periodically (let's say once a day) I need to update ALL the data in all the tables (a full refresh).
I'm considering 2 possibilities:
use transactions, but I'm not sure how it will work with reads/locks
use versioning: add a version column to all tables and give all rows from the same "publication" the same version. The next publication gets version+1, after which the lower-version rows can be deleted. The current version is stored in a parameter table so the reading query can always pick the latest available version.
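A sketch of what I have in mind (all table and column names below are placeholders):
-- load the next publication under version N+1
INSERT INTO data_table (id, payload, version)
SELECT id, payload, (SELECT current_version + 1 FROM params) FROM staging_table;
-- publish it atomically by bumping the pointer
UPDATE params SET current_version = current_version + 1;
-- readers always join on the published version
SELECT d.* FROM data_table d JOIN params p ON d.version = p.current_version;
-- lower versions can then be deleted
DELETE FROM data_table WHERE version < (SELECT current_version FROM params);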
Has anybody experimented with both solutions? Or is there a different/better solution?
Thanks
Replacing an entire table
CREATE TABLE `new` LIKE `real`;
-- populate `new` with the new stuff (the slow part)
RENAME TABLE `real` TO `old`,
             `new` TO `real`;   -- atomic and fast
Replacing an entire database: Do the above for each table, but hold off on the RENAMEs until all the other work is done. Then do all of them in a single RENAME TABLE statement.
No locking, no transactions, no nothing.
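As a sketch, for a database with tables t1 and t2 (placeholder names):
CREATE TABLE new_t1 LIKE t1;
CREATE TABLE new_t2 LIKE t2;
-- ... populate new_t1 and new_t2 (the slow part) ...
RENAME TABLE t1 TO old_t1, new_t1 TO t1,
             t2 TO old_t2, new_t2 TO t2;   -- one atomic swap for the whole set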

What is the best and safest method to update/change the data type of a column in a MySQL table that has ~5.5 million rows (TINYINT to SMALLINT)?

Similar questions have been asked, but I have had issues in the past when using
ALTER TABLE tablename MODIFY columnname SMALLINT
The last time I ran this, the server crashed and I had to recover my table. Is it safe to use this command when there is that much data in the table? What if there are other queries running on the table in parallel? Should I copy the table and run the query on the new table? Should I copy the column and move the data to the new column?
Please let me know if there are any best or "safest" practices when doing this.
Also, I know this depends on a lot of factors, but does anyone know how long the query should take on an InnoDB table with ~5.5 million rows (rough estimate)? The column in question is a TINYINT and has data in it. I want to upgrade to a SMALLINT to handle larger values.
Thanks!
On a slow disk, and with lots of columns in the table, it could take hours to finish.
The ALTER is "safe" because, traditionally, it does the following:
1. Lock the table.
2. Create a similar table, but with SMALLINT instead of TINYINT.
3. Copy all the rows over to the new table.
4. Rename the tables and drop the old one.
5. Unlock.
Step 3 is the slow part. The only vulnerability is in step 4, which is very fast.
A server crash during steps 1-3 should have left the old table intact, but possibly left behind a partially created tmp table named something like #sql....
Percona's pt-online-schema-change has the advantage of being virtually lockless.
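Roughly, pt-online-schema-change automates a trigger-based pattern like the sketch below (placeholder names; the real tool also handles UPDATE/DELETE triggers, chunking, throttling, and foreign keys):
CREATE TABLE tablename_new LIKE tablename;
ALTER TABLE tablename_new MODIFY columnname SMALLINT;   -- fast: the new table is empty
-- keep the copy in sync while existing rows are batch-copied across
CREATE TRIGGER tablename_ai AFTER INSERT ON tablename FOR EACH ROW
  REPLACE INTO tablename_new (id, columnname) VALUES (NEW.id, NEW.columnname);
-- ... chunked INSERT ... SELECT of existing rows, then an atomic RENAME swap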
This cannot be easily answered.
It depends on things like
Does the table have its own file, or is it shared with others?
How big is the table in terms of bytes?
etc.
It can take anywhere from a few minutes to, indeed, several hours, and it can involve copying the whole content of the table, so you also need quite a lot of free disk space.
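You can at least check the size up front before deciding (replace the table name):
-- rough on-disk size of data plus indexes, in bytes
SELECT DATA_LENGTH + INDEX_LENGTH AS total_bytes
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = SCHEMA() AND TABLE_NAME = 'tablename';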
You can add a new SMALLINT column to the table:
ALTER TABLE tablename ADD columnname_new SMALLINT AFTER columnname;
then copy the data from the old column to the new one, in batches:
UPDATE tablename SET columnname_new = columnname
WHERE columnname_new IS NULL LIMIT 100000;
Repeat the above until it affects 0 rows. (If the old column can contain NULLs, those rows will match the WHERE clause forever, so batch by primary key range instead.)
then you can drop the old column:
ALTER TABLE tablename DROP COLUMN columnname;
and finally rename the new column:
ALTER TABLE tablename CHANGE columnname_new columnname SMALLINT;
Copying the values in batches of 100,000 rows keeps each UPDATE short, so you don't hold long locks or build huge transactions while the migration runs.
I would add a new column and change the code to check whether a value exists in the new column, reading it from there if it does and falling back to the old column otherwise, while writing all values to the new column. At this point you can migrate the data at will, copying values from the old column into the new column wherever the new column does not yet have one.
Once all of the data has been migrated you can drop the old column.

How do I efficiently change a MySQL table structure on a table with millions of entries?

I have a MySQL database that is up to about 17 GB in size and has 38 million entries. At the moment, I need to both increase the size of one column (VARCHAR(40) to VARCHAR(80)) and add more columns.
Many of the fields are indexed, including the one that I need to change. It is part of a unique pair that is necessary for the application to work. When I attempted to just make the change yesterday, the query ran for almost four hours without finishing, at which point I decided to cut the outage short and just bring the service back up.
What is the most efficient way to make changes to something of this size?
Many of these entries are also old, so if there is a good way to shard off older entries while still keeping them available, that might help by bringing the table down to a much more manageable size.
You have some choices.
In any case you should take a backup before you do this stuff.
One possibility is to take your service offline and do it in place, as you have tried. If you do that, you should disable key checks and constraints first:
ALTER TABLE bigtable DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE (whatever);
ALTER TABLE (whatever else);
...
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE bigtable ENABLE KEYS;
This will allow the ALTER TABLE operation to go faster. It will regenerate the indexes all at once when you do ENABLE KEYS.
Another possibility is to create a new table with the new schema you want, then disable the keys on the new table, then do as @Bader suggested and insert the contents of the old table.
After your new table is built, re-enable the keys on it, then rename the old table to some name like "old_bigtable" and rename the new table to "bigtable".
It's possible that you can keep your service online while you're populating the new table, but any rows written to the old table during the copy would be missed, so that might work poorly.
A third possibility is to dump your giant table to a flat file and then load it into a new table with the new layout. That is pretty much like the second possibility, except that you get a table backup for free. You can make this go pretty fast with SELECT ... INTO OUTFILE and LOAD DATA INFILE. You'll need access to your server machine's file system to do this.
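A sketch of that third option (paths and names are placeholders; the MySQL server needs write access to the target directory):
SELECT * FROM bigtable INTO OUTFILE '/tmp/bigtable.txt';
-- create bigtable_new with the new layout and disable its keys, then:
LOAD DATA INFILE '/tmp/bigtable.txt' INTO TABLE bigtable_new;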
In all cases, disable, then re-enable, the constraints and keys to get things to go fast.
Create a new table with the new structure you want, under a different name, for example NewTable.
Then insert data into this new table from the old table using the following query:
INSERT INTO NewTable (field1, field2, ...) SELECT field1, field2, ... FROM OldTable;
After this is done, you can drop the old table and rename the new table to the original name:
DROP TABLE `OldTable`;
RENAME TABLE `NewTable` TO `OldTable`;
I have tried this approach on a very large table and it's much much faster than altering the table.
With MySQL 5.1, and again with 5.5, certain ALTER statements were enhanced to modify just the structure without rewriting the entire table (see http://dev.mysql.com/doc/refman/5.5/en/alter-table.html and search for "in place"). Whether this applies varies by the type of change you are making and the engine in use; the InnoDB Plugin benefits the most. In the case of your specific changes, though, the entire table would be rewritten.
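For example, with the InnoDB Plugin, adding or dropping a secondary index no longer copies the table, whereas a column type change still does (table and column names below are placeholders):
ALTER TABLE bigtable ADD INDEX idx_field1 (field1);   -- fast index creation, no rebuild
ALTER TABLE bigtable MODIFY field1 VARCHAR(80);       -- still rewrites the whole table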
When we encounter these issues, we typically try to leverage replica databases. As long as you are adding and not removing, you can run your DDL against the replica first and then schedule a brief outage for promoting the replica to the master role. If you happen to be on RDS, this is even one of their suggested uses for replica instances: http://aws.amazon.com/about-aws/whats-new/2012/10/11/amazon-rds-mysql-rr-promotion/.
Some other alternatives include:
Selecting a subset of records out into a new table with the desired structure (use INTO OUTFILE to avoid a table lock). Once complete, you can schedule a maintenance window and REPLACE INTO or UPDATE any records that have changed in the origin table since the initial data copy. Once the update is complete, a RENAME TABLE of both tables wraps the changes up (see the sketch after this list).
Using a tool like Percona's pt-online-schema-change: http://www.percona.com/doc/percona-toolkit/2.1/pt-online-schema-change.html. This tool works with triggers, so if you already have triggers on the tables you want to change, it may not fit your needs.
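A sketch of the first alternative's catch-up step (it assumes the origin table has a primary key and a way to identify changed rows, such as an updated_at column; all names here are placeholders):
-- re-copy rows modified since the initial data copy
REPLACE INTO newtable (field1, field2)
SELECT field1, field2 FROM bigtable WHERE updated_at >= '2013-01-01 00:00:00';
-- then swap the tables atomically
RENAME TABLE bigtable TO bigtable_old, newtable TO bigtable;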

Altering the data type of a column in a HUGE table. Performance issues

I want to run this on my table:
ALTER TABLE table_name MODIFY col_name VARCHAR(255)
But my table is huge; it has more than 65M (65 million) rows. When I execute this command, it takes nearly 50 minutes. Is there any better way to alter the table?
Well, your statement is correct; MODIFY is the right verb when you are keeping the column name (CHANGE col_name new_name VARCHAR(255) is only needed if you also want to rename the column). But you are right that it takes a while to make the change. There really isn't any faster way to change a column's type in MySQL.
Is your concern downtime during the change? If so, here's a possible approach: Copy the table to a new one, then change the column name on the copy, then rename the copy.
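A sketch of that copy-and-swap approach (it assumes you can tolerate missing, or separately reconciling, any writes that land on the original table during the copy):
CREATE TABLE table_name_new LIKE table_name;
ALTER TABLE table_name_new MODIFY col_name VARCHAR(255);   -- fast: the copy is still empty
INSERT INTO table_name_new SELECT * FROM table_name;       -- the slow part
RENAME TABLE table_name TO table_name_old, table_name_new TO table_name;  -- atomic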
You probably have figured out that routinely changing column definitions on tables in a production system is not a good idea.
Another variant is to use the Percona Toolkit's pt-online-schema-change:
https://www.percona.com/doc/percona-toolkit/2.2/pt-online-schema-change.html
You can deal with the schema change without downtime using Oak.
oak-online-alter-table copies the schema of the original table, applies your changes, and then copies the data. CRUD operations can still run during the copy, as oak puts triggers on the original table, so no data is lost during the operation.
The author of oak gives a detailed explanation of this mechanism in another question and also suggests other tools.

Resetting AUTO_INCREMENT on myISAM without rebuilding the table

Please help, I am in major trouble with our production database. I accidentally inserted a key with a very large value into an auto-increment column, and now I can't seem to change this value without a huge rebuild time.
ALTER TABLE tracks_copy AUTO_INCREMENT = 661482981
is super slow.
How can I fix this in production? I can't get this to work either (has no effect):
myisamchk tracks.MYI --set-auto-increment=661482982
Any ideas?
Basically, no matter what I do, I get an overflow:
SHOW CREATE TABLE tracks
CREATE TABLE tracks (
...
) ENGINE=MYISAM AUTO_INCREMENT=2147483648 DEFAULT CHARSET=latin1
After struggling with this for hours, I was finally able to resolve it. The auto_increment info for MyISAM is stored in TableName.MYI; see state->auto_increment in http://forge.mysql.com/wiki/MySQL_Internals_MyISAM. So fixing that file was the right way to go.
However, myisamchk definitely has an overflow bug somewhere in the update_auto_increment function or in what it calls, so it does not work for large values -- or rather, if the current value is already > 2^31, it will not update it (source file here -- http://www.google.com/codesearch/p?hl=en#kYwBl4fvuWY/pub/FreeBSD/distfiles/mysql-3.23.58.tar.gz%7C7yotzCtP7Ko/mysql-3.23.58/myisam/mi_check.c&q=mySQL%20%22AUTO_INCREMENT=%22%20lang:c)
After discovering this, I ended up just using "xxd" to dump the MYI file into a hex file, editing around byte 60 to replace the auto_increment value manually, and then restoring the binary file with "xxd -r". To discover exactly what to edit, I ran ALTER TABLE on much smaller tables and looked at the effects using diffs. No fun, but it worked in the end. There seems to be a checksum in the format, but it appears to be ignored.
Have you dropped the record with the very large key? I don't think you can change the auto_increment to a lower value if that record still exists.
From the docs on myisamchk:
Force AUTO_INCREMENT numbering for new records to start at the given value (or higher, if there are existing records with AUTO_INCREMENT values this large)