MySQL 24x7 - InnoDB ALTER TABLE blocks (TABLE LOCK)

We are trying to minimize (maintenance) downtime of our MySQL-based application.
It seems that InnoDB Hot Backup will give us the ability to do regular backups without stopping the server; master/slave replication will give us failover capability (losing a few seconds of data to replication lag is not great, but not a showstopper either).
So much for backups and unexpected downtime. Now to expected downtime -
As far as I understand from reading the online documentation and books, an ALTER TABLE on an InnoDB table requires a TABLE LOCK, blocking all reads and writes to that table. Effectively this means downtime for the application, and some large tables may take hours to alter.
Are there any known workarounds to this? The perfect workaround would of course be a non-blocking ALTER TABLE, but anything that makes ALTER TABLE faster is also interesting.
Thanks in advance!
PS - Commercial (non-free) tools would be OK too; free solutions are of course also welcome.

Since you have replication set up, it is normally possible to do some trickery with ALTER TABLE on the slave, let the slave catch up after it is done, swap roles, and then run the ALTER on the former master. This doesn't work for all ALTER TABLE statements, but it can handle the majority of them.
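A rough sketch of that rolling procedure (the table and column names here are made up for illustration; adapt them to your schema):

-- On the slave: pause replication, apply the change, resume.
STOP SLAVE;
ALTER TABLE orders ADD COLUMN notes TEXT NULL;
START SLAVE;
-- Wait until the slave has caught up:
SHOW SLAVE STATUS\G   -- Seconds_Behind_Master should return to 0
-- Then promote the slave, point the application at it,
-- and repeat the same ALTER on the former master.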
There is also a third-party online schema change tool for this, but I'm not sure how commonly it is used, how well it works, etc...

The best workaround would be to not alter your tables.
The only time a schema change should be required is if you're adding functionality, or somehow forgot an index.
If you're adding functionality, you'll likely have downtime anyway, to stage your production server.
If you forgot an index, then the database is likely slow anyway, so your users shouldn't mind downtime to fix the performance issue. You should run all your queries through EXPLAIN to make sure you have the proper indexes declared already.
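For example (the query and table here are hypothetical):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- If the "key" column in the output is NULL, MySQL is doing a full
-- table scan and you are probably missing an index on customer_id.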
If you're afraid that you'll be altering tables frequently you might want to re-examine your schema.

Related

Performing ALTER TABLE on a Large InnoDB Table

I've recently been thrust into the position of DB admin for our server, so I'm having to learn as I go. We recently found that one of our tables had maxed out its id column and needs to be migrated to BIGINT.
This is an InnoDB table with roughly 301GB of data. We are running MySQL version 5.5.38. The command I'm running to migrate the table is:
ALTER TABLE tb_name CHANGE id id BIGINT NOT NULL;
I kicked off the migration and we are now 18 hours in, but I'm not seeing the disk space on the server change at all, which makes me think nothing is happening. We have plenty of memory, so no concern there, but the process still shows the following state when I run SHOW PROCESSLIST:
copy to tmp table
Does anyone have any ideas or know what I'm doing incorrectly? Please ask if you need more information.
Yes, it will take a looooong time. The disks are probably spinning as fast as they can. (SSDs employ faster hamsters.)
You can safely kill the ALTER, since all it is doing so far is, as it says, "copying to tmp table"; only after the copy completes would it rename the tmp table to be the real table and drop the old copy, so your original table is untouched until then.
I hope you had innodb_file_per_table = ON when you started the ALTER. Otherwise it will be expanding ibdata1, which won't shrink afterwards.
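You can check that setting at any time with:

SHOW VARIABLES LIKE 'innodb_file_per_table';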
pt-online-schema-change is an alternative. It will still take a loooooong time (with one extra 'o' because it will be slightly slower), but it will do the job without blocking other activity.
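For this particular migration, the invocation would look roughly like this (the database name mydb is an assumption; review the tool's documentation and run with --dry-run first):

pt-online-schema-change --alter "MODIFY id BIGINT NOT NULL" D=mydb,t=tb_name --execute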
This might have been a good time to check all the columns and indexes in the table:
Could some INTs be turned into MEDIUMINT or something smaller?
Are some of the INDEXes unused?
How about normalizing some of the VARCHARs?
Maybe even PARTITIONing (but not without a good reason)? Time-series data in a data-warehousing setting is a typical use case.
Summarize the data, and toss at least the older data?
If you would like further guidance, please provide the output of SHOW CREATE TABLE.
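As a sketch of the kind of savings to look for (the column name here is hypothetical):

SHOW CREATE TABLE tb_name\G
-- If, say, a counter column never exceeds a few million:
ALTER TABLE tb_name MODIFY retry_count MEDIUMINT UNSIGNED NOT NULL;  -- 3 bytes per row instead of 4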

Performing Heavy Crunching On a Table Without Affecting the Table

I'm looking for some general advice on the best way to perform heavy crunching/data-mining on a database table without affecting the performance of regular site queries on that table. Some of the calculations may involve joining several tables and complex sorting and ordering, so "use better indexes" isn't always the solution.
This question isn't really specific; I'm looking for a general way to solve a problem that's come up many times over the years, so I don't have a specific table schema or query to show. I've considered dumping the table first using mysqldump, re-importing it under a different name, and then performing my heavy crunching on that temporary copy. My sysadmin hates the idea, so I'm looking for any other solutions people have come up with to deal with this type of problem.
If your "heavy crunching" is all read only and you are not doing anything that needs to be written back into your production data, use a Master/Slave replication and use the Slave for all your reporting and data analysis needs. The replication link will keep the values up to date on the Slave, and you can hit the Slave with as much load as you want without slowing down the Master which is serving your production system.
If you want to avoid affecting the performance of your production database, the only solution I have used previously is to run your queries on another database server.
I would take a backup of the entire database and then restore it on a separate server.
Obviously, you cannot do this if you want to analyze real-time data. But for most analysis, a snapshot from the previous day is sufficient.
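For example, a nightly snapshot could be shipped to the analysis server like this (the host and database names are made up; note that --single-transaction gives a consistent snapshot only for InnoDB tables):

mysqldump --single-transaction mydb | mysql -h analysis-host mydb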

MySQL speed, table indexes and SELECT/UPDATE/INSERT

We have a MySQL table with more than 7,000,000 (yes, seven million) rows.
We are constantly running lots of SELECT / INSERT / UPDATE queries against it, every few seconds.
Would it be a good idea to create indexes on that table? Could there be bad consequences, like data corruption or loss of MySQL service?
A little info:
MySQL version 5.1.56
Server: CentOS
Table engine: MyISAM
MySQL CPU load: consistently between 200% and 400%
In general, indexes will improve the speed of SELECT operations and will slow down INSERT/UPDATE/DELETE operations, as both the base table and the indexes must be modified when a change occurs.
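For example, if most of your SELECTs filter on one column (names here are hypothetical):

ALTER TABLE mytable ADD INDEX idx_customer (customer_id);
-- SELECTs filtering on customer_id now use the index;
-- every INSERT/UPDATE/DELETE pays a small extra cost to maintain it.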
It is very difficult to say. I would expect that building the index itself might take some time, but after that you should see some improvement. As @Joe and @Patrick said, it might hurt your modification times, but selecting will be faster.
Of course, there are other ways of improving insert and update performance. You could, for example, batch updates if it is not important to have changes visible immediately.
The indexes will help dramatically with SELECTs, especially if they match up well with the commonly filtered fields and you have a good, simple primary key. They will help with both query time and processing cycles.
The drawback comes if you very often update/alter/delete these records, especially the indexed fields. Even in that case, though, it is often worth it.
How much you're going to be reporting (SELECT statements) versus updating hugely affects both your initial design and your later adjustments once your DB is in the wild. Since you already have what you have, testing will give you the answers you need. If you really do run a lot of SELECT queries and a lot of updates, your solution might be to copy data out now and then to a reporting table; then you can index like crazy with no ill effects.
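A minimal sketch of such a copy-out, with hypothetical names (run it from a nightly cron job, for instance):

DROP TABLE IF EXISTS mytable_report;
CREATE TABLE mytable_report LIKE mytable;
INSERT INTO mytable_report SELECT * FROM mytable;
-- Now index the copy as heavily as the reports need:
ALTER TABLE mytable_report ADD INDEX idx_status (status), ADD INDEX idx_created (created_at);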
You have actually asked a large question, and you should study up on this more. The general points above hold for most relational databases, but each database (MySQL in your case) also has particular behaviors, mainly in how it decides when and where to use indexes.
If you are looking for performance, indexes are the way to go. Indexes speed up your queries. With 7 million records, your queries are probably taking many seconds, possibly a minute, depending on your memory size.
Generally speaking, I would create indexes that match the most frequent SELECT statements. Everyone talks about the negative impact of indexes on table size and write speed, but I would neglect those impacts unless you have a table where 95% of the activity is inserts and updates. Even then, if those inserts happen at night and you query during the day, go ahead and create those indexes; your daytime users will appreciate it.
What is the actual time impact of one additional index on an INSERT or UPDATE statement, 0.001 seconds maybe? If the index saves you many seconds on each query, I'd guess the additional time required to update the index is well worth it.
The only time I ever had an issue with creating an index (it actually broke the program logic) was when we added a primary key to a table that had previously been created (by someone else) without one, and the program expected the SELECT statement to return records in the sequence they were created. Adding the primary key changed that: records selected without any WHERE clause came back in a different sequence.
This is obviously a wrong design in the first place; nevertheless, if you have an older program and you encounter tables without a primary key, I suggest looking at the code that reads the table before adding one, just in case.
One last thought about creating indexes: the choice of fields and the sequence in which the fields appear in the index have an impact on the index's performance.
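To illustrate with a hypothetical two-column index (the leftmost-prefix rule):

ALTER TABLE people ADD INDEX idx_name (last_name, first_name);
-- These can use the index:
SELECT * FROM people WHERE last_name = 'Smith';
SELECT * FROM people WHERE last_name = 'Smith' AND first_name = 'Jan';
-- This cannot (first_name is not the leftmost column of the index):
SELECT * FROM people WHERE first_name = 'Jan';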
I had the same kind of problem that you describe.
I made a few changes, and one query went from 11 seconds to a few milliseconds:
1- Upgraded to MariaDB 10.1
2- Changed ALL my DBs to the Aria engine
3- Changed my.cnf to the strict minimum
4- Upgraded to PHP 7.1 (but this one had only a little impact)
5- With CentOS: "yum update" in the terminal or via SSH (keeping everything up to date)
1- MariaDB is an open-source fork of MySQL
2- The Aria engine is the evolution of MyISAM
3- my.cnf usually carries too many changes that affect performance
Here is an example:
[mysqld]
performance-schema=1            # keep the performance schema available
general_log=0                   # general query log off
slow_query_log=0                # slow query log off
max_allowed_packet=268435456    # 256MB
By removing all extra options from my.cnf, you tell MySQL to use its default values.
In MySQL 5 (5.1, 5.5, 5.6...), when I did that, I only noticed a small difference.
But in MariaDB, a small my.cnf like this made a BIG difference.
Throughout ALL of those changes, the server hardware remained the same.
Hope it can help you.

MySQL ALTER TABLE on very large table - is it safe to run it?

I have a MySQL database with a MyISAM table of 4 million rows. I update this table about once a week, adding about 2000 new rows. After updating, I alter the table like this:
ALTER TABLE x ORDER BY PK DESC
I order the table by the primary key field in descending order. This has not given me any problems on my development machine (Windows with 3GB memory). Three times I have run it successfully on the production Linux server (with 512MB RAM), getting the resulting sorted table in about 6 minutes each time; but the last time I tried it I had to stop the query after about 30 minutes and rebuild the database from a backup.
Can a 512MB server cope with that ALTER statement on such a large table? I have read that a temporary table is created to perform the ALTER TABLE command.
Question: Can this alter command be safely run? What should be the expected time for the alteration of the table?
As I have just read, the ALTER TABLE ... ORDER BY ... query is useful to improve performance in certain scenarios. I am surprised that the PK index does not help with this, but from the MySQL docs it seems that InnoDB does use the index. However, InnoDB tends to be slower than MyISAM; that said, with InnoDB you wouldn't need to re-order the table, though you would lose the blazing speed of MyISAM. It still may be worth a shot.
The way you explain the problems, it seems that too much data is being loaded into memory (maybe there is even swapping going on?). You could easily check that by monitoring your memory usage. It's hard to say, as I do not know MySQL all that well.
On the other hand, I think your problem lies in a very different place: you are using a machine with only 512MB of RAM as a database server, with a table containing more than 4 million rows, and you are performing a very memory-heavy operation on the whole table on that machine. It seems that 512MB will not nearly be enough for that.
A much more fundamental issue I see here: you are doing development (and quite likely testing as well) in an environment that is very different from the production environment. The kind of problem you are describing is to be expected: your development machine has six times as much memory as your production machine, and I believe I can safely say that the processor is much faster as well. In that case, I suggest you create a virtual machine mimicking your production site. That way you can easily test your project without disrupting the production site.
What you're asking it to do is rebuild the entire table and all its indexes; this is an expensive operation, particularly if the data doesn't fit in RAM. It will complete, but it will be vastly slower if the data doesn't fit in RAM, particularly if you have lots of indexes.
I question your judgement when choosing to run a machine with such tiny memory in production. Anyway:
Is this ALTER TABLE really necessary? What specific query are you trying to speed up, and have you tried it without the reordering?
Have you considered making your development machine more like production? Using a dev box with MORE memory than production is never a good idea, and using a different OS definitely isn't either.
There is probably also some tuning you can do to help; it largely depends on your schema (indexes in particular). 4M rows is not very many (for a machine with a normal amount of RAM).
Is the primary key AUTO_INCREMENT? If so, then doing ALTER TABLE ... ORDER BY isn't going to improve anything, since everything will be inserted in order anyway (unless you have lots of deletes).
I'd probably create a view instead, ordered by the PK value, so that for one thing you don't need to lock up that huge table while the ALTER is being performed.
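Something like this, assuming the table is x with primary key PK as in the question (note the ORDER BY is applied when the view is queried, so the sort cost moves to query time):

CREATE VIEW x_desc AS
  SELECT * FROM x ORDER BY PK DESC;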
If you're using InnoDB, you shouldn't have to explicitly perform the ORDER BY either post-insert or at query time. According to the MySQL 5.0 manual, InnoDB already defaults to primary key ordering for query results:
http://dev.mysql.com/doc/refman/5.0/en/alter-table.html#id4052480
MyISAM tables, by contrast, return records in insertion order by default, which may work as well if you only ever append to the table rather than using UPDATE queries to modify rows in place.
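If you want a guaranteed order regardless of engine, an explicit ORDER BY on the primary key is cheap with InnoDB, since it is served by the clustered index rather than a filesort (names assumed from the question):

SELECT * FROM x ORDER BY PK DESC LIMIT 100;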

How to apply an index to a MySQL / MyISAM table without locking?

Given a production table with a very critical column (date) missing an index, are there any ways to apply that index without user impact?
The table currently gets around 5-10 inserts every second, so full table locking is out; redirecting those inserts to an alternative table/database, even temporarily, has also been denied (for corporate-politics reasons). Any other ways?
As far as I know this is not possible with MyISAM. With 5-10 INSERTs per second you should consider InnoDB anyway, unless you're not reading that much.
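The conversion itself is a one-liner (table name assumed), though note that it rewrites the whole table and takes its own lock, so it needs a maintenance window too:

ALTER TABLE mytable ENGINE=InnoDB;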
Are you using replication, preferably in a master-master setup? (You should!) If so, you could CREATE INDEX on the standby server, switch roles, do the same again, and then switch back. Be sure to disable replication of the statement temporarily (when using master-master) to avoid replicating the CREATE INDEX to the active node.
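One way to keep the CREATE INDEX out of the replication stream is to turn off binary logging for your session while running it (requires the SUPER privilege; the table name is a placeholder):

SET sql_log_bin = 0;                        -- this session's statements skip the binlog
CREATE INDEX idx_date ON mytable (`date`);
SET sql_log_bin = 1;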
Depending on whether you use that table primarily to archive logs or similar, you might as well look into the ARCHIVE storage engine.