How to alter a MySQL InnoDB partition to use another key?

I have a table with 5 HASH(key_1) partitions. I want to change that so it instead has 5 HASH(key_2) partitions, but without losing data.
How do I do this? I have searched, but it's hard to find confirmation that I won't lose data by deleting partitions.

Deleting, truncating, or dropping partitions will definitely lose data. You can change the partitioning with ALTER TABLE, for example ALTER TABLE t PARTITION BY HASH (key_2) PARTITIONS 5. This won't lose data, but (at least with InnoDB) the table will be locked for writes and rebuilt with the new partitioning.
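As a minimal sketch, assuming a hypothetical table t currently partitioned by HASH(key_1) (table and column names are placeholders):

-- Rebuilds the table and redistributes the existing rows across the new partitions
ALTER TABLE t PARTITION BY HASH (key_2) PARTITIONS 5;
-- Verify the new layout afterwards
SELECT PARTITION_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 't';

Keep in mind that for a partitioned InnoDB table, the partitioning column (key_2 here) must be part of every unique key of the table, including the primary key.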

Related

Database Table very slow after delete

I have a MySQL InnoDB database running on the Google App-Engine.
One of the tables has the current date and a user_id as primary key stored with some additional data.
The table had around 7 million rows and I deleted 6 million of them with a DELETE query. Since then, any query using this table has been much slower than before.
Any ideas what could cause this behavior or how to solve this?
Thanks in advance!
After such a massive delete on InnoDB, you are better off using the OPTIMIZE TABLE statement.
Use OPTIMIZE TABLE in these cases, depending on the type of table:
After doing substantial insert, update, or delete operations on an InnoDB table that has its own .ibd file because it was created with the innodb_file_per_table option enabled. The table and indexes are reorganized, and disk space can be reclaimed for use by the operating system.
After doing substantial insert, update, or delete operations on columns that are part of a FULLTEXT index in an InnoDB table. Set the configuration option innodb_optimize_fulltext_only=1 first. To keep the index maintenance period to a reasonable time, set the innodb_ft_num_word_optimize option to specify how many words to update in the search index, and run a sequence of OPTIMIZE TABLE statements until the search index is fully updated.
Before optimizing, check the table's statistics using ANALYZE TABLE and its indexes using SHOW INDEX. These statements will give you information about the "flaws" that OPTIMIZE can fix.
All of this is easy to do in phpMyAdmin.
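As a rough sketch of that sequence (my_table is a placeholder name):

ANALYZE TABLE my_table;    -- refresh the table's index statistics
SHOW INDEX FROM my_table;  -- inspect the indexes and their cardinality
OPTIMIZE TABLE my_table;   -- rebuild the table and reclaim free space

For InnoDB, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE, which rebuilds the table, updates index statistics, and frees unused space in the clustered index.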

How to reduce the index size of a table in MySQL using the InnoDB engine?

I am facing a performance issue in MySQL due to a large index size on my table. The index size has grown to 6GB and my instance is running with 32GB of memory. The majority of rows in that table are not required after a few hours and can be removed selectively, but removing them is time-consuming and doesn't reduce the index size.
Please suggest a solution to manage this index.
You can optimize your table to rebuild its indexes and get back space that is not released even after deletion:
optimize table table_name;
But as your table is bulky, it will be locked during OPTIMIZE TABLE, and you also face the issue of how to remove old data when you only need the last few hours of it. So you can do as below:
Step 1: During night hours, or when there is less traffic on your DB, first rename your main table and create a new table with the same name. Then insert the last few hours of data from the old table into the new table (see the sketch after these steps).
By doing this you remove the unwanted data, and the new table also starts out compact.
Step 2: To avoid this issue in the future, you can create a stored procedure that executes once per day during night hours and either deletes data up to the previous day (as per your requirement) from this table or moves it to a historical table.
Step 3: As your table now always keeps only a single day's data, you can execute an OPTIMIZE TABLE statement to rebuild it and reclaim space easily.
Note: a DELETE statement will not rebuild indexes and will not free space on the server. For that you need to rebuild the table, which can be done in various ways, such as with an ALTER TABLE statement or an OPTIMIZE TABLE statement.
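A rough sketch of steps 1 and 3, assuming placeholder names (my_table, my_table_old) and a created_at timestamp column; adjust to your actual schema:

-- Step 1: swap in an empty copy and keep only the recent rows (run in a low-traffic window)
RENAME TABLE my_table TO my_table_old;
CREATE TABLE my_table LIKE my_table_old;
INSERT INTO my_table
SELECT * FROM my_table_old
WHERE created_at >= NOW() - INTERVAL 6 HOUR;

-- Step 3: once the table holds only a day's worth of data, rebuilding it is cheap
OPTIMIZE TABLE my_table;
DROP TABLE my_table_old;   -- or keep it as an archive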
If you can remove all the rows older than X hours, then PARTITIONing is the way to go. PARTITION BY RANGE on the hour and use DROP PARTITION to remove an old hour and REORGANIZE PARTITION to create a new hour. You should have X+2 partitions. More details.
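A hedged sketch of that scheme, assuming a hypothetical integer column hour_id (for example, hours since some epoch) as the partitioning key:

ALTER TABLE my_table PARTITION BY RANGE (hour_id) (
PARTITION p100 VALUES LESS THAN (101),
PARTITION p101 VALUES LESS THAN (102),
PARTITION pmax VALUES LESS THAN MAXVALUE
);
-- Purge the oldest hour: dropping a whole partition is almost instant
ALTER TABLE my_table DROP PARTITION p100;
-- Open up the next hour by splitting the catch-all partition
ALTER TABLE my_table REORGANIZE PARTITION pmax INTO (
PARTITION p102 VALUES LESS THAN (103),
PARTITION pmax VALUES LESS THAN MAXVALUE
);

For InnoDB, hour_id would also have to be part of the primary key and of any unique keys.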
If the deletes are more complex, please provide more details; perhaps we can come up with another solution that deals with the question about index size. Please include SHOW CREATE TABLE.
Even if you cannot use partitions for purging, it may be useful to have partitions for OPTIMIZE. Do not use OPTIMIZE PARTITION; it optimizes the entire table. Instead, use REORGANIZE PARTITION if you see you need to shrink the index.
How big is the table?
How big is innodb_buffer_pool_size?
(A 6GB index does not seem that bad, especially since you have 32GB of RAM.)
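A quick, hedged way to answer both questions (my_db and my_table are placeholder names):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SELECT table_name,
ROUND(data_length / 1024 / 1024)  AS data_mb,
ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'my_db' AND table_name = 'my_table';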

Inserting New Column in MySQL taking too long

We have a huge database and inserting a new column is taking too long. Any way to speed things up?
Unfortunately, there's probably not much you can do. When adding a new column, MySQL makes a copy of the table and copies the data into it. You may find it faster to do:
CREATE TABLE new_table LIKE old_table;
ALTER TABLE new_table ADD COLUMN (column definition);
INSERT INTO new_table(old columns) SELECT * FROM old_table;
RENAME table old_table TO tmp, new_table TO old_table;
DROP TABLE tmp;
This hasn't been my experience, but I've heard others have had success. You could also try disabling indices on new_table before the insert and re-enabling later. Note that in this case, you need to be careful not to lose any data which may be inserted into old_table during the transition.
Alternatively, if your concern is impacting users during the change, check out pt-online-schema-change which makes clever use of triggers to execute ALTER TABLE statements while keeping the table being modified available. (Note that this won't speed up the process however.)
There are four main things that you can do to make this faster:
If using innodb_file_per_table the original table may be highly fragmented in the filesystem, so you can try defragmenting it first.
Make the buffer pool as big as sensible, so more of the data, particularly the secondary indexes, fits in it.
Make innodb_io_capacity high enough, perhaps higher than usual, so that insert buffer merging and flushing of modified pages will happen more quickly. Requires MySQL 5.1 with InnoDB plugin or 5.5 and later.
MySQL 5.1 with the InnoDB plugin and MySQL 5.5 and later support fast alter table. One of the things this makes a lot faster is adding or rebuilding indexes that are both non-unique and not in a foreign key. So you can do this:
A. ALTER TABLE ADD your column, DROP your non-unique indexes that aren't in FKs.
B. ALTER TABLE ADD back your non-unique, non-FK indexes.
This should provide these benefits:
a. Less use of the buffer pool during step A because the buffer pool will only need to hold some of the indexes, the ones that are unique or in FKs. Indexes are randomly updated during this step so performance becomes much worse if they don't fully fit in the buffer pool. So more chance of your rebuild staying fast.
b. The fast alter table rebuilds the index by sorting the entries then building the index. This is faster and also produces an index with a higher page fill factor, so it'll be smaller and faster to start with.
The main disadvantage is that this is in two steps and after the first one you won't have some indexes that may be required for good performance. If that is a problem you can try the copy to a new table approach, using just the unique and FK indexes at first for the new table, then adding the non-unique ones later.
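A minimal sketch of steps A and B, with placeholder table, column, and index names:

-- Step A: one rebuild that adds the column and drops the non-unique, non-FK indexes
ALTER TABLE my_table
ADD COLUMN new_col INT NULL,
DROP INDEX idx_col_a,
DROP INDEX idx_col_b;
-- Step B: add the secondary indexes back; fast index creation builds them by sorting
ALTER TABLE my_table
ADD INDEX idx_col_a (col_a),
ADD INDEX idx_col_b (col_b);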
It's only in MySQL 5.6, but the feature request in http://bugs.mysql.com/bug.php?id=59214 increases the speed with which insert buffer changes are flushed to disk and limits how much space the insert buffer can take in the buffer pool. This can be a performance limit for big jobs; the insert buffer is used to cache changes to secondary index pages.
We know that this is still frustratingly slow sometimes and that a true online alter table is very highly desirable.
This is my personal opinion. For an official Oracle view, contact an Oracle public relations person.
James Day, MySQL Senior Principal Support Engineer, Oracle
Usually, an insert this slow means that there are many indexes, so I would suggest reconsidering your indexing.
Michael's solution may speed things up a bit, but perhaps you should have a look at the database and try to break the big table into smaller ones. Take a look at this: link. Normalizing your database tables may save you loads of time in the future.

MySQL Repair with Keycache performance

I have a MyISAM table with 125M records. I added 25M more records to it via:
ALTER TABLE x DISABLE KEYS;
INSERT INTO x SELECT * FROM y;
ALTER TABLE x ENABLE KEYS;
Currently ALTER TABLE x ENABLE KEYS is in the "Repair with Keycache" state. How fast is this repair operation? Is it at least as fast as the case if I didn't disable the index and let rows be added with indexes updated on the fly or is it slower?
If I kill the query now, DROP all the indexes, and then re-create them to force a repair by sort (my buffer sizes are large enough), would I risk losing any data?
If you kill the query while the ALTER TABLE operation is in progress, you risk losing any data that was added to the table after the ALTER TABLE operation began. Additionally, dropping and re-creating the indexes on the table would also cause you to lose any data that was added to the table after the indexes were dropped.
In general, it is a good idea to avoid making any changes to the table while the ALTER TABLE operation is in progress. If you are concerned about the speed of the operation, it may be better to let it complete rather than trying to interrupt it.
In terms of the speed of the operation, whether or not you disable the keys before inserting new rows into the table will not affect the speed of the ALTER TABLE operation. The ALTER TABLE operation will run at roughly the same speed regardless of whether the keys were disabled or not. The speed of the operation will depend on several factors, including the size of the table and the speed of the underlying hardware.
If you want to force the ALTER TABLE operation to use the "Repair by sort" method, you can do so by setting the myisam_repair_threads system variable to a value greater than 1 before running the ALTER TABLE operation. This will cause the operation to use the "Repair by sort" method, which may be faster in some cases. However, keep in mind that this will also use more system resources, so it may not be appropriate for all situations.
It is always a good idea to back up your data before making any changes to a table, in case something goes wrong. This way, you can restore the data from the backup if needed.

Will partitions improve MySQL INSERT speed?

I'm doing a lot of INSERTs via LOAD DATA INFILE on MySQL 5.0. After many inserts, say a few hundred million rows (InnoDB, PK + a non-unique index, 64-bit Linux, 4GB RAM, RAID 1), the inserts slow down considerably and appear IO bound. Are partitions in MySQL 5.1 likely to improve performance if the data flows into separate partitions?
The previous answer is erroneous in its assumption that this will decrease performance. Quite the contrary.
Here's a lengthy, but informative article and the why and how to do partitioning in MySQL:
http://dev.mysql.com/tech-resources/articles/partitioning.html
Partitioning is typically used, as was mentioned, to group like data together. That way, when you decide to archive off or outright destroy a partition, your tables do not become fragmented. This, however, does not hurt performance; it can actually increase it. It is not just deletions that fragment; updates and inserts can also do that. By partitioning the data, you are giving the RDBMS the criteria (indexes) by which the data should be manipulated and queried.
Edit: SiLent SoNG is correct. DISABLE / ENABLE KEYS only works for MyISAM, not InnoDB. I never knew that, but I went and read the docs. http://dev.mysql.com/doc/refman/5.1/en/alter-table.html#id1101502.
Updating the indexes may be what's slowing it down. You can disable indexes while you're doing your load and turn them back on afterwards, so they are rebuilt once for the whole table.
ALTER TABLE foo DISABLE KEYS;
LOAD DATA INFILE ... ;
ALTER TABLE foo ENABLE KEYS;
This will cause the indexes to all be updated in one go instead of per-row. This also leads to more balanced BTREE indexes.
No improvement on MySQL 5.6
"MySQL can apply partition pruning to SELECT, DELETE, and UPDATE statements. INSERT statements currently cannot be pruned."
http://dev.mysql.com/doc/refman/5.6/en/partitioning-pruning.html
If the columns that INSERT checks (primary keys, for instance) are indexed, then this will only decrease the speed: MySQL will additionally have to decide on the partition for each row.
Queries are improved by adding indexes. Partitioning is useful when you have tons of very old data (e.g. year < 2000) which is rarely used; then it is nice to create a partition for that data.
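For example, a hedged sketch of that kind of archival partitioning, with invented table and column names, and the year column included in the primary key (required for a partitioned InnoDB table):

-- Hypothetical example table
CREATE TABLE events (
id BIGINT NOT NULL,
yr SMALLINT NOT NULL,
payload VARCHAR(255),
PRIMARY KEY (id, yr)
) ENGINE=InnoDB
PARTITION BY RANGE (yr) (
PARTITION p_old VALUES LESS THAN (2000),
PARTITION p_recent VALUES LESS THAN MAXVALUE
);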
Cheers!