In MySQL, data is still accessible after moving the partition directory

My partition scheme looks something like:
ALTER TABLE my_table
PARTITION BY RANGE (integer_field) (
PARTITION p0 VALUES LESS THAN (100) DATA DIRECTORY = '/my_location/partitions/p0',
PARTITION p1 VALUES LESS THAN (200) DATA DIRECTORY = '/my_location/partitions/p1',
PARTITION p_other VALUES LESS THAN (MAXVALUE) DATA DIRECTORY = '/my_location/partitions/p_other'
);
As expected, data is being stored properly into the partitions, and into the proper directory.
Problem:
Now, when I remove/move the directory from that location, e.g. mv /my_location/partitions/p0 /some_other_location/, the directory is moved successfully, but the data is still queryable from the MySQL shell, even after restarting the shell.
My Solution:
To get it working as expected, after moving the directory including the .ibd file, I had to drop the partition explicitly:
ALTER TABLE my_table DROP PARTITION p0;
This removed the partition from the scheme as required and also cleared the data; I verified this by querying the same data again.
Assumption / Understanding:
I think MySQL is caching the data somewhere (I am not sure where or why exactly), which keeps it queryable even after the partition directory is moved away. The cache is definitely not at the connection level, as I closed and reopened the shell.
Question:
I expected the data to disappear as soon as the directory p0 was moved away. Is it really necessary to run the drop partition statement each time the directory is moved?
Constraints:
The p0 directory is only moved away once the p0 partition is no longer used, so no more data will need to be written into the existing p0 partition.
MySQL: 8.0.19

Windows or not?
Did you restart mysqld?
mv on Linux (and close relatives) is really a "rename". And, assuming the target is on the same filesystem, that rename can even involve a different directory.
When a program (eg, mysqld) has a file (eg, the table partition) "open", it has control over it -- even if you run off to a shell and rm the file!
I suspect that when you restart mysqld (for any reason, including reboot), the "data directory" will become messed up.
Aside from the filesystem's partitioning, you must tell MySQL when you "archive" a partition. Read up on "transportable tablespaces" for the version you are using. Here's a writeup for 5.6; 5.7 has some improvements.
I don't see the advantage of using a filesystem partition. With "transportable tablespaces", you can disconnect a MySQL partition from a PARTITIONed table. This turns the partition into a table. Then that table can be deleted, renamed, copied, etc, without impacting the partitioned table. Search for "transportable" in http://mysql.rjweb.org/doc.php/partitionmaint ; there are some links.
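For example, a minimal sketch of that "disconnect" step for the table in the question (the archive table name is my invention; check the EXCHANGE PARTITION restrictions for your version):
-- Empty table with the same structure, minus the partitioning
CREATE TABLE my_table_p0_archive LIKE my_table;
ALTER TABLE my_table_p0_archive REMOVE PARTITIONING;
-- Swap the partition's rows into the standalone table; p0 becomes empty
ALTER TABLE my_table EXCHANGE PARTITION p0 WITH TABLE my_table_p0_archive;
After the exchange, my_table_p0_archive is an ordinary table that can be exported, dumped, renamed or dropped without touching the partitioned table.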

As pointed out by @Rick James and @Solarflare, moving the .ibd file around while it is still open in MySQL does behave in a weird way, and the tablespaces get messed up. Following their guidance and the MySQL docs, here is the final approach that worked for me (possibly the right way to do it):
Lock the table
FLUSH TABLES my_table FOR EXPORT;
This prevents any update/write operation on the desired table and makes the .ibd file safe to copy. Additionally, it creates a .cfg file. These steps are explained very well in the MySQL docs.
Once the table is "Locked", copy the .ibd file to the desired location for archival. PS: Do not move / delete the source file yet.
cp -r /my_location/partitions/p0 /some_other_location/
Unlock tables to be able to alter the partition
UNLOCK TABLES;
Drop the required partition safely. This also keeps InnoDB's tablespace bookkeeping in sync.
ALTER TABLE my_table DROP PARTITION p0;
Note that this statement removes the partition from the scheme along with the data stored in that partition.
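If the archived partition ever has to come back, a possible restore path (a sketch I have not tested, based on the transportable-tablespace section of the MySQL docs, using the partition boundaries from the question) would be:
-- Re-create the p0 boundary that was dropped (DATA DIRECTORY can be given again here if needed)
ALTER TABLE my_table REORGANIZE PARTITION p1 INTO (
    PARTITION p0 VALUES LESS THAN (100),
    PARTITION p1 VALUES LESS THAN (200)
);
-- Throw away the new, empty tablespace of p0
ALTER TABLE my_table DISCARD PARTITION p0 TABLESPACE;
-- Copy the archived .ibd (and .cfg) back into place, then re-attach it
ALTER TABLE my_table IMPORT PARTITION p0 TABLESPACE;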

Related

MySQL/MariaDB 'recycle bin'

I'm using MariaDB and I'm wondering if there is a way to install a 'recycle bin' on my server where, if someone deletes a table or anything, it gets shifted to the recycle bin and restoring it is easy.
Not talking about mounting things to restore it and all that stuff, but literally a 'safe place' where it gets stored (I have more than enough space) until I decide to delete it or just keep it there for 24 hours.
Any thoughts?
No such feature exists. http://bugs.mysql.com takes "feature requests".
Such a feature would necessarily involve MySQL; it cannot be done entirely in the OS's filesystem. This is because a running mysql caches information in RAM that the FS does not know about. And because the information about a table/db/proc/trigger/etc is not located entirely in a single file. Instead extra info exists in other, more general, files.
With MyISAM, your goal was partially possible in the fs. A MyISAM table was composed of 3 files: .frm, .MYD, .MYI. Still, MySQL would need to flush something to forget that it knows about the table before the fs could move the 3 files somewhere else. MyISAM is going away; so don't even think about using that 'Engine'.
In InnoDB, a table is composed of a .ibd file (if using file_per_table) plus a .frm file, plus some info in the communal ibdata1 file. If the table is PARTITIONed, the layout is more complex.
In version 8.0, most of the previous paragraph will become incorrect -- a major change is occurring.
"Transactions" are a way of undoing writes to a table...
BEGIN;
INSERT/UPDATE/DELETE/etc...
if ( change-mind )
then ROLLBACK;
else COMMIT;
Effectively, the undo log acts as a recycle bin -- but only at the record level, and only until you execute COMMIT.
MySQL 8.0 will add the ability to have DDL statements (eg, DROP TABLE) in a transaction. But, again, it is only until COMMIT.
Think of COMMIT as flushing the recycle bin.
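As a concrete illustration of that record-level "recycle bin" (assuming an InnoDB table my_table with an id column; names made up):
BEGIN;
DELETE FROM my_table WHERE id = 42;    -- row is gone for this session
SELECT * FROM my_table WHERE id = 42;  -- returns nothing
ROLLBACK;                              -- "restore from the recycle bin"
SELECT * FROM my_table WHERE id = 42;  -- the row is back
Had the transaction ended with COMMIT instead, the row would not be recoverable this way.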

MySQL InnoDB specific partition remove and restore

Is it possible to drop a specific partition in InnoDB and later restore it again?
The reason behind this is to archive an old partition and restore it again in case it needs to be accessed.
I have been searching around and so far do not see anything promising.
If you are using MySQL 5.6:
create table dest like source;
alter table dest remove partitioning;
alter table source exchange partition XXX with table dest;
You can dump & load the data later,
or you can use innobackupex to back up the dest table and restore it later using transportable tablespaces.
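A sketch of that last option with plain transportable tablespaces (paths omitted; see the MySQL docs on copying file-per-table tablespaces -- I have not run this exact sequence here):
-- On the source server: quiesce dest and produce dest.cfg next to dest.ibd
FLUSH TABLES dest FOR EXPORT;
-- copy dest.ibd and dest.cfg out of the datadir while the lock is held
UNLOCK TABLES;
-- Later, to restore: recreate dest with the same definition, then
ALTER TABLE dest DISCARD TABLESPACE;
-- copy dest.ibd (and dest.cfg) back into the database directory
ALTER TABLE dest IMPORT TABLESPACE;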

Restoring *partitioned* mysql InnoDB table from ibd files

How can I restore a partitioned mysql InnoDB table from just the .ibd files of the form TableName#P#pname.ibd? Chris Calender's article here
http://www.chriscalender.com/?p=28
works for non-partitioned tables with a single .ibd file, but the "discard" and "import" steps result in a "storage engine doesn't have this option" error for partitioned tables.
Wes Smith in one of the comments at the link above suggests a manual procedure to import one partition at a time, but that did not work for me. If I try to follow his approach by creating a non-partitioned table, moving the first .ibd file renamed as TableName.ibd, and doing an import, the import succeeds, but there are zero rows in the table. The subsequent suggested step of "add back in the partition", which I tried as "alter table TableName partition by range ... (partition pname1 values less than (...) ENGINE=InnoDB)", shockingly replaces the TableName.ibd file (corresponding to the first partition) with a fresh TableName#P#pname1.ibd of a trivial size. I lost one partition's worth of data trying this. I have about 150 partitions to recover.
Any advice on how to recover the data from the .ibd files? Thanks.
So, it turns out that the approach suggested in the comment at the link in the original post does work after all. The deletion of the partition I mentioned in the "alter table..." step above was legitimate as it had been intentionally deleted in the original table before the loss of the ibdata and log files (but mysql leaves the .ibd files undeleted anyway). The recovery process is painfully slow, but I have managed to script it up and hope to let it run to completion over the next several days.
Here are some tips if you find yourself in a similar situation with many partitions to recover. Suppose you had a table TableName with 100 partitions. Typically, the 100 partitions would have consecutive innodb IDs if they were created via a "create table" statement, but in general this may not be the case as partitions may have been added after creation. So, here are the steps:
1) Find out the innodb ID corresponding to each partition using Calender's method in the blog above, as follows. Note that this step does not need a restart of the mysql server. Just create a table like TableName without any partitions and discard its tablespace. Then move the first partition's .ibd file (named something like TableName#P#pname1.ibd) to the database directory as TableName.ibd and try to import it. Look into the mysql error.log (typically /var/log/mysql/error.log) to see what that partition's innodb ID is. Repeat this step for each partition (or as needed) until you have the 2-tuples (innodb_ID_i, partition_boundary_i) for all partitions in a file.
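In SQL terms, step 1 for a single partition looks roughly like this (the column list and file copy are placeholders, and the exact error-log wording varies by version):
-- Throw-away, non-partitioned copy of the table definition
CREATE TABLE TableName (/* same columns as the original, no PARTITION BY */) ENGINE=InnoDB;
ALTER TABLE TableName DISCARD TABLESPACE;
-- copy TableName#P#pname1.ibd into the database directory as TableName.ibd
ALTER TABLE TableName IMPORT TABLESPACE;
-- the import complains about a tablespace id mismatch; the id it reports for
-- the file in error.log is that partition's innodb ID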
2) Start with an empty innodb state (stop the server, delete ibdata and ib_logfile*, restart server). For each innodb_ID entry in the file in step 1 above, create a table TableName_i like TableName. E.g., if the 100 partitions correspond to the IDs 321, 322,..., 370, 415, 416,...464 (two blocks of 50 contiguous IDs each), then write a script to create 320 dummy tables, 50 tables like TableName, 44 dummy tables, and 50 tables like TableName.
3)
For each TableName_i table created above,
--do
(i) rename table TableName_i to TableName
(ii) alter table TableName discard tablespace // important to do this step before the next one
(iii) mv TableName#P#pname_i.ibd TableName.ibd // with the appropriate directory prefixes
(iv) alter table TableName import tablespace
(v) alter table TableName partition by range (partition_field) (partition pname_i values less than (partition_boundary_i)) // This is the only time-consuming step
(vi) rename table TableName to TableName_i // or some other name or just dump it to a file
--repeat
Note that all of the above steps are scriptable and do not require restarting the server at any point except at the beginning of step 2 to start with an empty innodb state. Be careful to introduce checks for the success of each sub-step in step 3 before moving to the next, otherwise successive steps may fail and/or .ibd files may get overwritten. If feasible, use a copy in step 3(iii) instead of mv.
A final note: There might be a slightly easier alternative using percona's recovery toolkit using hex editing, but this did not work for my case of partitioned tables. I ran into the same, seemingly unresolved issue, with partitioned tables as noted in one of the comments at http://www.mysqlperformanceblog.com/2011/05/13/connecting-orphaned-ibd-files/ . Your mileage may vary though. If there were a way to avoid recreating partitions (like in step 3(v) above), that would be real nice and quick, but I am not sure if there is one.

Incorrect key file for table '/tmp/#sql_3c51_0.MYI'; try to repair it [duplicate]

This question already has answers here:
MySQL incorrect key file for tmp table when making multiple joins
I wrote a query and it runs correctly on my local server, which has less data,
but when I run it on the production server, which has more data (around 6GB), it gets an error:
Incorrect key file for table '/tmp/#sql_3c51_0.MYI'; try to repair it
Here is my query
SELECT
`j25_virtuemart_products`.`virtuemart_product_id`,
`product_name`,
`product_unit`,
`product_s_desc`,
`file_url_thumb`,
`virtuemart_custom_id`,
`custom_value`
FROM
`j25_virtuemart_product_customfields`,
`j25_virtuemart_products`,
`j25_virtuemart_products_en_gb`,
`j25_virtuemart_product_medias`,
`j25_virtuemart_medias`
WHERE
(
`j25_virtuemart_products`.`virtuemart_product_id`=`j25_virtuemart_products_en_gb`.`virtuemart_product_id`
AND
`j25_virtuemart_products`.`virtuemart_product_id`=`j25_virtuemart_product_customfields`.`virtuemart_product_id`)
AND
`j25_virtuemart_products`.`virtuemart_product_id`=`j25_virtuemart_product_medias`.`virtuemart_product_id`
AND
`j25_virtuemart_product_medias`.`virtuemart_media_id`=`j25_virtuemart_medias`.`virtuemart_media_id`
GROUP BY `j25_virtuemart_products`.`virtuemart_product_id`
LIMIT 0, 1000;
Does anyone know how to recover from that error, e.g. by optimizing this query or in any other way?
thank you
The problem is caused by the lack of disk space in the /tmp folder.
The /tmp volume is used by queries that need to create temporary tables. These temporary tables are in MyISAM format even if the query uses only InnoDB tables.
Here are some solutions:
optimize the query so it will not create temporary tables (rewrite the query, split it into multiple queries, or add proper indexes; analyze the execution plan with pt-query-digest and EXPLAIN <query>). See this Percona article about temporary tables.
optimize MySQL so it will not create temporary tables (sort_buffer_size, join_buffer_size). See: https://dba.stackexchange.com/questions/53201/mysql-creates-temporary-tables-on-disk-how-do-i-stop-it
make tables smaller. If possible, delete unneeded rows
use SELECT table1.col1, table2.col1 ... instead of SELECT * to use only the columns that you need in the query, to generate smaller temp tables
use data types that take less space
add more disk space on the volume where /tmp folder resides
change the temp folder used by MySQL by setting the TMPDIR environment variable prior to mysqld start-up. Point TMPDIR to a folder on a disk volume that has more free space. You can also use the tmpdir option in /etc/my.cnf or --tmpdir on the command line of the mysqld service (a sketch follows this list). See: B.5.3.5 Where MySQL Stores Temporary Files
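For example, to see where implicit temporary tables currently go and how often they spill to disk (the /data/mysqltmp path below is just an example):
SELECT @@global.tmpdir;                        -- current location of implicit temp tables
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';  -- in-memory vs on-disk temp table counters
-- in my.cnf, under [mysqld], point tmpdir at a roomier volume, then restart mysqld:
--   tmpdir = /data/mysqltmp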
Do these steps:
Stop the MySQL service
Rename the .MYI file to x.old
Start MySQL
REPAIR all the tables in the query; MySQL will rebuild the key file (see the SQL equivalent below)
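Equivalently, once the server is back up, the rebuild can be done from the SQL prompt (table name is a placeholder):
CHECK TABLE my_table;
REPAIR TABLE my_table;   -- MyISAM tables only; rebuilds the .MYI key file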
Many answers are above, the question owner already got the solution suggested by @Hawili, and a long time has passed since this problem was raised. But as this is a common issue, I wanted to share my experience so that anyone who hits this issue again, possibly for different reasons, can find a solution here.
Case 1:
The most common reason is that a query is fetching data greater than the size of your /tmp partition. Whenever you get this issue during a query, look at your /tmp folder size. Temporary tables are created and removed automatically, and if the available space here drops to 0 during the query, it means you either need to optimize your query or need to increase /tmp's partition size.
Note: Sometimes it isn't an individual query: if a combination of heavy queries hits this at the same time on the same server, you can get this issue even though the individual queries would normally execute without any error.
Case 2:
In case of a corrupt MyISAM table that you need to repair, the directory path in the error message will be different from /tmp.
Case 3: (rare case)
Sometimes, due to an incorrect table join, you can get this error. It is actually a syntax error, but MySQL can throw this error instead. You can check the details at the link below:
Incorrect key file for table '/tmp/#sql_18b4_0.MYI'; try to repair it
Check the location of your tmp dir by running df -h. Make sure there's enough space to grow the temp file; it could be several gigs.
Edit: If you have enough free space, I'd check to make sure every column you're joining on or including in the WHERE clause is indexed.
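For the query in this question that would mean something like the following, assuming those columns are not already indexed (index names are made up):
-- run EXPLAIN on the query above and look for full scans and "Using temporary; Using filesort"
ALTER TABLE j25_virtuemart_products_en_gb
    ADD INDEX idx_product (virtuemart_product_id);
ALTER TABLE j25_virtuemart_product_customfields
    ADD INDEX idx_product (virtuemart_product_id);
ALTER TABLE j25_virtuemart_product_medias
    ADD INDEX idx_product (virtuemart_product_id),
    ADD INDEX idx_media (virtuemart_media_id);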
Check that your database server has enough disk space. If the disks are full, this error is displayed. Which folders you should look at depends on your setup.
I used the following command and the error was gone:
mysqlcheck --all-databases -r #repair
I got this solution from the cpanel forum
Same problem for me.
Run df -h to see if your /tmp partition has enough space.
In my case /tmp was an overflow filesystem:
overflow 1,0M 24K 1000K 3% /tmp
What happened:
I had some problem here and my / partition was full, so the Debian distro created a new /tmp partition in RAM to use temporarily. This 1MB /tmp partition is not big enough for system use.
Solution:
Run the following command to remove this temporarily created /tmp partition:
sudo umount -l /tmp
Restart the MySQL service using /etc/init.d/mysqld restart in the terminal.

In MySQL, MyISAM, Is it possible to alter table with partition into multiple hard drive?

In MySQL, MyISAM, Is it possible to alter table with partition into multiple hard drive ?
My hard drive has nearly used up its disk space.
Is there anyone facing this problem, and how do you solve it?
Yes, it's possible to partition a table over multiple drives. Have a look at the official manual, which covers this subject in depth.
http://dev.mysql.com/doc/refman/5.5/en/partitioning.html
Here's an example to partition an existing table over multiple drives:
ALTER TABLE mytable
PARTITION BY RANGE (mycolumn)(
PARTITION p01 VALUES Less Than (10000)
DATA DIRECTORY = "/mnt/disk1"
INDEX DIRECTORY = "/mnt/disk1",
PARTITION p02 VALUES Less Than (20000)
DATA DIRECTORY = "/mnt/disk2"
INDEX DIRECTORY = "/mnt/disk2",
PARTITION p03 VALUES Less Than MAXVALUE
DATA DIRECTORY = "/mnt/disk3"
INDEX DIRECTORY = "/mnt/disk3"
);
Mind that this needs NO_DIR_IN_CREATE to be off. It doesn't seem to work on Windows, and it doesn't seem to work with InnoDB.
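You can check for that mode on the running server before attempting the ALTER (clearing sql_mode entirely, as below, is the bluntest option; normally you would just re-set the list without NO_DIR_IN_CREATE):
SELECT @@global.sql_mode;   -- NO_DIR_IN_CREATE must not appear in this list
SET GLOBAL sql_mode = '';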
If you run out of disk space on your last partition, you can split it with the following statement:
ALTER TABLE mytable REORGANIZE PARTITION p03 INTO
(
PARTITION p03 VALUES Less Than (30000)
DATA DIRECTORY = "/mnt/disk3"
INDEX DIRECTORY = "/mnt/disk3",
PARTITION p04 VALUES Less Than MAXVALUE
DATA DIRECTORY = "/mnt/disk4"
INDEX DIRECTORY = "/mnt/disk4"
);
You can move the entire mysql data directory. Change the datadir option in /etc/mysql/my.cnf.
If you want to just move one database, you can stop the server, move the directory (/var/lib/mysql/DATABASE_NAME) somewhere else, then add a symlink to the new location (ln -s NEW_LOCATION /var/lib/mysql/DATABASE_NAME).
Make certain to make backups before messing with this!
(mysqldump --all-databases as a user that has access to everything, root probably.)
The trick is to create partitions like you'd usually do, move selected partitions to other hard drives, and symlink them back to /var/lib/mysql.