I have a TokuDB table that for some reason has a missing ***_status.tokudb file.
I am not yet sure whether the file is missing due to a TokuDB crash or not.
My questions are:
Is there a way to recover or recreate the status file from the main and key files (which I can see are present in tokudb_file_map)?
How can I debug what caused the TokuDB status file to get deleted?
Is this a frequent problem or a known bug?
https://github.com/percona/tokudb-engine/wiki/Broken-tables-caused-by-non-transactional-table-operations#unexplained-inconsistency-problems-with-tokudb
So, I was able to recover my data from the main files.
I still don't know what deleted the status files though.
The toku-ft repository has an internal debugging tool called tokuftdump.
After it parses the tree, it dumps the byte streams of the unpacked leaf entries. Some quick hex editing on the converted hex streams reveals the record structure, and you can then modify the utility to dump the exact values after parsing, following that structure.
Since TokuDB keeps message buffers on internal nodes, you may also need some additional message processing. In my case this was simple, since I only had inserts...
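For illustration, here's the kind of throwaway Python helper I mean for the hex inspection step; the leaf-entry hex string below is a made-up placeholder, and the real record layout has to be worked out from your own dump:

def hexdump(data, width=16):
    # Print offset, hex bytes, and an ASCII column so the record
    # structure can be eyeballed and compared across entries.
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join("%02x" % b for b in chunk)
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print("%08x  %-*s  %s" % (offset, width * 3, hex_part, ascii_part))

# Placeholder hexstream, not real TokuDB data: a 4-byte little-endian
# length followed by the value "test".
leaf_entry_hex = "0400000074657374"
hexdump(bytes.fromhex(leaf_entry_hex))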
Update: More details can be found here:
http://kshitij.learnercafe.com/TokuDB-Recovery-From-Files
Related
Serialising JSON to a file is a convenient way to store arbitrary data structures persistently. At work, I see this happening a lot, and I think it's understandable to do this instead of using something like SQLite, because it's just so damn easy.
The problem is that when you modify the file programmatically, you might corrupt it, and then you might lose your data or the software might be unable to proceed. E.g. the file could be only partially written due to an abrupt power failure or a crash. Also, if multiple processes modify the file, it needs some locking, but that is rarely the case.
A few years back, I came up with what I believe is a failsafe approach to modifying JSON files on Linux, but I am not a database expert, and people say that "you should never write your own database". Thus, I'd appreciate some feedback from database experts on the matter. Is this really a failsafe approach?
For the single-consumer, single-producer case, it goes like this:
1. Read the JSON file and parse it.
2. Change a node in the object tree.
3. Serialise the new object tree to a file on the same file system. E.g. if the path to the old file is /etc/example/config.json, then the new file should be at /etc/example/config.json.tmp. It's important to keep the temporary file on the same filesystem, and not on e.g. /tmp, because this is what makes the rename() system call atomic with regard to filesystem operations. This means that after a rename(), the file is guaranteed to be complete. If the system experiences a power failure at the time of the rename(), the change may be lost, but the old file will still be complete and not corrupted.
4. Run fdatasync() or fsync() on the new file. This can take a while; don't run it in the main loop. When this call returns, the file is guaranteed to have been written to persistent storage.
5. (Optional) Read the new file and verify that it's valid JSON and that it fits the schema.
6. Rename the new file to the name of the old file using the rename() system call.
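To make the steps concrete, here is a minimal Python sketch of the procedure (the path and the example change are hypothetical, error handling is elided, and the optional verification step is skipped):

import json
import os

def update_json_atomically(path, update):
    # Steps 1-2: read the file, parse it, and change the object tree.
    with open(path, "r") as f:
        doc = json.load(f)
    update(doc)

    # Step 3: serialise to a temporary file on the same filesystem,
    # so that the final rename() is atomic.
    tmp_path = path + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(doc, f)
        f.flush()
        # Step 4: make sure the bytes have reached persistent storage
        # before the rename makes the new file visible.
        os.fsync(f.fileno())

    # Step 6: atomically replace the old file with the new one.
    os.rename(tmp_path, path)

    # Not listed above, but commonly recommended: fsync the containing
    # directory so the rename itself survives a power failure.
    dir_fd = os.open(os.path.dirname(path) or ".", os.O_RDONLY | os.O_DIRECTORY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

# Hypothetical usage: bump a counter in /etc/example/config.json.
# update_json_atomically("/etc/example/config.json",
#                        lambda doc: doc.update({"retries": 3}))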
We almost never share files between processes, but in the multiple-producers case one might use something like flock(). Read locks are not necessary because of the rename() logic described above: the file is guaranteed to always be complete.
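Building on the sketch above, the multiple-producers locking could look like this (the lock file path and my_update are hypothetical):

import fcntl

# Serialise writers with an exclusive advisory lock on a separate lock
# file. Readers need no lock: thanks to rename(), they always see
# either the complete old file or the complete new one.
with open("/etc/example/config.json.lock", "w") as lock_file:
    fcntl.flock(lock_file, fcntl.LOCK_EX)
    try:
        # my_update is whatever change function you want to apply.
        update_json_atomically("/etc/example/config.json", my_update)
    finally:
        fcntl.flock(lock_file, fcntl.LOCK_UN)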
I accidentally truncated a table on my online server, and I don't have a backup of it. Can anyone help me with what I should do?
Most viable, least work:
From a backup
Check again if you have one
Ask your hoster if they do backups; their default configuration for some setups might include a backup that you are unaware of, e.g. a database backup for WordPress or a file backup if you have a VM
Viable in some situations, little work if applicable:
From binary logs. Check if they are enabled (maybe as part of your hoster's default configuration; also, maybe only the hoster can access them, so you may need to ask them). They contain the most recent changes to your database, and, if you are lucky, "recent" might be long enough to include everything
Less viable, more work:
Try to recover from related data, e.g. history tables, other related tables, or log files (e.g. the MySQL general query log or log files that your application created); you can try to analyze them to figure out what should be in your table
Least viable, most work, most expensive:
In theory, since the data is still stored on the hard drive until it is overwritten by new data, you can try to recover it, similar to tools that find lost blocks or deleted files on your hard drive
You need to stop any activity on your hard drive to increase the probability of success. How to do that depends on your configuration and setup. E.g., in shared hosting, freed disk space might be overwritten by other users beyond your control; on the other hand, if you are using InnoDB with innodb_file_per_table disabled, the data is stored in a single file (and the disk space is not freed), so stopping your MySQL server should prevent any remaining recoverable data from being overwritten.
While there are some tools to help you with that, you will likely have to pay someone to do it for you (and even then you only get back the data that hasn't been overwritten so far), so this option is most likely only viable if your data is very valuable
Is it possible to restore a table, with its data, to its most recent state if all the data was deleted accidentally?
There is another solution: if you have binary logs active on your server, you can use mysqlbinlog.
Generate an SQL file with it:
mysqlbinlog binary_log_file > query_log.sql
Then search for your missing rows. Options such as --start-datetime and --stop-datetime can narrow the output if you know roughly when the data was lost.
If you don't have binary logging active, there is no other solution. Make backups next time.
Sort of. Using phpMyAdmin, I just deleted one row too many. But I caught it before I proceeded, and had most of the data from the delete confirmation message. I was able to rebuild the record, but the confirmation message truncated some of a text comment.
Someone more knowledgeable than I am regarding phpMyAdmin may know of a setting that gives a more complete echo of the delete confirmation message. With a complete delete message available, if you slow down and catch your error, you can restore the whole record.
(P.S. This app also sends an email of the submission that creates the record. If the client has a copy, I will be able to restore the record completely.)
As Mitch mentioned, backing data up is the best method.
However, it may be possible to extract the lost data partially, depending on the situation or the DB server used. For the most part, you are out of luck if you don't have any backup.
I'm sorry, but it's not possible, unless you made a backup file earlier.
EDIT: Actually it is possible, but it gets very tricky, and you shouldn't consider it unless the data was really, really important. You see, when data gets deleted from a computer, it still remains in the same place on the disk; its sectors are merely marked as empty. So the data remains intact until it gets overwritten by new data. There are several programs designed for this purpose, and there are companies that specialize in data recovery, though they are rather expensive.
For InnoDB tables, Percona has a recovery tool which may help. It is far from fail-safe or perfect, and how fast you stopped your MySQL server after the accidental deletes has a major impact. If you're quick enough, chances are you can recover quite a bit of data, but recovering all of it is nigh impossible.
Of course, proper daily backups, binlogs, and possibly a replication slave (which won't help with accidental deletes, but does help in case of hardware failure) are the way to go, but this tool could enable you to save as much data as possible when you did not have those yet.
No, this is not possible. The only solution is to have regular backups. This is very important.
Unfortunately, no. If you were running the server in the default config, go get your backups (you have backups, right?). Generally, a database doesn't keep previous versions of your data, or a revision history of changes: only the current state.
(Alternatively, if you deleted the data through a custom frontend, it is quite possible that the frontend doesn't actually issue a DELETE: many tables have an is_deleted field or similar, and this is simply toggled by the frontend. Note that this is a "soft delete" implemented in the frontend app; the data is not actually deleted in such cases. If you actually issued a DELETE, TRUNCATE, or a similar SQL command, this is not applicable.)
If you use MyISAM tables, you may be able to recover deleted data, because deleted rows often stay in the data file until the table is rebuilt. Just
open the file mysql/data/[your_db]/[your_table].MYD
with an editor; it is a binary format, so a hex editor is more practical than a text editor.
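If you want something quicker than scrolling through a binary file, a small Python sketch like this (the path is a placeholder) pulls out printable ASCII runs, much like the Unix strings utility:

import re

# Extract runs of four or more printable ASCII characters from the
# binary .MYD data file, so surviving row fragments stand out.
with open("mysql/data/your_db/your_table.MYD", "rb") as f:
    data = f.read()

for match in re.finditer(rb"[\x20-\x7e]{4,}", data):
    print(match.group().decode("ascii"))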
I have received an ib_logfile0 file.
Now I want to read the file. How can I do that?
The main motive behind this is that I need to understand the schema of the tables that this ib_logfile0 relates to. Is it worth putting effort into reading this file for that purpose?
ib_logfile0 is a redo log used for crash recovery. It contains a log of committed transactions that haven't yet been applied to the tablespaces.
It is important to keep this file; it's not a throwaway item, and its contents are vitally important to the operation of InnoDB.
The contents aren't simple and, given the abstraction, probably not worth spending time on. In addition to the official docs, Jeremy Cole wrote a blog series (possibly a bit dated now) and a set of tools for reading InnoDB files. Failing that, there is always the source code.
The following command will give you a list of table names and some garbage too, but you should be able to recognize the table names it applies to.
strings ib_logfile0
I managed to wipe a server by mistake, but PhotoRec was kind enough to recover the .frm and .myi files from the hard drive. I now have a desktop set up with the same version of MySQL to recover the data, but my question is: what do I do? I have about 160 of these files. I haven't yet reinstalled the server, in case I need anything else.
Also, as I'm using PhotoRec, it doesn't provide the original filenames. If this is important, how can I get the raw data out of the files and manually rebuild the database?
Edit: I managed to get hold of the PhotoRec source and add the capability to recover the .myd files (which a bit of digging reveals to be the actual data files), but I can't get the thing to compile, and it ain't because of my mods! Can anyone help with a 'No rule to make target' error in PhotoRec? file_http.o is the culprit.
MYI files are useless: these are files with secondary indexes, not your data.
PhotoRec is a nice tool; I have used it a lot for multimedia recovery and the like. Although it claims MYD support, it has never worked for me. I doubt it can actually extract MYD files.
I don't believe you can; PhotoRec does not support MYD. I'm making an attempt with ext3grep, but it always segfaults.
Just posting this so nobody wastes time trying to use PhotoRec for this purpose.