I accidentally truncated a table on my online server, and I don't have a backup of it. Can anyone help me with what I should do?
Most viable, least work:
From a backup
Check again if you have one
Ask your host if they do backups; their default configuration for some setups might include a backup that you are unaware of, e.g. a database backup for WordPress or a file backup if you have a VM
Viable in some situations, little work if applicable:
From binary logs. Check whether they are enabled (maybe as part of your host's default configuration; also, maybe only the host can access them, so you may need to ask). They contain the most recent changes to your database and, if you are lucky, "recent" might reach back far enough to include everything
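A quick sketch of how to check, from a MySQL client, whether binary logging is on and which log files exist:
    SHOW VARIABLES LIKE 'log_bin';   -- ON means binary logging is enabled
    SHOW BINARY LOGS;                -- lists the available binlog files and their sizes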
Less viable, more work:
Try to recover from related data, e.g. history tables, other related tables, or log files (e.g. the MySQL general query log or log files that your application created); you can analyze them to figure out what should be in your table
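Whether the general query log is even enabled, and where it is written, can be checked like this:
    SHOW VARIABLES LIKE 'general_log%';   -- general_log (ON/OFF) and general_log_file (its path)
    SHOW VARIABLES LIKE 'log_output';     -- FILE, TABLE, or both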
Least viable, most work, most expensive:
In theory, since the data is still stored on the hard drive until it is overwritten by new data, you can try to recover it, similar to how tools find lost blocks or deleted files on a hard drive
You need to stop all activity on the drive to increase the probability of success; what exactly that means depends on your configuration and setup. E.g., in shared hosting, freed disk space might be overwritten by other users beyond your control; on the other hand, if you are using InnoDB with innodb_file_per_table disabled, the data is stored in a single shared tablespace file (and the disk space is not freed), so stopping your MySQL server should prevent any remaining recoverable data from being overwritten.
While there are some tools to help you with that, you will likely have to pay someone to do it for you (and even then you only get back the data that hasn't been overwritten so far), so this option is most likely only viable if your data is very valuable
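A quick check for the innodb_file_per_table point above, a minimal sketch:
    SHOW VARIABLES LIKE 'innodb_file_per_table';   -- OFF means tables share the ibdata tablespace file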
Related
I am using MySQL 8.0.11 with the keyring_file plugin, and I have encrypted a particular table, say t1, in my database.
When I check the contents of the t1.ibd file, I can see that they are encrypted after successful encryption. But I can still see the table contents with a query (select * from t1) even though the contents of the ibd file are encrypted.
So, does it mean that the encryption applies only to the ibd files (which contain the data and the indexes), and that I will continue to see the table contents without any issue as long as I have the database credentials?
UPDATE
I read a couple of comments and would like to add the question below to clarify my original query:
After encrypting the ibd files, if a hacker breaks into the system on which the database is hosted, the hacker would still be able to see the actual data. So how has encrypting the ibd files helped me secure the data?
An .ibd file contains all the data and all the indexes for a table. ("Tablespaces" can contain multiple tables; the principles are the same.)
With the plugin (etc), SELECT automagically goes through the decryption, making the encryption 'transparent'. Nevertheless it is real. You did have to do something to start the program, correct? That was when the 'key' was loaded into RAM for use.
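For illustration (the table name t1 is from the question; the statements assume the keyring plugin is already loaded, as described), this is roughly what "transparent" looks like in practice:
    ALTER TABLE t1 ENCRYPTION='Y';   -- the table's .ibd file is encrypted at rest from now on
    SELECT * FROM t1;                -- still returns plaintext; decryption happens transparently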
Encrypting .ibd files protects you from one threat: someone grabs (or copies) your disk drive.
But beware. There are temp tables, binlogs, other logs, etc, that may or may not be encrypted. They temporarily hold some of the data. Early versions of MySQL encryption failed to include some of these.
The AES functions let you encrypt/decrypt individual strings (such as one column in one row at a time). But this leaves it up to you to protect the encryption/decryption key, or at least never have it on disk as plaintext.
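A minimal sketch of that column-level alternative; the column name secret_col and the key handling are purely illustrative, and in real use the key must never sit in plaintext on the same server:
    SET @key = 'illustrative-key-only';
    -- the target column should be VARBINARY or BLOB, since AES_ENCRYPT returns binary data
    INSERT INTO t1 (secret_col) VALUES (AES_ENCRYPT('sensitive value', @key));
    SELECT AES_DECRYPT(secret_col, @key) AS plaintext FROM t1;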
Read about "encryption at rest" versus "encryption in flight". Encrypting the files is "at rest". A smart hacker will attack your code so that he can run a SELECT after the credentials have been loaded.
With or without encryption, "SQL injection" (qv) is a well-established way to hack into the data, and even into the filesystem. The protection against it comes, for example, from validating/escaping/etc the data that arrives from an HTML <form>. Encrypting files is no protection against this.
I have given you a short list of threats against your data. The real list is much longer. You need to find such a list and decide which ones you are willing to invest in protecting against. Security is not simple.
I have a database that has InnoDB tables in it, and one of the InnoDB tables is marked as corrupt (and I know data is missing, etc.). However, when I restart MySQL, it doesn't crash.
I expected it to crash, but it doesn't. (I read before that if an InnoDB table is corrupted, the MySQL server will be stopped.)
Shouldn't it be crashing now?
InnoDB is my default storage engine.
A corrupt table doesn't necessarily cause a crash. You ought to repair the table and, if possible, reload it from a backup, though. Operating on a corrupt table is flaky at best, and it is liable to give you incorrect results anyway, as you have already discovered.
Do not trust the fact that the system is not "exploding" -- a database has several intermediate states. The one you're in now could well be "I'm not exploding yet, I'm waiting for the corruption to spread and contaminate other tables' data". If you know the table is corrupt, act now.
About repairing InnoDB tables, see How do I repair an InnoDB table? .
To verify if an InnoDB table is corrupt, see https://dba.stackexchange.com/questions/6191/how-do-you-identify-innodb-table-corruption .
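One common route boils down to dumping whatever is still readable, dropping the table, and reloading it; a rough sketch with placeholder names (if the dump itself fails, innodb_force_recovery and the linked answers are the next thing to read up on):
    mysqldump mydb broken_table > broken_table.sql   # salvage whatever is still readable
    mysql mydb -e "DROP TABLE broken_table"
    mysql mydb < broken_table.sql                    # the dump recreates and refills the table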
Detecting corruption
To do this you need an acceptance test that will examine a bunch of data and give it a clean bill of health -- or not. Exporting a table to SQL and checking whether the export succeeds, and/or running checks on tuple cardinality and/or relations, and... you get my drift.
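Two cheap acceptance tests of that kind, sketched with placeholder names:
    mysql mydb -e "CHECK TABLE mytable"     # ask the engine for its own verdict
    mysqldump mydb mytable > /dev/null && echo "full export OK" || echo "export failed"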
On a table where no one is expected to write, so that any modification amounts to corruption, an MD5 of the disk file could be quicker.
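For example (the datadir path is the common Linux default and an assumption here):
    md5sum /var/lib/mysql/mydb/static_table.ibd   # compare against a checksum taken while the table was known good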
To make things more efficient (e.g. in production systems) you can think of file snapshots, database replication, or even High Availability. These methods will detect programmatic corruption (e.g. a rogue UPDATE), but may not detect some kinds of hardware corruption on the master (giving a false negative: the checks on the slave pan out, yet the data on the master is still corrupt) or may suffer mishaps in the slave (which fails and raises a false positive, since the data on the master is actually untainted).
It is important (and efficient) to monitor system vital statistics, both to catch the first symptoms of an impending failure (e.g. with SMART) and to supply data for forensic investigation ("Funny that every time the DB failed it was always shortly after a sudden peak in system load -- what if we ferreted out what caused that?").
And of course rely on full and adequate backups (and run a test restore every now and then. Been there, done that, got my ass handed to me).
Corruption causes [not related to original question]
The source of corruption varies with the software setup. In general, of course, something must intrude somewhere along the chain of server memory representation -> writer process -> OS handle -> journaling -> IOSS -> OS cache -> disk -> disk cache -> internal file layout, and wreak havoc there.
Improper system shutdown may mess at several levels, preventing data from being written at any stage of the pipeline.
Manhandling the files on disk messes with the very last stage (using a pipeline of its own, of which the server knows nothing).
Other, more esoteric possibilities exist:
subtle firmware/hardware failure in the hard disk itself,
accidental and probably unrecoverable, due to disk wear and tear, defective firmware, or even a defective firmware update (I seem to remember, some years back, a Hitachi update for acoustic management that could be run against a slightly different model; after the update the disk "thought" it had more cache than it actually had, and writes to the nonexistent areas of the cache of course went directly to bit heaven).
"intentional" and probably recoverable: it is sometimes possible to stretch your hard disk too thin using hdparm. Setting the disk for the very top performance is all well and good if every component is suited to that level of performance and knows it or at least is able to signal if it is not. Sometimes all the "warning" you get is a system malfunction.
process space or IOSS corruption: I saw this on an Apache installation where somehow, probably thanks to a CGI that was suid root, the access.log file was filling with a stream of GIF images that were supposed to go to the user's browser. It was fixed and nothing came of it, but what if it had been a more vital file instead of a log? Such problems may be difficult to diagnose, and you might need to inspect all log files to see whether some application noticed or did anything untoward.
hard disk sector relocation: fabled to happen, never seen it myself, but modern hard disks have "spare" sectors they will swap in for defective sectors to keep sporting a "zero defect" surface. Except that if the defective sector happens to no longer be readable and is swapped for an empty one, the net effect is the same as that sector suddenly being zeroed. This you can easily check using SMART reporting (hddhealth or smartctl).
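A quick way to look at those counters (the device path is only an example):
    sudo smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrect'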
Many other possibilities exist, of course, depending on the setup. Googling for file corruption finds a jillion pages; useful terms to add to the query are the filesystem (ext4, NTFS, btrfs, ...), the hard disk make and model, the OS, the software suffering the problems, and any other software installed.
Is it possible to copy a MySQL database by simply copying its files? I read somewhere that it is, so I was wondering.
Don't do that; the .frm/.myd/.myi files may be much larger than the actual data, and copying them can leave you with inconsistent data (which may cause crashes) or data that is very hard to transform/recover.
Use mysqldump to transfer a MySQL database.
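A minimal sketch of that, with placeholder names; --single-transaction gives a consistent snapshot for InnoDB tables:
    mysqldump --single-transaction source_db > source_db.sql
    mysql -h target_host target_db < source_db.sql   # target_db must already exist on the target server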
It might or might not work, depending on the state of the data.
It should work if you stop the database completely and restart it again after the backup, but then you have downtime.
A slightly more detailed response, at the asker's request.
Some details of the dangers of a straight file copy:
If the database is live, the database might change some of the files before others, so the copies you take may be inconsistent with each other. If the database is offline, this method is probably reliable.
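For completeness, a rough sketch of the offline variant on a typical Linux host (service name and paths are assumptions):
    sudo systemctl stop mysql
    sudo cp -a /var/lib/mysql /backup/mysql-$(date +%F)
    sudo systemctl start mysql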
Advantages of using the documented methods:
Should work on future versions of the DBMS
Should work consistently across underlying engines
Always a consistent, snapshot-like copy
Is it possible to restore a table to an earlier point in time, with its data, if all the data was deleted accidentally?
There is another solution: if you have binary logs active on your server, you can use mysqlbinlog.
Generate a sql file with it:
mysqlbinlog binary_log_file > query_log.sql
then search for your missing rows.
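The output can get huge; narrowing it with the standard mysqlbinlog options makes the search easier (the database name and time window below are placeholders):
    mysqlbinlog --database=your_db \
                --start-datetime="2024-01-01 00:00:00" \
                binary_log_file > query_log.sql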
If you don't have binary logging active, there is no other solution. Make backups next time.
Sort of. Using phpMyAdmin, I just deleted one row too many, but I caught it before I proceeded and had most of the data from the delete confirmation message. I was able to rebuild the record, but the confirmation message truncated part of a text comment.
Someone more knowledgeable than I am about phpMyAdmin may know of a setting that gives a more complete echo of the delete confirmation message. With a complete delete message available, if you slow down and catch your error, you can restore the whole record.
(PS This app also sends an email of the submission that creates the record. If the client has a copy, I will be able to restore the record completely)
As Mitch mentioned, backing data up is the best method.
However, it may be possible to partially extract the lost data, depending on the situation or the DB server used. For the most part, you are out of luck if you don't have any backup.
I'm sorry, but it's not possible, unless you made a backup file earlier.
EDIT: Actually it is possible, but it gets very tricky, and you shouldn't think about it unless the data was really, really important. You see: when data gets deleted from a computer, it still remains in the same place on the disk; only its sectors are marked as empty. So the data remains intact unless it gets overwritten by new data. There are several programs designed for this purpose, and there are companies that specialize in data recovery, though they are rather expensive.
For InnoDB tables, Percona has a recovery tool which may help. It is far from fail-safe or perfect, and how fast you stopped your MySQL server after the accidental deletes has a major impact. If you're quick enough, chances are you can recover quite a bit of data, but recovering all of it is nigh impossible.
Of course, proper daily backups, binlogs, and possibly a replication slave (which won't help with accidental deletes but does help in case of hardware failure) are the way to go, but this tool could enable you to save as much data as possible when you did not yet have those in place.
No, this is not possible. The only solution is to have regular backups. This is very important.
Unfortunately, no. If you were running the server in the default config, go get your backups (you have backups, right?) - generally, a database doesn't keep previous versions of your data, or a revision history of changes: only the current state.
(Alternately, if you deleted the data through a custom frontend, it is quite possible that the frontend doesn't actually issue a DELETE: many tables have an is_deleted field or similar, and this is simply toggled by the frontend. Note that this is a "soft delete" implemented in the frontend app - the data is not actually deleted in such cases; if you actually issued a DELETE, TRUNCATE or a similar SQL command, this does not apply.)
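To make the distinction concrete (table and column names are hypothetical):
    UPDATE orders SET is_deleted = 1 WHERE id = 42;   -- a frontend "soft delete": the row is still there
    DELETE FROM orders WHERE id = 42;                 -- a real delete: gone unless you have backups or binlogs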
If you use MyISAM tables, you may be able to recover deleted data, since deleted rows often remain inside the data file until the space is reused: just
open the file mysql/data/[your_db]/[your_table].MYD
with any text or hex editor
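A slightly less painful way to eyeball the file than a text editor (same placeholder path as above; strings is a standard Unix tool):
    strings mysql/data/your_db/your_table.MYD | grep 'some value you remember'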
What do Repair and Compact operations do to an .MDB?
If these operations do not stop a 1GB+ .MDB-backed VB application from crashing, what other options are there?
Why would a large .MDB file cause an application to crash?
"What do compact and repair operations do to an MDB?"
First off, don't worry about repair. The fact that there are still commands that purport to do a standalone repair is a legacy of the old days. The behavior of that command changed greatly starting with Jet 3.51 and has remained the same since. That is, a repair will never be performed unless Jet/ACE determines that it is necessary. When you do a compact, it will test whether a repair is needed and perform it before the compact.
So, what does it do?
A compact/repair rewrites the data file, eliminating any unused data pages, writing tables and indexes in contiguous data pages, and flagging all saved QueryDefs for re-compilation the next time they are run. It also updates certain metadata for the tables, and other metadata and internal structures in the header of the file.
All databases have some form of "compact" operation because they are optimized for performance. Disk space is cheap, so instead of writing things so as to use storage efficiently, they write to the first available space. Thus, in Jet/ACE, if you update a record, the record is written back to the original data page only if the new data fits within that page. If not, the original data page is marked unused and the record is rewritten to an entirely new data page. As a result, the file can become internally fragmented, with used and unused data pages mixed throughout the file.
A compact organizes everything neatly and gets rid of all the slack space. It also rewrites data tables in primary key order (Jet/ACE clusters on the PK, but that's the only index you can cluster on). Indexes are also rewritten at that point, since over time those become fragmented with use, also.
Compact is an operation that should be part of regular maintenance of any Jet/ACE file, but you shouldn't have to do it often. If you're experiencing regular significant bloat, then it suggests that you may be mis-using your back-end database by storing/deleting temporary data. If your app adds records and deletes them as part of its regular operations, then you have a design problem that's going to make your data file bloat regularly.
To fix that, move the temp tables to a separate standalone MDB/ACCDB so that the churn won't cause your main data file to bloat.
On another note, not applicable in this context: front ends bloat in different ways because of the nature of what's stored in them. Since this question is about an MDB/ACCDB used from VB, I won't go into details, but suffice it to say that compacting a front end is something that's necessary during development, but only very seldom in production use. The only reason to compact a production front end is to update metadata and recompile the queries stored in it.
It's always been the case that MDB files become slow and prone to corruption as they grow past 1GB, but I've never known why - it's always been just a fact of life. I did some quick searching and can't find any official, or even well-informed insider, explanation of why this size is correlated with MDB problems, but my experience has always been that MDB files become incredibly unreliable as you approach and exceed 1GB.
Here's the MS KB article about Repair and Compact, detailing what happens during that operation:
http://support.microsoft.com/kb/209769/EN-US/
The app probably crashes as the result of improper/unexpected data returned from a database query to an MDB that large - what error in particular do you get when your application crashes? Perhaps there's a way to catch the error and deal with it instead of just crashing the application.
If it is crashing a lot then you might want to try a decompile on the DB and/or making a new database and copying all the objects over to the new container.
Try the decompile first. To do that, just launch Access with the /decompile flag and your DB as the startup target (the exact path to MSACCESS.EXE varies by Office version), for example
"C:\Program Files\Microsoft Office\Office\MSACCESS.EXE" "C:\mydb.mdb" /decompile
Then compact, compile and then compact again
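If you prefer to script it, the compact step can be driven from the command line in the same way as the decompile above; the path below is only an example:
    "C:\Program Files\Microsoft Office\Office\MSACCESS.EXE" "C:\mydb.mdb" /compact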
EDIT:
You can't do it without Access being installed, but if the database is just storing data then a decompile will not do you any good. You can, however, look at JetComp to help with your compacting needs:
support.microsoft.com/kb/273956