Delete rows from table in MySQL

If I write delete * from tablename in MySQL, all rows get deleted. Is there any way to get the data back in that table in MySQL?

No, there isn't.
And the correct query is DELETE FROM tablename (without the *).

Restore from backup.

Depends, depends, depends.
I assume you don't have backups. If you do, it's better to restore the table from the backup, as was pointed out before.
If the table was MyISAM, then the answer is rather NO than YES. The problem is that MyISAM stores data in blocks. A normal block's header is 3 or 4 bytes, while a deleted block's header is 20 bytes. So even if you extract records from the MYD file, the first column (which is your PK in most cases) will have been overwritten. Extracting records from the MYD file is close to impossible.
If it's InnoDB, the chance is much higher. If MySQL was stopped/killed right after the accident, then the chance is close to 100%. InnoDB doesn't delete right away. It marks the records as deleted and physically purges them only when it rebuilds the B+tree index (read: when you insert new records). So you have some time. The first thing to do in a case like this is to stop MySQL immediately, either gracefully or with kill -9.
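If you are not sure which engine the table uses, a quick check ('your_db' and 'tablename' are placeholders for your own names):
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_db' AND TABLE_NAME = 'tablename';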
I don't know if I may insert URLs here. Google for the Percona data recovery tool, take the latest revision from the trunk, and check the MySQL Performance Blog (again, google it) for recipes.
Good luck.
UPDATE: Data recovery toolkit moved to GitHub

The easiest way would be to restore your latest backup.

Related

MySQL/MariaDB 'recycle bin'

I'm using MariaDB and I'm wondering if there is a way to install a 'recycle bin' on my server, so that if someone deletes a table or anything else it gets shifted to the recycle bin and restoring it is easy.
I'm not talking about mounting things to restore it and all that stuff, but literally a 'safe place' where it gets stored (I have more than enough space) until I decide to delete it, or where it is just kept for 24 hours.
Any thoughts?
No such feature exists. http://bugs.mysql.com takes "feature requests".
Such a feature would necessarily involve MySQL; it cannot be done entirely in the OS's filesystem. This is because a running MySQL server caches information in RAM that the filesystem does not know about, and because the information about a table/db/proc/trigger/etc is not located entirely in a single file. Instead, extra info exists in other, more general, files.
With MyISAM, your goal was partially possible in the filesystem. A MyISAM table was composed of 3 files: .frm, .MYD, .MYI. Still, MySQL would need to flush something to forget that it knows about the table before the filesystem could move the 3 files somewhere else. MyISAM is going away, so don't even think about using that 'Engine'.
In InnoDB, a table is composed of a .ibd file (if using file_per_table) plus a .frm file, plus some info in the communal ibdata1 file. If the table is PARTITIONed, the layout is more complex.
In version 8.0, most of the previous paragraph will become incorrect -- a major change is occurring.
"Transactions" are a way of undoing writes to a table...
BEGIN;
INSERT/UPDATE/DELETE/etc...
if ( change-mind )
then ROLLBACK;
else COMMIT;
Effectively, the undo log acts as a recycle bin -- but only at the record level, and only until you execute COMMIT.
MySQL 8.0 will add the ability to have DDL statements (eg, DROP TABLE) in a transaction. But, again, it is only until COMMIT.
Think of COMMIT as flushing the recycle bin.
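As an illustration only (the table and column names are made up, and this assumes a transactional engine such as InnoDB), the record-level "recycle bin" in action:
BEGIN;
DELETE FROM orders WHERE customer_id = 42;            -- rows are only marked as deleted
SELECT COUNT(*) FROM orders WHERE customer_id = 42;   -- returns 0 inside this transaction
ROLLBACK;                                             -- "take them back out of the bin"
SELECT COUNT(*) FROM orders WHERE customer_id = 42;   -- the rows are back
-- had we issued COMMIT instead of ROLLBACK, the deletion would be final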

How to retrieve deleted records from MySQL

Are there any methods to retrieve deleted records from a MySQL database?
No.
Deleted records are gone (or munged so badly you can't recover them). If you have autocommit turned on, the system commits each statement as you complete it (if you have autocommit turned off, then do a rollback NOW - phew, you're saved -- but you are running with autocommit, aren't you?).
One other approach is to replay the activity that created the missing records - can you do that? You can either re-run whatever programs did the updates, or replay them from a binary log (if you still have the binary log). That may not be possible, of course.
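Two quick checks (stock MySQL statements, nothing specific to your setup) that tell you which of these options are still open:
SELECT @@autocommit;             -- 1 means every statement was committed as soon as it ran
SHOW VARIABLES LIKE 'log_bin';   -- ON means a binary log exists that you might replay from
SHOW BINARY LOGS;                -- lists the binary log files still available on the server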
So you need to recover the data from somewhere - either a backup of your db (made using mysqldump) or of your file system (the data files of MyISAM tables are all simply structured and on the disk - recovering InnoDB tables is complicated by the shared use of ibdata files).
There is a possible way to retrieve deleted records (depending upon your situation). Please check here:
https://stackoverflow.com/a/72303235/2546381

Converting a big MyISAM to InnoDB

I'm trying to convert a 10-million-row MySQL MyISAM table into InnoDB.
I tried ALTER TABLE, but that made my server get stuck, so I killed the MySQL process manually. What is the recommended way to do this?
Options I've thought about:
1. Making a new table which is InnoDB and inserting parts of the data each time.
2. Dumping the table into a text file and then doing LOAD DATA INFILE
3. Trying again and just keeping the server non-responsive until it finishes (I tried for 2 hours, and the server is a production server, so I prefer to keep it running)
4. Duplicating the table, removing its indexes, then converting, and then adding the indexes back
Changing the engine of the table requires a rewrite of the table, which is why the table is unavailable for so long. Removing indexes, then converting, and adding indexes afterwards may speed up the initial conversion, but adding an index creates a read lock on your table, so the effect in the end will be the same.
Making a new table and transferring the data is the way to go. Usually this is done in 2 parts - first copy the records, then replay any changes that were made while copying them. If you can afford disabling inserts/updates on the table, while leaving reads enabled, this is not a problem.
If not, there are several possible solutions. One of them is to use Facebook's online schema change tool. Another option is to set the application to write to both tables while migrating the records, then switch over to the new table only. This depends on the application code, and the crucial part is handling unique keys / duplicates, since in the old table you may update a record while in the new one you need to insert it (here the transaction isolation level may also play a crucial role; lower it as much as you can). The "classic" way is to use replication, which, as far as I know, is also done in 2 parts - you start by recording the master's binary log position, import a dump of the database on the second server, then start it as a slave to catch up with the changes.
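As a rough sketch of the copy-then-swap variant (table names are examples only; this assumes you can pause writes during the copy):
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new ENGINE=InnoDB;
-- pause writes to mytable here, then copy the data:
INSERT INTO mytable_new SELECT * FROM mytable;
-- atomic swap; keep the old table around until you are sure everything works:
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;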
Have you tried ordering your data by the PK first? E.g.:
ALTER TABLE tablename ORDER BY PK_column;
This should speed up the conversion.

Innodb Performance Optimization

One portion of my site requires a bulk insert; it takes around 40 minutes for InnoDB to load that file into the database. I have been digging around the web and found a few things.
innodb_autoinc_lock_mode=2 (it won't generate consecutive keys)
UNIQUE_CHECKS=0; (disable unique key checks)
FOREIGN_KEY_CHECKS=0 (disable foreign key checks)
--log_bin=OFF (turn off the binary log used for replication)
Problem
I want to set the first 3 options for just one session, i.e. during the bulk insert. The first option does not work; MySQL says unknown system variable 'innodb_autoinc_lock_mode'. I am using MySQL 5.0.4.
As for the last option, I would like to turn it off, but I am wondering: if I need replication later, will it just start working when I turn it back on?
Suggestions
Any other suggestions on how to improve bulk inserts/updates for the InnoDB engine? Or please comment on my findings.
Thanks
Assuming you are loading the data in a single or few transactions, most of the time is likely to be spent building indexes (depending on the schema of the table).
Do not do a large number of small inserts with autocommit enabled, that will destroy performance with syncs for each commit.
If your table is bigger than (or nearly as big as) the InnoDB buffer pool, you are in trouble; a table which can't fit in RAM with its indexes cannot be inserted into efficiently, as inserts will have to do READS so that existing index blocks can be updated.
Remember that disc writes are ok (they are mostly sequential, and you have a battery-backed raid controller, right?), but reads are slow and need to be avoided.
In summary
Do the insert in a small number of big-ish transactions, say 10k-100k rows each. Don't make the transactions too big or you'll exhaust the logs.
Get enough RAM that your table fits in memory; set the InnoDB buffer pool appropriately (you are running x86_64, right?)
Don't worry about the operation taking a long time, as due to MVCC, your app will be able to operate on the previous versions of the rows assuming it's only reading.
Don't make any of the optimisations listed above; they're probably a waste of time (don't take my word for it - benchmark the operation on a test system in your lab with/without them).
Turning unique checks off is actively dangerous as you'll end up with broken data.
To answer the last part of your question, no it won't just start working again; if the inserts are not replicated but subsequent updates are, the result will not be a pretty sight. Disabling foreign and unique keys should be OK, provided you re-enable them afterwards, and deal with any constraint violations.
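If, despite the caveats above, you do want the session-scoped settings from the question: unique_checks and foreign_key_checks are session variables, and sql_log_bin can be turned off per session if you have the SUPER privilege. innodb_autoinc_lock_mode is not a session variable at all - it is a server startup option and does not exist in 5.0. A rough sketch of a load session:
SET SESSION unique_checks = 0;       -- only safe if the data really is unique already
SET SESSION foreign_key_checks = 0;
SET SESSION sql_log_bin = 0;         -- requires SUPER; skips the binary log for this session only
SET autocommit = 0;
-- ... run the bulk INSERTs in chunks of roughly 10k-100k rows, issuing COMMIT after each chunk ...
COMMIT;
SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;
SET SESSION sql_log_bin = 1;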
How often do you have to do this? Can you load smaller datasets more frequently?

Generating a massive 150M-row MySQL table

I have a C program that mines a huge data source (20GB of raw text) and generates loads of INSERTs to execute on a simple blank table (4 integer columns with 1 primary key). Set up as a MEMORY table, the entire task completes in 8 hours. After finishing, about 150 million rows exist in the table. Eight hours is a completely decent number for me. This is a one-time deal.
The problem comes when trying to convert the MEMORY table back into MyISAM so that (A) I'll have the memory freed up for other processes and (B) the data won't be killed when I restart the computer.
ALTER TABLE memtable ENGINE = MyISAM
I've let this ALTER TABLE query run for over two days now, and it's not done. I've now killed it.
If I create the table initially as MyISAM, the write speed seems terribly poor (especially due to the fact that the query requires the use of the ON DUPLICATE KEY UPDATE technique). I can't temporarily turn off the keys. The table would become over 1000 times larger if I did, and then I'd have to reprocess the keys and essentially run a GROUP BY on 150,000,000,000 rows. Umm, no.
One of the key constraints to realize: The INSERT query UPDATEs records if the primary key (a hash) exists in the table already.
At the very beginning of an attempt at strictly using MyISAM, I'm getting a rough speed of 1,250 rows per second. Once the index grows, I imagine this rate will tank even more.
I have 16GB of memory installed in the machine. What's the best way to generate a massive table that ultimately ends up as an on-disk, indexed MyISAM table?
Clarification: There are many, many UPDATEs going on from the query (INSERT ... ON DUPLICATE KEY UPDATE val=val+whatever). This isn't, by any means, a raw dump problem. My reasoning for trying a MEMORY table in the first place was for speeding-up all the index lookups and table-changes that occur for every INSERT.
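For concreteness, the write pattern described above looks roughly like this (the column names are invented for illustration; only the table name memtable comes from the actual setup):
INSERT INTO memtable (hash, a, b, val)
VALUES (123456, 1, 2, 10)
ON DUPLICATE KEY UPDATE val = val + VALUES(val);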
If you intend to make it a MyISAM table, why are you creating it in memory in the first place? If it's only for speed, I think the conversion to a MyISAM table is going to negate any speed improvement you get by creating it in memory to start with.
You say inserting directly into an "on disk" table is too slow (though I'm not sure how you're deciding it is when your current method is taking days), but you may be able to turn off or remove the uniqueness constraints, use a DELETE query later to re-establish uniqueness, and then re-enable/add the constraints. I have used this technique when importing into an InnoDB table in the past, and found that even with the later delete it was overall much faster.
Another option might be to create a CSV file instead of the INSERT statements, and either load it into the table using LOAD DATA INFILE (I believe that is faster than the inserts, but I can't find a reference at present) or use it directly via the CSV storage engine, depending on your needs.
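A minimal sketch of the LOAD DATA INFILE route (the file path, delimiters and column names are assumptions, not taken from the question):
LOAD DATA INFILE '/tmp/mined_rows.csv'
INTO TABLE memtable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(hash, a, b, val);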
Sorry to keep throwing comments at you (last one, probably).
I just found this article which provides an example of converting a large table from MyISAM to InnoDB. While this isn't exactly what you are doing, the author uses an intermediate MEMORY table and describes going from memory to InnoDB in an efficient way - ordering the table in memory the way that InnoDB expects it to be ordered in the end. If you aren't tied to MyISAM, it might be worth a look, since you already have a "correct" memory table built.
I don't use MySQL but I do use SQL Server, and this is the process I use to handle a file of similar size. First I dump the file into a staging table that has no constraints. Then I identify and delete the dups from the staging table. Then I search for existing records that might match and put the id field into a column in the staging table. Then I update where the id field column is not null and insert where it is null.
One of the reasons I do all the work of getting rid of the dups in the staging table is that it means less impact on the prod table when I run it, and thus it is faster in the end. My whole process runs in less than an hour (and actually does much more than I describe, as I also have to denormalize and clean the data) and affects production tables for less than 15 minutes of that time. I don't have to worry about adjusting any constraints or dropping indexes or any of that, since I do most of my processing before I hit the prod table.
Consider whether a similar process might work better for you. Also, could you use some sort of bulk import to get the raw data into the staging table (I pull the 22 gig file I have into staging in around 16 minutes) instead of working row by row?
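Translated into MySQL terms, that staging approach might look something like the sketch below; the table and column names are invented, and the MAX/SUM aggregation is just one assumption about how the duplicates should collapse:
CREATE TABLE staging (hash INT, a INT, b INT, val INT) ENGINE=MyISAM;  -- no keys, no constraints
-- bulk-load the raw rows into staging (e.g. with LOAD DATA INFILE, as above)
-- collapse the duplicates once, inside staging:
CREATE TABLE staged_unique ENGINE=MyISAM
  SELECT hash, MAX(a) AS a, MAX(b) AS b, SUM(val) AS val
  FROM staging
  GROUP BY hash;
-- update the rows that already exist in the production table, then insert the rest:
UPDATE prod_table p
JOIN staged_unique s ON p.hash = s.hash
SET p.val = p.val + s.val;
INSERT INTO prod_table (hash, a, b, val)
  SELECT s.hash, s.a, s.b, s.val
  FROM staged_unique s
  LEFT JOIN prod_table p ON p.hash = s.hash
  WHERE p.hash IS NULL;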