Archive Table Corrupt - mysql

An ARCHIVE table got corrupted on my production server.
I tried
REPAIR TABLE TBL_NAME;
It wasn't able to repair the table. Do only MyISAM tables support repairing?
I dropped the table, recreated it and then restored it from the dump I already had.
Q1: What could have been the better option to handle this scenario?
Q2: Why do databases/tables get corrupted so often?
Q3: What is the best that we could do to prevent tables from getting corrupt?

Q1: What could have been the better option to handle this scenario?
Given the circumstances I think that what you did was the best solution. There is an ARCHIVE engine recovery tool called archive_reader that might have been able to help you recover rows if you'd not had a backup.
The fact that you had backups is good and saved you here. If you want to be able to perform a full recovery, it could be worth enabling binary logging or adding a replicated slave server.
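If you do enable binary logging, a minimal my.cnf sketch might look like this (the path, server ID and retention period are example values, not anything from the original setup):
[mysqld]
server-id = 1
# turn on the binary log (base name / path is an example)
log_bin = /var/log/mysql/mysql-bin
# keep a week of binlogs around for point-in-time recovery
expire_logs_days = 7
With binlogs retained, changes made after your last dump can be replayed with mysqlbinlog, which is what turns "restore last night's dump" into a full recovery.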
Q2: Why do databases/tables get corrupted so often?
In normal operation they shouldn't be. I would look in your MySQL error log to see if there are any error messages that corresponded to the time of the table crash. Disk or other problems on the server could make it more likely to corrupt tables. Perhaps you've found a bug in the ARCHIVE engine?
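To find the error log in the first place, you can ask the server where it writes it; the path and grep pattern below are only illustrations of what to look for:
-- ask the server where its error log lives
SHOW VARIABLES LIKE 'log_error';
# scan it around the time of the crash (path is an example)
grep -i -E 'corrupt|crashed|archive' /var/log/mysql/error.log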
Q3: What is the best that we could do to prevent tables from getting corrupt?
As mentioned in Q2, have a good look for error messages. If you find that you can predictably replicate crashing a table, be sure to file a MySQL bug report.

For MyISAM tables:
1) Identify all corrupted tables using myisamchk
2) Repair the corrupted table using myisamchk -r
If the tables are still being used by your application, myisamchk may report errors because the tables are open. To avoid this, shut down mysqld before performing the repair if you can afford to take the database down for a while. If not, use FLUSH TABLES to force mysqld to flush any table modifications that are still in memory (a rough end-to-end sketch follows the example below).
You can also perform the check and repair together for an entire MySQL database.
Example:
myisamchk --silent --force --fast --update-state .....*.MYI
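A rough end-to-end sketch of steps 1) and 2) above (the data directory path, database name and table name are placeholders, not values from the original post):
# take the server down first, if you can afford the downtime
mysqladmin shutdown
# 1) identify corrupted tables
myisamchk --check /var/lib/mysql/mydb/*.MYI
# 2) repair a table that was reported as corrupted
myisamchk --recover /var/lib/mysql/mydb/broken_table.MYI
# if the server has to stay up instead, flush in-memory changes before checking:
# mysql -e "FLUSH TABLES;"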

Related

mysql table crashes only when replica server running

I have a table that crashes often, but only seems to crash when the replica is running.
The table is MyISAM and has 2 MEDIUMTEXT fields. The error I get when running a DELETE statement is this: "General error: 1194 Table 'outlook_emails' is marked as crashed and should be repaired".
I wonder if this has to do with the binary log. However, it doesn't seem to happen when the binary log is running but the replica is down.
Any idea what is happening or what I can do to solve it or investigate further?
"Table '...' is marked as crashed and should be repaired"
That error occurs (usually) when the MySQL server has been rudely restarted and the table is ENGINE=MyISAM.
The temporary fix is to run CHECK TABLE, which will then suggest that you run REPAIR TABLE. The tool myisamchk is a convenient way to do them, especially since there could be several tables so marked.
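For the table in the error message above, the SQL form of that temporary fix would be roughly:
CHECK TABLE outlook_emails;
REPAIR TABLE outlook_emails;
From the shell, myisamchk can do the same across every .MYI file in a database directory, which is convenient when several tables are marked crashed (the server should be stopped, or the tables flushed and locked, while it runs).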
The Engine InnoDB has radically different internals. It avoids the specific issue that MyISAM has, and does a much more thorough job of recovering from crashes.
Switching to InnoDB requires ALTERing your tables. Here is more discussion: http://mysql.rjweb.org/doc.php/myisam2innodb
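If you do go the InnoDB route, one generic way to generate the ALTER statements for every remaining MyISAM table is to query information_schema and run the statements it prints; this is a general sketch, not something taken from the linked article:
SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME, '` ENGINE=InnoDB;')
FROM information_schema.TABLES
WHERE ENGINE = 'MyISAM'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');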

MySQL auto-repair

I need MySQL to auto-repair on damage. I have found that I can check and repair MySQL tables manually. Is there an option to make it repair itself automatically whenever needed, without explicit external effort?
We run several MySQL operations via cron jobs, so I suppose you could schedule a MySQL check periodically the same way. You may want to have a deeper look at the documentation for the differences between the check and repair operations.
I don't know any other "magical" solution.
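Two standard pieces you could combine are a cron'd mysqlcheck with --auto-repair and the server-side myisam-recover-options setting, which makes mysqld repair crashed MyISAM tables automatically when it opens them. A sketch, where the schedule, credentials handling and option values are illustrative:
# crontab entry: weekly check of all databases, repairing what it can
# (add connection options / credentials as appropriate for your setup)
30 3 * * 0 mysqlcheck --all-databases --check --auto-repair --silent

# my.cnf: repair crashed MyISAM tables automatically on open
[mysqld]
myisam-recover-options = BACKUP,FORCE
On older servers the setting is spelled myisam-recover; either way it only applies to MyISAM tables.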
It generally happens if you have InnoDB tables and your database shuts down suddenly.
Do you run mysqlcheck database table? Generally this command will recover your table. If it doesn't solve your problem, you can log in to your database and run the following command to rebuild your indexes:
ALTER TABLE table_name ENGINE=InnoDB;

Making new MySQL replication

I need to set up working MySQL replication from master to slave (I tried it once already).
The database is quite large (over 100 GB) and it will take some hours to get it ready for a new slave.
The database has both MyISAM and InnoDB tables and both are being written to.
I think my only choice is to copy the data files from the master to a new slave? (Or make a database dump, which I'm referring to later in ROUND 2.)
Before that, do I have to shut down all the services which use the database and
set a write lock on the tables, or should I shut down the whole database?
After syncing the data directory to the new replication server I started it up and the database with the tables was there. The first error I got rid of by changing the master log file to 007324 and the position to 0.
Error 1:
140213 4:52:07 [ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
140213 4:52:07 [Note] Slave I/O thread exiting, read up to log 'bin-log.007323', position 46774422
After that I got new problems from the database and this error came out for every table.
Error 2:
Error 'Incorrect information in file: './database/table.frm'' on query. Default database: 'database'.
Seems that something went wrong.
ROUND 2!
After this I started to wonder whether this can be done without a long service break.
The master database has already been configured and it replicates fine to another slave.
So I did some googling and this is what I came up with.
Making read lock to tables:
FLUSH TABLES WITH READ LOCK;
Taking dump:
mysqldump --skip-lock-tables --single-transaction --flush-logs --master-data=2 -A > dbdump.sql
Packaging and moving:
gzip (pigz) the dbdump and move it to the slave server; after that, find the MASTER_LOG_FILE and MASTER_LOG_POS in the dump.
After that I don't think I want to import the dbdump.sql because it's over 100 GB and
will take time, so I think SOURCE would be an OK option for it.
On SLAVE server:
CREATE DATABASE dbdump;
USE dbdump;
SOURCE dbdump.sql;
CHANGE MASTER TO MASTER_HOST='x.x.x.x',MASTER_USER='replication',MASTER_PASSWORD='slavepass',
MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=X;
START SLAVE;
SHOW SLAVE STATUS \G
I haven't tested this yet, am I on to something?
--bp
Realize that issuing a SOURCE command is the same as running an import of the dumped SQL from the shell. Either way, it is going to take a long time. Outside of that, you have the steps correct: FLUSH TABLES WITH READ LOCK on the master, make a database dump of the master, make sure you note the master binlog coordinates, import the dump on the slave, set the binlog coordinates, start replication. Do not work with the raw binaries unless you REALLY know what you are doing (especially for InnoDB tables).
If you have a number of large tables (i.e. not just one big one), you could consider parallelizing your dumps/imports by table (or groups of tables) to speed things along. There are actually tools out there to help you do this.
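A crude sketch of dumping per table is below; note that it gives up the single consistent snapshot you get from one mysqldump run, so hold the read lock while it runs or use a purpose-built tool such as mydumper/myloader. The database name and paths are placeholders, and connection credentials are assumed to come from your option files:
# dump each table of 'mydb' as its own compressed file, in parallel
for T in $(mysql -N -B -e "SHOW TABLES FROM mydb"); do
    mysqldump mydb "$T" | gzip > "/backup/${T}.sql.gz" &
done
wait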
You CAN work with the raw binaries, but it is not for the faint of heart. In the past, I have used rsync to differentially update the raw binaries between master and slave (you still must use FLUSH TABLES WITH READ LOCK and gather the master binlog coordinates before doing this). For MyISAM tables this actually works pretty well. For InnoDB it can be trickier. I prefer to enable innodb_file_per_table so that InnoDB writes index and data files per table. You would still need to rsync the ibdata* files. You would delete the ib_logfile* files from the slave.
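For what it's worth, the rsync step itself is an ordinary differential copy, along these lines (the hostname and paths are examples; the master should hold FLUSH TABLES WITH READ LOCK, or be shut down, during the final pass):
# first pass can run while the master is live, to move the bulk of the data
rsync -av /var/lib/mysql/ slave-host:/var/lib/mysql/
# final pass under the read lock only transfers what changed since
rsync -av --delete /var/lib/mysql/ slave-host:/var/lib/mysql/
# then remove ib_logfile* on the slave before starting mysqld there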
This whole thing is a bit of a high wire act, so I would not resort to doing this unless you have no other viable options. Absolutely take a traditional SQL dump before even thinking about attempting a binary file sync, and each time until you are VERY comfortable that you actually know what you are doing.

mysql cluster lost data after restore

all,
I use mysqldump to back up MySQL Cluster data, about 10 million rows, daily. Recently our cluster crashed after an update, so we restored the .sql file generated by mysqldump. When restoring the database we got key-duplication errors, and I used "-f" to force the restore process to continue. The restore process eventually completed and all the tables were back. Some tables are smaller; we think that is because the duplicate rows were ignored.
But recently we have found that some data is missing; it seems some of the duplicated data was not restored correctly.
Is there a nice way to avoid this during the restore process, or to check whether we have duplicates before running mysqldump?
A couple of suggestions: take a look at the errors that are generated when not using the force option and see if you can figure out how to fix the root cause. Using the force option allows the restore to continue after the error, but the failed rows will still be lost.
Is there a reason why you're using mysqldump rather than the backup command within ndb_mgm, which is an online operation? If you use the native Cluster (online!) backup, then you use the ndb_restore command to restore your data.
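For reference, the native Cluster backup and restore looks roughly like this (the node ID, backup ID and backup path are examples for a hypothetical cluster, not values from your setup):
# inside the ndb_mgm management client, take an online backup:
ndb_mgm> START BACKUP
# later, restore from a data node's backup files: -m restores metadata (run once), -r restores data
ndb_restore -n 2 -b 1 -m -r --backup_path=/var/lib/mysql-cluster/BACKUP/BACKUP-1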

MySQL Export exports Views first - and crashes immediately on Restore

I have a production MySQL server that I need to dump a database out of. The problem is that whenever I make this dump, it generates the View information first. When I try to restore this backup, it errors immediately as the tables that back it don't exist yet.
Is there any quick fix to this issue? I'm dumping the database via PHPMyAdmin.
Thanks!
Rob
phpMyAdmin generates incorrect backups (no table locks or REPEATABLE READ isolation level), so for serious backups you should really use mysqldump. It is also much, much faster.
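A plain mysqldump run along these lines avoids the view-ordering problem, because mysqldump writes a temporary stand-in (a dummy table or view, depending on version) for each view and only emits the real view definitions at the end of the dump. The database name is a placeholder, and --single-transaction assumes InnoDB tables:
mysqldump --single-transaction --routines --triggers production_db | gzip > production_db.sql.gz
# restore with:
gunzip < production_db.sql.gz | mysql production_db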