MySQL table crashed. Should I repair daily? - mysql

Out of the blue, one of my database tables crashed. This is not the first time; last time I used the REPAIR TABLE command and was lucky enough to fix it. But it has happened again: same table, same error, same fix.
Error:
1194: Table 'users' is marked as crashed and should be repaired
Do I need to repair my tables every day/week/month? Is there a permanent solution to this table-crashing problem? It's really scary! Please help.

This shouldn't happen under normal operation; if your tables are crashing, something is wrong with your system. It is most likely a bad disk or bad DRAM (or one of many less likely things, like a bad PCI-to-SATA bridge). If you have another system around, try migrating to it and see if the errors continue.
You do have another equivalent system around, right? In case the primary DB crashes hard and you need to restore a backup ASAP?
I suppose it could also be a bug in MySQL; make sure the version you are using doesn't have known bugs.

To debug this problem, or just about any other, start by looking in your logs. If your MySQL server runs on Windows, there are event logs for application and system, and I would look at those. If your MySQL server runs on Linux, logs are usually found under /var/log, and you usually have to be root to view them. A good way to view one on Linux is
tail <filename> (where <filename> is replaced by the real name of the log file).
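For example, on many Linux installs the MySQL error log ends up somewhere like this (the exact path is an assumption; check the log-error setting in your my.cnf):
# Show the last 50 lines of the MySQL error log
sudo tail -n 50 /var/log/mysql/error.log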
As you have been advised in another answer, you might have disk issues. Run a hardware diagnostic on the disks.
Whatever you decide to do, please do not depend on repairing the table, because one day you might not be able to. Using an Informix SE database, I once had to rebuild a table even though there was nothing apparently wrong with its data. It was a nightmare and took the better part of a weekend, during which I could not export the database that our test and development systems depend on.

Related

How to Deploy Database Changes to a Live Server?

I have a script that is part of my deployment process to push DB changes to the production server. If the script corrupts my data for some reason (a bad update), it is tough to recover.
One way to solve this is to shut the application down for users while updating; if a problem occurs, I just go back to the backup I made before deploying.
But I have heard of others who deploy and keep their site live... how would you go about doing this, and if you failed, how could you recover the data that came in after you took your pre-deployment backup?
This is a tricky problem in general, as with many things in database administration. There are basically three ways to approach this:
Avoid failure at all costs.
Lock everything down (and make the upgrade really fast).
It's OK to lose data.
If you have a complex system, isolate your components according to these or similar categories.
Have a staging system to test upgrades. The staging system is more or less a copy of the production system; it's separate from the test system. Another thing is to have an audit or logging system that you can refer to if you need to replay data.
The real problem is if you notice much later that your upgrade was faulty. Then you're quite screwed.
How big is your database? Can you afford to lose data that customers entered while they were using the site, before you went back to the backup? Every deployment plan involves compromises somewhere, and you've got to decide which compromises are the least painful for what you want to do.
For simple websites running on plain PostgreSQL, you can disconnect clients and run the entire update in one big transaction. If any part fails, the whole thing rolls back and it's as if you never did anything. Sadly this doesn't work quite the same for other databases, but with Flashback (or whatever Oracle calls it) you can get something similar.
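A minimal sketch of that single-transaction deployment (the users table and its columns are hypothetical; PostgreSQL's transactional DDL is what makes this work):
BEGIN;
-- Schema change and data backfill run as one atomic unit
ALTER TABLE users ADD COLUMN last_login TIMESTAMP;
UPDATE users SET last_login = NOW() WHERE last_login IS NULL;
-- Had any statement failed, the transaction would abort and a ROLLBACK
-- would leave the database exactly as it was before the deployment
COMMIT;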
For bigger, complex websites running on top of a replicated database set, things get much more complex much more quickly. Where I've worked we've used Slony, and it doesn't play nicely with others when you're deploying DDL changes; you pretty much HAVE to take all the customers offline while deploying DDL. However, the downtime is measured in minutes for us, even with databases approaching 1 TB in size.

MySQL Table Sync

I am looking for suggestions on the best way to sync MySQL tables (MyISAM) between two different databases.
Currently we use Navicat to sync tables from our production server to our test server, but we have been running into many problems. Just about every day we hit a sync failure on some table.
We often get the error below, not to mention that Navicat spams our e-mail with both successful and unsuccessful syncs (is there any way to receive only the unsuccessful ones?). I also know that altering a table in any way causes a sync failure, so any alteration must be made on the master first. (This makes sense, but is there any way around it?)
-[Sync] Finished - Unsuccessful Synchronization: List index out of bounds (0)
Is there any reason not to use the Navicat sync? My boss suggested using MySQL replication instead, but my first concern is finding out why we have so many problems; it seems like we are simply misusing the sync tool, and that is what is giving us all these failures.
Thanks.
sync tables from our production server to our test server
It sounds like you're trying to replicate your production environment in your test environment, right?
A common pattern to follow in this situation is using a tool like mysqldump to create a backup of the entire database, then later import the backup into the test environment. By doing a complete backup and restore, you're not only ensuring that you have at least one backup method that's known to work, you're also ensuring that the test database can never contain modifications that a sync tool might miss. (Sync tools generally require a primary or unique key on each table to operate effectively.)
The backup and reimport process should be an easy thing for you to automate. At my workplace, we perform a mysqldump-based database dump every night, and perform optional imports into each developer's personal copy of the dev environment early the following morning.
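A sketch of that nightly cycle (host names, user names, and the database name are placeholders; with MyISAM tables the table-locking option is used, since --single-transaction only helps transactional engines):
# On the production host: dump the whole database to a file
mysqldump -h prod-host -u backup_user -p --lock-tables mydb > mydb.sql
# On the test host: load the dump, replacing the test copy
mysql -h test-host -u test_user -p mydb < mydb.sql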

How to make my MySQL databases available at all times? Some expert DB advice needed!

I've been doing a lot of research, reading up on replication, etc., but I'm just not sure which MySQL solution would work.
This is what I'm looking at:
when my MySQL server fails for some reason, or when certain queries take a really long time to execute and lock some tables, I want the other insert/update/select queries to still run at normal speed, without waiting for locks to be released or for the main database to come back up. I'm thinking there should be a second MySQL server for this; is what I described possible, and would it involve a lot of change to my existing programming logic?
when my database is being backed up, I would still like my site to function normally; all inserts/selects/updates should work as usual.
when I need to alter a large table, I wouldn't like it to affect my application; there should be a backup server to work from.
So what do I need to do to get all this done, and would it require changing a lot of existing code to suit the new setup? [My site has a lot of reads and writes.]
There's no easy way. You're asking for a highly-available MySQL-based setup, and that requires a lot of work at the server and client ends.
Some issues, for example:
when I need to alter a large table, I wouldn't like it to affect my application, there should be a backup server to work from.
If you're altering the table, you can't trivially create a copy to work from during the update. What about the changes that are made to your copy while the first update is taking place?
Have a search for "High Availability MySQL". It's mostly a solved problem, but the solution depends heavily on your exact requirements. You can't just demand "I want my SQL server to run at full speed, always, forever, no matter what I throw at it".
Not a MySQL-specific answer, but a general one: keep a read-only copy of your DB for the site to render from, synced from the master DB regularly. That way your site keeps working even when the master is under load or locked by inserts/deletes. For efficiency, keep this copy as denormalized as you can.
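In MySQL terms, that read-only copy is usually a replication slave. A rough sketch of pointing one at the master, run on the slave (host, credentials, and binlog coordinates are placeholders; MySQL 5.0-era syntax):
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;
-- Keep ordinary application accounts from writing to the copy
SET GLOBAL read_only = 1;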

What causes tables to need to be repaired?

Every so often I get an error saying one of my tables "is marked as crashed and should be repaired". I then do a REPAIR TABLE and repair it. What causes them to be marked as crashed and how can I prevent it? I am using MyISAM tables with MySQL 5.0.45.
There can be a few reasons tables get corrupted; it is discussed in detail in the manual.
To combat it, the following things work best:
Make sure you always shut MySQL down properly
Consider using the --myisam-recover option to automatically check/repair your tables when shutdown wasn't done properly (see the sketch after this list)
Make sure you are on the most recent version, as known corruption bugs are normally fixed ASAP
Double check your hardware with a test to see if it is causing problems. Tools like sysbench and memtest86 can often help verify if things are working as they should.
Make sure nothing is touching the data directory externally, such as virus checkers, backup programs, etc...
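A minimal my.cnf sketch for the --myisam-recover option mentioned in the list (BACKUP,FORCE is one reasonable setting, not the only one):
[mysqld]
# Check MyISAM tables as they are opened and repair any that were not
# closed cleanly, keeping a backup of the data file in case rows are lost
myisam-recover = BACKUP,FORCE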
Usually it happens when the database is not shut down properly, e.g. after a system crash or a hardware problem.
I used to get errors from MySQL just like yours.
I solved my problems this way:
Convert all MyISAM tables to InnoDB (you can search "MyISAM vs InnoDB" on stackoverflow.com and in search engines to find out why); the conversion statement is shown below
For the best performance from MySQL, use the third-party program MONyog (MySQL Monitor and Advisor) and check its performance tips
These two steps saved me. I hope they help you too.
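The conversion in the first step is a single statement per table (the table name here is only an example):
-- Rebuilds the table using the InnoDB storage engine
ALTER TABLE users ENGINE = InnoDB;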
It could be many things, but the MySQL Performance Blog mentions bad memory and OS or MySQL bugs as causes of hidden corruption. That article and another one also list several things to keep in mind when doing crash recovery.

How do I oversee my MySQL replication server?

I've had a tough time setting up my replication server. Is there any program (OS X, Windows, Linux, or PHP, no problem) that lets me monitor and resolve replication issues? (BTW, for those following along, I've been working on this issue here, here, here and here)
My production database is several megs in size and growing. Every time replication stops and the databases inevitably begin to slide out of sync, I cringe. My last resync from a dump took almost 4 hours round trip!
As always, even after sync, I run into this kind of show-stopping error:
Error 'Duplicate entry '252440' for key 1' on query.
I would love it if there were some way to closely monitor what's going on, and perhaps let software deal with it. I'm even all ears for service companies that could help me monitor my data better, or for an alternative way to mirror altogether.
Edit: going through my previous questions, I found this, which helps tremendously. I'm still all ears on the monitoring solution.
To monitor the servers we use the free tools from Maatkit ... simple, yet efficient.
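As a hedged example of what the Maatkit checks look like in use (the DSN host and credential values are placeholders; consult the Maatkit documentation for the options your version supports):
# Checksum the tables on the master and a slave so any drift shows up
mk-table-checksum h=master-host,u=maatkit,p=secret h=slave-host,u=maatkit,p=secret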
Binary replication is only available in 5.1, so I guess you've got some guts running it. We still use 5.0 and it works OK, but of course we've had our share of issues with it.
We use master-master replication with MySQL Proxy as a load balancer in front, and to keep it free of replication errors:
we removed all unique indexes
for the few cases where we really needed unique constraints, we made sure to use REPLACE instead of INSERT (MySQL Proxy can be used to guard for proper usage... it can even rewrite your queries); see the example below
scheduled scripts doing intensive reports always access the same server (not the load balancer), so that dangerous operations are replicated safely
Yeah, I know it sounds simple and stupid, but it solved 95% of all the problems we had.
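A minimal illustration of the REPLACE trick from that list (the table and column names are hypothetical):
-- On a master-master pair, a plain INSERT can collide with a row already
-- inserted on the other master and stop replication with a duplicate-key error:
INSERT INTO settings (user_id, theme) VALUES (42, 'dark');
-- REPLACE first deletes any row with the same unique key, then inserts,
-- so the statement replays cleanly on both masters:
REPLACE INTO settings (user_id, theme) VALUES (42, 'dark');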
We use MySQL replication to replicate data to close to 30 servers. We monitor them with Nagios. You can probably check the replication status and use an event handler to restart it with SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; START SLAVE;. That will clear the error, but you'll lose the insert that caused it.
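Spelled out, the recovery sequence (run on the slave) looks like this; it skips exactly one errored statement, so the conflicting insert is lost, as noted above:
STOP SLAVE;
-- Skip the single statement that caused the duplicate-key error
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
-- Confirm Slave_IO_Running and Slave_SQL_Running both say "Yes"
SHOW SLAVE STATUS\G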
About the error: do you use MEMORY tables on your slaves? I ask because the only time we ever got a lot of these errors, they were caused by a bug in the latest releases of MySQL: DELETE FROM table WHERE field = value would delete only one row in a MEMORY table even when multiple rows matched.
MySQL bug description