MySQL crippling performance - indexes

Just wondering if anyone knows a quick way to check the health of some indexes on a table. The one we are having trouble with is quite a large table, but it has indexes so should be ok ("show indexes from mytable" shows them as present).
But it's going really slowly whenever we try to access this table, so we're wondering if we need to rebuild the indexes or something. None of us here are DBAs, so we'd really appreciate any tips; it's getting quite urgent :(
It's a MyISAM table by the way, dumped from a v4 DB to a v5 database.
Thanks

Run CHECK TABLE against the table to make sure it isn't corrupted.
Turn on slow query logging if it's not already on.
Run EXPLAIN on the slow queries to investigate why they're running slowly.
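A rough sequence for those checks, assuming the table is called mytable and using a made-up query (adjust names to your schema):
CHECK TABLE mytable;                                 -- verify the MyISAM table and its indexes aren't corrupted
SET GLOBAL slow_query_log = 'ON';                    -- MySQL 5.1+; on 5.0 use the log-slow-queries option in my.cnf instead
SET GLOBAL long_query_time = 1;                      -- log anything that takes longer than one second
EXPLAIN SELECT * FROM mytable WHERE some_col = 'x';  -- if the key column is NULL, no index is being used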

MyISAM tables do not always keep their index distribution statistics up to date. Because of this, you sometimes need to update them manually: http://dev.mysql.com/doc/refman/5.0/en/analyze-table.html
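For example (mytable is a placeholder for your table name):
ANALYZE TABLE mytable;   -- refreshes the key distribution statistics the optimizer uses to choose indexes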

Thanks for the help everyone, really appreciate it (I know it's been a week or so since I posted this; it's been a very busy time...). It turned out that the indexes were fine, but they were disabled. We think it happened because the backup we took crashed halfway through. Apparently a backup disables the indexes and then re-enables them afterwards; since it crashed, they never got re-enabled. Once we turned them back on, it's super quick, phew....
Hope that's useful for someone else
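In case it helps anyone else searching for this: a rough way to spot and fix disabled MyISAM keys (mytable is a placeholder):
SHOW INDEX FROM mytable;          -- disabled keys show up with "disabled" in the Comment column
ALTER TABLE mytable ENABLE KEYS;  -- rebuilds and re-enables the non-unique indexes that were left disabled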

Related

InnoDB: breaking and fixing indexes

From time to time some indexes in our tables get broken, the DB starts consuming 100% CPU, and after a while it gets completely stuck. Even simple queries won't finish, and restarts don't help.
What I found is to either drop and recreate the indexes one by one (which might take a loooong time and a lot of investigation) or just run ALTER TABLE mytable ENGINE=InnoDB; on the suspicious table. This actually works quite well; it fixes everything and things get back to normal. But I have no idea what actually happens in the background and why it helps. Also: would it help to do this manually once a month? Is it a good idea to automate this? Is there some way to do a DB health check?
A guess...
You have an older version of MySQL/Percona, one that either does not have "persistent statistics" or does not have it enabled.
And you have a nasty query that sometimes leads the Optimizer to pick the wrong query plan.
The quick fix (that may or may not work) is to run ANALYZE TABLE on the table(s) in the slow query.
A better fix may be to upgrade the version.
Meanwhile, let's see the query, its EXPLAIN, and SHOW CREATE TABLE for each table involved. There may be a way to reformulate it to be less flaky.
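A rough sketch of those quick checks, with mytable standing in for one of the tables in the slow query:
SHOW VARIABLES LIKE 'innodb_stats_persistent';   -- exists from MySQL 5.6 on; if it's missing or OFF, statistics are recalculated on the fly and can drift
ANALYZE TABLE mytable;                            -- recomputes the index statistics the optimizer relies on
SHOW CREATE TABLE mytable\G                       -- post this, along with EXPLAIN <your query>, so the plan can be reviewed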

Indexes broken after database copy/move

I recently consolidated several databases onto a single, much more powerful server. They each have several dozen tables, with the larger ones holding 2-6 million rows. I noticed that some queries that used to run in around 15ms were now taking a full 10 seconds to finish.
I ran mysqlcheck -c on the databases, which reported everything was okay with each table. I then tried to optimize the tables anyway. That did not work. What did work was manually deleting every single index and recreating it.
I'm a novice when it comes to database administration. Why isn't OPTIMIZE fixing the broken indexes? Is there hopefully a better way to do this than having to manually delete a little over 1000 indexes and recreate them?
Thanks for your help and replies.
In MySQL, indexes are balanced trees (B-trees). The engine updates them and keeps them optimized, so you should not normally need to delete and recreate them; performance will probably not change if you recreate them all.
Did you analyze the queries that are now slower? Try to optimize them and the indexes they use. Use EXPLAIN to show the query execution plan.
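For example (the table and column names here are invented; substitute one of your slow queries):
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- The key column shows which index (if any) the optimizer chose, and rows estimates how many rows it will scan.
-- If key is NULL on the new server, the statistics may be stale after the copy; ANALYZE TABLE orders; refreshes them without rebuilding the indexes.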

How do Indexes work in MySQL?

I found a lot of information on how indexes work in MySQL by looking at the following SO link: How do MySQL indexes work? However, I am facing a MySQL issue I cannot resolve, and I'm unsure whether it is related to indexing or not.
The problem is: I use multiple indexes in most of my tables, and everything seems to be working fine. However, when I restore an old backup over my existing data, the size of the DB keeps getting larger (it almost doubles each time).
Example: I was using a mysql db named DB1 last week, I made a backup and continued to use DB1. A few days later, I needed to continue from that backup db, so I restored it to DB1.
Before the restore, DB1's size was 115MB, but afterward it was suddenly 350MB.
Can anyone help shed some light on what might be happening?
This is not surprising. If you have lots of indexes, it's not unusual for them to take up as much space as the data itself.
When you are talking about 115MB vs. 350MB though, I'd guess the increase in query speed you get is probably worth that extra couple hundred megs of disk space. If not, then you might want to take a closer look at your indexes and make sure they are all actually providing some benefit.
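One way to see how much of that space is index versus data, so you can judge whether each table's indexes earn their keep (DB1 is the schema name from the question; sizes are approximate for InnoDB):
SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 1) AS data_mb,   -- size of the row data
       ROUND(index_length / 1024 / 1024, 1) AS index_mb   -- size of all indexes on the table
FROM information_schema.tables
WHERE table_schema = 'DB1'
ORDER BY index_length DESC;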

MySQL 24x7 - InnoDB ALTER TABLE blocks (TABLE LOCK)

We are trying to minimize (maintenance) downtime for our MySQL-based application.
It seems that InnoDB Hot Backup will give us the ability to take regular backups without stopping the server; master/slave replication will give us failover capability (losing a few seconds of data due to replication lag is not great, but not a showstopper either).
So much for backups and unexpected downtime. Now on to expected downtime -
As far as I understand from reading the online documentation and books, an ALTER TABLE on an InnoDB table requires a table lock, blocking all reads and writes to that table. Effectively this means downtime for the application, and some large tables may take hours to alter.
Are there any known workarounds for this? The perfect workaround would of course be a non-blocking ALTER TABLE, but anything that makes ALTER TABLE faster is also interesting.
Thanks in advance!
PS - commercial (non-free) tools would be ok also, free solutions are of course also welcome
Since you have replication set up, it is normally possible to do some trickery with ALTER TABLE: run the ALTER on the slave, let the slave catch up after it is done, swap roles, and then run the ALTER on the former master. This doesn't work for all ALTER TABLE commands, but it can handle the majority of them.
There is also a third-party tool that can do this, but I'm not sure how commonly it is used, how well it works, etc.
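A rough sketch of the slave-first approach, assuming a single slave and a simple ADD COLUMN (table and column names are made up, and not every ALTER can be applied this way):
-- On the slave: run the ALTER there first; replication lags while it runs, then catches up.
STOP SLAVE;
ALTER TABLE mytable ADD COLUMN new_flag TINYINT NOT NULL DEFAULT 0;
START SLAVE;
SHOW SLAVE STATUS\G               -- wait until Seconds_Behind_Master is back to 0
-- Then swap master/slave roles (point the application at the old slave) and repeat the ALTER on the former master.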
The best workaround would be to not alter your tables.
The only time a schema change should be required is if you're adding functionality, or somehow forgot an index.
If you're adding functionality, you'll likely have downtime anyway, to stage your production server.
If you forgot an index, then the database is likely slow anyway, so your users shouldn't mind downtime to fix the performance issue. You should run all your queries through an EXPLAIN to make sure you have the proper indexes declared already.
If you're afraid that you'll be altering tables frequently you might want to re-examine your schema.

How to tell if a MySQL process is stuck?

I have a long-running process in MySQL. It has been running for a week. There is one other connection, to a replication master, but I have halted slave processing so there's effectively nothing else going on.
How can I tell if this process is still working? I knew it would take a long time which is why I put it on its own database instance, but this is longer than I anticipated. Obviously, if it is still doing work, I don't want to kill it. If it is zombied, then I don't know how to get the work done that it's supposed to be doing.
It's in the "Sending data" state. The table is an InnoDB one but without any FK references that are used by the query. The InnoDB status shows no errors or locks since the query started.
Any thoughts are appreciated.
Try "SHOW PROCESSLIST" to see what's active.
Of course if you kill it, it may then want to take just as much time rolling it back.
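For instance (the process id 123 is hypothetical; take it from the processlist output):
SHOW FULL PROCESSLIST;            -- check the State and Time columns for the long-running query
SHOW ENGINE INNODB STATUS\G       -- look for lock waits and whether the row operation counters are still climbing
KILL 123;                         -- only if you give up on it; InnoDB may spend a long time rolling back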
You need to kill it and come up with better indices.
I did a job for a guy. Had a table with about 35 million rows. His batch process, like yours, had been running a week, with no end in sight. I added some indexes, made some changes to the order and methods of his batch process, and got the whole thing down to about two and a half hours. On a slower machine.
Given what you've said, it's not stuck. However, there is absolutely no guarantee that it will actually finish in anything resembling a reasonable amount of time. Adding indexes will almost certainly help, and, depending on the type of query, refactoring it into a series of queries that use temp tables could give you a huge performance boost (see the sketch after this answer). I wouldn't suggest waiting around for it to maybe finish.
For better performance on a database that size, you may want to look at a document-based database such as MongoDB. It will take more hard drive space to store the database, but depending on your current schema, you may get much better performance.
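As a purely illustrative sketch of the temp-table idea (all table and column names are invented):
-- Stage the expensive filter once into a temporary table, index it, then join against it.
CREATE TEMPORARY TABLE tmp_recent_orders AS
    SELECT id, customer_id
    FROM orders
    WHERE created_at >= '2011-01-01';
ALTER TABLE tmp_recent_orders ADD INDEX (customer_id);
SELECT c.name, COUNT(*) AS order_count
FROM tmp_recent_orders t
JOIN customers c ON c.id = t.customer_id
GROUP BY c.name;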