I'm experiencing a weird problem reading data from a table called nodes. Any SELECT query takes forever to execute; I don't even know whether it will eventually return, but it doesn't look like it. It used to return quickly before, and nothing has changed as far as I know: no new records have been inserted and none have been deleted either. Here are a couple of queries that I tried:
select count(1) from nodes;
select node_id, type from nodes where node_id='abc';
node_id is the primary key of the nodes table, if that helps. Up until the previous day, all SELECT queries returned in no time, on the order of 0.01 ms.
My guess is that somehow a lock has been placed on the table, preventing my queries from proceeding.
I'd appreciate it if someone could tell me, or give me a pointer on, how to find locks on a particular table in MySQL 5.0.90-log on FreeBSD.
What could be other possibilities?
Thanks a bunch.
I escalated this problem to a senior staff member in our group. He figured out that the file system had filled up when mysqld tried to write to its log. This was solved by stopping mysqld, creating a symbolic link pointing the log to a file on a different file system with plenty of free space, and starting mysqld again; after that, all SELECT queries ran without blocking.
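For anyone else hitting the same symptom, a couple of statements that can help spot a lock on a particular table (a sketch; I haven't verified each one against 5.0.90):
SHOW OPEN TABLES WHERE In_use > 0;   -- tables that some thread currently has open and locked
SHOW ENGINE INNODB STATUS;           -- if the table is InnoDB: lock waits and active transactions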
Try looking at the processes on your server; you may have to kill some of them to free the table.
Here is an interesting link about that:
http://mysqlpreacher.com/wordpress/2009/07/mysql-processlist-showkill-processes/
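For example, something along these lines (1234 is just a placeholder for the Id column from the processlist output):
SHOW FULL PROCESSLIST;   -- every connection, its state, how long it has been running, and the statement text
KILL 1234;               -- terminate the connection that is holding things up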
First, try restarting the MySQL service. If that doesn't help, rebuild the indexes.
Maybe the indexes got corrupted. Try rebuilding all indexes.
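A couple of standard ways to do that, assuming the table is called nodes as in the question (which one applies depends on the storage engine):
REPAIR TABLE nodes;      -- MyISAM only: rebuilds the index file
OPTIMIZE TABLE nodes;    -- rebuilds table data and indexes (for InnoDB this is mapped to a table recreate plus analyze)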
Related
From time to time, some indexes in our tables get broken, the DB starts consuming 100% CPU, and after a while it gets completely stuck. Even simple queries won't finish, and restarts don't help.
What I've found is to either drop and recreate the indexes one by one (which can take a very long time and a lot of investigation) or just run alter table mytable engine=innodb; on the suspicious table. This actually works quite well: it fixes everything and things get back to normal. But I have no idea what actually happens in the background or why it helps. Also: would it help to do this manually once a month? Is it a good idea to automate this? Is there some way to do a DB health check?
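For what it's worth, a rough sketch of what a periodic health check could look like (mytable is just the placeholder name from the question; all three statements lock or rebuild the table, so they are not free on a big table):
CHECK TABLE mytable;                 -- verifies the table and its indexes and reports corruption
ANALYZE TABLE mytable;               -- refreshes the index statistics the optimizer uses
ALTER TABLE mytable ENGINE=InnoDB;   -- full rebuild of the table and every secondary index (what the question already does)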
A guess...
You have an older version of MySQL/Percona, one that either does not have "persistent statistics" or does not have it enabled.
And you have a nasty query that sometimes leads the Optimizer to pick the wrong query plan.
The quick fix (that may or may not work) is to run ANALYZE TABLE on the table(s) in the slow query.
A better fix may be to upgrade the version.
Meanwhile, let's see the query, its EXPLAIN, and SHOW CREATE TABLE for each table involved. There may be a way to reformulate it to be less flaky.
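Roughly, with mytable, col1, and col2 standing in for the real names (which we haven't seen yet):
ANALYZE TABLE mytable;                                -- rebuild the index statistics the optimizer relies on
EXPLAIN SELECT col1 FROM mytable WHERE col2 = 'x';    -- the plan actually chosen for the slow query
SHOW CREATE TABLE mytable;                            -- columns, indexes, and storage engine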
Apologies, but I really don't have much information for this question.
I have a single MySQL MyISAM table holding around 80K records (continually increasing).
Today it suddenly stopped responding.
I can't even run a single query (e.g. SELECT * FROM table LIMIT 1); the server just keeps executing and looks like it will never finish.
I can't dump the table to make a backup.
However, other tables in the same database, using the same engine (MyISAM), are working just fine.
I'm not sure where to go from here. Not sure if it's a deadlock or something else.
All the data in that table is really important. Any direction to help me identify the problem would be really appreciated. For example, is there any command to check whether the table is corrupt, and for what reason?
UPDATE:
I can't use CHECK TABLE either; it also takes forever to execute.
UPDATE:
I did some research and came across REPAIR TABLE.
However, it's suggested that I should make a backup first.
As I can't make a backup of this table, would it be OK to use the REPAIR command anyway?
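For reference, the statements involved look roughly like this (mytable is a placeholder; REPAIR TABLE applies to MyISAM tables like this one):
CHECK TABLE mytable;             -- reports whether the table or its indexes are corrupt
REPAIR TABLE mytable;            -- rebuilds a damaged MyISAM index file
REPAIR TABLE mytable USE_FRM;    -- last resort, only if the normal repair fails: recreates the index file from the .frm definition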
SOLVED:
Following Cristian's help, I used the SHOW PROCESSLIST; command and saw that there was a process in the 'Copying to tmp table' state that was holding up another process.
So I used KILL <process id> to kill that process, and everything was released back to normal.
Cheers
Chanon
Sorry, but I can't comment on your question... :)
Exactly which version of MySQL are you running, 5.1.xx?
Can you post your SHOW PROCESSLIST; output?
UPDATE: Chanon, after this event, and to prevent this problem from recurring, you should review and optimize the query that puts MySQL into the "Copying to tmp table" state, in order to avoid slowness and the risk of a "disk full" on your temporary partition.
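One way to spot such a query (the query and the variable check are just illustrations, not taken from this thread):
EXPLAIN SELECT col1, COUNT(*) FROM mytable GROUP BY col1;   -- "Using temporary" in the Extra column means a tmp table is built
SHOW VARIABLES LIKE 'tmp_table_size';                       -- in-memory tmp tables larger than this (and max_heap_table_size) spill to disk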
What can be done to identify the reason for DB slowness?
When I ran the query in the morning, it ran quickly and I got the output.
When I ran the same query an hour later, it took more than 2 minutes.
What can be checked to identify this slowness?
All the tables are properly indexed.
If it's just a single query which is running slowly, EXPLAIN SELECT... as mentioned by arex1337 may help you see the reason.
It would also be worth looking at the output of e.g. vmstat on the box whilst running the query to see what it's doing - you should be able to get a feel for whether the machine is swapping, IO-bound, CPU-bound etc.
Check also with top to look for any rogue processes hogging CPU time.
Finally, if the machine is using RAID, it's possible that, if a drive has failed, the RAID array could be in a degraded state, which could make disc access slower (this is only applicable in certain RAID configurations, but worth considering and ruling out).
You can use EXPLAIN <your query> to get information about how MySQL executes your query. Maybe you get some hints about why it's slow.
EXPLAIN SELECT ... FROM ... WHERE ...;
Also, maybe you just have a slow query, and it was fast the second time because the result was cached?
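A quick way to rule the query cache in or out (this assumes the query cache is enabled at all; the query itself is a placeholder):
SHOW VARIABLES LIKE 'query_cache%';                       -- is the cache on, and how big is it?
SELECT SQL_NO_CACHE col1 FROM mytable WHERE col2 = 'x';   -- re-run the query bypassing the cache to get an uncached timing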
How much time does MySQL need to build an index on a table with 30,000,000 entries that are strings of length 256?
At the moment it seems to be taking hours, and I don't know how long I should wait before concluding that MySQL has simply failed to build the index.
You can run SHOW PROCESSLIST \G in the mysql console to watch its state. I had a similar problem just a couple of hours ago, though my table was much smaller.
Here is a list of thread states you will definitely need. After an hour of waiting, I realized that my ALTER TABLE / CREATE INDEX was in the Locked state; I had to restart mysqld and run the statement again. That time the index was built in 15 minutes.
By the way, I recommend running the index creation from the mysql console; GUI tools may add their own surprises to the process.
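As a rough sketch, with made-up table and column names:
-- session 1 (mysql console): the index build being timed
ALTER TABLE entries ADD INDEX idx_value (value);
-- session 2: the State column ("copy to tmp table", "Repair by sorting", "Locked", ...) tells you
-- whether the first session is still doing work
SHOW PROCESSLIST \G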
It could easily take hours; it all depends on the machine specs, load, etc. To see whether it has failed, check something like top or watch your hard drives: if they're going mad, it's still indexing.
Depending on your OS, you can check for disk activity (i.e. whether it is reading/writing the DB files) to find out whether it has failed or not.
I have a long-running process in MySQL. It has been running for a week. There is one other connection, to a replication master, but I have halted slave processing so there's effectively nothing else going on.
How can I tell if this process is still working? I knew it would take a long time which is why I put it on its own database instance, but this is longer than I anticipated. Obviously, if it is still doing work, I don't want to kill it. If it is zombied, then I don't know how to get the work done that it's supposed to be doing.
It's in the "Sending data" state. The table is an InnoDB one but without any FK references that are used by the query. The InnoDB status shows no errors or locks since the query started.
Any thoughts are appreciated.
Try "SHOW PROCESSLIST" to see what's active.
Of course if you kill it, it may then want to take just as much time rolling it back.
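One rough way to tell whether it is still doing work rather than being stuck (the counter is server-wide, but with nothing else running it reflects this query):
SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';   -- run this a couple of times, a minute or so apart;
                                              -- if the counter keeps climbing, the query is still scanning rows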
You need to kill it and come up with better indices.
I did a job for a guy. Had a table with about 35 million rows. His batch process, like yours, had been running a week, with no end in sight. I added some indexes, made some changes to the order and methods of his batch process, and got the whole thing down to about two and a half hours. On a slower machine.
Given what you've said, it's not stuck. However, there is absolutely no guarantee that it will actually finish in anything resembling a reasonable amount of time. Adding indices will almost certainly help, and depending on the type of query, refactoring it into a series of queries that use temp tables could give you a huge performance boost. I wouldn't suggest waiting around for it to maybe finish.
For better performance on a database that size, you may want to look at a document-based database such as MongoDB. It will take more hard drive space to store the database, but depending on your current schema, you may get much better performance.