Table In Use - MySQL

I have a serious issue with an 'in use' table.
An admin for the site attempted to back up the database with mysqldumper but an important table remains in use. The entire site is down.
I have tried to repair the table from cPanel and phpMyAdmin, but no luck; the same 'file not found' error is returned.
The site is run through a rented web host, so shell access might be out of the question.
I have tried just about everything without luck.
I appreciate that it's not a lot to go on; any ideas on this one, please?

Run this command from the database directory (the one containing the table's files):
myisamchk --safe-recover --force [Table Name]
Depending on table size, this can take a long time. On large tables it can be much faster to simply restart the MySQL service first; that sometimes works, though it did not work for the OP.
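If you only have phpMyAdmin's SQL tab rather than shell access, the same repair can be attempted from SQL. A minimal sketch, assuming the damaged table is called mytable (substitute the real name); USE_FRM rebuilds the index file from the table definition, which is the usual workaround when a plain repair fails with a file-not-found error on a MyISAM table:

CHECK TABLE mytable;
REPAIR TABLE mytable;
-- If the plain repair still reports a missing file, rebuild from the .frm:
REPAIR TABLE mytable USE_FRM;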
(Yes, I know it's a year late. Traffic was terrible.)

Related

django - Records not saved in MySQL database but AutoIncrement gets increased

I'm facing a weird one here.
I'm using Django inside a Docker container. When the system tries to save some records, they don't end up in the database (mostly saving works!).
When I view the process in the debugger, the Django object gets saved, it gets an ID from the database, and inside the Django world everything seems fine.
But the record never shows up in the database! Even weirder: if I look at the table's meta options, I see that the auto-increment is increased for every record that should have been created. So it appears the database knows about the records but decides not to save them...
I tried it on my Windows machine; the server runs UNIX; both have the same issue.
I'm using django==2.0.9 and mysqlclient==1.3.13, connecting to MariaDB.
When I switch the table engine from InnoDB to MyISAM, it works.
It happens reproducibly for the django-axes tables, but I have experienced the issue in other parts of the system as well. The calls come to Django via the Django REST framework.
Frankly I have no idea where to start looking for the issue.
It occurs on Windows and on the server's UNIX; on another developer's UNIX it works.
I downgraded from the latest mysqlclient; no effect.
Google doesn't say anything, or I'm not asking the right questions.
The MySQL error log keeps silent.
I also tried the latest MariaDB image 10.3.
OK: without ever finding out what I could do about it, I switched the database engine for these specific tables to MyISAM, and now it works. Super weird.
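For what it's worth, an increased auto-increment with no row is exactly what a rolled-back transaction looks like: InnoDB does not roll the counter back, while MyISAM ignores transactions entirely, which would explain why switching engines appears to fix it. A minimal sketch that reproduces the symptom in plain SQL (table and column names are made up):

CREATE TABLE demo (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=InnoDB;
START TRANSACTION;
INSERT INTO demo (v) VALUES (1);  -- consumes id 1
ROLLBACK;                         -- the row disappears, the counter does not
INSERT INTO demo (v) VALUES (2);
SELECT id FROM demo;              -- one row, with id = 2, leaving a gap at 1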

Google Cloud SQL Incorrect Key File

On several occasions, I've tried to do a query on Google Cloud SQL that involves an order by statement, only to have the query fail with the error
Incorrect key file for table '/cloudsqltmp/#sql_44f4_1.MYI'; try to repair it
This sounds like the /cloudsqltmp/ partition is filling up with my temporary table. The result set isn't that big, however, and the program doing that query has done so on several other occasions, so I suspect that the space is actually filling up with someone else's temporary table. I was able to clear this by restarting the instance several times (I assume it finally gave me a new machine, or the space cleared up), but this seems very inelegant.
Is there a better way to handle this if it happens again?
Is there a better way to prevent this from happening?
If my assumption of what happened is wrong - what actually happened?
What I would recommend doing is:
Create a Google Compute Engine instance and run MySQL on it. That way, if you run into this problem again, you can use the solutions below.
1) Modify the /tmp partition to have more space. There is a very useful guide on how to do it.
2) Create a cron job to clear /tmp. If you are not comfortable with cron, there is a tool called tmpreaper; to install it, run sudo apt-get install tmpreaper. Also, use InnoDB instead of MyISAM; Google recommends it.
3) You are correct to assume that /tmp is getting full, since restarting the instance resolves the issue.
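If you do move to your own Compute Engine instance, a hedged sketch of what to check before resizing anything (these are standard MySQL variables; the 256 MB values are example numbers, not recommendations):

SHOW VARIABLES LIKE 'tmpdir';                        -- where temp tables spill to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';   -- how often they spill
-- MySQL uses the smaller of these two limits, so raise both together:
SET GLOBAL tmp_table_size = 256 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;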
I hope my recommendations were helpful.

MySQL large query issue

Hey guys, I had posted this in another question but I didn't get a helpful response, partly because I don't think I explained myself properly. So I'm going to try again.
I've been programming a back-end server in VB.NET, and it's been using a MySQL database to store information. Up until a couple of days ago I was using a web host's MySQL server for this.
I did not care to renew my web host, so I've moved everything to a home server to continue work on my program. I've got MySQL 5.5 installed (which is a newer version than the one on my previous web host), and so far everything is working perfectly except for one thing.
When starting up for the first time, this program sends a query about a million table inserts large. The query looks something like "INSERT into blah VALUES(1,1,1,1,1);INSERT into...." and so on. This used to take about 5-10 minutes on my web host (my server program was running on my home machine and sending the data over the net to the web host's MySQL).
Now that everything is local I was hoping this would be faster, but it didn't really matter; I just needed it to work. When I send this query, it just locks up for a minute or so and then returns a timeout. When I check the table in the database, it has loaded exactly 1000 items every time I try this.
I'm assuming this is some sort of settings issue. I've played with my my.ini to see if it would help; it didn't. I also tried switching to some of the pre-packaged configs like my-huge and my-innodb, and those did not help either. I would assume that if anything were wrong it would just take longer, not time out immediately after 1000 inserts.
And just for some background info, the server machine I'm using has a quad-core Core i7, 8 GB of RAM, and a 1 TB hard drive, running Windows 7.
Any help would be great. Thank you.
Your query is probably too large; you can tune MySQL's limit via the max_allowed_packet option.
You can also save some bytes by combining several insert queries into one, like this: INSERT INTO blah VALUES (1,1,1,1,1),(2,2,2,2,2),(3,3,3,3,3); (the VALUES keyword appears only once). Bear in mind that if this large combined query fails, all the rows in it fail together.
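A short sketch of both suggestions together (the table name blah comes from the question; the 64 MB value is just an example, not a recommendation):

SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- new connections pick this up
INSERT INTO blah VALUES
  (1,1,1,1,1),
  (2,2,2,2,2),
  (3,3,3,3,3);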
But in my opinion this is still not the smartest way to do it. If your application has to import a huge SQL dump on startup, it can simply invoke the mysql executable, like this: mysql -uroot -ppassword db_name < dump.sql, and you're done. That is probably the most efficient way to accomplish this task.

MySQL Table crashed. Should I repair daily?

Out of the blue, all of a sudden, one of my database tables crashed. This is not the first time; last time I used the repair table command and was lucky enough to fix it. But it has happened again: same table, same error, same solution.
Error:
1194: Table 'users' is marked as crashed and should be repaired
Do I need to repair my tables every day/week/month? Is there a permanent solution to this table-crashing problem? It's really scary! Please help.
This shouldn't happen normally; if your tables are crashing, it means something is wrong with your system. Likely a bad disk or bad DRAM (or one of lots of unlikely things, like a bad PCI-to-SATA bridge, etc.). If you have another system around, try migrating to that system and see if your errors continue.
You do have another equivalent system around, right? In case the primary DB crashes hard, and you need to restore a backup ASAP?
It could also be a bug in MySQL; make sure the version you are using doesn't have known bugs.
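As a concrete starting point, a hedged sketch of the check-and-repair cycle plus the usual permanent fix (the error message names the table users; "marked as crashed" is a MyISAM error, and converting to InnoDB assumes nothing in your application depends on MyISAM-only features):

CHECK TABLE users EXTENDED;   -- verify the damage first
REPAIR TABLE users;           -- the fix you have been applying by hand
-- Permanent option: InnoDB is crash-safe and recovers on restart by itself
ALTER TABLE users ENGINE = InnoDB;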
To debug this problem, like just about any other problem, start by looking in your logs. If your MySQL server runs on Windows, there are event logs for application and system, and I would look at those. If your MySQL server runs on Linux, logs are usually found under /var/log, and you usually have to be root to view them. A good way to view them on Linux is
tail <logfile> (where <logfile> is replaced by the real name of the file).
As you have been advised in another answer, you might have disk issues. Run a hardware diagnostic on the disks.
Whatever you decide to do, please do not depend on repairing a table, because one day you might not be able to. Using an Informix SE database, I once had to rebuild a table even though there was nothing apparently wrong with its data. It was a nightmare and took the better part of a weekend, and I could not export the database on which our test and development systems depend.

Help! Why did MySQL just screech to a halt?

Our relatively high-traffic website just screeched to a halt, and we're totally stumped. We run on Django and MySQL (InnoDB), and we're trying to figure out why it has all of a sudden become totally slow.
Here's what we know so far:
On our mysql server, a simple query (from django shell) runs fast.
On our app server, a simple query (from django shell) runs very slow.
Without having any details on the query or on the tables involved in the query, it is quite difficult to answer this question.
Most likely it is because of a lot of data in the table and a missing index on the field you are querying.
This would explain why it is slow on the production box, but fast on the dev box (since there's less data).
To answer the question better, could you provide us with more details? Table structure, query, number of rows in the table, etc.?
More assumptions: disk I/O on the app server could be a problem; maybe the log files in MySQL are not properly configured (especially with InnoDB, this could lead to problems). Maybe there's a load-heavy query running too often? Table locks when multiple users write to/read from the same tables?
As I said, without having more details, it is quite difficult to guess. But I hope, at least I could point you in the right direction.
Run EXPLAIN on the SELECT.
Study this page carefully:
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
Understanding the concepts on that page is key to properly indexing your tables.
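A minimal sketch of what that looks like in practice (the table and column names are hypothetical):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- If the type column says ALL, every row is scanned; add an index:
CREATE INDEX idx_orders_customer ON orders (customer_id);
-- Re-run the EXPLAIN and the type should change to ref.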
Thanks for the responses, everyone.
Turns out it was a DNS issue (which was a regression). MySQL is really stupid in that the default is to do a DNS lookup on each connection. Those lookups got really slow, which killed all the network flow between the app server and the DB server. It was as simple as adding "skip-name-resolve" to our my.cnf.
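For anyone verifying the same thing on their own server, a small sketch (the variable is standard MySQL; the my.cnf edit is the actual fix):

SHOW GLOBAL VARIABLES LIKE 'skip_name_resolve';  -- OFF means lookups still happen
-- The fix itself goes in my.cnf under [mysqld]:
--   skip-name-resolve
-- Caveat: with name resolution off, account hosts in GRANTs must be IP addresses.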
Are the 'mysql server' and 'app server' on the same box and talking to the same DB instance?
Your question suggests not, so I'd look for a problem on the network: start by pinging the database server from each box and comparing the results.
Once you've done that you'll need to be a little more specific about the problem - were the ping times the same, are you running the same query, etc...