MySQL queries hang; database down

I have one big dataset (1.6 GB) and I am writing MySQL queries against it. Some queries take a long time (about 10 seconds). After I sent my app, which sends these queries, to a few people (about 20), something went wrong and the database is now "corrupted" (I don't know a better word). I can't send queries to this database anymore (every query hangs and the process never ends).
I have tried to repair my database using "Repair Database" in cPanel, but it returns:
Internal Server Error
500
No response from subprocess ( (cpanel)): The subprocess reported error
number 72,057,594,037,927,935 when it ended. The process dumped a core
file.
Do I have to delete the database (I suppose I can; I didn't try it), or is there some way to restore (restart) the database?
I am really not good at database management.

In my case, you just need to update your cPanel, or you may have duplicate cPanel RPMs. Try:
/usr/local/cpanel/scripts/autorepair fix_duplicate_cpanel_rpms
/usr/local/cpanel/scripts/upcp --force
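If the real problem is hung MySQL queries rather than cPanel itself, it can also help to inspect and kill the stuck threads directly before deleting anything. A minimal sketch, assuming shell access to the server (the thread id 1234 is hypothetical; use the Id column from your own process list):

# list running threads and how long each one has been running
mysql -u root -p -e "SHOW FULL PROCESSLIST;"
# kill a hung thread by its Id (1234 is hypothetical)
mysql -u root -p -e "KILL 1234;"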

Related

Drop table times out for non-empty tables; already adjusted timeout interval

I'm having trouble deleting a table in MySQL v8.0 (on Windows 10), either from MySQL Workbench or via a Python script (using mysql-connector-python). In both cases, the DROP TABLE command times out with "Error Code: 2013. Lost connection to MySQL server during query".
I previously set the DBMS connection read timeout interval to 500 seconds to try to work around this, but no luck.
The table in question has several hundred rows of data, and the entire .ibd file is 176 KB. I suppose deleting the .ibd file directly isn't the greatest database practice?
I can create a new table and delete it, no problem. I'm running the MySQL server locally.
Any suggestions on what to try next?
#obe's suggestion to restart the server resolved the issue. So it seems that particular table got locked due to access from both Workbench and Python. The database itself was not locked, since I could still create/drop other tables.
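For anyone who hits this on MySQL 8.0 and would rather not restart, it is usually possible to see which session still holds the metadata lock on the table. A minimal sketch, assuming performance_schema is enabled (the default in 8.0); the thread id 42 is a hypothetical value taken from the first query's output:

# show sessions holding or waiting on table metadata locks
mysql -u root -p -e "SELECT object_schema, object_name, lock_type, lock_status, owner_thread_id FROM performance_schema.metadata_locks WHERE object_type = 'TABLE';"
# map the owning thread to its connection id, then KILL that connection
mysql -u root -p -e "SELECT processlist_id FROM performance_schema.threads WHERE thread_id = 42;"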

MySQL full lock on big import

I have one MySQL server with 5 databases.
I was using phpMyAdmin's CSV import to load a very large amount of data into one table of one database.
I understand that all other operations on this machine may get slower due to the processing load, but MySQL is simply not responding to any other simultaneous request, even on another table or in another database.
Because of this, Apache doesn't answer any request that needs a database connection (it keeps loading forever).
After the import is finished, Apache and MySQL go back to working normally; I don't need to restart anything or execute any other command.
My question is: is this behavior normal? Should MySQL stop answering all other requests because of a single giant one?
I'm afraid that if I have a big query running in one database on this server, all my other databases will be locked as well and my applications will stop working.
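Not a direct answer to whether it is normal, but one way to limit the impact is to run big imports from the shell in smaller transactions instead of one giant phpMyAdmin request. A minimal sketch, assuming local_infile is enabled on the server; the file, database, table, and password are all hypothetical:

# split the CSV into 100k-line chunks so each load is a short transaction
split -l 100000 /tmp/big.csv /tmp/chunk_
for f in /tmp/chunk_*; do
  mysql --local-infile=1 -u root -p'secret' mydb -e \
    "LOAD DATA LOCAL INFILE '$f' INTO TABLE big_table FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';"
done

Each statement stays short, so other sessions get a chance to run between chunks.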

MySQL is really unstable - Considering migration

How can I make my database server more robust? I know this is a pretty general question, but I'm getting all sorts of errors and downtime using MySQL with a simple service that does some SELECTs and 2 INSERTs. Nothing complex, no JOINs...
At first I got my host blocked from the DB server because of too many connections, so I increased the max_used_connections parameter. And my client is actually closing its connections and handling the pool correctly.
For some days it worked OK. Today I woke up and the MySQL server was down. I tried to restart it with service mysql restart and it failed. I tried to connect and got ERROR 1040: Too many connections.
So I couldn't even restart the server.
I did killall mysqld and tried to start the server with service mysql start. The service wouldn't start. I ran su - mysql -s /bin/sh -c "/usr/bin/mysqld_safe > /dev/null 2>&1 &" and the server finally started.
I also increased the max_connections limit, because the server would just die a few seconds after starting today.
Then one table was corrupt: ERROR 144 (HY000): Table '<table_name>' is marked as crashed and last (automatic?) repair failed.
I tried REPAIR TABLE <table_name>, but after some time the server died again with ERROR 2006 (HY000): MySQL server has gone away. I repeated the same process (from killall onward) about 3 times, and every time the server died.
Now I'm running REPAIR TABLE <table_name> QUICK and the server doesn't seem to be dying, but it's taking a really long time to fix a 200k-row table.
By the way, I only have two tables: one with 200k rows and another with 11M rows. I peak at about 2,000 concurrent users, which results in about 100 queries per second at most.
Is it normal for MySQL to crash this easily? Should I migrate to another, more scalable database like Cassandra? I actually prefer a relational database for my case (a results table has a request_id, an FK to a requests table, which in turn has a key_id FK to a keys table), but I wouldn't mind losing the relational benefits in exchange for genuinely good uptime.
I'm not sure that your question ("Is it normal that MySQL crashes this easily? Should I migrate to other, more scalable databases like Cassandra?") can be answered objectively, so here is my subjective answer:
No, it's not normal, and you probably shouldn't migrate, since you want a relational database and MySQL is well supported and really well known.
More info about your problem (also subjective):
MySQL is really, really stable. I have MySQL services that have been running for years with regular use without issues, and I'm sure many others have had similar experiences. Many large companies use MySQL with great success.
What concerns me is that killall and a service restart didn't work, especially since you are running an Ubuntu LTS version that is over 2 years old and hence pretty stable itself. While I cannot say this with absolute certainty, it looks like you may have hardware issues, or at the very least a corrupt hard drive. You may also have software, kernel options, or settings that are causing problems.
My suggestion would be to do a full hardware check (at a minimum, check the RAM and HDD) and then do a full reinstall.
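One more subjective note: ERROR 144 is a MyISAM error, and MyISAM tables are not crash-safe. Besides the hardware check, a crashed table can be repaired offline with myisamchk, and converting the tables to InnoDB makes future crashes recoverable. A minimal sketch; the data directory path and table name are hypothetical, and mysqld must be stopped before running myisamchk:

# offline repair of a crashed MyISAM table (run while mysqld is stopped)
myisamchk --recover /var/lib/mysql/mydb/results.MYI
# once the data is healthy, convert to the crash-safe InnoDB engine
mysql -u root -p mydb -e "ALTER TABLE results ENGINE=InnoDB;"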

Cloudfoundry: VMC: Error 1317: Query execution was interrupted on mysqldump

It is quite important for me to be able to dump the MySQL database of my Cloud Foundry deployment. I am working with Cloud Foundry's vmc, and the connection to the service works well. However, mysqldump always fails, which puts me in an awful situation, as I am essentially unable to dump the data to do local migration testing.
The error presented by Cloud Foundry / vmc is:
mysqldump: Error 1317: Query execution was interrupted when dumping table 'foo' at row: 28
It appears that this results from some setting in Cloud Foundry that kills any query that takes longer than 3 seconds. See, for instance:
mysqldump: Error 1317: Query execution was interrupted while running database Backup
MySql on CloudFoundry often fails with Query execution was interrupted
Is there any way to change the configuration or make Cloud Foundry ignore the 3-second rule for mysqldump? Any suggestions?
PS: This timeout has also proven to be very destructive when a migration takes too long to execute.
Depending on the quality of the connection between you and CloudFoundry.com, these kinds of timeouts can be an issue. It might be worth taking a look at a Ruby application I wrote to take routine backups of MySQL databases and upload them to a cloud storage service such as Amazon S3.
Take a look at the repository at https://github.com/danhigham/service_stash
The setup is pretty straightforward, but if you get stuck, let me know.
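If you end up scripting your own dump, two standard mysqldump flags may also help keep individual fetches short, though I have not verified that they get under Cloud Foundry's 3-second kill. A sketch; the host, user, and database name are hypothetical:

# --quick streams rows one at a time instead of buffering each table in memory;
# --single-transaction takes a consistent InnoDB snapshot without long locks
mysqldump --quick --single-transaction -h mysql-host -u user -p dbname > dump.sql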

Connection during MySQL import

What happens to a long-running query executed from the command line via SSH if the connection to MySQL or SSH is lost?
Context:
We have 2 servers with a very large MySQL database on them. One server acts as the master, and the other as the slave. During regular maintenance, the replication became corrupt, and we noticed data was missing from the slave even though it reported Seconds_Behind_Master = 0.
So I am in the process of repairing the replication. As we speak, I am importing one of two large dumps into the slave. I am connected to MySQL through SSH and used the MySQL "\. file.sql" command to import the dump.
Right now I am constantly getting results like "Query OK, 6798 rows affected".
It has been running for probably 30 minutes now. My question, and worry, is: what happens if I lose the SSH connection while this is running?
I have another, even larger dump to import after this.
Thanks for the answer!
-Steve
If you lose your connection, all children of your bash process will die, including mysql.
To avoid this problem, use the screen command.
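A minimal sketch of the screen workflow; the session name and dump file are hypothetical:

# start a named screen session, then run the import inside it
screen -S import
mysql -u root -p mydb < file.sql
# detach with Ctrl-a d; the import keeps running even if SSH drops
# reattach later to check progress
screen -r import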