Cloudfoundry: VMC: Error 1317: Query execution was interrupted on mysqldump - mysql

It is quite important for me to dump the MySQL database of my Cloud Foundry deployment. I am working with Cloud Foundry's vmc, and the connection to the service works well. However, mysqldump always fails, which puts me in an awful situation, as I am essentially unable to dump the data for local migration testing.
The error presented by cloudfoundry / vmc is:
mysqldump: Error 1317: Query execution was interrupted when dumping table 'foo' at row: 28
It appears that this results from a setting in Cloud Foundry that kills any query that takes longer than 3 seconds. See, for instance:
mysqldump: Error 1317: Query execution was interrupted while running database Backup
MySql on CloudFoundry often fails with Query execution was interrupted;
Is there any way to change the configuration or make Cloud Foundry ignore the 3-second rule for mysqldump? Any suggestions?
PS: This timeout has also proven very destructive when a migration takes too long to execute.

Depending on the quality of the connection between you and CloudFoundry.com, these kinds of timeouts can be an issue. It might be worth taking a look at a Ruby application I wrote to take routine backups of MySQL databases and upload them to a cloud service provider such as Amazon S3.
Take a look at the repository at https://github.com/danhigham/service_stash
The setup is pretty straightforward, but if you get stuck, let me know.
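If you want to stay with plain mysqldump over the vmc tunnel, one possible workaround (assuming the limit really is applied per query, as the links above suggest) is to dump table by table and slice large tables with --where, so that no single SELECT runs long. A rough sketch; the host, port, user, table and column names are placeholders, so adjust them to whatever vmc tunnel prints:

# schema only, once, for the big table
mysqldump -h 127.0.0.1 -P 10000 -u myuser -p --no-data mydb foo > foo_schema.sql

# data in slices, so each SELECT stays under the per-query time limit
mysqldump -h 127.0.0.1 -P 10000 -u myuser -p --no-create-info \
  --where="id BETWEEN 1 AND 10000" mydb foo > foo_rows_1.sql
mysqldump -h 127.0.0.1 -P 10000 -u myuser -p --no-create-info \
  --where="id BETWEEN 10001 AND 20000" mydb foo > foo_rows_2.sql

Smaller tables can still be dumped in one pass; only the ones that trip the 3-second kill need slicing.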

Related

MySQL queries hang; database down

I have one big dataset (1.6 GB) and I am running MySQL queries against it. Some queries take a long time (about 10 seconds). After I sent my app, which runs these queries, to a few people (about 20), something went wrong and the database is now "corrupted" (I don't know a better word for it). I can't send queries to this database anymore (queries hang and the process never ends).
I have tried to repair my database using "Repair database" in cPanel, but it returns:
Internal Server Error
500
No response from subprocess ( (cpanel)): The subprocess reported error
number 72,057,594,037,927,935 when it ended. The process dumped a core
file.
Do I have to delete the database (I suppose I can, though I didn't try), or is there any way to somehow restore (restart) the database?
I am really not good at database management.
In my case, you just need to update your cPanel, or you have duplicate cPanel RPMs.
Try:
/usr/local/cpanel/scripts/autorepair fix_duplicate_cpanel_rpms
/usr/local/cpanel/scripts/upcp --force
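If updating cPanel doesn't help, the table check/repair can also be attempted directly with the stock MySQL tools; a minimal sketch, assuming shell access and that the corruption is at the table level (database name and user are placeholders):

# check all tables in the affected database for corruption
mysqlcheck -u myuser -p --check mydb

# attempt to repair any tables flagged as corrupt (mainly useful for MyISAM tables)
mysqlcheck -u myuser -p --repair mydb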

Good idea to use SQS to move thousands of databases?

We want to move from using MySQL on an EC2 instance to RDS and set up replication. Seems like a no-brainer, right? Well, I've got 30,000 databases to move (don't ask). While setting up replication seems to work well, the process of getting the 30,000 databases into RDS is a royal pain; it takes forever and something almost always happens.
The nightly backup takes about two hours. I end up with a multi-GB SQL dump file. When I try to restore it, something almost always goes wrong: the RDS instance wasn't big enough memory-wise and crashed, the localhost ran out of swap space, the network connection went flaky. Whatever! I did get it to restore once; IIRC it took 23 hours (30K MySQL DBs are a ton of file IO).
So today, I decided to use mydumper. It generated 30,000 schema files for the database in about two hours, then suddenly, the source MySQL went into uninterruptible sleep according to top, I lost my client connections, strace showed it was still trying to read files, and the mydumper process crashed. I restarted the whole process and just checked the status; mysqld restarted 2.5 hours into it for some reason.
So here's what I'm thinking and I'd like your input: I write two python scripts: firstScript.py will run mydumper on a single database, update a status table, package up the SQL, put it onto an AWS SQS queue, repeating until no more databases are found; the secondScript.py reads from the queue, runs the SQL and updates the status table, repeating until no more messages are found.
I think this can work. Do you? The main thing I'm not sure of is this: can I simply run multiple secondScript.py by Ctrl-Z-ing them into the background?
Or does someone have a better way of moving 30,000 databases?
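For reference, the queue plumbing described above looks roughly like this when sketched with the aws CLI instead of the planned Python/boto scripts. The queue URL, bucket, host and credentials are placeholders, the status-table updates are omitted, and exact mydumper/myloader flags vary by version; note that SQS messages max out at 256 KB, so the dump itself has to live somewhere like S3 while the queue carries only the database name:

# producer: dump one database at a time, push the dump to S3, enqueue its name
for db in $(mysql -N -e "SHOW DATABASES" | grep -vE 'mysql|information_schema|performance_schema'); do
  mydumper --database "$db" --outputdir "/dumps/$db"
  tar czf "/dumps/$db.tar.gz" -C /dumps "$db"
  aws s3 cp "/dumps/$db.tar.gz" "s3://my-migration-bucket/$db.tar.gz"
  aws sqs send-message --queue-url "$QUEUE_URL" --message-body "$db"
done

# consumer: pull a name off the queue, load that dump into RDS, delete the message
while true; do
  msg=$(aws sqs receive-message --queue-url "$QUEUE_URL" --max-number-of-messages 1)
  [ -z "$msg" ] && break
  db=$(echo "$msg" | jq -r '.Messages[0].Body')
  handle=$(echo "$msg" | jq -r '.Messages[0].ReceiptHandle')
  aws s3 cp "s3://my-migration-bucket/$db.tar.gz" /tmp/ && tar xzf "/tmp/$db.tar.gz" -C /tmp
  myloader --host "$RDS_HOST" --database "$db" --directory "/tmp/$db"
  aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$handle"
done

As for backgrounding the consumers: starting several of them with nohup ... & (or inside screen) is more robust than Ctrl-Z followed by bg, since they then survive the SSH session ending.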
I would not use mysqldump or mydumper to make a logical dump. Loading the resulting SQL-format dump takes too long.
Instead, use Percona XtraBackup to make a physical backup of your EC2 instance, and upload the backup to S3. Then restore to the RDS instance from S3, setup replication on the RDS instance to your EC2 instance, and let it catch up.
The feature of restoring a physical MySQL backup to RDS was announced in November 2017.
See also:
https://www.percona.com/blog/2018/04/02/migrate-to-amazon-rds-with-percona-xtrabackup/
https://aws.amazon.com/about-aws/whats-new/2017/11/easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup/
You should try it out with a smaller instance than the one holding your 30k databases, just so you get some practice with the steps. See the steps in the Percona blog post I linked to above.
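The shape of the commands is roughly as follows; the exact flags depend on your XtraBackup version and the bucket name is a placeholder, so follow the Percona post above for the authoritative steps:

# take a physical (not logical) backup of the EC2 MySQL datadir and stream it to one file
xtrabackup --backup --stream=xbstream --target-dir=/tmp > /backups/full-backup.xbstream

# upload the backup to S3
aws s3 cp /backups/full-backup.xbstream s3://my-rds-migration-bucket/

# then use "Restore from S3" in the RDS console/API to create the new instance,
# and point its replication at the EC2 source so it catches up before cutover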

Node mysql query takes a great amount of time (during its first execution) but only on my local machine

I'm using the official mysql Node module in a Node.js module.
Connecting to the database happens instantly. When I execute a query, though, it takes almost ten to twenty seconds to get a response. The next time the same query (identical SQL) is executed, it is near-instantaneous.
This only happens on my local machine (speaking to the same MySQL database), not on my production server (where the MySQL database is located).
I had to edit the timeout values on my local machine so that the queries don't time out.
Edit: to clarify, I am not running a local db. I'm speaking to the production DB when I'm running from my local machine.
What could be happening here?

Google Cloud SQL - How to Terminate Import

I have a 2nd generation instance of Cloud SQL that originally failed on import of a large CSV file (approx 18 million records) -- the first run.
I tweaked the file, re-uploaded and attempted another import -- the second run.
While the first run imported within hours, the second run is still running 2 days later. I am not able to kill the process via mysql ('show processlist; kill ') because the process owner is cloudsqlimport, so I get an error of:
You are not owner of thread 8949
I am not able to delete the Cloud SQL instance as that presents the error:
Operation failed because another operation was already in progress.
So it appears that I have no way of knowing when, or if, the import process will terminate, nor any way to just delete the Cloud SQL instance so as not to incur charges for an unusable instance of Cloud SQL.
Likewise, I am not able to delete the database.
Any guidance on this would be greatly appreciated as I seem to have no options for resolving this problem.
Thanks
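One thing that can at least show whether the import operation is still alive is the Cloud SDK's operations list; a sketch, assuming the gcloud CLI is configured for the project (instance name and operation ID are placeholders):

# list operations for the instance, including the stuck import
gcloud sql operations list --instance=my-instance

# inspect the state of a single operation using an ID from the list above
gcloud sql operations describe OPERATION_ID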

Connection during MySQL import

What happens to a long-running query executed from the command line via SSH if the connection to MySQL or SSH is lost?
Context:
We have 2 servers with a very large MySQL database on them. One server acts as the Master, and the other as Slave. During regular maintenance, the replication became corrupt, and we noticed data was missing from the slave, even though it reported Seconds_Behind_Master = 0.
So I am in the process of repairing the replication. I am, as we speak, importing one of two large dumps into the slave. I am connected to MySQL through SSH, and used the MySQL "\. file.sql" command to import the dump.
Right now I am constantly getting results like "Query OK, 6798 rows affected".
It has been running for probably 30 minutes now. My question and worry is, what happens if I lose connection through SSH while this is running?
I have another, even larger dump to import after this.
Thanks for the answer!
-Steve
If you lose your connection, all children of your bash process will die, including mysql.
To avoid this problem, use the screen command.
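A minimal sketch of that for the next import (session, user, database and file names are placeholders):

# start a named screen session on the server, then launch the import inside it
screen -S import
mysql -u myuser -p slavedb
# at the mysql prompt:  \. file.sql

# detach with Ctrl-A then d; the import keeps running even if SSH drops
# reattach later, from any new SSH session, with:
screen -r import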