I have a 2nd generation instance of Cloud SQL that originally failed on import of a large CSV file (approx 18 million records) -- the first run.
I tweaked the file, re-uploaded and attempted another import -- the second run.
While the first run imported within hours, the second run is still running 2 days later. I am not able to kill the process via the mysql client ('SHOW PROCESSLIST;' then 'KILL <thread-id>;') because the process owner is cloudsqlimport, so I get this error:
You are not owner of thread 8949
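For context, the attempt looked roughly like this (a sketch; the host and user are placeholders, and 8949 is the thread ID reported by SHOW PROCESSLIST):
$ mysql -h <instance-ip> -u root -p
mysql> SHOW PROCESSLIST;  -- the import thread shows up owned by cloudsqlimport
mysql> KILL 8949;         -- fails with "You are not owner of thread 8949"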
I am also not able to delete the Cloud SQL instance, as that attempt produces the error:
Operation failed because another operation was already in progress.
So it appears that I have no way of knowing when, or if, the import process will terminate, nor any way to simply delete the Cloud SQL instance so I stop incurring charges for an unusable instance of Cloud SQL.
Likewise, I am not able to delete the database.
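The most visibility I seem to have is listing the pending operation with gcloud (a sketch; the instance and project names are placeholders):
$ gcloud sql operations list --instance=<instance-name> --project=<project-id>
If the import is still going, it should show up there with a RUNNING status, but that still gives no indication of when it will finish.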
Any guidance on this would be greatly appreciated as I seem to have no options for resolving this problem.
Thanks
I have one big dataset (1.6 GB) and I am writing MySQL queries against it. Some queries take a long time (around 10 seconds). After I sent my app, which issues these queries, to a few people (about 20), something went wrong and the database is now "corrupted" (I don't know a better word). I can't send queries to this database anymore (queries hang and the process never ends).
I have tried to repair my database using "Repair database" in cPanel, but it returns:
Internal Server Error
500
No response from subprocess ( (cpanel)): The subprocess reported error
number 72,057,594,037,927,935 when it ended. The process dumped a core
file.
Do I have to delete the database (I suppose I can, I haven't tried), or is there any way to somehow restore (restart) the database?
I am really not good at database management.
In my case you just need to update your cPanel, or you may have duplicate cPanel RPMs.
Try:
/usr/local/cpanel/scripts/autorepair fix_duplicate_cpanel_rpms
/usr/local/cpanel/scripts/upcp --force
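If updating cPanel does not help, a table-level repair from the shell might be worth a try as well. A minimal sketch, assuming shell access and that the database is called mydb (substitute your own database name and credentials):
# attempts to repair every table in the database (works for MyISAM; InnoDB tables will report that they do not support repair)
mysqlcheck --repair --databases mydb -u root -p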
I have one MySQL server with 5 databases.
I was using phpMyAdmin's CSV import to load a very large amount of data into one table of one database.
I understand that all other operations on this machine may get slower because of the processing involved, but MySQL is simply not responding to any other simultaneous request, even against another table or another database.
And because of this, Apache doesn't answer any request that needs a database connection (it keeps loading forever).
After the import is finished, Apache and MySQL return to working normally... I don't need to restart or execute any other command.
My question is: is this behavior normal? Should MySQL stop answering all other requests because of a single giant one?
I'm afraid that if I have a big query running against one database on this server, all my other databases will be locked as well and my applications will stop working.
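To tell whether the other connections are actually blocked on locks or just starved for connections while the import runs, looking at the process list and InnoDB status from a separate session can help. A quick sketch (credentials are placeholders):
$ mysql -u root -p -e "SHOW FULL PROCESSLIST;"          # what every connection is currently waiting on
$ mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"     # lock waits and long-running transactions
$ mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"   # compare against max_connections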
I have two questions related to Cloud SQL backups:
1. Are backups removed together with the instance, or are they kept for some days?
2. If not, is it possible to create a new instance from a backup of an already deleted instance?
I would expect this to be possible, but it looks like backups are only listable under a specific instance, and there is no option to start a new instance from an existing backup.
Regarding (2): it's actually possible to recover them if you are quick enough. They should still be there, even when Google says they're deleted.
If you know the name of the deleted DB, run the following command to check whether they are still there:
gcloud sql backups list --instance=deleted-db-name --project your-project-name
If you can see any results, you are lucky. Restore them ASAP!
gcloud sql backups restore <backup-ID> --restore-instance=new-db-from-scratch-name --project your-project
And that's it!
Further info: https://geko.cloud/gcp-cloud-sql-how-to-recover-an-accidentally-deleted-database/
Extracted from Google Cloud SQL - Backups and recovery
Restoring from a backup restores to the instance from which the backup
was taken.
So the answer to (1) is that they're gone, and with regard to (2): if you didn't export a copy of the DB to Cloud Storage, then no, you can't recover the content of your deleted Cloud SQL instance.
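As a precaution, exporting to Cloud Storage before tearing an instance down is the only way to be sure the data outlives it. A minimal sketch, assuming an instance named my-instance, a database named my-db, and a bucket named my-backup-bucket (all placeholders; the instance's service account needs write access to the bucket):
$ gcloud sql export sql my-instance gs://my-backup-bucket/my-db-export.sql.gz --database=my-db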
I noticed a change in this behavior recently (July 28, 2022). Part of our application update process was to run an on-demand backup on the existing deployment, tear down our stack, create a new stack, and then populate the NEW database from the contents of the backup.
Until now, this worked perfectly.
However, as of today, I'm unable to restore from the backup since the original database (dummy-db-19e2df4f) was deleted when we destroyed the old stack. Obviously the workaround is to not delete our original database until the new one has been populated, but this apparent change in behavior was unexpected.
Since the backup is listed, it seems like there are some "mixed messages" below.
List the backups for my old instance:
$ gcloud sql backups list --instance=- | grep dummy-db-19e2df4f
1659019144744 2022-07-28T14:39:04.744+00:00 - SUCCESSFUL dummy-db-19e2df4f
1658959200000 2022-07-27T22:00:00.000+00:00 - SUCCESSFUL dummy-db-19e2df4f
1658872800000 2022-07-26T22:00:00.000+00:00 - SUCCESSFUL dummy-db-19e2df4f
1658786400000 2022-07-25T22:00:00.000+00:00 - SUCCESSFUL dummy-db-19e2df4f
Attempt a restore to a new instance (that is, replacing the contents of new-db-13d63593 with that of the backup/snapshot 1659019144744). Until now this worked:
$ gcloud sql backups restore 1659019144744 --restore-instance=new-db-13d63593
All current data on the instance will be lost when the backup is
restored.
Do you want to continue (Y/n)? y
ERROR: (gcloud.sql.backups.restore) HTTPError 400: Invalid request: Backup run does not exist..
(uh oh...)
Out of curiosity, ask it to describe the backup:
$ gcloud sql backups describe 1659019144744 --instance=dummy-db-19e2df4f
ERROR: (gcloud.sql.backups.describe) HTTPError 400: Invalid request: Invalid request since instance is deleted.
It is quite important for me to dump the MySQL database of my Cloud Foundry deployment. I am working with Cloud Foundry's vmc, and the connection to the service works well. However, mysqldump always fails, which puts me in an awful situation, as I am essentially not able to dump the data to do local migration testing.
The error presented by Cloud Foundry / vmc is:
mysqldump: Error 1317: Query execution was interrupted when dumping table 'foo' at row: 28
It appears that this results from some setting in Cloud Foundry that kills any query that takes longer than 3 seconds. See, for instance:
mysqldump: Error 1317: Query execution was interrupted while running database Backup
MySql on CloudFoundry often fails with Query execution was interrupted;
Is there any way to change the configuration or make Cloud Foundry ignore the 3-second rule for mysqldump? Any suggestions?
PS: This timeout has also proven to be very destructive if the execution of a migration takes too long.
Depending on the quality of the connection between yourself and CloudFoundry.com, these kinds of timeouts can be an issue. It might be worth taking a look at a Ruby application I wrote to take routine backups of MySQL databases and upload them to a cloud service provider such as Amazon S3.
Take a look at the repository at https://github.com/danhigham/service_stash
The set up is pretty straight forward but if you get stuck then let me know.
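Another angle that might be worth a try: since the interruption seems to hit whichever single SELECT runs past the limit, chunking the dump with --where so each query only touches a slice of the table can sometimes keep every statement under the cutoff. A rough sketch, assuming the tunneled host, port, and credentials reported by vmc, and an indexed numeric id column on the table (both are assumptions, and there is no guarantee every chunk stays under 3 seconds):
# dump the problem table in slices so no single SELECT runs long enough to be killed
$ mysqldump --quick --where="id BETWEEN 1 AND 100000" -h <tunnel-host> -P <tunnel-port> -u <user> -p <db-name> foo > foo_part1.sql
$ mysqldump --quick --no-create-info --where="id BETWEEN 100001 AND 200000" -h <tunnel-host> -P <tunnel-port> -u <user> -p <db-name> foo > foo_part2.sql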
I have two MSSQL 2012 databases.
I have snapshot replication configured where the first server is a publisher and distributor, and the other is a subscriber.
I would like to be able to execute a command on the publisher just before the replication job occurs, and then another command on the subscriber just after the replication finishes.
I believe this should be a pull snapshot replication, so that the agent is located on the subscriber server.
Is this even possible?
EDIT: Due to the nature of snapshot replication, I switched to using transactional replication, thus removing my ability to execute scripts on replication start and stop.
I never did find a way to execute commands successfully while data is replicating, since I switched to transactional replication. The job handling this replication type starts and then just keeps running, unlike snapshot replication, where the job starts, replicates data, and stops.
Instead, I set up the jobs I needed executed using the Task Scheduler. My services transfer files to and from a web server through the database, and will only transfer files that are not already present.
Using Task Scheduler is working pretty well, and it is MUCH simpler and more stable than having something execute a SQL script, which would then execute a PowerShell remoting command to connect to the server and run the service.
I just thought I would add this in case anyone else stumbles on a similar problem :)
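For what it's worth, the Task Scheduler entry itself can be very small; a sketch, where the task name, schedule, and the path to the transfer service are all hypothetical placeholders:
REM run the (hypothetical) transfer service on the subscriber every night at 02:00
schtasks /Create /TN "NightlyDbFileTransfer" /TR "C:\Services\TransferService.exe" /SC DAILY /ST 02:00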