How do we retrieve the log file path in Google Cloud SQL - MySQL?

I am writing an API to fetch the log files from Google Cloud SQL. I need the log file name and path to fetch this information. I can see from the log viewer interface that the log file names are mysql.err, mysql-general.log, and mysql-slow.log.
Are those values fixed, or is it possible to change them? If a user can change the log path, how do we retrieve the path from Google Cloud SQL?

Cloud SQL instances run on “protected” VM instances that are internally managed. As you stated, you can view the logs via Stackdriver, but you can’t customize the paths or even access the actual log files. If for any reason you do need a log file, what you can do is create an export (a sink) from Stackdriver to Cloud Storage. From there you can simply use any Cloud Storage client library to fetch the files.
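For example, here is a minimal sketch of fetching those exported files with the Python Cloud Storage client; the bucket name and object prefix are made-up examples and depend on how you configure the sink:
# Sketch: download Cloud SQL log files that a Stackdriver/Cloud Logging sink
# has exported to a Cloud Storage bucket. Names here are hypothetical.
from google.cloud import storage  # pip install google-cloud-storage
client = storage.Client()
bucket = client.bucket("my-cloudsql-log-exports")  # sink destination bucket
for blob in client.list_blobs(bucket, prefix="cloudsql.googleapis.com/mysql.err/"):
    local_name = blob.name.replace("/", "_")
    blob.download_to_filename(local_name)  # fetch the exported log file locally
    print("downloaded", local_name)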
If you do need this flexibility, another option is to create a Compute Engine instance and run your own MySQL there, with full access.

Related

What is the best way to organize an SQLite sync log between the local copy and the file server's copy?

I have a document, which is an SQLite DB, stored locally. I need a way to organize some cloud service to sync all changes made to the document incrementally. The idea is to store a log of transactions in JSON format and, at a certain moment (manually or on a schedule), upload this log file to the cloud server, where the changes will be applied to the server copy. The cloud server should be as simple as possible, not a true SQL server like MySQL. Are there any pitfalls with this approach?
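A minimal sketch of the approach described above, in Python (the table, column, and file names are invented for illustration): apply each change locally and append it to a JSON-lines log that can later be uploaded and replayed on the server copy.
# Sketch only: record local SQLite changes as JSON lines for later sync.
import json, sqlite3, time
conn = sqlite3.connect("document.db")
conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
def apply_and_log(sql, params, log_path="pending_changes.jsonl"):
    with conn:                        # commit the change locally
        conn.execute(sql, params)
    with open(log_path, "a") as log:  # append it to the transaction log
        log.write(json.dumps({"ts": time.time(), "sql": sql, "params": params}) + "\n")
apply_and_log("INSERT OR REPLACE INTO notes (id, body) VALUES (?, ?)", [42, "new text"])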

Is there any way to use a database url with the mysqldump command?

Let's say I have a cloud hosting service that has a database. The hosting service's CLI provides the database connection info as a URL in the typical form mysql://user:pass@hostname:port/db-name. I want to give this info to the mysqldump command so I can pipe the contents of that database into other commands. But it appears that there is no option to do that, just options to provide the credentials separately. Google provides no help. Do I need to parse the URL first?
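For what it's worth, here is a minimal sketch of the "parse the URL first" approach in Python; the URL is a made-up example, and note that passing --password on the command line exposes it to other local users:
# Sketch: split a mysql:// URL into the separate options mysqldump expects.
import subprocess
from urllib.parse import urlparse
url = urlparse("mysql://user:pass@hostname:3306/db-name")
subprocess.run(
    [
        "mysqldump",
        f"--host={url.hostname}",
        f"--port={url.port or 3306}",
        f"--user={url.username}",
        f"--password={url.password}",
        url.path.lstrip("/"),  # database name
    ],
    check=True,
)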

What is the best approach to migrate MySQL database between GCP Accounts?

I need to clone a MySQL database from one GCP account to another GCP account.
The most obvious way I can think of is exporting the MySQL database and then importing it into the other account.
What other alternatives are there?
Go to the Cloud SQL page in the console and choose Migrate data. There are several migration cases there, and among them this one, which matches your requirement:
Google Cloud project to Google Cloud project
Move an instance from another Google Cloud project into this one
You can choose to set this read replica as the master (and thus finish the migration), or you can keep it as a read replica so that your clone is always an image of your original project. Here are the described steps:
The Cloud SQL Migration Assistant will guide you through the following steps:
Providing details on your data source
Creating a Cloud SQL read replica
Synchronising the read replica with the source
Promoting the read replica to your primary instance (optional)
1. Export data to Cloud Storage in the source account
Choose Cloud Storage export location, Format
The SQL export process may take a long time (possibly an hour or more
for large instances). You will not be able to perform operations on
your instance for the entire duration of the export. Once begun, this
process cannot be canceled.
2. Copy the exported dump file to the destination account
a. Create a bucket
b. Edit bucket permissions
c. Add member
d. Enter the email of the source account
e. Select Role
Copy the file from the source account to the destination account (a client-library alternative is sketched after these migration steps):
gsutil mv gs://source/export gs://destination/export
If the dump file is too big, use Cloud Data Transfer.
3. Select Cloud SQL Migrate data
Begin Migration
a. Choose Data Source Details: Name of data source, Public IP address of source, Port number of source, MySQL replication username, Password
b. Create and configure a new Cloud SQL read replica of the external primary instance. Choose Read replica instance ID, Location, Region, Machine type, Storage type, Storage capacity, Import SQL dump from Google Cloud Storage
c. Data synchronization
d. Read replica promotion (optional)
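As a client-library alternative to the gsutil copy in step 2, here is a minimal Python sketch, assuming your credentials have been granted access to both buckets (the bucket and object names are made-up examples):
# Sketch: copy the exported dump between buckets in different projects using
# the google-cloud-storage client instead of gsutil.
from google.cloud import storage  # pip install google-cloud-storage
client = storage.Client()
src_bucket = client.bucket("source-project-exports")
dst_bucket = client.bucket("destination-project-imports")
blob = src_bucket.blob("export/dump.sql")
src_bucket.copy_blob(blob, dst_bucket, "export/dump.sql")  # copy across projects
blob.delete()  # optional: remove the source object, like gsutil mv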

Azure App Service Backup Partial for Wordpress?

App Service on the Standard plan, using the MySQL in-app database. The app is stopped, and a manual backup always completes as "partial". The backup blade configuration shows that no database exists. I am concerned that the database in the filesystem is not being included, so the restore will fail.
How can I be confident in Azure App Service Backup?
Thanks.
Added Information: Backup Log
CorrelationId: 19a70ee5-7158-49e9-8f58-35e39f231a34
Creating temp folder.
Retrieve site meta-data.
Backing up the databases.
Failed to backup in-app database. Skipping in-app database backup.
Backing up site content and uploading to the blob...
Uploading metadata to the blob.
Partially Succeeded means that there were likely some files which could not be backed up because they were locked for some reason. When this happens, the backup process skips them and backs up the rest of the site, and the database if configured. You should be able to see which files were skipped in the log file. If for some reason you do not need these files backed up, you can skip them by following the instructions in the section "Backup just part of your app" here.
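As a rough illustration of that doc section: exclusions go in a _backup.filter file under D:\home\site\wwwroot, listing paths relative to D:\home, one per line. A small hypothetical Python sketch that writes such a filter (the folder names are invented, so check the linked doc for the exact format):
# Sketch: write an App Service _backup.filter excluding some folders from backup.
exclusions = [
    r"\site\wwwroot\wp-content\uploads\2020",  # example: large upload folder
    r"\site\wwwroot\wp-content\cache",         # example: cache folder
]
with open(r"D:\home\site\wwwroot\_backup.filter", "w") as f:
    f.write("\n".join(exclusions) + "\n")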
A locked On-Demand/Triggered Azure WebJob is sometimes the reason for a Partially Succeeded backup status; stopping it can resolve this.
Using the Azure website backup feature means you back up the following things:
1. Your web app configuration
2. Your actual web app file contents
3. The Azure MySQL database - this is optional, i.e. you can choose whether to back it up or not.
You can also try using scheduled backups. There are two options for this: either schedule a full backup, or a partial backup using the Kudu console.
You may refer to this doc https://learn.microsoft.com/en-us/azure/app-service/manage-backup

Attach a remotely stored database

Is it possible to attach a database that is stored on a remote server? When I mapped a drive and tried to attach it in Management Studio, the drive did not show up as an option. I moved the database because of disk space; if I cannot attach it remotely, what are the alternative suggestions?
You should be able to attach a database on a UNC path (I wouldn't use a mapped drive - that drive is mapped for you, not for the SQL Server service account), but you have to ensure that the SQL Server service account has read/write permissions on the remote folder, and you have to enable trace flag 1807 (please read Brent Ozar's post about this).
Also don't use the GUI for this. Once you have the trace flag set, have restarted the service, and have configured permissions correctly, use a new query window, and run the following command:
CREATE DATABASE db_name
ON (Filename = '\\uncpath\share\file.mdf'),
(Filename = '\\uncpath\share\file.ldf')
FOR ATTACH;
(The UI is never going to offer you a UNC path no matter what trace flags you have set or what permissions are enabled.)
Be prepared to handle a corrupted and possibly unrecoverable database should the network share go down, of course.
If that sounds scary to you, good! It should! This is not a good idea at all. Instead you should free up some space, add a drive, or host the database on a different instance.