I need to clone a MySQL database from one GCP account to another GCP account.
The most obvious way I can think of is exporting the MySQL database and then importing it into the other account.
What are other alternatives?
Go to the Cloud SQL page in the console and choose Migrate data. You will see several migration cases, among them this one, which matches your requirement:
Google Cloud project to Google Cloud project
Move an instance from another Google Cloud project into this one
You can choose to promote this read replica to be the master (and thus finish the migration), or you can keep it as a read replica, in which case your clone will always mirror the instance in your original project. Here are the described steps:
The Cloud SQL Migration Assistant will guide you through the following steps:
Providing details on your data source
Creating a Cloud SQL read replica
Synchronising the read replica with the source
Promoting the read replica to your primary instance (optional)
1. Export data to Cloud Storage in the source account
Choose Cloud Storage export location, Format
The SQL export process may take a long time (possibly an hour or more
for large instances). You will not be able to perform operations on
your instance for the entire duration of the export. Once begun, this
process cannot be canceled.
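If you prefer the command line to the console for this step, the export can also be started with gcloud; a sketch with placeholder instance, bucket, and database names:
gcloud sql export sql [SOURCE_INSTANCE_NAME] gs://[SOURCE_BUCKET]/[DUMP_FILE_NAME] \
    --database=[DATABASE_NAME]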
2. Copy the exported dump file to the destination account (a gsutil sketch of this step follows below)
a. Create a bucket
b. Edit bucket permissions
c. Add member
d. Enter the email address of the source account
e. Select Role
Copy the file from the source account to the destination account:
gsutil mv gs://source/export gs://destination/export
If the dump file is too big, use Cloud Data Transfer.
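A rough gsutil sketch of the bucket setup above (steps a-e); the bucket name and the source-account e-mail are placeholders, and objectAdmin could be narrowed to a more specific role:
# In the destination account: create the bucket and grant the source account access to it
gsutil mb gs://[DESTINATION_BUCKET]
gsutil iam ch user:[SOURCE_ACCOUNT_EMAIL]:roles/storage.objectAdmin gs://[DESTINATION_BUCKET]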
3. Select Cloud SQL Migrate data
Begin Migration
a. Choose Data Source Details: Name of data source, Public IP address of source, Port number of source, MySQL replication username, Password
b. Create and configure a new Cloud SQL read replica of the external primary instance. Choose Read replica instance ID, Location, Region, Machine type, Storage type, Storage capacity, Import SQL dump from Google Cloud Storage
c. Data synchronization
d. Read replica promotion (optional)
Related
I am looking for options to archive my old data from specific tables of an AWS RDS MySQL database.
I came across AWS S3 and AWS Glacier, and copying the data to either one using some pipelines or buckets, but from what I understood they copy the data to a vault or back it up; they don't move it.
Is there a proper option to archive the data by moving it from RDS to S3, Glacier, or Deep Archive, i.e., deleting it from the table in AWS RDS after creating an archive?
What would be the best option for the archival process with my requirements and would it affect the replicas that already exist?
The biggest consideration when "archiving" the data is ensuring that it is in a useful format should you ever want it back again.
Amazon RDS recently added the ability to export RDS snapshot data to Amazon S3.
Thus, the flow could be:
Create a snapshot of the Amazon RDS database
Export the snapshot to Amazon S3 as a Parquet file (you can choose to export specific sets of databases, schemas, or tables)
Set the Storage Class on the exported file as desired (e.g. Glacier Deep Archive)
Delete the data from the source database (make sure you keep a Snapshot or test the Export before deleting the data!)
When you later wish to access the data:
Restore the data if necessary (based upon Storage Class)
Use Amazon Athena to query the data directly from Amazon S3
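If you want to script the snapshot-and-export steps above with the AWS CLI, here is a rough sketch; the identifiers, bucket, IAM role and KMS key are placeholders, and the export task needs an IAM role and KMS key set up with the right permissions:
# Take a manual snapshot of the source RDS instance
aws rds create-db-snapshot \
    --db-instance-identifier [DB_INSTANCE_ID] \
    --db-snapshot-identifier [SNAPSHOT_ID]
# Export the snapshot to S3 as Parquet (runs asynchronously)
aws rds start-export-task \
    --export-task-identifier [EXPORT_TASK_ID] \
    --source-arn arn:aws:rds:[REGION]:[ACCOUNT_ID]:snapshot:[SNAPSHOT_ID] \
    --s3-bucket-name [ARCHIVE_BUCKET] \
    --iam-role-arn arn:aws:iam::[ACCOUNT_ID]:role/[EXPORT_ROLE] \
    --kms-key-id [KMS_KEY_ARN]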
Recently I built a similar pipeline using an AWS Lambda function that runs on a cron schedule (CloudWatch event) every month to take a manual snapshot of the RDS instance, export it to S3, and delete the records that are older than n days.
I added a gist of the util class that I used; adding it here in case it helps anyone:
JS Util class to create and export Db snapshots to S3
PS: I just wanted to add it as a comment to the approved answer but don't have enough reputation for that.
I'm a bit stuck. I recently switched to Google Cloud SQL for MySQL and I would like to clone one of my databases (not the instance) for an external development environment for freelancers.
The idea is to clone/duplicate the existing live database, then scrub sensitive data (emails, etc.).
I know I need to use the "gcloud" command-line tool, but I don't really know how to do it.
Can someone help me?
The easiest way to do this would be to restore a backup made on the first instance to a new instance. I recommend you review the Cloud SQL documentation on backups.
Example steps:
Create an on demand backup
gcloud sql backups create --async --instance [SOURCE_INSTANCE_NAME]
You can see a list of backup ids for the source instance with this:
gcloud sql backups list --instance [SOURCE_INSTANCE_NAME]
Restore to the new instance, after preparing it (creating it, ensuring it has no replicas, etc.):
gcloud sql backups restore [BACKUP_ID] --restore-instance=[TARGET_INSTANCE_NAME] \
--backup-instance=[SOURCE_INSTANCE_NAME]
You can also do all of the above through the console.
Once the restore is complete, you can remove the backup. The easiest way to do this is through the console, but it can be done via the REST API if necessary.
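For reference, a sketch of that REST call with curl; I'm assuming the current Cloud SQL Admin API v1 path here, so check the API reference for the exact endpoint (project, instance, and backup id are placeholders):
curl -X DELETE \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://sqladmin.googleapis.com/v1/projects/[PROJECT_ID]/instances/[SOURCE_INSTANCE_NAME]/backupRuns/[BACKUP_ID]"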
Of course, there isn't a gcloud command to do the data cleanup you describe; you would need to do that yourself, based on your own data and anonymization requirements. Doing good anonymization can be tricky unless you have a very limited amount of sensitive data.
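Purely as an illustration, assuming a hypothetical users table with an id and an email column, the scrubbing could be a simple UPDATE run against the clone with the mysql client (the host, database, table, and column names are all placeholders):
# Overwrite real e-mail addresses on the cloned instance with synthetic ones
mysql -h [CLONE_INSTANCE_IP] -u root -p [DATABASE_NAME] \
    -e "UPDATE users SET email = CONCAT('user', id, '@example.invalid');"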
If instead you just want to export a single database, then you can use the export and import functionality. This is subject to some limitations; for example, triggers, stored procedures, and possibly views will need to be manually recreated.
The full instructions are in the export documentation, but here's a quick summary.
You will need a cloud storage bucket to hold the output, and the service account for the database will need to be a writer on that bucket. Once that is in place:
gcloud sql export sql [INSTANCE_NAME] gs://[BUCKET_NAME]/[DUMP_FILE_NAME] \
--database=[DATABASE_NAME]
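Regarding the bucket permission mentioned above, one way to set it up from the command line is to look up the instance's service account and grant it access to the bucket (the bucket name is a placeholder, and a narrower writer role would also do):
gcloud sql instances describe [INSTANCE_NAME] \
    --format='value(serviceAccountEmailAddress)'
gsutil iam ch serviceAccount:[SERVICE_ACCOUNT_EMAIL]:roles/storage.objectAdmin gs://[BUCKET_NAME]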
You can then either download the file and use it on a local database, or import it into a new instance, as so:
gcloud sql import sql [INSTANCE_NAME] gs://[BUCKET_NAME]/[DUMP_FILE_NAME] \
--database=[DATABASE_NAME]
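For the download-and-use-locally option mentioned above, a possible sketch (the file and local database names are placeholders):
gsutil cp gs://[BUCKET_NAME]/[DUMP_FILE_NAME] .
mysql -u root -p [LOCAL_DATABASE_NAME] < [DUMP_FILE_NAME]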
Obviously, sanitizing the data is still up to you.
I am writing an API to fetch the log files from Google Cloud SQL. I need the log file name and path to fetch this information. I can see that the log file names are mysql.err, mysql-general.log and, mysql-slow.log from the log viewer interface.
Are those values fixed, or is it possible to change them? If a user can change the log path, how do we retrieve the path from Google Cloud SQL?
Cloud SQL instances run under “protected” VM instances that are internally managed. As you stated, you can view the logs via Stackdriver, but you can’t actually personalize paths or even access the actual log file. If for any reason you do need a log file, what you can do is create an export from Stackdriver to Cloud Storage. From there you can simply use any Cloud Storage client library to fetch the files.
If you do need this flexibility, another option is to create a Compute Engine instance and run your own MySQL there, with full access.
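For the export route, a sketch of a log sink with gcloud; the sink name, bucket and filter are assumptions to adapt, and the sink's writer identity still needs to be granted write access on the bucket:
gcloud logging sinks create cloudsql-mysql-logs \
    storage.googleapis.com/[BUCKET_NAME] \
    --log-filter='resource.type="cloudsql_database" AND logName:"cloudsql.googleapis.com"'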
Is it possible for MySQL to encrypt its stored files (database schema & data on disk) in such a way that someone is not able to copy these files to another machine and read them using his own installed MySQL root user?
If not, is there a DBMS that is able to protect database files stored on disk with encryption?
Unfortunately, MySQL doesn't support data file encryption natively.
However, there are 3rd-party products out there like:
http://www.vormetric.com/products/vormetric_database_encryption_expert.html
To be honest, if the database content has any commercial value or contains personal data about individuals, you should really control who has access to the datafiles (whether encrypted or not).
To use the Windows EFS encryption:
http://windows.microsoft.com/en-us/windows/encrypt-decrypt-folder-file#1TC=windows-7
Read more about it:
http://www.petri.co.il/how_does_efs_work.htm#
!!! Don't forget to export the certificate !!!
If you are using Windows EFS and starting MySQL as a service, you will need to do the following:
go to Services and find the MySQL service
stop the service
right-click -> Properties -> Log On tab
check "This account"
fill in your Windows account name, e.g. ".\username"
provide your password
start the service
The MySQL service should now start without errors.
I am using SQL Server Management Studio running on my local machine.
I can log on to a remote box (database engine) and use the
Studio to create a database backup which is saved to a drive on the remote box.
How do I get it to save the backup to a drive on my local machine?
See this MSDN article, specifically the section on backing up to a network share, e.g.:
BACKUP DATABASE YourDatabase
TO DISK = '\\SomeMachine\Backups\YourDatabase.Bak';
Backing Up to a File on a Network Share
For SQL Server to access a remote disk file, the SQL Server service account must have access to the network share. This includes having the permissions needed for backup operations to write to the network share and for restore operations to read from it. The availability of network drives and permissions depends on the context in which the SQL Server service is running:
To back up to a network drive when SQL Server is running in a domain user account, the shared drive must be mapped as a network drive in the session where SQL Server is running. If you start Sqlservr.exe from the command line, SQL Server sees any network drives you have mapped in your login session.
When you run Sqlservr.exe as a service, SQL Server runs in a separate session that has no relation to your login session. The session in which a service runs can have its own mapped drives, although it usually does not.
You can connect with the network service account by using the computer account instead of a domain user. To enable backups from specific computers to a shared drive, grant access to the computer accounts. As long as the Sqlservr.exe process that is writing the backup has access, it is irrelevant whether the user sending the BACKUP command has access.
STEP 1: From SQL Server 2008, connect to the remote server
STEP 2: Right-click the server database
STEP 3: Select the Export option
STEP 4: Follow the instructions, import to a local server database, and back up from the local database
In Microsoft SQL Server Management Studio you can right-click on the database you wish to backup and click Tasks -> Generate Scripts.
This pops open a wizard where you can set the following in order to perform a decent backup of your database, even on a remote server:
Select the database you wish to backup and hit next,
In the options it presents to you:
In 2010: under the Table/View Options, change 'Script Data' and 'Script Indexes' to True and hit next,
In 2012: under 'General', change 'Types of data to script' from 'Schema only' to 'Schema and data'
In 2014: the option to script the data is now "hidden" in the "Set Scripting Options" step; you have to click "Advanced" and set "Types of data to script" to the "Schema and data" value
In the next four windows, hit 'select all' and then next,
Choose to script to a new query window
Once it's done its thing, you'll have a backup script ready in front of you. Create a new local (or remote) database, and change the first 'USE' statement in the script to use your new database. Save the script in a safe place, and go ahead and run it against your new empty database. This should create you a (nearly) duplicate local database you can then backup as you like.
If you have full access to the remote database, you can choose to check 'script all objects' in the wizard's first window and then change the 'Script Database' option to True on the next window. Watch out though, you'll need to perform a full search & replace of the database name in the script to a new database which in this case you won't have to create before running the script. This should create a more accurate duplicate but is sometimes not available due to permissions restrictions.
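If you'd rather run the generated script from the command line than from a query window, something along these lines should work with sqlcmd (the server, database and file names are placeholders; -E uses Windows authentication, swap it for -U/-P with a SQL login):
sqlcmd -S localhost -d [NEW_DATABASE_NAME] -E -i generated_script.sql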
You can't - the remote machine has no information about your local machine's setup and cannot see the drives on it. You'd have to setup a shared folder on your local machine, and make sure the remote machine has access to it (which will mean both the SQL Server Agent and SQL Server services on the remote machine will need access to it via domain accounts).
If the remote machine is on the same network as your machine, see AdaTheDev's answer.
Otherwise you'll have to RDP or FTP into the remote machine and transfer the backup manually. I recommend 7-zip'ing it by the way.