How to migrate parts from one active database to another active database? - mysql

I'm creating a content management system with a centralized database for all the users on the platform; a user can have one or more websites.
When a user's website is ready to be filled with content, we put the website on a test domain that uses a centralized test database.
After the content has been filled in, the user gives us a call and we migrate the website to a new host, migrating/merging its data from our centralized test database into our centralized production database.
What's the best solution for doing this? I'm afraid this is going to cause a lot of problems in the future (data that isn't in sync, data that gets overwritten, or, worse, the CMS breaking).
To sum it all up: how do I migrate parts of one active database to another active database?

1. Use mysqldump like this:
mysqldump -hREMOTE_HOST -uroot -pREMOTEpwd --opt --compress REMOTEdb | mysql -uroot -pLOCALpwd LOCALdb
This command can be executed on the destination server and it will pull the contents of the source database, recreating all tables and data.
Using this approach would require taking the source database server down to avoid loss of data.
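Since the question is about migrating parts of a database, note that mysqldump also accepts a list of table names after the database name, so only those tables are pulled across. A minimal sketch under the same connection assumptions as above (table1 and table2 are placeholder names):
mysqldump -hREMOTE_HOST -uroot -pREMOTEpwd --opt --compress REMOTEdb table1 table2 | mysql -uroot -pLOCALpwd LOCALdb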
2. For the ongoing requirement, you need an incremental backup plan.
An incremental backup only backs up data that changed since the
previous backup. This technique provides additional flexibility in
designing a backup strategy and reduces required storage for backups.
Incremental backup is enabled through an option to the mysqlbackup command.
Sample command line arguments to start mysqlbackup are:
# Information about data files can be retrieved through the database connection.
# Specify connection options on the command line.
mysqlbackup --user=dba --password --port=3306 \
--with-timestamp --backup-dir=/export/backups \
backup
# Or we can include the above options in the configuration file
# under [mysqlbackup], and just specify the configuration file
# and the 'backup' operation.
mysqlbackup --defaults-file=/usr/local/mysql/my.cnf backup
# Or we can specify the configuration file as above, but
# override some of those options on the command line.
mysqlbackup --defaults-file=/usr/local/mysql/my.cnf \
--compress --user=backupadmin --password --port=18080 \
backup
The --user and the --password we specify are used to connect to the MySQL server.
The --with-timestamp option places the backup in a subdirectory created under the directory we have specified above. The name of the backup subdirectory is formed from the date and the clock time of the backup run.
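The sample runs above are full backups; the incremental variant is a separate run. A hedged sketch based on the MySQL Enterprise Backup manual (directory paths are placeholders, and --incremental-base=history:last_backup asks mysqlbackup to work out the starting point from its own backup history):
# Take an incremental backup containing only changes since the last backup.
mysqlbackup --user=dba --password --port=3306 \
--incremental --incremental-base=history:last_backup \
--with-timestamp --incremental-backup-dir=/export/backups/incr \
backup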
See the MySQL Enterprise Backup reference manual for the full details.

Related

MySQL Data Directory on Network Drive?

I am new to databases and MySQL and am still in the process of learning it. I have been tasked to see if it is possible to store the MySQL Data Directory in a Network Drive... The purpose is to have a backup of the directory and allowing multiple users to point to that particular directory.
I have been able to successfully move the data directory to a different location on my PC but have been unsuccessful when I tried moving the data directory into a Network Drive.
Is it possible to move the data directory into a shared Network Drive, and if so, what steps should I take?
Notes:
Windows 10
Attempted moving the directory and editing the my.ini file
Perhaps your approach is not optimal, or I'm misunderstanding the question (or whoever gave you the task isn't clear on the best ways to back up MySQL databases). If I were you, I'd suggest to whoever asked for this that making plain-text SQL (*.sql) dumps of the databases and putting those into the backup directory would be easier/simpler than backing up the data directory itself, which contains binary file representations of the databases.
From the MySQLdump manual page:
To dump all databases:
$ mysqldump --all-databases > dump.sql
To dump only specific databases, name them on the command line and use the --databases option:
$ mysqldump --databases db1 db2 db3 > dump.sql
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
To dump a single database, name it on the command line:
$ mysqldump --databases test > dump.sql
Exercise for the reader: Write a script (crontab) or set up a scheduled task to dump the databases and move the output to the network drive.
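For example, a minimal nightly-dump sketch for a Unix-like host (the share mount point and file naming are assumptions; credentials are expected to come from ~/.my.cnf so they stay off the command line):
#!/bin/sh
# Hypothetical nightly backup: dump all databases, compress, and write the
# result to the mounted network share.
STAMP=$(date +%F)
mysqldump --all-databases --single-transaction | gzip > "/mnt/network_backup/all-databases-$STAMP.sql.gz"
On Windows 10 the equivalent would be a small .bat file calling mysqldump, run from Task Scheduler.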
If that's not what is required, but access to the database by multiple people is, create user accounts using the MySQL Server RDBMS instead. (You might need to configure the server to allow remote access. In that case, remove any test or anonymous/blank password accounts and change the root password to something more secure than root, admin or password1.)
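If the multi-user route is what's needed, a minimal sketch of creating such an account (user name, host pattern, and database name are placeholders):
# Create a user that may connect from other machines on the LAN and
# give it access to a single database only.
mysql -u root -p -e "CREATE USER 'app_user'@'192.168.1.%' IDENTIFIED BY 'choose_a_strong_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON app_db.* TO 'app_user'@'192.168.1.%';
FLUSH PRIVILEGES;"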

Openshift 3 mysql.user table is damaged. please run mysql_upgrade openshift

Issue
When starting a mysql deployment on OpenShift V3 I get the following exception:
mysql.user table is damaged. Please run mysql_upgrade
I cannot run mysql_upgrade as pod isn't ready.
Questions
I have the following questions:
How can I fix this, or
how can I back up the data?
If the pod won't start, you can mount the volume with your data to another pod and use oc rsync to download what you had mounted in the database pod under /var/lib/mysql/data/. Then, you can try recovering the data from that.
Generally, this could happen if you process an older database dump sql script (created using mysqldump) on a newer MySQL version. In that case, chances are the root user was removed from the table too (if it was not in the old database). If you created such a dump, still have the older dump, and it's "good enough", you should be able to proceed as follows to import the original data again and prevent this situation:
Create a backup copy of the database dump that you have previously created using mysqldump from the older MySQL database version, so that you can always get back to it, if things go south.
Edit the database dump SQL file and remove all the content that manipulates the mysql.user table; that is, delete the lines under the -- Table structure for table 'user' and -- Dumping data for table 'user' sections and save the modified file. (I assume here that you have your user and password specified in environment variables in the MySQL deployment configuration.)
Scale down your database pod to 0 replicas.
Delete your mysql persistent volume claim; this will delete the database that you have hopefully downloaded after mounting the volume to another pod, as mentioned above.
Recreate the PVC, under the same name.
Scale up your MySQL pod to one replica. That will initialize the database and create a user as per the environment variables.
Copy the modified sql dump file created in step 2 (that is, the one not affecting the mysql.user table) to the database pod using oc rsync.
In the MySQL pod, restore the database from the uploaded file using the mysql client (a consolidated sketch of these last steps follows after the list).
Grant all privileges to your user on the application database: GRANT ALL PRIVILEGES ON <database>.* TO '<mysql_user>'@'%'; (replace <database> and <mysql_user> appropriately).
Exit the MySQL CLI on the pod, and run mysql_upgrade -u root in the shell.
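Putting the last few steps together, a rough consolidated sketch (the pod name, file names, database, and user are placeholders; add -p if a root password is set in your deployment):
# Upload the edited dump into the pod, then restore, grant, and upgrade.
oc rsync ./dump-dir/ mysql-1-abcde:/tmp/
oc rsh mysql-1-abcde
# inside the pod:
mysql -u root "$MYSQL_DATABASE" < /tmp/dump-modified.sql
mysql -u root -e "GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%'; FLUSH PRIVILEGES;"
mysql_upgrade -u root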

MySQL database dump from remote host without temporary file

I'm trying to implement a database backup cron (other solutions welcome) in my job but I have a small problem:
I have a large database that is over 10 GB in size, and the current VM doesn't have the space to store the temporary file that mysqldump creates.
I know I can use mysqldump with a host parameter, but my question is, when doing that does the temporary file generated by mysqldump stay at the machine that is running it or does it stay on the database server?
UPDATE:
I forgot to mention that I'm trying to backup a network of websites and that some of them are behind a firewall (needing VPN access), some need server hopping to get to the database server.
You can run a shell script from an archive host, where you've traded password-less ssh keys with the database server. This lets you transfer the file directly over ssh, without creating any temp files on the remote database server:
ssh -C myhost.com mysqldump -u my_user --password=bigsecret \
--skip-lock-tables --opt database_name > local_backup_file.sql
Obviously there are ways to secure that password on the command line, but this a method that could accomplish what you want. One advantage of this method is that it doesn't require the archive host to have access to port 3306 on the remote host.
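One way to keep the password off the command line (an assumption about your setup, not something this approach requires): put the credentials into ~/.my.cnf on the database host (chmod 600), since mysqldump reads the [client] and [mysqldump] option groups automatically, and then drop the password from the command:
[mysqldump]
user=my_user
password=bigsecret
ssh -C myhost.com mysqldump --skip-lock-tables --opt database_name > local_backup_file.sql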
This guy's version is cool because it also compresses the data on-the-fly before transferring it over the network, and then he uncompresses it before loading it into a local database.
ssh me@remoteserver 'mysqldump -u user -psecret production_database | \
gzip -9' | gzip -d | mysql local_database
But that's why my version uses ssh -C, which enables its own compression algorithm and avoids extra gzip pipes.
Depending on the circumstance it might be a better idea to use MySQL replication. Set up MySQL on your backup server and configure it as a slave of your production database (see http://dev.mysql.com/doc/refman/5.7/en/replication-howto.html). You can then dump the slave database easily.
An advantage of this approach is you're not transferring 10GB each time you want to backup, you're only transferring any changes to the database as and when they occur.
You'll need to keep an eye on the replication though, because if it fails your slave database will become stale.
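A rough sketch of that wiring, assuming MySQL 5.7-style commands (host names, the replication user, and the log file/position are placeholders taken from SHOW MASTER STATUS):
# On the production (master) server: ensure a unique server-id and
# log-bin are set in my.cnf, then create a replication account.
mysql -u root -p -e "CREATE USER 'repl'@'backup_host' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'backup_host';"
# On the backup (slave) server: point it at the master and start replicating.
mysql -u root -p -e "CHANGE MASTER TO
MASTER_HOST='production_host',
MASTER_USER='repl',
MASTER_PASSWORD='repl_password',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=4;
START SLAVE;"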

Migrating existing database to Amazon RDS

How can I import existing MySQL database into Amazon RDS?
I found a page in the AWS docs which explains how to use mysqldump and pipe it into an RDS instance.
Here's their example code (use in command line/shell/ssh):
mysqldump acme | mysql --host=hostname --user=username --password acme
where acme is the database you're migrating over, and hostname/username are those from your RDS instance.
You can connect to RDS as if it were a regular mysql server, just make sure to add your EC2 IPs to your security groups per this forum posting.
I had to include the password for the local mysqldump, so my command ended up looking more like this:
mysqldump --password=local_mysql_pass acme | mysql --host=hostname --user=username --password acme
FWIW, I just completed moving my databases over. I used a reference for MySQL commands like creating users and granting permissions.
Hope this helps!
There are two ways to import data:
mysqldump: if your data size is less than 1 GB, you can directly make use of the mysqldump command and import your data to RDS.
mysqlimport: if your data size is more than 1 GB, or the data is in another format, you can compress the data into flat files and upload it using the mysqlimport command (a sketch follows below).
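A minimal sketch of the flat-file route (host, database, and table names are placeholders; mysqlimport loads each file into the table whose name matches the file name, so orders.txt goes into the orders table):
# Export one table as tab-separated text, then load it into RDS.
mysql -h old_host -u user -p -N -B -e "SELECT * FROM orders" mydb > orders.txt
mysqlimport --local --compress -h myrds.xxxxxx.us-east-1.rds.amazonaws.com -u rds_user -p mydb orders.txt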
I'm a big fan of the SqlYog tool. It lets you connect to your source and target databases and sync schema and/or data. I've also used SQLWave, but switched to SqlYog. Been so long since I made the switch that I can't remember exactly why I switched. Anyway, that's my two cents. I know some will object to my suggestion of Windows GUI tools for MySQL. I actually like the SqlYog product so much that I run it from Wine (works flawlessly from Wine on Ubuntu for me).
A quick summary of a helpful GoSquared Engineering blog post (the full link is at the end):
Configuration + Booting
Select a maintenance window and backup window when the instance will be at lowest load
Choose Multi-AZ or not (highly recommended for auto-failover and maintenance)
Boot your RDS instance
Configure security groups so your apps etc can access the new instance
Data migration + preparation
Enable binlogging if you haven't already
Run mysqldump --single-transaction --master-data=2 -C -q dbname -u username -p > backup.sql on the old instance to take a dump of the current data
Run mysql -u username -p -h RDS_endpoint DB_name < backup.sql to import the data into your RDS instance (this may take a while depending on your DB size)
In the meantime, your current production instance is still serving queries - this is where master-data=2 and binlogging come in
In your backup.sql file, you'll have a line at the top that looks like CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=350789121;
Get the diff since backup.sql as an SQL file: mysqlbinlog /var/log/mysql/mysql-bin.000003 --start-position=350789121 --base64-output=NEVER > output.sql
Run those queries on your RDS instance to update it: cat output.sql | mysql -h RDS_endpoint -u username -p DB_name
Get the new log position by finding end_log_pos at the end of the latest output.sql file (a one-liner for this follows after the list).
Get the diff since the last output.sql (like step 6) and repeat steps 7 + 8.
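For the find-end_log_pos step, a hedged one-liner, assuming the usual comment format in mysqlbinlog output:
grep -o 'end_log_pos [0-9]*' output.sql | tail -1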
The actual migration
Have all your apps ready to deploy quickly with the new RDS instance
Get the latest end_log_pos from output.sql
Run FLUSH TABLES WITH READ LOCK; on the old instance to stop all writes
Start deploying your apps with the new RDS instance
Run steps 6-8 from above to update the RDS instance with the last queries to the old server
Conclusion
Using this method, you'll have a small amount of time (depending on how long it takes to deploy your apps + how many writes your MySQL instance serves - probably only a minute or two) with writes being rejected from your old server, but you will have a consistent migration with no read downtime.
A full and detailed post explaining how we (GoSquared) migrated to RDS with minimal downtime (including error debugging) is available here: https://engineering.gosquared.com/migrating-mysql-to-amazon-rds.
I completely agree with @SanketDangi.
There are two ways of doing this: one, as suggested above, is using either mysqldump or mysqlimport.
I have seen cases where this creates problems while restoring, and the data on the cloud ends up corrupted.
However, importing applications to the cloud has become much easier nowadays. You can try uploading your DB server to a public cloud through Ravello.
You can import your database server itself to Amazon using Ravello.
Disclosure: I work for Ravello.
Simplest example:
# export local db to sql file:
mysqldump -uroot -p --databases qwe_db > qwe_db.sql
# Now you can edit qwe_db.sql file and change db name at top if you want
# import sql file to AWS RDS:
mysql --host=proddb.cfrnxxxxxxx.eu-central-1.rds.amazonaws.com --port=3306 --user=someuser -p qwe_db < qwe_db.sql
The AWS RDS customer data import guide for MySQL is available here: http://aws.amazon.com/articles/2933
Create flat files containing the data to be loaded
Stop any applications accessing the target DB Instance
Create a DB Snapshot
Disable Amazon RDS automated backups
Load the data using mysqlimport
Enable automated backups again
If you are using the terminal this is what worked for me:
mysqldump -u local_username -plocal_password local_db_name | mysql -h myRDS-at-amazon.rds.amazonaws.com -u rds-username -prds_password_xxxxx remote_db_name
and then I used MySQL Workbench (free download) to check it was working, because the command line stayed static after I hit enter; I could probably have put -v at the end to see its output
Note: there is no space after -p
Here are the steps which I followed successfully.
Take the MySQLdump of the needed database.
mysqldump -u username -p databasename --single-transaction --quick --lock-tables=false >databasename-backup-$(date +%F).sql
(Don't forget to replace username, which is root most of the time, and databasename with the name of the database you are going to migrate to RDS.)
Once prompted, enter your password.
Once done, log in to the RDS instance from your MySQL server. (Make sure the security groups are configured to allow the connection from EC2 to RDS.)
mysql -h hostaddress -P 3306 -u rdsusername -p
(Don't forget to replace hostaddress with the address of your RDS instance and rdsusername with the username for your RDS instance; when prompted, give the password too.)
You can find that hostaddress under Connectivity & security -> Endpoint & port for the RDS database in the AWS Console.
Once logged in, create the database using MySQL commands:
create database databasename;
\q
Once the database is created in RDS, import the SQL file created in Step 1:
mysql -h hostaddress -u rdsusername -p databasename < backupfile.sql
This should import the SQL file to RDS and restore the contents into the new database.
Reference: https://k9webops.com/blog/migrate-an-existing-database-on-mysql-mariadb-to-an-already-running-rds-instance-on-the-aws/

Replicate MYSQL data from stage to dev with a script

I have two versions of my application, one "stage" and one "dev."
Right now, "stage" is exposed to the real world for beta-testing.
From time to time, I want an exact replica of the data to be replicated into the "dev" database.
Both databases are on the same hosted Linux machine.
Sometimes I create "dummy" data in the development environment. At this stage, I'd be fine with that being written over by the stage data.
Thanks.
Be sure to add security to your script so that only the user you authorize is able to run it. Basically, you want to use the mysql and mysqldump commands.
mysqldump -u username --password=userpass --add-drop-database --add-locks --create-options --disable-keys --extended-insert --result-file=database.sql --databases databasename
mysql -u username --password=userpass -e "source database.sql;"
The first command makes the backup; the second command loads the backup into another database server. Be careful: if you run both against the same MySQL server, you are only backing up the database and then restoring it over the same database, so you have to change the database name.
Hope this helps.
Just use mysqldump to create a backup of the staging database and then load the dump file over your dev database. This will give you an exact copy of the stage data.
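Since both databases are on the same machine, that can be a single pipe; a minimal sketch (database names and credentials are placeholders, and the dump's DROP TABLE/CREATE TABLE statements will replace the existing dev tables):
mysqldump -u stage_user -pstage_password --single-transaction stage_db | mysql -u dev_user -pdev_password dev_db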