Copy/move all databases from one MySQL server to another

Currently I have a test server 'X' which has many databases with data in them. I want to copy this data to another server, 'Y'. Both are on the same MySQL version. I have read posts on this topic, and I know how to create a SQL file and run it, but is there any quicker way that doesn't involve creating SQL files for all the databases? Should I use replication? Is replication applicable in my scenario, given that there is no master/slave configuration here?

You can use one of the two approaches below.
Master/slave replication: very little downtime, usually in the range of 1 to 10 minutes. Once the slave has caught up with the master, you can point your site/app at the new server.
Binary copy: copy all the files from your MySQL data directory into the new server's data directory. It will take some time to copy the data from one server to the other.
Note: before doing a binary copy, take a full dump backup so that you don't lose data if anything goes wrong.
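As a rough illustration of the binary-copy route (the server names, paths and the use of rsync here are assumptions, not part of the answer above), with MySQL stopped on both servers while the files are copied:
# on server X, with mysqld stopped on both X and Y so the files are consistent
sudo systemctl stop mysql
rsync -av /var/lib/mysql/ user@serverY:/var/lib/mysql/
# on server Y: fix ownership, then start MySQL again
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
If you want to avoid intermediate SQL files without touching the data directory at all, you can also pipe a dump of every database straight into the other server, e.g. mysqldump --all-databases -u root -p | mysql -h serverY -u root -p (again, host and user names are placeholders).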

Related

SQL Server Agent Job - backup certain tables

I am trying to create a SQL Server Agent Job which performs the following tasks.
Creates a '.bak' file containing a backup of a set of tables from within a database.
Zips this backup with a filename of something like "Database_20130111".
Stores the zipped file in a specific location on the server.
Deletes all older zipped files, leaving only the most recent one.
Does anybody have any ideas? I am really struggling with this at the moment.

replicate a mysql table to another server without writing it

How can I configure MySQL to replicate a table to another server without writing the table on the first server? Can I write to the binary log without committing the changes to the local table?
EDIT
I want to automatically (not just once) export all data inserted into a table to another MySQL server, but to improve performance I don't want to write the data on the master server; I want to send it directly to the other server.
This can be achieved through standard replication. Just use the BLACKHOLE storage engine on the master server.
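A minimal sketch of that setup (the database and table names below are placeholders, and it assumes statement-based binary logging, since row-based logging produces no row events for a BLACKHOLE table):
# on the master: keep only the table definition locally; writes are discarded
# but still recorded in the binary log and replicated to the slave.
# Note: converting an existing table to BLACKHOLE throws away its local rows.
mysql -u root -p -e "ALTER TABLE mydb.events ENGINE=BLACKHOLE;"
# on the slave, the same table keeps a normal engine (e.g. InnoDB),
# so the replicated inserts are actually stored there.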

Backup MySQL on Apple time machine

Not sure if this is a Stack Overflow question.
I have a Mac and am hosting an Apache/MySQL server on it using MAMP Pro. If I back up my data with Time Machine, is the MySQL database also backed up, or do I have to create a mysqldump and back that up as a cron job? In case of a crash, can I do a normal restore if the database can be backed up by Time Machine?
Thanks
Make a regular dump with mysqldump or use another dedicated database backup tool. A copy of the data folder is not OK.
mysqldump really reads the data, so the result can be checked. It is not always true that all data has been written completely to the data files, and locking can cause issues.
If you have a specific backup time, just run a cron job before that moment and verify that it is safe and has finished. MySQL will take care of locking, changes, transactions etc.
Always, read always, verify your backup with a restore test.
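As an illustration of the cron approach (the path to mysqldump, the credentials and the schedule are assumptions; MAMP ships its own binaries under /Applications/MAMP/Library/bin), a nightly dump that finishes before the Time Machine run might look like:
# crontab entry: dump all databases at 01:30 and compress the output
30 1 * * * /Applications/MAMP/Library/bin/mysqldump --all-databases -u backup -pSECRET | gzip > "$HOME/backups/mysql-$(date +\%F).sql.gz"
The \% escaping is needed because % has a special meaning in crontab lines.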

How can I recover MySQL tables from data files?

I've got a database (all MyISAM tables) and the machine where MySQL was running is no longer bootable. However, we have all the MySQL data files from the data directory. How can I restore the data from the MYD and FRM files, or whatever other files I should be looking at in the data directory?
I've been doing some searching on this and it sounds like for MyISAM I should just be able to copy the database subdirectory from the old MySQL data directory to the new MySQL data directory. However, that's not working for me. A database with the name of the database I'm trying to recover shows up in the list of databases in phpMyAdmin, but all the tables show "in use" and have no information (e.g., number of rows, number of bytes, column information, etc.). Any operation on those tables (e.g., SELECT * FROM {table}, REPAIR {table}, CHECK {table}) returns a "no such table" error.
One of the tools I ran across in my search is DBACentral by MicroOLAP. It's got a component that's supposed to restore data from FRM/MYD files, but when I tried to run it, it didn't list any tables that it could recover from my FRM/MYD files.
This is on a developer workstation that's running Vista Business 32bit. MySQL version is 5.0.27. After fixing the machine, I went and got the exact same version of MySQL (v5.0.27), thinking that if I'm just going to drop in the binary data files I should do it with the same version of MySQL. It still didn't work.
Any insights would be greatly appreciated... thanks!
-Josh
Install the same version of MySQL.
Remove the mysql directory from the data directory of the new server and copy it over from the crashed server. This is the key element.
Copy the directory of the database you want to recover into the data directory of the new server.
Start MySQL.
Switch to the mysql database (USE mysql;) and run REPAIR TABLE <table name> on every table.
Do the same with the database you want to recover.
Tip: make sure the copied directories have the same permissions as the data directory.
If you did not save the mysql database (the mysql directory in your old server's data dir), then you can try to:
create a database with the same name as the database you want to recover;
then create each table (it would be best to use the same structure - you'd have a better chance of recovery);
then stop the MySQL server, delete the files from the database directory and overwrite them with the files from the old server;
start MySQL and repair each table.
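A rough command-line version of those steps (paths and the database name are placeholders, and the service/ownership commands are for Linux; on the Windows machine from the question the stop/start and permission steps differ):
# stop MySQL, then copy the grant tables and the database directory
# from the old data directory into the new one
sudo systemctl stop mysql
cp -R /old-datadir/mysql /var/lib/mysql/
cp -R /old-datadir/mydb  /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
# repair every table in both databases
mysqlcheck --repair --databases mysql mydb -u root -p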
I wound up giving up. I think the answer is that, with my particular version of MySQL, this doesn't work. Hopefully things have improved since then.

mysql restore for files on another server

I have a test database on a separate remote server from my production DB. Every once in a while, I want to test things by loading a copy of my production DB into my testing DB. Unfortunately, the backup file is now half a gig and I'm having trouble transferring it via FTP or SSH. Is there an easy way that I can use the mysql restore command between servers? Also, is there another way to move large files that I'm not considering? Half a gig doesn't seem that big; I would imagine people run into this issue frequently.
Thanks!
Are the servers accessible to each other?
If so, you can just pipe the data from one db to another without using a file.
ex: mysqldump [options] | mysql -h test -u username -ppasswd
0. Please consider whether you really need the production data (especially if it contains sensitive information).
1. The simplest solution is to compress the backup on the source server (usually with gzip), transfer it across the wire, then decompress it on the target server; see the sketch after this list.
http://www.techiecorner.com/44/how-to-backup-mysql-database-in-command-line-with-compression/
2. If you don't need an exact replica of the production data (e.g. you don't need application logs, errors or other technical data), you can restore the backup on the source server under a different database name, delete all the unnecessary data, and THEN take the backup that you will actually transfer.
3. Restore the full backup once on a reference server in your dev environment, and then copy only the transaction logs (to replay them on the reference server). Depending on the usage pattern, the transaction logs may take a lot less space than the whole database.
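A sketch of option 1 (host names, credentials and file paths are placeholders):
# on the production server: dump and compress in one step
mysqldump -u root -p proddb | gzip > /tmp/proddb.sql.gz
# transfer the compressed file (scp shown; any transfer method works)
scp /tmp/proddb.sql.gz user@testserver:/tmp/
# on the test server: decompress and restore into the test database
gunzip < /tmp/proddb.sql.gz | mysql -u root -p testdb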
MySQL allows you to connect to a remote database server and run SQL commands on it. Using this feature, we can pipe the output of mysqldump into a mysql client that connects to the remote database server to populate the new database.
mysqldump -u root -prootpass SalesDb | mysql --host=185.32.31.96 -C SalesDb
Use an efficient transfer method, rather than ftp.
If you have a dump file created by mysqldump on the test DB server and you update it every so often, I think you could save time (if not disk space) by using rsync to transfer it. Rsync will use ssh and compress data for the transfer, but I think both the local and remote files should/could be left uncompressed.
Rsync will only transfer the changed portion of a file.
It may take rsync some time to work out what, precisely, has changed in a dump file, but the transfer itself should be quick.
I must admit though, I've never done it with a half-gigabyte dump file.
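A sketch of that rsync workflow (paths and host are placeholders):
# on the production server: refresh the uncompressed dump in place
mysqldump -u root -p proddb > /backups/proddb.sql
# transfer only the changed portions over ssh, compressing in transit
rsync -avz --partial /backups/proddb.sql user@testserver:/backups/proddb.sql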