RHEL5 rsync MySQL server - mysql

Objective: to be able to synchronize 2 Linux servers in real time.
My concern is that after using rsync to mirror the MySQL server, the one thing it wasn't able to synchronize is the entries (i.e., rows inserted into the database with INSERT queries). How can I solve this?
Things I've done:
scp'd the SSH keys between the 2 servers so that a password won't be asked for each transaction
I used
rsync -avc /var/lib/mysql/ root@10.1.99.XXX:/var/lib/mysql/
to sync the database/tables, but wasn't able to sync the entries.

Isubaki,
It's not quite as simple as just using rsync, as mysqld may have the files open at the time you are pushing them across. Linux will do the file copy OK, but with this technique the tables stay locked in memory until the database is restarted.
I do have a script that will do the sync part, but it does require a database restart, which may not be what you want (you mention realtime sync)
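For reference, a minimal sketch of that kind of cold-sync script, assuming SSH keys are already in place and a brief outage is acceptable (the target IP echoes the question; the RHEL5 service name is assumed):

#!/bin/sh
# Stop mysqld on both ends so no files are open mid-copy.
TARGET=10.1.99.XXX
service mysqld stop
ssh root@$TARGET 'service mysqld stop'
# -a preserves ownership and permissions; --delete mirrors removals too.
rsync -av --delete /var/lib/mysql/ root@$TARGET:/var/lib/mysql/
ssh root@$TARGET 'service mysqld start'
service mysqld start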

Related

Manually copy MySQL 5.5 database to a different computer

My company uses a product that uses MySQL 5.5 for its backend database. The product automatically installs and configures MySQL during its installation process. The product can be configured to run in a hot-standby redundant configuration: the same installation is performed on 2 separate servers, and redundant mode is selected during the product's initial configuration. The product internally handles all the processes of duplicating the database data and keeping the 2 databases in sync; MySQL itself has no knowledge of the redundant setup. The MySQL installation on both servers is identical, same location and same structure. The product does not have a very elegant/efficient way to sync a large database (say, 300 GB in size with 3K tables) from the Primary server to the Backup server in cases where this is required, such as when creating a redundant system from a Single/Primary server config that has already been running for a while. My question is as follows.
Is there a safe/supported way to just manually copy the database files from the Primary server to the Backup server, considering that the MySQL installation on both servers is identical? BTW, this is on production Windows Servers. I know I can do a full export of the database from the Primary and then import it on the Backup server, but this can take hours. I am hoping there is a faster supported way to just copy the files from one server to the other, but in researching this I see conflicting info.
System Info
Windows
MySQL 5.5
Identical installation on both servers
"C:\ProgramData\MySQL\MySQL Server 5.5\data"
Innodb
File per table = true
Thanks in advance for any advice.
I once tried to just copy the Database Folder that contains all the innodb table files, "C:\ProgramData\MySQL\MySQL Server 5.5\data\Mydbase", from one server to another but mysql would not start up and had errors.
Yes: shut down the MySQL Server service on both computers. Then you can move the files in the datadir in any way you want. But this incurs some downtime while you do the file transfer.
If you must have no downtime, it's also possible, but requires more steps.
What I do is use Percona XtraBackup to make a physical backup of the source instance, but this won't work as easily for you because you are on Windows. XtraBackup doesn't work on Windows. Some people use tricks to run XtraBackup in a Docker container on Windows.
Then restore the XtraBackup to your new computer in the normal way, and configure it as a replica of the source instance. See https://docs.percona.com/percona-xtrabackup/8.0/howtos/setting_up_replication.html
By making the new instance a replica, you can let it get updated with the most recent changes that have occurred on the source instance while you were setting up the replica.
Then at some point you decide to switch to the new instance. Then you set the source instance to read-only mode, to prevent client applications from making any new changes. Let the replica catch up with the last final changes (this should only take a second if the replica was keeping up with changes already). Now you can change your client applications to use the replica instead of the former source. Then un-configure replication on the new instance with RESET SLAVE because the last thing you want is for any more changes to occur on the former source and replicate to the new instance.
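In terms of actual statements, that cutover looks roughly like this; the host names and account are placeholders:

# On the source: block new writes (needs a privileged account).
mysql --host=source-host -u root -p -e "SET GLOBAL read_only = ON;"
# On the replica: wait until Seconds_Behind_Master reaches 0.
mysql --host=new-host -u root -p -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master
# Once caught up, detach the replica; RESET SLAVE ALL also clears the connection settings.
mysql --host=new-host -u root -p -e "STOP SLAVE; RESET SLAVE ALL;"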
If you try this procedure, I suggest you test it on a test instance — NOT your production instance — until you are comfortable with the tools.
P.S.: In addition to not supporting Windows, I have no idea if the current version of XtraBackup works with MySQL 5.5. That version was released in 2010, and reached its end of life in 2018. So I think you will need to research which version of XtraBackup still can read a MySQL 5.5 instance. You might have to use an old version of XtraBackup.

How to change MySQL 8.0 data folder. Can I use OneDrive folders?

I'm trying to create a database for a very small program that only I and 2 friends use, so the database will be small.
What I want to do is to share this database with these 2 friends, so I thought about storing the data in a OneDrive shared folder.
At the moment, I'm using a txt file as a "database", placed in a shared OneDrive folder, so when my friends run the program it can read the data from there, making it sort of real-time "online".
The thing is that I can't find the my.ini file in my
C:\Program Files\MySQL\MySQL Server 8.0\
directory, so I can't change the data folder.
Another trouble I have is that I also don't have the Data folder in that directory, instead I've found it in this one:
C:\Program Files\MySQL\MySQL Workbench 8.0 CE
Is it possible to do what I want to do? And how should I proceed?
Do you think I should be using a MySQL 5.x version instead?
Thanks
If you want to share a database with a couple of other users, putting the datadir in OneDrive isn't going to work.
MySQL stores data in the files under the datadir, that's true. They're stored in files called tablespaces which have the .ibd extension.
But as you do INSERT/UPDATE/DELETE operations, data is temporarily stored in memory and in the transaction logs (ib_logfile*). MySQL has to do delicate coordination between these to save data in a durable way while ensuring good performance. It works well, but only if the MySQL Server is the only process writing to the files.
OneDrive does not coordinate with MySQL at all. It periodically checks for files that have changed since it last did a sync; the interval is about every 10 minutes, and this is not configurable.
OneDrive may choose to sync the files at a moment after you've done some INSERT/UPDATE/DELETE operations, and the data is modified in RAM, but hasn't yet been updated in the tablespace files on disk.
Once you've committed a change, it must also be safely stored in the transaction log, even if it isn't updated in the tablespace yet. If your friends receive the files in this state (transaction log contains changes that aren't present in the tablespace), MySQL can reconstruct from the transaction log the data that was still in your RAM and never reached them. This is called InnoDB crash recovery. MySQL Server does it automatically if it starts up and finds that the transaction log contains changes that aren't in the tablespace; it assumes you had a sudden reboot and lost what was in RAM.
If your friends try to just keep their MySQL Server running continuously, reading a datadir that is simultaneously being updated by OneDrive, OneDrive will basically overwrite their files and MySQL will become confused. It only checks whether it should do crash recovery when MySQL Server starts. So if the files change in unexpected ways while MySQL Server is already running, it'll just conclude that the hard drive has become corrupted; it'll probably report a fatal error and shut down.
Also, if your friends try to make changes of their own to the database, their changes would conflict with the updates from OneDrive. Their writes would then be synchronized in the opposite direction via OneDrive and would eventually corrupt your database too. This would happen without warning at 10-minute intervals, whenever OneDrive chose to do its file sync.
So I'm afraid OneDrive is not the solution to share your database.
Alternatives that do have a chance of working include:
Hosting a single instance of MySQL Server on a website that you all share, and giving each of you a client that can use the database. A popular free client is phpMyAdmin. That way there'd be only one instance of MySQL Server, one datadir, even though you each would be concurrent clients reading and writing data. This is the simplest solution, most likely to work.
Exporting the data from your MySQL Server instance with mysqldump periodically, and putting the export file on OneDrive, or sending it to your friends through email or any other means. They would then have to import that dump into their MySQL Server manually. This would overwrite any changes they had made to their database, but it wouldn't appear as corruption. If they want to send changes back to you, they do the same in reverse: export their database, put the dump file on OneDrive, and you import it into your MySQL Server instance, overwriting any local changes you'd made since the last time you sent your export. A sketch of this round trip follows this list.
Use a MySQL add-on for multi-server synchronous replication, such as InnoDB Group Replication or Percona XtraDB Cluster. But this is probably too complex for you to set up if you're a newbie with MySQL.
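To sketch the mysqldump round trip from the second alternative (the user names, the database name mydb, and the OneDrive path are made-up placeholders):

# Export your copy of the database into the shared OneDrive folder;
# --databases includes the CREATE DATABASE statement in the dump.
mysqldump --user=me --password --databases mydb > ~/OneDrive/shared/mydb.sql
# A friend imports the dump, overwriting their local copy of mydb.
mysql --user=friend --password < ~/OneDrive/shared/mydb.sql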

Good idea to use SQS to move thousands of databases?

We want to move from using MySQL on an EC2 instance to RDS and set up replication. Seems like a no-brainer, right? Well, I've got 30,000 databases to move (don't ask). While setting up replication seems to work well, the process of getting the 30,000 databases into RDS is a royal pain; it takes forever, and something almost always happens.
The nightly backup takes about two hours. I end up with a multi-GB SQL dump file. When I try to restore it, something almost always goes wrong: the RDS instance wasn't big enough memory-wise and crashed, the localhost ran out of swap space, the network connection went flaky. Whatever! I did get it to restore once; IIRC it took 23 hours (30K MySQL DBs are a ton of file IO).
So today, I decided to use mydumper. It generated 30,000 schema files for the databases in about two hours; then suddenly the source MySQL went into uninterruptible sleep according to top, I lost my client connections, strace showed it was still trying to read files, and the mydumper process crashed. I restarted the whole process and just checked the status; mysqld restarted 2.5 hours into it for some reason.
So here's what I'm thinking and I'd like your input: I write two Python scripts. firstScript.py will run mydumper on a single database, update a status table, package up the SQL, and put it onto an AWS SQS queue, repeating until no more databases are found; secondScript.py reads from the queue, runs the SQL, and updates the status table, repeating until no more messages are found.
I think this can work. Do you? The main thing I'm not sure of is this: can I simply run multiple secondScript.py by Ctrl-Z-ing them into the background?
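Roughly, I picture the pair doing something like this (sketched as shell here; the queue URL and bucket are placeholders, and since an SQS message body caps out at 256 KB, I'd enqueue a pointer to a dump parked in S3 rather than the SQL itself):

# firstScript: dump one database, stash it in S3, enqueue its name.
mydumper --database "$db" --outputdir "/tmp/dumps/$db"
tar czf "/tmp/dumps/$db.tar.gz" -C /tmp/dumps "$db"
aws s3 cp "/tmp/dumps/$db.tar.gz" "s3://my-dump-bucket/$db.tar.gz"
aws sqs send-message --queue-url "$QUEUE_URL" --message-body "$db"

# secondScript: pull a name, fetch the dump, load it into RDS.
msg=$(aws sqs receive-message --queue-url "$QUEUE_URL" --max-number-of-messages 1)
db=$(echo "$msg" | jq -r '.Messages[0].Body')
aws s3 cp "s3://my-dump-bucket/$db.tar.gz" /tmp/ && tar xzf "/tmp/$db.tar.gz" -C /tmp/restore
myloader --database "$db" --directory "/tmp/restore/$db" --host "$RDS_HOST" --user "$RDS_USER" --password "$RDS_PASS"
# (sqs delete-message with the receipt handle omitted for brevity)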
Or does someone have a better way of moving 30,000 databases?
I would not use mysqldump or mydumper to make a logical dump. Loading the resulting SQL-format dump takes too long.
Instead, use Percona XtraBackup to make a physical backup of your EC2 instance and upload the backup to S3. Then restore the RDS instance from S3, set up replication on the RDS instance from your EC2 instance, and let it catch up.
The feature of restoring a physical MySQL backup to RDS was announced in November 2017.
See also:
https://www.percona.com/blog/2018/04/02/migrate-to-amazon-rds-with-percona-xtrabackup/
https://aws.amazon.com/about-aws/whats-new/2017/11/easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup/
You should try it out with something smaller than your 30k databases first, just so you get some practice with the steps. See the steps in the Percona blog linked above.
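In outline, the backup-and-upload side might look like this; the backup account, bucket, and file names are placeholders, and the exact packaging RDS accepts is covered in the links above:

# On the EC2 source: stream a physical backup with XtraBackup.
xtrabackup --backup --stream=xbstream --user=backup_user --password='...' --target-dir=/tmp | gzip > /data/full-backup.xbstream.gz
# Upload to S3, where the RDS "restore from S3" feature can read it.
aws s3 cp /data/full-backup.xbstream.gz s3://my-backup-bucket/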

Using rsync to backup MySQL

I use the following rsync command to backup my MySQL data to a machine within the LAN network. It works as expected.
rsync -avz /mysql/ root:PassWord@192.168.50.180::/root/testme/
I just want to make sure that this is the correct way to use rsync.
I would also like to know if the 5 minute crontab entry for this will work.
Don't use the root user of the remote machine for this. In fact, never directly connect as root; that's a major security risk. In this case, simply create a new user with few privileges that may only write to the backup location.
Don't use a password for this connection; instead, use public-key authentication.
Make sure that MySQL is not running when you do this, or you can easily get a corrupt backup.
Use mysqldump to create a dump of your database while MySQL is running. You can then safely copy that dump.
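Putting those points together, a sketch of a safer cron job; the IP echoes the question, while the MySQL backup account and paths are placeholders:

#!/bin/sh
# Dump while MySQL is running; --single-transaction gives a consistent
# snapshot of InnoDB tables without stopping the server.
mysqldump --user=backup --password='...' --single-transaction --all-databases | gzip > /backups/mysql-$(date +%F-%H%M).sql.gz
# Copy the dump, not the live datadir, as a non-root user with key auth.
rsync -avz /backups/ backupuser@192.168.50.180:/backups/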
I find a better way of doing backups of MySQL is to use the replication facility.
Set up your backup machine as a slave of your master. Each transaction is then automatically mirrored.
You can also shut down the slave and perform a full backup to tape from it. When you restart the slave it synchronises with the master again.
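For example, one pause-and-backup cycle on the slave might look like this (credentials and paths are placeholders):

# Stop applying replication events while the backup runs.
mysql -u root -p -e "STOP SLAVE;"
# Take the full backup from the quiescent slave.
mysqldump -u root -p --all-databases | gzip > /backup/full-$(date +%F).sql.gz
# Resume; the slave catches up with the master automatically.
mysql -u root -p -e "START SLAVE;"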
I don't really know about your rsync command, but I am not sure this is the right/best way to make backups with MySQL; you should probably take a look at this page of the manual: 6.1. Database Backups
DB backups are not necessarily as simple as one might think, considering problems such as locks, delayed writes, and whatever optimizations MySQL does with its data... especially if your tables are not using the MyISAM engine.
About the "5 minutes crontab" : you are doing this backup every five minutes ? If your data is that sensible, you should probably think about something else, like replication to another server, to always have an up-to-date copy.

Migrating a MySQL server from one box to another

The databases are prohibitively large (> 400 MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use ssh to directly pipe your data over the Internet. First set up SSH keys for password-less login. Next, try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption then you can use netcat but it's probably not worth it
The SQL text data goes over the wire compressed!
Obviously, change db_user to your MySQL user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to pass the password with --password= on the command line, because having mysql prompt you in the middle of a pipe will be a lot of headache.
You could set up MySQL slave replication and let MySQL copy the data, and then make the slave the new master.
400 MB is really not a large database; transferring it to another machine will only take a few minutes over a 100 Mbit network. If you do not have 100 Mbit networking between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf files, and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first, of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or to use the -C flag of scp. Our methodology for transferring files is to create a full dump in which the incremental logs are flushed (use --master-data=2 --flush-logs, and check that you don't disturb any slave hosts if you have them). Then we copy the dump and replay it. Afterwards we flush the logs again (mysqladmin flush-logs), take the recent incremental log (which shouldn't be very large), and replay only that. Keep doing this until the last incremental log is very small, so that you can stop the database on the original machine, copy the last incremental log, and then replay it - it should take only a few minutes.
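A sketch of one round of that cycle; the binlog file name, hosts, and credentials are illustrative:

# Full dump with the binlog position recorded and the logs rotated.
mysqldump --all-databases --master-data=2 --flush-logs | gzip > full.sql.gz
# ...copy full.sql.gz to the new host and load it there, then later:
mysqladmin flush-logs
# Replay only the latest closed incremental log against the new host.
mysqlbinlog /var/lib/mysql/mysql-bin.000042 | mysql --host=newhost --user=aaa -p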
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx -p --databases yyy | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
Try importing the dump on the new server using the mysql console, not auxiliary software.
I have no experience with doing this with mysql, but to me it seems the bottleneck is transferring the actual data?
400 MB isn't that much. But if dump -> SCP is slow, I don't think connecting to the DB server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network or burning to disk and manually transferring the data.
Compressing such a dump will most likely give you quite a good compression rate, since there's probably a lot of repetitive data.
If you are only copying all the databases of the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server:
create the empty database in the new server, set up the permissions for users etc.
copy the folder for the database in /data/databasename to the new server /data/databasename
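For instance, the folder copy with rsync (paths follow this thread's /data convention; per the caveats above, only do this with both servers stopped and the exact same MySQL version):

# Copy one database's folder across, then fix ownership for mysqld.
rsync -av /data/databasename/ newhost:/data/databasename/
ssh newhost 'chown -R mysql:mysql /data/databasename'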
I like to use BigDump: Staggered Mysql Dump Importer after Exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note, though: if you don't set the export options (namely the maximum length of created queries) to match the load your new server can handle, it'll just fail and you'll have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.