SQL backup issues - MySQL

I have been doing some research on best backup procedures for largish (27.678 GB) MySQL database tables.
Currently we are using a program called Rapidsync (an offsite backup tool), but it is slow and it locks the tables it is currently backing up, causing downtime/slowness of SQL.
Our current server is running Windows Server 2008 R2, with SQL Server 2008 also on the same box.
Hardware specs for the dedicated server are:
16 GB RAM
CPU: Intel Xeon E3-1230 V2 @ 3.30 GHz
1 TB hard drive
In terms of databases, we have 58 in total, varying in size, some of which ideally need to be backed up weekly or even daily.
Through a program we use called Navicat, you can tunnel to a database over SSH and copy databases manually. Is this a reliable and feasible option if we were to install it on our local machine and copy the databases across? Or would it be more secure/efficient to use a SQL dump (mysqldump) instead?
I hope I have given all of the necessary info but please do ask if you need to know more.
P.S. Only free options at the moment, as we are on a tight budget! :)
Thanks in advance

You mentioned SSH, so I suppose the backups can also be done on a server other than the database server itself. On Unix, you can use the excellent tool Percona XtraBackup, which is free and supports online (non-locking) backups of InnoDB data files as well as incremental backups. It may be possible to compile it on Windows too (part of it is written in C, part in Perl).
So you can set up a weekly full backup and daily incremental backups. The tool keeps track of which pages of the InnoDB data files have changed and copies only those.
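A rough sketch of what those two cron jobs might run (paths and credentials are placeholders, and the exact options depend on your XtraBackup version):

    # Weekly full backup (hypothetical paths/credentials)
    xtrabackup --backup --user=backupuser --password=... \
        --target-dir=/backups/full

    # Daily incremental backup: copies only pages changed since the full backup
    xtrabackup --backup --user=backupuser --password=... \
        --target-dir=/backups/inc-$(date +%F) \
        --incremental-basedir=/backups/full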

Related

Manually copy MySQL 5.5 database to a different computer

My company uses a product that uses MySQL 5.5 for its backend database. The product automatically installs and configures MySQL during its installation process. The product can be configured to run in a hot-standby redundant configuration: the same installation is performed on 2 separate servers, and redundant mode is selected during the product's initial configuration. The product internally handles all the processes of duplicating the database data and keeping the 2 databases in sync; MySQL has no knowledge of the redundant setup. The MySQL installation on both servers is identical, same location and same structure. The product does not have a very elegant/efficient way to sync a large database, say 300 GB in size with 3K tables, from the Primary server to the Backup server in cases where this is required, such as when creating a redundant system from a single/Primary server config that has already been running for a while. My question is as follows.
Is there a safe/supported way to just manually copy the database files from the Primary server to the Backup server, considering that the MySQL installation on both servers is identical? BTW, this is on production Windows servers. I know I can do a full export of the database from the Primary and then import it on the Backup server, but this can take hours. I am hoping there is a faster, supported way to just copy the files from one server to the other, but in researching this I see conflicting info.
System Info
Windows
MySQL 5.5
Identical installation on both servers
"C:\ProgramData\MySQL\MySQL Server 5.5\data"
Innodb
File per table = true
Thanks in advance for any advice.
I once tried to just copy the database folder that contains all the InnoDB table files, "C:\ProgramData\MySQL\MySQL Server 5.5\data\Mydbase", from one server to another, but MySQL would not start up and reported errors.
Yes: shut down the MySQL Server service on both computers. Then you can move the files in the datadir in any way you want. But this incurs some downtime while you do the file transfer.
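For example, with the MySQL service stopped on both machines, something along these lines (the service name depends on how MySQL was installed, and the destination share here is purely hypothetical):

    net stop MySQL
    robocopy "C:\ProgramData\MySQL\MySQL Server 5.5\data" ^
             "\\BACKUPSERVER\d$\MySQL\data" /MIR /Z
    net start MySQL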
If you must have no downtime, it's also possible, but requires more steps.
What I do is use Percona XtraBackup to make a physical backup of the source instance, but this won't work as easily for you because you are on Windows. XtraBackup doesn't work on Windows. Some people use tricks to run XtraBackup in a Docker container on Windows.
Then restore the XtraBackup to your new computer in the normal way, and configure it as a replica of the source instance. See https://docs.percona.com/percona-xtrabackup/8.0/howtos/setting_up_replication.html
By making the new instance a replica, you can let it get updated with the most recent changes that have occurred on the source instance while you were setting up the replica.
At some point you decide to switch to the new instance: set the source instance to read-only mode to prevent client applications from making any new changes, and let the replica catch up with the final changes (this should take only a second if the replica was already keeping up). Now you can point your client applications at the replica instead of the former source. Finally, un-configure replication on the new instance with RESET SLAVE, because the last thing you want is for any more changes to occur on the former source and replicate to the new instance.
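A rough sketch of that cutover (hostnames and credentials are placeholders; MySQL 5.5 uses the SLAVE terminology for these commands):

    # On the old source: block new writes
    mysql -h old-primary -u root -p -e "SET GLOBAL read_only = ON;"

    # On the new instance: wait until Seconds_Behind_Master reaches 0
    mysql -h new-server -u root -p -e "SHOW SLAVE STATUS\G"

    # After repointing the applications, un-configure replication there
    # (RESET SLAVE ALL, available from 5.5.16, also clears the connection settings)
    mysql -h new-server -u root -p -e "STOP SLAVE; RESET SLAVE;"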
If you try this procedure, I suggest you test it on a test instance — NOT your production instance — until you are comfortable with the tools.
P.S.: Besides the Windows issue, I have no idea if the current version of XtraBackup works with MySQL 5.5. That MySQL version was released in 2010 and reached its end of life in 2018. So I think you will need to research which version of XtraBackup can still read a MySQL 5.5 instance; you might have to use an old version of XtraBackup.

What is an efficient way to maintain a local readonly copy of a live remote MySQL database?

I maintain a server that runs daily cron jobs to aggregate data sources and generate reports, accessible by a private Ruby on Rails application.
One of our data sources is a partial dump of one of our partner's databases. The partner runs an active application and the MySQL DB has hundreds of tables. They have given us read-only access to a relatively underpowered readonly slave of their application DB.
Because of latency issues and performance bottlenecking on their slave DB, we have been maintaining a limited local copy of their DB. We only need about 20 tables for our reports, so I only dump those tables. We also only need the data to a daily granularity, so realtime sync is not a requirement.
For a few months, I had implemented a nightly cron which streamed the dump of the necessary tables into a local production_tmp database. Then, when all tables were imported, I dropped production and renamed production_tmp to production. This was working until the DB grew to over 25GB, and we started running into disk space limitations.
For now, I have removed the redundancy step and am just streaming the dump straight into production on our local server. This feels a bit flimsy to me, and I would like to implement a safer approach. Also, currently doing the full dump/load takes our server over 2 hours, and I'd like to implement an approach that doesn't take as long. The database will only keep growing, so I'd like to implement something future proof.
Any suggestions would be appreciated!
I take it you have never heard of, or considered, MySQL Replication?
The idea is that you do your backup & restore once, and then configure the replica to "subscribe" to a continuous stream of changes as they are made on the primary MySQL instance. Any change applied to the primary is applied automatically to the replica within seconds. You don't have to do the backup & restore procedure again, unless the replica gets damaged.
It takes some care to set up and keep working, but it's a much more efficient method of keeping two instances in sync.
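For reference, setting up the replica boils down to restoring one initial backup and then pointing the replica at the primary. A minimal sketch, where the host, credentials, and binlog coordinates are placeholders taken from that initial backup:

    # Run on the replica after restoring the initial backup
    mysql -u root -p -e "
      CHANGE MASTER TO
        MASTER_HOST='partner-db.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='...',
        MASTER_LOG_FILE='mysql-bin.000123',
        MASTER_LOG_POS=4;
      START SLAVE;"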
@SusannahPotts mentions hot backup and/or incremental backup. You can get both of these features for free, without paying for MySQL Enterprise, by using Percona XtraBackup.
You can also consider using MySQL Transportable Tablespaces.
You'll need filesystem access to run either Percona XtraBackup or MySQL Enterprise Backup. It's not possible to use these physical backup tools for Amazon RDS, for example.
One alternative is to create a replication slave in the same network as the live system, and run Percona XtraBackup on that slave, where you do have filesystem access.
Another option is to stream the binary logs to another host (see https://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html) and then transfer them periodically to your local instance and replay them.
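That approach amounts to running something like the following on your local host (host and credentials are placeholders; the --raw and --stop-never options require a 5.6+ mysqlbinlog client):

    mysqlbinlog --read-from-remote-server --host=partner-db.example.com \
        --user=repl --password --raw --stop-never mysql-bin.000001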
Each of these solutions has pros and cons. It's hard to recommend which solution is best for you, because you aren't sharing full details about your requirements.
This was working until the DB grew to over 25GB, and we started running into disk space limitations.
Some question marks here:
Why don't you just increase the available disk space for your database? 25 GB is next to nothing in terms of disk space these days.
Why don't you modify your script to work table by table: download table1, import it as table1_tmp, drop table1_prod, rename table1_tmp to table1_prod; rinse and repeat (see the sketch after these questions).
Other than that:
Why don't you ask your partner for a system with enough performance to run your reports on? I'm quite sure they would prefer that over having you download sensitive data to your "local site" every day.
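On the table-by-table idea above, a hypothetical sketch (schema names, hosts, and credentials are placeholders, and it assumes a scratch schema production_tmp already exists):

    for t in orders customers invoices; do
      # Reload one table at a time into the scratch schema
      mysqldump -h partner-replica -u report -pSECRET source_db "$t" \
        | mysql -u root -pSECRET production_tmp
      # Swap it into place atomically, then drop the previous copy
      mysql -u root -pSECRET -e "
        DROP TABLE IF EXISTS production.${t}_old;
        RENAME TABLE production.${t}     TO production.${t}_old,
                     production_tmp.${t} TO production.${t};
        DROP TABLE production.${t}_old;"
    done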
Last thought (requires MySQL Enterprise Backup https://www.mysql.de/products/enterprise/backup.html):
Rather than dumping, downloading and importing 25 GB every day:
Create a full backup
Download and import
Use differential or incremental backups from then on.
The next day you download (and import) only the data-delta: https://dev.mysql.com/doc/mysql-enterprise-backup/4.0/en/mysqlbackup.incremental.html
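With MySQL Enterprise Backup that would look roughly like this (directories and credentials are placeholders):

    # Once: full backup
    mysqlbackup --user=backupuser --password --backup-dir=/backups/full backup

    # Daily: incremental backup containing only changes since the last backup
    mysqlbackup --user=backupuser --password --incremental \
        --incremental-base=history:last_backup \
        --backup-dir=/backups/inc-$(date +%F) backup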

best tool to take a backup from mysql

I am using MySQL Workbench to take a backup/dump of my database hosted on the Amazon RDS service. My database is very large (about 8 GiB) and it takes 9-10 hours to download it from the read replica; meanwhile, I cannot tell whether the download process is stuck or still running.
Is there any GUI tool that can take a backup faster and also show details of the running process, such as which table is currently being downloaded, its row count, or the percentage of the total download completed? MySQL Workbench is a good tool, but it doesn't expose all of the options provided by the 'mysqldump' command-line utility, and it is also very slow. I also have doubts about my data integrity. Can someone explain how it works, especially with regard to data integrity?
Thanks
First of all, your 8 GB database is by no means 'huge'. Second, I'm not clear on what you're trying to do. Amazon provides multiple ways for you to have backups.
From: http://aws.amazon.com/rds/faqs/
Q: Do I need to enable backups for my DB Instance or is it done automatically?
By default and at no additional charge, Amazon RDS enables automated backups of your DB Instance with a 1 day retention period.

What are the best practices for Mysql backup

We have a PHP application and a MySQL server running on one of our production servers.
The MySQL server is currently 4 GB in size, with the intention to grow to tens or even hundreds of GB.
What I am curious to find out is: what are the best practices for backing up a MySQL database, given that the application must stay live under any circumstances? Which is better, having a MySQL replication server on which we run the backup scripts, or running them on the live server? Which is more likely to slow things down? We have the possibility to add additional server(s) if needed. Where should I store the MySQL dumps? Is it advisable to copy MySQL backup files to a remote server via FTP?
What is the best practice for organizing web application backups if the number of server instances is not a problem?
MySQL backup methods are documented in the MySQL documentation.
The ideal backup solution would be MySQL Enterprise Backup. This is a licensed product sold through the Oracle store. It is very fast compared to mysqldump.
MySQL Enterprise Backup: A licensed product that performs hot backups
of MySQL databases. It offers the most efficiency and flexibility when
backing up InnoDB tables, but can also back up MyISAM and other kinds
of tables.
If you are looking for a free solution with the MySQL Community Edition, then you can install another replication server and either run mysqldump to take a backup or make a raw data backup. While the backup runs on your replication server, your main master database keeps running. Since your data is large or will get larger, it is recommended to back up the raw data files. This is basically a process of copying the data and log files from disk. Details are explained in the MySQL documentation.
For larger databases, where mysqldump would be impractical or
inefficient, you can back up the raw data files instead. Using the raw
data files option also means that you can back up the binary and relay
logs that will enable you to recreate the slave in the event of a
slave failure.
Finally, you should copy backup files to another physical disk on the same server to recover from disk failures, or to another physical server to easily recover from complete server failures.
Replication protects against hardware errors, for example a hard disk crash.
A backup protects against software and human errors, for example data accidentally deleted from a table.
It is definitely good practice to combine both of these technologies by running the backup utility on a replica. This not only reduces the load on the production database, but also covers more recovery scenarios.
In case of a hardware error, you can restore the most up-to-date data from the replica, and in case of data corruption, you can decide from which date to restore a backup. And if both the main server and the replica fail, the backup will still save you.
What is the best way to make backups?
mysqldump is a good solution for small databases. This is a utility for creating logical backups and it is included with MySQL Server. As output, the utility creates a .sql file that can be used to recreate the database.
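For example, a consistent dump of one InnoDB database without locking it (database name and credentials are placeholders):

    mysqldump --single-transaction --routines --triggers \
        -u backupuser -p mydb > mydb-$(date +%F).sql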
For large databases, it is better to use a physical backup. There are two ways on how to do it.
mysqlbackup is a utility included with MySQL Enterprise Edition. As a result, you get a binary (physical) backup. Such a backup is created much faster than with mysqldump and puts less load on the server.
xtrabackup, from Percona, is a lot like the MySQL Enterprise backup utility, but it's free. A more detailed comparison can be found here.
How often the backups should be made?
The more often you make backups, the better, but you can't keep too many of them, since you will run out of space in the backup storage. There are two ways around this:
Find a compromise between the frequency of backups and the duration of storage.
Use incremental backups. The above utilities support incremental backups, but the management of such backups is more complicated (read more here)
Where the backups should be stored?
Anywhere you prefer, but not in the same place as the MySQL server itself. Overall, I think using cloud storage is a good choice; almost every provider today has a command-line interface.
How to automate a backup?
The process of creating regular backups should be automated, and a person should intervene in it only in case of failure. A good backup process should include the following steps:
Creating a backup copy
Compression/Encryption
Uploading to storage
Sending a success/fail notification
Removing old backups from the storage (so that it does not overflow)
The simplest script that implements this can be found, for example, here.
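As an illustration only, a bare-bones sketch of those steps (bucket name, address, and retention period are placeholders; it assumes credentials come from ~/.my.cnf, the AWS CLI and a local mail command are available, and the encryption step is omitted for brevity):

    #!/bin/bash
    set -euo pipefail
    STAMP=$(date +%F)
    # 1. Create and compress the backup
    mysqldump --single-transaction --all-databases | gzip > "/backups/all-$STAMP.sql.gz"
    # 2. Upload it to remote storage
    aws s3 cp "/backups/all-$STAMP.sql.gz" "s3://my-backup-bucket/mysql/"
    # 3. Send a success notification
    echo "MySQL backup $STAMP uploaded" | mail -s "MySQL backup OK" admin@example.com
    # 4. Remove local copies older than 14 days
    find /backups -name '*.sql.gz' -mtime +14 -delete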
Something else?
Yes, the most important thing is not creating a backup but being able to restore from it. Therefore, it is best practice to regularly test your recovery scenarios.
Happy backups!
What is better, to have mysql replication server on which we will run backup scripts or to run on live server
It depends on your db size (and time needed to dump it using mysqldump) and your reliability requirements.
If your DB is relatively small and mysqldump dumps it in seconds or a few minutes, then it's fine to just run scheduled backups. In most cases it is sufficient to have a daily backup that runs at a time when your app is mostly idle (at night, when your clients are sleeping). You can use the handy tool automysqlbackup for that: it takes care of backup rotation; all you need to do is add it as a cron task and set up its config once.
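For example, a crontab entry along these lines (the install path and config file are placeholders and depend on how automysqlbackup was installed):

    # Run automysqlbackup every night at 03:00
    0 3 * * * /usr/local/bin/automysqlbackup /etc/automysqlbackup/myserver.conf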
Setting up a replica is only needed if:
Your backup takes a long time (tens of minutes or hours) to complete, so you cannot just stop your service for that long.
You cannot afford losing any history in case of a main DB crash. E.g. if you process financial transactions, you may want to ensure that nothing will be lost if the master DB server dies.
In these cases you may want a replica with backups. Though you must understand that replication adds a new layer of problems: replicas may go out of sync, or silently crash (and you will not notice, since the master and your app keep running fine), etc.

Is it possible to real-time synchronize 2 SQL Server databases

I have an application that runs on server A, and the database is on the same server.
There is a backup server B, which I use in case server A is down.
The application will remain unchanged, but the data in the DB changes constantly.
Is there a way to synchronize those 2 databases in real time, automatically?
Currently I wait until all the users are gone so I can manually back up and restore on the backup server.
Edit: When I said real-time, I didn't mean it literally; I can handle up to a one-hour delay, but the faster the sync, the better.
My databases are located on 2 servers on the same local network.
Both of them are SQL Server 2008; the main DB is on Windows Server 2008
and the backup is on Windows Server 2003.
A web application (intranet) is using the DB.
I can use SQL Agent (if that helps).
I don't know what kind of details would be useful to solve this; kindly tell me what could help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.
I think what you want is Peer-to-Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability
solution by maintaining copies of data across multiple server
instances, also referred to as nodes. Built on the foundation of
transactional replication, peer-to-peer replication propagates
transactionally consistent changes in near real-time. This enables
applications that require scale-out of read operations to distribute
the reads from clients across multiple nodes. Because data is
maintained across the nodes in near real-time, peer-to-peer
replication provides data redundancy, which increases the availability
of data.