How to restore a MySQL database "safely"

How should you go about backing up and restoring a MySQL database "safely"? By "safely" I mean: the restore should create or overwrite the desired database, but must not risk altering anything outside that database.
I have already read https://dev.mysql.com/doc/refman/5.7/en/backup-types.html.
I have external users, and they and I may want to exchange backups for restore. We do not have MySQL Enterprise Backup (the commercial product), and we are not looking for a third-party commercial offering.
In Microsoft SQL Server there are BACKUP and RESTORE commands. BACKUP creates a file containing just the database you want; both its rows and all its schema/structure are included. RESTORE accepts such a file and creates or overwrites the database. The user can restore to a same-named database or specify a different database name. This kind of behaviour is just what I am looking for.
In MySQL I have come across 3 possibilities:
Most people seem to use mysqldump to create a "dump file" and mysql to read it back in. The dump file contains a list of arbitrary MySQL statements, which are simply executed by mysql. This is quite unacceptable: the file could contain any SQL statements. (Limiting the access rights of the restoring user to try to ensure it cannot do anything "naughty" is not acceptable.) There is also the issue that the user may have created the dump file with the "Include CREATE Schema" option (MySQL Workbench), which hard-codes the original database name for recreation. This "dump" approach is totally unsuitable for me, and I find it surprising that anyone would use it in a production environment.
I have come across MySQL's SELECT ... INTO OUTFILE and LOAD DATA INFILE statements. At least they do not contain SQL code to execute. However, they look like a lot of work, deal with one table at a time rather than the whole database, and don't deal with the structure of the tables; you have to know that yourself when restoring. There is a mysqlimport helper command-line utility, but I don't see anything for the export side, and I don't see how it would restore a complete database.
The last possibility is to use what MySQL refers to as "Physical (Raw)" rather than "Logical" backups. These work on the database directories and files themselves. It is the equivalent of SQL Server's detach/attach method for backing up/restoring. But, as per https://dev.mysql.com/doc/refman/5.7/en/backup-types.html, it has all sorts of caveats, e.g. "Backups are portable only to other machines that have identical or similar hardware characteristics." (I have no idea what my users run; some are on Windows, some are not, and I know nothing about their hardware) and "Backups can be performed while the MySQL server is not running. If the server is running, it is necessary to perform appropriate locking so that the server does not change database contents during the backup." (let alone during restores).
So can anything satisfy (what I regard as) my modest requirements, as outlined above, for MySQL backup/restore? Am I really the only person who finds that these three unacceptable approaches are the only possibilities?

1 - mysqldump - I use this quite a bit, usually in environments where I am handling all the details myself. I do have one configuration where I use that to send copies of a development database - to be dumped/restored in its entirety - to other developers. It is probably the fastest solution, has some reasonable configuration options (e.g., to include/exclude specific tables) and generates very functional SQL code (e.g., each INSERT batch is small enough to avoid locking/speed issues). For a "replace entire database" or "replace key tables in a specific database" solution, it works very well. I am not too concerned about the "arbitrary SQL commands" problem - if that is an issue then you likely have other issues with users trying to "do their own thing".
2 - SELECT ... INTO OUTFILE and LOAD DATA INFILE - The problem with these is that if you have any really big tables then the LOAD DATA INFILE statement can cause problems, because it tries to load everything all at once. You also have to add code to create (if needed) or empty the tables before LOAD DATA (see the sketch after this list).
3 - Physical (raw) file transfer. This can work but under limited circumstances. I had one situation with a multi-gigabyte database and decided to compress the raw files, move them to the new machine, uncompress and just tell MySQL "everything is already there". It mostly worked well. But I would not recommend it for any unattended/end-user process due to the MANY possible problems.
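To make point 2 concrete, here is a minimal sketch, assuming a table named orders in a database named mydb and a server whose secure_file_priv setting allows writing to /var/lib/mysql-files (all of these names and paths are placeholders):

    # Export one table's rows (no structure) to a tab-delimited file on the server host:
    mysql mydb -e "SELECT * FROM orders INTO OUTFILE '/var/lib/mysql-files/orders.txt'"

    # On restore you must (re)create or empty the table yourself before loading:
    mysql mydb -e "TRUNCATE TABLE orders"
    mysql mydb -e "LOAD DATA INFILE '/var/lib/mysql-files/orders.txt' INTO TABLE orders"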
What do I recommend?
1 - mysqldump - live with its limitations and risks, set up a script to call mysqldump and compress the file (I am pretty sure there are options in mysqldump to do the compression automatically; otherwise pipe it through gzip), include the date in the file name so that there is less confusion as the files are sent around, and make a simple script for users to load the file (a sketch of such a pair of scripts follows these recommendations).
2 - Write your own program. I have done this a few times. This is more work initially, but allows you to control every aspect of the process and to transfer a file that only contains data, without any actual SQL code. You can control the specific database, tables, etc. One catch is that if you make any changes to the table structure, indexes, etc., you will need to make sure that information is somehow transmitted to the receiving program so that it can change the structures as needed - that is not a problem with mysqldump, as it normally replaces the tables, creating the new structures, indexes, etc. This can be written in any language that can connect to MySQL - it does not have to be the same language as your application.
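A minimal sketch of recommendation 1, assuming a database named mydb and the standard client tools (compression is done by piping through gzip):

    # Dump one database, compress it, and date-stamp the file name:
    mysqldump --single-transaction --routines mydb | gzip > mydb_$(date +%F).sql.gz

    # Simple load script for the receiving side (the date in the file name is a placeholder):
    gunzip < mydb_2024-01-31.sql.gz | mysql mydb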

If you're not going to use third-party tools (like innobackupex, for example) then you're limited to using ... mysqldump, which ships in the mysql package.
I can't understand why that is not acceptable for you, i.e. why you don't like SQL commands in those dumps. Best practice, when restoring a single database into a server which already contains other databases, is to have a separate user with rights only to write to the restored database. Then even if the user performing the restore changed the SQL commands and tried to write to another database, they would not be able to.
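A minimal sketch of that restricted-user setup, assuming the restored database is called customer_db and the restore account is restorer (both names are placeholders):

    # Create an account that can only write to the one database being restored:
    mysql -u root -p -e "CREATE USER 'restorer'@'localhost' IDENTIFIED BY 'change-me'"
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON customer_db.* TO 'restorer'@'localhost'"

    # Restoring a dump with this account cannot touch any other database:
    mysql -u restorer -p customer_db < dump.sql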
When doing a raw backup (a physical copy of the database files) you need to have all the instances down, i.e. the MySQL server not running. "Similar hardware" also means you need to have the same directory layout as the source server (unless you change my.cnf before starting the server and put all the files into the right directories).
When coming to MySQL, try not to compare it to SQL Server - it has a totally different approach and philosophy.
But if you do convince yourself to use a third-party tool anyway - I recommend innobackupex from Percona, which is free, by the way.

The export tool that complements mysqlimport is mysqldump --tab. This outputs tab-delimited data files, just like SELECT ... INTO OUTFILE. It also outputs the table structure in much smaller .sql files. So there are two files for each table.
Once you recreate your tables from the .sql files, you can use mysqlimport to import all the data files. You can even use the mysqlimport --use-threads option to make it load multiple data files in parallel.
This way you have more control over which schema to load the data into, and it should run a lot faster than loading a large SQL dump.
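A minimal sketch of that round trip, assuming a source database mydb, a target database target_db, and a dump directory the MySQL server can write to (mysqldump --tab writes the data files on the server host, so secure_file_priv has to allow it; all names and paths are placeholders):

    # Export: one .sql (structure) and one .txt (data) file per table:
    mysqldump --tab=/var/lib/mysql-files/mydb_dump mydb

    # Recreate the tables on the target from the .sql files:
    cat /var/lib/mysql-files/mydb_dump/*.sql | mysql target_db

    # Load all the data files in parallel; mysqlimport derives each table name
    # from its file name (--local reads the files from the client machine):
    mysqlimport --local --use-threads=4 target_db /var/lib/mysql-files/mydb_dump/*.txt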

Related

How can I dump a MySQL table in parts?

I have a Linux server and a huge MySQL table that I need to dump. The thing is, the server is in production and I don't want to crash it by dumping everything at once. I also intend to pipe the dump over ssh to another server, because I don't want to fill up the local disk space. I know about the mysqldump --where clause, but I don't want to script those IDs. Is there any native functionality in MySQL that allows dumping in parts? It doesn't have to be mysqldump, but it needs to be in parts so I don't crash the server, and I'll need to pipe it over ssh.
Additional info: records are never updated in this table; they are only added.
MySQL documentation: as outlined in their docs, mysqldump is not suited for large databases. They suggest backing up the raw data files instead.
If your concern really is the load, and not crashing production, then maybe you should take a look at this post: How can I slow down a MySQL dump so as to not affect current load on the server?
It covers how to back up large production databases using the right mysqldump arguments.
Slicing a production database dump may end up being more dangerous in the end.
Also, I don't know how often entries get updated in the db, but slicing the export would give you an inconsistent dump of the data, with slices of the same table taken at different times.
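Since, as noted above, rows in this table are only ever added, a single-pass dump piped straight over ssh avoids both the load spike and the local disk-space concern. A minimal sketch, assuming an InnoDB table (host, database and table names are placeholders):

    # Dump in one consistent pass, streaming rows rather than buffering the table,
    # compress in the pipe, and write the result only on the remote host:
    mysqldump --single-transaction --quick bigdb big_table \
      | gzip \
      | ssh user@backuphost 'cat > /backups/big_table.sql.gz'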

Copy live database to development-server

We have a production MySQL database with private user data inside (password hashes, IPs, emails, etc.).
When a developer runs a build job in Jenkins on his developer VM, we want to include a copy of the live database so that he gets an environment which is very similar to our production one. But we have to clean up the production database before it is copied to the dev server, for two reasons:
Developers shouldn't get a copy of all our user data like hashed passwords or emails
The database is big, so we want to delete some of the contents so that the dev has a few real data sets for testing, but not > 100k rows; that would have no benefit and would increase the time the dump takes
I thought about this and tried a few things, but I found no method which is fast and will do the job.
My first idea was to make a dump of all the data with mysqldump, import it on the dev machine, and run some MySQL queries that set placeholders in place of the private data:
UPDATE user_data SET email = "dev@example.com" [...]
On the one hand this is slow, because it has to copy the huge database AND run the queries. And I don't like that all of our user data is on the dev machine, even for a short period of time. I would prefer it if the data were cleaned first and then exported to the dev machine. This would be possible by copying the database into a temporary one on the production system, then cleaning the data, exporting it, and deleting the copied database on the production system. But this also creates a lot of overhead.
What is a good and fast method for doing this?
I thought about something like mysqldump that replaces the data on the fly, so that no extra overhead is created, but I can't find any tool which can do this.
Do you have enough room for two databases on the production server? If so, make a developer db on the same server (or any server, really) which is a nightly dump of production, minus all the sensitive information and bulk.
Developers get access to only this "developer" database, from the production server, which you know has been pruned of anything sensitive. As a bonus, they could connect directly to it, and possibly never need to download it.
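A minimal sketch of such a nightly job, assuming the production database is prod, the pruned copy is devdb (already created), and that user_data has an id column; the table and email column come from the question, everything else is a placeholder:

    # Copy production into the developer database on the same server:
    mysqldump --single-transaction prod | mysql devdb

    # Scrub sensitive columns (hashed passwords would be handled the same way)
    # and trim the bulk, all before any developer can see the data:
    mysql devdb -e "UPDATE user_data SET email = CONCAT('dev', id, '@example.com')"
    mysql devdb -e "DELETE FROM user_data WHERE id > 100000"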

Populate MySQL timezone info from MySQL data set, not from system without restart

I'd like to populate the MySQL timezone tables with the database provided by MySQL. I am using a cloud DB, so I can't overwrite database files or restart the server.
Can someone help me understand how to load these files manually?
Rationale
I loaded the tz tables from the OS, but the OS has a ton of timezone names. I'd like a more concise set of names that I can query for forms. I think the set provided by MySQL might be a better fit. No other apps are running on the database, thus timezone conflicts aren't an issue.
The database provided by MySQL comes as a bunch of MyISAM container files; I don't think you're going to be able to safely drop them into the mysql database directory without bouncing your mysqld.
Do you own this mysqld, or are you one of many tenants in a vendor-owned system?
If you own it, you can load a subset of the /usr/share/zoneinfo time zones. A useful subset might be /usr/share/zoneinfo/posix.
If you're using the mysql.time_zone_name.Name to populate a pick list (a good use for it) you could select an appropriate subset of the admittedly enormous list of names,
or create some aliases right in that table.
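If you do own the mysqld, loading the smaller POSIX subset and then building a pick list could look roughly like this (the paths, the root account and the 'America/%' filter are assumptions):

    # Load only the POSIX subset of the system zoneinfo into the mysql schema:
    mysql_tzinfo_to_sql /usr/share/zoneinfo/posix | mysql -u root -p mysql

    # Build a more concise pick list from the loaded names, e.g. one region:
    mysql -e "SELECT Name FROM mysql.time_zone_name WHERE Name LIKE 'America/%' ORDER BY Name"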
I ended up loading the tables into a MySQL server on my own local machine, then exporting INSERT statements and manually loading those onto the server over which I don't have direct control. Not a glamorous solution, but it appears to be the only reasonable way to go about it.

Pulling data from MySQL into Hadoop

I'm just getting started with learning Hadoop, and I'm wondering the following: suppose I have a bunch of large MySQL production tables that I want to analyze.
It seems like I have to dump all the tables into text files, in order to bring them into the Hadoop filesystem -- is this correct, or is there some way that Hive or Pig or whatever can access the data from MySQL directly?
If I'm dumping all the production tables into text files, do I need to worry about affecting production performance during the dump? (Does it depend on what storage engine the tables are using? What do I do if so?)
Is it better to dump each table into a single file, or to split each table into 64mb (or whatever my block size is) files?
Importing data from MySQL can be done very easily. I recommend using Cloudera's Hadoop distribution; it comes with a program called 'sqoop' which provides a very simple interface for importing data straight from MySQL (other databases are supported too).
Sqoop can be used with mysqldump or with a normal MySQL query (SELECT * ...).
With this tool there's no need to manually partition tables into files, but for Hadoop it's much better to have one big file.
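A minimal sketch of such an import (the connection details, credentials and paths are all placeholders):

    # Pull one MySQL table into HDFS using 4 parallel map tasks:
    sqoop import \
      --connect jdbc:mysql://dbhost/proddb \
      --username analyst -P \
      --table big_table \
      --num-mappers 4 \
      --target-dir /user/hadoop/big_table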
Useful links:
Sqoop User Guide
2)
Since I don't know your environment I will err on the safe side - YES, worry about affecting production performance.
Depending on the frequency and quantity of data being written, you may find that it processes in an acceptable amount of time, particularly if you are just exporting new/changed data. [subject to the complexity of your queries]
If you don't require real-time data, or your servers typically have periods when they are underutilized (overnight?), then you could create the files at that time.
Depending on how your environment is set up, you could replicate/log-ship to specific DB server(s) whose sole job is to create your data file(s).
3)
No need for you to split the file; HDFS will take care of partitioning the data file into blocks and replicating it over the cluster. By default it will automatically split it into 64 MB data blocks.
see - Apache - HDFS Architecture
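In other words, one big export file per table is fine; a single copy into HDFS is enough and the block splitting happens automatically (paths are placeholders):

    # Upload one large per-table export; HDFS splits it into blocks on its own:
    hadoop fs -put big_table.tsv /data/big_table/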
re: Wojtek's answer - SQOOP link (doesn't work in comments)
If you have more questions or specific environment info, let us know
HTH
Ralph

How to bring files in a filesystem in/out MySQL DB?

The application that I am working on generates files dynamically as it is used. This makes backup and synchronization between staging, development and production a really big challenge. One way we might get a smooth solution (if feasible) is to have a script that, at the moment of backing up the database, can also back up the dynamically generated files into the database, and at restore time can bring those files out of the database and into the filesystem again.
I am wondering if there are any available (paid or free) applications or scripts that could make this happen.
Basically if I have
/usr/share/appname/server/dynamicdir
/usr/share/appname/server/otherdir/etc/resource.file
Then the script would take the paths above and put those files into the MySQL database.
Please let me know if you need more information.
Do you mean that the application is storing files as BLOBs in the MySQL database, and/or creating lots of temporary tables? Or that you just want temporary files - themselves unrelated to the database - to be stored in MySQL as a backup?
I'm not sure that trying to use MySQL as a net-new intermediary for backups of files is a good idea. If the app already uses it that way, that's one thing; if not, MySQL isn't the right tool here.
Anyway. If you are interested in capturing a filesystem at point-in-time, the answer is to utilize LVM snapshots. You would likely have to rebuild your server to get your filesystems onto LVM, and have enough free storage there for as many snapshots as you think you'd need.
I would recommend having a new mount point just for this app's temporary files. If your MySQL tables are using InnoDB, then with a simple script that runs mysqldump --single-transaction in the background and then the LVM snapshot process, you could get these synced up to within less than a second.
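A rough sketch of that script, assuming the app's files live on an LVM logical volume vg0/appfiles, the database is appdb, and there is spare space in the volume group for the snapshot (all names and sizes are placeholders):

    # Start a consistent InnoDB dump in the background:
    mysqldump --single-transaction appdb | gzip > /backups/appdb_$(date +%F).sql.gz &
    DUMP_PID=$!

    # Snapshot the filesystem holding the dynamically generated files:
    lvcreate --snapshot --size 5G --name appfiles_snap /dev/vg0/appfiles

    # Copy the files out of the snapshot, then release it:
    mkdir -p /mnt/appfiles_snap
    mount -o ro /dev/vg0/appfiles_snap /mnt/appfiles_snap
    rsync -a /mnt/appfiles_snap/ /backups/appfiles_$(date +%F)/
    umount /mnt/appfiles_snap
    lvremove -f /dev/vg0/appfiles_snap

    # Wait for the dump started above to finish:
    wait "$DUMP_PID"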
This should be trivial to accomplish using PHP, Perl, Python, etc. Are you looking for someone to write this for you?