MySQL Restore from many dump files - mysql

We have a regular backup system which backs up every table from the DB into a separate file.
For example, a table named
foo
will be dumped and compressed into
foo.sql.bz2
I googled this kind of compression and tried everything I could think of, but I am out of ideas.
Does anyone know which tool makes backups like this, and how can I restore the whole DB from millions of those files?
PS: We have over 700 tables, so restoring them one by one is... kind of impractical.

The .bz2 extension usually denotes a bzip2-compressed file.
To decompress:
bzip2 -d foo.sql.bz2 # produces file "foo.sql"
Combine with find, and the magic happens:
find /path/to/dump/directory -name "*.sql.bz2" | xargs bzip2 -cd | mysql [options]
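If you would rather see per-file progress (and stop at the first failure), a simple loop works too; the database name and credentials below are placeholders:
for f in /path/to/dump/directory/*.sql.bz2; do
    echo "restoring $f"
    # decompress to stdout and feed the SQL straight into the server
    bzip2 -cd "$f" | mysql -u root -p'secret' mydb || break
done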

Related

How to restore a single table from a mysqldump that is gzipped with an .sql.gz suffix?

I know how to restore a table from a mysqldump.sql file, but is it possible to restore a single table from mysqldump.sql.gz?
I know there is a very similar question to mine, but I want to restore data from a file that is .sql.gz, and I have only seen methods for the plain .sql suffix.
I also know that I can just decompress the file, but I would prefer to restore the table without decompressing the whole backup.
I think you can try sed in the terminal, streaming the compressed dump through zcat so you never have to write the whole decompressed file to disk.
If your table is named users, try something like this:
$ zcat mysqldump.sql.gz | sed -n -e '/CREATE TABLE.*`users`/,/Table structure for table/p' > users.dump
This copies everything from the CREATE TABLE statement for users up to the start of the next table in your dump into users.dump.
After that you know what to do ;)
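If the goal is just to get the table back into a database, the extraction can be piped straight into mysql instead of going through users.dump; the database name and credentials here are placeholders:
zcat mysqldump.sql.gz | sed -n -e '/CREATE TABLE.*`users`/,/Table structure for table/p' | mysql -u root -p yourdb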

I have an old MediaWiki root dir, and I am not certain whether I'm able to restore it

This is an older installation for which I have (what I believe to be) the full root directory of the MediaWiki site.
The version of MediaWiki is 1.26.
I know the permissions on the directory are not correct, and files have been messed around with (touched) by other users, etc.
I'm not very familiar with MediaWiki, but I understand that the database, or a dump of it, is very important.
I cannot determine which DB was used for this installation, as grep -RiI wgDBname only produces mentions of wgDBname, but not the actual database name being used.
I followed all applicable steps in https://www.mediawiki.org/wiki/Manual:Restoring_a_wiki_from_backup#External_links, however that page assumes some knowledge of the DB location (or even which DB was used in the first place).
I've issued
find . -name '*.sql' -exec ls -lh {} \; > /tmp/output and similar commands, to look for files more than just a few KB in size, hoping to locate a DB dump that way (assuming it was a MySQL DB; it may also have been a PostgreSQL installation), and so on.
Any pointers to a possible search direction would be appreciated. Thank you.
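For what it's worth, the database name, type and server for a MediaWiki install normally live in LocalSettings.php in the web root, so if that file is present a targeted grep is worth a look (the path below is an assumption):
grep -E 'wgDBname|wgDBtype|wgDBserver|wgDBuser' ./LocalSettings.php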

How to implement what is vaguely called "database versioning"?

I am writing a web application using the Yii framework and MySQL.
Now the boss wants "to store all changes in the database", in order to be able to restore older data if someone destroys some important information in the current version of the data.
"Store all changes in the database" is vague, and I am not sure what exactly we should do.
How can I fulfill this vague requirement from the boss?
Can we do it with MySQL logs? What are the pros and cons of using MySQL logs for this? Is it true that we would need a programmer (me) to restore some (possibly not all) data from MySQL logs? Can (partial) MySQL data restoration be made simple?
Or should I do the hard work of manually (not with MySQL logs) storing all old data in dedicated MySQL tables?
I guess what you are describing is an audit trail, which will be handy to go back and look at the history, but as for restoring, that will need to be manual.
Have a look at techniques for creating an audit trail.
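For illustration only, a minimal trigger-based audit trail might look like the sketch below; the users table, its columns and the credentials are made-up examples, not anything from the question:
mysql -u root -p yourdb <<'EOF'
-- history table holding the old values
CREATE TABLE users_audit (
    audit_id   INT AUTO_INCREMENT PRIMARY KEY,
    user_id    INT,
    old_email  VARCHAR(255),
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- copy the old value into the history table before every update
CREATE TRIGGER users_before_update
BEFORE UPDATE ON users
FOR EACH ROW
    INSERT INTO users_audit (user_id, old_email) VALUES (OLD.id, OLD.email);
EOF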
You might want to try searching the extensions library for something like eactsasversioned that will archive edits made to records. I'm not sure if it saves deleted records, but it seems like it's close to what you want.
If you are looking for something you can easily restore from, you probably need a backup script run on a very regular basis. I use a bash script (shown below) in cron to back up the databases I am worried about hourly. My databases are fairly small, so this only takes a few seconds and could be increased to run every 15 minutes if you are super paranoid.
#!/bin/bash
dbName1="li_appointments"
dbName2="lidb_users"
dbName3="orangehrm_li"
fileName1=$dbName1"_`date +%Y.%m.%d-%H:%M:%S`.sql"
fileName2=$dbName2"_`date +%Y.%m.%d-%H:%M:%S`.sql"
fileName3=$dbName3"_`date +%Y.%m.%d-%H:%M:%S`.sql"
backupDir="/home/backups/mysql"
mysqldump -u backup_user --password='********************************' $dbName1 > $backupDir/$fileName1
mysqldump -u backup_user --password='********************************' $dbName2 > $backupDir/$fileName2
mysqldump -u backup_user --password='********************************' $dbName3 > $backupDir/$fileName3
bzip2 $backupDir/$fileName1
bzip2 $backupDir/$fileName2
bzip2 $backupDir/$fileName3
gpg -c --passphrase '********************************' $backupDir/$fileName1".bz2"
gpg -c --passphrase '********************************' $backupDir/$fileName2".bz2"
gpg -c --passphrase '********************************' $backupDir/$fileName3".bz2"
rm $backupDir/*.bz2
echo "Backups completed on `date +%D`" >> $backupDir/backuplog.log

Import Multiple .sql dump files into mysql database from shell

I have a directory with a bunch of .sql files that are mysqldumps of each database on my server.
e.g.
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them actually.
I need to create a shell script, or probably a one-liner, that will import each database.
I'm running on a Linux Debian machine.
I'm thinking there is some way to pipe the results of an ls into some find command or something...
Any help and education is much appreciated.
EDIT
So ultimately I want to automatically import one file at a time into the database.
E.g. if I did it manually on one it would be:
mysql -u root -ppassword < database1-2011-01-15.sql
cat *.sql | mysql? Do you need them in any specific order?
If you have too many to handle this way, then try something like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file itself instead of having the SQL read from stdin.
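To make the mechanism concrete, what the awk stage feeds to mysql is just a series of client-side source commands, one per file found, for example:
source ./database1-2011-01-15.sql
source ./database2-2011-01-15.sql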
A one-liner to read in all .sql files and import them:
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql to get the database name.
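Since the asker's files carry a date suffix, a variant that strips it and creates the database if needed might look like this; the dbname-YYYY-MM-DD.sql naming pattern and the credentials are assumptions:
for SQL in *.sql; do
    DB=${SQL%-????-??-??.sql}      # strip the trailing "-2011-01-15.sql" part
    echo "importing $DB"
    mysql -u root -ppassword -e "CREATE DATABASE IF NOT EXISTS \`$DB\`"
    mysql -u root -ppassword "$DB" < "$SQL"
done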
There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:
for i in *.sql
do
echo "file=$i"
mysql -u admin_privileged_user --password=whatever your_database_here < $i
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
I don't remember the exact mysql syntax, but it will be something like this:
find . -name '*.sql' | xargs cat | mysql ...
I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
Here it is on GitHub
It's simple and straightforward; it allows you to specify mysql connection parameters, and will decompress gzipped SQL files on the fly. It assumes you have a file per database, and the base of the filename is the desired database name.
So:
myload foo.sql bar.sql.gz
This will create the databases "foo" and "bar" (if they don't already exist) and import the SQL file into each.
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).

MySQL Memory engine + init-file

I'm trying to set up a MySQL database so that the tables run on the MEMORY engine. I don't really care about losing some of the data that gets populated, but I would like to dump it daily (via mysqldump in a cron job) and point the init-file option at this dump. However, I can't seem to figure out how to make the mysqldump output compatible with how the init-file wants the SQL statements to be formatted.
Am I just missing something completely obvious trying to set up a database this way?
MySQL dumps are exactly that -- dumps of the MySQL database contents as SQL. So, there isn't any way to read this directly as a database file.
What you can do, is modify your init script for MySQL to automatically load the last dump (via the command line) every time MySQL starts.
An even better solution would be to use a ramdisk to hold the entire contents of your database in memory, and then periodically copy this to a safe location as your backup.
Although, if you want to maintain the contents of your databases at all, you're better off just using one of the disk-based storage engines (InnoDB or MyISAM), and just giving your server a lot of RAM to use as a cache.
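A rough sketch of that ramdisk idea, in case it helps; the mount point, size and backup target are assumptions, and the server would need its datadir pointed at the tmpfs mount:
# create an in-memory filesystem for the data directory
mount -t tmpfs -o size=2G tmpfs /var/lib/mysql-ram
# ...run mysqld with datadir=/var/lib/mysql-ram, then periodically copy it off
rsync -a /var/lib/mysql-ram/ /var/backups/mysql-ram-copy/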
This solution is almost great, but it causes problems when string values in the table data contain semicolons - all of them are replaced with a newline character.
Here is how I implemented this:
mysqldump --comments=false --opt dbname Table1 Table2 > /var/lib/mysql/mem_tables_init.tmp1
#Format dump file - each statement into single line; semicolons in table data are preserved
grep -v -- ^-- /var/lib/mysql/mem_tables_init.tmp1 | sed ':a;N;$!ba;s/\n/THISISUNIQUESTRING/g' | sed -e 's/;THISISUNIQUESTRING/;\n/g' | sed -e 's/THISISUNIQUESTRING//g' > /var/lib/mysql/mem_tables_init.tmp2
#Add "USE database_name" instruction
cat /var/lib/mysql/mem_tables_init.tmp2 |sed -e 's/DROP\ TABLE/USE\ `dbname`;\nDROP\ TABLE/' > /var/lib/mysql/mem_tables_init.sql
#Cleanup
rm -f /var/lib/mysql/mem_tables_init.tmp1 /var/lib/mysql/mem_tables_init.tmp2
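The generated file is then referenced from the server configuration, e.g. in my.cnf; the init_file setting below uses the path from the script above, while the rest of your [mysqld] section is assumed:
[mysqld]
init_file=/var/lib/mysql/mem_tables_init.sql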
My understanding is that the --init-file is expecting each SQL statement on a single line and that there are no comments in the file.
You should be able to clear up the comments with:
mysqldump --comments=false
As for each SQL statement on one line, I'm not familiar with a mysqldump option to do that, but what you can do is a line of Perl to remove all of the newlines:
perl -pi -w -e 's/\n//g;' theDumpFilename
I don't know if --init-file will like it or not, but it's worth a shot.
The other thing you could do is launch mysql from a script that also loads in a regular mysqldump file. Not the solution you were looking for, but it might accomplish the effect you're after.
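A bare-bones sketch of that wrapper approach, assuming a systemd-managed server; the service name, database name and dump path are made up:
#!/bin/bash
systemctl start mysql                    # start the server
until mysqladmin ping --silent; do       # wait until it accepts connections
    sleep 1
done
mysql memtables < /var/backups/memtables_latest.sql   # reload the in-memory tables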
I stumbled onto this, so I'll tell you what I do. First, I have an ip->country DB in a memory table. There is no reason to try to "save" it; it's easily and regularly dropped and recreated, but it may be unpredictable how the PHP will act when it's missing, and it's only scheduled to be updated weekly. Second, I have a bunch of other memory tables. There is no reason to save these, as they are even more volatile, with lifespans in minutes. They will be refreshed very quickly, but stale data is better than none at all. Also, if you are using any separate key caches, they may (in some cases) need to be loaded first or you will be unable to load them. And finally, be sure to put a "use" statement in there if you're not dumping complete databases, as there is no other interface (like the mysql client) to open the database at startup. So:
cat << EOF > /var/lib/mysql/initial_load.tmp
use fieldsave_db;
cache index fieldsave_db.search in search_cache;
EOF
mysqldump --comments=false -udrinkin -pbeer# fieldsave_db ip2c \
>> /var/lib/mysql/initial_load.tmp
mysqldump --comments=false -ufields -pavenue -B memtables \
>> /var/lib/mysql/initial_load.tmp
grep -v -- ^-- /var/lib/mysql/initial_load.tmp |tr -d '\012' \
|sed -e 's/;/;\n/g' > /var/lib/mysql/initial_load.sql
As always, YMMV, but it works for me.