Import Multiple .sql dump files into mysql database from shell - mysql

I have a directory with a bunch of .sql files that are mysqldumps of each database on my server.
e.g.
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them actually.
I need to create a shell script (or probably a one-liner) that will import each database.
I'm running on a Linux Debian machine.
I'm thinking there is some way to pipe the results of ls into a find command or something like that.
Any help and education is much appreciated.
EDIT
So ultimately I want to automatically import one file at a time into the database.
E.g. if I did it manually on one it would be:
mysql -u root -ppassword < database1-2011-01-15.sql

cat *.sql | mysql? Do you need them in any specific order?
If you have too many to handle this way, then try something like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads each file itself instead of having the contents piped in on stdin.
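For example, with explicit connection options it might look like this (a sketch; the root user and the targetdb default database are placeholders you would adjust):
# One "source <file>" command per dump, all fed to a single mysql session.
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch -u root -p targetdb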

One-liner that reads in all the .sql files and imports them:
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql to get the database name.
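For example, you can check what the expansion produces before running anything (${SQL%.sql}, which strips only a trailing .sql suffix, is an equally valid alternative):
SQL=database1-2011-01-15.sql
echo "${SQL/.sql/}"   # prints database1-2011-01-15 (replaces the first ".sql")
echo "${SQL%.sql}"    # prints database1-2011-01-15 (strips only the trailing suffix)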

There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a separate file for each table. Then you can run this very simple script to load the database from those files:
for i in *.sql
do
echo "file=$i"
mysql -u admin_privileged_user --password=whatever your_database_here < "$i"
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.

I don't remember the exact mysql syntax, but it will be something like this:
find . -name '*.sql' | xargs mysql ...

I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
Here it is on GitHub
It's simple and straightforward: it allows you to specify mysql connection parameters and will decompress gzipped SQL files on the fly. It assumes you have one file per database, and that the base of the filename is the desired database name.
So:
myload foo.sql bar.sql.gz
This will create databases called "foo" and "bar" (if they don't already exist) and import the corresponding SQL file into each.
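If you don't want to pull in the script, a rough sketch of the same behavior in plain bash might look like this (not the actual myload code; the root user and the PASSWORD variable are placeholders, and the .gz handling is simplified):
for f in *.sql *.sql.gz; do
  [ -e "$f" ] || continue                 # skip unmatched glob patterns
  db=${f%.sql.gz}; db=${db%.sql}          # filename base becomes the database name
  mysql -u root -p"$PASSWORD" -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  case $f in
    *.gz) gunzip -c "$f" | mysql -u root -p"$PASSWORD" "$db" ;;
    *)    mysql -u root -p"$PASSWORD" "$db" < "$f" ;;
  esac
done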
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).

Related

Call external program from mysql

How can I call an external program from mysql?
I am a complete beginner at this. On Linux Mint 20, I created a database of all my video files; the paths of the videos are all listed in a table.
I can access the DB using Bash with:
mysql -u root -proot -e "use collection; select path from videos where path Like '%foo%' or path Like '%bar%'"
That lets me search for what I want, but now I want to pipe the chosen vid(s) to MPV/VLC, whatever.
Apart from the fact I am doing it as root, am I going about this the wrong way?
I just want to perform quick searches in a terminal, then fire up the vid(s).
Thanks a lot, folks.
If I'm understanding correctly, you want to query your DB for a specific type of file or path, and then use the result of your query to open up the files?
You don't open the program from MySQL, but you could open it from bash.
Figure out what the bash command is to open that program, then loop over the output of your query in bash and open the results one by one.
Alternatively, you can output the results to a temporary file and read from it with bash:
mysql -u user -ppass -e "YOUR QUERY" > /tmp/output.txt
If you can get the right output in your output.txt file, I would look into reading from that file in bash with a loop. Something like:
while IFS= read -r line
do
mpv "$line"
done < /tmp/output.txt
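You can also skip the temporary file and pipe the query output straight into the loop. Here is a sketch reusing the collection database and videos table from the question; -N suppresses the column header and -B gives plain tab-separated batch output:
mysql -u root -proot -N -B -e "SELECT path FROM videos WHERE path LIKE '%foo%'" collection |
while IFS= read -r line
do
  mpv "$line"
done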

MySQL dumping and gzipping in a Cron job

What I want is to simply dump my MySQL database at certain intervals and then sync that folder to my S3 bucket when it's done. I am running into the same problem with every incarnation of my scripts.
If I run the script directly, it works fine. The DB dumps and is gzipped with no issues. If I run the script from a cron job, the script seems to execute in chunks. It dumps and zips 75-100 MB at a time, overwriting the piece that just finished. So instead of ending up with one complete 541 MB gzip file, I end up with one that is only 75 MB and not the complete file.
It's almost like it starts to zip before the dump is complete.
Here is the current script I am using. There isn't much to it.
#!/bin/bash
NOW=$(date +%Y-%m-%d--%H)
echo Dumping database at production-$NOW.sql
mysqldump --user=USERNAME --password=PASSWORD --routines DBNAME > /var/mysqlBackups/production-$NOW.sql
echo Zipping production-$NOW.sql
gzip /var/mysqlBackups/production-$NOW.sql
printf "Completed backup and gzip\n\n"
I have tried adding my env variables to the script, but that does nothing, and this is not a script that can be run manually.
Any ideas, or places to start? I am going crazy.
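For what it's worth, piping mysqldump straight into gzip removes the separate compression step entirely, since the compressed file is then written in a single pass; a sketch using the same placeholder credentials as the script above:
#!/bin/bash
NOW=$(date +%Y-%m-%d--%H)
mysqldump --user=USERNAME --password=PASSWORD --routines DBNAME \
  | gzip > /var/mysqlBackups/production-$NOW.sql.gz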

Importing zipped files in Mysql using CMD

I am trying to import zipped database files into MySQL at the command prompt using the following command:
7z < backup.sql.7z | mysql -u root test
The root user doesn't have any password associated with it.
test is my target blank database.
I use 7-Zip for unzipping.
The zipped database, i.e. backup.sql.7z, is located on the D drive.
But it's giving the following error
So, instead I used the following command
7z < backup.7z | mysql -u root test
Note: This time I am using backup.7z instead of backup.sql.7z
But then I get the following error
Clearly there's something wrong with my SQL syntax.
What will be the correct syntax to use then ?
I needed to import from a compressed file as well, and stumbled upon your question.
After a bit of messing around, I found that this worked for me:
7z x -so backup.7z | mysql -u root test
x is the extraction command
-so makes 7-zip write to stdout
There's nothing wrong with your syntax; it's just a limitation of 7-Zip. It's better to use xz in this case, which doesn't put extraneous junk on stdout, or to call 7z.dll directly from your favorite programming language. 7z.exe is really meant for archive management rather than unix-style piping, and Igor is very reluctant to change that.
If you try a plain 7z < somefile.7z you'll immediately see that all you get back is a usage list.
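For comparison, the xz route mentioned above looks like this (a sketch; it assumes the dump was compressed as backup.sql.xz, and xz -dc decompresses to stdout):
xz -dc backup.sql.xz | mysql -u root test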

Batch files and MySQL: pass a block of commands instead of another .bat file

I see that I can pass a batch file to mysql in order to run a sequence of commands. But can I put those commands in the same batch file as the one that initiates the mysql app?
I.e. can I pass a block of batch commands to mysql instead of passing a batch file, so that it might look something like this:
mysql < [list of commands, not a .bat filename]
You can also pipe commands into MySQL if you don't want/have them in a file:
echo " ...some SQL... " | mysql
The term "Batch file" in the mySQL manual doesn't refer to DOS .BAT files, but to a file with many mySQL commands.
mysql < list.sql
will do exactly what you need.

MySQL Memory engine + init-file

I'm trying to set up a MySQL database so that the tables are run by the MEMORY engine. I don't really care about losing some of the data that gets populated, but I would like to dump it daily (via mysqldump in a cron job) and have the init-file point at this dump. However, I can't seem to figure out how to make the mysqldump output compatible with how the init-file wants the SQL statements to be formatted.
Am I just missing something completely obvious trying to set up a database this way?
MySQL dumps are exactly that -- dumps of the MySQL database contents as SQL. So, there isn't any way to read this directly as a database file.
What you can do, is modify your init script for MySQL to automatically load the last dump (via the command line) every time MySQL starts.
An even better solution would be to use a ramdisk to hold the entire contents of your database in memory, and then periodically copy this to a safe location as your backup.
Although, if you want to maintain the contents of your databases at all, you're better off just using one of the disk-based storage engines (InnoDB or MyISAM), and just giving your server a lot of RAM to use as a cache.
This solution is almost great, but it causes problems when string values in the table data contain semicolons: they all end up replaced with a newline character.
Here is how I implemented this:
mysqldump --comments=false --opt dbname Table1 Table2 > /var/lib/mysql/mem_tables_init.tmp1
#Format dump file - each statement into single line; semicolons in table data are preserved
grep -v -- ^-- /var/lib/mysql/mem_tables_init.tmp1 | sed ':a;N;$!ba;s/\n/THISISUNIQUESTRING/g' | sed -e 's/;THISISUNIQUESTRING/;\n/g' | sed -e 's/THISISUNIQUESTRING//g' > /var/lib/mysql/mem_tables_init.tmp2
#Add "USE database_name" instruction
cat /var/lib/mysql/mem_tables_init.tmp2 |sed -e 's/DROP\ TABLE/USE\ `dbname`;\nDROP\ TABLE/' > /var/lib/mysql/mem_tables_init.sql
#Cleanup
rm -f /var/lib/mysql/mem_tables_init.tmp1 /var/lib/mysql/mem_tables_init.tmp2
My understanding is that --init-file expects each SQL statement on a single line, with no comments in the file.
You should be able to clear up the comments with:
mysqldump --comments=false
As for getting each SQL statement on one line, I'm not familiar with a mysqldump option that does that, but you can use a line of Perl to remove all of the newlines:
perl -pi -w -e 's/\n//g;' theDumpFilename
I don't know if --init-file will like it or not, but it's worth a shot.
The other thing you could do is launch mysql from a script that also loads in a regular mysqldump file. Not the solution you were looking for, but it might accomplish the effect you're after.
I stumbled onto this, so I'll tell you what I do.
First, I have an IP-to-country DB in a MEMORY table. There is no reason to try to "save" it; it's easily and regularly dropped and recreated, but it may be unpredictable how the PHP will act when it's missing, and it's only scheduled to be updated weekly.
Second, I have a bunch of other MEMORY tables. There is no reason to save these either, as they are even more volatile, with lifespans in minutes. They will be refreshed very quickly, but stale data is better than none at all.
Also, if you are using any separate key caches, they may (in some cases) need to be loaded first or you will be unable to load them. And finally, be sure to put a "use" statement in there if you're not dumping complete databases, as there is no other interface (like the mysql client) to select the database at startup. So:
cat << EOF > /var/lib/mysql/initial_load.tmp
use fieldsave_db;
cache index fieldsave_db.search in search_cache;
EOF
mysqldump --comments=false -udrinkin -pbeer# fieldsave_db ip2c \
>> /var/lib/mysql/initial_load.tmp
mysqldump --comments=false -ufields -pavenue -B memtables \
>> /var/lib/mysql/initial_load.tmp
grep -v -- ^-- /var/lib/mysql/initial_load.tmp |tr -d '\012' \
|sed -e 's/;/;\n/g' > /var/lib/mysql/initial_load.sql
As always, YMMV, but it works for me.