I want to monitor progress while importing multiple MySQL dump files. I am using the script below:
ls -1 *.sql | awk '{ print "source",$0 }' | mysql --batch -u {username} -p {dbname}
After a little research I found the pv command, but I don't know how to use it with this script.
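For reference, pv is a standalone pipe-viewer utility rather than a MySQL command; a minimal sketch of combining it with the import above, assuming the dumps can simply be concatenated on standard input:
# pv reads the files in sequence, shows a progress bar based on their total
# size, and passes the combined stream on to the mysql client
pv *.sql | mysql -u {username} -p {dbname}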
I want to connect to a MySQL database, execute some queries, and export the result to a variable, and all of this needs to be done entirely in a bash script.
I have a code snippet, but it does not work.
#!/bin/bash
BASEDIR=$(dirname $0)
cd $BASEDIR
mysqlUser=n_userdb
mysqlPass=d2FVR0NA3
mysqlDb=n_datadb
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1")
echo "${result}" >> a.txt
What's the problem?
The issue was resolved in the chat by using the correct password.
If you further want to get only the data, use mysql with -NB (or --skip-column-names and --batch).
Also, the script needs to quote the variable expansions, or there will be issues with usernames/passwords containing characters that are special to the shell. Additionally, uppercase variable names are usually reserved for system variables.
#!/bin/sh
basedir=$(dirname "$0")
mysqlUser='n_userdb'
mysqlPass='d2FVR0NA3'
mysqlDb='n_datadb'
cd "$basedir" &&
mysql -NB -u "$mysqlUser" -p"$mysqlPass" -D "$mysqlDb" \
-e 'select * from confs limit 1' >a.txt 2>a-err.txt
Ideally though, you'd use a my.cnf file to configure the username and password.
See e.g.
MySQL Utilities - ~/.my.cnf option file
mysql .my.cnf not reading credentials properly?
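For illustration, a minimal ~/.my.cnf sketch along those lines (the [client] section and option names are standard; the values are simply the ones from the script above):
[client]
user=n_userdb
password=d2FVR0NA3
With that file in place (and restricted with chmod 600 ~/.my.cnf), the script can drop the -u and -p options entirely.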
Do this:
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1" | grep '^\|' | tail -1)
The output of the command substitution contains both the header row and the data row, which is awkward to handle in Bash, so the hack above keeps only the interesting part: the data.
Is it possible to unzip a gzipped file while it's downloading and feed it to mysql all in one go without having to create a physical file?
So far I've been able to unzip and feed it to mysql using the following:
gunzip < somefile.sql.gz | pv | mysql -u myself -p somedb # pv for viewing progress
The above expects the gzipped file to already be fully downloaded before it is unzipped and fed to mysql.
But I haven't been able to feed mysql while the file is still downloading and being unzipped at the same time.
And if this is not possible, I'd like to know why, just for peace of mind.
wget -qO- URL | gunzip | pv | mysql -u myself -p somedb
Okay, I've figured it out. Thanks to @TGray for pointing me in the right direction.
The key was named pipes. Basically, you create a named pipe, start the download into it in the background, and pipe the named pipe into gunzip as shown above in my post.
mkfifo mypipe
ssh username@server.com cat /source/file.gz > mypipe &
gunzip < mypipe | pv | mysql -uusername -ppassword dbname
I have a bunch of JPEG images stored as blobs in a MySQL database which I need to download to my local machine. The following does not work; can someone please advise?
Note: I know the code below just overwrites the same file, but for the purposes of this exercise it does not matter.
IFS=$'\n'
for i in `mysql -sN -u******* -p******** -h****** -e "select my_images from mutable"`; do
echo $i > myimage.jpg
done
I'm not really sure what is not working with your code, but you should be able to fetch all data and save each image like this:
#!/bin/bash
counter=0;
for i in `mysql -s -N -u******* -p******** -h****** -e"select my_images from mutable"`; do
echo $i > "image${counter}.jpg";
counter=$((counter+1));
done
#!/bin/bash
counter=0;
for i in `mysql -N -u * -p* -e "SELECT id from multable"`; do
mysql -N -u * -p* -e "SELECT my_images FROM multable WHERE id=$i INTO DUMPFILE '/var/lib/mysql-files/tmp.jpg';"
mv /var/lib/mysql-files/tmp.jpg "image${counter}.jpg"
counter=$((counter+1));
done
This code extracts each image from the blob column without corrupting it.
The key point is to use INTO DUMPFILE, which writes the file into the directory indicated by @@secure_file_priv.
You can see the current @@secure_file_priv value with the following command
$ echo "SELECT @@secure_file_priv" | mysql -u * -p
And you can change the @@secure_file_priv value by setting it in my.cnf.
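For example, a sketch of what that setting might look like in the server section of my.cnf (the path here is only an illustration):
[mysqld]
# directory the server is allowed to write SELECT ... INTO DUMPFILE output to
secure_file_priv=/var/lib/mysql-files
The variable is read only at server startup, so MySQL has to be restarted for a change to take effect.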
There are several my.cnf files MySQL may read, and you can check the loading order with the following command
$ mysqld --help --verbose | grep cnf -A2 -B2
But I still suggest dumping the image into /var/lib/mysql-files/ first and then copying it, because some versions of MySQL will only use the /var/lib/mysql-files folder even if you have changed @@secure_file_priv to point to another folder.
I have a Linux system with a MySQL server containing more than 400 databases. I need to export each database as a separate *.sql file. Is it possible to do this with mysqldump or MySQL Workbench?
I have tried mysqldump with the --all-databases option, but that produces a single file containing every database, and it is very large.
One way to achieve this is to write a bash script: (Source)
#! /bin/bash
TIMESTAMP=$(date +"%F")
BACKUP_DIR="/backup/$TIMESTAMP"
MYSQL_USER="backup"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="password"
MYSQLDUMP=/usr/bin/mysqldump
mkdir -p "$BACKUP_DIR/mysql"
databases=`$MYSQL --user=$MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema)"`
for db in $databases; do
$MYSQLDUMP --force --opt --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db | gzip > "$BACKUP_DIR/mysql/$db.gz"
done
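Since the question asks for plain *.sql files, a minimal variation of the loop that skips the gzip step (same assumed credentials and backup directory) would be:
for db in $databases; do
  $MYSQLDUMP --force --opt --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db > "$BACKUP_DIR/mysql/$db.sql"
done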
For more information, have a look at this similar question.
The standard mysqldump command that I use is
mysqldump --opt --databases $dbname --host=$dbhost --user=$dbuser --password=$dbpass | gzip > $filename
To dump multiple databases
mysqldump --opt --databases $dbname1 $dbname2 $dbname3 $dbname_etc --host=$dbhost --user=$dbuser --password=$dbpass | gzip > $filename
My question is how do you dump multiple databases from different MySQL accounts into just one file?
UPDATE: By 1 file I mean 1 gzipped file with the different SQL dumps for the different sites inside it.
Nobody seems to have clarified this, so I'm going to give my 2 cents.
Note that my experience is with Bash, and may be exclusive to it, so variables and looping might work differently in your environment.
The best way to get an archive with separate files inside it is to use either zip or tar; I prefer tar due to its simplicity and availability.
Tar itself doesn't do compression, but bundled with bzip2 or gzip it can provide excellent results. Since your example uses gzip I'll use that in my demonstration.
First, let's attack the problem of the MySQL dumps: the mysqldump command does not split its output into separate files (to my knowledge anyway), so let's make a small workaround to create 1 file per database.
mysql -s -r -p$dbpass --user=$dbuser -e 'show databases' | while read db; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; done
So now we have a one-liner that dumps each database to its own file; to export those files wherever you need them, simply edit the part after the > symbol.
Next, let's look at the syntax for tar:
tar -czf <output-file> <input-file-1> <input-file-2>
This form allows us to specify any number of files to archive.
The options break down as follows:
c - create an archive
z - gzip compression
f - write the archive to the given file
j - bzip2 compression (use instead of z)
Our next problem is keeping a list of all the newly created files. We'll expand our while loop to append each file name to a variable as it runs through the databases MySQL reports; feeding the loop from process substitution instead of a pipe keeps the variable visible after the loop ends.
DBLIST=""; while read db; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST ${db}.sql"; done < <(mysql -s -r -p$dbpass --user=$dbuser -e 'show databases')
Now we have a DBLIST variable holding the names of all the files that were created, and we can modify the one-liner to run the tar command once everything has been dumped.
DBLIST=""; while read db; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST ${db}.sql"; done < <(mysql -s -r -p$dbpass --user=$dbuser -e 'show databases') && tar -czf $filename $DBLIST
This is a very rough approach and doesn't let you pick the databases by hand, so to achieve that, the following command will create a tar file containing only the databases you specify.
DBLIST=""; for db in <database1-name> <database2-name>; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST ${db}.sql"; done && tar -czf $filename $DBLIST
The looping over the databases that MySQL reports comes from the stackoverflow.com question "mysqldump with db in a separate file", modified slightly to fit your needs.
And to have the one-liner clean up after itself, simply add the following at the end of the command
&& rm $DBLIST
making the command look like this
DBLIST=""; for db in "<database1-name> <database2-name>"; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB.sql"; done && tar -czf $filename "$DBLIST" && rm "$DBLIST"
For every MySQL server account, dump the databases into separate files
Then concatenate the dump files and gzip the result:
cat dump_user1.sql dump_user2.sql | gzip > super_dump.gz
There is a similar post on Superuser.com website: https://superuser.com/questions/228878/how-can-i-concatenate-two-files-in-unix
Just in case "multiple db" literally means "all db" for you:
mysqldump -u root -p --all-databases > all.sql