Looking for help from the bash experts here.
What I am trying to do is probably incredibly simple, but I cannot find anything on Google that clearly explains what to do. There is a lot of other stuff in the script, but this is the block I need help with:
# import the extracted sql into mysql
for sql_file in $(find -maxdepth 1 -type f -iname "*$1*.sql" | sort); do
    echo "Importing: $sql_file"
    if mysql -u [USER] -p[PASSWORD] -h [HOST] "$db" < "$sql_file"
    then
        echo "Database $sql_file imported successfully. $db has been updated"
    else
        echo "ERROR: Database importing $sql_file into $db"
    fi
done
If you cannot tell, I am basically trying to import multiple db .sql files into a given destination in a loop. What I would like to do is capture the output of the mysql command so any MySQL errors generated at this step are displayed. This output would come directly after this line:
echo "ERROR: Database importing $sql_file into $db"
If the import fails at this step, I would like it to output something like the following, but not exit the script:
MySQL Import Failed: [Output what mysql would output to command line right here.]
This would tell the person performing the import exactly what went wrong, so they can report to the developers exactly what they encountered.
I don't know if this is too vague or not, but any help would be greatly appreciated.
How about generating individual log and error files as follows:
for sql_file in $(find -maxdepth 1 -type f -iname "*$1*.sql" | sort); do
    echo "Importing: $sql_file"
    log="$sql_file.log"
    err="$sql_file.err"
    if mysql -u [USER] -p[PASSWORD] -h [HOST] "$db" < "$sql_file" > "$log" 2> "$err"
    then
        echo "Database $sql_file imported successfully. $db has been updated"
    else
        echo "ERROR: Database importing $sql_file into $db"
        echo "<<<<<"
        cat "$err"
        echo ">>>>>"
    fi
done
This way you keep track of the processing of each script. For your original question it would suffice to generate just the error file; you can even delete it after the cat if you want.
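If you'd rather not create files at all, the stderr can also be captured in a shell variable; a minimal sketch, keeping the question's [USER]/[PASSWORD]/[HOST] placeholders:

```shell
# Sketch: capture mysql's stderr in a variable instead of an .err file.
# [USER], [PASSWORD] and [HOST] are the placeholders from the question.
# "2>&1 >/dev/null" duplicates stderr onto the captured stream, then sends
# normal output to /dev/null, so $err_output holds only the error text.
err_output=$(mysql -u [USER] -p[PASSWORD] -h [HOST] "$db" < "$sql_file" 2>&1 >/dev/null)
if [ $? -eq 0 ]; then
    echo "Database $sql_file imported successfully. $db has been updated"
else
    echo "ERROR: Database importing $sql_file into $db"
    echo "MySQL Import Failed: $err_output"
fi
```

The exit status of the assignment is the exit status of mysql itself, so the if test still works as before.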
I'm trying to import gzipped MySQL databases listed in a folder.
The gzipped files are located at .mysqldumps/.
$NAME tries to extract the database name (the files are always named database_name.sql.gz) and pass it to the mysql command line.
Also, as the username and database name are the same, the same argument ($NAME) is passed for both.
As the files are gzipped, we zcat them (i.e. gunzip -c) before piping them to mysql.
The full script is:
#!/bin/bash

FILES='.mysqldumps/*'
PASSWORD='MyPassword'

for f in $FILES
do
    NAME=dbprefix_$(basename "$f" .sql.gz)
    echo "Processing $f"
    set -x
    zcat "$f" | mysql -u "$NAME" -p"$PASSWORD" "$NAME"
done
But when I run the script it outputs:
./.mysqlimport
Processing .mysqldumps/first_database.sql.gz
+ mysql -u dbprefix_first_database -pMyPassword dbprefix_first_database
+ zcat .mysqldumps/first_database.sql.gz
ERROR 1044 (42000) at line 22: Access denied for user 'dbprefix_first_database'@'localhost' to database 'first_database'
As you can see, the selected database is 'first_database' instead of 'dbprefix_first_database', which of course throws an error, and I just can't understand why $NAME is not being used as the database name.
What am I doing wrong?
After some investigation, the problem turned out to come from the dump, not from the script.
The dump was created with mysqldump's --databases option, which embeds a USE 'dbname'; statement, and on import that name was used instead of $NAME.
Problem solved!
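For reference, re-creating the dumps without --databases avoids embedding the USE statement in the first place; a sketch using the names from the question:

```shell
# Dump a single schema WITHOUT --databases, so no "USE dbname;" line
# ends up inside the dump (file and database names are from the question):
mysqldump -u root -p"$PASSWORD" first_database | gzip > .mysqldumps/first_database.sql.gz

# On import, mysql now honours the database given on the command line:
zcat .mysqldumps/first_database.sql.gz | mysql -u "$NAME" -p"$PASSWORD" "$NAME"
```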
I'm writing a script that I plan to schedule via cron for 1 AM each morning to back up a MySQL DB.
Normally I use this to dump the database:
mysqldump --no-create-db --single-transaction myDB | gzip > ~/my_backup.sql.gz
In my head, what I have written should:
Dump the DB, writing any errors to database.err
Pipe the output to gzip, which then zips it up and writes it to disk
Read the return code and, assuming success, write the file to an S3 bucket
For the purposes of testing, write the current state to the shell
#!/bin/bash
# This script will run each night to backup
# the MySQL DB, then will upload to Amazon S3
DB_NAME="myDB"
S3_BUCKET="my.s3.bucket"
BACKUP_PATH="~/backups/${DB_NAME}.sql.gz"
mysqldump --no-create-db --single-transaction ${DB_NAME} 2> database.err | gzip > ${BACKUP_PATH}
if [ "$?" -eq 0 ]
then
    echo "Database dump completed successfully... writing to S3"
    aws s3 cp ${BACKUP_PATH} s3://${S3_BUCKET}/
    if [ "$?" -eq 0 ]
    then
        echo "Backup successfully written to S3"
    else
        echo "Error writing to S3"
    fi
else
    echo "mysqldump encountered a problem; look in database.err for information"
fi
What it looks like the script is doing is getting to the mysqldump line, but it is unable to differentiate between the parameter where I specify the DB and the 2> redirection (file descriptor, I think, is the term?). This is the error:
./backup-script: line 12: ~/backups/myDB.sql.gz: No such file or directory
mysqldump: Got error: 1049: Unknown database 'myDB 2' when selecting the database
Mysqldump encountered a problem look in database.err for information
Can anyone suggest what is happening here / what I'm doing wrong?
Try putting the database name first:
mysqldump "${DB_NAME}" --no-create-db --single-transaction
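A second thing worth checking, judging by the "No such file or directory" line: a tilde inside double quotes is not expanded by the shell. A sketch combining both fixes (untested against your setup):

```shell
DB_NAME="myDB"
# "~" is NOT expanded inside double quotes, so spell out $HOME instead:
BACKUP_PATH="$HOME/backups/${DB_NAME}.sql.gz"

# Database name first, as suggested above:
mysqldump "${DB_NAME}" --no-create-db --single-transaction 2> database.err | gzip > "${BACKUP_PATH}"
```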
I have a bunch of JPEG images stored as blobs in a MySQL database which I need to download to my local machine. The following does not work; can someone please advise?
Note: I know the code below just overwrites the same file, but for the purpose of this exercise that does not matter.
IFS=$'\n'
for i in `mysql -sN -u******* -p******** -h****** -e "select my_images from mutable"`; do
    echo $i > myimage.jpg
done
I'm not really sure what is not working in your code, but you should be able to fetch all the data and save each image like this:
#!/bin/bash
counter=0
for i in `mysql -s -N -u******* -p******** -h****** -e "select my_images from mutable"`; do
    echo $i > "image${counter}.jpg"
    counter=$((counter+1))
done
#!/bin/bash
counter=0
for i in `mysql -N -u * -p* -e "SELECT id FROM multable"`; do
    mysql -N -u * -p* -e "SELECT my_images FROM multable WHERE id=$i INTO DUMPFILE '/var/lib/mysql-files/tmp.jpg';"
    mv /var/lib/mysql-files/tmp.jpg "image${counter}.jpg"
    counter=$((counter+1))
done
This code extracts each image from its blob, and the resulting files will not be corrupt.
The key point is to use INTO DUMPFILE; the file can then be saved to the folder indicated by @@secure_file_priv.
You can see @@secure_file_priv with the following command:
$ echo "SELECT @@secure_file_priv" | mysql -u * -p
And you can change the @@secure_file_priv value by setting it in my.cnf.
There are several my.cnf files in MySQL; you can check the loading sequence with the following command:
$ mysqld --help --verbose | grep cnf -A2 -B2
But I still suggest loading the image files from the database into /var/lib/mysql-files/ first and then copying them, because some versions of MySQL can only use the /var/lib/mysql-files folder even if you have changed @@secure_file_priv to point to another folder.
I'm trying to set a variable inside a loop in Ubuntu bash while reading a recordset from the database, but the variable keeps its previous value.
Here is a code:
#!/bin/bash
PREV_FILE_PATH="127"
while true
do
    echo "$PREV_FILE_PATH"
    mysql -h$DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME --skip-column-names --default-character-set=UTF8 -e "here is a query" | while read "here is getting variables from recordset";
    do
        PREV_FILE_PATH="777"
    done
done
And this code prints every time:
127
127
127
But when I replaced this block:
mysql -h$DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME --skip-column-names --default-character-set=UTF8 -e "here is a query" | while read "here is getting variables from recordset";
with just while true and a break at the end of the loop, it works fine and prints:
127
777
777
777
The script creates a subshell and runs that MySQL query in the subshell. So what should I do to make the script change that variable?
As you noted, the issue is due to the creation of a subshell, which is caused by piping the output of the mysql command to the while loop. A simple example:
PREV_FILE_PATH=127
echo test | while read _; do
    PREV_FILE_PATH=777
done
echo $PREV_FILE_PATH
# output: 127
Since you're using bash, you can replace the pipe with a substituted process fed to the while loop via STDIN redirection. Using the previous simple example:
PREV_FILE_PATH=127
while read _; do
    PREV_FILE_PATH=777
done < <(echo test)
echo $PREV_FILE_PATH
# output: 777
So to fix your code, you will want to move your mysql command in the same fashion that I moved the echo command above:
while read "here is getting variables from recordset"
do
    PREV_FILE_PATH="777"
done < <(mysql -h$DB_HOST -u $DB_USER [..remaining options..])
Note that process substitution via <() is a BASH-ism and isn't POSIX compliant.
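Another bash-only option, if you'd rather keep the pipe as-is: the lastpipe shell option runs the last element of a pipeline in the current shell, so the variable assignment survives. It takes effect in non-interactive scripts (where job control is off) and needs bash 4.2 or later:

```shell
#!/bin/bash
shopt -s lastpipe   # run the last command of a pipeline in the current shell (bash >= 4.2)

PREV_FILE_PATH=127
echo test | while read -r _; do
    PREV_FILE_PATH=777
done
echo "$PREV_FILE_PATH"   # now prints 777
```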
I want a shell script (Ubuntu) which fetches data from an SQL table row by row!
And I want to have access to EACH row, e.g. to check it with a simple if statement, you know?
#!/bin/bash
TIME=`date +"%T"`
echo true > log/ausgefuehrt-$TIME.log
/opt/lampp/lampp start
echo $TIME

# ASSIGNING MYSQL
MYSQL=/opt/lampp/bin/mysql
#$MYSQL -e"select * from ftp.ftp" -u root
$MYSQL -e"select * from ftp.ftp"

# STARTING TO FETCH EACH ROW (trying)
WHILE read ROW ;
do echo $ROW # HERE <<< is the problem.. I don't know how to access each row :/
done
exit
Pipe the output of mysql through a while loop as shown below:
$MYSQL -e"select * from ftp.ftp" | while IFS= read -r ROW
do
    echo "$ROW"
done