I have a shell script that is called very frequently with a few parameters. It is supposed to build a query and execute the statement. In case an error occurs, it should write the arguments, separated, into a file so that error handling can take place by calling that script again.
Everything works, but here is the problem:
I catch "connection refused" errors etc., but if the statement cannot be executed because the table is locked, I do not want to wait for the timeout.
my code:
...
mysql -u ${username} -p${password} -h ${database} -P ${port} --connect-timeout=1 --skip-reconnect -e "$NQUERY"
mysqlstatus=$?
if [ $mysqlstatus -ne 0 ]; then
echo "[ERROR:QUERY COULD NOT BE EXECUTED:$mysqlstatus: QUERY WRITTEN TO LOG]" >> ${GENLOG}
#echo ${NQUERY} >> ${FQUER}
for i in "$@"; do
    ARGS="$ARGS $i|"
done
echo "${ARGS}" >> ${ARGLOG}
else
echo "[OK] $NQUERY" >> ${GENLOG}
fi
...
But when a table is locked, the execution is not canceled and it runs seemingly forever.
It's not a solution for me to set max_statement_time or anything similar on the MySQL server, since I'm not the only one using the DB.
You can use the timeout command along with mysql
timeout 3 mysql -u ...
This will wait 3 seconds for the mysql command to return; if the command runs longer than 3 seconds, timeout will return exit status 124 to the shell. If you don't have timeout, you can use job control with something like this:
# background the process
mysql -u ... &
# get the PID of the background process
bg_pid=$!
sleep 3
# check whether that PID is still running;
# match on the command string in case the PID was reassigned
if [[ $(ps -p $bg_pid -o args --no-headers) =~ "mysql" ]]
then
    echo "running too long"
else
    echo "OK"
fi
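Putting this together with your existing exit-status check, a minimal sketch might look like the following. Note that `sleep 5` here is only a stand-in for a mysql invocation that hangs on a locked table; the variable names are illustrative.

```shell
# Sketch: distinguish a timeout from other failures.
# 'sleep 5' stands in for a mysql call that hangs on a locked table.
timeout 1 sleep 5
status=$?
if [ "$status" -eq 124 ]; then
    echo "query timed out"
elif [ "$status" -ne 0 ]; then
    echo "query failed with status $status"
else
    echo "query succeeded"
fi
```

In the timed-out branch you would append the arguments to your ARGLOG file exactly as in the existing error branch.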
Related
I have a small down-and-dirty script that nightly dumps one table from each of a client's databases:
#!/bin/bash
DB_BACKUP="/backups/mysql_backup/`date +%Y-%m-%d`"
DB_USER="dbuser"
DB_PASSWD="dbpass"
# Create the backup directory
mkdir -p $DB_BACKUP
# Remove backups older than 10 days
find /backups/mysql_backup/ -maxdepth 1 -type d -mtime +10 -exec rm -rf {} \;
# Backup each database on the system
for db in $(mysql --user=$DB_USER --password=$DB_PASSWD -e 'show databases' -s --skip-column-names|grep -viE '(staging|performance_schema|information_schema)');
do echo "dumping $db-uploads"; mysqldump --user=$DB_USER --password=$DB_PASSWD --events --opt --single-transaction $db uploads > "$DB_BACKUP/mysqldump-$db-uploads-$(date +%Y-%m-%d).sql";
done
Recently we've had some issues where some of the tables get corrupted, and mysqldump fails with the following message:
mysqldump: Got error: 145: Table './myDBname/myTable1' is marked as crashed and should be repaired when using LOCK TABLES
Is there a way for me to check if this happens in the bash script, and log the errors if so?
Also, as written, would such an error halt the script, or would it continue to back up the rest of the databases normally? If it would halt execution, is there a way around that?
Every program has an exit status. The exit status of each program is assigned to the $? builtin bash variable. By convention, this is 0 if the command was successful, or some other value 1-255 if the command was not successful. The exact value depends on the code in that program.
You can see the exit codes that mysqldump might issue here: https://github.com/mysql/mysql-server/blob/8.0/client/mysqldump.cc#L65-L72
You can check for this, log it, output an error message of your choosing, exit the bash script, whatever you want.
mysqldump ...
if [[ $? != 0 ]] ; then
...do something...
fi
You can alternatively write this which does the same thing:
mysqldump ... || {
...do something...
}
The || means to execute the following statement or code block if the exit status of the preceding command is nonzero.
By default, commands that return errors do not cause the bash script to exit. You can optionally make that the behavior of the script by using this statement, and all following commands will cause the script to exit if they fail:
set -e
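Applied to the backup loop above, a hedged sketch could log each failure and continue with the remaining databases. Here `false`/`true` stand in for the mysqldump call, and the database names and log path are purely illustrative:

```shell
#!/bin/bash
# Sketch: log each failed dump but continue with the remaining databases.
# 'false'/'true' stand in for mysqldump; names and paths are illustrative.
ERRLOG="/tmp/backup-errors.log"
: > "$ERRLOG"
for db in alpha broken beta; do
    # stand-in for: mysqldump ... $db uploads > "$DB_BACKUP/..."
    if [ "$db" = "broken" ]; then false; else true; fi
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "mysqldump of $db failed with status $status" >> "$ERRLOG"
    fi
done
```

Because the status check is inside the loop body, one crashed table does not stop the other databases from being dumped, and the log tells you afterwards which dumps need attention.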
I need to suppress all error messages that the mysql command prints to stdout. I saw many other similar questions, but all the answers suggest workarounds to avoid these messages (check that the database/table/column exists before executing the query). But I need the mysql command to return a failure exit code on error and print nothing except the data explicitly requested by the query in a successful run. The -s option doesn't help in hiding error messages.
My task is to execute a MySQL query in a script and get either the requested data (printed with the -s option) or a non-zero exit code. I don't want to check each and every table/column/etc. for existence before executing the target query. How can I achieve this?
UPD: I tried this but it didn't help:
mysql ... db -s -N -e "SELECT config_id FROM core_config_data LIMIT 1;" 2> /dev/null
To sum it up:
You want the mysql command to:
not print any error
exit with a non-success code when a query fails
Then I've got good news for you! The output of errors will always be on stderr, therefore you can just redirect that output to /dev/null or wherever you like.
root@icarus ~/so # mariadb -Dmaio290sql1 -e 'SELECT * FROM wp_users' -s 2> bla.txt
[actual content]
root@icarus ~/so # echo $?
0
root@icarus ~/so # mariadb -Dmaio290sql1 -e 'SELECT * FROM nope' -s 2> bla.txt
root@icarus ~/so # echo $?
1
The last query is throwing an error and therefore the exit code is not 0.
This was tested on MariaDB though: 10.3.27-MariaDB-0+deb10u1 Debian 10.
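If you still want to keep the error text for a log rather than throwing it away, one sketch captures stderr into a variable while preserving the exit status. Here `ls` on a nonexistent path stands in for a failing mysql query; the path and variable names are made up:

```shell
# Sketch: capture stderr for logging while still checking the exit code.
# 'ls /nonexistent-path-for-demo' stands in for a failing mysql query;
# 2>&1 >/dev/null routes stderr into the substitution, stdout to /dev/null.
if err=$(ls /nonexistent-path-for-demo 2>&1 >/dev/null); then
    ok=1
else
    ok=0
    echo "query failed: $err" >&2
fi
```

The redirection order matters: `2>&1` first points stderr at the captured stdout, then `>/dev/null` discards the command's real stdout (or, for mysql, you would keep stdout for the query results).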
I have a q.sql file that has queries like
SET SQL_SAFE_UPDATES = 0;
UPDATE student SET gender = 'f' WHERE gender = 'm';
.
.
UPDATE student SET rollno = '03' WHERE rollno = '003';
This .sql file is executed through a shellscript:
mysql -uuser -ppass DB < q.sql
The command is executed even when one of the queries in the q.sql file has failed. Now I want to verify whether all the queries were executed successfully.
I tried to echo $?, but it always prints 0, i.e. command successful, even if one of the queries in q.sql has failed.
mysql -uuser -ppass DB < q.sql
echo $?
If a query fails, I want it to print "failed" or stop further execution of the shell script.
If you use bash, you can use set -e in your script and execute each line of your MySQL script using the -e option.
#!/bin/bash
set -e
while read -r line; do
    mysql -uuser -ppass DB -e "$line"
done < q.sql
For information set --help shows:
-e Exit immediately if a command exits with a non-zero status.
and the man mysql page:
--execute=statement, -e statement
Execute the statement and quit. The default output format is like that produced with --batch. See Section 4.2.3.1, "Using Options on the Command Line", for some examples. With this option, mysql does not use the history file.
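If you would rather not enable set -e for the whole script, a sketch of the same per-line idea can stop at the first failing statement explicitly. Here a grep check stands in for `mysql -uuser -ppass DB -e "$line"`, and the q.sql contents and path are illustrative:

```shell
# Sketch: stop at the first failing statement without set -e.
# The grep check stands in for: mysql -uuser -ppass DB -e "$line"
cat > /tmp/q.sql <<'EOF'
UPDATE student SET gender = 'f' WHERE gender = 'm'
BROKEN statement
UPDATE student SET rollno = '03' WHERE rollno = '003'
EOF
ran=0
while read -r line; do
    if ! printf '%s\n' "$line" | grep -q '^UPDATE'; then
        echo "failed on: $line"
        break
    fi
    ran=$((ran + 1))
done < /tmp/q.sql
```

One caveat with any line-by-line approach: it assumes each statement fits on a single line of the .sql file, which is not true for multi-line statements.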
You can catch the output in a file for further processing:
mysql -uuser -ppass DB < q.sql > mysql.out
If you have a query that produces a lot of output, you can run the output through a pager rather than watching it scroll off the top of your screen:
mysql -uuser -ppass DB < q.sql | more
If you want to get the interactive output format in batch mode, use mysql -t. To echo to the output the statements that are executed, use mysql -v.
https://dev.mysql.com/doc/refman/8.0/en/batch-mode.html
I'm trying to set a variable inside a loop in Ubuntu bash. The loop reads a recordset from a database, but the variable keeps resetting to its previous value.
Here is a code:
#!/bin/bash
PREV_FILE_PATH="127"
while true
do
echo "$PREV_FILE_PATH"
mysql -h$DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME --skip-column-names --default-character-set=UTF8 -e "here is a query" | while read "here is getting variables from recordset";
do
PREV_FILE_PATH="777"
done
done
And this code prints the same thing every time:
127
127
127
But when I replaced this block:
mysql -h$DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME --skip-column-names --default-character-set=UTF8 -e "here is a query" | while read "here is getting variables from recordset";
with just while true and a break at the end of the loop, it works fine and prints:
127
777
777
777
The script creates a subshell and runs the MySQL query in that subshell. So what should I do to make the script change that variable?
As you noted, the issue is due to the creation of a subshell, which is caused by piping the output of the mysql command into the while loop. A simple example:
PREV_FILE_PATH=127
echo test | while read _; do
PREV_FILE_PATH=777
done
echo $PREV_FILE_PATH
# output: 127
Since you're using BASH you can move the mysql command from being a pipe to a substituted process fed to the while loop via STDIN redirection. Using the previous simple example:
PREV_FILE_PATH=127
while read _; do
PREV_FILE_PATH=777
done < <(echo test)
echo $PREV_FILE_PATH
# output: 777
So to fix your code, you will want to move your mysql command in the same fashion that I moved the echo command above:
while read "here is getting variables from recordset"
do
PREV_FILE_PATH="777"
done < <(mysql -h$DB_HOST -u $DB_USER [..remaining options..])
Note that process substitution via <() is a BASH-ism and isn't POSIX compliant.
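If you prefer to keep the pipe syntax, another bash-specific option (assuming bash 4.2+ in a non-interactive script, where job control is off) is `shopt -s lastpipe`, which runs the last element of a pipeline in the current shell instead of a subshell. Again `echo test` stands in for the mysql command:

```shell
#!/bin/bash
# Sketch: lastpipe keeps the while loop in the current shell,
# so assignments inside it survive. 'echo test' stands in for mysql.
shopt -s lastpipe
PREV_FILE_PATH=127
echo test | while read -r _; do
    PREV_FILE_PATH=777
done
echo "$PREV_FILE_PATH"
```

Like process substitution, lastpipe is a bash-ism and isn't POSIX compliant; it also has no effect when job control is enabled (interactive shells).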
I'm doing a bash script that interacts with a MySQL datatabase using the mysql command line programme. I want to use table locks in my SQL. Can I do this?
mysql -e "LOCK TABLES mytable READ"
# do some bash stuff
mysql -e "UNLOCK TABLES"
The reason I ask, is because table locks are only kept for the session, so wouldn't the lock be released as soon as that mysql programme finishes?
[EDIT]
nos had the basic idea -- only run "mysql" once, and the solution nos provided should work, but it left the FIFO on disk.
nos was also correct that I screwed up: a simple "echo X >FIFO" will close the FIFO; I remembered wrongly. And my (removed) comments w.r.t. timing don't apply, sorry.
That said, you don't need a FIFO, you could use an inter-process pipe. And looking through my old MySQL scripts, some worked akin to this, but you cannot let any commands write to stdout (without some "exec" tricks).
#!/bin/bash
(
echo "LOCK TABLES mytable READ ;"
echo "Doing something..." >&2
echo "describe mytable;"
sleep 5
echo "UNLOCK tables;"
) | mysql ${ARGUMENTS}
Another option might be to assign a file descriptor to the FIFO, then have it run in the background. This is very similar to what nos did, but the "exec" option wouldn't require a subshell to run the bash commands; hence would allow you to set "RC" in the "other stuff":
#!/bin/bash
# Use the PID ($$) in the FIFO and remove it on exit:
FIFO="/tmp/mysql-pipe.$$"
mkfifo ${FIFO} || exit $?
RC=0
# Tie FD3 to the FIFO (only for writing), then start MySQL in the
# background with its input from the FIFO:
exec 3<>${FIFO}
mysql ${ARGUMENTS} <${FIFO} &
MYSQL=$!
trap "rm -f ${FIFO};kill -1 ${MYSQL} 2>&-" 0
# Now lock the table...
echo "LOCK TABLES mytable WRITE;" >&3
# ... do your other stuff here, set RC ...
echo "DESCRIBE mytable;" >&3
sleep 5
RC=3
# ...
echo "UNLOCK TABLES;" >&3
exec 3>&-
# You probably wish to sleep for a bit, or wait on ${MYSQL} before you exit
exit ${RC}
Note that there are a few control issues:
This code has NO ERROR CHECKING for failure to lock (or any SQL commands within the "other stuff"), and that's definitely non-trivial.
Since in the first example the "other stuff" runs within a subshell, you cannot easily set the return code of the script from that context.
Here's one way; I'm sure there's an easier way, though...
mkfifo /tmp/mysql-pipe
mysql mydb </tmp/mysql-pipe &
(
echo "LOCK TABLES mytable READ ;" 1>&6
echo "Doing something "
echo "UNLOCK tables;" 1>&6
) 6> /tmp/mysql-pipe
A very interesting approach I found while looking into this issue myself is to use MySQL's SYSTEM command. I'm still not sure what the drawbacks are, if any, but it will certainly work for a lot of cases:
Example:
mysql <<END_HEREDOC
LOCK TABLES mytable READ;
SYSTEM /path/to/script.sh
UNLOCK TABLES;
END_HEREDOC
It's worth noting that this only works on *nix, obviously, as does the SYSTEM command.
Credit goes to Daniel Kadosh: http://dev.mysql.com/doc/refman/5.5/en/lock-tables.html#c10447
Another approach without the mkfifo commands:
cat <(echo "LOCK TABLES mytable READ;") <(sleep 3600) | mysql &
LOCK_PID=$!
# BASH STUFF
kill $LOCK_PID
I think Amr's answer is the simplest. However I wanted to share this because someone else may also need a slightly different answer.
The sleep 3600 pauses the input for 1 hour. You can find other commands to make it pause here: https://unix.stackexchange.com/questions/42901/how-to-do-nothing-forever-in-an-elegant-way
The lock tables SQL runs immediately, then it will wait for the sleep timer.
Problems and limitations in the existing answers
Answers by NVRAM, nos and xer0x
If the commands between LOCK TABLES and UNLOCK TABLES are all SQL queries, you should be fine.
In that case, however, why not simply construct a single SQL file and pipe it to the mysql command?
If there are commands other than SQL queries in the critical section, you could run into trouble:
the echo command that sends the lock statement to the file descriptor doesn't block and wait for mysql to respond,
so subsequent commands may execute before the lock is actually acquired. Synchronization isn't guaranteed.
Answer by Amr Mostafa
The SYSTEM command is executed on the MySQL server, so the script or command to be executed must be present on that same server.
You will need terminal access to the machine/VM/container that hosts the server (or at least a means of transferring your script to the server host).
SYSTEM command also works on Windows as of MySQL 8.0.19, but running it on a Windows server of course means you will be running a Windows command (e.g. batch file or PowerShell script).
A modified solution
Below is an example solution based on the answers by NVRAM and nos, but it waits for the lock:
#!/bin/bash
# creates named pipes for attaching to stdin and stdout of mysql
mkfifo /tmp/mysql.stdin.pipe /tmp/mysql.stdout.pipe
# unbuffered option to ensure mysql doesn't buffer the output, so we can read immediately
# batch and skip-column-names options are for ease of parsing the output
mysql --unbuffered --batch --skip-column-names $OTHER_MYSQL_OPTIONS < /tmp/mysql.stdin.pipe > /tmp/mysql.stdout.pipe &
PID_MYSQL=$!
# make sure to stop mysql and remove the pipes before leaving
cleanup_proc_pipe() {
kill $PID_MYSQL
rm -rf /tmp/mysql.stdin.pipe /tmp/mysql.stdout.pipe
}
trap cleanup_proc_pipe EXIT
# open file descriptors for writing and reading
exec 10>/tmp/mysql.stdin.pipe
exec 11</tmp/mysql.stdout.pipe
# update the cleanup procedure to close the file descriptors
cleanup_fd() {
exec 10>&-
exec 11>&-
cleanup_proc_pipe
}
trap cleanup_fd EXIT
# try to obtain lock with 5 seconds of timeout
echo 'SELECT GET_LOCK("my_lock", 5);' >&10
# read stdout of mysql with 6 seconds of timeout
if ! read -t 6 line <&11; then
echo "Timeout reading from mysql"
elif [[ $line == 1 ]]; then
echo "Lock acquired successfully"
echo "Doing some critical stuff..."
echo 'DO RELEASE_LOCK("my_lock");' >&10
else
echo "Timeout waiting for lock"
fi
The above example uses SELECT GET_LOCK() to enter the critical section. It produces output that we can parse to decide what to do next.
If you need to execute statements that don't produce output (e.g. LOCK TABLES and START TRANSACTION), you may issue a dummy SELECT 1; after such a statement and read from stdout with a reasonable timeout. E.g.:
# ...
echo 'LOCK TABLES my_table WRITE;' >&10
echo 'SELECT 1;' >&10
if ! read -t 10 line <&11; then
echo "Timeout reading from mysql"
elif [[ $line == 1 ]]; then
echo "Table lock acquired"
# ...
else
echo "Unexpected output?!"
fi
You may also want to attach a third named pipe to stderr of mysql to handle different cases of error.
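A minimal sketch of that stderr pipe follows. A small `sh -c` process stands in for mysql, and the error text, FIFO path, and file descriptor number are all illustrative:

```shell
# Sketch: read a co-process's stderr through its own named pipe with a timeout.
# The sh -c command stands in for mysql; the error text and paths are made up.
rm -f /tmp/demo.stderr.pipe
mkfifo /tmp/demo.stderr.pipe
sh -c 'echo "ERROR 1146 (42S02): Table missing" >&2' 2> /tmp/demo.stderr.pipe &
# open the read end; FIFO semantics make reader and writer rendezvous here
exec 12< /tmp/demo.stderr.pipe
if read -t 5 errline <&12; then
    echo "mysql error: $errline"
fi
exec 12<&-
rm -f /tmp/demo.stderr.pipe
```

With a line read from the stderr pipe in hand, you can branch on the MySQL error number the same way the examples above branch on the GET_LOCK result.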