Getting MySQL query result without error messages (suppress error messages) - mysql

I need to suppress all error messages that the mysql command prints to stdout. I saw many other similar questions, but all the answers suggest workarounds to avoid these messages (check that the database/table/column exists before executing the query). But I need the mysql command to return a failure exit code on error and to print nothing except the data explicitly requested by the query on a successful run. The -s option doesn't help in hiding error messages.
My task is to execute a MySQL query in a script and get either the requested data (printed with the -s option) or a non-zero exit code. I don't want to check the existence of each and every table/column/etc. before executing the target query. How can I achieve this?
UPD: I tried this but it didn't help:
mysql ... db -s -N -e "SELECT config_id FROM core_config_data LIMIT 1;" 2> /dev/null

To sum it up:
You want the mysql command to:
not print any error
exit with a non-success code when a query fails
Then I've got good news for you! Errors are always written to stderr, so you can simply redirect that stream to /dev/null or wherever you like.
root@icarus ~/so # mariadb -Dmaio290sql1 -e 'SELECT * FROM wp_users' -s 2> bla.txt
[actual content]
root@icarus ~/so # echo $?
0
root@icarus ~/so # mariadb -Dmaio290sql1 -e 'SELECT * FROM nope' -s 2> bla.txt
root@icarus ~/so # echo $?
1
The last query is throwing an error and therefore the exit code is not 0.
This was tested on MariaDB though: 10.3.27-MariaDB-0+deb10u1 Debian 10.
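Putting that together for the original task, a minimal sketch (connection options omitted as in the question; the table name is taken from there) could look like this:
if ! config_id=$(mysql db -s -N -e "SELECT config_id FROM core_config_data LIMIT 1;" 2>/dev/null); then
    echo "query failed" >&2
    exit 1
fi
echo "config_id: $config_id"
The command substitution captures only the data printed to stdout, the 2>/dev/null hides any error text, and the non-zero exit code still reaches the if.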

Related

Abort subsequent mysql script execution if any one of the mysql script failed to execute

I have 3 SQL scripts (11.sql, 12.sql & 13.sql) which contain stored procedures (or) DML statements and have to be executed one after the other in ascending order. I am able to do this through a bash script.
for sql_file in `ls *.sql`
do
echo "Output of the $sql_file as below..."
mysql --defaults-file=/home/sri/myfile.txt -v --safe-updates=1 << EOFMYSQL
USE sampledb;
SOURCE $sql_file;
EOFMYSQL
done
But whenever any one .sql file fails for reasons like those below, execution has to be aborted and the remaining .sql files should not execute.
Error Code: 1146. Table 'sampledb.table_one' doesn't exist 0.360 sec
Access denied errors (or) Insufficient privileges
But irrespective of failures, all 3 SQL scripts execute one after the other. I tried various options but could not figure out the exact steps to implement this requirement of aborting SQL script execution in case of any errors.
Thank you in advance.
One way is to put your heredoc contents (i.e. lines between EOFMYSQL) into the appropriate *.sql files.
Then use this basic structure to execute each file and exit if mysql's exit status $? is not zero (because the SQL failed with an error):
for sql_file in *.sql
do
mysql --defaults-file=... < "$sql_file"
[ $? -ne 0 ] && exit
done
I was able to check whether the SQL script (stored procedure with DML statements) executed successfully or not by using the script below. If it fails, it returns the corresponding MySQL error code/statement.
for sql_file in `ls *.sql`
do
mysql --defaults-file=input.txt --safe-updates=1 sampledb < $sql_file
if [ $? -eq 0 ]; then
echo "$sql_file was executed"
else
echo "Aborting MySQL execution due to previous script error. Please investigate"
exit 1
fi
done
But I also need the bash script to display messages like the one below, which explain the number of records affected by the DML statement execution (through the stored procedure scripts).
{0 row(s) affected Rows matched: 1 Changed: 0 Warnings:0}
Could you please let me know how to implement this.
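One way to surface such counts (a sketch, assuming the .sql scripts can be edited; the column names here are hypothetical) is to SELECT ROW_COUNT() right after each DML statement, so the count is printed as part of the normal query output:
-- inside 11.sql, directly after a DML statement
UPDATE table_one SET status = 'done' WHERE id = 1;
SELECT CONCAT(ROW_COUNT(), ' row(s) affected') AS info;
Note that ROW_COUNT() reports only the rows changed by the previous statement, not the full "Rows matched / Changed / Warnings" breakdown that the interactive client shows.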

mysql cmdline client to return error to shell script

Is there a way to get the mysql cmdline client to abort as soon as it encounters an error, and to return a non-zero exit status to the controlling shell script?
Basically I want to be able to have a shell script like:
mysql -A --batch <<END_SQL
UPDATE table1...
UPDATE table2...
...
END_SQL
if [ $? -ne 0 ]
then
# Error handling
fi
With Oracle I just had to put WHENEVER SQLERROR EXIT FAILURE at the top of the SQL commands. Is there a MySQL equivalent? I haven't found anything on Google.
By default, the mysql client already does exit when it encounters an error.
You can make the mysql client not exit when it encounters an error if you use the --force option.
$ echo "select * from nonexistanttable ; select now()" | mysql test
ERROR 1146 (42S02) at line 1: Table 'test.nonexistanttable' doesn't exist
$ echo $?
1
Notice this does not return the result of now(). It exited first.
$ echo "select * from nonexistanttable ; select now()" | mysql --force test
ERROR 1146 (42S02) at line 1: Table 'test.nonexistanttable' doesn't exist
now()
2017-11-12 22:46:30
$ echo $?
0
Ah, there it is.
This question is sort of the opposite of MySQL: ignore errors when importing?
It's not widely known that this is the behavior of the mysql client. The only clue would have been to read about the --force option in this manual page: https://dev.mysql.com/doc/refman/5.7/en/mysql-command-options.html

Bash mysql error handling locked tables

I have a shell script that is called with a few parameters very frequently. It is supposed to build a query and execute the statement. In case an error occurs, it should write the arguments, separated, into a file so the error handling can take place by calling that script again.
Everything works, but the problem is: I catch connection-refused errors etc., but not the case where the statement cannot be executed because the table is locked, and I do not want to wait for the timeout.
my code:
...
mysql -u ${username} -p${password} -h ${database} -P ${port} --connect-timeout=1 --skip-reconnect -e "$NQUERY"
mysqlstatus=$?
if [ $mysqlstatus -ne 0 ]; then
echo "[ERROR:QUERY COULD NOT BE EXECUTED:$mysqlstatus: QUERY WRITTEN TO LOG]" >> ${GENLOG}
#echo ${NQUERY} >> ${FQUER}
for i in "$#"; do
ARGS="$ARGS $i|"
done
echo "${ARGS}" >> ${ARGLOG}
else
echo "[OK] $NQUERY" >> ${GENLOG}
fi
...
But when a table is locked, the execution is not cancelled and it runs practically forever.
It's not a solution for me to set max_statement_time or anything else on the MySQL server, since I'm not the only one using the DB.
You can use the timeout command along with mysql
timeout 3 mysql -u ...
This will wait 3 seconds for the mysql command to return; if the command runs longer than 3 seconds, timeout will kill it and return exit status 124 to the shell. If you don't have timeout, you can use job control with something like this.
# background the process
mysql -u ... &
# get the PID of the background process
bg_pid=$!
sleep 3
# check if the PID is still running
# using string matching in case the PID was reassigned
if [[ $(ps -p $bg_pid -o args --no-headers) =~ "mysql" ]]
then
echo "running too long"
else
echo "OK"
fi
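To plug the timeout variant into the original error handling, a small sketch (reusing the variables from the question) that distinguishes the timeout case (exit status 124) from other failures:
timeout 3 mysql -u ${username} -p${password} -h ${database} -P ${port} --connect-timeout=1 --skip-reconnect -e "$NQUERY"
mysqlstatus=$?
if [ $mysqlstatus -eq 124 ]; then
    # timeout killed mysql, most likely because the table was locked
    echo "[ERROR:QUERY TIMED OUT:124: QUERY WRITTEN TO LOG]" >> ${GENLOG}
elif [ $mysqlstatus -ne 0 ]; then
    echo "[ERROR:QUERY COULD NOT BE EXECUTED:$mysqlstatus: QUERY WRITTEN TO LOG]" >> ${GENLOG}
else
    echo "[OK] $NQUERY" >> ${GENLOG}
fi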

MySQLSHOW Suppress Warning in Bash Script

I'm working on a simple bash script and one of the things it does is check whether a database already exists before moving on. It's simple enough code, but I'm getting a warning message whenever I try to run the script and I want to suppress that.
Here is the code:
if ! mysql -uroot -proot -e "use $NAME"; then
echo YES
else
echo NO
fi
So, as output, I get the following message when the if statement returns true:
ERROR 1049 (42000) at line 1: Unknown database 'database'
YES
How can I suppress that message? It doesn't stop the script from running, but I would prefer not to see it.
It simply tells you that the DB with the name database (which is apparently passed as a value of the $NAME variable) doesn't exist. Use the correct DB name and there'll be no warning.
To simply mute the warning, redirect all the output to /dev/null as usual (the order matters: redirect stdout first, then send stderr to the same place):
if ! mysql -uroot -proot -e "use $NAME" >/dev/null 2>&1; then
echo YES
else
echo NO
fi
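If you would rather not trigger the error at all, an alternative sketch (assuming the same root credentials; the lookup against information_schema is standard SQL) is to ask whether the database exists and test for empty output:
EXISTS=$(mysql -uroot -proot -N -s -e "SELECT SCHEMA_NAME FROM information_schema.SCHEMATA WHERE SCHEMA_NAME = '$NAME'")
if [ -z "$EXISTS" ]; then
    echo YES   # database does not exist, same branch as the original
else
    echo NO
fi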

MySQL from the command line - can I practically use LOCKs?

I'm writing a bash script that interacts with a MySQL database using the mysql command line programme. I want to use table locks in my SQL. Can I do this?
mysql -e "LOCK TABLES mytable"
# do some bash stuff
mysql -u "UNLOCK TABLES"
The reason I ask, is because table locks are only kept for the session, so wouldn't the lock be released as soon as that mysql programme finishes?
[EDIT]
nos had the basic idea -- only run "mysql" once, and the solution nos provided should work, but it left the FIFO on disk.
nos was also correct that I screwed up: a simple "echo X >FIFO" will close the FIFO; I remembered wrongly. And my (removed) comments w.r.t. timing don't apply, sorry.
That said, you don't need a FIFO, you could use an inter-process pipe. And looking through my old MySQL scripts, some worked akin to this, but you cannot let any commands write to stdout (without some "exec" tricks).
#!/bin/bash
(
echo "LOCK TABLES mytable READ ;"
echo "Doing something..." >&2
echo "describe mytable;"
sleep 5
echo "UNLOCK tables;"
) | mysql ${ARGUMENTS}
Another option might be to assign a file descriptor to the FIFO, then have it run in the background. This is very similar to what nos did, but the "exec" option wouldn't require a subshell to run the bash commands; hence would allow you to set "RC" in the "other stuff":
#!/bin/bash
# Use the PID ($$) in the FIFO and remove it on exit:
FIFO="/tmp/mysql-pipe.$$"
mkfifo ${FIFO} || exit $?
RC=0
# Tie FD3 to the FIFO (only for writing), then start MySQL in the
# background with its input from the FIFO:
exec 3<>${FIFO}
mysql ${ARGUMENTS} <${FIFO} &
MYSQL=$!
trap "rm -f ${FIFO};kill -1 ${MYSQL} 2>&-" 0
# Now lock the table...
echo "LOCK TABLES mytable WRITE;" >&3
# ... do your other stuff here, set RC ...
echo "DESCRIBE mytable;" >&3
sleep 5
RC=3
# ...
echo "UNLOCK TABLES;" >&3
exec 3>&-
# You probably wish to sleep for a bit, or wait on ${MYSQL} before you exit
exit ${RC}
Note that there are a few control issues:
This code has NO ERROR CHECKING for failure to lock (or for any SQL commands within the "other stuff"). And that's definitely non-trivial.
Since in the first example the "other stuff" is within a subshell, you cannot easily set the return code of the script from that context.
Here's one way, I'm sure there's an easier way though..
mkfifo /tmp/mysql-pipe
mysql mydb </tmp/mysql-pipe &
(
echo "LOCK TABLES mytable READ ;" 1>&6
echo "Doing something "
echo "UNLOCK tables;" 1>&6
) 6> /tmp/mysql-pipe
A very interesting approach I found while looking into this issue on my own is to use MySQL's SYSTEM command. I'm still not sure what exactly the drawbacks are, if any, but it will certainly work for a lot of cases:
Example:
mysql <<END_HEREDOC
LOCK TABLES mytable;
SYSTEM /path/to/script.sh
UNLOCK TABLES;
END_HEREDOC
It's worth noting that this only works on *nix, obviously, as does the SYSTEM command.
Credit goes to Daniel Kadosh: http://dev.mysql.com/doc/refman/5.5/en/lock-tables.html#c10447
Another approach without the mkfifo commands:
cat <(echo "LOCK TABLES mytable;") <(sleep 3600) | mysql &
LOCK_PID=$!
# BASH STUFF
kill $LOCK_PID
I think Amr's answer is the simplest. However I wanted to share this because someone else may also need a slightly different answer.
The sleep 3600 pauses the input for 1 hour. You can find other commands to make it pause here: https://unix.stackexchange.com/questions/42901/how-to-do-nothing-forever-in-an-elegant-way
The lock tables SQL runs immediately, then it will wait for the sleep timer.
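One caveat: if the script dies before it reaches the kill, the background mysql keeps holding the lock until the sleep expires. A small sketch of the same idea (adding an explicit READ lock type, which LOCK TABLES requires, plus a trap for cleanup):
cat <(echo "LOCK TABLES mytable READ;") <(sleep 3600) | mysql &
LOCK_PID=$!
# make sure the lock-holding session is terminated even if the script exits early
trap 'kill $LOCK_PID 2>/dev/null' EXIT
# BASH STUFF
kill $LOCK_PID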
Problems and limitations in existing answers
Answers by NVRAM, nos and xer0x
If commands between LOCK TABLES and UNLOCK TABLES are all SQL queries, you should be fine.
In this case, however, why don't we just simply construct a single SQL file and pipe it to the mysql command?
If there are commands other than SQL queries in the critical section, you could run into trouble.
The echo command that sends the lock statement to the file descriptor doesn't block and wait for mysql to respond.
Subsequent commands may therefore be executed before the lock is actually acquired; synchronization isn't guaranteed.
Answer by Amr Mostafa
The SYSTEM command is executed on the MySQL server. So the script or command to be executed must be present on the same MySQL server.
You will need terminal access to the machine/VM/container that hosts the server (or at least a means to transfer your script to the server host).
SYSTEM command also works on Windows as of MySQL 8.0.19, but running it on a Windows server of course means you will be running a Windows command (e.g. batch file or PowerShell script).
A modified solution
Below is an example solution based on the answers by NVRAM and nos, but one that waits for the lock:
#!/bin/bash
# creates named pipes for attaching to stdin and stdout of mysql
mkfifo /tmp/mysql.stdin.pipe /tmp/mysql.stdout.pipe
# unbuffered option to ensure mysql doesn't buffer the output, so we can read immediately
# batch and skip-column-names options are for ease of parsing the output
mysql --unbuffered --batch --skip-column-names $OTHER_MYSQL_OPTIONS < /tmp/mysql.stdin.pipe > /tmp/mysql.stdout.pipe &
PID_MYSQL=$!
# make sure to stop mysql and remove the pipes before leaving
cleanup_proc_pipe() {
kill $PID_MYSQL
rm -rf /tmp/mysql.stdin.pipe /tmp/mysql.stdout.pipe
}
trap cleanup_proc_pipe EXIT
# open file descriptors for writing and reading
exec 10>/tmp/mysql.stdin.pipe
exec 11</tmp/mysql.stdout.pipe
# update the cleanup procedure to close the file descriptors
cleanup_fd() {
exec 10>&-
exec 11>&-
cleanup_proc_pipe
}
trap cleanup_fd EXIT
# try to obtain lock with 5 seconds of timeout
echo 'SELECT GET_LOCK("my_lock", 5);' >&10
# read stdout of mysql with 6 seconds of timeout
if ! read -t 6 line <&11; then
echo "Timeout reading from mysql"
elif [[ $line == 1 ]]; then
echo "Lock acquired successfully"
echo "Doing some critical stuff..."
echo 'DO RELEASE_LOCK("my_lock");' >&10
else
echo "Timeout waiting for lock"
fi
The above example uses SELECT GET_LOCK() to enter the critical section. It produces output that we can parse to decide what to do next.
If you need to execute statements that don't produce output (e.g. LOCK TABLES and START TRANSACTION), you may perform a dummy SELECT 1; after such a statement and read from stdout with a reasonable timeout. E.g.:
# ...
echo 'LOCK TABLES my_table WRITE;' >&10
echo 'SELECT 1;' >&10
if ! read -t 10 line <&11; then
echo "Timeout reading from mysql"
elif [[ $line == 1 ]]; then
echo "Table lock acquired"
# ...
else
echo "Unexpected output?!"
fi
You may also want to attach a third named pipe to stderr of mysql to handle different cases of error.
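A minimal sketch of that idea, building on the script above (the pipe name and the one-second poll are arbitrary choices):
# in addition to the two pipes above, create one for mysql's stderr
mkfifo /tmp/mysql.stderr.pipe
# start mysql with stderr attached to the new pipe (replacing the earlier invocation)
mysql --unbuffered --batch --skip-column-names $OTHER_MYSQL_OPTIONS \
    < /tmp/mysql.stdin.pipe > /tmp/mysql.stdout.pipe 2> /tmp/mysql.stderr.pipe &
PID_MYSQL=$!
# open it for reading after file descriptors 10 and 11 have been set up
exec 12< /tmp/mysql.stderr.pipe
# after sending a statement, poll stderr briefly; no output within the
# timeout is taken to mean no error was reported
if read -t 1 err_line <&12; then
    echo "mysql reported an error: $err_line"
fi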