Separating sqlcmd's stdout and stderr - sqlcmd

I need to implement a test harness for Azure SQL Data Warehouse using sqlcmd on Linux. In my test, I want to capture any error messages encountered by sqlcmd, but send the query results to /dev/null (using -o argument). In reviewing https://msdn.microsoft.com/en-us/library/ms162773.aspx, it seems that if -o is used, -r1 is meaningless.
-r[0 | 1]
Redirects the error message output to the screen (stderr). If you do not specify a parameter or if you specify 0, only error messages that have a severity level of 11 or higher are redirected. If you specify 1, all error message output including PRINT is redirected. Has no effect if you use -o. By default, messages are sent to stdout.
I'm having trouble understanding why stdout and stderr would be intermingled in this way.
Is my only recourse to run sqlcmd in the background while writing "stdout+stderr" to a named pipe and then strip off the error messages from the named pipe's results?
I don't want to incur any delay in writing the output, but do want to return the full results to the client.

Does "sqlcmd -i debug_dir/test2.sql -r1 2> /tmp/2.out 1> /tmp/1.out" work for you?
I tried it on my local box:
$> cat debug_dir/test2.sql
select * from test1;
go
select * from test1 where i = 'a';
go
select * from test2;
go
$> sqlcmd -S XXX -N -U YYY -P ZZZ -d AAA -I -i debug_dir/test2.sql -r1 2> /tmp/2.out 1> /tmp/1.out
$> cat /tmp/1.out
i j k
2 1 3
1 2 3
2 1 4
(3 rows affected)
a b c
2 1 4
11 12 4
11 12 3
2 1 3
$> cat /tmp/2.out
Msg 245, Level 16, State 1, Server XXX, Line 1
Conversion failed when converting the varchar value 'a' to data type int.
Msg 104309, Level 16, State 1, Server XXX, Line 1
There are no batches in the input script.
You can redirect to /dev/null instead of /tmp/1.out
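The redirection itself can be checked without a server; this stand-in (not sqlcmd) writes results to stdout and an error message to stderr, and shows that only stderr survives:

```shell
# Stand-in for sqlcmd: results go to stdout, the error message to stderr.
# Results are discarded, errors are kept in /tmp/2.out
{ echo "query results"; echo "Msg 245, Level 16: sample error" >&2; } \
    1> /dev/null 2> /tmp/2.out
cat /tmp/2.out
```

The order of the two redirections does not matter here, because each stream is redirected independently.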
I hope that helps.

How to redirect the mysqldump to output file using xargs?

I have the same problem as mentioned in the question below.
Can't take mysqldump of long argument list
Any idea how to redirect the mysqldump result to an output file?
You can redirect the result of xargs with > just like any other shell command.
For example if I have an xargs pipeline that runs three commands:
% echo "1\n2\n3" | xargs -n 1 "echo"
1
2
3
I can just use > to redirect the output of all three commands:
% echo "1\n2\n3" | xargs -n 1 "echo" > output
% more output
1
2
3
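Applied to the linked mysqldump case, the > sits after the whole pipeline. In this sketch echo stands in for mysqldump (the table list and database name are hypothetical) so the behaviour is visible without a server:

```shell
# Hypothetical list of tables to dump
printf 'tbl_a\ntbl_b\ntbl_c\n' > tables.txt
# Real case would be: xargs mysqldump -u user -p mydb < tables.txt > dump.sql
# echo stands in for mysqldump; xargs appends the table names as arguments
xargs echo mysqldump mydb < tables.txt > dump.sql
cat dump.sql
```

All of the command's output, however many invocations xargs makes, lands in the one redirected file.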

Beeline-Hive returns CSV with blank rows on top of data

My script does a simple job: it runs SQL from a file and saves the result to CSV.
The code is up and running, but there is odd behaviour in the CSV output.
Data starts at around line 70 of the CSV file, rather than at the very beginning.
#!/bin/bash
beeline -u jdbc:hive2:default -n -p --silent=true --outputformat=csv2 -f code.sql > file_$(date +%Y%m%d%H%M).csv
I would like my data to start at the very first row of actual data.
1 blank;blank;blank
2 blank;blank;blank
3 blank;blank;blank
4 attr;attr;attr
5 data;data;data
6 data;data;data
7 data;data;data
8 data;data;data
9 data;data;data
A workaround, embedded in the next step of my automation:
sed -i '/^$/d' file.txt
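The same filter can also run inline in the pipeline, so the blank rows never reach the file and no second pass is needed; here printf stands in for the beeline output:

```shell
# printf stands in for beeline output that begins with blank lines;
# sed '/^$/d' deletes every empty line before it reaches the CSV
printf '\n\n\nattr;attr;attr\ndata;data;data\n' | sed '/^$/d'
```

In the real script this would be: beeline ... -f code.sql | sed '/^$/d' > file.csv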

How does this shell pipe magic (... | tee >(tail -c1 >$PULSE) | bzip2 | ...) work?

Here is the original source code (the relevant 30 lines of bash are highlighted).
Here it is simplified (s3 is a binary which streams to object storage). The dots (...) are options not posted here.
PULSE=$(mktemp -t shield-pipe.XXXXX)
trap "rm -f ${PULSE}" QUIT TERM INT
set -o pipefail
mysqldump ... | tee >(tail -c1 >$PULSE) | bzip2 | s3 stream ...
How does that work exactly? Can you explain how these redirections and pipes work? How do I debug the error mysqldump: Got errno 32 on write? When invoked manually, mysqldump never fails with an error.
The tricky part is that:
tee writes to standard output as well as a file
>( cmd ) creates a writeable process substitution (a command that mimics the behaviour of a writeable file)
This is used to effectively pipe the output of mysqldump into two other commands: tail -c1 to print the last byte to a file and bzip2 to compress the stream.
As Inian pointed out in the comments, the error 32 comes from a broken pipe. I guess that this comes from s3 stream terminating (maybe a timeout?) which in turn causes the preceding commands in the pipeline to fail.
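A minimal reproduction of the tee/process-substitution part (assuming bash; the sleep papers over the fact that the >( ) process runs asynchronously and may finish after the pipeline does):

```shell
PULSE=$(mktemp)
# tee duplicates the stream: one copy feeds the process substitution
# (which keeps only the last byte), the other continues down the pipe
printf 'abc' | tee >(tail -c1 > "$PULSE") | wc -c
sleep 1          # give the asynchronous >( ) process time to finish
cat "$PULSE"     # the last byte of the stream
rm -f "$PULSE"
```

wc -c here plays the role of bzip2 | s3 stream: it consumes the full copy of the stream, while $PULSE ends up holding just the final byte.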

How to filter mysql.log whole query?

Hello, I have a problem with filtering mysql.log (the general log). I am trying to filter out the whole query, but in the log file queries are split across newlines, and grep shows only part of the query.
Command
tail -n 2000000 mysql.log | grep '016198498'
This produces only the following, without the UPDATE table SET part; just a fragment of the query:
inm = '016198498',
Is there any way to grep the whole query together with its timestamp?
A solution has been found: you can grep lines before and after the match, e.g. 10 lines before and 10 lines after, which provides sufficient output in this case.
tail -n 3000000 mysql.log | grep -B 10 -A 10 '016198498'
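If GNU grep is available, another option is to let the match span newlines, so the whole statement comes out in one piece regardless of how many lines it covers (the sample log content below is hypothetical):

```shell
# Hypothetical sample: one query split across lines, as in the general log
printf "UPDATE accounts SET\n  inm = '016198498',\n  active = 1;\n" > sample.log
# GNU grep: -z reads the input as one record, -P enables Perl regexes
# ((?s) lets . match newlines), -o prints only the matched statement
grep -zoP '(?s)UPDATE.*?016198498.*?;' sample.log
```

Unlike -B/-A this does not print a fixed window, but it assumes the statement ends with a semicolon and that GNU grep (not BSD grep) is installed.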

Bash for loop picking up filenames and a column from read -r and gnu plot

The top part of the following script works great: the .dat files are created via the mysql command and work perfectly with gnuplot (via the command line). The problem is getting the bottom part (gnuplot) to work correctly. I'm pretty sure I have a couple of problems in the code: the variables and the array. I need to plot each .dat file, put the title in the graph (from the title column in customers.txt), and name the output file (.png).
Any guidance would be appreciated. Thanks a lot -- RichR
#!/bin/bash
set -x
databases=""
titles=""
while read -r ipAddr dbName title; do
    dbName=$(echo "$dbName" | sed -e 's/pacsdb//')
    rm -f "$dbName.dat"
    touch "$dbName.dat"
    databases=("$dbName.dat")
    titles="$titles $title"
    while read -r period; do
        mysql -uroot -pxxxx -h "$ipAddr" "pacsdb$dbName" -se \
            "SELECT COUNT(*) FROM tables WHERE some.info BETWEEN $period;" >> "$dbName.dat"
    done < periods.txt
done < customers.txt
for database in "${databases[@]}"; do
    gnuplot << EOF
set a bunch of options
set output "/var/www/$dbName.png"
plot "$dbName.dat" using 2:xtic(1) title "$titles"
EOF
done
exit 0
customers.txt example line-
192.168.179.222 pacsdbgibsonia "Gibsonia Animal Hospital"
Error output:
+ for database in '"${databases[@]}"'
+ gnuplot
line 0: warning: Skipping unreadable file ".dat"
line 0: No data in plot
+ exit 0
To initialise the databases array:
databases=()
To append $dbName.dat to the databases array:
databases+=("$dbName.dat")
To retrieve dbName inside the plotting loop, remove the suffix pattern .dat:
dbName=${database%.dat}
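Putting the three fixes together, a sketch of the corrected structure (echo stands in for the gnuplot call, the sample customers.txt line is taken from the question, and ${dbName#pacsdb} replaces the sed call, which should be equivalent for a prefix):

```shell
# Sample customers.txt, matching the example line in the question
printf '192.168.179.222 pacsdbgibsonia "Gibsonia Animal Hospital"\n' > customers.txt

databases=()                          # fix 1: initialise as an array
while read -r ipAddr dbName title; do
    dbName=${dbName#pacsdb}           # strip the pacsdb prefix
    databases+=("$dbName.dat")        # fix 2: append, don't overwrite
    # ... the mysql loop fills "$dbName.dat" here, as in the original ...
done < customers.txt

for database in "${databases[@]}"; do
    dbName=${database%.dat}           # fix 3: recover the name in this loop
    # gnuplot would read "$database" and write "/var/www/$dbName.png" here
    echo "would plot $database -> /var/www/$dbName.png"
done
```

With more than one customer line, the array then holds one .dat filename per database, and each pass of the second loop derives its own dbName instead of reusing the last one from the first loop.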