How to write a bash function to wrap another command? - mysql

I am trying to write a function wrapper for the mysql command. If .my.cnf exists in the current working directory, I would like to automatically attach --defaults-file=.my.cnf to the command. Here's what I'm trying:
function mysql {
    if [ -e ".my.cnf" ]; then
        /usr/local/bin/mysql --defaults-file=.my.cnf "$@"
    else
        /usr/local/bin/mysql "$@"
    fi
}
The idea is that I want to be able to use the mysql command exactly as before; only, if the .my.cnf file is present, it should be attached as an argument.
Question: Will I run into any trouble with this method? Is there a better way to do it?
If I specify --defaults-file=foo.cnf manually, that should be used instead of .my.cnf.

Your function as written is perfectly fine. This is a touch DRYer:
function mysql {
    if [ -e ".my.cnf" ]; then
        set -- --defaults-file=.my.cnf "$@"
    fi
    /usr/local/bin/mysql "$@"
}
That set command puts your --defaults-file argument at the beginning of the command-line arguments.
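As a quick illustration of what set -- does to the positional parameters:

set -- a b c
set -- X "$@"
echo "$@"   # prints: X a b c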
And to honor a manually specified --defaults-file, add the option only if it is not already present:
function mysql {
    if [[ -e ".my.cnf" && "$*" != *"--defaults-file"* ]]; then
        set -- --defaults-file=.my.cnf "$@"
    fi
    /usr/local/bin/mysql "$@"
}
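With either version in place, the wrapper stays transparent. For example, in a directory containing .my.cnf:

mysql -e 'select 1'
# actually runs: /usr/local/bin/mysql --defaults-file=.my.cnf -e 'select 1'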

Related

Access Access Design View field descriptions

I have an Access database with field descriptions that (theoretically) are visible in Design View. I don't have a copy of Access. I can export the data and schema using mdbtools, but those don't come with the descriptions. Are there ways to programmatically extract those descriptions?
Turns out there was an un/under-documented mdbtools command that will give metadata for a table: mdb-prop. Here's a shell script that will list the metadata of every field, adapted from a script whose provenance I have forgotten:
#!/usr/bin/env bash
# Usage: mdb-export-all.sh full-path-to-db
command -v mdb-tables >/dev/null 2>&1 || {
    echo >&2 "I require mdb-tables but it's not installed. Aborting."
    exit 1
}
command -v mdb-prop >/dev/null 2>&1 || {
    echo >&2 "I require mdb-prop but it's not installed. Aborting."
    exit 1
}
fullfilename=$1
filename=$(basename "$fullfilename")
dbname=${filename%.*}
mkdir "$dbname"
IFS=$'\n'
for table in $(mdb-tables -1 "$fullfilename"); do
    echo "Check table $table"
    # Save a file with all metadata for every field
    mdb-prop "$fullfilename" "$table" > "$dbname/$table.txt"
    # Save a file with just the descriptions
    grep -E 'name|Description' "$dbname/$table.txt" > "$dbname/info_$table.txt"
done
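Invocation follows the usage comment; the database path here is just an example:

./mdb-export-all.sh /path/to/mydatabase.mdb
# creates a mydatabase/ directory with <table>.txt and info_<table>.txt per table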

Execution of dynamic mysql query fails

In a recent bash script, I required a function to standardize calls to a mysql server. My first version of the function looked like this:
mysqlfunc()
{
    args="-A"
    if [ "$1" = "++" ]; then
        shift
        while [ 1 ]; do
            if [ "$1" = "--" ]; then
                shift
                break
            fi
            args="$args $1"
            shift
        done
    fi
    query="-e \"$*\""
    if [ -f "$1" ]; then
        query="< $1"
    fi
    mysql $args -h<host> -p<password> -P<port> -u<user> <database> $query
}
This version of the function produced a syntactically correct mysql statement; executing the evaluated command on the command line worked without error. However, when a file was passed to the function, such as:
mysqlfunc $DB_Scripts/mysql_table_create.sql
The mysql command would fail, complaining of what appeared to be either incorrect arguments or incorrect syntax; it didn't specify which, and only printed the usage help for mysql.
My question: Why does this dynamic statement assignment fail?
Example:
Function call:
mysqlfunc $PATH_TO_FILE/example.sql
Mysql command executed:
mysql -A -h<host> -p<password> -P<port> -u<user> <database> < <path_to_file>/example.sql
Result:
Usage for mysql printed to the terminal
You can't put shell metacharacters like < inside variables and have them act as syntax. That's not how the parser works: the shell doesn't see the < in the variable's expansion as a redirection operator; it sees it as a literal string, which mysql then receives as an argument (hence the usage message).
This is part of what Bash FAQ 050 covers.
You can do something like:

if [ -f "$1" ]; then
    cat "$1" | mysql $args -h<host> -p<password> -P<port> -u<user> <database>
else
    mysql $args -h<host> -p<password> -P<port> -u<user> <database> $query
fi

This works with psql as well.
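For completeness, here is a minimal sketch of the function with the redirection written literally at the call site and the extra options collected in a bash array; the <host>, <password>, <port>, <user>, and <database> placeholders are the question's, not real values:

mysqlfunc()
{
    local args=(-A)
    if [ "$1" = "++" ]; then
        shift
        while [ "$1" != "--" ]; do
            args+=("$1")
            shift
        done
        shift
    fi
    if [ -f "$1" ]; then
        # The < is parsed as a redirection here because it appears
        # literally in the command, not inside an expanded variable
        mysql "${args[@]}" -h<host> -p<password> -P<port> -u<user> <database> < "$1"
    else
        mysql "${args[@]}" -h<host> -p<password> -P<port> -u<user> <database> -e "$*"
    fi
}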

GNU parallel does not recognize user-defined functions

I cannot get GNU parallel to run a custom function that I built.
My function is:
function run_cuffLinks() {
    inputBAM="${HOME}/Analyses/P_miniata/CleanUpPipeline/TH_${1}/${1}.realigned.bam"
    if [[ ! -f $inputBAM ]]; then echo "$inputBAM could not be found"; exit 1; fi
    WORKING_DIR="${HOME}/data/CuffLinks/TH_$1"
    if [[ ! -d $WORKING_DIR ]]; then mkdir -p "$WORKING_DIR"; fi
    REF="${HOME}/ReferenceSequences/GATK_pmin.scaf.fa"
    if [[ ! -f $REF ]]; then echo "$REF could not be found"; exit 1; fi
    GTF_FILE="${HOME}/ReferenceSequences/genes.sorted.gff3"
    if [[ ! -f $GTF_FILE ]]; then echo "$GTF_FILE could not be found"; exit 1; fi
    cufflinks \
        --output-dir "$WORKING_DIR" \
        --num-threads 2 \
        --frag-len-mean 100 \
        --GTF-guide "$GTF_FILE" \
        --frag-bias-correct "$REF" \
        -L "HH" \
        "$inputBAM"
}
When I enter:
parallel --no-notice -j+2 run_cuffLinks {} ::: sample1 sample2 sample3
I get the output:
/bin/bash: run_cuffLinks: command not found
/bin/bash: run_cuffLinks: command not found
/bin/bash: run_cuffLinks: command not found
If I include a '$' symbol in front of the function name, I get:
/bin/bash: sample1: command not found
/bin/bash: sample2: command not found
/bin/bash: sample3: command not found
I have also tried using the --pipe, --recend, and --rrs options, but without a positive result.
Is GNU parallel not able to process user-defined functions?
You do not write whether you have walked through the tutorial (man parallel_tutorial). It shows that you must export -f the function, and since you do not mention that, I believe you might have forgotten it:
export -f run_cuffLinks
parallel ...
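For example, with the question's invocation:

export -f run_cuffLinks
parallel --no-notice -j+2 run_cuffLinks {} ::: sample1 sample2 sample3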
Since version 20180522 you can also use env_parallel:
env_parallel --session
[define functions and variables here that you want parallel to see]
# Use env_parallel like you would parallel
env_parallel run_cuffLinks ...
PS: Use --bibtex once to avoid --no-notice in the future.

Bourne shell function return variable always empty

The following Bourne shell script, given a path, is supposed to test each component of the path for existence; then set a variable comprising only those components that actually exist.
#! /bin/sh
set -x # for debugging

test_path() {
    path=""
    echo $1 | tr ':' '\012' | while read component
    do
        if [ -d "$component" ]
        then
            if [ -z "$path" ]
            then path="$component"
            else path="$path:$component"
            fi
        fi
    done
    echo "$path" # this prints nothing
}
paths=/usr/share/man:\
/usr/X11R6/man:\
/usr/local/man
MANPATH=`test_path $paths`
echo $MANPATH
When run, it always prints nothing. The trace using set -x is:
+ paths=/usr/share/man:/usr/X11R6/man:/usr/local/man
++ test_path /usr/share/man:/usr/X11R6/man:/usr/local/man
++ path=
++ echo /usr/share/man:/usr/X11R6/man:/usr/local/man
++ tr : '\012'
++ read component
++ '[' -d /usr/share/man ']'
++ '[' -z '' ']'
++ path=/usr/share/man
++ read component
++ '[' -d /usr/X11R6/man ']'
++ read component
++ '[' -d /usr/local/man ']'
++ '[' -z /usr/share/man ']'
++ path=/usr/share/man:/usr/local/man
++ read component
++ echo ''
+ MANPATH=
+ echo
Why is the final echo $path empty? The $path variable within the while loop was incrementally set for each iteration just fine.
The pipe runs all commands involved in sub-shells, including the entire while ... loop. Therefore, all changes to variables in that loop are confined to the sub-shell and invisible to the parent shell script.
One way to work around that is putting the while ... loop and the echo into a list that executes entirely in the sub-shell, so that the modified variable $path is visible to echo:
test_path()
{
    echo "$1" | tr ':' '\n' | {
        while read component
        do
            if [ -d "$component" ]
            then
                if [ -z "$path" ]
                then
                    path="$component"
                else
                    path="$path:$component"
                fi
            fi
        done
        echo "$path"
    }
}
However, I suggest using something like this:
test_path()
{
    echo "$1" | tr ':' '\n' |
    while read dir
    do
        [ -d "$dir" ] && printf "%s:" "$dir"
    done |
    sed 's/:$/\n/'
}
... but that's a matter of taste.
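Either version is used exactly as in the question:

MANPATH=`test_path /usr/share/man:/usr/X11R6/man:/usr/local/man`
echo $MANPATH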
Edit: As others have said, the behaviour you are observing depends on the shell. The POSIX standard describes pipelined commands as run in sub-shells, but that is not a requirement:
Additionally, each command of a multi-command pipeline is in a subshell environment; as an extension, however, any or all commands in a pipeline may be executed in the current environment.
Bash runs them in sub-shells, but some shells run the last command in the context of the main script, so that only the preceding commands in the pipeline are run in sub-shells.
This should work in a Bourne shell that understands functions (and would work in Bash and other shells too):
test_path() {
    echo $1 | tr ':' '\012' |
    {
        path=""
        while read component
        do
            if [ -d "$component" ]
            then
                if [ -z "$path" ]
                then path="$component"
                else path="$path:$component"
                fi
            fi
        done
        echo "$path" # prints the accumulated path
    }
}
The inner set of braces groups the commands into a unit; path is still only set in a subshell, but the echo runs in that same subshell and can see it.
Why is the final echo $path empty?
Until recently, Bash would give all components of a pipeline their own process, separate from the shell process in which the pipeline is run.
Separate process == separate address space, and no variable sharing.
In ksh93 and in recent Bash (which may need shopt -s lastpipe), the shell will run the last component of a pipeline in the calling shell, so any variables changed inside the loop are preserved when the loop exits.
Another way to accomplish what you want is to make sure that the echo $path is in the same process as the loop, using parentheses:
#! /bin/sh
set -x # for debugging
test_path() {
path=""
echo $1 | tr ':' '\012' | ( while read component
do
[ -d "$component" ] || continue
path="${path:+$path:}$component"
done
echo "$path"
)
}
Note: I simplified the inner if. There was no else, so the test can be replaced with a short-circuit. Also, the two path assignments can be combined into one, using the ${var:+...} parameter expansion trick.
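A quick illustration of that expansion:

path=""
path="${path:+$path:}/usr/share/man"   # path is now: /usr/share/man
path="${path:+$path:}/usr/local/man"   # path is now: /usr/share/man:/usr/local/man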
Your script works just fine with no change under Solaris 11, and probably also under most commercial Unixes like AIX and HP-UX, because on these OSes the underlying implementation of /bin/sh is provided by ksh. This would also be the case if /bin/sh were backed by zsh.
It likely doesn't work for you because your /bin/sh is implemented by one of bash, dash, mksh, or busybox sh, which all process each component of a pipeline in a subshell, while ksh and zsh both keep the last element of a pipeline in the current shell, saving an unnecessary fork.
It is possible to "fix" your script for it to work when sh is provided by bash by adding this line somewhere before the pipeline:
shopt -s lastpipe
or better, if you want to keep portability:
command -v shopt > /dev/null && shopt -s lastpipe
This will keep the script working for ksh, and zsh but still won't work for dash, mksh or the original Bourne shell.
Note that both bash and ksh behaviors are allowed by the POSIX standard.
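To see the difference lastpipe makes, here is a small bash-only sketch (lastpipe only takes effect when job control is off, which is the default in scripts):

#!/bin/bash
shopt -s lastpipe
n=0
printf '%s\n' a b c | while read -r line; do n=$((n+1)); done
echo "$n"   # prints 3 with lastpipe, 0 without it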

Shell script if condition

I'm writing a script which runs the following command
mysql -u root -e "show databases"
and this will display a list of databases.
If this list doesn't contain a database named "userdb", the script should do the following:
if [ ... ]; then
    echo "error"
    exit
fi
What do I write in the if [ ... ] condition?
You can check with grep whether the database name is listed. grep -q will not print anything to the console but will set the exit status according to the result (the exit status is then checked by if).
if ! mysql -u root -e 'show databases' | grep -q '^userdb$' ; then
    echo error
    exit
fi
About the regular expression: '^' matches the beginning of the line and '$' matches the end of the line (to avoid a false positive for database names containing userdb, e.g. userdb2)
Try this one:
userdb="userdb"
check=`mysql -u root -e "show databases" | grep "$userdb"`
if [ "$check" != "$userdb" ]; then
    echo "error"
    exit
fi
But this will give a wrong result if the line containing the database name includes any other text. Work around that with a stricter regular expression.
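For example, a sketch using grep -x, which matches whole lines only, so extra text on a line can't cause a false match:

userdb="userdb"
check=`mysql -u root -e "show databases" | grep -x "$userdb"`
if [ "$check" != "$userdb" ]; then
    echo "error"
    exit
fi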