Select mysql query with bash

How can I select a MySQL query with bash so that each column ends up in a separate array value?
I've tried the following command, but it only works if the content is a single word. For example:
id=11, text=hello, important=1
If text holds a whole article, for instance, the code will not work properly. I guess I could use cut -f -d, but if "text" contains special characters that won't work either.
while read -ra line; do
    id=$(echo "${line[1]}")
    text=$(echo "${line[2]}")
    important=$(echo "${line[3]}")
    echo "id: $id"
    echo "text: $text"
    echo "important: $important"
done < <(mysql -e "${selectQ}" -u${user} -p${password} ${database} -h ${host})

Bash by default splits strings at any whitespace character. First you need an unambiguous column separator in your output; you can use mysql --batch to get tab-separated output.
From the MySQL man page:
--batch, -B
Print results using tab as the column separator, with each row on a new line. With this option, mysql does not use the history file.
Batch mode results in nontabular output format and escaping of special characters. Escaping may be disabled by using raw mode; see the description for the --raw option
You want the result to be escaped, so don't use --raw, otherwise a tab character in your result data will break the loop again.
To skip the first row (column names), you can additionally use the --skip-column-names option.
Now you can walk through each line and split it by tab character.
You can force bash to split by tab only by overriding the IFS variable (Internal Field Separator) temporarily.
Example
# myread prevents collapsing of empty fields
myread() {
    local input
    IFS= read -r input || return $?
    while (( $# > 1 )); do
        IFS= read -r "$1" <<< "${input%%[$IFS]*}"
        input="${input#*[$IFS]}"
        shift
    done
    IFS= read -r "$1" <<< "$input"
}
# loop through the result rows
while IFS=$'\t' myread id name surname url created; do
    echo "id: ${id}"
    echo "name: ${name}"
    echo "surname: ${surname}"
    echo "url: ${url}"
    echo "created: ${created}"
done < <(mysql --batch --skip-column-names -e "SELECT id, name, surname, url, created FROM users")
The myread function: all credits to this answer by Stefan Kriwanek.
Attention:
You need to be very careful with quotes and variable delimiters.
If you echo an array element as $row[0] without the curly braces, you get the wrong result: bash expands $row (which is only the first element) and then appends the literal string [0].
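A quick illustration (row here is just an example array name, not one from the code above):
row=(alpha beta)
echo $row[0]       # expands $row to the first element, then appends "[0]": typically prints alpha[0]
echo "${row[0]}"   # prints alpha, as intended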
EDIT
You still have a problem when a column returns an empty string, because tab is an IFS whitespace character and any run of it counts as a single separator:
row1\t\trow3 will produce [row1,row3] instead of [row1,,row3]
I found a very nice approach to fix this and have updated the example above.
Also, read can separate the input stream directly into variables.
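A small demonstration of the difference (assuming the myread function defined above):
line=$'row1\t\trow3'
IFS=$'\t' read -r a b c <<< "$line"
echo "$a | $b | $c"     # row1 | row3 |       (empty field collapsed)
IFS=$'\t' myread a b c <<< "$line"
echo "$a | $b | $c"     # row1 |  | row3      (empty field preserved)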

Related

Find / Replace / Append JSON String in bash script without using jq

I have a JSON string and need to extract the values inside the square brackets with a bash script and validate them against the expected values. If an expected value exists, leave it as it is; otherwise add the new values into the square brackets.
"hosts": ["unix://","tcp://0.0.0.0:2376"]
I cannot use jq.
Expected:
Verify whether the values "unix://" and "tcp://0.0.0.0:2376" exist for the key "hosts". Add them if they don't.
I tried the following,
$ echo "\"hosts\":[\"unix://\",\"tcp://0.0.0.0:2376\"]" | cut -d: -f2
["unix
$ echo "\"hosts\":[\"unix://\",\"tcp://0.0.0.0:2376\"]" | sed 's/:.*//'
"hosts"
I have tried multiple possibilities with sed & cut but cannot achieve what I expect. I'm a shell script beginner.
How can I achieve this with sed or cut?
You need to detect the presence of "unix://" and "tcp://0.0.0.0:2376" in your string. You can do it like this:
#!/bin/bash
#
string='"hosts": ["unix://","tcp://0.0.0.0:2376"]'
check1=$(echo "$string" | grep -c "unix://")
check2=$(echo "$string" | grep -c "tcp://0.0.0.0:2376")
(( total = check1 + check2 ))
if [[ "$total" -eq 2 ]]
then
echo "they are both in, nothing to do"
else
echo "they are NOT both there, fix variable string"
string='"hosts": ["unix://","tcp://0.0.0.0:2376"]'
fi
grep -c counts the number of lines in which a specific string appears. In your case each string is either present once or not at all, so adding the two counts together will produce 0, 1 or 2. Only when it is equal to 2 is the string correct.
cut extracts part of a string based on a certain delimiter. But it is not typically used to verify whether a string is present; grep does that.
sed has many uses, such as replacing text (with 's///'). But again, grep is the tool that was built to detect strings in other strings (or files).
Now when it comes to adding text, you say that if one of "unix://" or "tcp://0.0.0.0:2376" is missing, add it. Well, that comes back to redefining the whole string with the correct values, so just assign it.
Finally, if you think about it, you want to ensure that string is "hosts": ["unix://","tcp://0.0.0.0:2376"]. So there is no need to verify anything; just hardcode it at the start of your script. The end result will be the same.
Part 2
If you MUST use cut, you could:
#!/bin/bash
#
string='"hosts": ["unix://","tcp://0.0.0.0:2376"]'
firstelement=$(echo "$string" | cut -d',' -f1 | cut -d'"' -f4)
echo "$firstelement"
# will display unix://
secondelement=$(echo "$string" | cut -d',' -f2 | cut -d'"' -f2)
echo "$secondelement"
# will display tcp://0.0.0.0:2376
Then you can use if statements to compare these to your desired values (see the sketch below). But note that this approach will fail if there are not at least 2 elements between the [ ]. For example, ["unix://"] breaks the cut -d',' step since there is no ',' character in the string.
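A minimal comparison sketch, assuming the firstelement and secondelement variables extracted above:
if [[ "$firstelement" == "unix://" && "$secondelement" == "tcp://0.0.0.0:2376" ]]
then
    echo "both values present, nothing to do"
else
    # rebuild the string with the expected values
    string='"hosts": ["unix://","tcp://0.0.0.0:2376"]'
fi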
Part 3
If you MUST use sed:
#!/bin/bash
#
string='"hosts": ["unix://","tcp://0.0.0.0:2376"]'
firstelement=$(echo "$string" | sed 's/.*\["\(.*\)",".*/\1/')
echo "$firstelement"
# will output unix://
secondelement=$(echo "$string" | sed 's/.*","\(.*\)"\]/\1/')
echo "$secondelement"
# will output tcp://0.0.0.0:2376
Again here, the main character to work with is the ,.
firstelement explanation
sed 's/.*\["\(.*\)",".*/\1/'
.* anything...
\[" followed by [ and ". Since [ means something to sed, you have to \ it
\(.*\) followed by anything at all (. matches any character, * matches any number of these characters).
"," followed by ",". This only happens for the first element.
.* followed by anything
\1 keep only the characters enclosed between \( and \)
Similarly, for the second element the s/// is modified to keep only what follows ",", up to the closing "] at the end of the string.
Again like with cut above, use if statements to verify if the extracted values are what you wanted.
Again, read my closing comments on the first approach; you might not need all this...

removing commas from numbers in CSV file

I have a file that has many columns and I only need two of those columns. I am getting the columns I need using
cut -f 2-3 -d, file1.csv > file2.csv
The issue I am having is that the first column is an ID, and once it gets past 999 it becomes 1,000, so it is now treated as an extra column. I can't get rid of all commas because I need them to separate the data. Is there a way to use sed to remove only the commas that appear between digits 0-9?
I'd use a real CSV parser, and count backwards from the end of the line:
ruby -rcsv -ne '
row = $_.parse_csv
puts row[-5..-4].to_csv :force_quotes => true
' <<END
999,"someone#example.com","Doe, John","Doe","555-1212","address"
1,234,"email#email.com","name","lastname","phone","address"
END
"someone#example.com","Doe, John"
"email#email.com","name"
This works for the example in the comments:
awk -F'"?,"' '{print $2, $3}' file
The field separator is zero or one " followed by ,". This means that the comma in the first number doesn't count.
To separate the two fields with a comma instead of a space, you can change the OFS variable like this:
awk -F'"?,"' -v OFS=',' '{print $2, $3}' file
Or like this:
awk -F'"?,"' 'BEGIN{OFS=","}{print $2, $3}' file
Alternatively, if you want the quotes as well, you can use printf:
awk -F'"?,"' '{printf "\"%s\",\"%s\"\n", $2, $3}' file
From your comments, it sounds like there is a comma and a space (', ') pattern between tokens.
If this is the case, you can do this easily with sed. The strategy is to first replace all occurrences of ', ' (comma followed by a space) with some unique character sequence (like maybe ||).
's:, :||:g'
From there you can remove all commas:
's:,::g'
Finally, replace the double pipes with comma-space again.
's:||:, :g'
Putting it into one statement:
sed -i -e 's:, :||:g;s:,::g;s:||:, :g' your_odd_file.csv
And a command-line example to try before you buy:
bash$ sed -e 's:, :||:g;s:,::g;s:||:, :g' <<< "1,200,000, hello world, 123,456"
1200000, hello world, 123456
If you are in the unfortunate situation where there is not a space between fields in the CSV - you can attempt to 'fake it' by detecting changes in data type - like where there is a numeric field followed by a text field.
's:,\([^0-9]\):, \1:g' # numeric followed by non-numeric
's:\([^0-9]\),:\1, :g' # non-numeric field followed by something (anything)
You can put this all together into one statement, but you are venturing into dangerous waters here - this will definitely be a one-off solution and should be taken with a large grain of salt.
sed -e 's:,\([^0-9]\):, \1:g;s:\([^0-9]\),:\1, :g' \
-e 's:, :||:g;s:,::g;s:||:, :g' file1.csv > file2.csv
And another example:
bash$ sed -e 's:,\([^0-9]\):, \1:g;s:\([^0-9]\),:\1, :g' \
-e 's:, :||:g;s:,::g;s:||:, :g' <<< "1,200,000,hello world,123,456"
1200000, hello world, 123456

BASH: get data from MySQL where records have spaces

I'm trying to get values from a MySQL database for a later substitution with perl. Everything works fine, but if a record contains a space " " the variables are set incorrectly.
Example if in the database I have:
sede = "street"
fede = "calvin and hobbes"
lede = "12"
the result for the variables will be:
$TAGSEDE = "steet"
$TAGFEDE = "calvin"
$TAGLEDE = "and"
I understand there is something wrong in how $DBDATAF is set, but I can't identify it (English isn't my mother tongue, so some misunderstanding is more than a possibility).
DBDATAF=$(mysql -u$DBUSER -p$DBPASS -se "USE $DBNAME; SELECT sede, fede, lede FROM $DBTABL WHERE cf='$CFPI'")
read TAGSEDE TAGFEDE TAGLEDE <<< $DBDATAF
/usr/bin/perl -p -i -e "s/TAGDATAINSERT/$TAGDATAINSERT/g" $i
Your code breaks down for two reasons:
When $DBDATAF is passed unquoted to the <<< operator, it is word-split and the tabs are lost. To keep them, double quote the variable as shown below.
read separates the input line into words using the IFS special variable. By default, it separates on tab, space or newline. So even though the tabs are now preserved when passed to <<<, the read command also splits on spaces. Setting IFS to a tab makes read split as desired. Putting the IFS assignment and read on one line ensures that IFS returns to its default after read exits.
IFS="$( echo -e '\t' )" read TAGSEDE TAGFEDE TAGLEDE <<< "$DBDATAF"

Store query in array in bash

My script needs to store the result of a query in a structure:
#!/bin/bash
user="..."
psw="..."
database="..."
query="select name, mail from t"
customStructure=$(mysql -u$user -p$psw $database -e "$query";)
I've no idea how to store the array of {name, mail} pairs from the query result.
I need structure like this:
array=[ [name1,mail1] , [name2,mail2], ....., [nameN, mailN] ]
Is there a way to do this in bash?
Bash arrays are initialized like so:
myarray=("hi" 1 "2");
To capture the individual portions of a command's output into an array, we must loop through the output, adding its results to the array. That can be done like so:
for i in `echo "1 2 3 4"`
do
    myarray+=($i)
done
In your example, it looks like you wish to get the output of a MySQL command and store the parts of its output lines into subarrays. I will show you how to capture lines into arrays, and given that, you should be able to figure out how to put subarrays into that yourself.
while read -r line
do
    myarray+=("$line")
done < <(mysql -u${user} -p${psw} ${database} -e "${query}")
It's also worth mentioning that for this kind of MySQL operation, where you don't need output metadata (such as pretty formatting and table names), you can use MySQL's -B option to do 'batch output', as in the sketch below.
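A minimal sketch along those lines, assuming the user/psw/database/query variables defined above; since bash has no nested arrays, the name and mail of row i are kept at index i of two parallel arrays:
names=()
mails=()
# -B gives tab-separated batch output, --skip-column-names drops the header row
while IFS=$'\t' read -r name mail
do
    names+=("$name")
    mails+=("$mail")
done < <(mysql -B --skip-column-names -u"${user}" -p"${psw}" "${database}" -e "${query}")
echo "first row: ${names[0]} <${mails[0]}>"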
Fields within each record can be accessed via read -a, which reads them into an array; the original IFS is saved so it can be restored for the read, and set -f disables globbing for the unquoted array expansions.
#!/bin/bash
user="..."
psw="..."
database="..."
query="select name, mail from t"
OIFS="$IFS" ; IFS=$'\n' ; oset="$-" ; set -f
while IFS="$OIFS" read -a line
do
echo ${line[0]}
echo ${line[1]}
done < <(mysql -u${user} -p${psw} ${database} -e "${query}")

Escaping MYSQL command lines via Bash Scripting

PHP has mysql_real_escape_string() to correctly escape any characters that might cause problems. What is the best way to mimic this functionality for BASH?
Is there any way to do prepared MySQL statements using bash? That seems like it would be the best way.
Most of my variables won't (shouldn't) have special characters, however I give the user complete freedom for their password. It may include characters like ' and ".
I may be doing multiple SQL statements so I'll want to make a script that takes in parameters and then runs the statement. This is what I have so far:
doSQL.sh:
#!/bin/sh
SQLUSER="root"
SQLPASS="passwor339c"
SQLHOST="localhost"
SQL="$1"
SQLDB="$2"
if [ -z "$SQL" ]; then echo "ERROR: SQL not defined"; exit 1; fi
if [ -z "$SQLDB" ]; then SQLDB="records"; fi
echo "$SQL" | mysql -u$SQLUSER -p$SQLPASS -h$SQLHOST $SQLDB
and an example using said command:
example.sh:
PASSWORD=$1
doSQL "INSERT INTO active_records (password) VALUES ('$PASSWORD')"
Obviously this would fail if the password contained a single quote.
In Bash, printf can do the escaping for you:
$ a=''\''"\;:#[]{}()|&^$#!?, .<>abc123'
$ printf -v var "%q" "$a"
$ echo "$var"
\'\"\\\;:#\[\]\{\}\(\)\|\&\^\$#\!\?\,\ .\<\>abc123
I'll leave it to you to decide if that's aggressive enough.
This seems like a classic case of using the wrong tool for the job.
You've got a lot of work ahead of you to implement the escaping done by mysql_real_escape_string() in bash. Note that mysql_real_escape_string() actually delegates the escaping to the MySQL library which takes into account the connection and database character sets. It's called "real" because its predecessor mysql_escape_string() did not take the character set into consideration, and could be tricked into injecting SQL.
I'd suggest using a scripting language that has a MySQL library, such as Ruby, Python, or PHP.
If you insist on bash, then use the MySQL prepared statement syntax (a sketch follows).
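A minimal sketch of that idea, assuming the doSQL helper from the question and MySQL 5.6+ for FROM_BASE64; the value is passed as base64 so no SQL metacharacters survive, and the prepared statement keeps it out of the statement text entirely:
PASSWORD=$1
B64=$(printf %s "$PASSWORD" | base64 -w0)
doSQL "SET @p = FROM_BASE64('$B64');
PREPARE stmt FROM 'INSERT INTO active_records (password) VALUES (?)';
EXECUTE stmt USING @p;
DEALLOCATE PREPARE stmt;"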
Nothing can break out of the following construct, no matter what quotes the input contains:
quoteSQL() {
    printf "FROM_BASE64('%s')" "$(echo -n "$1" | base64 -w0)"
}
PASSWORD=$1
doSQL "INSERT INTO active_records (password) VALUES ($(quoteSQL "$PASSWORD"));"
# I would prefer piping
printf 'INSERT INTO active_records (password) VALUES (%s);\n' $(quoteSQL "$PASSWORD") | doSQL
mysql_real_escape_string() of course only escapes a single string literal to be quoted, not a whole statement. You need to be clear what purpose the string will be used for in the statement. According to the MySQL manual section on string literals, for inserting into a string field you only need to escape single and double quotation marks, backslashes and NULs. However, a bash string cannot contain a NUL, so the following should suffice:
#escape for MySQL single string
PASSWORD=${PASSWORD//\\/\\\\}
PASSWORD=${PASSWORD//\'/\\\'}
PASSWORD=${PASSWORD//\"/\\\"}
If you will be using the string after a LIKE, you will also probably want to escape % and _.
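For example, in the same parameter-expansion style as above (only needed when the value ends up inside a LIKE pattern):
#escape for MySQL LIKE patterns
PASSWORD=${PASSWORD//%/\\%}
PASSWORD=${PASSWORD//_/\\_}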
Prepared statements are another possibility. And make sure you don't use echo -e in your bash.
See also https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet
This will escape apostrophes
a=$(echo "$1" | sed s/"'"/"\\\'"/g)
Please note though that mysql_real_escape_string also escapes \x00, \n, \r, \, " and \x1a. Be sure to escape these for full security.
To escape \x00 for example (GNU sed understands the \x00 escape; the replacement here produces MySQL's \0 escape sequence):
a=$(echo "$1" | sed s/"\x00"/"\\\\0"/g)
With a bit of effort you can probably escape these using one sed command.
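One possible combined command for the quote and backslash cases (a sketch only; it does not handle NUL, \n, \r or \x1a, and the backslash substitution must come first so the backslashes it adds are not escaped again):
a=$(echo "$1" | sed -e 's/\\/\\\\/g' -e "s/'/\\\\'/g" -e 's/"/\\"/g')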
Sure, why not just use the real thing?
A script, anywhere, such as
~/scripts/mysqli_real_escape.php
#!/bin/php
<?php
$std_input_data = '';
$mysqli = new mysqli('localhost', 'username', 'pass', 'database_name');
if( ftell(STDIN) !== false ) $std_input_data = stream_get_contents(STDIN);
if( empty($std_input_data) ) exit('No input piped in');
if( mysqli_connect_errno( ) ) exit('Could not connect to database');
fwrite ( STDOUT,
$mysqli->real_escape_string($std_input_data)
);
exit(0);
?>
Next, run from bash terminal:
chmod +x ~/scripts/mysqli_real_escape.php
ln -s ~/scripts/mysqli_real_escape.php /usr/bin/mysqli_real_escape
All set! Now you can use mysqli_real_escape in your bash scripts!
#!/bin/bash
MyString="stringW##)*special characters"
MyString="$(printf "$MyString" | mysqli_real_escape )"
Note: From what I understand, command substitution using "$(cmd ... "$var")" is preferred over using backticks. However, as no further nesting is needed here, either should be fine.
Further Note: When inside command substitution, "$(...)", a new quote context is created. This is why the quotes around variables do not screw up the string.
This is how I did it, where my-file.txt contains spaces, new lines and quotes:
IFS='' content=$(cat my-file.txt)
mysql <flags> -e "update table set column = $(echo ${content@Q} | cut -c 2-) where something = 123"
Here are a couple Bash functions I wrote, grouped into a library.
It provides methods for proper quoting/escaping strings and identifiers:
##### db library functions #####

# Executes SQL Queries on localhost's MySQL server
#
# @Env
# $adminDBUser: The database user
# $adminDBPassword: The database user's password
#
# @Params
# $@: Optional MySQL arguments
#
# @Output
# >&1: The MySQL output stream
db::execute() {
    # Uncomment below to debug
    #tee --append debug.sql |
    mysql \
        --batch \
        --silent \
        --user="${adminDBUser:?}" \
        --password="${adminDBPassword:?}" \
        --host=localhost \
        "$@"
}
# Produces a quoted string suitable for inclusion in SQL statements.
#
# @Params
# $1: The string to be quoted
#
# @Output
# >&1: The quoted string suitable for inclusion in SQL statements
db::quoteString() {
    local -- string="${1:?}"
    local -- base64String && base64String=$(printf %s "${string}" | base64)
    db::execute <<< "SELECT QUOTE(FROM_BASE64('${base64String}'));"
}
# Produces a quoted identifier suitable for inclusion in SQL statements.
#
# @Params
# $1: The identifier to be quoted
#
# @Output
# >&1: The quoted identifier suitable for inclusion in SQL statements
db::quoteIdentifier() {
    local -- identifier="${1:?}"
    local -- base64Identifier && base64Identifier=$(printf %s "${identifier}" | base64)
    db::execute <<< "SELECT sys.quote_identifier(FROM_BASE64('${base64Identifier}'))"
}
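A usage sketch, assuming the functions above, the adminDBUser/adminDBPassword environment, and a hypothetical users table with a name column:
tbl=$(db::quoteIdentifier "users")
col=$(db::quoteIdentifier "name")
val=$(db::quoteString "Robert Tables")
db::execute <<< "INSERT INTO ${tbl} (${col}) VALUES (${val});"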
This will work:
echo "John O'hara" | php -R 'echo addslashes($argn);'
To pass it to a variable:
name=$(echo "John O'hara" | php -R 'echo addslashes($argn);')