#!/bin/bash
filename='delete'
while read p; do
    jq 'if .tweet | test('\"$p\"'; "i") then . |= . + {vendor: '\"$p\"'} else empty end' sfilter.json
done < $filename
I keep getting this error when the variable is "hello world", but not when it is "hello".
In short, whenever the value contains a space, it causes this error.
But there seems to be no error when I execute the command directly in the shell.
jq: error: syntax error, unexpected $end, expecting QQSTRING_TEXT or QQSTRING_INTERP_START or QQSTRING_END (Unix shell quoting issues?) at <top-level>, line 1:
if .tweet | test("sdas
jq: 1 compile error
But when I run the command directly in the shell it works perfectly. Any ideas?
Edit:
Contents of the input delete file:
sdas adssad
Don't generate a dynamic jq filter using string interpolation. Pass the value via the --arg option.
#!/bin/bash
filename='delete'
while IFS= read -r p; do
    jq --arg p "$p" 'if .tweet | test($p; "i") then . |= . + {vendor: $p} else empty end' sfilter.json
done < "$filename"
Pass the value of p to your jq program like this:
while read p; do
    jq --arg p "$p" 'if .tweet | test($p; "i") then . |= . + {vendor: $p} else empty end' sfilter.json
done < $filename
The error occurs because the string is concatenated incorrectly in the shell.
Try this:
p=value
JQ_program='if .tweet | test("'"$p"'"; "i") then . |= . + {vendor: "'"$p"'"} else empty end'
echo "$JQ_program"
# result
# if .tweet | test("value"; "i") then . |= . + {vendor: "value"} else empty end
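Then pass the prebuilt program to jq (a sketch, using the same sfilter.json as in the question):
jq "$JQ_program" sfilter.json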
If I'm reading a MySQL binlog, can I get an indication of which statements occur in the same transaction?
There's nothing built-in yet, but perhaps this page will provide some help. They provide an awk script that will parse the binary log and provide transaction details, for row-based replication at least. We don't like link-only answers, so I'll post the script itself here:
startdate="2015-01-12 21:40:00"
stopdate="2015-01-12 21:45:00"
logfile="mysqld-bin.000023"
mysqlbinlog --base64-output=decode-rows -vv --start-datetime="$startdate" --stop-datetime="$stopdate" $logfile | awk \
'BEGIN {s_type=""; s_count=0;count=0;insert_count=0;update_count=0;delete_count=0;flag=0;} \
{if(match($0, /#15.*Table_map:.*mapped to number/)) {printf "Timestamp : " $1 " " $2 " Table : " $(NF-4); flag=1} \
else if (match($0, /(### INSERT INTO .*..*)/)) {count=count+1;insert_count=insert_count+1;s_type="INSERT"; s_count=s_count+1;} \
else if (match($0, /(### UPDATE .*..*)/)) {count=count+1;update_count=update_count+1;s_type="UPDATE"; s_count=s_count+1;} \
else if (match($0, /(### DELETE FROM .*..*)/)) {count=count+1;delete_count=delete_count+1;s_type="DELETE"; s_count=s_count+1;} \
else if (match($0, /^(# at) /) && flag==1 && s_count>0) {print " Query Type : "s_type " " s_count " row(s) affected" ;s_type=""; s_count=0; } \
else if (match($0, /^(COMMIT)/)) {print "[Transaction total : " count " Insert(s) : " insert_count " Update(s) : " update_count " Delete(s) : " \
delete_count "] \n+----------------------+----------------------+----------------------+----------------------+"; \
count=0;insert_count=0;update_count=0; delete_count=0;s_type=""; s_count=0; flag=0} } '
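Not part of the linked script, but a minimal sketch of the idea it relies on: in the decoded row-based log, the row events printed between a BEGIN and the next COMMIT belong to the same transaction, so even a much shorter filter can group statements:
mysqlbinlog --base64-output=decode-rows -vv "$logfile" | awk '
    /^BEGIN/  { printf "--- transaction %d ---\n", ++n }   # a new transaction starts
    /^### (INSERT INTO|UPDATE|DELETE FROM)/ { print }      # decoded row events
    /^COMMIT/ { print "COMMIT" }                            # the transaction ends
'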
I am coming across the following error:
ERROR 1059 (42000) at line 3: Identifier name '#o_acc,o_pos,o_aa1,o_aa2,rsid,acc,pos,aa1,aa2,prediction,pph2_prob,pph2_FPR,pph2_TPR' is too long
here is my code:
#!/bin/sh
MYSQL_ARGS="some ARGS"
DB="$3"
DELIM=";"
CSV="$1"
TABLE="$2"
[ "$CSV" = "" -o "$TABLE" = "" ] && echo "Syntax: $0 csvfile tablename" && exit 1
FIELDS=$(head -1 "$CSV" | sed -e 's/'$DELIM'/` varchar(255),\n`/g' -e 's/\r//g')
FIELDS='`'"$FIELDS"'` varchar(255)'
#echo "$FIELDS" && exit
mysql $MYSQL_ARGS $DB -e "
DROP TABLE IF EXISTS $TABLE;
CREATE TABLE $TABLE ($FIELDS);
LOAD DATA INFILE '$(pwd)/$CSV' INTO TABLE $TABLE
FIELDS TERMINATED BY '$DELIM'
IGNORE 1 LINES
;
"
and a sample of my data file:
#o_acc,o_pos,o_aa1,o_aa2,rsid,acc,pos,aa1,aa2,prediction,pph2_prob,pph2_FPR,pph2_TPR
ENSG00000145888,455,H,N,?,P23415,455,H,N,probablydamaging,0.997,0.0167,0.409
ENSG00000145888,450,R,H,?,P23415,450,R,H,probablydamaging,1,0.00026,0.00018
ENSG00000145888,440,M,I,?,P23415,440,M,I,benign,0,1,1
ENSG00000145888,428,R,H,?,P23415,428,R,H,probablydamaging,1,0.00026,0.00018
ENSG00000145888,428,R,C,?,P23415,428,R,C,probablydamaging,1,0.00026,0.00018
ENSG00000145888,413,R,Q,?,P23415,413,R,Q,probablydamaging,0.993,0.0301,0.696
ENSG00000145888,412,M,L,?,P23415,412,M,L,benign,0.143,0.136,0.923
ENSG00000145888,406,S,C,?,P23415,406,S,C,possiblydamaging,0.658,0.0867,0.865
ENSG00000145888,402,P,L,?,P23415,402,P,L,benign,0,1,1
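For reference, uncommenting the debug line in the script prints the generated column definition; with this header it comes out as one long quoted identifier (sketch):
echo "$FIELDS"
# `#o_acc,o_pos,o_aa1,o_aa2,rsid,acc,pos,aa1,aa2,prediction,pph2_prob,pph2_FPR,pph2_TPR` varchar(255)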
The error message looks like it is telling me I am trying to create a column called:
#o_acc,o_pos,o_aa1,o_aa2,rsid,acc,pos,aa1,aa2,prediction,pph2_prob,pph2_FPR,pph2_TPR
however, as you can see, I am trying to create 13 columns.
Can anyone spot anything wrong with either my data or code?
Try importing the SQL using -f to ignore errors:
-f, --force Continue even if we get an SQL error.
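Applied to the script above, that is just an extra flag on the mysql call; a sketch (it only keeps the client going after the error, it does not change why ERROR 1059 is raised):
mysql -f $MYSQL_ARGS $DB -e "
DROP TABLE IF EXISTS $TABLE;
CREATE TABLE $TABLE ($FIELDS);
"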
I am a complete newbie in Perl. I have this little sub:
sub processOpen {
    my ($filename, $mdbh) = @_;
    my ($qr, $query);
    # parse filename to get the date extension only
    # we will import the data into the table with this extension
    # /home//logs/open.v7.20120710_2213.log
    my (@fileparts) = split(/\./, $filename);
    my (@filedateparts) = split(/_/, $fileparts[2]);
    my ($tableext) = $filedateparts[0];
    $query = "LOAD DATA INFILE '" . $filename . "' INTO TABLE open_" . $tableext . " FIELDS TERMINATED BY '||' LINES TERMINATED BY '\n'
        (open_datetime, open_date, period, tag_id)";
    $qr = $$mdbh->prepare($query);
    $qr->execute(); # causes error (see below)
    $qr->finish();
}
And I'm getting the following error:
DBD::mysql::st execute failed: Can't get stat of '/home/logs/open..v7.20120710_2213.log' (Errcode: 2) at /home/thisfile.pm line 32.
Line 32 is the $qr->execute();
Error code 2 (ENOENT) means the file was not found.
Does your file exist? Note that if you are running the Perl script on a separate host from the MySQL database, the file must be on the database host, not the client host.
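A quick first check, run on the database host rather than the client, using the exact path from the error message (note the doubled dot in open..v7):
ls -l '/home/logs/open..v7.20120710_2213.log'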