I'm trying to write a bash script that, among other things, extracts information from a mysql database. I tried the following to extract a file from entry 20:
mysql -se "select file_column_name from table where id=20;" >file.txt
That gave me a file.txt with the file name, not the file contents. How would I get the actual blob into file.txt?
Turn the value in file.txt into a variable and then use it as you need to? i.e.
blobFile=$(cat file.txt)
echo "----- contents of $blobFile ---------"
cat "$blobFile"
# copy the file somewhere else
scp "$blobFile" user@Remote:/path/to/remote/loc/for/blobFile
# look for info in blobfile
grep specialInfo "$blobFile"
# etc ...
Is that what you want/need to do?
I hope this helps.
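If the column actually stores the blob itself rather than a file name, two common routes may help (a hedged sketch; the table/column names are the question's placeholders, and INTO DUMPFILE requires the FILE privilege plus a server-writable secure_file_priv path):
# server-side: write the raw bytes exactly as stored
mysql -e "select file_column_name into dumpfile '/tmp/file.bin' from table where id=20;"
# client-side: batch mode with --raw to avoid escaping, -N to drop the header
mysql -N -B --raw -e "select file_column_name from table where id=20;" > file.bin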
I am using json2csv to convert multiple json files structured like
{
"address": "0xe9f6191596bca549e20431978ee09d3f8db959a9",
"copyright": "None",
"created_at": "None"
...
}
The problem is that I need to put multiple json files into one csv file.
In my code I iterate through a file of hashes, call curl with each hash, and write the output to a JSON file. Then I use json2csv to convert each JSON to CSV.
mkdir -p curl_outs
{ cat hashes.hash; echo; } | while read h; do
echo "Downloading $h"
curl -L https://main.net955305.contentfabric.io/s/main/q/$h/meta/public/nft > curl_outs/$h.json;
node index.js $h;
json2csv -i curl_outs/$h.json -o main.csv;
done
I use -o to write the JSON out as CSV, but it just overwrites the previous data, so I end up with only one row.
I have used >>, and this does append to the CSV file:
json2csv -i "curl_outs/${h}.json" >> main.csv
But for some reason it appends the data's keys (the header row) to the end of the CSV file.
I've also tried
cat csv_outs/*.csv > main.csv
However I get the same output.
How do I append multiple json files to one main csv file?
It's not entirely clear from your description what's wrong with >>, but it looks like the CSV file may not have a trailing line break, so appending the next file (>>) starts writing directly at the end of the last row and column (cell) of the previous file's data.
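If that's the cause, a small guard (a sketch using the tail -c1 idiom) can add the missing newline before each append:
# append a newline to main.csv only if its last byte isn't one
[ -n "$(tail -c1 main.csv)" ] && echo >> main.csv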
I deal with CSVs almost daily and love the GoCSV tool. Its stack subcommand will do just what the name implies: stack multiple CSVs, one on top of the other.
In your case, you could download each JSON and convert it to an individual (intermediate) CSV. Then, at the end, stack all the intermediate CSVs, then delete all the intermediate CSVs.
mkdir -p curl_outs
{ cat hashes.hash; echo; } | while read h; do
echo "Downloading $h"
curl -L https://main.net955305.contentfabric.io/s/main/q/$h/meta/public/nft > curl_outs/$h.json;
node index.js $h;
json2csv -i curl_outs/$h.json -o curl_outs/$h.csv;
done
gocsv stack curl_outs/*.csv > main.csv;
# I suggest deleting the intermediate CSVs
# rm curl_outs/*.csv
# ...
I changed the last line of your loop to json2csv -i curl_outs/$h.json -o curl_outs/$h.csv; to create those intermediate CSVs I mentioned before. Now, gocsv's stack subcommand can take a list of those intermediate CSVs and give you main.csv.
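If you'd rather not add another tool, a plain-shell sketch works too, assuming every intermediate CSV has an identical header row:
first=1
for f in curl_outs/*.csv; do
  if [ "$first" -eq 1 ]; then
    cat "$f" > main.csv           # keep the header from the first file
    first=0
  else
    tail -n +2 "$f" >> main.csv   # skip the header on the rest
  fi
done
Since tail -n +2 prints a file from its second line onward, only the first file contributes a header row.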
I have a case with a loop. My task is to create JSON files in a loop from CSV data. Unfortunately, when I generate the pk field, the value is empty, which breaks my JSON. This is a subset of my CSV:
table,pk
aaa,nik
aab,ida
aac,idb
aad,idc
aae,idd
aef,ide
...
This is my full code:
#!/bin/bash
CSV_LIST="/xxx/table_lists.csv"
DATA=${CSV_LIST}
mkdir sqlconn
cd sqlconn
cat ${DATA} |
while IFS=',' read table pk ; do
PK= echo ${pk} | tr -d '\n'
cat > ./sqlservercon_$table.json << EOF
{"name" :"sqlservercon_$table","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector","topics":"$table",
...
,"pk.fields":" $PK","pk.mode":"record_value","destination.table.format":"db.dbo.$table","errors.tolerance":"all","flush.size":"10000"
}}
EOF
done
So the rendered result gives me this:
{"name" :"sqlservercon_XXX","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector","topics":"XXX",...
,"pk.fields":" ","pk.mode":"record_value","destination.table.format":"XXX.XXX.XXX","errors.tolerance":"all","flush.size":"10000"
}}
But when I don't edit the pk field, i.e.
...,
"pk.fields":" $pk",
...
it gives me a broken JSON file like this:
...,"pk.fields":" id
",...
Any help is appreciated.
UPDATE
When I check my CSV using cat -v table_lists.csv, the last column has a ^M character that ruins the JSON file. But I still don't know how to deal with it.
With respect to the comments I made, the following script works:
#!/bin/bash
cd /home/test
CSV_LIST="/home/test/tableList.csv"
DATA=${CSV_LIST}
# Prepare data file
sed -i "s/\r//g" ${DATA}
# Added for debugging purpose
echo "Creating connection file in JSON for"
# Print file content from 2nd line only
tail --lines=+2 ${DATA} |
while IFS=',' read TABLE PK ; do
# Added for debugging purpose
echo "Table: ${TABLE} and PK: ${PK}"
# Added missing $()
PK_TRIMMED=$(echo ${PK} | tr -d '\n')
cat > ./sqlservercon_${TABLE}.json << EOF
{"name":"sqlservercon_${TABLE}","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector","topics":"${TABLE}",...,"pk.fields":"${PK_TRIMMED}","pk.mode":"record_value","destination.table.format":"db.dbo.${TABLE}","errors.tolerance":"all","flush.size":"10000"}}
EOF
done
Okay, after several checks, besides the wrong script I gave here, I also investigated the CSV file. I downloaded it directly from Google Sheets; even though it gives me a .csv, it is not encoded correctly for UNIX/Ubuntu, my development environment.
So I decided to do something like this manually:
From the Google spreadsheet, select all the columns I want to use
Create an empty .csv file
Copy-paste the cells into the .csv file
Replace each " " (double space) with ,
And for the loop, because I want to curl it instead of saving the JSON, I do this:
#!/bin/bash
CSV_LIST="/home/admin/kafka/main/config/tables/table_lists.csv"
DATA=${CSV_LIST}
while IFS=',' read table pk; do
curl -X POST http://localhost:8083/connectors -H 'Content-Type:application/json' -d'{"name" :"sqlservercon_'$table'","config":{...,...,"destination.table.format":"db.dbo.'$table'","errors.tolerance":"all",
"flush.size":"10000"
}}' | jq
done < ${DATA}
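If the CSV ever comes straight from Google Sheets again, the header/CR guards from the working script above can be folded into this loop instead of fixing the file by hand (a sketch; the echo is a stand-in for the real curl call):
tail -n +2 "${DATA}" | tr -d '\r' | while IFS=',' read -r table pk; do
  echo "would POST connector sqlservercon_${table} with pk.fields ${pk}"
done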
I am trying to convert JSON files to HTML files so I can later import them into the website. My idea is to run json2table to create a table and then write it to the html directory. But I cannot figure out how to redirect the output of json2table.
for file in $(ls json/*.json); do cat $file | json2table > html/$file.html; done
But this ends with:
bash: html/json/26.json.html: No such file or directory
Can someone help me fix this error? Thank you
Silvio
Your ${file} includes the json directory. When you don't have a directory html/json the redirection will fail.
I think you have a subdir html and stripping the path from ${file} will work:
for file in json/*.json; do
cat "${file}" | json2table > html/"${file#*/}".html
# OR avoid cat with
# json2table < "${file}" > html/"${file#*/}".html
done
As a bonus I removed the ls, added quotes for filenames with spaces and showed in a comment how to avoid cat.
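For reference, ${file#*/} deletes the shortest leading match of */ (the first directory component), e.g.:
file=json/26.json
echo "${file#*/}"              # -> 26.json
echo html/"${file#*/}".html    # -> html/26.json.html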
I want to give the complete script in case people have the same problem/project, or whatever you want to call it.
The first loop generates an HTML table from the JSON files and stores it in an HTML file. The second loop combines the result of the first loop with a template, so that you have a complete page. In the body tag, the include placeholder is searched for, removed, and filled with the table.
#!/bin/bash
jdata="json"
hdata="html"
template="tpl/tpl.html"
tmp="tmp"
# convert json to html
# https://stackoverflow.com/questions/51126226/combine-for-loop-with-cat-and-json2html/51126732#51126732
for file in $jdata/*.json;
do
# php
#php json.php $file > tmp/${file#*/}.html
# ruby
json2table < "${file}" > $hdata/"${file#*/}".html
done
# write html file
# https://unix.stackexchange.com/questions/32908/how-to-insert-the-content-of-a-file-into-another-file-before-a-pattern-marker
for file in html/*.html;
do
# extract Project Number for newfile
filename="$(basename -- "$file")"
extension="${filename#*.}"
filename="${filename%.*}"
# save the project number
NO="$(grep "No" $jdata/$filename | sed 's/[^0-9]*//g')"
OUT="$NO.html"
# write new html file
sed 's/include/$x/g' "$template" | x="$(<$file)" envsubst '$x' > $hdata/$OUT
done
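The sed | envsubst line is the trick worth noting: sed rewrites the literal word include to the string $x, and envsubst '$x' then expands only that variable, which carries the table file's contents. In isolation (a toy sketch with throwaway file names):
echo '<body>include</body>' > tpl.html
echo '<table><tr><td>demo</td></tr></table>' > table.html
sed 's/include/$x/g' tpl.html | x="$(<table.html)" envsubst '$x'
# -> <body><table><tr><td>demo</td></tr></table></body>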
Thank you for the help & have a nice day
Silvio
Your ls command returns the directory as well as the file name.
Ex. json/1.json json/2.json json/3.json
So do:
for file in $(/bin/ls json/*.json)
do
cat $file | json2table >html/$(basename $file).html
done
I always use the full path for ls in such conditions since I want to make sure I do not use any alias that might have been defined on it.
basename removes the directory out of $file
I am currently working on a bash script where I must download files from our mySQL database, host them somewhere different, then update the database with the new location for the image. The last portion is my problem area, creating the array full of filenames and iterating through them, replacing the file names in the database as we go.
For whatever reason I keep getting these kinds of errors:
not found/X2b6qZP.png: 1: /xxx/images/X2b6qZP.png: ?PNG /xxx/images/X2b6qZP.png: 2: /xxx/images/X2b6qZP.png: : not found
/xxx/images/X2b6qZP.png: 1: /xxx/images/X2b6qZP.png: Syntax error: word unexpected (expecting ")")
files=$($DOWNLOADDIRECTORY/*)
files=$(${files[@]##*/})
# Iterate through the file names in the download directory, and assign the new values to the detail table.
for file in "${files[@]}"
do
mysql -h ${HOST} -u ${USER} -p${PASSWORD} ${DBNAME} "UPDATE crm_category_detail SET detail_value = 'http://xxx.xxx.x.xxx/img/$file' WHERE detail_value LIKE '%imgur.com/$file'"
done
You are trying to execute a glob as a command. The syntax to use arrays is array=(tokens):
files=("$DOWNLOADDIRECTORY"/*)
files=("${files[#]##*/}")
You are also trying to run your script with sh instead of bash.
Do not run sh file or use #!/bin/sh. Arrays are not supported in sh.
Instead use bash file or #!/bin/bash.
What's going on right here?
files=$($DOWNLOADDIRECTORY/*)
I don't think this is doing what you think it is doing.
According to this answer, you want to omit the first $ to get an array of files.
files=($DOWNLOADDIRECTORY/*)
I just wrote a sample script
#!/bin/sh
alist=(/*)
printf '%s\n' "${alist[@]}"
Output
/bin
/boot
/data
/dev
/dist
/etc
/home
/lib
....
Your assignments are not creating arrays. You need arrayname=( values for array ) as the notation. Hence:
files=( "$DOWNLOADDIRECTORY"/* )
files=( "${files[#]##*/}" )
The first line will give you all the names in the directory specified by $DOWNLOADDIRECTORY. The second carefully removes the directory prefix.
I've used spaces after ( and before ) for clarity; the shell neither requires nor objects to them. I used double quotes around the variable name and expansions to keep things sane when names do contain spaces etc.
Although it isn't immediately obvious why you might do this, its advantage over many alternatives is that it preserves spaces etc in file names.
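A quick demo of that, with hypothetical directory and file names just for illustration:
DOWNLOADDIRECTORY=demo
mkdir -p demo && touch demo/'a b.png' demo/c.png
files=( "$DOWNLOADDIRECTORY"/* )
files=( "${files[@]##*/}" )
printf '<%s>\n' "${files[@]}"
# -> <a b.png>
# -> <c.png>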
You could just loop directly over the files:
for file in "$DOWNLOADDIRECTORY"/*; do
file="${file##*/}" # or file=$(basename "$file")
# MySQL stuff
done
Some quoting added in case of spaces in paths.
Hello and thank you for any help you can provide
I have my Apache2 web server set up so that when I go to a specific link, it will run and display the output of a shell script stored on my server. I need to output the results of an SVN command (svn log). If I simply run 'svn log -q' (-q for quiet), I get the log entries separated by rows of exactly 72 dashes. I need to be able to take these dashes and turn them into an HTML line break (<br/>).
Basically I need the shell script to take the output of the 'svn log -q' command, search and replace every chunk of 72 dashes with an html line break, and then echo the output.
Is this at all possible?
I'm somewhat a noob at shell scripting, so please excuse any mess-ups.
Thank you so much for your help.
svn log -q | sed -e 's,-\{72\},<br/>,'
(Note the escaped braces: basic sed regexes need \{72\} for the interval, or use sed -E 's,-{72},<br/>,'.)
If you want to write it in the script this might help:
${string//substring/replacement}
Replace all matches of $substring with $replacement.
stringZ=abcABC123ABCabc
echo ${stringZ/abc/xyz} # xyzABC123ABCabc
# Replaces first match of 'abc' with 'xyz'.
echo ${stringZ//abc/xyz} # xyzABC123ABCxyz
# Replaces all matches of 'abc' with 'xyz'.
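Applied to the svn task, a sketch along those lines (building the 72-dash string with printf just for readability):
log=$(svn log -q)
dashes=$(printf '%.0s-' {1..72})   # a string of exactly 72 dashes
echo "${log//$dashes/<br/>}"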