How do I search for a particular file name in the entire directory and then delete it in TCL?
The easy way is to use the fileutil::traverse package from tcllib, which safely walks a directory tree and collects matching files. You can then call file delete on those filenames.
Example:
#!/usr/bin/env tclsh
package require fileutil::traverse
# Look for all foo.txt files regardless of directory.
# Use this to get the OS-specific path delimiter instead of a hardcoded / or \\
set pattern [file join * foo.txt]
fileutil::traverse findFoo . -filter [list string match $pattern]
# I suggest a dry run first to make sure your filter is returning just the
# appropriate filename(s).
puts [findFoo files]
# When satisfied, delete for real. Note the {*} expansion: findFoo files
# returns a list, so each filename must become a separate argument to file delete.
# file delete {*}[findFoo files]
Perhaps this is the easiest way, although it is not portable:
puts [exec find . -name $filenameToDelete -print]
If that finds the right files, you can do this:
exec find . -name $filenameToDelete -delete
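If you also want a record of what was removed, GNU find (an assumption; -delete is not POSIX) can print and delete in a single pass:
exec find . -name $filenameToDelete -print -delete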
I have a case with a loop. My task is to create JSON files in a loop from CSV data. Unfortunately, when I generate the pk field, the value is empty, which breaks my JSON. This is a subset of my CSV:
table,pk
aaa,nik
aab,ida
aac,idb
aad,idc
aae,idd
aef,ide
...
This is my full code:
#!bin/bash
CSV_LIST="/xxx/table_lists.csv"
DATA=${CSV_LIST}
mkdir sqlconn
cd sqlconn
cat ${DATA} |
while IFS=',' read table pk ; do
PK= echo ${pk} | tr -d '\n'
cat > ./sqlservercon_$table.json << EOF
{"name" :"sqlservercon_$table","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector","topics":"$table",
...
,"pk.fields":" $PK","pk.mode":"record_value","destination.table.format":"db.dbo.$table","errors.tolerance":"all","flush.size":"10000"
}}
EOF
done
So the rendered result gives me this:
{"name" :"sqlservercon_XXX","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector","topics":"XXX",...
,"pk.fields":" ","pk.mode":"record_value","destination.table.format":"XXX.XXX.XXX","errors.tolerance":"all","flush.size":"10000"
}}
But when I leave the pk field unedited, like this:
...,
"pk.fields":" $pk",
...
it gives me a broken JSON file like this:
...,"pk.fields":" id
",...
Any help is appreciated.
UPDATE
When I check my CSV using cat -v table_lists.csv, the last column has a ^M (carriage return) character that ruins the JSON file. But I still don't know how to deal with it.
In line with the comments I gave, the following script works:
#!/bin/bash
cd /home/test
CSV_LIST="/home/test/tableList.csv"
DATA=${CSV_LIST}
# Prepare data file
sed -i "s/\r//g" ${DATA}
# Added for debugging purpose
echo "Creating connection file in JSON for"
# Print file content from 2nd line only
tail --lines=+2 ${DATA} |
while IFS=',' read TABLE PK ; do
# Added for debugging purpose
echo "Table: ${TABLE} and PK: ${PK}"
# Added missing $()
PK_TRIMMED=$(echo ${PK} | tr -d '\n')
cat > ./sqlservercon_${TABLE}.json << EOF
{"name":"sqlservercon_${TABLE}","config":{"connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector","topics":"${TABLE}",...,"pk.fields":"${PK_TRIMMED}","pk.mode":"record_value","destination.table.format":"db.dbo.${TABLE}","errors.tolerance":"all","flush.size":"10000"}}
EOF
done
Okay, after several checks, besides the wrong script I gave here, I investigated the CSV file. I downloaded it directly from Google Sheets; even though it gave me a .csv file, it was not encoded correctly for UNIX/Ubuntu, which is my development environment.
So I decided to do something like this manually:
From the Google spreadsheet, select all the columns I want to use
Create an empty CSV file
Copy and paste the cells into the .csv file
Replace the " " (double space) separators with , (a one-liner for this is sketched below)
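For that last step, something like this sed one-liner could do the replacement automatically (pasted.txt is a hypothetical name for the file holding the pasted cells):
sed 's/  /,/g' pasted.txt > table_lists.csv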
And for the loop, because I want to curl it instead of saving the JSON, I do this:
#!/bin/bash
CSV_LIST="/home/admin/kafka/main/config/tables/table_lists.csv"
DATA=${CSV_LIST}
while IFS=',' read table pk; do
curl -X POST http://localhost:8083/connectors -H 'Content-Type:application/json' -d'{"name" :"sqlservercon_'$table'","config":{...,...,"destination.table.format":"db.dbo.'$table'","errors.tolerance":"all",
"flush.size":"10000"
}}' | jq .
done < ${DATA}
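One caveat: this version reads the CSV as-is, so if the file ever comes from a Windows or Google Sheets export again, the ^M problem returns. A defensive line before the loop (the same fix used in the earlier script) would be:
sed -i 's/\r$//' "${DATA}"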
In my hypothetical folder /hd/log/, I have two dozen folders, and each folder has log files in the format foldername.2017.07.09.log. I have a crontab that gzips the previous log file every night, so there is a new log file with a new name every day.
I am trying to create a dynamic JSON file whose output looks like this:
[
{
"Foldername": "foldername",
"lastmodifiedfile": "/hd/log/foldername/foldername.2017.07.09.log"
},
{
"Foldername": "foldername2",
"lastmodifiedfile": "/hd/log/foldername2/foldername2.2017.07.09.log"
}
]
The bash script should dynamically create an entry for each subfolder name (in case more folders are added or names are changed) and also give a direct link to the last modified file in each.
I already have a PHP program to parse the JSON file, but no sane way to create this JSON file dynamically.
Any help or pointers are appreciated.
printf "%s" "["
for var in $(find /hd/log -mindepth 1 -type d)
do
path=$(ls -1t "$var" | head -1)
echo "$var/$path" | awk -F\/ '{ printf "%s","\n\t{\n\t\t\"Foldername\":\""$(NF-1)"\",\n\t\t\"lastmodifiedfile\":\""$0"\"\n\t},"}'
done
printf "%s" "]"
Here we find all subdirectories of /hd/log in a loop, taking each directory in turn and using ls -1t | head -1 to get its last modified file. The directory and file path are then parsed by awk to produce the desired output. We first set the field delimiter for awk to / with the -F flag, then print the JSON syntax as required, using the next-to-last /-delimited field (NF-1, number of fields minus one) for the directory and the complete line ($0) for the last modified file.
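As written, the last object is followed by a comma, which strict JSON parsers reject. If jq is available (an assumption), a sketch like this emits valid JSON instead, building one object per directory and slurping them all into an array:
find /hd/log -mindepth 1 -type d | while read -r dir; do
  last=$(ls -1t "$dir" | head -1)
  jq -n --arg f "$(basename "$dir")" --arg p "$dir/$last" '{Foldername: $f, lastmodifiedfile: $p}'
done | jq -s .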
I'm using jq (http://stedolan.github.io/jq/) to pull some specific data from some JSON files and convert it to another JSON file eg:
cat data1.json | ./jq '[.["messages"][] | {to: .to, from: .from, body: .body, direction: .direction, date_sent: .date_sent }]' > results1.json
I have 50 JSON files in a directory to do this to. How do I write a bit of shell script to iterate over all 50 files, perform said function, and save out to 50 scrubbed JSON files?
I'm thinking its something along these lines, but need some guidance:
for file in *.json | ./jq | '[.["messages"][] | {to: .to, from: .from, body: .body, direction: .direction, date_sent: .date_sent }]' "$file" "$newfile.json" ; done
Thanks!
I'm not familiar with jq, so there might be some way to get it to process many files in a single invocation. This will work for invoking it once per file though:
#!/bin/bash
for file in *.json; do
./jq '[.["messages"...' < "$file" > "$file.scrubbed"
done
Using cat just to redirect a file into a program's input is redundant; use < instead.
If your input files follow a consistent naming scheme like dataN.json and you want the output files to be called e.g. resultN.json, you could use > "${file/data/result}" instead (though it might not be portable to some non-Bash shells). Watch out that you don't accidentally overwrite a file whose name doesn't contain "data", though. Search for ${parameter/pattern/string} in the Bash manual.
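A quick illustration of that expansion (the filename is hypothetical):
file=data7.json
echo "${file/data/result}"   # prints result7.json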
I'm trying to write a bash script that, among other things, extracts information from a mysql database. I tried the following to extract a file from entry 20:
mysql -se "select file_column_name from table where id=20;" >file.txt
That gave me a file.txt with the file name, not the file contents. How would I get the actual blob into file.txt?
Turn the value in file.txt into a variable and then use it as you need to, i.e.:
blobFile=$(cat file.txt)
echo "----- contents of $blobFile ---------"
cat "$blobFile"
# copy the file somewhere else
scp "$blobFile" user@remote:/path/to/remote/loc/for/blobFile
# look for info in blobfile
grep specialInfo "$blobFile"
# etc ...
Is that what you want/need to do?
I hope this helps.
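If the column actually holds the blob bytes themselves rather than a filename, one option (an assumption about your setup; it writes to the database server's filesystem, and the target file must not already exist) is MySQL's SELECT ... INTO DUMPFILE, which dumps the raw value with no quoting or escaping:
mysql -e "SELECT file_column_name INTO DUMPFILE '/tmp/file.bin' FROM table WHERE id=20;"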