grep JSON value of a key name (busybox without option -P)

I found tons of threads discussing "how to grep JSON values", but unfortunately they are useless for me and for everyone else using grep from BusyBox (embedded Linux). This grep version doesn't have the "-P" (Perl regexp) option; only "-E" (extended regexp) is available.
BusyBox v1.20.2 () multi-call binary.
Usage: grep [-HhnlLoqvsriwFE] [-m N] [-A/B/C N] PATTERN/-e PATTERN.../-f FILE [FILE]...
Search for PATTERN in FILEs (or stdin)
-H Add 'filename:' prefix
-h Do not add 'filename:' prefix
-n Add 'line_no:' prefix
-l Show only names of files that match
-L Show only names of files that don't match
-c Show only count of matching lines
-o Show only the matching part of line
-q Quiet. Return 0 if PATTERN is found, 1 otherwise
-v Select non-matching lines
-s Suppress open and read errors
-r Recurse
-i Ignore case
-w Match whole words only
-x Match whole lines only
-F PATTERN is a literal (not regexp)
-E PATTERN is an extended regexp
-m N Match up to N times per file
-A N Print N lines of trailing context
-B N Print N lines of leading context
-C N Same as '-A N -B N'
-e PTRN Pattern to match
-f FILE Read pattern from file
I have a JSON example:
{
    "one": "apple",
    "two": "banana"
}
Now I want to extract the value, e.g. "apple", from the key "one".
grep -E '".*?"' file.json
This is just an example of how it could look.
And by the way: how do I access groups from a regex?
I would be grateful for any help or alternatives.
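For reference: BusyBox grep's -E (ERE) supports neither lazy quantifiers like .*? nor printing capture groups, but sed can do the group extraction. A minimal BusyBox-compatible sketch, assuming the key/value pair sits on its own line as in the sample file:
grep -o '"one": *"[^"]*"' file.json | sed 's/^"one": *"\(.*\)"$/\1/'
This prints apple: grep -o isolates the matching part of the line, and sed's \(...\)/\1 plays the role of a capture group.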

With busybox awk:
busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]]+", "", $2); print $2}'
-F '[:,]' sets the field separator as : or ,
/"one"/ {gsub("[[:blank:]]+", "", $2); print $2} macthes if the line contains "one", if so strips off all horizontal whitespace(s) from second field and then printing the field
If you want to strip off the quotes too:
busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]\"]+", "", $2); print $2}'
Example:
$ cat file.json
{
    "one": "apple",
    "two": "banana"
}
$ busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]]+", "", $2); print $2}' file.json
"apple"
$ busybox awk -F '[:,]' '/"one"/ {gsub("[[:blank:]\"]+", "", $2); print $2}' file.json
apple

I like simple commands which enhance readability and are easy to understand. In your file, we first have to remove the whitespace so the string can be matched; for that I usually prefer sed. After that we can use awk to find the match:
sed -r 's/(\t|\s|,)//g' file.json | awk -F: '$1=="\"one\"" {print $2}'
It will return:
"apple"
Note: I removed the comma (,) present at the end of the line. If you need the comma in the output too, use the command below.
sed -r 's/(\t|\s)//g' file.json | awk -F: '$1=="\"one\"" {print $2}'
It will return:
"apple",

The awk solutions won't work if the JSON is on a single line, for example:
{"one":"apple","two":"banana"}
The following sed will do:
busybox sed -n 's/.*"one":\([^}, ]*\).*/\1/p' file.json
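For example, against the single-line sample (the quotes are kept, since they are part of what \([^}, ]*\) captures):
$ busybox sed -n 's/.*"one":\([^}, ]*\).*/\1/p' file.json
"apple"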

How do I use shell variables in an awk script?

I found some ways to pass external shell variables to an awk script, but I'm confused about ' and ".
First, I tried with a shell script:
$ v=123test
$ echo $v
123test
$ echo "$v"
123test
Then tried awk:
$ awk 'BEGIN{print "'$v'"}'
123test
$ awk 'BEGIN{print '"$v"'}'
123
Why the difference?
Lastly I tried this:
$ awk 'BEGIN{print " '$v' "}'
 123test
$ awk 'BEGIN{print ' "$v" '}'
awk: cmd. line:1: BEGIN{print
awk: cmd. line:1: ^ unexpected newline or end of string
I'm confused about this.
Getting shell variables into awk
may be done in several ways. Some are better than others. This should cover most of them.
Using -v (The best way, most portable)
Use the -v option: (P.S. use a space after -v or it will be less portable. E.g., awk -v var= not awk -vvar=)
variable="line one\nline two"
awk -v var="$variable" 'BEGIN {print var}'
line one
line two
This should be compatible with most awk versions, and the variable is available in the BEGIN block as well.
If you have multiple variables:
awk -v a="$var1" -v b="$var2" 'BEGIN {print a,b}'
Warning: as Ed Morton writes, escape sequences will be interpreted, so \t becomes a real tab and not the literal characters \t, if that is what you are searching for. This can be avoided by using ENVIRON[] or ARGV[] instead.
P.S. If you have a vertical bar or other regexp metacharacters as separator, like |?( etc., they must be double escaped. For example, three vertical bars ||| becomes -F'\\|\\|\\|'. You can also use -F"[|][|][|]".
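For instance, a quick sketch of the triple-bar case:
$ echo 'a|||b|||c' | awk -F'\\|\\|\\|' '{print $2}'
b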
Example of getting data from a program/function into awk (here date is used):
awk -v time="$(date +"%F %H:%M" -d '-1 minute')" 'BEGIN {print time}'
Example of testing the contents of a shell variable as a regexp:
awk -v var="$variable" '$0 ~ var{print "found it"}'
Variable after code block
Here we get the variable after the awk code. This will work fine as long as you do not need the variable in the BEGIN block:
variable="line one\nline two"
echo "input data" | awk '{print var}' var="${variable}"
or
awk '{print var}' var="${variable}" file
Adding multiple variables:
awk '{print a,b,$0}' a="$var1" b="$var2" file
In this way we can also set different Field Separator FS for each file.
awk 'some code' FS=',' file1.txt FS=';' file2.ext
Variable after the code block will not work for the BEGIN block:
echo "input data" | awk 'BEGIN {print var}' var="${variable}"
This prints an empty line: command-line assignments are only processed when awk reaches them in the argument list, after BEGIN has already run.
Here-string
A variable can also be added to awk using a here-string, in shells that support them (including Bash):
variable="test"
awk '{print $0}' <<< "$variable"
test
This is the same as:
printf '%s' "$variable" | awk '{print $0}'
P.S. this treats the variable as a file input.
ENVIRON input
As TrueY writes, you can use ENVIRON to print environment variables.
Setting and exporting a variable before running awk, you can print it out like this:
X=MyVar
export X
awk 'BEGIN{print ENVIRON["X"],ENVIRON["SHELL"]}'
MyVar /bin/bash
Note that the variable must be exported (or set on awk's command line itself) to show up in ENVIRON.
ARGV input
As Steven Penny writes, you can use ARGV to get the data into awk:
v="my data"
awk 'BEGIN {print ARGV[1]}' "$v"
my data
To get the data into the code itself, not just the BEGIN:
v="my data"
echo "test" | awk 'BEGIN{var=ARGV[1];ARGV[1]=""} {print var, $0}' "$v"
my data test
Variable within the code: USE WITH CAUTION
You can use a variable within the awk code, but it's messy and hard to read, and as Charles Duffy points out, this version may also be a victim of code injection. If someone adds bad stuff to the variable, it will be executed as part of the awk code.
This works by extracting the variable within the code, so it becomes a part of it.
If you want to make an awk that changes dynamically with use of variables, you can do it this way, but DO NOT use it for normal variables.
variable="line one\nline two"
awk 'BEGIN {print "'"$variable"'"}'
line one
line two
Here is an example of code injection:
variable='line one\nline two" ; for (i=1;i<=1000;++i) print i"'
awk 'BEGIN {print "'"$variable"'"}'
line one
line two
1
2
3
.
.
1000
You can add lots of commands to awk this way. You can even make it crash with invalid commands.
One valid use of this approach, though, is when you want to pass a symbol to awk to be applied to some input, e.g. a simple calculator:
$ calc() { awk -v x="$1" -v z="$3" 'BEGIN{ print x '"$2"' z }'; }
$ calc 2.7 '+' 3.4
6.1
$ calc 2.7 '*' 3.4
9.18
There is no way to do that using an awk variable populated with the value of a shell variable; you NEED the shell variable to expand to become part of the text of the awk script before awk interprets it. (See the comment below by Ed M.)
Extra info:
Use of double quote
It's always good to double-quote a variable as "$variable"; if not, multiple lines will be joined as one long single line.
Example:
var="Line one
This is line two"
echo $var
Line one This is line two
echo "$var"
Line one
This is line two
Other errors you can get without double quote:
variable="line one\nline two"
awk -v var=$variable 'BEGIN {print var}'
awk: cmd. line:1: one\nline
awk: cmd. line:1: ^ backslash not last character on line
awk: cmd. line:1: one\nline
awk: cmd. line:1: ^ syntax error
And with single quote, it does not expand the value of the variable:
awk -v var='$variable' 'BEGIN {print var}'
$variable
More info about AWK and variables
Read this faq.
It seems that the good-old ENVIRON awk built-in hash is not mentioned at all. An example of its usage:
$ X=Solaris awk 'BEGIN{print ENVIRON["X"], ENVIRON["TERM"]}'
Solaris rxvt
You could pass in the command-line option -v with a variable name (v) and a value (=) of the environment variable ("${v}"):
% awk -vv="${v}" 'BEGIN { print v }'
123test
Or to make it clearer (with far fewer vs):
% environment_variable=123test
% awk -vawk_variable="${environment_variable}" 'BEGIN { print awk_variable }'
123test
You can utilize ARGV:
v=123test
awk 'BEGIN {print ARGV[1]}' "$v"
Note that if you are going to continue into the body, you will need to adjust
ARGC:
awk 'BEGIN {ARGC--} {print ARGV[2], $0}' file "$v"
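For example, a quick sketch with the sample variable (assuming file contains the single line hello):
$ v=123test
$ printf 'hello\n' > file
$ awk 'BEGIN {ARGC--} {print ARGV[2], $0}' file "$v"
123test hello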
I just adapted Jotne's answer for a for loop:
for i in `seq 11 20`; do host myserver-$i | awk -v i="$i" '{print "myserver-"i" " $4}'; done
I had to insert a date at the beginning of the lines of a log file, and it's done like below:
DATE=$(date +"%Y-%m-%d")
awk '{ print "'"$DATE"'", $0; }' /path_to_log_file/log_file.log
The output can be redirected to another file to save it.
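A safer variant of the same idea, using -v as recommended above (a sketch; note that -v interprets escape sequences, which is harmless for a date string):
DATE=$(date +"%Y-%m-%d")
awk -v date="$DATE" '{ print date, $0 }' /path_to_log_file/log_file.log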
Pro Tip
It can come in handy to create a function that handles this so you don't have to type everything every time. Using the selected solution we get:
awk_switch_columns() {
    awk -v a="$1" -v b="$2" '{ t = $a; $a = $b; $b = t; print }'
}
And use it as...
echo 'a b c d' | awk_switch_columns 2 4
Output:
a d c b

use curl/bash command in jq

I am trying to get a list of URL after redirection using bash scripting. Say, google.com gets redirected to http://www.google.com with 301 status.
What I have tried is:
json='[{"url":"google.com"},{"url":"microsoft.com"}]'
echo "$json" | jq -r '.[].url' | while read line; do
curl -LSs -o /dev/null -w %{url_effective} $line 2>/dev/null
done
So, is it possible for us to use commands like curl inside jq for processing JSON objects?
I want to add the resulting URL to existing JSON structure like:
[
    {
        "url": "google.com",
        "redirection": "http://www.google.com"
    },
    {
        "url": "microsoft.com",
        "redirection": "https://www.microsoft.com"
    }
]
Thank you in advance!
curl is capable of making multiple transfers in a single process, and it can also read command line arguments from a file or stdin, so, you don't need a loop at all, just put that JSON into a file and run this:
jq -r '"-o /dev/null\nurl = \(.[].url)"' file |
curl -sSLK- -w'%{url_effective}\n' |
jq -R 'fromjson | map(. + {redirection: input})' file -
This way only 3 processes will be spawned for the whole task, instead of n + 2, where n is the number of URLs. (Note that this assumes file holds the JSON on a single line, so that the raw lines consumed by input are exactly curl's output lines.)
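To see what is going on, the first jq invocation turns the JSON into a curl config file; with the sample data it emits:
$ jq -r '"-o /dev/null\nurl = \(.[].url)"' file
-o /dev/null
url = google.com
-o /dev/null
url = microsoft.com
curl then reads that config from stdin via -K-, performs both transfers in a single process, and prints one effective URL per line, which the final jq stitches back into the original array.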
I would generate a dictionary with jq per url and slurp those dictionaries into the final list with jq -s:
json='[{"url":"google.com"},{"url":"microsoft.com"}]'
echo "$json" | jq -r '.[].url' | while read url; do
redirect=$(curl -LSs \
-o /dev/null \
-w '%{url_effective}' \
"${url}" 2>/dev/null)
jq --null-input --arg url "${url}" --arg redirect "${redirect}" \
'{url:$url, redirect: $redirect}'
done | jq -s '.'
Alternative (first) solution:
You can output the url and the effective_url as tab separated data and create the output json with jq:
json='[{"url":"google.com"},{"url":"microsoft.com"}]'
echo "$json" | jq -r '.[].url' | while read line; do
prefix="${line}\t"
curl -LSs -o /dev/null -w "${prefix}"'%{url_effective}'"\n" "$line" 2>/dev/null
done | jq -r --raw-input 'split("\t")|{"url":.[0],"redirection":.[1]}'
Both solutions will generate valid json, independently of whatever characters the url/effective_url might contain.
Trying to keep this in JSON all the way is pretty cumbersome. I would simply try to make Bash construct a new valid JSON fragment inside the loop.
So in other words, if $url is the URL and $redirect is where it redirects to, you can do something like
printf '{"url": "%s", "redirection": "%s"}\n' "$url" "$redirect"
to produce JSON output from these strings. So tying it all together
jq -r '.[].url' <<<"$json" |
while read -r url; do
printf '{"url:" "%s", "redirection": "%s"}\n' \
"$url" "$(curl -LSs -o /dev/null -w '%{url_effective}' "$url")"
done |
jq -s '.'
This is still pretty brittle; in particular, if either of the printf input strings could contain a literal double quote, that should properly be escaped (the jq --arg approach shown earlier handles such escaping for you).

Sh Script JSON values from JSON string

I have a file which contains a JSON object as a string:
{"STATUS":[{"STATUS":"S","When":1530779438,"Code":70,"Msg":"CGMiner stats","Description":"cgminer 4.9.0"}],"STATS":[{"CGMiner":"4.9.0","Miner":"9.0.0.5","CompileTime":"Sat May 26 20:42:30 CST 2018","Type":"Antminer Z9-Mini"},{"STATS":0,"ID":"ZCASH0","Elapsed":179818,"Calls":0,"Wait":0.000000,"Max":0.000000,"Min":99999999.000000,"GHS 5s":"16.39","GHS av":16.27,"miner_count":3,"frequency":"750","fan_num":1,"fan1":5760,"fan2":0,"fan3":0,"fan4":0,"fan5":0,"fan6":0,"temp_num":3,"temp1":41,"temp2":40,"temp3":43,"temp2_1":56,"temp2_2":53,"temp2_3":56,"temp_max":43,"Device Hardware%":0.0000,"no_matching_work":0,"chain_acn1":4,"chain_acn2":4,"chain_acn3":4,"chain_acs1":" oooo","chain_acs2":" oooo","chain_acs3":" oooo","chain_hw1":0,"chain_hw2":0,"chain_hw3":0,"chain_rate1":"5.18","chain_rate2":"5.34","chain_rate3":"5.87"}],"id":1}
Now I want to get some values from keys in this object within a sh script.
The following command works, but unfortunately not for all keys:
This works (I get "750"):
grep -o '"frequency": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
but this does not (empty output):
grep -o '"fan_num": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
same with this:
grep -o '"fan1": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
I'm working on a Xilinx OS which has no Python, so "jq" will not work, and grep has no "-P" option. So does anyone have an idea how to work with that? :)
Thanks and best regards,
dave
When you want to do more than just g/re/p you should be using awk, not combinations of greps+pipes, etc. (Incidentally, your greps fail for fan_num and fan1 because those values are numbers, not quoted strings, so "[^"]*" never matches them.)
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
750
$ awk -v tag='fan_num' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
1
$ awk -v tag='fan1' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
5760
The above will work with any awk in any shell on any UNIX box. If you have GNU awk for the 3rd arg to match() and gensub() you can write it a bit briefer:
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)",a) { print gensub(/^"/,"",1,a[1]) }' file
750

Parsing JSON array: 'paste' for bash variables?

At first, I parsed an array JSON file in a loop using jshon, but it takes too long.
To speed things up, I thought I could return every value of id from every index, repeat with word (another key), put these into variables, and finally join them together before echo-ing. I've done something similar with files using paste, but I got an error complaining that the input was too long.
If there is a more efficient way of doing this in bash without too many dependencies, let me know.
I forgot to mention that I want to keep the possibility of colorizing the different parts independently (e.g. a red id). Also, I don't store the JSON; it's piped:
URL="http://somewebsitewithanapi.tld?foo=no&bar=yes"
API=`curl -s "$URL"`
id=`echo $API | jshon -a -e id -u`
word=`echo $API | jshon -a -e word -u | sed 's/bar/foo/'`
red='\e[0;31m' blue='\e[0;34m' x='\e[0m' # bash colors (x resets)
echo "${red}$id${x}. ${blue}$word${x}" #SOMEHOW CONCATENATED SIDE-BY-SIDE,
# PRESERVING THE ABILITY TO COLORIZE THEM INDEPENDENTLY.
My input (piped; not a file):
[
    {
        "id": 1,
        "word": "wordA"
    },
    {
        "id": 2,
        "word": "wordB"
    },
    {
        "id": 3,
        "word": "wordC"
    }
]
Tried:
jshon -a -e id -u
That yields:
1
2
3
And:
jshon -a -e word -u
That yields:
wordA
wordB
wordC
Expected result after joining:
1 wordA
2 wordB
3 wordC
You can use the JSON parser jq:
jq '.[] | "\(.id) \(.word)"' jsonfile
It yields:
"1 wordA"
"2 wordB"
"3 wordC"
If you want to get rid of double quotes, pipe the output to sed:
jq '.[] | "\(.id) \(.word)"' jsonfile | sed -e 's/^.\(.*\).$/\1/'
That yields:
1 wordA
2 wordB
3 wordC
UPDATE: as Martin Neal comments, jq's -r (raw output) flag removes the quotes without the additional sed command: jq -r '.[] | "\(.id) \(.word)"' jsonfile
The paste solution you're thinking of is this:
paste <(jshon -a -e id -u < foo.json) <(jshon -a -e word -u < foo.json)
Of course, you're processing the file twice.
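Since the input in your case is piped rather than a file, you can capture it once in a variable and feed both process substitutions from it (a sketch reusing the jshon flags from the question):
API=$(curl -s "$URL")
paste -d ' ' <(jshon -a -e id -u <<< "$API") <(jshon -a -e word -u <<< "$API")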
You could also use a language with a JSON library, for example ruby:
ruby -rjson -le '
JSON.parse(File.read(ARGV.shift)).each {|h| print h["id"], " ", h["word"]}
' foo.json
1 wordA
2 wordB
3 wordC
API=$(curl -s "$URL")
# store ids and words in arrays
id=( $(jshon -a -e id -u <<< "$API") )
word=( $(jshon -a -e word -u <<< "$API" | sed 's/bar/foo/') )
red='\e[0;31m';
blue='\e[0;34m'
x='\e[0m'
for (( i=0; i<${#id[@]}; i++ )); do
printf "%s%s%s %s%s%s\n" "$red" "${id[i]}" "$x" \
"$blue" "${word[i]}" "$x"
done
I would go with Birei's solution, but if your output is constrained along the lines of your sample, the following may work (with GNU grep):
paste -d ' ' <(grep -oP '(?<=id": ).*(?=,)' file.txt) \
<(grep -oP '(?<=word": ").*(?=",)' file.txt)

How do I find files that do not end with a newline/linefeed?

How can I list normal text (.txt) filenames that don't end with a newline?
e.g.: list (output) this filename:
$ cat a.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf$
(note the shell prompt lands at the end of the last line: the file has no final newline)
and don't list (output) this filename:
$ cat b.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf
$
Use pcregrep, a Perl Compatible Regular Expressions version of grep, which supports a multiline mode via the -M flag that can be used to match (or not match) whether the last line has a newline:
pcregrep -LMr '\n\Z' .
In the above example we search recursively (-r) in the current directory (.), listing files that don't match (-L) our multiline (-M) regex, which looks for a newline at the end of the file ('\n\Z').
Changing -L to -l would list the files that do have newlines in them.
pcregrep can be installed on MacOS with the homebrew pcre package: brew install pcre
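A quick sanity check (hypothetical files, created just for the test):
$ printf 'no trailing newline' > a.txt
$ printf 'trailing newline\n' > b.txt
$ pcregrep -LMr '\n\Z' .
./a.txt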
OK, it's my turn; I'll give it a try:
find . -type f -print0 | xargs -0 -L1 bash -c 'test "$(tail -c 1 "$0")" && echo "No new line at end of $0"'
If you have ripgrep installed:
rg -Ul '[^\n]\z'
That regular expression matches any character which is not a newline, followed by the end of the file; the -U (--multiline) flag is needed because the pattern contains \n.
Give this a try:
find . -type f -exec sh -c '[ -z "$(sed -n "\$p" "$1")" ]' _ {} \; -print
It will print filenames of files that end with a blank line. To print files that don't end in a blank line change the -z to -n.
If you are using 'ack' (http://beyondgrep.com) as an alternative to grep, you can just run this:
ack -v '\n$'
It actually searches all lines that don't match (-v) a newline at the end of the line.
The best oneliner I could come up with is this:
git grep --cached -Il '' | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
This uses git grep, because in my use-case I want to ensure files committed to a git branch have ending newlines.
If this is required outside of a git repo, you can of course just use grep instead.
grep -RIl '' . | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
Why do I use grep? Because you can easily filter out binary files with -I.
Then the usual xargs/tail thingy found in other answers, with the addition to exit with 1 if a file has no newline. So this can be used in a pre-commit githook or CI.
This should do the trick:
#!/bin/bash
for file in `find $1 -type f -name "*.txt"`;
do
nlines=`tail -n 1 $file | grep '^$' | wc -l`
if [ $nlines -eq 1 ]
then echo $file
fi
done;
Call it this way: ./script dir
E.g. ./script /home/user/Documents/ -> lists all text files in /home/user/Documents whose last line is blank.
This is kludgy; someone surely can do better:
for f in `find . -name '*.txt' -type f`; do
if test `tail -c 1 "$f" | od -c | head -n 1 | tail -c 3` != \\n; then
echo $f;
fi
done
N.B. this answers the question in the title, which is different from the question in the body (which is looking for files that end with \n\n I think).
Most solutions on this page do not work for me (FreeBSD 10.3 amd64). Ian Will's OSX solution does almost always work, but is pretty difficult to follow :- (
There is an easy solution that almost always works too (if $f is the file):
sed -i '' -e '$a\' "$f"
There is a major problem with the sed solution: it never gives you the opportunity to just check (and not append a newline).
Both the above solutions fail for DOS files. I think the most portable/scriptable solution is probably the easiest one, which I developed myself :- )
Here is that elementary sh script which combines file/unix2dos/tail. In production, you will likely need to quote "$f" and fetch the tail output into the shell variable named last:
if file $f | grep 'ASCII text' > /dev/null; then
if file $f | grep 'CRLF' > /dev/null; then
type unix2dos > /dev/null || exit 1
dos2unix $f
last="`tail -c1 $f`"
[ -n "$last" ] && echo >> $f
unix2dos $f
else
last="`tail -c1 $f`"
[ -n "$last" ] && echo >> $f
fi
fi
Hope this helps someone.
This example
Works on macOS (BSD) and GNU/Linux
Uses standard tools: find, grep, sh, file, tail, od, tr
Supports paths with spaces
Oneliner:
find . -type f -exec sh -c 'file -b "{}" | grep -q text' \; -exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; -print
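The od comparison works because od -An -a renders bytes as named characters; a newline-terminated file's last byte shows up as nl:
$ printf 'x\n' | tail -c 1 | od -An -a
  nl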
More readable version
Find under current directory
Regular files
That 'file' (brief mode) considers text
Whose last byte (tail -c 1) is not represented by od's named character "nl"
And print their paths
#!/bin/sh
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print
Finally, a version with a -f flag to fix the offending files (requires bash).
#!/bin/bash
# Finds files without final newlines
# Pass "-f" to also fix those files
fix_flag="$([ "$1" == "-f" ] && echo -true || echo -false)"
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print \
$fix_flag \
-exec sh -c 'echo >> "{}"' \;
Another option:
$ find . -name "*.txt" -print0 | xargs -0I {} bash -c '[ -z "$(tail -n 1 {})" ] && echo {}'
Since your question has the perl tag, I'll post an answer which uses it:
find . -type f -name '*.txt' -exec perl check.pl {} +
where check.pl is the following:
#!/bin/perl
use strict;
use warnings;
foreach (@ARGV) {
open(FILE, $_);
seek(FILE, -1, 2);
my $c;
read(FILE,$c,1);
if ( $c ne "\n" ) {
print "$_\n";
}
close(FILE);
}
This perl script just opens, one at a time, the files passed as parameters and reads only the last character; if it is not a newline character, it prints out the filename, otherwise it does nothing.
This example works for me on OSX (many of the above solutions did not)
for file in `find . -name "*.java"`
do
result=`od -An -tc -j $(( $(ls -l $file | awk '{print $5}') - 1 )) $file`
last_char=`echo $result | sed 's/ *//'`
if [ "$last_char" != "\n" ]
then
#echo "Last char is .$last_char."
echo $file
fi
done
Here is another example using a few simple commands, which:
allows you to filter by extension (e.g. | grep '\.md$' filters only the md files)
lets you pipe more grep commands to extend the filter (like exclusions: | grep -v '\.git' to exclude the files under .git)
uses the full power of grep parameters for more filters or inclusions
The code basically iterates (for) over all the files (matching your chosen criteria, grep) and, if the last character of a file (-n "$(tail -c -1 "$file")") is not a newline, prints the file name (echo "$file").
The verbose code:
for file in $(find . | grep '\.md$')
do
if [ -n "$(tail -c -1 "$file")" ]
then
echo "$file"
fi
done
A bit more compact:
for file in $(find . | grep '\.md$')
do
[ -n "$(tail -c -1 "$file")" ] && echo "$file"
done
and, of course, the 1-liner for it:
for file in $(find . | grep '\.md$'); do [ -n "$(tail -c -1 "$file")" ] && echo "$file"; done