What I'm trying to do:
Use jq to pass along parameters to ffmpeg in a bash script.
I have this JSON in an external file that I generate regularly:
{
  "streams": [
    {
      "track": "/var/www/html/stream1.m3u8",
      "link": "http://playertest.longtailvideo.com/adaptive/bipbop/bipbop.m3u8"
    },
    {
      "track": "/var/www/html/stream2.m3u8",
      "link": "https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8"
    },
    {
      "track": "/var/www/html/stream3.m3u8",
      "link": "http://www.streambox.fr/playlists/test_001/stream.m3u8"
    }
  ]
}
This is the command I've tried, based on the response found here: https://github.com/stedolan/jq/issues/503
jq -r '.streams[] | ffmpeg -v verbose -i \(.link | @sh) -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 \(.Track | @sh)"' streams.json | sh
However I get this error message:
jq: error: syntax error, unexpected IDENT, expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
.streams[] | ffmpeg -v verbose -i \(.link | @sh) -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 \(.Track | @sh)"
jq: 1 compile error
The shortest possible change to your original code is to add the opening quote that was missing (and to fix the key name: it's .track, not .Track):
jq -r '.streams[] | "ffmpeg -v verbose -i \(.link | @sh) -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 \(.track | @sh)"' streams.json | sh
#                   ^-- this quote was missing
Note that "ffmpeg ..." is a string, and is contained in quotes. That said, you're relying on jq to generate safe code for your shell -- since it has features (@sh) explicitly built for this purpose, this isn't as bad an idea as it might be otherwise; but it's still better practice to avoid code generation wherever possible.
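For illustration, here is what @sh does to a value that would otherwise break the generated command (the string is made up for this demo):
$ jq -rn '"echo \("a b; rm -rf /" | @sh)"'
echo 'a b; rm -rf /'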
As an alternate approach that avoids code generation and is safe with all possible filenames, use jq to generate a NUL-delimited stream of track / link pairs, and a BashFAQ #1 loop to iterate over them:
#!/usr/bin/env bash
while IFS= read -r -d '' track && IFS= read -r -d '' link; do
  ffmpeg -v verbose -i "$link" -c copy -flags -global_header -hls_time 10 \
    -hls_list_size 6 -hls_wrap 10 -start_number 1 "$track"
done < <(jq -j '.streams[] | ( .track + "\u0000" + .link + "\u0000" )' streams.json)
Using bash and jq:
#!/bin/bash
file="$1"
c=0
while true; do
  # "// empty" makes jq print nothing (rather than the string "null")
  # once the index runs past the end of the array, so the loop can stop
  track=$(jq -r ".streams[$c].track // empty" "$file" 2>/dev/null)
  link=$(jq -r ".streams[$c].link // empty" "$file" 2>/dev/null)
  [[ ! $track || ! $link ]] && break
  ffmpeg -v verbose -i "$link" -c copy -flags -global_header -hls_time 10 \
    -hls_list_size 6 -hls_wrap 10 -start_number 1 "$track"
  ((c++))
done
Usage:
./script.bash file.json
Using nodejs to generate the shell commands:
(replace file.json with your own path/file)
#!/bin/bash
node <<EOF
var j = $(<file.json);
for (var i = 0; i < j.streams.length; i++) {
  console.log("ffmpeg -v verbose -i '" + j.streams[i].link + "' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '" + j.streams[i].track + "'");
}
EOF
Output:
ffmpeg -v verbose -i 'http://playertest.longtailvideo.com/adaptive/bipbop/bipbop.m3u8' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '/var/www/html/stream1.m3u8'
ffmpeg -v verbose -i 'https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '/var/www/html/stream2.m3u8'
ffmpeg -v verbose -i 'http://www.streambox.fr/playlists/test_001/stream.m3u8' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '/var/www/html/stream3.m3u8'
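If the generated commands look right, you can execute them directly by piping the script's output to a shell (assuming you saved the script above as, say, gen.sh):
./gen.sh | sh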
I want to use a bash script to capture the output of the top command and write it to a JSON file, but I'm having difficulty getting the escapes/encodings/line breaks right so that the file contains a valid JSON object.
Here's what I tried:
#!/bin/bash
message1=$(top -n 1 -o %CPU)
message2=$(top -n 1 -o %CPU | jq -aRs .)
message3=$(top -n 1 -o %CPU | jq -Rs .)
message4=${message1//\\/\\\\/}
echo "{\"message\":\"${message2}\"}" > file.json
But when I look at file.json, it looks something like this:
{"message":""\u001b[?1h\u001b=\u001b[?25l\u001b[H\u001b[2J\u001b(B\u001b[mtop - 21:34:53 up 55 days, 5:14, 2 users, load average: 0.17, 0.09, 0.03\u001b(B\u001b[m\u001b[39;49m\u001b(B\u001b[m\u001b[39;49m\u001b[K\nTasks:\u001b(B\u001b[m\u001b[39;49m\u001b[1m 129 \u001b(B\u001b[m\u001b[39;49mtotal,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 1 \u001b(B\u001b[m\u001b[39;49mrunning,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 128 \u001b(B\u001b[m\u001b[39;49msleeping,\u001b(B\u001b[m
Each of the other attempts with message1 through message4 results in various JSON syntax issues.
Can anyone suggest what I should try next?
You don't need all the bells and whistles of echo and multiple jq invocations:
top -b -n 1 -o %CPU | jq -aRs '{"message": .}' >file.json
Or pass the output of the top command as an argument variable.
Using --arg to pass arguments to jq:
jq -an --arg msg "$(top -b -n 1 -o %CPU)" '{"message": $msg}' >file.json
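Either way, you can sanity-check the result by round-tripping the file through jq, which pretty-prints valid JSON and fails loudly otherwise:
jq . file.json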
This code fetches data from multiple servers and stores it in a CSV file.
I am trying to get the same data into an HTML table with conditional formatting on the columns: if a server's free disk space drops below 20%, its cell should turn yellow; below 10%, red.
rm -f /tmp/health*
touch /tmp/health.csv
# Servers file path
FILE="/tmp/health.csv"
USR=root
# Create CSV file header
echo " Date, Hostname, Connectivity, Root-FreeSpace, Uptime, OS-version, Total-ProcesCount, VmToolsversion, ServerLoad, Memory, Disk, CPU, LastReboot-Time, UserFailedLoginCount, " > $FILE
for server in $(cat /root/servers.txt)
do
  _CMD="ssh $USR@$server"
  Date=$($_CMD date)
  HostName=$($_CMD hostname)
  Connectivity=$($_CMD ping -c 1 google.com &> /dev/null && echo connected || echo disconnected)
  ip_add=$(ifconfig | grep "inet addr" | head -2 | tail -1 | awk '{print $2}' | cut -f2 -d:)
  RootFreeSpace=$($_CMD df / | tail -n +2 | awk '{print $5}')
  Uptime=$($_CMD uptime | sed 's/.*up \([^,]*\), .*/\1/')
  OSVersion=$($_CMD cat /etc/redhat-release)
  TotalProcess=$($_CMD ps axue | grep -vE "^USER|grep|ps" | wc -l)
  VmtoolStatus=$($_CMD vmtoolsd -v | awk '{print $5}')
  ServerLoad=$($_CMD uptime | awk -F'average:' '{ print $2}' | sed s/,//g | awk '{ print $2}')
  Memory=$($_CMD free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }')
  Disk=$($_CMD df -h | awk '$NF=="/"{printf "%s\t\t", $5}')
  CPU=$($_CMD top -bn1 | grep load | awk '{printf "%.2f%%\t\t\n", $(NF-2)}')
  Lastreboottime=$($_CMD who -b | awk '{print $3,$4}')
  FailedUserloginCount=$($_CMD cat /var/log/secure | grep "Failed" | wc -l)
  # Append the collected data to the CSV
  echo "$Date,$HostName,$Connectivity,$RootFreeSpace,$Uptime,$OSVersion,$TotalProcess,$VmtoolStatus,$ServerLoad,$Memory,$Disk,$CPU,$Lastreboottime,$FailedUserloginCount" >> $FILE
done
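As for the HTML part of the question: here is a minimal sketch of one way to turn the Use% value that df reports into a conditionally colored table cell. The thresholds, the color_cell helper name, and the bgcolor styling are assumptions for illustration; you would call it once per server inside the loop above and wrap the rows in a <table>:
# Hypothetical helper: print a <td> colored by remaining free space.
# $1 is df's Use% value, e.g. "85%".
color_cell() {
  used=${1%\%}              # strip the trailing % sign
  free=$((100 - used))
  color=white
  if [ "$free" -lt 10 ]; then
    color=red
  elif [ "$free" -lt 20 ]; then
    color=yellow
  fi
  echo "<td bgcolor=\"$color\">${free}% free</td>"
}
# Inside the per-server loop, alongside the CSV line:
echo "<tr><td>$server</td>$(color_cell "$RootFreeSpace")</tr>" >> /tmp/health.html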
I have a file which contains a JSON object as a string:
{"STATUS":[{"STATUS":"S","When":1530779438,"Code":70,"Msg":"CGMiner stats","Description":"cgminer 4.9.0"}],"STATS":[{"CGMiner":"4.9.0","Miner":"9.0.0.5","CompileTime":"Sat May 26 20:42:30 CST 2018","Type":"Antminer Z9-Mini"},{"STATS":0,"ID":"ZCASH0","Elapsed":179818,"Calls":0,"Wait":0.000000,"Max":0.000000,"Min":99999999.000000,"GHS 5s":"16.39","GHS av":16.27,"miner_count":3,"frequency":"750","fan_num":1,"fan1":5760,"fan2":0,"fan3":0,"fan4":0,"fan5":0,"fan6":0,"temp_num":3,"temp1":41,"temp2":40,"temp3":43,"temp2_1":56,"temp2_2":53,"temp2_3":56,"temp_max":43,"Device Hardware%":0.0000,"no_matching_work":0,"chain_acn1":4,"chain_acn2":4,"chain_acn3":4,"chain_acs1":" oooo","chain_acs2":" oooo","chain_acs3":" oooo","chain_hw1":0,"chain_hw2":0,"chain_hw3":0,"chain_rate1":"5.18","chain_rate2":"5.34","chain_rate3":"5.87"}],"id":1}
Now I want to get the values of some keys in this object from within a sh script.
The following command works, but not for all keys!?
This works (I get "750"):
grep -o '"frequency": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
but this doesn't (output is empty):
grep -o '"fan_num": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
same with this:
grep -o '"fan1": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
I'm working on a Xilinx OS which has no Python, so jq will not work, and its grep has no -P option. Does anyone have an idea that works with that? :)
Thanks and best regards,
dave
When you want to do more than just g/re/p you should be using awk, not combinations of greps+pipes, etc. (Your last two greps fail because fan_num and fan1 have numeric values: there are no quotes around 1 or 5760, so the pattern "[^"]*" never matches.)
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
750
$ awk -v tag='fan_num' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
1
$ awk -v tag='fan1' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
5760
The above will work with any awk in any shell on any UNIX box. If you have GNU awk for the 3rd arg to match() and gensub() you can write it a bit briefer:
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)",a) { print gensub(/^"/,"",1,a[1]) }' file
750
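If you would rather stay with the tools already on the box, a plain POSIX sed sketch also handles the unquoted numeric values (shown here for fan1; substitute whichever key you need):
$ sed -n 's/.*"fan1": *\([0-9][0-9]*\).*/\1/p' LXstats.txt
5760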
Using --pretty=format:, you can format the output of git log or git show as you like.
git log \
--pretty=format:'{%n "commit": "%H",%n "author": "%an <%ae>",%n "date": "%ad",%n "message": "%f"%n},' \
$@ | \
perl -pe 'BEGIN{print "["}; END{print "]\n"}' | \
perl -pe 's/},]/}]/'
The example above extracts the author, commit, date, and message values. How can we extract the value of Approved-by, which is added when a pull request is approved?
Even the official documentation does not mention it.
Approved-by is not a builtin field so Git doesn't have a placeholder for it. We could use other methods to get the fields and format the output.
Suppose the Approved-by line looks like:
Approved-by: Someone Nice
Here is a bash sample:
for commit in $(git log --pretty=%H);do
echo -e "{\n\
\"commit\": \"$commit\",\n\
\"author\": \"$(git log -1 $commit --pretty=%an)\",\n\
\"date\": \"$(git log -1 $commit --pretty=%cd)\",\n\
\"message\": \"$(git log -1 $commit --pretty=%f)\",\n\
\"approved-by\": \"$(git log -1 $commit --pretty=%b | grep Approved-by | awk -F ': ' '{print $NF","}' | xargs echo | sed -e 's/,$//')\"\n\
},"
done | \
perl -pe 'BEGIN{print "["}' | \
sed -e '$s/},/}]/'
It needs improvement to meet your real needs, especially the \"approved-by\" line. Basically, it gets all the commit SHA-1 values first, then queries each commit for its fields, and then formats the output.
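As a side note: recent Git releases do ship a trailers placeholder for --pretty=format:, which can replace the grep/awk pipeline above. Whether your Git supports the key= and valueonly options depends on its version, so verify with git --version before relying on this:
git log --pretty=format:'%H %(trailers:key=Approved-by,valueonly)'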
How can I list normal text (.txt) filenames that don't end with a newline?
e.g. list (output) this filename (the prompt appearing right after the last line shows there is no final newline):
$ cat a.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf$
and don't list (output) this filename:
$ cat b.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf
$
Use pcregrep, a Perl Compatible Regular Expressions version of grep, which supports a multiline mode (the -M flag) that can be used to match (or not match) a newline at the end of the file:
pcregrep -LMr '\n\Z' .
In the above example we search recursively (-r) in the current directory (.), listing files that don't match (-L) our multiline (-M) regex, which looks for a newline at the end of a file ('\n\Z').
Changing -L to -l would list the files that do have newlines in them.
pcregrep can be installed on macOS with the Homebrew pcre package: brew install pcre
Ok it's my turn, I give it a try:
find . -type f -print0 | xargs -0 -L1 bash -c 'test "$(tail -c 1 "$0")" && echo "No new line at end of $0"'
If you have ripgrep installed:
rg -Ul '[^\n]\z'
With multiline mode enabled (-U), that regular expression matches any character which is not a newline, followed by the end of the file.
Give this a try:
find . -type f -exec sh -c '[ -z "$(sed -n "\$p" "$1")" ]' _ {} \; -print
It will print filenames of files that end with a blank line. To print files that don't end in a blank line change the -z to -n.
If you are using 'ack' (http://beyondgrep.com) as an alternative to grep, you can just run this:
ack -v '\n$'
It actually searches all lines that don't match (-v) a newline at the end of the line.
The best oneliner I could come up with is this:
git grep --cached -Il '' | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
This uses git grep, because in my use case I want to ensure files committed to a git branch have ending newlines.
If this is required outside of a git repo, you can of course just use grep instead.
grep -RIl '' . | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
Why do I use grep? Because you can easily filter out binary files with -I.
Then the usual xargs/tail thingy found in other answers, with the addition that it exits with 1 if a file has no final newline. So this can be used in a pre-commit git hook or in CI.
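For example, here is a minimal pre-commit hook sketch built on the same one-liner (the hook path and messages are assumptions; put it in .git/hooks/pre-commit and make it executable):
#!/usr/bin/env bash
# Abort the commit if any tracked text file lacks a final newline.
if ! git grep --cached -Il '' | xargs -L1 bash -c \
    'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'; then
  echo "Commit aborted: add final newlines to the files above." >&2
  exit 1
fi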
This should do the trick:
#!/bin/bash
for file in $(find "$1" -type f -name "*.txt"); do
  nlines=$(tail -n 1 "$file" | grep -c '^$')
  if [ "$nlines" -eq 1 ]; then
    echo "$file"
  fi
done
Call it this way: ./script dir
E.g. ./script /home/user/Documents/ -> lists all .txt files in /home/user/Documents that end with a blank line.
This is kludgy; someone surely can do better:
for f in `find . -name '*.txt' -type f`; do
if test `tail -c 1 "$f" | od -c | head -n 1 | tail -c 3` != \\n; then
echo $f;
fi
done
N.B. this answers the question in the title, which is different from the question in the body (which is looking for files that end with \n\n I think).
Most solutions on this page do not work for me (FreeBSD 10.3 amd64). Ian Will's OSX solution does almost always work, but is pretty difficult to follow : - (
There is an easy solution that almost always works too (if $f is the file):
sed -i '' -e '$a\' "$f"
There is a major problem with the sed solution: it never gives you the opportunity to just check (and not append a newline).
Both of the above solutions fail for DOS files. I think the most portable/scriptable solution is probably the easiest one, which I developed myself : - )
Here is that elementary sh script, which combines file/unix2dos/tail. In production, you will likely need to quote "$f" everywhere it appears, including in the tail output captured into the shell variable named last.
if file $f | grep 'ASCII text' > /dev/null; then
    if file $f | grep 'CRLF' > /dev/null; then
        type unix2dos > /dev/null || exit 1
        dos2unix $f
        last="`tail -c1 $f`"
        [ -n "$last" ] && echo >> $f
        unix2dos $f
    else
        last="`tail -c1 $f`"
        [ -n "$last" ] && echo >> $f
    fi
fi
Hope this helps someone.
This example:
- works on macOS (BSD) and GNU/Linux
- uses standard tools: find, grep, sh, file, tail, od, tr
- supports paths with spaces
Oneliner:
find . -type f -exec sh -c 'file -b "{}" | grep -q text' \; -exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; -print
More readable version:
- find under the current directory
- regular files only
- that file (brief mode) considers text
- whose last byte (tail -c 1) is not represented by od's named character "nl"
- and print their paths
#!/bin/sh
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print
Finally, a version with a -f flag to fix the offending files (requires bash).
#!/bin/bash
# Finds files without final newlines
# Pass "-f" to also fix those files
fix_flag="$([ "$1" == "-f" ] && echo -true || echo -false)"
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print \
$fix_flag \
-exec sh -c 'echo >> "{}"' \;
Another option:
$ find . -name "*.txt" -print0 | xargs -0I {} bash -c '[ -z "$(tail -n 1 {})" ] && echo {}'
Since your question has the perl tag, I'll post an answer which uses it:
find . -type f -name '*.txt' -exec perl check.pl {} +
where check.pl is the following:
#!/usr/bin/perl
use strict;
use warnings;

foreach (@ARGV) {
    open(FILE, $_);
    seek(FILE, -2, 2);      # position two bytes before the end of the file
    my $c;
    read(FILE, $c, 1);      # read the next-to-last character
    if ($c ne "\n") {
        print "$_\n";
    }
    close(FILE);
}
This Perl script just opens, one at a time, the files passed as parameters and reads only the next-to-last character; if it is not a newline character, it prints out the filename, otherwise it does nothing.
This example works for me on OSX (many of the above solutions did not):
for file in $(find . -name "*.java"); do
  # read the last byte of the file ($5 of ls -l is the size in bytes)
  result=$(od -An -tc -j $(( $(ls -l "$file" | awk '{print $5}') - 1 )) "$file")
  last_char=$(echo $result | sed 's/ *//')
  if [ "$last_char" != "\n" ]; then
    #echo "Last char is .$last_char."
    echo "$file"
  fi
done
Here is another example using little more than bash built-in commands and grep, which:
- allows you to filter by extension (e.g. | grep '\.md$' keeps only the .md files)
- lets you pipe in more grep commands to extend the filter (e.g. exclusions like | grep -v '\.git' to skip the files under .git)
- gives you the full power of grep parameters for further filters or inclusions
The code iterates (for) over all the files matching your chosen criteria (grep) and, if the last character of a file (-n "$(tail -c -1 "$file")") is not a newline, prints the file name (echo "$file").
The verbose code:
for file in $(find . | grep '\.md$')
do
if [ -n "$(tail -c -1 "$file")" ]
then
echo "$file"
fi
done
A bit more compact:
for file in $(find . | grep '\.md$')
do
[ -n "$(tail -c -1 "$file")" ] && echo "$file"
done
and, of course, the 1-liner for it:
for file in $(find . | grep '\.md$'); do [ -n "$(tail -c -1 "$file")" ] && echo "$file"; done