I'm working on stream silence detection.
It works with the following ffmpeg command:
ffmpeg -i http://mystream.com/stream -af silencedetect=n=-50dB:d=0.5 -f null - 2> log.txt
I would like to get a json output of the logfile.
There is a JSON option in ffprobe, but silencedetect=n=-50dB:d=0.5 isn't working with it.
Help!
Cheers!
ffprobe is meant to probe container-level or stream-level metadata. silencedetect is a filter which analyses the content of decoded audio streams; its output isn't controlled by the choice of writer.
What you could do, since silencedetect also logs its result to metadata tags, is output just that data to a file.
ffmpeg -i http://mystream.com/stream -af silencedetect=n=-50dB:d=0.5,ametadata=print:file=log.txt -f null -
Output
frame:281 pts:323712 pts_time:6.744
lavfi.silence_start=6.244
frame:285 pts:328320 pts_time:6.84
lavfi.silence_end=6.84
lavfi.silence_duration=0.596
frame:413 pts:475776 pts_time:9.912
lavfi.silence_start=9.412
frame:1224 pts:1410048 pts_time:29.376
lavfi.silence_end=29.376
lavfi.silence_duration=19.964
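To get the JSON the question asks for, that log can be folded into a list of objects with a short script. A minimal Python sketch (assuming the filter output was written to log.txt as in the command above, and keying off the lavfi.* tags):

import json
import re

silences = []
current = {}

with open("log.txt", encoding="utf-8") as f:
    for line in f:
        # Relevant lines look like: lavfi.silence_start=6.244
        m = re.match(r"lavfi\.silence_(start|end|duration)=(\S+)", line)
        if not m:
            continue
        current[m.group(1)] = float(m.group(2))
        if "duration" in current:  # a start/end/duration triple is complete
            silences.append(current)
            current = {}

print(json.dumps(silences, indent=2))

On the sample output above, this prints two objects, one per detected silence.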
I'm trying to take the contents of a config file (JSON format), strip out extraneous new lines and spaces to be concise and then assign it to an environment variable before starting my application.
This is where I've got so far:
pwr_config=`echo "console.log(JSON.stringify(JSON.parse(require('fs').readFileSync(process.argv[2], 'utf-8'))));" | node - config.json | xargs -0 printf '%q\n'` npm run start
This pipes a short node.js app into the node runtime taking an argument of the file name and it parses and stringifies the JSON file to validate it and remove any unnecessary whitespace. So far so good.
The result of this is then piped to printf. Or at least it would be, but printf doesn't accept input that way, apparently, so I'm using xargs to pass it in in a way printf supports.
I'm using the %q format specifier to escape any characters that would be a problem as part of a command, but when calling printf through xargs, printf claims it doesn't support %q. I think this is perhaps because there is more than one version of printf, but I'm not exactly sure how to resolve that.
Any help would be appreciated, even if the solution is completely different from what I've started :) Thanks!
Update
Here's the output I get on macOS:
$ cat config.json | xargs -0 printf %q
printf: illegal format character q
My JSON file looks like this:
{
  "hue_host": "192.168.1.2",
  "hue_username": "myUsername",
  "port": 12000,
  "player_group_config": [
    {
      "name": "Family Room",
      "player_uuid": "ATVUID",
      "hue_group": "3",
      "on_events": ["media.play", "media.resume"],
      "off_events": ["media.stop", "media.pause"]
    },
    {
      "name": "Lounge",
      "player_uuid": "STVUID",
      "hue_group": "1",
      "on_events": ["media.play", "media.resume"],
      "off_events": ["media.stop", "media.pause"]
    }
  ]
}
Three ways:
Use xargs to run bash, so that printf resolves to bash's builtin (which supports %q) instead of the printf(1) executable, probably /usr/bin/printf; note the trailing _ below, which fills $0 so the piped data lands in $1 (thanks to @GordonDavisson):
pwr_config=`echo "console.log(JSON.stringify(JSON.parse(require('fs').readFileSync(process.argv[2], 'utf-8'))));" | node - config.json | xargs -0 bash -c 'printf "%q\n" "$@"' _` npm run start
Simpler: you don't have to escape the output of a command if you quote it. In the same way that echo "<|>" is OK in bash, this should also work:
pwr_config="$(echo "console.log(JSON.stringify(JSON.parse(require('fs').readFileSync(process.argv[2], 'utf-8'))));" | node - config.json )" npm run start
This uses the newer $(...) form instead of `...`, and so the result of the command is a single word stored as-is into the pwr_config variable.*
Even simpler: if your npm run start script cares about the whitespace in your JSON, it's fundamentally broken :) . Just do:
pwr_config="$(< config.json)" npm run start
The $(< ...) construct returns the contents of config.json, and thanks to the double quotes it is all stored as a single word in pwr_config, newlines and all.* If something breaks, either config.json has an error and should be fixed, or the code you're running has an error and needs to be fixed.
* You actually don't need the "" around $(). E.g., foo=$(echo a b c) and foo="$(echo a b c)" have the same effect. However, I like to include the "" to remind myself that I am specifically asking for all the text to be kept together.
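For comparison, the validate-and-minify step doesn't have to go through node at all. A minimal Python sketch of the same idea (assuming python3 is available, with the file name passed as the first argument):

import json
import sys

# Parse the config file to validate it, then re-serialize it compactly.
with open(sys.argv[1], encoding="utf-8") as f:
    config = json.load(f)

# separators=(",", ":") drops the spaces json.dumps inserts by default.
print(json.dumps(config, separators=(",", ":")))

Saved as minify.py (a hypothetical name), it slots into the quoted form above: pwr_config="$(python3 minify.py config.json)" npm run start.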
I'm new to Tcl and I have the following script:
proc prepare_xml {pdb_id} {
    set filename [exec wget ftp://ftp.ebi.ac.uk/pub/databases/msd/sifts/xml/$pdb_id.xml.gz]
    set filename_unzip [exec gunzip "$pdb_id.xml.gz"]
    set ready_xml [exec sed -i "/entry /c\<entry>" "$pdb_id.xml"]
    return $ready_xml
}
The expected output is the downloaded file, uncompressed and modified. However, when I execute it the first time, it only downloads the file and does not uncompress it. If I execute it a second time, I obtain the expected output plus a second copy of the original downloaded file.
Can anyone help me with this? I've tried the after and vwait commands, but they don't work.
Thank you :)
It's hard to say for sure as you're not describing whether any errors are thrown (that'd be the only reason for the code to not run to completion), but I'd expect something like this to be the right approach:
proc prepare_xml {pdb_id} {
    # Double quotes on next line just because of Stack Overflow highlighter
    set url "ftp://ftp.ebi.ac.uk/pub/databases/msd/sifts/xml/$pdb_id.xml.gz"
    set file $pdb_id.xml
    append sedcode {/entry /} "c\\\n" {<entry>}
    exec wget -q -O - $url | gunzip -c | sed $sedcode > $file
    return $file
}
A few notes on the approach:
- Firstly, I'm keeping the complicated bits in (local) variables to stop the exec line from getting too long.
- Secondly, I've put all the subprocesses together in one pipeline.
- Thirdly, I'm using -q and -O - with wget, and -c with gunzip; look up what they do if you don't understand them.
- Fourthly, I've put the scriptlet for sed in braces where possible to stop there being trouble with backslashes, but I've used append and a non-backslashed section to build the pattern, because the syntax of c in sed is downright weird (it needs a backslash-newline sequence immediately after it on at least some platforms…).
I'd actually use native Tcl code to extract and transform the data if I was doing it for me, but that's a rather larger change.
When I use ffprobe against an animated GIF, I get, among other things, this:
> ffprobe.exe foo.gif
. . .
Stream #0:0: Video: gif, bgra, 500x372, 6.67 fps, 6.67 tbr, 100 tbn, 100 tbc
Great; this tells me the frame rate is 6.67 frames per second. But I'm going to be using this in a program and want it in a parsed format. ffprobe does json, but when I use it:
> ffprobe.exe -show_streams -of json foo.gif
The json shows:
"r_frame_rate": "20/3",
"avg_frame_rate": "20/3",
But I want the decimal form 6.67 instead of 20/3. Is there a way to have FFProbe produce its JSON output in decimal? I can't seem to find it in the docs.
My platform is Windows; FFProbe is version N-68482-g92a596f.
I did look into using ImageMagick, but the GIF file in question is corrupted (I'm working on a simple repair program); IM's "identify" command halts on it, while FFMpeg & FFProbe handle it just fine.
Addition: this is kind of academic now; I just used (in Python, with import fractions at the top):
framerate_as_decimal = "%4.2f" % float(fractions.Fraction(framerate_as_fraction))
But I'm still kind of curious if there's an answer.
I know this is a bit of an old question, but today I tried to do the same thing and found two options:
You can use the subprocess module in Python together with mediainfo: fps = float(subprocess.check_output('mediainfo --Inform="Video;%FrameRate%" input.mp4', shell=True)). The returned value is a string, which is why I'm converting it to float. Unfortunately I wasn't able to run the same thing without shell=True, but perhaps I'm missing something.
Using ffprobe: ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate -of default=noprint_wrappers=1:nokey=1 input.mp4. Here the problem is that the output is 50/1 (or in your case 20/3), so you need to split the output on "/" and then convert and divide the two elements of the list. Something like:
import subprocess

fps = subprocess.check_output(['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=avg_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', 'input.mp4'])
fps_lst = fps.decode().strip().split('/')  # check_output returns bytes in Python 3
fps_real = float(fps_lst[0]) / int(fps_lst[1])
So the normal commands for getting the frame rate are:
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 input.mp4 and mediainfo --Inform="Video;%FrameRate%" input.mp4
In Python, you can just use:
frame_rate_str = "15/3"
frame_rate = eval(frame_rate_str)
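Note that eval runs arbitrary code, so it's only safe on input you trust. A sketch of a safer alternative using the standard library's fractions.Fraction, which parses the num/den form directly:

from fractions import Fraction

frame_rate_str = "20/3"
# Fraction accepts the "numerator/denominator" string form directly.
frame_rate = float(Fraction(frame_rate_str))
print(round(frame_rate, 2))  # 6.67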
I'm using jq to parse some of my logs, but some of the log lines can't be parsed for various reasons. Is there a way to have jq ignore those lines? I can't seem to find a solution. I tried to use the --seq argument that was recommended by some people, but --seq ignores all the lines in my file.
Assuming that each log entry is exactly one line, you can use the -R or --raw-input option to tell jq to leave the lines unparsed, after which you can prepend fromjson? | to your filter to make jq try to parse each line as JSON and throw away the ones that error.
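For comparison, the same skip-what-doesn't-parse idea in plain Python (a sketch; logs.txt is a placeholder name, and one JSON document per line is assumed):

import json

with open("logs.txt", encoding="utf-8") as f:
    for line in f:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # drop unparseable lines, like fromjson? does
        print(entry)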
I have a log stream where some messages are in JSON format.
I want to pipe the JSON messages through jq, and just echo the rest.
Each JSON message is on a single line.
Solution: use grep and tee to split the lines into two streams: pipe those starting with { through jq, and just echo the rest to the terminal.
kubectl logs -f web-svjkn | tee >(grep -v "^{") | grep "^{" | jq .
or
cat logs | tee >(grep -v "^{") | grep "^{" | jq .
Explanation:
tee duplicates the stream: the grep -v in the process substitution prints the non-JSON lines, while the second grep passes only lines that start with an opening brace (i.e. what looks like JSON) on to jq.
This is an old thread, but here's another solution, fully in jq. It lets you both process proper JSON lines and print out the non-JSON lines:
jq -R '. as $line | try (fromjson | <further processing for proper json lines>) catch $line'
There are several Q&As on the FAQ page dealing with the topic of "invalid JSON", but see in particular the Q:
Is there a way to have jq keep going after it hits an error in the input file?
In particular, this shows how to use --seq.
However, from the sparse details you've given (SO recommends a minimal example be given), it would seem it might be better simply to use inputs. The idea is to process one JSON entity at a time, using try/catch, e.g.
def handle: inputs | [., "length is \(length)"] ;
def process: try handle catch ("Failed", process) ;
process
Don't forget to use the -n option when invoking jq.
See also Processing not-quite-valid JSON.
If JSON in curly braces {}:
grep -Pzo '\{(?>[^\{\}]|(?R))*\}' | jq 'objects'
If JSON in square brackets []:
grep -Pzo '\[(?>[^\[\]]|(?R))*\]' | jq 'arrays'
This works if there are no []{} in non-JSON lines.
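If PCRE-enabled grep isn't available, the same extraction can be sketched in stdlib-only Python with json.JSONDecoder.raw_decode, which also copes with braces nested inside the JSON (the sample input is made up):

import json

decoder = json.JSONDecoder()

def extract_json_objects(text):
    # Scan for '{' and try to decode a full JSON object at each one.
    pos = 0
    while True:
        start = text.find("{", pos)
        if start == -1:
            return
        try:
            obj, end = decoder.raw_decode(text, start)
        except json.JSONDecodeError:
            pos = start + 1  # no valid JSON here; keep scanning
            continue
        yield obj
        pos = end

sample = 'noise {"a": 1} noise {"b": {"c": 2}}'
print(list(extract_json_objects(sample)))  # [{'a': 1}, {'b': {'c': 2}}]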
I have JSON files on my server that need to be passed to several different Raspberry Pis running Debian. Each of the Pis has its own JSON feed that it will pull from, but essentially, I need to automatically take the value of one key-value pair and use it as an argument for a command that is run in the terminal.
For instance: Fetching https://www.example.com/api/THDCRUG2899CGF8&/manifest.json
{
"version": "1.5.6",
"update_at": "201609010000",
"body": "172.16.1.1"
}
That value would then be dynamically inserted into a command that uses it as an argument, e.g. ping [body value].
Edit:
The point of this is to have a task that executes every minute so each device receives weather updates.
You are looking for command substitution, specifically wrapped around a command that can extract values from a JSON document. First, you can use jq as the JSON-processing command.
$ jq -r '.body' tmp.json
172.16.1.1
Command substitution allows you to capture the output of jq to use as an argument:
$ ping "$(jq -r '.body' tmp.json)"
PING 172.16.1.1 (172.16.1.1): 56 data bytes
...
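If you'd rather do the whole fetch-and-run step in a single every-minute task, here's a minimal Python sketch (the URL is the one from the question; the ping options and script name are placeholders):

import json
import subprocess
import urllib.request

MANIFEST_URL = "https://www.example.com/api/THDCRUG2899CGF8&/manifest.json"

# Fetch the manifest, parse it, and hand the "body" value to ping.
with urllib.request.urlopen(MANIFEST_URL) as resp:
    manifest = json.load(resp)

subprocess.run(["ping", "-c", "4", manifest["body"]], check=True)

A crontab entry like * * * * * /usr/bin/python3 /home/pi/fetch_and_ping.py (path hypothetical) would run it every minute.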