How to successfully parse the output of FFmpeg in NodeJS - json

So I have seen a lot of topics on FFmpeg. It's a great tool that I learnt about today, but I have spent the day perfecting the command, and now I am a little stuck with the NodeJS part.
In essence, the command does the following: take input from a Mac OS X webcam, and then stream it to a web socket. Now I looked at a lot of the NodeJS libraries, but I couldn't find one that did what I need, or I didn't understand how to use them. Here is an example of the command I am using:
ffmpeg -f avfoundation -framerate 30 -video_size 640x480 -pix_fmt uyvy422 -i "0:1" -f mpegts -codec:v mpeg1video -s 640x480 -b:v 1000k -bf 0 http://localhost:8081/stream
This does everything I need for the streaming side of things, but I wish to call it via NodeJS and then be able to monitor the log and parse the data that comes back, for example:
frame= 4852 fps= 30 q=6.8 size= 30506kB time=00:02:41.74 bitrate=1545.1kbits/s speed= 1x \r
and use it to get a JSON array back for me to output to a webpage.
Right now I am working on ways of actually parsing the data. I have looked at lots of other answers for things like this, but I can't seem to split/replace/regex it; I can't get anything but one long string out of it.
Here is the code I am using (NodeJS):
var ffmpeg = require('child_process').spawn('/usr/local/Cellar/ffmpeg/3.3.1/bin/ffmpeg', [
    '-f', 'avfoundation', '-framerate', '30', '-video_size', '640x480',
    '-pix_fmt', 'uyvy422', '-i', '0:1',
    '-f', 'mpegts', '-codec:v', 'mpeg1video', '-s', '640x480', '-b:v', '1000k', '-bf', '0',
    'http://localhost:8081/stream'
]);

ffmpeg.on('error', function (err) {
    console.log(err);
});

ffmpeg.on('close', function (code) {
    console.log('ffmpeg exited with code ' + code);
});

ffmpeg.stderr.on('data', function (data) {
    // console.log('stderr: ' + data);
    var tData = data.toString('utf8');
    // var a = tData.split('[\\s\\xA0]+');
    var a = tData.split('\n');
    console.log(a);
});

ffmpeg.stdout.on('data', function (data) {
    // Buffer.from() replaces the deprecated new Buffer() constructor
    var frame = Buffer.from(data).toString('base64');
    // console.log(frame);
});
I have tried splitting on new lines, carriage returns, spaces and tabs, but I just can't seem to get a basic array of pieces that I can work with.
Another thing to note: you will notice the log comes back via stderr. I have seen this mentioned online, and apparently it happens for a lot of people, so I am not sure what the deal is with that, but the code above handles it in the stderr callback.
Any help is very much appreciated, as I am truly confused about what I am doing wrong.
Thanks.

An update on this: I worked with one of the guys from the #ffmpeg IRC channel on Freenode. The answer was to send the progress output via a pipe to stdout.
For example, I appended the following to the FFmpeg command:
-progress pipe:1
The -progress flag produces output every second with information about the stream, so this is pretty much everything you get from the stderr log every second, but piped to the stdout stream in a format that I can parse. The following is taken from the documentation:
-progress url (global) Send program-friendly progress information to url. Progress information is written approximately every second and at the end of the encoding process. It is made of "key=value" lines. key consists of only alphanumeric characters. The last key of a sequence of progress information is always "progress".
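Wired into the spawn call from the question, that is just two more elements in the arguments array (a sketch; -progress is a global option, so its exact position among the other options is flexible):
var args = [
    '-f', 'avfoundation', '-framerate', '30', '-video_size', '640x480',
    '-pix_fmt', 'uyvy422', '-i', '0:1',
    '-f', 'mpegts', '-codec:v', 'mpeg1video', '-s', '640x480', '-b:v', '1000k', '-bf', '0',
    '-progress', 'pipe:1', // write key=value progress blocks to stdout
    'http://localhost:8081/stream'
];
var ffmpeg = require('child_process').spawn('ffmpeg', args);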
Here is an example of the code I used to parse the stream information:
ffmpeg.stdout.on('data', function (data) {
    var tLines = data.toString().split('\n');
    var progress = {};
    for (var i = 0; i < tLines.length; i++) {
        var item = tLines[i].split('=');
        if (typeof item[0] != 'undefined' && typeof item[1] != 'undefined') {
            progress[item[0]] = item[1];
        }
    }
    // 'progress' is now an object mapping each key to its value
    console.log(progress);
});
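Since the goal was JSON to push to a web page, here is a variant of the handler above (a sketch): the documentation quoted earlier guarantees that the last key of each block is progress, so it can also be used to spot the end of the encode.
ffmpeg.stdout.on('data', function (data) {
    var progress = {};
    data.toString().split('\n').forEach(function (line) {
        var item = line.split('=');
        if (item.length === 2) progress[item[0].trim()] = item[1].trim();
    });
    var json = JSON.stringify(progress); // ready to push to the web page
    // per the docs, the final block's "progress" key has the value "end"
    if (progress.progress === 'end') {
        console.log('Encoding finished: ' + json);
    }
});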
Thanks to all that commented!

In the spirit of not reinventing the wheel, you might want to try fluent-ffmpeg. It dispatches a progress event with a number of useful fields; a minimal wiring sketch follows the field list below.
'progress': transcoding progress information
The progress event is emitted every time ffmpeg reports progress information. It is emitted with an object argument with the following keys:
frames: total processed frame count
currentFps: framerate at which FFmpeg is currently processing
currentKbps: throughput at which FFmpeg is currently processing
targetSize: current size of the target file in kilobytes
timemark: the timestamp of the current frame in seconds
percent: an estimation of the progress percentage
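Here is a minimal sketch of listening for that event (the file names are placeholders; swap in your own input and output):
var ffmpeg = require('fluent-ffmpeg');

ffmpeg('input.mp4')
    .output('output.mp4')
    .on('progress', function (progress) {
        console.log(progress.frames + ' frames @ ' + progress.currentFps +
            ' fps (' + progress.percent + '%)');
    })
    .on('end', function () {
        console.log('Processing finished');
    })
    .run();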
If you're curious about how they do this, you can read the source, starting from here and here
FFmpeg uses stderr for its log output because stdout is reserved for piping the actual output to other processes. The stuff on stderr is really just diagnostic information, not the output of the process.
BONUS ROUND
I've seen some hacky video players that use websockets to stream videos, but that approach has a number of issues. I'm not going to go over them here, but I will explain why I think you should use hls.js instead.
Support is pretty good; it basically works everywhere except old IE. It uses MSE (Media Source Extensions) to upgrade the standard video element, so you don't have to wrestle with building a custom player.
Here are the docs for the hls format flag
Here's some code that I'm using to stream from an IPTV box to a web page.
// assumes: var FFmpeg = require('fluent-ffmpeg'),
//          request = require('request'), path = require('path')
this.ffmpeg = new FFmpeg()
this.ffmpeg.input(request(this.http_stream))
    .videoCodec('copy')   // pass the streams through without re-encoding
    .audioCodec('copy')
    .outputOptions([
        '-f hls',
        '-hls_list_size 6',
        '-hls_flags delete_segments'
    ])
    .output(path.join(this.out_dir, 'video.m3u8'))
    .run()
It generates a .m3u8 manifest file along with segmented mpeg-ts video files. All you need to do after that is load the m3u8 file into the hls.js player and you have a live stream!
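On the page side, a minimal hls.js wiring sketch (the element id and manifest URL are placeholders for whatever your server exposes):
var video = document.getElementById('video');
if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource('/video.m3u8'); // the manifest generated above
    hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = '/video.m3u8';     // Safari/iOS play HLS natively
}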
If you're going to re-encode the stream, you will probably see some low fps and glitchiness. I'm lucky since my source stream is already encoded as mpeg-ts.


Read raw GenICam H.264 data into libav

I am trying to get familiar with libav in order to process a raw H.264 stream from a GenICam-compliant camera.
I'd like to receive the raw data via the interfaces (API) provided by GenICam, and then feed that data into libav in order to produce a container file that is then streamed to a playing device like VLC or (later) to a display implementation of my own.
So far, I have played around with the GenICam sample code, which transfers the raw H.264 data into a "sample.h264" file. I then put this file through the command-line tool ffmpeg in order to produce an MP4 container file that I can open and watch in VLC:
command: ffmpeg -i "sample.h264" -c:v copy -f mp4 "out.mp4"
Currently, I am digging through examples and documentation on H.264, ffmpeg, libav and video processing in general. I have to admit that, as a total beginner, it confuses me a lot.
I'm at the point where I think I have found the relevant libav functions for my undertaking:
I think, basically, I need the functions avcodec_send_packet() and avcodec_receive_frame() (since avcodec_decode_video2() is deprecated).
Before that, I set up an AVCodecContext structure and open (or combine?) it with the H.264 codec (AV_CODEC_ID_H264).
So far, my code looks like this (omitting error checking and other stuff):
...
AVCodecContext *avCodecContext = nullptr;
AVCodec *avCodec = nullptr;
AVPacket *avPacket = av_packet_alloc();
AVFrame *avFrame = nullptr;
...
avCodec = avcodec_find_decoder(AV_CODEC_ID_H264);
avCodecContext = avcodec_alloc_context3(avCodec);
avcodec_open2(avCodecContext, avCodec, NULL);
av_init_packet(avPacket);
...
while (receivingRawDataFromCamera)
{
    ...
    // receive raw data via GenICam
    DSGetBufferInfo<void*>(hDS, sBuffer.BufferHandle, BUFFER_INFO_BASE, NULL, pPtr);

    // libav action
    avPacket->data = static_cast<uint8_t*>(pPtr);
    avErr = avcodec_send_packet(avCodecContext, avPacket);
    avFrame = av_frame_alloc();
    avErr = avcodec_receive_frame(avCodecContext, avFrame);

    // pack frame in container? (not implemented yet)
    ...
}
The result of the code above is that both calls to send_packet() and receive_frame() return error codes (-22 and -11), which I'm not able to decode via av_strerror() (it only says that these are error codes 22 and 11).
Edit: maybe as additional information for those who wonder whether
avPacket->data = static_cast<uint8_t*>(pPtr);
is a valid operation...
After the very first call to this operation, the content of avPacket->data is
{0x0, 0x0, 0x0, 0x1, 0x67, 0x64, 0x0, 0x28, 0xad, 0x84, 0x5,
0x45, 0x62, 0xb8, 0xac, 0x54, 0x74, 0x20, 0x2a, 0x2b, 0x15, 0xc5,
0x62}
which somehow looks like something to be expected, because of the NAL start-code marker and unit number at the beginning?
I don't know, since I'm really a total beginner....
The question now is: am I on the right path? What is missing, and what do the codes 22 and 11 mean?
The next question would be what to do afterwards in order to get a container that I can stream (in real time) to a player.
Thanks in advance,
Maik
At least for the initially asked question, I found the solution myself:
In order to get rid of the errors when calling the functions
avcodec_send_packet(avCodecContext, avPacket);
...
avcodec_receive_frame( avCodecContext, avFrame);
I had to manually fill some parameters of 'avCodecContext' and 'avPacket':
avCodecContext->bit_rate = 8000000;
avCodecContext->width = 1920;
avCodecContext->height = 1080;
avCodecContext->time_base.num = 1;
avCodecContext->time_base.den = 25;
...
avPacket->data = static_cast<uint8_t*>(pPtr);
avPacket->size = datasize;
avPacket->pts = frameid;
where 'datasize' and 'frameid' are received via GenICam and may not be the appropriate values for those fields, but at least I no longer get any errors. (This is also consistent with the codes asked about above: on Linux, -22 is AVERROR(EINVAL), an invalid argument such as a packet whose size was never set, and -11 is AVERROR(EAGAIN), which from avcodec_receive_frame() simply means the decoder needs more input before it can emit a frame.)
Since this answers my initial question of how to get the raw data into libav's structures, I consider the question answered.
The discussion and suggestions with/from Vencat in the comments section led to additional questions I have, but those should be discussed in a new question, I guess.

How to pass wait time or sleep time through JSON data?

Is it possible to pass wait time or sleep time through JSON data?
For example, this is my JSON data:
{
    "Departuremonth": "5",
    "Creditcard": "4012000077777777",
    "Firstname": "test",
    "Lastname": "user",
    "Phone": "8111231311"
}
which I will be fetching in my Protractor code. Now, in some places in Protractor, I have used sleep to wait for an element:
browser.sleep(3000); // sleep for 3 seconds
At the moment that sleep time is hard-coded in Protractor; I want to pull the sleep or wait time from the JSON data instead.
Can anyone suggest something on this?
Yes, it is possible to pass any kind of data from a JSON file, and it's quite easy too. Follow the code below.
1. Create a JSON file, e.g. testData.json, with the following contents:
testData.json:
{ "shortWait":"5000",
"mediumWait":"12000",
"longWait":"20000"
}
2. Import the testData.json file in the spec.js/methods.js file where you would like to use it:
var input = require('../testData.json'); // make sure the testData.json file path is correct
3. Now you can read the respective wait value from the testData.json file using object.property syntax, like "input.shortWait", and pass it to wait/sleep methods as follows:
it('example to pass wait time from json file:', function () {
    // ... lines of code ...
    utility.wait(input.shortWait); // assumes a wait method is available in your utility file
    browser.sleep(input.longWait);
});
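As a side note, a sketch of a more robust alternative (this uses Protractor's ExpectedConditions; the locator is a placeholder): rather than sleeping for a fixed time, browser.wait polls until the element is ready, and the JSON value then serves as the timeout:
var EC = protractor.ExpectedConditions;
var input = require('../testData.json');

it('waits for an element using a timeout from the json file', function () {
    var button = element(by.id('submit')); // placeholder locator
    // resolves as soon as the element is visible, or fails after longWait ms
    browser.wait(EC.visibilityOf(button), input.longWait);
    button.click();
});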

Append string to a Flash AS3 shared object: For science

So I have a little flash app I made for an experiment where users interact with the app in a lab, and the lab logs the interactions.
The app currently traces a timestamp and a string when the user interacts; it's a useful little data log in the console:
trace(Object(root).my_date + ": User selected the cupcake.");
But I need to move away from using traces that show up in the debug console, because they won't work outside of the Flash CS6 development environment.
Instead, I want to make the log in an SO ("Shared Object", the little locally saved Flash cookies). Ya' know, one of these deals:
submit.addEventListener("mouseDown", sendData);
function sendData(evt:Event):void
{
    so = SharedObject.getLocal("experimentalflashcookieWOWCOOL");
    so.data.Title = Title.text;
    so.data.Comments = Comments.text;
    so.data.Image = Image.text;
    so.flush();
}
I don't want to create any kind of architecture or server interaction, just append my timestamps and strings to an SO. Screw complexity! I intend to use all 100kb of the SO allocation with pride!
But I have absolutely no clue how to append data to the shared object. (Cough)
Any ideas how I could create a log file out of a shared object? I'll be logging about 200 lines, so it'd be awkward to generate a new variable name for each line and then save each variable over 4 hours of use. Appending to a single variable would be awesome.
You could just replace your so.data.Title line with this:
// check whether so.data.Title is a String: if it is, append to it; if not, set/overwrite it
so.data.Title = (so.data.Title is String) ? so.data.Title + Title.text : Title.text;
Please consider not using a capitalized first letter for instance names (as in Title). In ActionScript (and most C-based languages), instance names and variables are usually written with a lowercase first letter.

Serial communication with Arduino and serproxy to Flash: expected result only in test mode

I have an Arduino with three sensors connected. Every 100 ms the Arduino prints a new line to the serial port with the three updated values separated by #. For example:
23#11#50_18_1_14_48_0_226_0_16_33_64_2_1_97_36_128_24_170
26#12#50_18_1_14_48_0_226_0_16_33_64_2_1_97_36_128_24_170
33#11#50_18_1_14_48_0_226_0_16_33_64_2_1_97_36_128_24_170
48#10#50_18_1_14_48_0_226_0_16_33_64_2_1_97_36_128_24_170
Using serproxy to pass these values to Flash, and the AS3 socket functions, I can trace the serial output in test mode (CTRL+ENTER). At this point everything works as expected.
When I publish and run the SWF file I can receive the serial data, but not as expected... every 100 ms I receive just part of the output, not always the same part, and not always the full output that I receive in test mode.
Could it be something related to security?
Here is my flash code to receive the data:
var dataSocket:Socket = new Socket("localhost", 5333);
dataSocket.addEventListener(ProgressEvent.SOCKET_DATA, socketDataHandler);

function socketDataHandler(event:ProgressEvent):void
{
    var sensValue:String = dataSocket.readUTFBytes(dataSocket.bytesAvailable);
    trace(sensValue);
    var sensData:Array = sensValue.split("#");
    sensor1 = sensData[0].toString();
    sensor2 = sensData[1].toString();
    sensor3 = sensData[2].toString();
}
Any ideas? Thanks
You're running into Socket security restrictions that are bypassed when you're running in debug mode.
Depending on what you need to do:
(1) If you want Flash in the browser, you will need to look into the security sandbox: http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7c60.html
(2) Or, if you can use AIR instead, you won't have to deal with any of this.

Using Other Data Sources for cubism.js

I like the user experience of cubism, and would like to use this on top of a backend we have.
I've read the API docs and some of the code, but most of this seems to be abstracted away. How exactly could I begin to use other data sources?
I have a data store of about 6k individual machines with 5 minute precision on around 100 or so stats.
I would like to query some web app with a specific identifier for a machine, and then render a cubism-style dashboard by querying a specific Mongo data store.
Writing the web app or the querying of Mongo isn't the issue.
The issue is more that cubism seems to require querying whatever data store you use for each individual data point (say you have 100 stats across a window of a week... expensive).
Is there another way I could leverage this tool to look at data that gets loaded using something similar to the code below?
var data = [];
d3.json("/initial", function(json) { data = data.concat(json); });
d3.json("/update", function(json) { data.push(json); });
Cubism takes care of initialization and updates for you: the initial request covers the full visible window (start to stop, typically 1,440 data points), while subsequent requests fetch only the most recent values (7 data points).
Take a look at context.metric for how to implement a new data source. The simplest possible implementation is like this:
var foo = context.metric(function(start, stop, step, callback) {
    d3.json("/data", function(data) {
        if (!data) return callback(new Error("unable to load data"));
        callback(null, data);
    });
});
You would extend this to change the "/data" URL as appropriate, passing in the start, stop and step times, and whatever else you want to use to identify a metric. For example, both Cube and Graphite use a metric expression as an additional query parameter.
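For instance, a sketch of such an extension (the endpoint, its query parameters, and the response shape are assumptions about your web app; cubism passes start and stop as Dates and step in milliseconds):
function machineMetric(machine, stat) {
    return context.metric(function(start, stop, step, callback) {
        d3.json("/data"
            + "?machine=" + encodeURIComponent(machine)
            + "&stat=" + encodeURIComponent(stat)
            + "&start=" + (+start) // Dates coerce to epoch milliseconds
            + "&stop=" + (+stop)
            + "&step=" + step, function(data) {
            if (!data) return callback(new Error("unable to load data"));
            // assumes the endpoint returns a JSON array of numbers,
            // one value per step between start and stop
            callback(null, data);
        });
    }, machine + " " + stat); // label shown next to the horizon chart
}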