Does anyone have experience using ffmpeg with AIR? I have been able to load and play a video, but I have not been able to communicate with ffmpeg through stdin to control the stream (seek, pause, etc.).
Apparently I need to use filters to communicate with a running instance of ffmpeg, but I am struggling with the syntax (or else I am misinformed 🙄).
Based on a few things gleaned from the web I am trying this, but it doesn't do anything:
var cmd:String = "-f lavfi -i movie=filename='" + videoPath + "':streams=0+1[out0][out1] -c:a copy -c:v copy -f flv -"
ffmpegProcess.standardInput.writeUTF(cmd + "\n");
Any tips would be most welcome!
Loading code (setup of the NetStream, etc. is done elsewhere). This works.
ffmpegArgs = new Vector.<String>();
ffmpegArgs.push("-re", "-i",videoPath,"-c:a", "copy", "-c:v", "copy","-f", "flv", "-");
var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
nativeProcessStartupInfo.executable = ffmpegFile;
nativeProcessStartupInfo.arguments = ffmpegArgs;
ffmpegProcess.start(nativeProcessStartupInfo);
I am recording audio (oga/vorbis) files with MediaRecorder. When I record these files through Chrome I run into problems: I cannot edit the files with ffmpeg, and when I try to play them in Firefox it says they are corrupt (they do play fine in Chrome, though).
Looking at their metadata with ffmpeg I get this:
Input #0, matroska,webm, from '91.oga':
  Metadata:
    encoder         : Chrome
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #0:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)
[STREAM]
index=0
codec_name=opus
codec_long_name=Opus (Opus Interactive Audio Codec)
profile=unknown
codec_type=audio
codec_time_base=1/48000
codec_tag_string=[0][0][0][0]
codec_tag=0x0000
sample_fmt=fltp
sample_rate=48000
channels=1
channel_layout=mono
bits_per_sample=0
id=N/A
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/1000
start_pts=0
start_time=0.000000
duration_ts=N/A
duration=N/A
bit_rate=N/A
max_bit_rate=N/A
bits_per_raw_sample=N/A
nb_frames=N/A
nb_read_frames=N/A
nb_read_packets=N/A
DISPOSITION:default=1
DISPOSITION:dub=0
DISPOSITION:original=0
DISPOSITION:comment=0
DISPOSITION:lyrics=0
DISPOSITION:karaoke=0
DISPOSITION:forced=0
DISPOSITION:hearing_impaired=0
DISPOSITION:visual_impaired=0
DISPOSITION:clean_effects=0
DISPOSITION:attached_pic=0
TAG:language=eng
[/STREAM]
[FORMAT]
filename=91.oga
nb_streams=1
nb_programs=0
format_name=matroska,webm
format_long_name=Matroska / WebM
start_time=0.000000
duration=N/A
size=7195
bit_rate=N/A
probe_score=100
TAG:encoder=Chrome
As you can see there are problems with the duration. I have looked at posts like this:
How can I add predefined length to audio recorded from MediaRecorder in Chrome?
But even trying that, I got errors when trying to chop and merge files. For example, when running:
ffmpeg -f concat -i 89_inputs.txt -c copy final.oga
I get a lot of this:
[oga @ 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57612, current: 1980; changing to 57613. This may result in incorrect timestamps in the output file.
[oga @ 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57613, current: 2041; changing to 57614. This may result in incorrect timestamps in the output file.
DTS -442721849179034176, next:42521 st:0 invalid dropping
PTS -442721849179034176, next:42521 invalid dropping st:0
[oga @ 00000000006789c0] Non-monotonous DTS in output stream 0:0; previous: 57614, current: 2041; changing to 57615. This may result in incorrect timestamps in the output file.
[oga @ 00000000006789c0] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
DTS -442721849179031296, next:42521 st:0 invalid dropping
PTS -442721849179031296, next:42521 invalid dropping st:0
Does anyone know what we need to do to audio files recorded from Chrome for them to be useful? Or is there a problem with my setup?
Recorder js:
if (navigator.getUserMedia) {
    console.log('getUserMedia supported.');
    var constraints = { audio: true };
    var chunks = [];

    var onSuccess = function(stream) {
        var mediaRecorder = new MediaRecorder(stream);

        record.onclick = function() {
            mediaRecorder.start();
            console.log(mediaRecorder.state);
            console.log("recorder started");
            record.style.background = "red";
            stop.disabled = false;
            record.disabled = true;
            var aud = document.getElementById("audioClip");
            start = aud.currentTime;
        }

        stop.onclick = function() {
            console.log(mediaRecorder.state);
            console.log("Recording request sent.");
            mediaRecorder.stop();
        }

        mediaRecorder.onstop = function(e) {
            console.log("data available after MediaRecorder.stop() called.");
            var audio = document.createElement('audio');
            audio.setAttribute('controls', '');
            audio.setAttribute('id', 'audioClip');
            audio.controls = true;
            var blob = new Blob(chunks, { 'type' : 'audio/ogg; codecs="vorbis"' });
            chunks = [];
            var audioURL = window.URL.createObjectURL(blob);
            audio.src = audioURL;
            sendRecToPost(blob); // this just sends the audio blob to the server by POST
            console.log("recorder stopped");
        }
I found in the ffmpeg documentation that you can set metadata during conversion using this option:
//-metadata[:metadata_specifier] key=value (output,per-metadata)
//Set a metadata key/value pair.
ffmpeg -i in.avi -metadata title="my title" out.flv
You can also test whether the duration conversion limit works in your case (a sketch follows the quoted docs):
//-t duration (input/output)
//When used as an input option (before -i), limit the duration of data read from the input file.
//When used as an output option (before an output url), stop writing the output after its duration reaches duration.
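For illustration, here is a minimal sketch of applying both options from a Node server. This is an assumption on my part: the question only shows that the blob is posted to a server via sendRecToPost. The 30-second cap, the title value, and the output filename are placeholders.
// Hypothetical Node-side sketch (not from the question): re-mux the uploaded
// recording with ffmpeg, stamping a title and capping the output duration with -t.
var spawn = require('child_process').spawn;

var args = [
    '-i', '91.oga',                 // the uploaded recording (name taken from the metadata above)
    '-metadata', 'title=my title',  // -metadata key=value, as quoted from the docs
    '-t', '30',                     // output option: stop writing after 30 seconds (placeholder)
    '-c', 'copy',                   // copy the stream, no re-encode
    '91_fixed.oga'                  // placeholder output name
];

spawn('ffmpeg', args, { stdio: 'inherit' })
    .on('close', function(code) {
        console.log('ffmpeg exited with code ' + code);
    });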
I am trying to stream MP4 video as it is encoded from a webserver. I believe I used the appropriate flags, but it is not working correctly. When I download the video from my stream and open it with VLC, it properly shows the duration. Since a socket is not seekable, I assume it writes the metadata to the end? My Chrome browser always shows a duration of 8 seconds. The first 8 seconds play at normal speed, but afterwards the pause button turns into a play button and the video plays very fast, probably as fast as it is received. The audio, however, plays at normal speed. I tried document.getElementById('myVid').duration = 20000 but it is a read-only property.
I wonder, is there any way to explicitly state the duration in the HTTP headers or in any other way? I cannot find any documentation about it.
ffmpeg -i - -vcodec libx264 -acodec libvo_aacenc -ar 44100 -ac 2 -ab 128000 -f mp4 -movflags frag_keyframe+faststart pipe:1 -fflags +genpts -re -profile baseline -level 30 -preset fast
To the close-voters who think this is not programming related: I use this in my own server that I coded, and I need to set the duration programmatically via JavaScript or by setting an HTTP header. I believe it may be related to either ffmpeg or the HTTP headers, which is why I posted it here.
app.get("/video/*", function(req,res){
res.writeHead(200, {
'Content-Type': 'video/mp4',
});
var dir = req.url.split("/").splice(2).join("/");
var buf = new Buffer(dir, 'base64');
var src = buf.toString();
var Transcoder = require('stream-transcoder');
var stream = fs.createReadStream(src);
// I added my own flags to this module, they are at below:
new Transcoder(stream)
.videoCodec('libx264')
.audioCodec("libvo_aacenc")
.sampleRate(44100)
.channels(2)
.audioBitrate(128 * 1000)
.format('mp4')
.on('finish', function() {
console.log("finished");
})
.stream().pipe(res);
});
The exec function in that stream-transcoder module, with the flags I added:
a.push("-fflags");
a.push("+genpts");
a.push("-re");
a.push("-profile");
a.push("baseline");
a.push("-level");
a.push("30");
a.push("-preset");
a.push("fast");
a.push("-strict");
a.push("experimental");
a.push("-frag_duration");
a.push("" + 2 * (1000 * 1000));
var child = spawn('ffmpeg', a, {
cwd: os.tmpdir()
});
I believe the X-Content-Duration header is what you need.
Mozilla documentation on X-Content-Duration*
* The documentation discusses the OGG format, but the principle applies to other video formats
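Applied to the Express route from the question, a minimal sketch might look like this (durationInSeconds is a placeholder; you would have to compute or estimate it server-side, e.g. by probing the source before transcoding):
app.get("/video/*", function(req, res) {
    var durationInSeconds = 20; // placeholder: determine this from the source yourself
    res.writeHead(200, {
        'Content-Type': 'video/mp4',
        // X-Content-Duration is given in seconds (see the Mozilla documentation above)
        'X-Content-Duration': String(durationInSeconds)
    });
    // ...then transcode and pipe to res exactly as before...
});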
As discussed in a previous question, I have built a prototype (using MVC Web API, NAudio and NAudio.Lame) that is streaming live low quality audio after converting it to mp3. The source stream is PCM: 8K, 16-bit, mono and I'm making use of html5's audio tag.
On both Chrome and IE11 there is a 15-34 second delay (high-latency) before audio is heard from the browser which, I'm told, is unacceptable for our end users. Ideally the latency would be no more than 5 seconds. The delay occurs even when using the preload="none" attribute within my audio tag.
Looking more closely at the issue, it appears as though both browsers will not start playing audio until they have received ~32K of audio data. With that in mind, I can affect the delay by changing Lame's MP3 'bitrate' setting. However, if I reduce the delay (by sending more data to the browser for the same length of audio), I will introduce audio drop-outs later.
Examples:
If I use Lame's V0 encoding the delay is nearly 34 seconds which requires almost 0.5 MB of source audio.
If I use Lame's ABR_32 encoding, I can reduce the delay to 10-15 seconds but I will experience pauses and drop-outs throughout the listening session.
Questions:
Any ideas how I can minimize the start-up delay (latency)?
Should I continue investigating various Lame 'presets' in hopes of picking the "right" one?
Could it be that MP3 is not the best format for live streaming?
Would switching to Ogg/Vorbis (or Ogg/OPUS) help?
Do we need to abandon HTML5's audio tag and use Flash or a java applet?
Thanks.
You cannot reduce the delay, since you have no control over the browser's code or buffering size. The HTML5 specification does not enforce any constraint here, so I don't see any reason why it would improve.
You can, however, implement a solution with the Web Audio API (it's quite simple), where you handle the streaming yourself.
If you can split your MP3 into fixed-size chunks (so that each chunk's size is known beforehand, or at least at receive time), then you can have live streaming in 20 lines of code. The chunk size will be your latency.
The key is to use AudioContext::decodeAudioData.
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var offset = 0;
var byteOffset = 0;
var minDecodeSize = 16384; // This is your chunk size

var request = new XMLHttpRequest();
request.onprogress = function(evt)
{
    if (request.response)
    {
        var size = request.response.length - byteOffset;
        if (size < minDecodeSize) return;
        // In Chrome, XHR stream mode gives text, not an ArrayBuffer.
        // In Firefox, you get an ArrayBuffer as is.
        var ab;
        if (request.response instanceof ArrayBuffer)
        {
            ab = request.response; // already binary, use it directly
        }
        else
        {
            // copy the new text bytes into an ArrayBuffer
            ab = new ArrayBuffer(size);
            var buf = new Uint8Array(ab);
            for (var i = 0; i < size; i++)
                buf[i] = request.response.charCodeAt(i + byteOffset) & 0xff;
        }
        byteOffset = request.response.length;
        context.decodeAudioData(ab, function(buffer) {
            playSound(buffer);
        }, onError);
    }
};
request.open('GET', url, true);
request.responseType = expectedType; // 'stream' in Chrome, 'moz-chunked-arraybuffer' in Firefox, 'ms-stream' in IE
request.overrideMimeType('text/plain; charset=x-user-defined');
request.send(null);

function onError(e) {
    // minimal error callback for decodeAudioData
    console.log('decodeAudioData error', e);
}

function playSound(buffer) {
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = buffer;                     // tell the source which sound to play
    source.connect(context.destination);        // connect the source to the context's destination (the speakers)
    source.start(offset);                       // schedule the source at the running offset
    // note: on older systems, may have to use deprecated noteOn(time);
    offset += buffer.duration;
}
I am trying to make a program to convert PNG files into ATF textures, but I am having some trouble when I try to use NativeProcess... I am using ActionScript 3 with IntelliJ IDEA.
I want to pass the prompt command png2atf -c p -i starling-atf.png -o starling.atf to my NativeProcess...
So, I choose a PNG file from a File().load object, and then I want to take this file, send it as a parameter to my NativeProcess, and do the conversion via the prompt command (png2atf -c p -i starling-atf.png -o starling.atf)...
Any ideas?
@puggsoy, you are right, the problem was spaces... I put some spaces in the args, that's why.
Here is the right code:
f.nativePath = "C:/projects/SDK/Adobe Gaming SDK 1.0.1/Utilities/ATF Tools/Windows/png2atf.exe";
nativeProcessStartupInfo.executable = f;
// now create the arguments Vector to pass it to the executable file
var processArgs:Vector.<String> = new Vector.<String>();
processArgs[0] = "-c";
processArgs[1] = arg;
processArgs[2] = "-i";
processArgs[3] = input;
processArgs[4] = "-o";
processArgs[5] = output;
nativeProcessStartupInfo.arguments = processArgs;
process = new NativeProcess();
process.start(nativeProcessStartupInfo);
As the title says, what command/class can I use for that? And if such a function exists, is there a way to get a callback from the command shell?
You can run and communicate with other processes in AIR as per this article.
So, if you wanted to run the Windows command prompt, you would have to provide the location of cmd.exe, which is "%windir%\system32\cmd.exe". Unfortunately, AIR won't understand %windir%, so you will have to actually provide the full path to the Windows directory (usually C:, but you will have to figure out how to handle cases where it is not).
Annoyingly, the command prompt does not seem to act like a normal input stream; I receive errors when trying to write to it. There may be some way around that that I don't know about. Instead, though, you can just start the command prompt with your arguments.
For instance, the following code will start a command prompt (assuming Windows is on C), print "hello" and trace the output (which in this case will just be "hello").
var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
var file:File = File.applicationDirectory.resolvePath("C:\\Windows\\System32\\cmd.exe");
nativeProcessStartupInfo.executable = file;
var processArgs:Vector.<String> = new Vector.<String>();
processArgs.push("/C echo 'hello'");
nativeProcessStartupInfo.arguments = processArgs;
process = new NativeProcess();
process.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
process.start(nativeProcessStartupInfo);
public function onOutputData(event:ProgressEvent):void
{
    trace("Got: ", NativeProcess(event.target).standardOutput.readUTFBytes(process.standardOutput.bytesAvailable));
}