handbrake-js (node): no video on HTML5 <video> in Chrome

I'm trying to convert a video using handbrake-js for node.
At first I tried specifying the bitrate, video size, codecs, etc. The goal is to generate several HTML5-compatible streams to be used as a source for a canvas video in WebGL. Everything seems to work fine: it outputs the video, and when I open it in QuickTime or VLC it looks fine. However, when I use it in a <video> tag, there is no video, just audio.
The following code is called within a function which receives an "ops" object with the width and height.
hb.spawn({ input: new_location + "original" + ext, output: new_location + ops.name, optimize: true, vb: ops.vb, "width": ops.width, "height": ops.height, "rate": 30 })
The console shows the video being converted and a clean exit, but WebGL reports:
[.Offscreen-For-WebGL-0x7fbf21074c00]RENDER WARNING: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering.
Note: the video IS a power of 2 (1024x512). Also, when I play the original file, it doesn't show the error.
To debug, I have even attached the video to the HTML and changed the CSS to show it, but there is no video, just audio.
I have even tried:
hb.spawn({ input: "input.mp4", output: "output.m4v" })
and a simple <video> tag. Well... input.mp4 displays fine; output.m4v always fails to show video in HTML (Chrome; Safari seems to work just fine).
Any ideas?

If you say preset: Normal works, then you can run handbrake --preset-list (using handbrake-js installed as a command-line app) to see which encoder options the "Normal" preset uses:
+ Normal: -e x264 -q 20.0 -a 1 -E ffaac -B 160 -6 dpl2 -R Auto -D 0.0 --audio-copy-mask aac,ac3,dtshd,dts,mp3 --audio-fallback ffac3 -f mp4 --loose-anamorphic --modulus 2 -m --x264-preset veryfast --h264-profile main --h264-level 4.0
So, try running hb.spawn using the options above and remove any options you don't need.
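For example, a rough sketch of that suggestion (my assumption: handbrake-js accepts HandBrakeCLI's long option names as keys, and the event names below follow the handbrake-js README):
const hb = require("handbrake-js");

hb.spawn({
    input: "input.mp4",
    output: "output.m4v",
    encoder: "x264",            // -e x264
    quality: 20,                // -q 20.0
    "x264-preset": "veryfast",  // --x264-preset veryfast
    "h264-profile": "main",     // --h264-profile main
    "h264-level": "4.0",        // --h264-level 4.0
    optimize: true              // web-optimized (moov atom at the front)
})
.on("error", function (err) { console.error(err); })
.on("complete", function () { console.log("encode finished"); });
Once that plays in Chrome's <video> tag, re-add your width/height/rate options one at a time to find the one that breaks playback.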

Related

JPEG live stream in html slow

From a raw video source, I'm trying to stream JPEG images to HTML as fast as possible on an embedded platform/board running Linux.
On the GStreamer side I can see that the JPEG image is updated at ~37 fps. The pipeline looks like this:
appsrc -> videoconvert -> jpegenc -> multifilesink
Based on this question, I created the following embedded HTML:
<!DOCTYPE html>
<html>
<head>
<meta charset='UTF-8' />
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="app.js"></script>
</head>
<body>
<img id="snapshot" src="snapshot.jpeg"/>
</body>
</html>
and the JavaScript:
$(function() {
    function refreshImage() {
        $("#snapshot").attr("src", "snapshot.jpeg?" + Math.random());
        setTimeout(refreshImage, 20);
    }
    refreshImage();
});
Opening a web browser on a PC and typing the platform/board IP, I can see the video stream, but the image updates too slowly; I would expect more fluid/fast video given the source frame rate (37 fps).
Does anyone know what could be the reason why the update is slow?
I think this deserves a proper analysis, since it is an interesting subject (at least for me).
Testing environment
I replicated the scenario on 2 PCs within the same LAN.
PC 1 creates JPEG images from a live stream with the following pipeline,
gst-launch-1.0 -v rtspsrc location="rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream" \
! rtph264depay ! avdec_h264 \
! timeoverlay halignment=right valignment=bottom \
! videorate ! video/x-raw,framerate=37000/1001 ! jpegenc ! multifilesink location="snapshot.jpeg"
and serves index.html, app.js and the (endlessly updated) snapshot.jpeg with Python's simple HTTP server:
python -m SimpleHTTPServer 8080
PC 2 accesses index.html using the Chrome browser (with the developer tools window open) and displays the images.
For testing purposes:
I added a timeoverlay to the GStreamer pipeline, which stamps each image with a timestamp in the bottom-right corner.
I increased the refresh period in the JS function to 1000 ms.
Analysis of test results
Here is the browser's network log.
The Time column shows the period (in ms) the browser spends fetching (downloading) one image from the server. Those periods vary, averaging ~100 ms for images of ~87 KB.
The fetch time interval actually includes:
the interval the HTTP GET needs to reach the server from the browser,
the interval the server needs to read the image from disk and send it back as the HTTP response,
the interval the HTTP response needs to reach the browser.
The 1st and 3rd intervals directly depend on the network environment: the "farther" the browser is from the server, the greater the interval.
The 2nd interval is proportional to server "speed": how fast the server can read images from disk and handle the HTTP request/response.
There is another interval, proportional to the "speed" of the PC running the browser: how fast that PC can handle the HTTP GET request/response and re-render the image.
Conclusion
There are many unavoidable delays that depend on the testing environment - the network, and the capabilities of the server machine and of the client machine running the browser - and your code in the browser is already executing about as fast as it can.
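One small hardening worth trying, though (my suggestion, not part of the test above): trigger each new request from the previous image's load event instead of a fixed 20 ms timer, so a slow fetch can never pile up overlapping requests:
$(function() {
    function refreshImage() {
        // Request the next frame only after the current one finishes
        // loading (or fails), so fetches never overlap.
        $("#snapshot").one("load error", function() {
            setTimeout(refreshImage, 0);
        }).attr("src", "snapshot.jpeg?" + Math.random());
    }
    refreshImage();
});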
In any case, 37 fps sounds like live-stream video. There are specialized protocols for streaming video that can be shown in a browser (e.g. MPEG-DASH or HLS), which serve the video chunk-by-chunk (where each chunk contains many video frames).
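For illustration, a minimal HLS playback sketch (assumptions on my part: hls.js is loaded on the page and the server publishes a playlist at stream.m3u8; neither is part of the setup above):
var video = document.getElementById("video"); // hypothetical <video> element
if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource("stream.m3u8"); // placeholder playlist URL
    hls.attachMedia(video);
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
    video.src = "stream.m3u8"; // Safari plays HLS natively
}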

Updated (reproducible) - Gaps when recording using the MediaRecorder API (audio/webm opus)

----- UPDATE HAS BEEN ADDED BELOW -----
I have an issue with MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).
I'm using it to record speech from the web page (Chrome was used in this case) and save it as chunks.
I need to be able to play it both while and after it is being recorded, so it's important to keep those chunks.
Here is the code which is recording data:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function(stream) {
    recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' });
    recorder.ondataavailable = function(e) {
        // Read blob from `e.data`, base64-encode and send to server;
    };
    recorder.start(1000);
});
The issue is that the WebM file I get when I concatenate all the parts is (rarely) corrupted. I can play it as WebM, but when I try to convert it (with ffmpeg) to something else, I get a file with shifted timings.
For example, I'm trying to convert a file with duration 00:36:27.78 to WAV, but I get a file with duration 00:36:26.04, which is 1.74 s less.
At the beginning of the file the audio is the same, but after about 10 minutes the WebM file plays with a small delay.
After some research, I found out that it also does not play correctly with the browser's MediaSource API, which I use for playing the chunks. I tried 2 ways of playing those chunks:
When I just merge all the parts into a single blob, it works fine.
When I add them via the sourceBuffer object, there are gaps (I can see them by inspecting the buffered property):
697.196 - 697.528 (~330ms)
996.198 - 996.754 (~550ms)
1597.16 - 1597.531 (~370ms)
1896.893 - 1897.183 (~290ms)
Those gaps total 1.55 s, and they are exactly at the places where the desync between the WAV and WebM files starts. Unfortunately, the file where this is reproducible cannot be shared because it's a customer's private data, and I have not been able to reproduce the issue on other media yet.
What can be the cause for such an issue?
----- UPDATE -----
I was able to reproduce the issue on https://jsfiddle.net/96uj34nf/4/
To see the problem, click the "Print buffer zones" button and it will display the buffered time ranges:
0 - 136.349, 141.388 - 195.439, 197.57 - 198.589
That leaves two gaps:
136.349 - 141.388
195.439 - 197.57
So, as you can see, there are ~5-second and ~2-second gaps. I would be happy if someone could shed some light on why this is happening or how to avoid it.
Thank you
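(For reference, a "Print buffer zones"-style check is just a walk over the SourceBuffer's buffered TimeRanges; a minimal sketch, where sourceBuffer is the buffer the chunks are appended to:)
for (var i = 0; i < sourceBuffer.buffered.length; i++) {
    // Print each buffered range as "start - end" in seconds
    console.log(sourceBuffer.buffered.start(i).toFixed(3) + " - " +
                sourceBuffer.buffered.end(i).toFixed(3));
}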
It's 7 months later so I guess you resolved this, but in case not...
When we started working with the MediaRecorder, we were having a few issues, including recordings disappearing (maybe going over a RAM quota, after which the arrays were deallocated, or something like that).
What solved all our issues was to immediately put each chunk into an IndexedDB object store so it is saved to disk, and at the end of the recording to build all those chunks into a blob and download it. No further work with the chunks, only the complete file.
I know this doesn't answer your question, but maybe it helps.
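A minimal sketch of that approach (the database and store names are illustrative, not from the original answer; recorder is the MediaRecorder from the question's snippet): write each chunk to an IndexedDB object store as it arrives, then assemble one Blob when recording stops.
var open = indexedDB.open("recordings", 1);
open.onupgradeneeded = function() {
    open.result.createObjectStore("chunks", { autoIncrement: true });
};
open.onsuccess = function() {
    var db = open.result;
    recorder.ondataavailable = function(e) {
        // Persist each chunk to disk immediately
        db.transaction("chunks", "readwrite").objectStore("chunks").add(e.data);
    };
    recorder.onstop = function() {
        var req = db.transaction("chunks").objectStore("chunks").getAll();
        req.onsuccess = function() {
            // Only the complete file is used from here on
            var blob = new Blob(req.result, { type: 'audio/webm; codecs="opus"' });
            // ...download or upload `blob` here
        };
    };
};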

can't re-stream using FFMPEG to MP4 HTML5 video

I have been struggling to get live streaming working with FFmpeg for many hours, so I'm raising the white flag and asking for help here.
My scenario: I have an IP security camera that I can successfully connect to via RTSP (H.264) and save the video as file segments. These play back fine, either in a standalone app like VLC or via a node.js web server app that sends the 'video/mp4' and keep-alive headers and streams the MP4 files previously saved by FFmpeg to an HTML5 video client.
However, I want to take the same RTSP stream and re-stream it live to an HTML5 client. I know the HTML5 client bits and the FFmpeg remuxing to MP4 work, since the MP4 recording/streaming works.
I have tried the following:
1) Setting the output to an HTTP URL. I don't think FFmpeg supports this, as I get 'input/output error', and the FFmpeg documentation talks about another app called ffserver, which isn't supported on Windows:
ffmpeg -i rtsp://admin:12345#192.168.1.234:554 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov http://127.0.0.1:8888
2) Since ffmpeg runs as a child process spawned from node.js, I have tried piping its STDOUT to a node HTTP server, using the same header I use for the recording playback stream. I can view this stream in VLC, which is a good sign, but I can't get the HTML5 client to recognize the stream: it shows blank, or occasionally a static image of the stream.
var http = require("http");
var child_process = require("child_process");

// Handle each client request by spawning a new FFmpeg instance
var liveStream = function (req, resp) {
    resp.writeHead(200, { "Content-Type": "video/mp4", "Connection": "keep-alive" });
    var xffmpeg = child_process.spawn("ffmpeg", [
        "-i", "rtsp://admin:12345#192.168.1.234:554",
        "-vcodec", "copy",
        "-f", "mp4",
        "-movflags", "frag_keyframe+empty_moov",
        "-" // output to stdout
    ], { detached: false });
    xffmpeg.stdout.pipe(resp);
    xffmpeg.on("exit", function (code) {
        console.log("Xffmpeg terminated with code " + code);
    });
    xffmpeg.on("error", function (e) {
        console.log("Xsystem error: " + e);
    });
    xffmpeg.stdout.on("data", function (data) {
        console.log("Xdata rcv " + data); // debug only: prints raw MP4 bytes
    });
    xffmpeg.stderr.on("data", function (data) {
        console.log("XFFMPEG -> " + data);
    });
};
var liveServer = http.createServer(liveStream);
liveServer.listen(8888); // port is illustrative
I have tried both IE11 and Chrome HTML5 clients.
I suspect there is something not quite right with the format of the stream being sent - enough to stop the HTML5 video client, but not enough to stop VLC. The irritating thing is that the code above works just fine for playing back MP4 streams that have been previously recorded.
Any ideas how to get live re-streaming via FFMPEG working? Thanks.
Just curious, have you solved it? I'm doing basically the same thing, except it's a screen cast. I've switched to the WebM format, which plays perfectly; however, the video will "lag" behind by an additional few seconds, which is bad for me but might work for you.
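For what it's worth, a sketch of that WebM variant dropped into the node server above (my assumption, not the answerer's exact setup; note that unlike -vcodec copy this re-encodes the video, which costs CPU):
var xffmpeg = child_process.spawn("ffmpeg", [
    "-i", "rtsp://admin:12345#192.168.1.234:554",
    "-c:v", "libvpx",        // VP8 re-encode (WebM cannot carry H.264)
    "-deadline", "realtime", // libvpx: favor encoding speed over compression
    "-an",                   // drop audio (or encode it with libopus)
    "-f", "webm", "-"        // stream WebM to stdout
], { detached: false });
// ...and send "Content-Type: video/webm" in the HTTP response header.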

How to receive an RTP, RTCP, or UDP stream from GStreamer in an HTML5 video?

I'm trying to receive an RTP/RTCP video stream in HTML5; the stream is generated by GStreamer. I used the GStreamer examples, so I can send RTP on port 5000 and RTCP on port 5001, and I can receive the streams using GStreamer. But I could not receive them with HTML5. I read a bit about HTML5 and saw that it can play theora/ogg, webm/vp8, and mp4/avc, and that the protocols may be HTTP, RTP, RTCP, UDP, and others, but I could only get HTTP to work; RTP, RTCP, and UDP did not. I did get a very satisfactory result using the VLC plugin for Mozilla Firefox with the UDP protocol. I wonder if anyone has any tips; I do not want to use source files such as src="/tmp/test.avi" - it needs to be a video stream, which can be UDP, RTP, or RTCP. Thank you!
If you don't need to stream at low fps, you can use GStreamer to transcode your stream to MJPEG and serve it over TCP, and then use VLC to pick up this TCP stream and re-stream it over HTTP. It works very well (0.5 s of latency), but if you decrease the fps (to 1 fps), VLC introduces a latency of around 11 s.
Here are some test commands that should work out of the box, using the GStreamer videotestsrc:
GStreamer:
gst-launch -v videotestsrc horizontal-speed=1 ! deinterlace ! videorate ! videoscale \
! video/x-raw-yuv, framerate=15/1, width=256, height=144 \
! jpegenc quality=20 ! multipartmux boundary="--videoboundary" \
! tcpserversink host=localhost port=3000
VLC:
vlc -vvv -I rc tcp://localhost:3000 --sout \
'#standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=localhost:8081}'
Then open a browser at http://localhost:8081 (or create an HTML page with an img tag whose "src" attribute is http://localhost:8081).
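For example, the embedding page can be as small as:
<img src="http://localhost:8081" />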

Display first page of PDF as Image

I am creating a web application where I display images/PDFs in thumbnail format. On clicking the respective image/PDF, it opens in a new window.
For PDF, I have (this is the code of the new window):
<iframe src="images/testes.pdf" width="800" height="200" />
Using this I can see the whole PDF in the web browser. However, for the thumbnail I want to display only the first page of the PDF as an image.
I tried
<h:graphicImage value="images/testes.pdf" width="800" height="200" />
however, it is not working. Any idea how to get this done?
Update 1
I provided the path of a PDF file for example purposes. However, I actually have the images in a database, and the real code is as below:
<iframe src="#{PersonalInformationDataBean.myAttachmentString}" width="800" height="200" />
Update 2
For the sake of the thumbnail, what I am using is
<h:graphicImage height=200 width=200 value="....">
however, I need to achieve the same for PDFs also.
I hope it is clear what I am expecting...
I'm not sure if all browsers display your embedded PDF (done via <h:graphicImage value="some.pdf" ... />) equally well.
Extracting 1st Page as PDF
If you insist on using PDF, I'd recommend one of these two command-line tools to extract the first page of any PDF:
pdftk
Ghostscript
Both are available for Linux, Mac OS X and Windows.
pdftk command
pdftk input.pdf cat 1 output page-1-of-input.pdf
Ghostscript command
gs -o page-1-of-input.pdf -sDEVICE=pdfwrite -dPDFLastPage=1 input.pdf
(On Windows use gswin32c.exe or gswin64c.exe instead of gs.)
pdftk used to be slightly faster than Ghostscript when it comes to page extraction, but for a single page that difference is probably negligible. As of the most recent released version, v9.05, the previous sentence is no longer true: I found that Ghostscript (including all startup overhead) requires ~1 second to extract the 1st page of the 756-page PDF specification, while pdftk needed ~11 seconds.
Converting 1st Page to JPEG
If you want to be sure that even older browsers can display your 1st page well, then convert it to JPEG. Ghostscript is your friend here (ImageMagick cannot do it by itself; it needs the help of Ghostscript anyway):
gs -o page-1-of-input-PDF.jpeg -sDEVICE=jpeg -dLastPage=1 input.pdf
Should you need page 33, you can do it like this:
gs -o page-33-of-input-PDF.jpeg -sDEVICE=jpeg -dFirstPage=33 -dLastPage=33 input.pdf
If you need a range of pages, like pages 17-23, try this:
gs -o page-16+%03d-of-input-PDF.jpeg -sDEVICE=jpeg -dFirstPage=17 -dLastPage=23 input.pdf
Note that the %03d notation increments with each page processed, starting at 1. So your first JPEG's name would be page-16+001-of-input-PDF.jpeg.
Maybe PNG is better?
Be aware that JPEG isn't a format well suited for images containing high black-and-white contrast and sharp edges, like text pages. PNG is much better for this.
Creating a PNG from the 1st PDF page with Ghostscript is easy:
gs -o page-1-of-input-PDF.png -sDEVICE=pngalpha -dLastPage=1 input.pdf
The same page-range options as with JPEGs apply here too.
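For instance, combining the range options shown above with the PNG device:
gs -o page-16+%03d-of-input-PDF.png -sDEVICE=pngalpha -dFirstPage=17 -dLastPage=23 input.pdf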
Warning: Don't use Ma9ic's script (posted in another answer) unless you want to...
...make the PDF->JPEG conversion consume much more time and resources than it should,
...give up your own control over the PDF->JPEG conversion process altogether.
While it may work well for you, there are many problems in these 8 little lines of Bash.
First,
it uses identify to extract the number of pages from the input PDF. However, identify (part of ImageMagick) is completely unable to process PDFs all by itself: it has to run Ghostscript as a 'delegate' to handle PDF input. It would be much more efficient to use Ghostscript directly instead of running it indirectly via ImageMagick.
Second,
it uses convert for the PDF->JPEG conversion. The same remark as above applies: convert uses Ghostscript anyway, so why not run it directly?
Third,
it loops over the pages and runs a separate convert process for every single page of the PDF - that is, 100 converts for a 100-page PDF file. That means it also runs 100 Ghostscript commands to produce 100 JPEGs.
Fourth,
Fahim Parkar's question was how to get a thumbnail from the first page of the PDF, not from all of its pages.
The script runs at least 201 different commands for a 100-page PDF, when it could all be done with just 1 command. If you use Ghostscript directly...
...not only will it run faster and more efficiently,
...but it will also give you more fine-grained and better control over the JPEG quality settings.
Use the right tool for the job, and use it correctly!
Update:
Since I was asked, here is my alternative implementation to Ma9ic's script:
#!/bin/bash
infile=${1}
gs -q -o $(basename "${infile}")_p%04d.jpeg -sDEVICE=jpeg "${infile}"
# To get thumbnail JPEGs with a width 200 pixel use the following command:
# gs -q -o name_200px_p%04d.jpg -sDEVICE=jpeg -dPDFFitPage -g200x400 "${infile}"
# To get higher quality JPEGs (but also bigger-in-size ones) with a
# resolution of 300 dpi use the following command:
# gs -q -o name_300dpi_p%04d.jpg -sDEVICE=jpeg -dJPEGQ=100 -r300 "${infile}"
echo "Done"
I even ran a benchmark on it: I converted the 756-page PDF-1.7 specification to JPEGs with both scripts.
Ma9ic's version needs 1413 seconds to generate the 756 JPEGs.
My version saves 93% of that time, taking 91 seconds.
Moreover, on my system Ma9ic's script produces mostly black JPEG images, while mine are OK.
This is what I used (the classes below appear to come from the ICEpdf library):
// Imports assumed from the API used below (ICEpdf core + standard Java):
import org.icepdf.core.exceptions.PDFException;
import org.icepdf.core.exceptions.PDFSecurityException;
import org.icepdf.core.pobjects.Document;
import org.icepdf.core.pobjects.Page;
import org.icepdf.core.util.GraphicsRenderingHints;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.image.RenderedImage;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

Document document = new Document();
try {
    document.setFile(myProjectPath);
    System.out.println("Parsed successfully...");
} catch (PDFException ex) {
    System.out.println("Error parsing PDF document " + ex);
} catch (PDFSecurityException ex) {
    System.out.println("Error encryption not supported " + ex);
} catch (FileNotFoundException ex) {
    System.out.println("Error file not found " + ex);
} catch (IOException ex) {
    System.out.println("Error handling PDF document " + ex);
}

// Save page captures to file.
float scale = 1.0f;
float rotation = 0f;
System.out.println("scale == " + scale);

// Paint each page's content to an image and write the image to file
// (the loop runs once, so only page 0 - the first page - is captured).
InputStream fis2 = null;
File file = null;
for (int i = 0; i < 1; i++) {
    BufferedImage image = (BufferedImage) document.getPageImage(i,
            GraphicsRenderingHints.SCREEN,
            Page.BOUNDARY_CROPBOX, rotation, scale);
    RenderedImage rendImage = image;
    // Capture the page image to file
    try {
        System.out.println("\t capturing page " + i);
        file = new File(myProjectActualPath + "myImage.png");
        ImageIO.write(rendImage, "png", file);
        fis2 = new BufferedInputStream(new FileInputStream(myProjectActualPath + "myImage.png"));
    } catch (IOException ioe) {
        System.out.println("IOException :: " + ioe);
    } catch (Exception e) {
        System.out.println("Exception :: " + e);
    }
    image.flush();
}