When I went through the details of the High Dynamic Range (HDR) feature, I came across the ICtCp color format (Rec. 2020). Is this similar to YCbCr? What exactly is the difference between YCbCr and ICtCp? Can I pass an ICtCp buffer to a video encoder (H.264/AVC or H.265/HEVC) directly?
YCbCr and ICtCp are both luma/chroma color representations. The difference is in how the chroma channels are derived: ICtCp computes its intensity and chroma components from a non-linearly encoded LMS space, which keeps luma and chroma better decorrelated for HDR content than YCbCr's weighted sums of R'G'B'. HDR support has been included in H.265. x265 only accepts raw YUV or Y4M input, but you can give it a try.
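To make the difference concrete, here is a minimal Python sketch of the ICtCp derivation as I read it from BT.2100 (PQ variant): linear BT.2020 RGB is first converted to an LMS (cone-response) space, each component is PQ-encoded, and only then are the intensity and chroma components formed. Treat the coefficients as illustrative rather than a verified converter.

# Sketch: linear BT.2020 RGB -> ICtCp (PQ variant), coefficients as published in BT.2100.
# YCbCr instead takes weighted sums of R'G'B' directly; the LMS detour is what
# gives ICtCp its better luma/chroma decorrelation for HDR.
def pq_oetf(y):
    # SMPTE ST 2084 (PQ) inverse EOTF; y is luminance normalized to 10000 nits
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

def rgb2020_to_ictcp(r, g, b):
    # linear BT.2020 RGB -> LMS
    l = (1688 * r + 2146 * g +  262 * b) / 4096
    m = ( 683 * r + 2951 * g +  462 * b) / 4096
    s = (  99 * r +  309 * g + 3688 * b) / 4096
    # non-linear (PQ) encoding of each LMS component
    lp, mp, sp = pq_oetf(l), pq_oetf(m), pq_oetf(s)
    # LMS' -> ICtCp
    i  = 0.5 * lp + 0.5 * mp
    ct = ( 6610 * lp - 13613 * mp +  7003 * sp) / 4096
    cp = (17933 * lp - 17390 * mp -   543 * sp) / 4096
    return i, ct, cp

print(rgb2020_to_ictcp(0.1, 0.1, 0.1))  # neutral grey -> Ct and Cp come out ~0

For a neutral grey the chroma components come out at zero, just as Cb/Cr would; the difference only shows up for saturated, high-luminance colors.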
You can feed it, but only through zscale; the older swscale (which is more precise for classic YCbCr) does not support ICtCp.
See commands here: https://github.com/sekrit-twc/zimg/issues/138
ffmpeg -v debug -f rawvideo -pix_fmt rgb48le -s:v 192x108 -i SCD_192x108.rgb48.rgb -vf format=gbrp16le,zscale=rangein=full:range=full:npl=10000:matrixin=input:transferin=smpte2084:primariesin=2020:matrix=ictcp:transfer=smpte2084:primaries=2020,format=yuv444p16le -f rawvideo FFMPEG_ICTCP_SCD_192x108.rgb48.plr.ffmpeg.yuv
I'm building an OTA update for my custom Android 10 build as follows:
./build/make/tools/releasetools/ota_from_target_files \
--output_metadata_path metadata.txt \
target-files.zip \
ota.zip
The resulting ota.zip can be applied by extracting the payload.bin and payload_properties.txt according to the Android documentation for update_engine_client.
update_engine_client --payload=file:///<wherever>/payload.bin \
--update \
--headers=<Contents of payload_properties.txt>
This all works, so I'm pretty sure from this result that I've created the OTA correctly. However, I'd like to be able to download just the metadata and verify that the payload can be applied before having the client download the entire payload.
Looking at the update_engine_client --help options, it appears one can verify the metadata as follows:
update_engine_client --verify --metadata=<path to metadata.txt from above>
This is where I'm failing to achieve the desired result, though. I get an error saying it failed to parse the payload header; it fails with kDownloadInvalidMetadataMagicString, which, as far as I can tell from the source, is a check on the first 4 bytes of the metadata. Apparently the metadata.txt I created isn't the right input for the verification tool.
So I'm hoping someone can point me in the right direction to either generate the metadata correctly or tell me how to use the tool correctly.
It turns out the metadata generated by the OTA tool is in a human-readable format, while the verify method expects a binary file. That binary file is not shipped in the zip as a separate entry; instead, it is prepended to payload.bin. So the first bytes of payload.bin are actually payload_metadata.bin, and those bytes work correctly with the verify method of update_engine_client to determine whether the payload is applicable.
I'm extracting the payload_metadata.bin in a makefile as follows:
$(DEST)/%.meta: $(DEST)/%.zip
	unzip $< -d /tmp META-INF/com/android/metadata
	python -c 'import re; meta=open("/tmp/META-INF/com/android/metadata").read(); \
	m=re.match(".*payload_metadata.bin:([0-9]*):([0-9]*)", meta); \
	s=int(m.groups()[0]); l=int(m.groups()[1]); \
	z=open("$<","rb").read(); \
	open("$@","wb").write(z[s:s+l])'
	rm -rf /tmp/META-INF
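As a quick sanity check before handing the extracted blob to update_engine_client --verify, you can confirm it actually begins with the payload magic. To my knowledge the A/B payload format starts with the ASCII string "CrAU" followed by a big-endian 64-bit version field, but treat that layout as an assumption and compare it against the update_engine sources if it doesn't match:

# Sketch: check that an extracted payload_metadata.bin begins with the payload
# magic before running update_engine_client --verify on it.
# Assumption: the A/B payload header is b"CrAU" + big-endian uint64 version.
import struct
import sys

with open(sys.argv[1], "rb") as f:   # e.g. the .meta file produced by the rule above
    head = f.read(12)

if head[:4] != b"CrAU":
    sys.exit("unexpected magic %r: this does not look like a payload header" % head[:4])

version = struct.unpack(">Q", head[4:12])[0]
print("payload header found, major version %d" % version)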
I'm trying to convert a video using handbrake-js for node.
At first I tried specifying the bitrate, video size, codecs, etc. The goal is to generate several HTML5-compatible streams to be used as the source of a canvas video for WebGL. Everything seems to work fine: it outputs the video, and when I open it in QuickTime or VLC it looks fine. However, when I use it in a <video> tag, there is no video, just audio.
The following code is called within a function which receives an "ops" object containing the width and height.
hb.spawn({ input: new_location + "original" + ext, output: new_location + ops.name, optimize: true, vb: ops.vb, "width": ops.width, "height": ops.height, "rate": 30 })
The console shows the video being converted, and a clean exit.
but WebGL reports:
[.Offscreen-For-WebGL-0x7fbf21074c00]RENDER WARNING: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering.
Note: the video IS a power of 2 (1024x512). Also, when I play the original file it doesn't show the error.
To debug, I have even attached the video to the HTML and changed the CSS to show it, but there is no video, just audio.
I have even tried:
hb.spawn({ input: "input.mp4", output: "output.m4v" })
and a simple <video> tag.
Well... input.mp4 displays fine; output.m4v always fails to show video in the HTML <video> element in Chrome (Safari seems to work just fine).
Any ideas?
If you say preset: Normal works, then you can run handbrake --preset-list (using handbrake-js installed as a command-line app) to see which encoder options the "normal" preset uses:
+ Normal: -e x264 -q 20.0 -a 1 -E ffaac -B 160 -6 dpl2 -R Auto -D 0.0 --audio-copy-mask aac,ac3,dtshd,dts,mp3 --audio-fallback ffac3 -f mp4 --loose-anamorphic --modulus 2 -m --x264-preset veryfast --h264-profile main --h264-level 4.0
So, try running hb.spawn using the options above and remove any options you don't need.
This question is the follow up question to this thread: AR Drone 2 and ffserver + ffmpeg streaming
We are trying to get a stream from our AR Drone through a Debian server and into a flash application.
The big picture looks like this:
AR Drone --> Gstreamer --> CRTMPServer --> Flash Application
We are using the PaveParse plugin for Gstreamer found in this thread: https://projects.ardrone.org/boards/1/topics/show/4282
As seen in the thread, the AR Drone uses PaVE (Parrot Video Encapsulation) headers, which are unrecognizable by most players such as VLC. The PaVEParse plugin removes these headers.
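For reference, stripping PaVE is conceptually simple; the sketch below shows roughly what such a parser does. The field offsets (a 16-bit header_size after the 4-byte signature, version and codec bytes, then a 32-bit payload_size, both little-endian) are my reading of the parrot_video_encapsulation struct in the AR.Drone SDK, so double-check them against your SDK version:

# Sketch: strip PaVE headers from a raw dump of the drone's video TCP stream,
# leaving what should be a plain H.264 elementary stream.
# Assumed layout: "PaVE"(4) + version(1) + codec(1) + header_size(uint16 LE) + payload_size(uint32 LE) + ...
import struct
import sys

def strip_pave(data):
    out = bytearray()
    pos = 0
    while True:
        pos = data.find(b"PaVE", pos)
        if pos < 0 or pos + 12 > len(data):
            break
        header_size, = struct.unpack_from("<H", data, pos + 6)
        payload_size, = struct.unpack_from("<I", data, pos + 8)
        out += data[pos + header_size : pos + header_size + payload_size]
        pos += header_size + payload_size
    return bytes(out)

if __name__ == "__main__":
    raw = open(sys.argv[1], "rb").read()    # e.g. a dump captured from 192.168.1.1:5555
    open(sys.argv[2], "wb").write(strip_pave(raw))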
We have used different pipelines and they all yield the same error.
Sample pipeline:
GST_DEBUG=3 gst-launch-0.10 tcpclientsrc host=192.168.1.1 port=5555 ! paveparse ! queue ! ffdec_h264 ! queue ! x264enc ! queue ! flvmux ! queue ! rtmpsink location='rtmp://0.0.0.0/live/drone' --gst-plugin-path=.
The PaVEParse plugin needs to be located at the gst-plugin-path for it to work.
A sample error output from Gstreamer located in the ffdec_h264 element can be found at: http://pastebin.com/atK55QTn
The same thing happens if the decoding takes place in the player/dumper, e.g. VLC, FFplay or RTMPDUMP.
The problem comes down to missing headers: the decoder complains that the referenced PPS is non-existent. We know that the PaVEParse plugin removes the PaVE headers, but we suspect that once these are removed there are no H.264 headers (SPS/PPS) left for the decoder/player to identify the frames by.
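One way to confirm that suspicion is to scan whatever paveparse emits for the NAL unit types it contains. In an Annex-B H.264 stream the byte after each 00 00 01 start code carries the NAL type in its low 5 bits (7 = SPS, 8 = PPS), so a rough count like the sketch below will show whether SPS/PPS ever appear:

# Sketch: count H.264 NAL unit types in an Annex-B dump to see whether
# SPS (type 7) and PPS (type 8) are present at all.
import sys
from collections import Counter

data = open(sys.argv[1], "rb").read()
counts = Counter()
pos = 0
while True:
    pos = data.find(b"\x00\x00\x01", pos)
    if pos < 0 or pos + 3 >= len(data):
        break
    counts[data[pos + 3] & 0x1F] += 1   # low 5 bits of the NAL header byte
    pos += 3

print(counts)  # no entries for 7 or 8 means the stream really lacks SPS/PPS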
Is it possible to "restore" these H264 headers either from scratch or by transforming the PaVE headers?
Can you please share a sample of the traffic between gstreamer and crtmpserver?
You can always use the LiveFLV support built inside crtmpserver. Here are more details:
Re-Stream a MPEG2 TS PAL Stream with crtmpserver
I'm trying to receive an RTP/RTCP video stream in HTML5; the stream is generated by GStreamer. I followed the GStreamer examples, so RTP goes out on port 5000 and RTCP on port 5001, and I can receive the stream fine with GStreamer, but with HTML5 I could not. I read a bit about HTML5 and saw that it can play Theora/Ogg, WebM/VP8 and MP4/AVC over protocols such as HTTP, RTP, RTCP and UDP, but I only managed to receive over HTTP, not RTP, RTCP or UDP. I did get a very satisfactory result using the VLC plugin for Mozilla Firefox with the UDP protocol. I wonder if anyone has any tips; I don't want to use a source file such as src="/tmp/test.avi", it needs to be a video stream, which could be UDP, RTP or RTCP. Thank you!
If you don't need to stream at low fps, you can use GStreamer to transcode your stream to MJPEG and serve it over TCP, and then use VLC to pick up that TCP stream and re-stream it over HTTP. It works very well (0.5 s of latency), but if you decrease the fps (to 1 fps) VLC introduces a latency of around 11 seconds.
Here are some test commands that should work out of the box, using the GStreamer videotestsrc:
GStreamer:
gst-launch -v videotestsrc horizontal-speed=1 ! deinterlace ! videorate ! videoscale ! video/x-raw-yuv, framerate=15/1, width=256,
height=144 ! jpegenc quality=20 ! multipartmux
boundary="--videoboundary" ! tcpserversink host=localhost port=3000
VLC:
vlc -vvv -I rc tcp://localhost:3000 --sout
'#standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=localhost:8081}'
Then open a browser at http://localhost:8081 (or create an HTML page with an img tag whose "src" attribute is http://localhost:8081).
I am creating a web application where I display images/PDFs as thumbnails. On clicking the respective image/PDF, it opens in a new window.
For PDF, I have (this is the code of the new window)
<iframe src="images/testes.pdf" width="800" height="200" />
Using this I can see the whole PDF in the web browser. However, for the thumbnail I want to display only the first page of the PDF as an image.
I tried
<h:graphicImage value="images/testes.pdf" width="800" height="200" />
however it is not working. Any idea how to get this done?
Update 1
I am providing the path of a PDF file for example purposes; in reality the images are in a database. My actual code is as below.
<iframe src="#{PersonalInformationDataBean.myAttachmentString}" width="800" height="200" />
Update 2
For the sake of the thumbnail, what I am using is
<h:graphicImage height=200 width=200 value="....">
however I need to achieve the same for PDFs as well.
I hope it is clear what I am expecting...
I'm not sure if all browsers display your embedded PDF (done via <h:graphicImage value="some.pdf" ... />) equally well.
Extracting 1st Page as PDF
If you insist on using PDF, I'd recommend one of these 2 commandline tools to extract the first page of any PDF:
pdftk
Ghostscript
Both are available for Linux, Mac OS X and Windows.
pdftk command
pdftk input.pdf cat 1 output page-1-of-input.pdf
Ghostscript command
gs -o page-1-of-input.pdf -sDEVICE=pdfwrite -dPDFLastPage=1 input.pdf
(On Windows use gswin32c.exe or gswin64c.exe instead of gs.)
pdftk used to be slightly faster than Ghostscript at page extraction, though for a single page that difference is probably negligible. As of the most recent Ghostscript release, v9.05, this is no longer true: I found that Ghostscript (including all startup overhead) needs ~1 second to extract the 1st page of the 756-page PDF specification, while pdftk needed ~11 seconds.
Converting 1st Page to JPEG
If you want to be sure that even older browsers can display your 1st page well, then convert it to JPEG. Ghostscript is your friend here (ImageMagick cannot do it by itself; it needs the help of Ghostscript anyway):
gs -o page-1-of-input-PDF.jpeg -sDEVICE=jpeg -dLastPage=1 input.pdf
Should you need page 33, you can do it like this:
gs -o page-33-of-input-PDF.jpeg -sDEVICE=jpeg -dFirstPage=33 -dLastPage=33 input.pdf
If you need a range of pages, like pages 17-23, try this:
gs -o page-16+%03d-of-input-PDF.jpeg -sDEVICE=jpeg -dFirstPage=17 -dLastPage=23 input.pdf
Note that the %03d notation increments with each page processed, starting with 1. So your first JPEG's name would be page-16+001-of-input-PDF.jpeg.
Maybe PNG is better?
Be aware that JPEG isn't a format suited well for images containing high black+white contrast and sharp edges like text pages. PNG is much better for this.
To create a PNG from the 1st PDF pages with Ghostscript is easy:
gs -o page-1-of-input-PDF.png -sDEVICE=pngalpha -dLastPage=1 input.pdf
The analogous options apply as for JPEGs when it comes to extracting ranges of pages.
Warning: Don't use Ma9ic's script (posted in another answer) unless you want to...
...make the PDF->JPEG conversion consume much more time + resources than it should
...give up your own control over the PDF->JPEG conversion process altogether.
While it may work well for you, there are many problems in these 8 little lines of Bash.
First,
it uses identify to extract the number of pages from the input PDF. However, identify (part of ImageMagick) is completely unable to process PDFs all by itself. It has to run Ghostscript as a 'delegate' to handle PDF input. It would be much more efficient to use Ghostscript directly instead of running it indirectly, via ImageMagick.
Second,
it uses convert for the PDF->JPEG conversion. Same remark as above: it uses Ghostscript anyway, so why not run it directly?
Third,
it loops over the pages and runs a different convert process for every single page of the PDF, that is, 100 convert runs for a 100-page PDF file. That means it also runs 100 Ghostscript commands to produce 100 JPEGs.
Fourth,
Fahim Parkar's question was to get a thumbnail from the first page of the PDF, not from all of them.
The script runs at least 201 different commands for a 100-page PDF, when it could all be done in just 1 command. If you run Ghostscript directly...
...not only will it run faster and more efficiently,
...but also it will give you more fine-grained and better control over the JPEGs' quality settings.
Use the right tool for the job, and use it correctly!
Update:
Since I was asked, here is my alternative to Ma9ic's script.
#!/bin/bash
infile=${1}
gs -q -o $(basename "${infile}")_p%04d.jpeg -sDEVICE=jpeg "${infile}"
# To get thumbnail JPEGs with a width 200 pixel use the following command:
# gs -q -o name_200px_p%04d.jpg -sDEVICE=jpeg -dPDFFitPage -g200x400 "${infile}"
# To get higher quality JPEGs (but also bigger-in-size ones) with a
# resolution of 300 dpi use the following command:
# gs -q -o name_300dpi_p%04d.jpg -sDEVICE=jpeg -dJPEGQ=100 -r300 "${infile}"
echo "Done"
I've even run a benchmark on it. I converted the 756-page PDF-1.7 specification to JPEGs with both scripts:
Ma9ic's version needs 1413 seconds to generate the 756 JPEGs.
My version saves 93% of that time and takes 91 seconds.
Moreover, on my system Ma9ic's script produces mostly black JPEG images; mine are OK.
This is what I used (with ICEpdf):
// Assuming the ICEpdf core API (org.icepdf.core); imports added for completeness.
import org.icepdf.core.exceptions.PDFException;
import org.icepdf.core.exceptions.PDFSecurityException;
import org.icepdf.core.pobjects.Document;
import org.icepdf.core.pobjects.Page;
import org.icepdf.core.util.GraphicsRenderingHints;

import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.image.RenderedImage;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

Document document = new Document();
try {
    document.setFile(myProjectPath);
    System.out.println("Parsed successfully...");
} catch (PDFException ex) {
    System.out.println("Error parsing PDF document " + ex);
} catch (PDFSecurityException ex) {
    System.out.println("Error encryption not supported " + ex);
} catch (FileNotFoundException ex) {
    System.out.println("Error file not found " + ex);
} catch (IOException ex) {
    System.out.println("Error handling PDF document " + ex);
}

// Save page captures to file.
float scale = 1.0f;
float rotation = 0f;
System.out.println("scale == " + scale);

// Paint each page's content to an image and write the image to file
InputStream fis2 = null;
File file = null;
for (int i = 0; i < 1; i++) {
    BufferedImage image = (BufferedImage) document.getPageImage(i,
            GraphicsRenderingHints.SCREEN,
            Page.BOUNDARY_CROPBOX, rotation, scale);
    RenderedImage rendImage = image;
    // capture the page image to file
    try {
        System.out.println("\t capturing page " + i);
        file = new File(myProjectActualPath + "myImage.png");
        ImageIO.write(rendImage, "png", file);
        fis2 = new BufferedInputStream(new FileInputStream(myProjectActualPath + "myImage.png"));
    } catch (IOException ioe) {
        System.out.println("IOException :: " + ioe);
    } catch (Exception e) {
        System.out.println("Exception :: " + e);
    }
    image.flush();
}