How to separate slices into independent frames in a multi-slice h.264 stream?

I have a 4-slice h.264 stream, which looks like:
0x000001[sps] 0x000001[pps] 0x000001[IDR] 0x000001[IDR] 0x000001[IDR] 0x000001[IDR]
How can I separate each slice into its own frame with 1/4 of the height, so that it looks something like:
0x000001[sps] 0x000001[pps] 0x000001[IDR] 0x000001[sps] 0x000001[pps] 0x000001[IDR] 0x000001[sps] 0x000001[pps] 0x000001[IDR] 0x000001[sps] 0x000001[pps] 0x000001[IDR]
I have tried setting first_mb_in_slice in each slice header to 0, changing pic_height_in_map_units_minus1 from 119 to 29, and inserting the SPS and PPS before each slice.
Two of the slices decode successfully, but the other two are broken in every frame.
How do I fix it?
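For reference, the splitting/reassembly step itself can be done with something as simple as the Python sketch below (assuming Annex-B 3-/4-byte start codes; the input file name is just an example). What I can't get right is rewriting first_mb_in_slice and the SPS, which needs an Exp-Golomb bit reader/writer on top of this.
import re
def split_nal_units(data: bytes):
    # split on 3- or 4-byte start codes and drop the empty chunk before the first one
    return [u for u in re.split(b'\x00\x00\x00\x01|\x00\x00\x01', data) if u]
def join_nal_units(nal_units):
    # re-emit every NAL unit with a 3-byte start code
    return b''.join(b'\x00\x00\x01' + u for u in nal_units)
with open('four_slices.h264', 'rb') as f:  # example file name
    nals = split_nal_units(f.read())
# nals[0] = SPS, nals[1] = PPS, nals[2:6] = the four IDR slices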
Thanks for any help.

Visualforce: Access price calculation from the Opportunity object

I'm facing the problem that I have to build an automated invoice generator based on the Opportunity object.
My code is as follows:
<apex:page standardController="Opportunity" showHeader="false" Language="de" renderAs="pdf" >
<!-- Reference to Style Sheet, saved under Static Resources in Salesforce. -->
<apex:stylesheet value="{!$Resource.InvoiceCSS}" />
<div>
<!-- Below the optional text paragraph we put the table with all the products selected as Opportunity Line Items. -->
<table class="products" width="100%">
<tr>
<td width="100%" style="vertical-align:top;">
<apex:dataTable width="100%" value="{!Opportunity.OpportunityLineItems}" var="oli">
<apex:column width="200px" headerClass="tableheaderleft" footerClass="tablefooterleft" styleClass="tablebodyleft">
<apex:facet name="header">Description</apex:facet>
<apex:OutputField value="{!oli.Name}"/>
</apex:column>
<apex:column width="{!If(oli.Discount != null, If(oli.Discount > 0, '25px', '15px'), '15px')}" headerClass="tableheadercenter" footerClass="tablefootercenter" styleClass="tablebodycenter">
<apex:facet name="header">Quantity</apex:facet>
<apex:OutputField value="{!oli.Quantity}"/>
<apex:facet name="footer"></apex:facet>
</apex:column>
<apex:column width="95px" headerClass="tableheaderright" footerClass="tablefooterright" styleClass="tablebodyright">
<apex:facet name="header">Amount</apex:facet>
<apex:OutputField value="{!oli.UnitPrice}"/>
<apex:facet name="footer"></apex:facet>
</apex:column>
</apex:dataTable>
</td>
</tr>
<tr><td width="50%" headerClass="tableheaderright" footerClass="tablefooterright" styleClass="tablebodyright">Subamount</td><td width="50%">EUR XXXX,XX</td></tr>
<tr><td>VAT</td><td>EUR XXXX,XX</td></tr>
<tr><td>Total amount</td><td>XXXX,XX</td></tr>
</table>
</div>
</apex:page>
The listing of the products works fine so far; now I need to add fields for VAT, subtotal and total amount, and I do not know how to do that.
My idea was to create custom formula fields that refer to the Pricebook or the Opportunity Products object, but I couldn't access these fields in the formula editor.
And that's exactly where my problem is: I do not know how the relationships between these objects work or which fields I need to refer to in order to get this calculation done. Is there any way to access standard fields for the amount, VAT, and subtotal? If so, how can I access them?
Many thanks!!
The Entity Relationship Diagram can help: https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_erd_products.htm. It's a bit rough, though.
If you have your custom bits on top of that, try Setup -> Schema Builder.
Opportunity is in a many-to-many relationship with Product, via OpportunityLineItem. You don't see it on the ERD, but there absolutely is an OpportunityLineItem.Product2Id lookup (foreign key).
The relation to Pricebook is a bit messier: Opportunity -> down to the line items -> up to the pricebook entry -> up to the pricebook.
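For example (just a sketch - :oppId would be a bind variable in Apex, and you may not need all of these fields), a single query can walk from the line items up to the product, pricebook entry and pricebook:
SELECT Quantity, UnitPrice, TotalPrice,
       PricebookEntry.Product2.Name,
       PricebookEntry.Pricebook2.Name
FROM OpportunityLineItem
WHERE OpportunityId = :oppId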
Right, so what you could do... You could make roll-up summary fields on Opportunity that take the total tax as the sum of the tax on the line items. You'd then display {!Opportunity.Amount} and {!Opportunity.TotalTax__c} or something in your PDF. I don't know how you calculate tax, though. Is it a custom field on OpportunityLineItem? Who decides the rate - the product, the account's country, the pricebook entry (with different entries per product)? Or do you only care about one country, so you slap one flat rate on it and job done? If it's a flat 20% of the total, then you already have Opportunity.Amount; make another formula field on Opportunity and job done.
If you don't want to make a field, you could add a piece of Apex as a controller extension, query the data, do the calculation there, and display the value. This might be... not great. When I make PDFs related to accounting, I try to make them as simple as possible: no calculations, just dutifully take the values calculated by something else and display them as is, maybe with a bit of formatting. Users might not always spot errors in PDFs during testing; if it's a real field, something that can be reported on, errors are more likely to be caught. So even if the tax calculation is too messy for formulas and roll-ups, I'd probably do it with a flow/trigger and save the value to a helper field, rather than hide it behind the PDF.
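If you do go the extension route anyway, a minimal sketch could look something like this (the class name, the flat 19% rate and the property names are made up for illustration, not taken from your org):
public with sharing class InvoiceTotalsExtension {
    public Decimal subTotal { get; private set; }
    public Decimal vat { get; private set; }
    public Decimal total { get; private set; }
    public InvoiceTotalsExtension(ApexPages.StandardController controller) {
        Id oppId = controller.getId();
        subTotal = 0;
        // sum the line item totals for the current opportunity
        for (OpportunityLineItem oli : [SELECT TotalPrice FROM OpportunityLineItem
                                        WHERE OpportunityId = :oppId]) {
            subTotal += oli.TotalPrice;
        }
        vat = (subTotal * 0.19).setScale(2); // placeholder flat rate
        total = subTotal + vat;
    }
}
You'd hook it up with extensions="InvoiceTotalsExtension" on the apex:page tag and display {!subTotal}, {!vat} and {!total} in the table. But as said above, a real field is easier to test and report on.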
(There's also a way to do it in pure Visualforce, with no custom code, using some clever abuse of the apex:repeat and apex:variable tags... but again, I'd advise against it. This is too important to risk rounding errors and the like in the presentation layer.)

Understanding/controlling MLT melt slideshow?

Consider the following bash script (on Ubuntu 18.04, melt 6.6.0), which uses melt to make a slideshow and play it locally in a window (SDL consumer), mostly copied from https://mltframework.org/blog/making_nice_slideshows/ (edit: I'm aware that it's possible to specify files individually as in https://superuser.com/questions/833232/create-video-with-5-images-with-fadein-out-effect-in-ffmpeg/834035#834035, but that approach seems to scale images during the transition and takes quite a while to "render" before playing in the SDL window, while this one has nearly instant playback):
echo "
description=DV PAL
frame_rate_num=25
frame_rate_den=1
width=720
height=576
progressive=0
sample_aspect_num=59
sample_aspect_den=54
display_aspect_num=4
display_aspect_den=3
colorspace=601
" > my-melt.profile
mkdir tmppics
convert -background lightblue -fill blue -size 3840x2160 -pointsize 200 -gravity center label:"Test A" tmppics/pic_01.jpg
convert -background lightblue -fill blue -size 3840x2160 -pointsize 200 -gravity center label:"Test B" tmppics/pic_02.jpg
melt -verbose -profile ./my-melt.profile \
./tmppics/.all.jpg ttl=6 \
-attach crop center=1 \
-filter luma cycle=6 duration=4 \
-consumer sdl
When I run the above command, the video shows the two images looping, but the frame counter keeps increasing indefinitely. How do I make it stop after exactly the number of frames that the loop is long?
As far as I can see, the size of the output video is controlled by a profile; that is, even if I don't specify -profile, a default one is assumed; is that correct?
The original images look like this:
... and the video looks like this:
... which means the aspect ratio is wrong; additionally I can see jagged edges, meaning the scaled image in the video is not antialiased.
How do I make the image fit in video size with correct aspect ratio, with antialiasing/smoothing? (I guess it has to do with -attach crop center=1, but I couldn't find documentation on that).
When viewing this in SDL and stepping through frames, are frames numbered 0-based, or are they 1-based, with frame 0 simply showing the same frame as frame 1?
If I use ttl=6 and -filter luma cycle=6 duration=4, I get this:
... that is, visible transition starts at frame 7 (frame 6 is full image A), lasts for frames 7 and 8, and ends at frame 9 (which is full image B); then again at frames 13 and 14 (frame 15 is full image A)
However, if I use ttl=6 and -filter luma cycle=6 duration=2, then I get this:
... that is, there is no transition, image instantly changes at frame 7, then again at frame 13, etc.
So I'd call the first case a transition duration of 2 frames, and the second case a duration of 0 frames - yet the options are duration=4 and duration=2, respectively. Can anyone explain why? Where did those 2 frames of difference go?
Can I - and if so, how - do the same kind of slideshow, except with fade to black? I'd like to define a "time to live" (ttl) of 6 frames per image, and a transition of 4 frames, such that:
first, 4 frames are shown of image A;
then one frame image A faded, followed by one frame black (amounting to 6 frames TTL for image A, the last 2 transition);
then two frames image B faded (amounting to 4 frames transition with previous 2), followed by two more frames image B full (so 4 frames here of image B);
then one frame image B faded, followed by one frame black (amounting to 6 frames TTL for image B);
... etc.
Is it possible to persuade melt to use globbing to select images for the slideshow, instead of using .all.jpg? As far as I can tell from "MLT (Media Lovin' Toolkit) Photo Slide Video", no - but maybe there is another approach...
Ok, so, I spent some time looking into the commands for melt, and it turns out there is actually a pretty effective way of handling a bunch of images (useful if the argument list is too long or has too many characters for your terminal to handle).
What you want to do is use -serialise <name of file>.melt, which will store your commands (you can also create this file manually). Then, to execute that file, use melt <name of file>.melt along with any other options you have for your video file.
Example Format:
melt <images and what to do to them> -serialise <name of file>.melt
Example
Create the melt file (with Melt CLI)
melt image1.png out=50 image2.png out=75 -mix 25 -mixer luma image3.png out=75 -mix 25 -mixer luma image3.png out=75 -mix 25 -mixer luma image4.png out=75 -mix 25 -mixer luma <...> -serialise test.melt
.melt file format
test.melt
image1.png
out=50
image2.png
out=75
-mix
25
-mixer
luma
image3.png
out=75
-mix
25
-mixer
luma
image3.png
out=75
-mix
25
-mixer
luma
image4.png
out=75
-mix
25
-mixer
luma
<...>
Run
melt test.melt -profile atsc_1080p_60 -consumer avformat:output.mp4 vcodec=libx264 an=1
Additional Notes
There should be an extra newline character at the end of the melt file. If there isn't, "Exceeded maximum line length (2048) while reading a melt file." will be output.
Notice that -serialise <name of file>.melt will not be in the .melt file.
Melt will actually take some time to load the melt file before the encoding process begins.

Start b clip after mixer transition ends

I'm trying to mix 2 clips; however, I'd like clip2 to start after the mixer transition ends, not when it begins.
Essentially, this should mix clip1 with only clip2's frame 0.
I was wondering if there was a better alternative to my current workaround:
melt \
clip1.mp4 \
clip2.mp4 in=0 out=0 length=300 \
-mix 300 -mixer luma \
clip2.mp4
Perhaps there is something to pause clip2 at frame 0 for 300 frames?
(I'm doing this with 2 .mlt clips, but voiding the audio_index doesn't seem to work on .mlt clips, so I get a small audio jump for 1 frame, which makes this workaround less than ideal.)
You cannot set audio_index on .mlt virtual clips because audio_index is a property of the avformat producer, but MLT XML is read by the xml producer.
You can use the hold producer to hold a frame and mute the audio. It defaults to a 25-frame duration, so use out to override it:
melt clip1.mp4 hold:clip2.mp4 frame=0 out=299 -mix 300 -mixer luma clip2.mp4
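If you then want to render the result instead of previewing it, the same timeline should work with an avformat consumer appended (the output file name is just an example):
melt clip1.mp4 hold:clip2.mp4 frame=0 out=299 -mix 300 -mixer luma clip2.mp4 -consumer avformat:mixed.mp4 vcodec=libx264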

Azure client manifest entry: n and r attributes

While reviewing a client manifest provided by Azure Media Services for an HTTP Smooth Stream, I noticed a new attribute (n) that is not found in previous IIS manifests and is absent from Sam Zhang's blog.
According to previous manifests (clientManifestVersion 2.2), r means "repeat" and is used for compression, indicating a repeated fragment duration.
But by comparing two Azure manifests from the same stream at different times, you can see:
`<c t="868948936" d="2000" r="1770" n="136" />` // (# 8:21 PM)
`<c t="881664896" d="2000" r="1770" n="6494"/>` // (# 11:53 PM)
From what I understand,
d = 2000 indicates the fragment duration (2 seconds)
And where:
n1 = 136, n2 = 6494
t1 = 868948936, t2 = 881664896
n2 - n1 = 6358, and 6358 * d = 12716000, and t1 + 12716000 ≈ t2
Even though r is supposed to be a repeat, r remains the same while n increases over time... So what is r if it is unchanging, and what is n?
The n attribute is the zero-based index of the fragment, incremented by 1 for each new fragment. Just a meaningless counter: 0, 1, 2, 3, 4, ...
The r attribute indicates that r more fragments with the same duration follow the current fragment. It allows you to replace this:
<c t="1000" d="1000" />
<c t="2000" d="1000" />
<c t="3000" d="1000" />
<c t="4000" d="1000" />
With this much more compact representation:
<c t="1000" d="1000" r="3" />
You can think of it as just duplicating the XML element r number of times.
Edit: Based on the comment, I now understand the source of the confusion - the question is not actually about what these attributes are, but about why, with a live stream, only n changes as time goes along.
To understand this, you must understand how a live video is represented conceptually and how this differs from an on-demand video. The latter has a definite beginning and end, with a fixed number of fragments in between:
(start)123456789(end)
Whereas a live video by definition is one with no end - there may be a "last fragment" but new fragments are continually added to the end and what is currently the "last fragment" will change as time goes along:
(start)1234
(start)12345
(start)123456
Now this works all fine and super but you probably notice a problem here. Adaptive streaming technologies allow you to play any fragment of a video. If your video goes on, essentially, forever then the origin server must store an effectively infinite number of fragments! This cannot be allowed.
To solve this problem, adaptive streaming technologies introduce the concept of a DVR window - a sliding window over the video that contains all the data that can be viewed by players. Any data that slides out of range of this window can be discarded.
(start)[1]
(start)[12]
(start)[123]
(start)1[234]
(start)12[345]
(start)123[456]
(start)1234[567]
(start)12345[678]
(start)123456[789]
Let's discard the fragments we do not need and see how that looks. If your sliding window has a size 3 then the fragments visible to players would progress in time like this:
1
12
123
234
345
456
You notice that the size of the sliding window remains constant (once enough fragments are available to fill it) and that the index of the first fragment plus the sliding window size is sufficient to represent the entire sliding window.
There you have it: n is the index of the first fragment still inside the sliding window, and r says how many further fragments of the same duration follow it - so together they describe the entire window. This is not the only way to represent live video, but it is certainly the most efficient, given how small it keeps the data in the manifest.
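Purely to illustrate that model (the helper and numbers below are made up, not read from an Azure manifest): with a constant fragment duration and a fixed-size DVR window, only t and n of the first <c> entry move as the stream progresses, while r stays fixed by the window size.
def first_entry(n, d=2000, window_size=1771, t0=0):
    t = t0 + n * d       # start time of the first fragment still in the window
    r = window_size - 1  # r further fragments with the same duration follow it
    return '<c t="%d" d="%d" r="%d" n="%d" />' % (t, d, r, n)
for n in (0, 1, 2):
    print(first_entry(n))
# <c t="0" d="2000" r="1770" n="0" />
# <c t="2000" d="2000" r="1770" n="1" />
# <c t="4000" d="2000" r="1770" n="2" />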

Line chart with fixed y-axis?

I'm trying to have a line chart with a fixed y-axis. That is, I have values that are mostly between 30 and 70, but I'd like the chart's y-axis to stay constant between 0 and 100 so it doesn't resize as new values come in (if they happen to be larger than previous values).
How do I go about doing this?
Set the minimum and maximum properties of LinearAxis. Something like this:
<mx:verticalAxis>
<mx:LinearAxis title="title" displayName="displayName" maximum="100" minimum="0"/>
</mx:verticalAxis>
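In context, a sketch (the data provider and yField below are just placeholders):
<mx:LineChart dataProvider="{chartData}">
    <mx:verticalAxis>
        <mx:LinearAxis minimum="0" maximum="100"/>
    </mx:verticalAxis>
    <mx:series>
        <mx:LineSeries yField="value"/>
    </mx:series>
</mx:LineChart>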
And wouldn't you like to give existing classes a try? :)
Yahoo's Astra pack
http://developer.yahoo.com/flash/astra-flash/charts/using.html
Or you may post some sample code so we can see what should be modified.