Reduce FPS on OSMF stream - Issue with MPEG-2 header - actionscript-3

I've been searching all over and can't find a solution. I have a 25 FPS video that I'm playing on OSMF, but OSMF insists on playing it at 29-31 FPS. This causes the video to play ~15% faster than real time. The result is extremely noticeable if you open the same video in VLC and play it side by side.
The problem comes in when I try to do a live stream. It eats through the buffer and catches up to real time, then the stream crashes because there's no new video waiting.
I've tried tracing the code to find out where the frames are actually output to the screen, but I hit a dead end at the SWC file. I also have tried searching online but I can't find anything about limiting the FPS - everyone is just interested in increasing it.
I'd rather play at 15 FPS and drop 10 frames per second than catch up to real time and crash tragically.
Edit - after an entire weekend spent staring at this issue I've made some incredible headway. First and foremost, the only way to limit the FPS in OSMF is by sending a custom FLV header with the timestamp set appropriately (a difference of 1000 / FPS milliseconds between frames).
Realizing this, I could temporarily solve the issue by manually setting the timestamps from an internal counter: each time a frame is processed, set timestamp = last_timestamp + 40;. The problem is that I don't know that the video will always be 25 FPS. Some day I may have 30 FPS or even 60 FPS video streams. To make this more robust I decided to decode the MPEG-2 header (read the PTS value) and convert it to an FLV header.
Now here's the issue… This video file (theoretically 25 FPS) plays perfectly in QuickTime. As a result I know the headers are fine, because an expensive piece of software with billions of dollars behind it properly calculated the frame rate. But when I read the PTS from the header (as per this SO post) and divide by 90 (converting the 90 kHz clock to a millisecond timestamp), each timestamp is 33 or 34 milliseconds apart - the 29-31 FPS I was getting.
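For reference, a minimal sketch of the PTS read described above (the function name is mine; pes is assumed to be a ByteArray positioned at the start of a PES header with the PTS flag set, following the bit layout in ISO 13818-1):
import flash.utils.ByteArray;

function readPtsMs(pes:ByteArray):Number {
    // Skip start code (3 bytes), stream id (1), packet length (2),
    // flag bytes (2) and header data length (1): PTS starts at offset 9.
    pes.position += 9;
    var b0:int = pes.readUnsignedByte(); // '0010' + PTS[32..30] + marker bit
    var b1:int = pes.readUnsignedByte(); // PTS[29..22]
    var b2:int = pes.readUnsignedByte(); // PTS[21..15] + marker bit
    var b3:int = pes.readUnsignedByte(); // PTS[14..7]
    var b4:int = pes.readUnsignedByte(); // PTS[6..0] + marker bit
    // PTS is 33 bits, so accumulate in a Number to avoid int overflow.
    var pts:Number = ((b0 & 0x0E) >> 1) * 1073741824 // << 30
                   + (b1 << 22)
                   + (((b2 & 0xFE) >> 1) << 15)
                   + (b3 << 7)
                   + (b4 >> 1);
    return pts / 90; // 90 kHz ticks -> milliseconds
}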
So why is the PTS giving me timestamps that are 33-34 milliseconds apart when I know the video is 25 FPS (40 milliseconds apart)? More importantly, how is QuickTime reading the MPEG-2 header so that everything plays correctly?

First, to answer my original question:
Reducing FPS in OSMF
You must create your own implementation of the file handler which parses the video header and modifies the timestamps. There is no way to tell OSMF to play in "slow motion" or "fast forward". For an example of creating your own file handler, look at the FLVParser class. Notice that there are separate classes for parsing video and audio tags. The headers must be updated in EACH of these to ensure that video and audio play back in sync.
When a file is passed to the video parser, each timestamp is considered relative to the first. So if the first timestamp is 1234 then this will be set as "time zero" and all future timestamps will be relative to this. This is important. If you skip a video tag and the first timestamp you send is from later in the video, it will use the wrong value for "time zero" and things will not be sync'd properly.
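A rough sketch of that rewrite (the names tag, FRAME_MS and rewriteTimestamp are mine for illustration, not OSMF's actual API):
private var timeZero:Number = -1;
private var frameIndex:int = 0;
private const FRAME_MS:Number = 1000 / 25; // 40 ms per frame at 25 FPS

private function rewriteTimestamp(tag:Object):void {
    if (timeZero < 0) {
        timeZero = tag.timestamp; // the first tag seen defines "time zero"
    }
    // Replace the tag's timestamp with an evenly spaced one.
    tag.timestamp = timeZero + Math.round(frameIndex * FRAME_MS);
    frameIndex++;
}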
This brings us to…
My issue
First and foremost, the durations in the M3U8 did not match the sum of the differences between timestamps. Starting at the first timestamp and labeling it "time zero", then taking the last timestamp and subtracting time zero from it, the resulting time span was not equal to the expected duration of the TS file.
It turns out that when VLC or QuickTime encounter this situation (the span of the PTS values does not equal the duration of the video), they generate new headers on the fly. Take the duration, divide by the number of frames, and there's your new offset between PTS values.
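Concretely (a sketch with made-up variable names): a 10-second TS segment containing 250 frames gives 10000 / 250 = 40 ms between frames, i.e. 25 FPS:
// durationMs comes from the M3U8 #EXTINF; frameCount from parsing the TS
var frameMs:Number = durationMs / frameCount; // e.g. 10000 / 250 = 40 ms
for (var i:int = 0; i < frameCount; i++) {
    timestamps[i] = firstTimestamp + Math.round(i * frameMs);
}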
That was my first problem. Once it was solved I was no longer gaining a second every 10 seconds, but instead gaining a second every 2 minutes. It turns out I was also loading the next TS file a bit too early (an off-by-one error), which caused me to drop a packet from each TS. This led to losing one frame every 10 seconds.
Additional Info
I ran into another issue with FPS shortly after this which I have found a solution for. Anyone experiencing OSMF playing videos too quickly, I urge you to grep your code for bufferTimeMax.
When bufferTimeMax > 0 and bufferLength >= bufferTimeMax, audio plays faster until bufferLength reaches bufferTime. If a live stream is video-only, video plays faster until bufferLength reaches bufferTime.
Depending on how much playback is lagging (the difference between bufferLength and bufferTime), Flash Player controls the rate of catch-up between 1.5% and 6.25%. If the stream contains audio, faster playback is achieved by frequency domain downsampling which minimizes audible distortion.
At 6.25%, playback consumes roughly one extra second of media every 16 seconds (1 / 0.0625 = 16) - which lines up with the "gaining a second every 10~20" I was seeing.
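If that turns out to be the culprit, disabling the catch-up behavior is a one-liner (assuming ns is your NetStream):
ns.bufferTimeMax = 0; // 0 disables catch-up; playback speed is left alone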

You would need to set the FPS manually.
In Flash Professional, you can change the framerate in Document Properties.
In Flash Builder, you can set the framerate using metadata, as described here and here.
Thus...
[SWF(frameRate='15')]

Related

SWF file has timer that runs faster in debug player than release player (Adobe Animate, AS3)

I have an interactive animation that consists of a button and some lights that are lit in a sequence. The animation also has a timer that shows the passage of time to illustrate the interval between light transitions, so the user knows how much time has passed between each transition in the sequence (it took x amount of time to go from light 'a' to light 'b'). The button controls the speed.
I implemented the timer using the Timer class in AS3:
var MyTimer:Timer = new Timer(1);
Which means the timer should fire every millisecond.
Now, I am aware that this is nowhere near accurate, due to the frame rate (24 fps) and the fact that it has to run the code inside the handler function. But I am not going for an accurate timer; in fact I need it to go slower so the user can see the time between transitions - a scaled/slowed timer, if you will. The current time is displayed in a dynamic text field.
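For context, the setup is roughly this (a sketch; timeField is my placeholder for the dynamic text field):
import flash.events.TimerEvent;
import flash.utils.Timer;

var MyTimer:Timer = new Timer(1); // fires as often as the player allows
var elapsed:int = 0;

MyTimer.addEventListener(TimerEvent.TIMER, onTick);
MyTimer.start();

function onTick(e:TimerEvent):void {
    elapsed++; // counts ticks, NOT real milliseconds
    timeField.text = String(elapsed);
}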
As is, when using the debug player in Adobe Animate, the timer runs at about 1/10th normal speed, taking about 10 seconds to show that a second has passed, which incidentally is great for my purposes. However, this animation will be used in a PDF, in which case a release Flash player will be used; and in the release version, the timer appears to be about half as fast - roughly 1/20th speed, taking 20 or more seconds to show that a second has passed.
I'm not sure why this is. How do I get the swf file to be played so that the timer behaves the same in both the debug and release versions of the Flash Player?
I have tried unticking 'Permit debugging' and ticking 'Omit trace statements' (and every combination of un/tick) in the Publish Settings as well as all the Hardware acceleration options.
I've also implemented a FPS counter as suggested here by user 'kglad':
published swf file is a lot slower than the 'test' mode?
But both the debug and release versions show the FPS at 23-24. 'kglad' gives the last reply in the thread as "the debug players are slower than the non-debug players". I'm not sure what he meant by this, as it seems to be the opposite of the problem that the OP of that thread and I are having, unless s/he meant using the players in a browser.
TL;DR
How do I get a timer to behave the same in both the debug and release version of the Flash player?
Thanks in advance for any suggestions.

HLS - how to reduce delay?

Does anyone know how to configure an HLS media server to reduce the delay of live streaming video a bit?
What types of parameters do I need to change?
I had heard that you could do some tuning using parameters like this: HLSMediaFileDuration
Thanks in advance
An HTTP Live Streaming system typically has an encoder, which produces segments of a certain number of seconds, and a media server (web server), which serves playlists containing a list of URLs to these segments to player applications.
Media Files = Segments = .ts files = MPEG2-TS files (in HLS speak)
There are some ways to reduce the delay in HLS:
Reduce the encoded segment length from Apple's recommended 10 seconds to 5 seconds or less (see the example playlist after this list). Reducing segment length increases network overhead and load on the web server.
Use lower bitrates; larger .ts files take longer to upload and download. If you use multi-bitrate streams, make sure the first bitrate listed in the playlist is a little lower than the bitrate most of your users use. This will reduce the time to start playing back the stream.
Get the segments from the encoder to the web server faster. Upload while still encoding if possible. Update the playlist as soon as the segment has finished uploading.
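For illustration, a live playlist with 2-second segments might look like this (segment names are placeholders):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:104
#EXTINF:2.000,
segment104.ts
#EXTINF:2.000,
segment105.ts
#EXTINF:2.000,
segment106.ts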
Also remember that the higher the delay the better the quality of your stream (low delay = lower quality). With larger segments, there is less overhead so more space for video data. Taking a longer time to encode results in better quality. More buffering results in less chance of video streams stuttering on playback.
HLS is all about trading quality of playback for longer delay, so you will never be able to use HLS for things like video conferencing. Typical delay in HLS is 30-60 sec, minimum in practice is around 15 sec. If you want low delay use RTP for streaming, but good luck getting good quality on low or variable speed networks.
Please specify which media server you use. Generally speaking, yes - changing the chunk size will definitely affect the delay. The smaller the first chunk, the quicker the video will be shown in the player.
Actually, Apple recommends dividing your file into small chunks of equal, integer-second length.
In practice, there is a huge difference between players. Some of them parse the manifest and adjust these values.
A known practice is to pre-cache the first chunks in memory at low and medium resolution (or to download them in the background of the app/page - Amazon does this, though their video is MSS).
I was having the same problem and the keys for me were:
Lower the segment length. I set it to 2s because I'm streaming on a local network. On other types of networks, you need to be careful with the overhead that a low segment length adds, which can impact your playback quality.
In your manifest, make sure the #EXT-X-TARGETDURATION is accurate. From here:
The EXT-X-TARGETDURATION tag specifies the maximum Media Segment duration. The EXTINF duration of each Media Segment in the Playlist file, when rounded to the nearest integer, MUST be less than or equal to the target duration; longer segments can trigger playback stalls or other errors. It applies to the entire Playlist file.
For some reason, the #EXT-X-TARGETDURATION in my manifest was set to 5 and I was seeing a 16-20s delay. After changing that value to 2, which is the correct one according to my segments' length, I am now seeing delays of 6-10s.
In summary, you should expect a delay of at least 3X your #EXT-X-TARGETDURATION. So lowering the segment length and the #EXT-X-TARGETDURATION value should help reduce the delay.

Is there is memory limitation to seek play flash Audio player AS3?

Flash Audio Player: I have an audio file which is 6 hours long. I am not able to seek beyond 03:22:54. Is there a memory limitation that prevents seeking past 12173943 milliseconds?
Apparently, some kind of limitation exists. AS3 fails to seek past 12173943 milliseconds (which is almost exactly the 03:22:54 you're hitting), while AS2 does seek past this point, but with less precision. See this question on SO for further details. I'm not aware of any workaround barring splitting the file into chunks shorter than 12000 seconds.
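If you do split the file, the seek math is simple (a sketch; chunk naming and loading are left to you):
const CHUNK_MS:Number = 10800000; // 3-hour chunks, safely under the limit

function seekTo(ms:Number):void {
    var chunkIndex:int = int(ms / CHUNK_MS); // which chunk file to load
    var offsetMs:Number = ms % CHUNK_MS;     // where to start inside it
    // load chunk `chunkIndex` into a Sound, then:
    // sound.play(offsetMs);
}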

How to play FLV video at "incoming frames speed" (not video-internal) coming from NetStream in ActionScript

How can I play NetStream frames immediately as they arrive, without any additional AS frame-rate logic?
I have recorded some audio & video data packets from the RTMP protocol received by Red5, and now I'm trying to send them back to the Flash client in a loop by pushing packets to NetStream with incrementing timestamps. The looped sequence is about ~20 sec long and is built from about ~200 RTMP packets (VideoData/AudioData).
Environment: both Flash client and server on localhost, no network bottleneck; the video is H.264, encoded earlier by the same Flash client.
It generally works, but the video is not very fluent - there are a lot of freezes, slowdowns and long pauses. The slower I transmit packets, the more pauses and freezes occur - even extremely long pauses, like transmitting the whole sequence 2x-3x times (~60 sec) with no effect. This comes up when forwarding slower than ~2 RTMP packets per second.
It looks like some AS logic is trying to force the frame rate of the video rather than simply outputting the received frames. So one of my questions is: does AS look at the in-video-frame FPS info in live streaming? Why can it play faster, but not slower? How can I play video "by frames" without synchronizing the video FPS to the RTMP packet timestamps?
On the other hand, if I push packets faster than they were recorded, the video is just faster but almost fluent - I simply can't get a slower or stable stream (the speed is still very irregular).
I have analysed some NetStream values:
.bufferLength = ~0 or 0.001, increasing when I forward packets extremely fast (like targeting ~90 fps)
.currentFPS = shows the real FPS as seen in the Video object, not incoming frames/s
.info.currentBytesPerSecond = ~8 kB/s to ~50 kB/s depending on forwarding speed
.info.droppedFrames = frequently increases, even if I stream packets at ~2/sec! It also jumps after a long self-initiated pause (but the buffer is 0 the whole time!)
.info.isLive = true
.info.dataBufferLength = same as .bufferLength
It looks like AS is dropping frames because the RTMP packets arrive too rarely - as if it expects them to arrive at the speed of the FPS encoded in the video frames.
My current best NetStream configuration:
chatStream.videoReliable = false;
chatStream.audioReliable = false;
chatStream.backBufferTime = 0;
chatStream.bufferTime = 0;
Note that if I set bufferTime to 1, the video is paused until "1 second of video" has been gathered - but this is not what actually happens: buffering is very slow, as if it assumes the video has an FPS of 100 or 200. Even if I forward packets fast (like targeting ~15 fps without a buffer), the buffer takes about 10-20 seconds to fill.
The loop, of course, starts with keyframed video data, and the keyframe interval of the sequence is about 15 frames.
Have you tried netStream.step(1)?
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#step()
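That is, pause the stream and drive it frame-by-frame at your own rate - a sketch (assuming ns is your NetStream; note that step() has its own requirements, e.g. in-buffer seeking support, so treat this as an idea to test rather than a guaranteed fix):
import flash.events.TimerEvent;
import flash.utils.Timer;

ns.pause();
var frameTimer:Timer = new Timer(1000 / 15); // target ~15 fps
frameTimer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
    ns.step(1); // render exactly one more frame from the buffer
});
frameTimer.start();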
Also I would remove 'live' from the play() call.
Finally, maybe Flash tries to sync to the audio like it does on the regular timeline. Have you tried a video without an audio channel?

Actionscript: Playing sound from microphone on speakers (through buffer) with constant delay

I'm looking for an example of code that samples a signal from the microphone and plays it on the speakers. I need a solution with a reasonably constant delay across different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't mind if it varies each time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes: one event would put data into a buffer, the other would read data out.
But it seems impossible to maintain a constant delay - either the delay grows constantly, or it shrinks to the point where I have fewer than 2048 samples to put out and the Sound class stops generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
PS: This is more of a problem on Android devices than on PC, although when I start to resize the application window on the PC the delay starts to grow as well.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices will use a different clock between recording and playback. This will be especially true for cell phones where what is running the microphone may very well be different hardware than the headphone audio output.
Basically, if you record at 44.1 kHz and play back at 44.1 kHz, but those clocks are not in sync, you may actually be recording at 44.099 kHz and playing back at 44.101 kHz. Over time, this drift will mean that you won't have enough data in the buffer to send to the output.
Another complication (and more than likely your problem) is that your record and playback sample rates may be different. If you record from the microphone at 11kHz and playback at 48kHz, you will note that 11 doesn't evenly fit into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm which is guaranteed to give you the necessary output. Other times, that 11kHz will get pushed to 44kHz and is deemed "close enough".
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
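A minimal sketch of that approach in AS3 (the buffer handling and the MIN_BUFFER threshold are illustrative and will need tuning; it assumes the microphone and output rates nominally match):
import flash.events.SampleDataEvent;
import flash.media.Microphone;
import flash.media.Sound;

var buffer:Vector.<Number> = new Vector.<Number>();
const MIN_BUFFER:int = 44100; // ~1 s of headroom before we start stretching

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44; // record at 44.1 kHz to nominally match Sound's output rate
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, onMicData);

function onMicData(e:SampleDataEvent):void {
    while (e.data.bytesAvailable) {
        buffer.push(e.data.readFloat()); // mono samples in
    }
}

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onPlayData);
sound.play();

function onPlayData(e:SampleDataEvent):void {
    var stretch:Boolean = buffer.length < MIN_BUFFER; // falling behind?
    var written:int = 0;
    while (written < 2048 && buffer.length > 1) {
        var s:Number = buffer.shift();
        e.data.writeFloat(s); // left
        e.data.writeFloat(s); // right
        written++;
        // When the buffer runs low, occasionally insert an extra sample
        // averaging this one and the next: inaudible, but it slows
        // playback slightly so the buffer can recover.
        if (stretch && written % 100 == 0) {
            var avg:Number = (s + buffer[0]) * 0.5;
            e.data.writeFloat(avg);
            e.data.writeFloat(avg);
            written++;
        }
    }
    // Real code must also pad with silence if fewer than 2048 stereo
    // pairs were written, otherwise the Sound stops dispatching events.
}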