How to set buffer settings in Flex? - actionscript-3

How should I set buffer settings in a Flex SWF? How do I achieve buffer settings in Flex?
For example, I use the code below to set the buffer time on a NetStream:
ns.bufferTime = 0;
But the delay in playback varies after some time. I want a constant delay throughout playback, because I want to set bufferTime and the other buffer settings myself in ActionScript and test them to measure the playback delay of the video.
Please help.

You should take connection time into consideration. Your application connects to the server, and you cannot control that time from ActionScript, in particular not with bufferTime. You would have to tune the server.

You're removing the buffer time; it would be better to set a buffer time of, say, 3 seconds or so. This will help with "hiccups".
ns.bufferTime = 3.0;
The standard is around 3+ seconds. I actually prefer a higher buffer time for larger movies.
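For completeness, a minimal sketch of wiring that up (the RTMP URL and stream name below are placeholders, not from the question):

import flash.events.NetStatusEvent;
import flash.net.NetConnection;
import flash.net.NetStream;

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://example.com/live"); // placeholder server URL

function onStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        ns.bufferTime = 3.0; // buffer roughly 3 seconds to smooth out hiccups
        ns.play("myStream"); // placeholder stream name
    }
}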

Related

Is it possible to disable the Jitter Buffer in WebRTC (Chrome/Chromium)

I am trying to reduce the Chromium WebRTC video delay as much as possible for a remote machine control application. Since the transmitting and receiving PCs are directly connected via Ethernet (crossover cable) I'm guessing that receive buffering may not be necessary as there should be no delayed, out-of-order or lost packets.
I have rebuilt Chromium after adjusting the kMaxVideoDelayMs value in jitter_buffer_common.h. This has given mixed results including creating erratic behavior with receiving video (choppy) as well as making googPlisSent steadily rise over time. In addition, googJitterBufferMs and googTargetDelayMs jump around erratically when kMaxVideoDelayMs is set lower than a certain threshold (around 60ms). Everything appears to work well with kMaxVideoDelayMs set to 100ms but I would like to try to reduce overall delay as much as possible.
I would like to know if it is possible to disable or bypass the receive jitter buffer altogether as it seems that might reduce the overall delay between capturing video on the transmitting PC and displaying it on the receiving PC.
You still need a jitter buffer to store the packets until you have an entire frame (and do other related processing that's hung off the jitter buffer). Audio jitter buffers usually effectively run things, and control when audio/video get displayed. That's all deep in NetEq, and likely can't be disabled.
If you run audio and video as separate streams (not synced, or no audio), then video should already run pretty much as fast as possible, but if there is delay, it's due to the OS scheduling, and there may also be some amount of pacing delay in the DeliverFrame code (or rather the code that ends up calling DeliverFrame).

HLS - how to reduce delay?

Does anyone know how to configure an HLS media server to reduce the delay of live streaming video a little?
What kinds of parameters do I need to change?
I had heard that you could do some tuning using parameters like this: HLSMediaFileDuration
Thanks in advance
An HTTP Live Streaming system typically has an encoder which produces segments of a certain number of seconds and a media server (web server) which serves playlists containing a list of URLs to these segments to player applications.
Media Files = Segments = .ts files = MPEG2-TS files (in HLS speak)
There are some ways to reduce the delay in HLS:
Reduce the encoded segment length from Apple's recommended 10 seconds to 5 seconds or less. Reducing segment length increases network overhead and load on the web server.
Use lower bitrates; larger .ts files take longer to upload and download. If you use multi-bitrate streams, make sure the first bitrate listed in the playlist is a little lower than the bitrate most of your users use. This will reduce the time to start playing back the stream.
Get the segments from the encoder to the web server faster. Upload while still encoding if possible, and update the playlist as soon as the segment has finished uploading.
Also remember that the higher the delay the better the quality of your stream (low delay = lower quality). With larger segments, there is less overhead so more space for video data. Taking a longer time to encode results in better quality. More buffering results in less chance of video streams stuttering on playback.
HLS is all about trading quality of playback for longer delay, so you will never be able to use HLS for things like video conferencing. Typical delay in HLS is 30-60 sec, minimum in practice is around 15 sec. If you want low delay use RTP for streaming, but good luck getting good quality on low or variable speed networks.
Please specify which media server you use. Generally speaking, yes, changing the chunk size will definitely affect the delay: the smaller the first chunk, the quicker the video will be shown in the player.
Apple actually recommends dividing your file into small chunks of equal, integer-second length.
In practice, there is a huge difference between players. Some of them parse the manifest and change these values.
A known practice is to pre-cache the first chunks in memory in low and medium resolution (or to download them in the background of the app/page; Amazon does this, though their video is MSS).
I was having the same problem and the keys for me were:
Lower the segment length. I set it to 2s because I'm streaming on a local network. In other types of networks, you need to be careful with the overhead that a low segment length adds, which can impact your playback quality.
In your manifest, make sure the #EXT-X-TARGETDURATION is accurate. From here:
The EXT-X-TARGETDURATION tag specifies the maximum Media Segment
duration. The EXTINF duration of each Media Segment in the Playlist
file, when rounded to the nearest integer, MUST be less than or equal
to the target duration; longer segments can trigger playback stalls
or other errors. It applies to the entire Playlist file.
For some reason, the #EXT-X-TARGETDURATION in my manifest was set to 5 and I was seeing a 16-20s delay. After changing that value to 2, which is the correct one according to my segments' length, I am now seeing delays of 6-10s.
In summary, you should expect a delay of at least 3X your #EXT-X-TARGETDURATION. So, lowering the segment length and the #EXT-X-TARGETDURATION value should help reduce the delay.
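For illustration, a live playlist matching that setup (2-second segments, a target duration of 2) might look like the following; the segment names are made up:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:2.000,
segment120.ts
#EXTINF:2.000,
segment121.ts
#EXTINF:2.000,
segment122.ts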

Reduce FPS on OSMF stream - Issue with MPEG-2 header

I've been searching all over and can't find a solution. I have a 25 FPS video that I'm playing on OSMF, but OSMF insists on playing with 29-31 FPS. This causes the video to play ~15% faster than real time. The result is extremely noticeable if you open the same video in VLC and play it side by side.
The problem comes in when I try to do a live stream. It will eat through the buffer and catch up to real time then the stream crashes because there's no new video waiting.
I've tried tracing the code to find out where the frames are actually output to the screen, but I hit a dead end at the SWC file. I also have tried searching online but I can't find anything about limiting the FPS - everyone is just interested in increasing it.
I'd rather play at 15 FPS and drop 10 frames per second than catch up to real time and crash tragically.
Edit - after an entire weekend spent staring at this issue I've made some incredible headway. First and foremost, the only way to limit the FPS in OSMF is by sending a custom FLV header with the timestamp set appropriately (1000 / FPS difference between each frame)
Realizing this, I could temporarily solve the issue I'm having by manually setting the timestamps based on an internal counter: each time a frame is processed, set timestamp = last_timestamp + 40;. The problem is that I don't know if the video will always be 25 FPS. Some day I may have 30 FPS or even 60 FPS video streams. To make this more robust, I decided to decode the MPEG-2 header (read the PTS value) and convert it to an FLV header.
Now here's the issue… This video file (theoretically 25 FPS) plays perfectly in QuickTime. As a result I know the headers are fine because an expensive piece of software with billions of dollars behind it properly calculated the frame rate. But when I read the PTS from the header (as per this SO post) and divide by 90 (convert 90Khz clock to millisecond timestamp) each timestamp is 33 or 34 milliseconds apart - the 29~31 FPS I was getting.
So why is the PTS giving me timestamps that are 33-34 milliseconds apart when I know the video is 25 FPS (40 milliseconds apart)? More importantly, how is QuickTime reading the MPEG-2 header so that everything plays correctly?
First, to answer my original question:
Reducing FPS in OSMF
You must create your own implementation of the file handler which parses the video header and modifies the timestamps. There is no way to tell OSMF to play in "slow motion" or "fast forward". For an example of creating your own file handler, look at the FLVParser class. Notice that there are separate classes for parsing video and audio tags. The headers must be updated in EACH of these to ensure that video and audio play back in sync.
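As a rough illustration of the low-level write such a parser ends up doing (this is not an OSMF API, just the standard FLV tag layout, where bytes 4-6 hold the lower 24 bits of the timestamp and byte 7 the upper 8 bits):

import flash.utils.ByteArray;

// Overwrite the timestamp field of a raw FLV tag held in a ByteArray
// (assumes the tag header starts at offset 0).
function setFlvTagTimestamp(tag:ByteArray, timestampMs:uint):void {
    tag[4] = (timestampMs >> 16) & 0xFF;
    tag[5] = (timestampMs >> 8) & 0xFF;
    tag[6] = timestampMs & 0xFF;
    tag[7] = (timestampMs >> 24) & 0xFF; // TimestampExtended byte
}

// Forcing 25 FPS: stamp each tag 40 ms (1000 / 25) after the previous one.
var lastTimestamp:uint = 0;
function stampNextTag(tag:ByteArray):void {
    lastTimestamp += 40;
    setFlvTagTimestamp(tag, lastTimestamp);
}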
When a file is passed to the video parser, each timestamp is considered relative to the first. So if the first timestamp is 1234 then this will be set as "time zero" and all future timestamps will be relative to this. This is important. If you skip a video tag and the first timestamp you send is from later in the video, it will use the wrong value for "time zero" and things will not be sync'd properly.
This brings us to…
My issue
First and foremost, the durations in the M3U8 did not match up with the sum of the differences of timestamps. Starting at the first timestamp and labeling it "time zero", then looking at the last time stamp and subtracting time zero from it, the resulting time span was not equal to the expected duration of the TS file.
It turns out when VLC or QuickTime encounter this situation (the sum of PTS values does not equal the duration of the video) they generate new headers on the fly. Take the duration, divide by the number of frames, there's your new offset between PTS values.
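As a purely illustrative calculation (the numbers are made up):

var segmentDurationMs:Number = 10000; // duration advertised for the TS segment
var frameCount:int = 250; // frames actually found while parsing
var frameIntervalMs:Number = segmentDurationMs / frameCount; // 40 ms, i.e. 25 FPS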
That was my first problem. Once that was solved I was no longer gaining a second every 10 seconds, but instead gaining a second every 2 minutes. Turns out I was also loading the next TS file a bit too early (off-by-one error) which was causing me to drop a packet from each TS. This led to losing one frame every 10 seconds.
Additional Info
I ran into another issue with FPS shortly after this, which I have found a solution for. If anyone is experiencing OSMF playing videos too quickly, I urge you to grep your code for bufferTimeMax.
When bufferTimeMax > 0 and bufferLength >= bufferTimeMax, audio plays faster until bufferLength reaches bufferTime. If a live stream is video-only, video plays faster until bufferLength reaches bufferTime.
Depending on how much playback is lagging (the difference between bufferLength and bufferTime), Flash Player controls the rate of catch-up between 1.5% and 6.25%. If the stream contains audio, faster playback is achieved by frequency domain downsampling which minimizes audible distortion.
At the 6.25% maximum, that works out to roughly 0.94 seconds per 15 seconds of playback, which matches the gaining-a-second-every-10-20-seconds behaviour I was seeing.
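If that catch-up behaviour is what you are seeing, disabling it is a one-line experiment, assuming you can reach the underlying NetStream; per the quoted behaviour it only kicks in when bufferTimeMax > 0:

ns.bufferTimeMax = 0; // catch-up only happens when bufferTimeMax > 0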
You would need to set the FPS manually.
In Flash Professional, you can change the framerate in Document Properties.
In Flash Builder, you can set the framerate using metadata, as described here and here.
Thus...
[SWF(frameRate='15')]
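If you need to adjust it at runtime rather than in metadata, the same setting is exposed on the stage (15 is just the same illustrative value):

stage.frameRate = 15;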

Actionscript: Playing sound from microphone on speakers (through buffer) with constant delay

I'm looking for an example of code that samples the signal from the microphone and
plays it on the speakers. I need a solution that has a reasonably constant delay on
different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't
mind if it varies every time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes.
One event would put data into a buffer; the other would read data from it.
But it seems impossible to maintain a constant delay: either the delay grows constantly, or
it shrinks to the point where I have fewer than 2048 samples to put out and the Sound class stops
generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
PS: This is more of a problem on Android devices than on PC, although when I start to resize the application
window on PC the delay starts to grow as well.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices will use a different clock between recording and playback. This will be especially true for cell phones where what is running the microphone may very well be different hardware than the headphone audio output.
Basically, if you record at 44.1kHz and play back at 44.1kHz, but those clocks are not in sync, you may actually be recording at 44.099kHz and playing back at 44.101kHz. Over time, this drift will mean that you won't have enough data in the buffer to send to the output.
Another complication (and more than likely your problem) is that your record and playback sample rates may be different. If you record from the microphone at 11kHz and playback at 48kHz, you will note that 11 doesn't evenly fit into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm which is guaranteed to give you the necessary output. Other times, that 11kHz will get pushed to 44kHz and is deemed "close enough".
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
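A minimal sketch of that approach, assuming a mono 44 kHz microphone feed; the buffer, the block size, and the crude "repeat a decayed copy of the last sample" padding are illustrative choices, not a tested recipe:

import flash.events.SampleDataEvent;
import flash.media.Microphone;
import flash.media.Sound;

var micBuffer:Vector.<Number> = new Vector.<Number>();
var lastSample:Number = 0;

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44; // capture at 44 kHz
mic.setSilenceLevel(0); // always deliver samples
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, onMicData);

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSpeakerData);
sound.play();

function onMicData(e:SampleDataEvent):void {
    // The microphone delivers mono float samples; queue them for playback.
    while (e.data.bytesAvailable) {
        micBuffer.push(e.data.readFloat());
    }
}

function onSpeakerData(e:SampleDataEvent):void {
    // The Sound object expects 2048-8192 stereo sample pairs per event.
    for (var i:int = 0; i < 4096; i++) {
        var sample:Number;
        if (micBuffer.length > 0) {
            sample = micBuffer.shift();
        } else {
            // Underrun: pad with a decayed copy of the last sample so the
            // Sound object keeps firing events instead of stopping.
            sample = lastSample * 0.5;
        }
        lastSample = sample;
        e.data.writeFloat(sample); // left channel
        e.data.writeFloat(sample); // right channel
    }
}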

Flex/Actionscript determine if NetStream has audio by analysing audioBytesPerSecond

I need to "look" at a NetStream and determine if I'm receiving audio. From what I investigated, i may use the property audioBytesPerSecond from NetStreamInfo:
"(audioBytesPerSecond) Specifies the rate at which the NetStream audio
buffer is filled in bytes per second. The value is calculated as a
smooth average for the audio data received in the last second."
I also learned that the NetStream may contain some overhead bytes from the network, so what is the minimum reasonable audioBytesPerSecond value for determining whether the NetStream is playing audio (and not just noise, for example)?
Can this analysis be done this way?
Thanks in advance!
Yes you can do it this way. It's rather subjective, however.
Try to find a threshold that works for you. We used 5 kilobits/sec in the past. If the amount of data falls below this value, they are likely not sending any audio. Note, we were using the stream.info.byteCount property (you might want a slightly lower value if you're using audioBytesPerSecond).
This is pretty easy to observe if you speak into the microphone and periodically check audioBytesPerSecond or the other counters/statistics that are available.
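A rough sketch of that check, polling once a second; the 5 kbit/s threshold and the timer interval are just the assumptions mentioned above and should be tuned for your streams:

import flash.events.TimerEvent;
import flash.net.NetStream;
import flash.utils.Timer;

var ns:NetStream; // the NetStream you are already playing
const AUDIO_THRESHOLD:Number = 5 * 1024 / 8; // ~5 kilobits/sec, expressed in bytes/sec

var audioCheckTimer:Timer = new Timer(1000);
audioCheckTimer.addEventListener(TimerEvent.TIMER, checkAudio);
audioCheckTimer.start();

function checkAudio(e:TimerEvent):void {
    var receivingAudio:Boolean = ns.info.audioBytesPerSecond > AUDIO_THRESHOLD;
    trace("receiving audio: " + receivingAudio);
}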