Is it possible to disable the Jitter Buffer in WebRTC (Chrome/Chromium)?

I am trying to reduce the Chromium WebRTC video delay as much as possible for a remote machine control application. Since the transmitting and receiving PCs are directly connected via Ethernet (crossover cable) I'm guessing that receive buffering may not be necessary as there should be no delayed, out-of-order or lost packets.
I have rebuilt Chromium after adjusting the kMaxVideoDelayMs value in jitter_buffer_common.h. This has given mixed results, including erratic, choppy received video and a googPlisSent counter that rises steadily over time. In addition, googJitterBufferMs and googTargetDelayMs jump around erratically when kMaxVideoDelayMs is set below a certain threshold (around 60 ms). Everything appears to work well with kMaxVideoDelayMs set to 100 ms, but I would like to reduce the overall delay as much as possible.
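For reference, the change I made looks roughly like this; the exact header path, surrounding enum, and default value vary between Chromium/WebRTC revisions, so treat this as an illustrative sketch rather than exact source:

    // modules/video_coding/jitter_buffer_common.h (path varies by revision)
    // kMaxVideoDelayMs caps how old a frame may be before the jitter buffer
    // drops it; the upstream default is much larger (on the order of 10 s).
    enum { kMaxVideoDelayMs = 100 };  // lowered from the default for this test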
I would like to know if it is possible to disable or bypass the receive jitter buffer altogether, as that might reduce the overall delay between capturing video on the transmitting PC and displaying it on the receiving PC.

You still need a jitter buffer to store the packets until you have an entire frame (and to do the other related processing that hangs off the jitter buffer). The audio jitter buffer usually ends up driving things and controls when audio/video get played out. That's all deep in NetEq and likely can't be disabled.
If you run audio and video as separate streams (not synced, or with no audio), then video should already run pretty much as fast as possible. If there is delay, it's due to OS scheduling, and there may also be some pacing delay in the DeliverFrame code (or rather the code that ends up calling DeliverFrame).

Related

Chrome DevTools makes tab memory steadily increase until it crashes

I am using Chrome to run a page and it runs fine for hours on end; I can attest that no memory problems arise when it's running by itself. Whenever I use DevTools to debug something, however, the tab's memory footprint keeps increasing (as if there were a memory leak) until Chrome kills the tab for hitting its memory limit and it crashes.
Some things I could figure out: it happens if and only if the DevTools panel is open (regardless of which options are active), and it happens on pretty much any website (websites that load more things into memory tend to bloat faster). I also found that if the DevTools panel is closed, the memory usage sharply drops back to normal (something like from 900 MB to 200 MB, which is what it would be without DevTools), and reopening it makes the tab start to bloat again from scratch. I tried forcing garbage collection from the Performance tab, but that didn't do anything either. I don't know what causes this, and I couldn't find anything else that refers to this issue.
Any help in this matter is appreciated.

HLS - how to reduce delay?

Does anyone know how to configure an HLS media server to reduce the delay of live streaming video a bit?
What kinds of parameters do I need to change?
I had heard that you could do some tuning using parameters like HLSMediaFileDuration.
Thanks in advance
An HTTP Live Streaming (HLS) system typically has an encoder, which produces segments a certain number of seconds long, and a media server (web server), which serves player applications playlists containing a list of URLs to these segments.
Media Files = Segments = .ts files = MPEG2-TS files (in HLS speak)
There are some ways to reduce the delay in HLS:
Reduce the encoded segment length from Apple's recommended 10 seconds to 5 seconds or less. Reducing segment length increases network overhead and load on the web server.
Use lower bitrates; larger .ts files take longer to upload and download. If you use multi-bitrate streams, make sure the first bitrate listed in the master playlist is a little lower than the bitrate most of your users use. This will reduce the time to start playing back the stream (see the example master playlist after this list).
Get the segments from the encoder to the web server faster. Upload while still encoding if possible, and update the playlist as soon as the segment has finished uploading.
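As a sketch of the "first bitrate listed" point, a master playlist might look like this (bandwidths, resolutions, and URIs are only placeholders):

    #EXTM3U
    # Listing the lower-bitrate variant first lets most players start
    # playback faster, then switch up once bandwidth has been measured.
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
    low/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
    high/index.m3u8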
Also remember that the higher the delay the better the quality of your stream (low delay = lower quality). With larger segments, there is less overhead so more space for video data. Taking a longer time to encode results in better quality. More buffering results in less chance of video streams stuttering on playback.
HLS is all about accepting a longer delay in exchange for playback quality, so you will never be able to use HLS for things like video conferencing. Typical delay in HLS is 30-60 seconds; the minimum in practice is around 15 seconds. If you want low delay, use RTP for streaming, but good luck getting good quality on low-speed or variable-speed networks.
Please specify which media server you use. Generally speaking, yes, changing the chunk size will definitely affect the delay: the smaller the first chunk, the quicker the video will be shown in the player.
Apple actually recommends dividing your stream into small chunks of equal length, with durations in whole seconds.
In practice, there is a huge difference between players. Some of them parse the manifest and handle these values differently.
A known practice is to pre-cache the first chunks in low and medium resolution in memory (or try to download them in the background of the app/page; Amazon does this, though their video is MSS).
I was having the same problem and the keys for me were:
Lower the segment length. I set it to 2 s because I'm streaming on a local network. On other types of networks, you need to be careful with the overhead that a low segment length adds, which can impact your playback quality.
In your manifest, make sure the #EXT-X-TARGETDURATION is accurate. From the HLS specification:
The EXT-X-TARGETDURATION tag specifies the maximum Media Segment duration. The EXTINF duration of each Media Segment in the Playlist file, when rounded to the nearest integer, MUST be less than or equal to the target duration; longer segments can trigger playback stalls or other errors. It applies to the entire Playlist file.
For some reason, the #EXT-X-TARGETDURATION in my manifest was set to 5 and I was seeing a 16-20s delay. After changing that value to 2, which is the correct one according to my segments' length, I am now seeing delays of 6-10s.
In summary, you should expect a delay of at least 3x your #EXT-X-TARGETDURATION. So lowering the segment length and the #EXT-X-TARGETDURATION value should help reduce the delay.
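For illustration, a matching media playlist with 2-second segments looks something like this (segment names and sequence numbers are just placeholders); the key point is that #EXT-X-TARGETDURATION agrees with the rounded #EXTINF durations:

    #EXTM3U
    #EXT-X-VERSION:3
    # Target duration matches the (rounded) segment durations below.
    #EXT-X-TARGETDURATION:2
    #EXT-X-MEDIA-SEQUENCE:120
    #EXTINF:2.000,
    segment120.ts
    #EXTINF:2.000,
    segment121.ts
    #EXTINF:2.000,
    segment122.ts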

Why am I missing frames while recording with Flash?

I'm recording video from a webcam with Flash and saving it on an Adobe (Flash) Media Server.
What are all the things that can contribute to a choppy video full of missing frames? What do I need to adjust to fix this, or what can contribute to the problem?
The server is an Amazon Web Services M1 Medium EC2 instance: a 2 GHz processor with 3.75 GB of RAM. Looking at the admin console for AMS, the server never gets maxed out in terms of RAM or CPU usage.
Bandwidth never goes over 4 Mb.
The Flash recorder captures at 320x240 at 15 fps.
I used setQuality(0, 100) on the camera. I can still make out individual "pixels" when viewing my recording, but it isn't bad.
The server has nothing to do with this. If the computer running the Flash file can't keep up, you get dropped frames. You basically have 1000/stage.frameRate ms to run every calculation for each frame. If your application is running at 30 fps, that is roughly 33 ms per frame. You need to make sure everything that needs to happen on each frame can run in that amount of time, which is obviously difficult or impossible to guarantee across the wide range of hardware out there.
Additionally, 15 fps itself is too low. The low-end threshold for what the human eye perceives as motion is around 24 fps, so at 15 fps you will notice choppiness. Ideally, you want to record at 30 fps, which is about where the human eye stops being able to distinguish individual frames.

Flash Embedded FLV Memory Leak

I am making a game where I have several small character MovieClips which appear on screen randomly. There can be several characters of the same type, and when they are removed from the stage I store them in a memory pool to reuse them.
These characters have several different keyframes which I call to make them do specific things, like fly, land, etc. To improve performance, FLVs were made for their different actions, and these have been embedded in the timeline.
I am having a problem where the amount of memory assigned to Video constantly increases as the game is played, even though I am not creating more instances of the characters. I have been researching garbage collection for video, but all the material I find is about the FLVPlayback component, and I haven't found anything helpful.
Does anyone have any ideas?
Thanks!
How much is your memory increasing? If it starts at, e.g., 80 MB and slowly climbs to 140 MB, then either stays there or drops to 120 MB and creeps up again, there's no need to worry. Unfortunately, that's how the Flash GC works: even if you're not leaking any memory, it will show a slow increase (then a sudden drop as the GC collects the trash, and then a slow increase again).
However, it could also be that you have a real memory leak, but to assess that, you'd need to post some code. By the way, using memory pools is a great idea in games; it's good you're doing it already.

Actionscript: Playing sound from microphone on speakers (through buffer) with constant delay

I'm looking for an example of code that samples a signal from the microphone and plays it on the speakers. I need a solution that has a reasonably constant delay on different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't mind if it varies every time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes. One event would put data into a buffer, the other would read data from it. But it seems impossible to maintain a constant delay: either the delay grows constantly, or it shrinks to the point where I have fewer than 2048 samples to put out and the Sound class stops generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
PS: This is more of a problem on Android devices than on PC, although when I start to resize the application window on the PC, the delay starts to grow as well.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices will use a different clock for recording and playback. This is especially true for cell phones, where the microphone may be driven by different hardware than the headphone audio output.
Basically, if you record at 44.1 kHz and play back at 44.1 kHz, but those clocks are not in sync, you may actually be recording at 44.099 kHz and playing back at 44.101 kHz. Over time, this drift will mean that you won't have enough data in the buffer to send to the output.
Another complication (and more than likely your problem) is that your recording and playback sample rates may be different. If you record from the microphone at 11 kHz and play back at 48 kHz, you will note that 11 doesn't divide evenly into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm that is guaranteed to give you the necessary output; other times, that 11 kHz gets pushed to 44 kHz and is deemed "close enough".
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
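To illustrate the "add an averaged sample" idea, here is a minimal sketch in C++ (not Flash API code; the class name, low-water mark, and insertion interval are arbitrary tuning values for illustration):

    #include <cstddef>
    #include <deque>
    #include <vector>

    // Minimal sketch: a playback buffer that stretches the stream by one
    // interpolated sample when it starts to run dry, so slow clock drift
    // between capture and playback does not end in an underrun.
    class DriftCompensatingBuffer {
     public:
      // Called from the capture side (e.g. the microphone sample handler).
      void Push(const std::vector<float>& samples) {
        buffer_.insert(buffer_.end(), samples.begin(), samples.end());
      }

      // Called from the playback side; always returns exactly `count` samples.
      std::vector<float> Pull(std::size_t count, std::size_t low_watermark) {
        std::vector<float> out;
        out.reserve(count);
        std::size_t since_insert = 0;
        while (out.size() < count && !buffer_.empty()) {
          const bool running_low = buffer_.size() < low_watermark;
          if (running_low && !out.empty() && since_insert > 500) {
            // Insert one extra sample: the average of the last sample written
            // and the next buffered sample. Done rarely, this is inaudible.
            out.push_back(0.5f * (out.back() + buffer_.front()));
            since_insert = 0;
          } else {
            out.push_back(buffer_.front());
            buffer_.pop_front();
            ++since_insert;
          }
        }
        // If the capture side fell far behind, pad with silence.
        out.resize(count, 0.0f);
        return out;
      }

     private:
      std::deque<float> buffer_;
    };

The same idea works in reverse: if the buffer keeps growing because the capture clock runs faster than playback, occasionally drop a sample instead of inserting one.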