Wi-Fi/Bluetooth scanning for location services causes hiccups in video decoding

An app decodes H.264/H.265 video streams smoothly while Wi-Fi and Bluetooth scanning are turned off. With scanning turned on, decoding pauses periodically, for up to 3 seconds at a time. The culprit varies by device: on some devices Wi-Fi scanning triggers the pauses, on others it is Bluetooth scanning.
This app uses the following code to get a decoded video frame:
// Poll the decoder for a finished frame; TIMEOUT_USEC bounds the wait in microseconds.
MediaCodec.BufferInfo bi = new MediaCodec.BufferInfo();
int iOutputBufferIndex = myMediaCodec.dequeueOutputBuffer(bi, TIMEOUT_USEC);
When the scanning is on, it occasionally takes up to 3 seconds to get a valid iOutputBufferIndex (>= 0); during that whole period dequeueOutputBuffer keeps returning MediaCodec.INFO_TRY_AGAIN_LATER (-1).
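One way to confirm from the app side that the stall is inside the codec, and to quantify it, is a drain loop that times how long dequeueOutputBuffer keeps returning INFO_TRY_AGAIN_LATER. A minimal sketch, assuming a configured decoder in myMediaCodec and that decoding, TAG, and TIMEOUT_USEC are defined elsewhere:
MediaCodec.BufferInfo bi = new MediaCodec.BufferInfo();
long stallStartNs = 0;
while (decoding) {
    int index = myMediaCodec.dequeueOutputBuffer(bi, TIMEOUT_USEC);
    if (index >= 0) {
        if (stallStartNs != 0) {
            long stallMs = (System.nanoTime() - stallStartNs) / 1_000_000;
            if (stallMs > 100) Log.w(TAG, "decoder stalled for " + stallMs + " ms");
            stallStartNs = 0;
        }
        myMediaCodec.releaseOutputBuffer(index, true /* render */);
    } else if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
        if (stallStartNs == 0) stallStartNs = System.nanoTime(); // stall begins
    } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        MediaFormat format = myMediaCodec.getOutputFormat(); // resolution etc. may change
    }
}
Timestamping the input side the same way would show whether frames also stop going in, which would point at the network or demuxer rather than the codec.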
Could anyone shed some light on this and suggest a remedy?

Related

WebRTC native sends packets very slow

I am working on a streaming server based on WebRTC native. For this project, I've hacked the WebRTC native source code (version M60, win10 x64) so that I can feed it a pre-encoded H264 bitstream (1080p25, all frames encoded as I-frames). By default, WebRTC uses 42e01f as the H264 profile; I changed it to 640032 (level 5) to support 1080p. In h264_encoder_impl.cc I commented out the encoding part, just copying the bytes from the input frame into the buffer of encoded_image_ and generating the fragmentation information.
It is working, but the speed of sending packets to the client (Chrome) is very slow (about 2~3 fps). If I limit the feeding speed to 12 fps, it works well.
I spent a lot of time debugging the code. What I found is that the sending of packets in paced_sender.cc is slow, so the packet queue soon fills up; the encoder is then blocked and stops putting new packets into the queue until the queue has room again. I tried removing the bitrate limitation in paced_sender.cc, but the sending speed was still slow.
I also checked the graphs on Chrome's WebRTC debugging page (chrome://webrtc-internals) to see whether the problem could be on the receiver side: decoding costs only about 2 ms per frame, frames are received at about 2~3 fps, and no packets are lost.
P.S. The LAN is 1 Gbps.
After days of debugging, I still have no idea why the packets are sent so slowly. The H264 bitstream is encoded as all I-frames; could that be the problem?
Any reply will be appreciated, thanks in advance!
Answering my own question: build WebRTC in release mode.
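For the GN-based build that M60 uses, that means something like gn gen out/Release --args="is_debug=false" followed by ninja -C out/Release (exact commands can vary by checkout). Evidently the debug build added enough overhead that the pacer in paced_sender.cc could not keep up, which matches the symptoms above.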

Long GPU frames – Chrome devtools performance monitor

I'm running into an issue with WebGL in Google Chrome (61.0): really long frames that occur about once every 60-90 seconds on my machine. Devtools logs these frames as GPU activity, but what actually makes them long is opaque to me. The app uses REGL and updates attribute buffers every frame with subdata. I'm looking for insight into what could be causing these long frames and how to go about debugging them, since Chrome's devtools provide no details about GPU frames.
This issue does not occur with Safari 11.0.
After further investigation it seems related to use of the ANGLE_instanced_arrays extension. This issue does not occur when disabling instancing and drawing each instance with a separate draw call.

Slow text input in html field from barcode scanner

I have a webpage on my LAN for entering barcodes into a database in real time through an input field (Django + PostgreSQL + nginx). It works fine, but lately we have a customer that uses 72-character barcodes (Data Matrix), which slows down input: before the next scan, the user must wait for the previous barcode to finish appearing in the field (about 1-2 seconds, one character after the other).
Is there a way to reduce the latency of drawing the scanned text in the HTML field?
Ideally the whole barcode would appear at once, not one character after the other. The scanner is set to append an "Enter" after the scanned text.
In the end, as Brad stated, the problem is mostly in the scanner's settings (USB in HID mode), although PC speed is also a factor. After several tests on a dual-core Linux machine, I estimate the delay is about 85% due to the scanner and 15% due to the PC/browser combination.
To solve the problem I first found and downloaded the complete manual of our 2D barcode scanner (306 pages). I initially focused on USB Keystroke Delay as a cause, but the default was already 'No Delay'.
The setting that affected reading speed was USB Polling Interval, an option that applies only to the USB HID Keyboard Emulation Device.
The polling interval determines the rate at which data can be sent between the scanner and the host computer. A lower number means a faster data rate: the default was 8 ms, which I lowered to 3 ms without problems. Lower values weren't any faster, probably because the threshold where the PC becomes the bottleneck had been reached.
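For a rough sense of scale, if the scanner delivers one character per polling interval, a 72-character barcode needs at least 72 × 8 ms ≈ 0.58 s at the default, but only 72 × 3 ms ≈ 0.22 s at the faster setting (a lower bound that ignores key-up reports and host-side processing).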
CAUTION: Ensure your host machine can handle the selected data rate; selecting a data rate that is too fast for your host machine may result in lost data. In my case, lowering the polling interval to 1 ms caused no data loss on the physical PC, but when testing inside a virtual machine, data was lost as soon as I reached 6 ms.
Another interesting thing is that browsers tend to respond significantly slower after a long period of use with many tabs open (a couple of hours in my case), probably due to caching.
Tests were done with Firefox and Chromium on an old dual-core PC running Lubuntu (Linux).
This probably has nothing to do with your page, but with the speed of the scanner interface. Most of these scanners intentionally rate-limit their input so as not to overrun the computer's keyboard buffer and drop characters. Think about it: when you copy/paste text, it doesn't take a long time for the characters to be redrawn; everything appears instantly.
Most of those scanners are configurable. Check to see if there is an option on your scanner to increase its character rate.
On Honeywell and many other brands of scanner, the USB Keystroke Interval is labelled as an INTERCHARACTER DELAY.
Also, if there is a baud rate setting, that would be something to increase.

How to play FLV video at the "incoming frame rate" (not the video-internal one) coming from NetStream in ActionScript

How can I play NetStream frames immediately as they arrive, without any additional AS frame-rate logic?
I have recorded some audio and video data packets from the RTMP protocol received by Red5, and now I'm trying to send them back to the Flash client in a loop by pushing the packets to NetStream with incrementing timestamps. The looped sequence is about ~20 sec long and is built from about ~200 RTMP packets (VideoData/AudioData).
Environment: both Flash client and server on localhost, no network bottleneck; the video was H.264-encoded earlier by the same Flash client.
It generally works, but the video is not very fluid: there are a lot of freezes, slowdowns and long pauses. The slower I transmit packets, the more pauses and freezes appear, including extremely long pauses in which the whole sequence is transmitted 2x-3x over (~60 sec) with no effect on screen; this happens when forwarding slower than ~2 RTMP packets per second.
It looks as if some AS logic is trying to enforce the video's frame rate rather than simply output the received frames. So my questions: does AS look at the fps info encoded in the video frames when live streaming? Why can it play faster, but not slower? How can I play the video "by frames", without synchronizing the video fps with the RTMP packet timestamps?
On the other hand, if I push packets faster than they were recorded, the video simply plays faster but almost fluidly; I just can't get a slower or stable stream (the speed stays very irregular).
I have analysed some NetStream values:
.bufferLength = ~0 or 0.001, increasing when I forward packets extremely fast (like targeting ~90 fps)
.currentFPS = shows the real FPS count as seen in the Video object, not incoming frames/s
.info.currentBytesPerSecond = ~8 kB/s to ~50 kB/s depending on forwarding speed
.info.droppedFrames = increases frequently, even if I stream packets at ~2/sec! It also jumps after a long self-initiated pause (even though the buffer is 0 the whole time!)
.info.isLive = true
.info.dataBufferLength = same as .bufferLength
It looks like AS is dropping frames because the RTMP packets arrive too rarely, as if it expected them to arrive at the fps encoded in the video frames.
My current best NetStream configuration:
chatStream.videoReliable = false;
chatStream.audioReliable = false;
chatStream.backBufferTime = 0;
chatStream.bufferTime = 0;
Note that if I set bufferTime to 1, the video pauses until "1 second of video" has been gathered, but that is not what actually happens: buffering is very slow, as if the video had an FPS of 100 or 200. Even when I forward packets fast (targeting ~15 fps without a buffer), the buffer takes about 10-20 seconds to fill.
The loop, of course, starts with keyframed video data, and the keyframe interval of the sequence is about 15 frames.
Have you tried netStream.step(1)?
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#step()
Also, I would remove 'live' from the play() call.
Finally, maybe Flash tries to sync to the audio as it does in a regular timeline. Have you tried a video without an audio channel?

Actionscript: Playing sound from microphone on speakers (through buffer) with constant delay

I'm looking for example code that samples the signal from the microphone and plays it on the speakers. I need a solution with a reasonably constant delay on different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't mind if it varies every time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes: one event would put data into a buffer, the other would read it out. But it seems impossible to maintain a constant delay: either the delay grows steadily, or it shrinks to the point where I have fewer than 2048 samples to put out and the Sound class stops generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
P.S. This is more of a problem on Android devices than on a PC, although the delay also starts to grow on a PC when I start resizing the application window.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices use different clocks for recording and playback. This is especially true for cell phones, where the hardware driving the microphone may well be different from the hardware driving the headphone output.
Basically, if you record at 44.1kHz and play back at 44.1kHz, but those clocks are not in sync, you may be recording at 44.099kHz and play back at 44.101kHz. Over time, this drift will mean that you won't have enough data in the buffer to send to the output.
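To put those numbers in perspective: a 2 Hz mismatch (playing back at 44.101 kHz while recording at 44.099 kHz) drains the buffer by about 2 samples every second, so even a cushion of 2048 samples (the figure from the question) is used up in roughly 2048 / 2 ≈ 1024 s, or about 17 minutes.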
Another complication (and more than likely your problem) is that your record and playback sample rates may be different. If you record from the microphone at 11kHz and playback at 48kHz, you will note that 11 doesn't evenly fit into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm which is guaranteed to give you the necessary output. Other times, that 11kHz will get pushed to 44kHz and is deemed "close enough".
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
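A minimal sketch of that easiest method (Java here purely for illustration; the function name and the fixed insertion gap are made up): whenever the playback buffer runs low, stretch the next block by inserting, every gap samples, one sample that averages its two neighbours.
// Stretch a mono float block by inserting one interpolated sample
// every `gap` input samples; each inserted sample is the average of
// its two neighbours, which stays inaudible when gap is large.
static float[] stretch(float[] in, int gap) {
    int extra = (in.length - 1) / gap;            // number of samples inserted
    float[] out = new float[in.length + extra];
    int o = 0;
    for (int i = 0; i < in.length; i++) {
        out[o++] = in[i];
        if ((i + 1) % gap == 0 && i + 1 < in.length) {
            out[o++] = (in[i] + in[i + 1]) / 2f;  // averaged filler sample
        }
    }
    return out;
}
The opposite drift (the buffer growing without bound) is handled symmetrically, by dropping one sample every gap samples and averaging it into its neighbour.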