High "Receiving Time" for HTTP Responses below 500 bytes in Chrome Devtools - google-chrome

While using devtools Network tab on Chrome 15 (stable) on Windows 7 and
Windows XP, I am seeing cases where the "receiving" time for an HTTP
response is >100 ms even though the response is a 302 redirect or a small
image (beacon) with a payload below 500 bytes (headers + content).
Capturing the TCP traffic on Wireshark clearly shows the server sent
the entire HTTP response in a single TCP packet, so receiving time should
have been 0. A good example is CNN homepage, or any major website that has a lot of
ads and tracking beacons.
This brings up a couple of questions:
What is defined as "receiving" in Chrome DevTools? Is this the time
from the first packet to the last packet?
What factors in the client machine/operating system impact
"receiving" time, outside of the network/server communication?
In my tests I used a virtual machine for Windows XP, while Windows 7
was on a desktop (quad core, 8gb ram).

The "receiving time" is the time between the didReceiveResponse ("Response headers received") and didReceiveData ("A chunk of response data received") WebURLLoaderClient events reported by the network layer, so some internal processing overhead may apply.
In a general case, keep in mind that the HTTP protocol is stream-oriented, so the division of data between TCP packets is not predictable (half of your headers may get into one packet, the rest and the response body may get into the next one, though this does not seem to be your case.)
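To make the stream-oriented point concrete, here is a minimal Python sketch (example.com is just a placeholder host, not anything from the question): the client has to keep reading until the stream ends, and each read may return an arbitrary slice of the response, regardless of how the server's packets were laid out on the wire.

```python
import socket

# Illustration only: TCP is a byte stream, so one HTTP response may arrive
# in a single recv() call or be split across several, independent of the
# packet boundaries seen in Wireshark.
sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

response = b""
while True:
    chunk = sock.recv(4096)   # may return any number of bytes up to 4096
    if not chunk:             # empty read: the server closed the connection
        break
    response += chunk
sock.close()

print(len(response), "bytes received across an unpredictable number of reads")
```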
Whenever possible, use the latest version of Chrome available. It is likely to contain fewer errors, including in the network layer :-)

The Nagle algorithm and the delayed ACK algorithm are two mechanisms that are enabled by default on Windows machines. They introduce delays in the traffic of small payloads in an attempt to reduce some of the chattiness of TCP/IP.
Delayed ACK will cause ~200 ms of additional "Receiving" time in Chrome's Network tab when receiving small payloads. Here is a webpage explaining the algorithms and how to disable them on Windows: http://smallvoid.com/article/winnt-nagle-algorithm.html
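If you control the server that emits these small responses, a hedged sketch of the sending-side fix is to disable Nagle's algorithm on your own sockets (delayed ACK, in contrast, is a receiver-side OS setting, typically the TcpAckFrequency registry value discussed in articles like the one above); the host below is a placeholder:

```python
import socket

# Sketch: opt a socket you control out of Nagle's algorithm so small writes
# (redirects, beacons) are sent immediately instead of being coalesced while
# waiting for an ACK of the previous segment.
sock = socket.create_connection(("example.com", 80))
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
sock.sendall(b"tiny payload")  # flushed right away
sock.close()
```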

Related

Webrtc behavior Nack & FEC

We have a WebRTC application with two peers, and I experience packet loss of around 5% (checked in webrtc-internals) while a call is ongoing. I see NACKs as well.
I want to know whether FEC is being used in my setup. I do see some SDP parameters related to FEC, as below, but I am not sure whether they are actually used.
How do I check whether WebRTC is using FEC?
a=rtpmap:124 red/90000
a=rtpmap:123 ulpfec/90000
Also, are there any suggestions on how to improve the packet loss percentage by tweaking NACKs, FEC, etc.?
I tried different bandwidths and resolutions and the packet loss is almost the same.
The easiest way to determine whether FEC is actually used is to run a packet capture with Wireshark or tcpdump and look for RTP packets whose payload type matches the values in the SDP (123 and 124 in your example). If you see such packets, you're seeing FEC.
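As a rough sketch of that check (assuming the 123/124 payload types from the question's SDP; in Wireshark the equivalent display filter would be rtp.p_type == 123 || rtp.p_type == 124):

```python
# Toy helper, not a full capture tool: given the raw bytes of an RTP packet,
# pull out the payload type and compare it with the FEC payload types that
# were negotiated in the SDP.
FEC_PAYLOAD_TYPES = {123, 124}  # from the a=rtpmap red/ulpfec lines above

def is_fec_packet(rtp_packet: bytes) -> bool:
    if len(rtp_packet) < 12:                # minimum RTP header size
        return False
    payload_type = rtp_packet[1] & 0x7F     # low 7 bits of the second header byte
    return payload_type in FEC_PAYLOAD_TYPES
```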
One thing to note: FEC can make packet loss worse in some cases, essentially where you have bursts of back-to-back packets lost because of congestion. FEC transmits additional packets, which allows any one or two lost packets in a group to be recovered from the extra packets, but that additional traffic also adds to the congestion.
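To illustrate only the recovery idea (ULPFEC is XOR-based, but the real RED/ULPFEC packet format is more involved than this toy), a single XOR parity packet over a group lets any one lost packet be rebuilt, while losing two packets from the same group defeats it:

```python
from functools import reduce

def xor_parity(packets):
    # XOR equal-length byte strings column by column to form a parity packet.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*packets))

group = [b"pkt1", b"pkt2", b"pkt3"]          # pretend media packets (equal length)
parity = xor_parity(group)                   # the "FEC" packet sent alongside them

lost = group[1]                              # suppose the middle packet is dropped
recovered = xor_parity([group[0], group[2], parity])
assert recovered == lost                     # one loss per group is recoverable
```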
Found the root cause of the packet loss. It was related to the setup on the network switches. We are using a dedicated leased line, and the leased line expects a fixed 100 Mbps full-duplex configuration instead of auto-negotiation on the switch ports. Because of auto-negotiation, the link went into half duplex, and hence the errors.

WebRTC native sends packets very slowly

I am working on a streaming server based on WebRTC native. For this project, I've hacked the WebRTC native source code (version M60, Win10 x64) to be able to feed it a pre-encoded H264 bitstream (1080p25, all frames encoded as I-frames). By default, WebRTC uses 42e01f as the H264 profile; I changed it to 640032 (level 5) to support 1080p. In h264_encoder_impl.cc, I commented out the encoding part, just copying the bytes from the input frame into the buffer of encoded_image_ and generating the fragmentation information.
It is working, but the speed of sending packets to the client (Chrome) is very slow (about 2-3 fps). If I limit the feeding speed to 12 fps, it works well.
I spent a lot of time debugging the code. What I found is that the speed of sending packets in paced_sender.cc is slow, so the packet queue soon becomes full, and then the encoder is blocked and stops putting new packets into the queue until the queue is no longer full. I tried removing the bitrate limitation in paced_sender.cc, but the sending speed is still slow.
I also checked the graphs on Chrome's WebRTC debugging page (chrome://webrtc-internals) to see whether the problem could be on the receiver side: decoding only costs about 2 ms per frame, the rate of receiving frames is about 2-3 fps, and no packets are lost.
PS. the LAN is 1 Gbps.
After debugging for days, I still have no idea why the speed of sending packets is so slow. The H264 bitstream is encoded as all I-frames; could that be a problem?
Any reply will be appreciated, thanks in advance!
Answering my own question: build WebRTC in release mode.

How do I handle packet loss when recording video peer to server via WebRTC

We are using the licode MCU to stream recorded video from Google Chrome to the server. There isn't a second instance of Google Chrome to handle the feedback and the server must do this.
One thing that we have encountered is that when there is packet loss, frames are dropped and the video gets out of sync. This causes very poor video quality.
In ExternalOutput.cpp there is a place where it detects that the sequence number of the current packet has not incremented monotonically. Here you can see that it drops the current frame and resets the search state.
I would like to know how to modify this so that it can recover from the packet loss. Is submitting a NACK packet for the current sequence number the solution? I've also read that there is a mode where Google Chrome sends RED (redundant) packets to deal with packet loss.
Media processing apps have two principally different layers:
Transport layer (RTP/RTCP)
Codec layer
The transport layer is codec independent and deals with RTP/generic RTCP packets. On this layer there are a couple of mechanisms for coping with packet loss/delay/reordering:
Jitter buffer (handles packet delays and reordering)
Generic RTCP feedback (notifies the source peer of packet loss)
On the codec layer there are also a couple of mechanisms for coping with quality degradation:
Codec-layer RTCP feedback
Forward error correction (FEC/RED)
To overcome Licode's imperfections you should:
1. First of all, it ignores any packet delays and reordering. So you should implement a mechanism (a jitter buffer) that will handle packet reordering/network jitter and detect packet loss (you could probably reuse the webrtc/freeswitch mechanisms).
2. When your app detects packet loss, send feedback (an RTCP NACK) to the remote peer; see the sketch after this list.
3. You should also try to handle ffmpeg (used for decoding the video and saving it to file) decoding errors, and send a FIR (Full Intra Request)/PLI to the remote peer to request a keyframe when errors occur.
Note that points 2 and 3 require proper explicit negotiation (via SDP).
Only after covering all these cases should you look at FEC/RED, because it is definitely more difficult to handle and implement.
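As a hedged sketch of what the RTCP NACK mentioned in step 2 looks like on the wire (a Generic NACK per RFC 4585: PT=205, FMT=1; the SSRCs and sequence number below are placeholders for your session's values):

```python
import struct

def build_generic_nack(sender_ssrc: int, media_ssrc: int, lost_seq: int) -> bytes:
    header_byte = 0x80 | 1      # version=2, padding=0, FMT=1 (Generic NACK)
    payload_type = 205          # RTPFB: transport-layer feedback
    length = 3                  # packet size in 32-bit words minus one (4 words total)
    pid = lost_seq & 0xFFFF     # first lost RTP sequence number
    blp = 0                     # bitmask of following lost packets (none in this sketch)
    return struct.pack("!BBHIIHH", header_byte, payload_type, length,
                       sender_ssrc, media_ssrc, pid, blp)

# Example: report sequence number 4242 as lost (all values are made up).
nack = build_generic_nack(sender_ssrc=0x11111111, media_ssrc=0x22222222, lost_seq=4242)
assert len(nack) == 16
```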

Understanding Websocket Frames in Chrome

When inspecting WebSocket frames via Chrome's debug console, is the length field measuring the payload in bytes?
Obviously, it's the length of the message. But each character is one byte, right? If that is true, is it safe to say from my screenshot that 56 and 53 bytes were sent?
Yes, the length reported in Chrome is the length of the payload in bytes.
There is some additional overhead in the message itself beyond what the payload length reports (both WebSocket frame overhead and TCP/IP overhead, though WebSocket framing is fairly efficient). You can see the WebSocket frame format here.
In your screenshot, 53 and 56 bytes of message payload were sent, but something a little larger than that went over the actual wire. You could count the characters in the data it reports was sent, and that length should match the reported length. Keep in mind that TCP is a reliable protocol, so there is extra TCP/IP traffic related to the reliable delivery of any packet, including ACKs sent back to confirm delivery, unique packet numbers, etc., but that extra data is relatively small.
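As a back-of-the-envelope sketch of that framing overhead (RFC 6455, ignoring the TCP/IP headers entirely): a 2-byte base header, an extended length field once the payload reaches 126 bytes, and a 4-byte masking key on client-to-server frames.

```python
def websocket_frame_size(payload_len: int, from_client: bool) -> int:
    overhead = 2                                       # base header: flags/opcode + length
    if payload_len >= 126:
        overhead += 2 if payload_len <= 0xFFFF else 8  # extended payload length field
    if from_client:
        overhead += 4                                  # browsers always mask outgoing frames
    return overhead + payload_len

print(websocket_frame_size(53, from_client=True))   # 59 bytes on the wire before TCP/IP
print(websocket_frame_size(56, from_client=False))  # 58 bytes for a server-sent frame
```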

Considerable delays between calling URLLoader, and seeing bytes on the wire

Context: An AS3 application using a BOSH solution to asynchronously send and receive messages from a server.
I am endeavouring to find the source of a delay between calls to URLLoader.load(), and observing the HTTP request on the wire with Wireshark. The delay is unpredictable and can be 30s or more. A reliable reproduction has not been possible.
I have been able to eliminate:
Network congestion: there is no TCP retransmission, no part of the journey to the server crosses a network that is the Internet or otherwise likely to suffer congestion.
Exceeding the HTTP transaction limit. At most there are two concurrent HTTP transactions.
Has anyone any wisdom they can share?