I have a large web application. For the past two weeks it has been showing a "Site can't be reached" error on random pages. The error appears for a second, then the page auto-refreshes and the data loads.
I captured a trace with the NetLog tool and found the following in the NetLog viewer. It shows a net_error -100 and a net_error -383. Any idea what is causing this?
t=31983 [st= 2] HTTP_TRANSACTION_RESTART_AFTER_ERROR
--> net_error = -100 (ERR_CONNECTION_CLOSED)
t=31983 [st= 2] +HTTP_STREAM_REQUEST [dt=1086]
t=31983 [st= 2] HTTP_STREAM_JOB_CONTROLLER_BOUND
--> source_dependency = 25583 (HTTP_STREAM_JOB_CONTROLLER)
t=33069 [st=1088] HTTP_STREAM_REQUEST_BOUND_TO_JOB
--> source_dependency = 25584 (HTTP_STREAM_JOB)
t=33069 [st=1088] -HTTP_STREAM_REQUEST
t=33069 [st=1088] +URL_REQUEST_DELEGATE_CONNECTED [dt=0]
t=33069 [st=1088] PRIVATE_NETWORK_ACCESS_CHECK
--> client_address_space = "unknown"
--> resource_address_space = "public"
--> result = "blocked-by-inconsistent-ip-address-space"
t=33069 [st=1088] -URL_REQUEST_DELEGATE_CONNECTED
--> net_error = -383 (ERR_INCONSISTENT_IP_ADDRESS_SPACE)
t=33069 [st=1088] -URL_REQUEST_START_JOB
--> net_error = -383 (ERR_INCONSISTENT_IP_ADDRESS_SPACE)
t=33069 [st=1088] URL_REQUEST_DELEGATE_RESPONSE_STARTED [dt=0]
t=33069 [st=1088] -CORS_REQUEST
Posting the solution that fixed it for me.
I think the root cause was that my servers were in North America while the users were in Asia. The long-haul connections were getting dropped (the ERR_CONNECTION_CLOSED), and when Chrome restarted the transaction, its Private Network Access check apparently saw an IP address space inconsistent with the earlier attempt, hence the ERR_INCONSISTENT_IP_ADDRESS_SPACE. I switched to a server provider in Asia and that solved the issue.
We run some websites that are mapped directly to GCS, so website.com is backed by a bucket named website.com that holds all the HTML pages and static files.
Just yesterday we suddenly had a page with a significant number of .svg image files (around 50, all very small, under 2 KB), plus a bunch (say 10-15) of .png images.
Of those, all the .svg files and only 2 of the .png files fail to load with the error:
net::ERR_HTTP2_PROTOCOL_ERROR 200.
It works FINE in Firefox and MS Edge; it fails ONLY in Chrome.
I have searched, and most reported cases came down to bad headers or nginx settings, but this is served directly from GCS buckets, so we don't control any of that.
I captured a NetLog in Chrome:
t= 5285 [st= 700] -HTTP_TRANSACTION_READ_HEADERS
t= 5286 [st= 701] HTTP_CACHE_WRITE_INFO [dt=0]
t= 5286 [st= 701] HTTP_CACHE_WRITE_DATA [dt=0]
t= 5286 [st= 701] +NETWORK_DELEGATE_HEADERS_RECEIVED [dt=12]
t= 5289 [st= 704] HTTP2_STREAM_UPDATE_RECV_WINDOW
--> delta = -543
--> stream_id = 41
--> window_size = 6290913
t= 5298 [st= 713] -NETWORK_DELEGATE_HEADERS_RECEIVED
t= 5298 [st= 713] -URL_REQUEST_START_JOB
t= 5298 [st= 713] URL_REQUEST_DELEGATE_RESPONSE_STARTED [dt=0]
t= 5298 [st= 713] +HTTP_TRANSACTION_READ_BODY [dt=0]
t= 5298 [st= 713] HTTP2_STREAM_UPDATE_RECV_WINDOW
--> delta = 543
--> stream_id = 41
--> window_size = 6291456
t= 5298 [st= 713] -HTTP_TRANSACTION_READ_BODY
t= 5298 [st= 713] URL_REQUEST_JOB_FILTERED_BYTES_READ
--> byte_count = 543
t= 5298 [st= 713] +HTTP_TRANSACTION_READ_BODY [dt=60021]
t=65319 [st=60734] HTTP2_STREAM_ERROR
--> description = "Server reset stream."
--> net_error = "ERR_HTTP2_PROTOCOL_ERROR"
--> stream_id = 41
t=65319 [st=60734] -HTTP_TRANSACTION_READ_BODY
--> net_error = -337 (ERR_HTTP2_PROTOCOL_ERROR)
t=65319 [st=60734] FAILED
--> net_error = -337 (ERR_HTTP2_PROTOCOL_ERROR)
t=65347 [st=60762] -CORS_REQUEST
t=65347 [st=60762] -REQUEST_ALIVE
--> net_error = -337 (ERR_HTTP2_PROTOCOL_ERROR)
(There's more, but I figured this was the part that matters; if not, I can provide the whole log.)
Any help to solve this mystery would be much appreciated!
Thanks
The images stored in Cloud Storage are fine and are not the source of the Chrome error. The problem is caused by your JavaScript. There is another issue in that your page performs cross-site actions that Chrome is blocking. The two issues might be related.
Ask the developer that wrote the code to debug and correct this problem.
In summary, this is not a Chrome bug. The issue might be caused by Chrome taking action against your page's behavior. The end result is you must fix your application. The same problem exists in Edge 102.
[UPDATE]
The actual problem is the HTTP response header x-goog-meta-link. That header (custom object metadata) is 7,461 bytes, which pushed the combined response headers past 8 KB, and that is what causes the error.
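If you want to confirm this from the outside, a HEAD request against one of the failing objects shows the response header sizes. Here is a minimal sketch in PHP; the object URL is a placeholder, so substitute your real bucket path:
<?php
// Hypothetical check: HEAD a failing object and measure its response headers.
$url = 'https://storage.googleapis.com/website.com/images/example.svg'; // placeholder
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_NOBODY         => true, // HEAD request: headers only, no body
    CURLOPT_HEADER         => true, // include headers in the returned string
    CURLOPT_RETURNTRANSFER => true,
));
$headers = curl_exec($ch);
curl_close($ch);
printf("Total header size: %d bytes\n", strlen($headers));
// Flag any single header that is suspiciously large (e.g. x-goog-meta-*).
foreach (explode("\r\n", $headers) as $line) {
    if (strlen($line) > 1024) {
        printf("%d bytes: %.60s...\n", strlen($line), $line);
    }
}
Once the oversized x-goog-meta-* entry is identified, removing or shortening that custom metadata on the objects brings the combined headers back under the limit.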
Having updated Karate from 0.6.2 to 0.9.5 recently, I've had a number of ReferenceErrors related to the properties.json file I use throughout my test cases.
I have the following setup:
test-properties.json
{
"headers": {
"x-client-ip": "192.168.3.1",
"x-forwarded-for": "192.168.3.1"
}
}
test-auth.feature
Background:
* def props = read('properties/test-properties.json')
I then use props further down in my first scenario:
And header User-Agent = props.headers.Accept-Language
And header X-Forwarded-For = props.headers.x-forwarded-for
However, when running this I get the following issue:
com.intuit.karate.exception.KarateException: test-auth.feature:14 - javascript evaluation failed: props.headers.Accept-Language, ReferenceError: "Language" is not defined in <eval> at line number 1
I've tried putting the properties file in the same package as test-auth.feature, to no avail. The issue seems to be with reading the JSON file. I'm aware that Karate 0.6.2 could detect the file type and parse it into its native format. Is this still the case? If not, what is the solution for reading properties.json in Karate 0.9.5?
Nothing should have changed when it comes to reading JSON files. Karate evaluates the RHS as JavaScript, and in JavaScript props.headers.Accept-Language parses as the subtraction Accept - Language, which is why the error says "Language" is not defined. Use bracket notation instead:
And header User-Agent = props.headers['Accept-Language']
And header X-Forwarded-For = props.headers['x-forwarded-for']
EDIT: this works for me:
* def props = { headers: { 'Accept-Language': 'foo', 'x-forwarded-for': 'bar' } }
* url 'http://httpbin.org/headers'
* header User-Agent = props.headers['Accept-Language']
* header X-Forwarded-For = props.headers['x-forwarded-for']
* method get
Resulting in:
1 > GET http://httpbin.org/headers
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Host: httpbin.org
1 > User-Agent: foo
1 > X-Forwarded-For: bar
So if you are still stuck, please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
I am generating a self-signed certificate which I have added to my keychain and I can see that it is used by Firefox and Chrome.
But when I visit https://world.localhost, the browser says the certificate is invalid because it was issued for localhost. All the domains below are included in the certificate. When I change their order, the browser seems to respect only the topmost entry (DNS.1) when matching against the requested domain, yet all the domains are present when I view the certificate (through the browser).
What is wrong in this case?
[ req ]
default_bits = 2048
default_keyfile = dev.cert.key
distinguished_name = subject
#req_extensions = req_ext
req_extensions = v3_req
x509_extensions = x509_ext
string_mask = utf8only
[ x509_ext ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
[ v3_req ]
subjectKeyIdentifier = hash
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
subjectAltName = @alternate_names
[ alternate_names ]
DNS.1 = localhost
DNS.2 = *.localhost
DNS.3 = *.test.localhost
DNS.4 = *.www.localhost
DNS.5 = *.api.localhost
The reason this is not working as expected is that the browser treats the right-most label as a top-level domain.
Wildcard certificates like *.com are not valid, and neither is *.localhost.
When I put another label in between, the whole thing works:
*.domain.localhost matches www.domain.localhost and is valid.
*.localhost matches www.localhost but is not valid.
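If you want to verify what actually ended up in the certificate without relying on the browser UI, you can parse it programmatically. A minimal sketch in PHP; the dev.cert.pem filename is an assumption, so use whatever file your config produced:
<?php
// Hypothetical check: print the SAN list of the generated certificate.
$info = openssl_x509_parse(file_get_contents('dev.cert.pem')); // placeholder filename
echo $info['extensions']['subjectAltName'] ?? 'no SAN extension', "\n";
// Expected output:
// DNS:localhost, DNS:*.localhost, DNS:*.test.localhost, DNS:*.www.localhost, DNS:*.api.localhost
If *.localhost is listed there but world.localhost is still rejected, that confirms the problem is the browser's wildcard/TLD matching rule, not the certificate contents.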
I'm trying to modify a custom web server app to work with HTML5 video.
It serves an HTML5 page with a basic <video> tag, and then it needs to handle the requests for the actual content.
The only way I have gotten it to work so far is to load the entire video file into memory and send it back in a single response, which is not a practical option. I want to serve it piece by piece: send back, say, 100 KB, and wait for the browser to request more.
I see a request with the following headers:
http_version = 1.1
request_method = GET
Host = ###.###.###.###:##
User-Agent = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:16.0) Gecko/20100101 Firefox/16.0
Accept = video/webm,video/ogg,video/*;q=0.9,application/ogg;q=0.7,audio/*;q=0.6,*/*;q=0.5
Accept-Language = en-US,en;q=0.5
Connection = keep-alive
Range = bytes=0-
I tried to send back a partial content response:
HTTP/1.1 206 Partial content
Content-Type: video/mp4
Content-Range: bytes 0-99999 / 232725251
Content-Length: 100000
I then get a few more GET requests with the following headers:
Cache-Control = no-cache
Connection = Keep-Alive
Pragma = getIfoFileURI.dlna.org
Accept = */*
User-Agent = NSPlayer/12.00.7601.17514 WMFSDK/12.00.7601.17514
GetContentFeatures.DLNA.ORG = 1
Host = ###.###.###.###:##
(with no indication that the browser wants any specific part of the file). No matter what I send back, the video does not play.
As stated above, the same video plays correctly if I send the entire 230 MB file at once in a single response.
Is there any way to get this all working nicely through partial content requests? I'm using Firefox for testing purposes, but it needs to work with all browsers eventually.
I know this is an old question, but if it helps, you can try the following "Model" that we use in our code base.
class Model_DownloadableFile {

    private $full_path;

    function __construct($full_path) {
        $this->full_path = $full_path;
    }

    public function get_full_path() {
        return $this->full_path;
    }

    // Function borrowed from (been cleaned up and modified slightly): http://stackoverflow.com/questions/157318/resumable-downloads-when-using-php-to-send-the-file/4451376#4451376
    // Allows for resuming paused downloads etc
    public function download_file_in_browser() {
        // Avoid sending unexpected errors to the client - we should be serving a file,
        // we don't want to corrupt the data we send
        #error_reporting(0);

        // Make sure the file exists, otherwise we are wasting our time
        if (!file_exists($this->full_path)) {
            header('HTTP/1.1 404 Not Found');
            exit;
        }

        // Get the 'Range' header if one was sent
        if (isset($_SERVER['HTTP_RANGE'])) {
            $range = $_SERVER['HTTP_RANGE']; // IIS/Some Apache versions
        } else if ($apache = apache_request_headers()) { // Try Apache again
            $headers = array();
            foreach ($apache as $header => $val) {
                $headers[strtolower($header)] = $val;
            }
            if (isset($headers['range'])) {
                $range = $headers['range'];
            } else {
                $range = false; // We can't get the header/there isn't one set
            }
        } else {
            $range = false; // We can't get the header/there isn't one set
        }

        // Get the data range requested (if any)
        $filesize = filesize($this->full_path);
        $length = $filesize;
        if ($range) {
            $partial = true;
            list($param, $range) = explode('=', $range);
            if (strtolower(trim($param)) != 'bytes') { // Bad request - range unit is not 'bytes'
                header("HTTP/1.1 400 Invalid Request");
                exit;
            }
            $range = explode(',', $range);
            $range = explode('-', $range[0]); // We only deal with the first requested range
            if (count($range) != 2) { // Bad request - 'bytes' parameter is not valid
                header("HTTP/1.1 400 Invalid Request");
                exit;
            }
            if ($range[0] === '') { // First number missing, return last $range[1] bytes
                $end = $filesize - 1;
                $start = $filesize - intval($range[1]);
            } else if ($range[1] === '') { // Second number missing, return from byte $range[0] to end
                $start = intval($range[0]);
                $end = $filesize - 1;
            } else { // Both numbers present, return specific range
                $start = intval($range[0]);
                $end = intval($range[1]);
                if ($end >= $filesize || (!$start && (!$end || $end == ($filesize - 1)))) {
                    $partial = false; // Invalid range/whole file specified, return whole file
                }
            }
            $length = $end - $start + 1;
        } else {
            $partial = false; // No range requested
        }

        // Determine the content type
        $finfo = finfo_open(FILEINFO_MIME_TYPE);
        $contenttype = finfo_file($finfo, $this->full_path);
        finfo_close($finfo);

        // Send standard headers
        header("Content-Type: $contenttype");
        header("Content-Length: $length");
        header('Content-Disposition: attachment; filename="' . basename($this->full_path) . '"');
        header('Accept-Ranges: bytes');

        // If requested, send extra headers and part of file...
        if ($partial) {
            header('HTTP/1.1 206 Partial Content');
            header("Content-Range: bytes $start-$end/$filesize");
            if (!$fp = fopen($this->full_path, 'rb')) { // Error out if we can't read the file (binary-safe mode)
                header("HTTP/1.1 500 Internal Server Error");
                exit;
            }
            if ($start) {
                fseek($fp, $start);
            }
            while ($length) { // Read in blocks of 8KB so we don't chew up memory on the server
                $read = ($length > 8192) ? 8192 : $length;
                $length -= $read;
                print(fread($fp, $read));
            }
            fclose($fp);
        } else {
            readfile($this->full_path); // ...otherwise just send the whole file
        }

        // Exit here to avoid accidentally sending extra content on the end of the file
        exit;
    }
}
You then use it like this:
(new Model_DownloadableFile('FULL/PATH/TO/FILE'))->download_file_in_browser();
It will deal with sending part of the file or the full file, etc., and it works well for us in this and lots of other situations. Hope it helps.
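To sanity-check the endpoint, you can replay the browser's Range request yourself. A minimal sketch, assuming the script above is reachable at the placeholder URL below:
<?php
// Hypothetical smoke test: request the first 100,000 bytes and verify
// that we get 206 Partial Content back with a body of the expected size.
$ch = curl_init('http://localhost/download.php'); // placeholder URL
curl_setopt_array($ch, array(
    CURLOPT_RANGE          => '0-99999', // sends "Range: bytes=0-99999"
    CURLOPT_RETURNTRANSFER => true,
));
$body = curl_exec($ch);
printf("HTTP %d, %d bytes received\n",
    curl_getinfo($ch, CURLINFO_RESPONSE_CODE), strlen($body));
curl_close($ch);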
I want partial range requests because I'll be doing real-time transcoding; I can't have the file completely transcoded and available at the time of the request.
For a response whose full body content you don't know yet (so you can't compute Content-Length, e.g. live encoding), use chunked transfer encoding:
HTTP/1.1 200 OK
Content-Type: video/mp4
Transfer-Encoding: chunked
Trailer: Expires
1E; 1st chunk
...binary....data...chunk1..my
24; 2nd chunk
video..binary....data....chunk2..con
22; 3rd chunk
tent...binary....data....chunk3..a
2A; 4th chunk
nd...binary......data......chunk4...etc...
0
Expires: Wed, 21 Oct 2015 07:28:00 GMT
Each chunk is sent as soon as it's available: when a few frames have been encoded, when the output buffer is full, when 100 kB have been generated, etc.
22; 3rd chunk
tent...binary....data....chunk3..a
Here 22 gives the chunk's byte length in hex (0x22 = 34 bytes),
; 3rd chunk is optional extra chunk info, and tent...binary....data....chunk3..a is the content of the chunk.
Then, when the encoding is finished and all chunks have been sent, end with:
0
Expires: Wed, 21 Oct 2015 07:28:00 GMT
Here 0 means there are no more chunks, followed by zero or more trailers (header fields announced in the Trailer header; Trailer: Expires and Expires: Wed, 21 Oct 2015 07:28:00 GMT are not required), which can carry checksums, digital signatures, etc.
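If your custom server writes straight to the socket, this framing is easy to produce by hand. A minimal sketch in PHP, assuming $sock is a connected stream socket your server manages itself (a normal web server SAPI does this framing for you):
<?php
// Write one chunk: <size in hex>\r\n<data>\r\n
function write_chunk($sock, $data) {
    fwrite($sock, dechex(strlen($data)) . "\r\n" . $data . "\r\n");
}

// Terminate the stream: a zero-size chunk, optional trailers, then a blank line.
function end_chunks($sock) {
    fwrite($sock, "0\r\n");
    fwrite($sock, "Expires: Wed, 21 Oct 2015 07:28:00 GMT\r\n"); // optional trailer
    fwrite($sock, "\r\n");
}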
Here is the equivalent of the server's response if the file was already generated (no live encoding):
HTTP/1.1 200 OK
Content-Type: video/mp4
Content-Length: 142
Expires: Wed, 21 Oct 2015 07:28:00 GMT
...binary....data...chunk1..myvideo..binary....data....chunk2..content...binary....data....chunk3..and...binary......data......chunk4...etc...
For more information: Chunked transfer encoding — Wikipedia, Trailer - HTTP | MDN