GCS HTTP2_STREAM_ERROR - net::ERR_HTTP2_PROTOCOL_ERROR 200 - google-chrome

We run some websites that are mapped directly to GCS, so website.com has a bucket called website.com containing all the HTML pages and static files.
Just yesterday we suddenly had a page with a significant number of .svg image files (roughly 50, but all very small, under 2 KB), plus a bunch (say 10-15) of .png images.
Of those, all the .svg files and only 2 of the .png files fail to load with the error:
net::ERR_HTTP2_PROTOCOL_ERROR 200.
It works FINE in Firefox and MS Edge; it fails ONLY in Chrome.
I have searched, and most reported cases come down to bad headers or nginx settings, but this is served directly from GCS buckets, so we do not control any of that.
I ran network log via Chrome:
t= 5285 [st= 700] -HTTP_TRANSACTION_READ_HEADERS
t= 5286 [st= 701] HTTP_CACHE_WRITE_INFO [dt=0]
t= 5286 [st= 701] HTTP_CACHE_WRITE_DATA [dt=0]
t= 5286 [st= 701] +NETWORK_DELEGATE_HEADERS_RECEIVED [dt=12]
t= 5289 [st= 704] HTTP2_STREAM_UPDATE_RECV_WINDOW
--> delta = -543
--> stream_id = 41
--> window_size = 6290913
t= 5298 [st= 713] -NETWORK_DELEGATE_HEADERS_RECEIVED
t= 5298 [st= 713] -URL_REQUEST_START_JOB
t= 5298 [st= 713] URL_REQUEST_DELEGATE_RESPONSE_STARTED [dt=0]
t= 5298 [st= 713] +HTTP_TRANSACTION_READ_BODY [dt=0]
t= 5298 [st= 713] HTTP2_STREAM_UPDATE_RECV_WINDOW
--> delta = 543
--> stream_id = 41
--> window_size = 6291456
t= 5298 [st= 713] -HTTP_TRANSACTION_READ_BODY
t= 5298 [st= 713] URL_REQUEST_JOB_FILTERED_BYTES_READ
--> byte_count = 543
t= 5298 [st= 713] +HTTP_TRANSACTION_READ_BODY [dt=60021]
t=65319 [st=60734] HTTP2_STREAM_ERROR
--> description = "Server reset stream."
--> net_error = "ERR_HTTP2_PROTOCOL_ERROR"
--> stream_id = 41
t=65319 [st=60734] -HTTP_TRANSACTION_READ_BODY
--> net_error = -337 (ERR_HTTP2_PROTOCOL_ERROR)
t=65319 [st=60734] FAILED
--> net_error = -337 (ERR_HTTP2_PROTOCOL_ERROR)
t=65347 [st=60762] -CORS_REQUEST
t=65347 [st=60762] -REQUEST_ALIVE
--> net_error = -337 (ERR_HTTP2_PROTOCOL_ERROR)
(There's more, but I figured this was the part that matters; if not, I can provide the whole log.)
Any help solving this mystery would be much appreciated!
Thanks

The images stored in Cloud Storage are fine and are not the source of the Chrome error. The problem is caused by your JavaScript. There is a second issue: your page performs cross-site actions that Chrome is blocking. The two issues might be related.
Ask the developer who wrote the code to debug and correct this problem.
In summary, this is not a Chrome bug. The issue might be caused by Chrome taking action against your page's behavior. The end result is that you must fix your application. The same problem exists in Edge 102.
[UPDATE]
The actual problem is the HTTP header x-goog-meta-link. That single piece of metadata is 7,461 bytes, which pushed the combined HTTP response headers over 8 KB. That is the cause of the problem.
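For anyone debugging the same symptom, the culprit can be spotted by sizing each response-header line against an 8 KB budget. A minimal sketch (the helper function and the 8 KB figure are assumptions drawn from this case, not a documented GCS or Chrome API):

```python
# Illustrative check of HTTP response-header sizes against an ~8 KB budget.
def header_line_bytes(name, value):
    # bytes for one "Name: value\r\n" line as sent on the wire
    return len(name) + len(value) + 4

# Modeled on the case above: a 7,461-byte x-goog-meta-link value
link_line = header_line_bytes("x-goog-meta-link", "x" * 7461)
budget_left = 8 * 1024 - link_line
```

Here the single metadata header leaves only about 700 bytes for everything else GCS sends (Content-Type, ETag, the x-goog-* headers, and so on), so the combined headers overflow the budget.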

Related

Getting "Site can't be reached" on random pages in Chrome and Edge but not in Firefox

I have a large web application. For the past two weeks, it has been showing a "Site can't be reached" error on random pages. The error appears for a second, then the page auto-refreshes and the data loads.
I did an analysis using the netlog tool and found the following in the net log viewer. It shows a net_error -100 and a net_error -383. Any idea what is causing this?
t=31983 [st= 2] HTTP_TRANSACTION_RESTART_AFTER_ERROR
--> net_error = -100 (ERR_CONNECTION_CLOSED)
t=31983 [st= 2] +HTTP_STREAM_REQUEST [dt=1086]
t=31983 [st= 2] HTTP_STREAM_JOB_CONTROLLER_BOUND
--> source_dependency = 25583 (HTTP_STREAM_JOB_CONTROLLER)
t=33069 [st=1088] HTTP_STREAM_REQUEST_BOUND_TO_JOB
--> source_dependency = 25584 (HTTP_STREAM_JOB)
t=33069 [st=1088] -HTTP_STREAM_REQUEST
t=33069 [st=1088] +URL_REQUEST_DELEGATE_CONNECTED [dt=0]
t=33069 [st=1088] PRIVATE_NETWORK_ACCESS_CHECK
--> client_address_space = "unknown"
--> resource_address_space = "public"
--> result = "blocked-by-inconsistent-ip-address-space"
t=33069 [st=1088] -URL_REQUEST_DELEGATE_CONNECTED
--> net_error = -383 (ERR_INCONSISTENT_IP_ADDRESS_SPACE)
t=33069 [st=1088] -URL_REQUEST_START_JOB
--> net_error = -383 (ERR_INCONSISTENT_IP_ADDRESS_SPACE)
t=33069 [st=1088] URL_REQUEST_DELEGATE_RESPONSE_STARTED [dt=0]
t=33069 [st=1088] -CORS_REQUEST
Posting the solution here that resolved the issue.
I think the problem was that my servers were in North America while the users were in Asia. I switched to another server provider in Asia and that solved the issue.

chrome not including 3rd-party cookies in requests

The IT department at the company I work for recently upgraded me to Chrome 91.0.4472.124, and now I am having a problem with an application I'm responsible for.
Third-party cookies are not being included in cross-domain requests, despite correct SameSite settings. Here is the entire cookie:
JSESSIONID=F4C3A14243123754355B3A6645AFFECD; Max-Age=43200; Expires=Sun, 04-Jul-2021 05:54:55 GMT; Path=/; Secure; HttpOnly; SameSite=None
I've captured the following NetLog trace (actual URLs redacted) using Chrome's NetLog tool and can see a DO_NOT_SAVE_COOKIES flag being set for my cross-site requests:
t=15002 [st= 0] +REQUEST_ALIVE [dt=28]
--> priority = "MEDIUM"
--> traffic_annotation = 101845102
--> url = "https://<siteA><url>"
t=15002 [st= 0] NETWORK_DELEGATE_BEFORE_URL_REQUEST [dt=0]
t=15002 [st= 0] +URL_REQUEST_START_JOB [dt=27]
--> initiator = "https://<siteB>"
--> load_flags = 16448 (DO_NOT_SAVE_COOKIES | SUPPORT_ASYNC_REVALIDATION)
--> method = "GET"
--> network_isolation_key = "https://<domain> https://<domain>"
--> privacy_mode = "enabled"
--> request_type = "other"
--> site_for_cookies = "SiteForCookies: {site=https://<domain>; schemefully_same=true}"
--> url = "https://<siteA><url>"
Why is DO_NOT_SAVE_COOKIES being set? Is that why cookies are not being included in requests?
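For reference, the attribute combination Chrome requires for cross-site cookies (Secure plus SameSite=None) is exactly what the cookie above carries; it can be reproduced with Python's standard library. A sketch only, using the placeholder value from the post (requires Python 3.8+ for the samesite attribute):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header with the cross-site attributes shown above
cookie = SimpleCookie()
cookie["JSESSIONID"] = "F4C3A14243123754355B3A6645AFFECD"
morsel = cookie["JSESSIONID"]
morsel["path"] = "/"
morsel["max-age"] = 43200
morsel["secure"] = True
morsel["httponly"] = True
morsel["samesite"] = "None"

header = cookie.output(header="Set-Cookie:")
```

If a cookie with these attributes is still being dropped, the cause is on the browser side (e.g. a third-party-cookie policy) rather than in the Set-Cookie header itself.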

Keras --- About Masking Layer followed by a Reshape Layer

I want to apply a mask before the LSTM, but the output of the LSTM must be reshaped to 4 dimensions.
So my code is:
from keras.layers import Input, BatchNormalization, Masking, LSTM, Reshape

main_input = Input(shape=(96, 1000), name='main_input')
pre_input = BatchNormalization()(main_input)
aaa = Masking(mask_value=0)(pre_input)
recurrent1 = LSTM(256, return_sequences=True)(aaa)
r_out = Reshape((1, 96, 256))(recurrent1)
But it fails with this error:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-2-d1107015501b> in <module>()
17 recurrent1 = LSTM(256,return_sequences=True)(aaa)
18
---> 19 r_out= Reshape((1,96,256))(recurrent1)
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.pyc in __call__(self, x, mask)
512 if inbound_layers:
513 # this will call layer.build() if necessary
--> 514 self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
515 input_added = True
516
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.pyc in add_inbound_node(self, inbound_layers, node_indices, tensor_indices)
570 # creating the node automatically updates self.inbound_nodes
571 # as well as outbound_nodes on inbound layers.
--> 572 Node.create_node(self, inbound_layers, node_indices, tensor_indices)
573
574 def get_output_shape_for(self, input_shape):
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.pyc in create_node(cls, outbound_layer, inbound_layers, node_indices, tensor_indices)
148 if len(input_tensors) == 1:
149 output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
--> 150 output_masks = to_list(outbound_layer.compute_mask(input_tensors[0], input_masks[0]))
151 # TODO: try to auto-infer shape if exception is raised by get_output_shape_for
152 output_shapes = to_list(outbound_layer.get_output_shape_for(input_shapes[0]))
/usr/local/lib/python2.7/dist-packages/keras/engine/topology.pyc in compute_mask(self, input, input_mask)
605 else:
606 raise Exception('Layer ' + self.name + ' does not support masking, ' +
--> 607 'but was passed an input_mask: ' + str(input_mask))
608 # masking not explicitly supported: return None as mask
609 return None
Exception: Layer reshape_1 does not support masking, but was passed an input_mask: Any{2}.0
I printed the shapes: the output shape of recurrent1 is (96, 256).
How can I make this work?
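What the traceback is saying, sketched in plain Python (shapes only, not Keras code): Masking produces one boolean per timestep, LSTM with return_sequences=True forwards that mask unchanged, and Reshape rejects any incoming mask because there is no general rule for carrying a per-timestep mask across an arbitrary reshape.

```python
from math import prod

batch, timesteps, units = 2, 96, 256

# Masking(mask_value=0) emits a (batch, timesteps) boolean mask
mask_shape = (batch, timesteps)

# LSTM(..., return_sequences=True) outputs (batch, timesteps, units)
# and forwards the mask unchanged to the next layer
lstm_out_shape = (batch, timesteps, units)

# Reshape((1, 96, 256)) targets (batch, 1, timesteps, units). The data
# itself reshapes cleanly -- the element counts match...
target_shape = (batch, 1, timesteps, units)
assert prod(lstm_out_shape) == prod(target_shape)

# ...but there is no defined way to map the (batch, timesteps) mask into
# the new layout, so Keras raises instead of guessing.
```

A common workaround (an assumption here, not from the post) is to stop mask propagation just before the Reshape, for example with a small custom layer whose compute_mask returns None.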

Error reading a JSON file into Pandas

I am trying to read a JSON file into Pandas. It's a relatively large file (41k records), mostly text.
{"sanders": [
    {"date": "February 8, 2016 Monday",
     "source": "Federal News Service",
     "subsource": "MSNBC \"MSNBC Live\" Interview with Sen. Bernie Sanders (I-VT), Democratic",
     "quotes": ["Well, it's not very progressive to take millions of dollars from Wall Street as well.",
                "That's a very good question, and I wish I could give her a definitive answer. QUOTE SHORTENED FOR SPACE"]},
    {"date": "February 7, 2016 Sunday",
     "source": "CBS News Transcripts",
     "subsource": "SHOW: CBS FACE THE NATION 10:30 AM EST",
     "quotes": ["Well, John -- John, I think that`s a media narrative that goes around and around and around. I don`t accept that media narrative.",
                "Well, that`s what she said about Barack Obama in 2008. "]},
I tried:
quotes = pd.read_json("/quotes.json")
I expected it to read in cleanly because the file was created in Python. However, I got this error:
ValueError Traceback (most recent call last)
<ipython-input-19-c1acfdf0dbc6> in <module>()
----> 1 quotes = pd.read_json("/Users/kate/Documents/99Antennas/Client\
Files/Fusion/data/quotes.json")
/Users/kate/venv/lib/python2.7/site-packages/pandas/io/json.pyc in
read_json(path_or_buf, orient, typ, dtype, convert_axes, convert_dates,
keep_default_dates, numpy, precise_float, date_unit)
208 obj = FrameParser(json, orient, dtype, convert_axes,
convert_dates,
209 keep_default_dates, numpy, precise_float,
--> 210 date_unit).parse()
211
212 if typ == 'series' or obj is None:
/Users/kate/venv/lib/python2.7/site-packages/pandas/io/json.pyc in parse(self)
276
277 else:
--> 278 self._parse_no_numpy()
279
280 if self.obj is None:
/Users/kate/venv/lib/python2.7/site-packages/pandas/io/json.pyc in _parse_no_numpy(self)
493 if orient == "columns":
494 self.obj = DataFrame(
--> 495 loads(json, precise_float=self.precise_float),
dtype=None)
496 elif orient == "split":
497 decoded = dict((str(k), v)
ValueError: Expected object or value
After reading the documentation and Stack Overflow, I also tried adding convert_dates=False to the parameters, but that did not fix the problem. I would welcome suggestions on how to handle this error.
Try removing the leading forward slash from the filename. If you run this Python code from the same directory where the file sits, it should work.
quotes = pd.read_json("quotes.json")
SPKoder mentioned the forward slash. I was looking for an answer when I realized I hadn't added a / when combining the filename and path (i.e. c:/path/herefile.json instead of c:/path/here/file.json). Anyway, the error I received was...
ValueError: Expected object or value
Not a very intuitive error message, but that is what caused it.
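The missing-separator mistake described above is easy to rule out by letting the standard library join the path pieces. A small sketch with made-up paths (posixpath is used so the example behaves the same on any OS):

```python
import posixpath

directory = "c:/path/here"
filename = "quotes.json"

# Plain concatenation silently drops the separator...
wrong = directory + filename
# ...while a join function inserts it for you
right = posixpath.join(directory, filename)
```

If read_json still fails with "Expected object or value" after this, the path at least is no longer the suspect and the file contents are worth checking next.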

MapServer SOS (Sensor Observation Service) Configuration

I tried to set up MapServer SOS, but I ran into a problem: the SOS doesn't return anything. The map file I created is below:
MAP
  NAME "SOS_DEMO"
  STATUS ON
  SIZE 400 300
  EXTENT -180 -90 180 90
  UNITS METERS
  SHAPEPATH "C:\ms4w\apps\tutorial\data"
  IMAGECOLOR 255 255 255
  WEB
    IMAGEPATH "C:\ms4w\apps\tutorial\templates"
    IMAGEURL "C:\ms4w\apps\tutorial\images"
    METADATA
      "sos_onlineresource" "http://127.0.0.1:8282/cgi-bin/mapserv.exe?map=c:/ms4w/mysos.map?"
      "sos_title" "My SOS Demo Server"
      "sos_srs" "EPSG:4326"
      "sos_enable_request" "*"
    END
  END
  PROJECTION
    "init=epsg:4326"
  END
  LAYER
    NAME "sos_point"
    METADATA
      "sos_procedure" "ifgi-sensor-1"
      "sos_offering_id" "WQ1289"
      "sos_observedproperty_id" "Water Quality"
      "sos_describesensor_url" "http://127.0.0.1:8181/DescribeSensor.xml"
    END
    TYPE POINT
    STATUS ON
    DATA 'sospoint'
    PROJECTION
      "init=epsg:4326"
    END
    CLASS
      NAME 'sospoint'
      STYLE
        COLOR 255 128 128
      END
    END
  END
END
As you can see, I tried to retrieve sensor data from a shapefile. The message returned by the SOS is:
<om:ObservationCollection xmlns:gml="http://www.opengis.net/gml" xmlns:ows="http://www.opengis.net/ows/1.1" xmlns:swe="http://www.opengis.net/swe/1.0.1" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:sos="http://www.opengis.net/sos/1.0" xmlns:ms="http://mapserver.gis.umn.edu/mapserver" xmlns:om="http://www.opengis.net/om/1.0" gml:id="WQ1289" xsi:schemaLocation="http://www.opengis.net/om/1.0 http://schemas.opengis.net/om/1.0.0/om.xsd http://mapserver.gis.umn.edu/mapserver http://127.0.0.1:8282/cgi-bin/mapserv.exe?map=c:/ms4w/mysos.map?service=WFS&version=1.1.0&request=DescribeFeatureType&typename=urban">
<om:member>
<om:Observation>
<om:procedure xlink:href="urn:ogc:def:procedure:ifgi-sensor-1"/>
<om:observedProperty>
<swe:CompositePhenomenon gml:id="Water Quality" dimension="3">
<swe:component xlink:href="urn:ogc:def:property:OGC-SWE:1:Id"/>
<swe:component xlink:href="urn:ogc:def:property:OGC-SWE:1:sensor_nam"/>
<swe:component xlink:href="urn:ogc:def:property:OGC-SWE:1:sensor_val"/>
</swe:CompositePhenomenon>
</om:observedProperty>
<om:resultDefinition>
<swe:DataBlockDefinition>
<swe:components>
<swe:DataRecord/>
</swe:components>
<swe:encoding>
<swe:TextBlock tokenSeparator="," blockSeparator=" " decimalSeparator="."/>
</swe:encoding>
</swe:DataBlockDefinition>
</om:resultDefinition>
<om:result></om:result>
</om:Observation>
</om:member>
</om:ObservationCollection>
Although I put 6 observations into the shapefile, the SOS doesn't return any of them. Could you please let me know what I should do to resolve this?
Thanks,
Ebrahim
Perhaps better to ask this at https://gis.stackexchange.com/