Chrome cookies not working after Tomcat web server reboot - google-chrome

I noticed recently that when I reboot my Tomcat web server, Chrome can no longer store cookies. Tomcat uses cookies for HTTP sessions, so the browser loses its HTTP session; the cookie we use to store the logged-in user also fails, and the user does not stay logged in.
This seems to be a new issue with Chrome, perhaps from a recent update; I do not remember seeing it before. If I close the Chrome browser and reopen it, it is fine again (until the server is rebooted again).
The issue does not happen in Firefox, so it looks like a bug in Chrome.
Has anyone else noticed this issue, or know of a solution?
I found some posts about Chrome/Tomcat cookie issues with the suggestion to set
sessionCookiePathUsesTrailingSlash="false" in the context.xml
but this does not fix the issue.
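For reference, this is the shape of that suggestion (a minimal context.xml sketch; the attribute name comes from Tomcat's Context configuration, the rest is placeholder):
<Context sessionCookiePathUsesTrailingSlash="false">
    <!-- ... the rest of your context configuration ... -->
</Context>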
It seems it might be related to the website supporting both HTTPS and HTTP and switching between the two (although it also occurred on a website that did not support HTTPS...).
Okay, I can now recreate the issue; the steps are:
1. Connect to the website via HTTPS.
2. Log out / log in.
3. Connect to the website via HTTP.
4. The Tomcat JSESSIONID cookie can no longer be stored (oddly, the user/password cookies are stored).
This only happens in Chrome, and only since the Chrome update that added the "insecure" flag on login pages that use HTTP.
Okay, I added this to my web.xml:
<session-config>
    <cookie-config>
        <http-only>true</http-only>
        <secure>true</secure>
    </cookie-config>
</session-config>
This did not fix the issue, but it made the issue always occur through HTTP, i.e. it made HTTP no longer able to store the JSESSIONID cookie at all.
I tried <secure>false</secure>, but I still get the original issue.
So it is related to this setting at least. Anyone have any ideas?
Logged bug on Chrome,
https://bugs.chromium.org/p/chromium/issues/detail?id=698741

I was able to reproduce your problem with Chrome: you just need to create the HttpSession from the HTTPS zone. Any subsequent HTTP request will not send the session cookie, and any attempt to Set-Cookie: JSESSIONID= through HTTP is ignored by Chrome.
The problem is localized to when the user switches from HTTPS to HTTP. The HTTPS session cookie is maintained even if the server is restarted and keeps working properly. (I tested with Tomcat 6, Tomcat 9, and using an Apache proxy for SSL.)
This is the response header sent by Tomcat when the session is created over HTTPS:
Set-Cookie: JSESSIONID=CD93A1038E89DFD39F420CE0DD460C72;path=/cookietest;Secure;HttpOnly
and this is the one for HTTP (note that Secure is missing):
Set-Cookie: JSESSIONID=F909DBEEA37960ECDEA9829D336FD239;path=/cookietest;HttpOnly
Chrome ignores the second Set-Cookie. On the other hand, Firefox and Edge replace the Secure cookie with the non-secure one. To determine what the correct behaviour should be, I reviewed RFC 2109:
4.3.3 Cookie Management
If a user agent receives a Set-Cookie response header whose NAME is
the same as a pre-existing cookie, and whose Domain and Path
attribute values exactly (string) match those of a pre-existing
cookie, the new cookie supersedes the old.
So it is clearly a Chrome bug, as you supposed in the question: the HTTP cookie should replace the one set by HTTPS.
Removing the cookie manually from Chrome or invalidating the session at the server side makes it work again (if, after these actions, the session is created using HTTP).
By default the JSESSIONID cookie is created with Secure when it is requested over HTTPS. I guess this is the reason that Chrome does not allow the cookie to be overwritten. But even if you try to set <secure>false</secure> in web.xml, Tomcat ignores it and the Set-Cookie header is still sent with Secure:
<session-config>
    <cookie-config>
        <http-only>true</http-only>
        <secure>true</secure>
    </cookie-config>
</session-config>
Changing the cookie name, setting sessionCookiePathUsesTrailingSlash, or removing HttpOnly had no effect.
I could not find a workaround for this issue other than invalidating the server session when a logged-in user switches from HTTPS to HTTP (see the sketch below).
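For illustration, a rough sketch of that workaround (hypothetical servlet and names; the idea is to invalidate the session and expire the Secure JSESSIONID while still on the HTTPS origin, so the next HTTP request can establish a fresh, non-Secure session cookie):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical endpoint the application sends users through before
// switching from HTTPS pages to HTTP pages.
public class SwitchToHttpServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate(); // drop the server-side session
        }
        // Expire the Secure JSESSIONID cookie; Chrome accepts the deletion
        // because it is sent from the secure (HTTPS) origin.
        Cookie expired = new Cookie("JSESSIONID", "");
        expired.setPath(request.getContextPath().isEmpty() ? "/" : request.getContextPath());
        expired.setMaxAge(0);
        response.addCookie(expired);
        response.sendRedirect("http://example.com/"); // placeholder HTTP landing page
    }
}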
Finally, I opened a bug in Chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=698839
UPDATED
The issue was finally marked as Won't Fix because it is an intentional change. See https://www.chromestatus.com/feature/4506322921848832
Strict Secure Cookies
This adds restrictions on cookies marked with the 'Secure' attribute. Currently, Secure cookies cannot be accessed by insecure (e.g. HTTP) origins. However, insecure origins can still add Secure cookies, delete them, or indirectly evict them. This feature modifies the cookie jar so that insecure origins cannot in any way touch Secure cookies. This does leave a carve out for cookie eviction, which still may cause the deletion of Secure cookies, but only after all non-Secure cookies are evicted.

I remember seeing this a couple of times, and as far as I can remember this was the only recommendation on the matter, as you mentioned:
A possible solution might be adding sessionCookiePathUsesTrailingSlash=false in the context.xml and seeing how that goes.
Some info on the matter from here
A discussion here (same solution)
Hope I didn't confuse the issues and this helps you, let me know with a comment if I need to edit/if worked/if I should delete, thanks!

There is a draft document (submitted by Google) to deprecate the modification of 'secure' cookies from non-secure origins. It specifies recommendations to amend the HTTP State Management Mechanism document.
Abstract of the document:
This document updates RFC6265 by removing the ability for a non-secure origin to set cookies with a 'secure' flag, and to overwrite cookies whose 'secure' flag is set. This deprecation improves the isolation between HTTP and HTTPS origins, and reduces the risk of malicious interference.
Chrome implemented this feature in version 52, and Mozilla implemented the same feature a few days ago.
To solve this issue, I think you should connect to the website via HTTPS only.

A bad way, I think, is to set sessionCookieName="JSESSIONIDForHttp" in context.xml, so the browser's cookie jar keeps the two apart:
Under a secure HTTPS connection, use the default "JSESSIONID".
Under a non-secure HTTP connection, use "JSESSIONIDForHttp".
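A minimal context.xml sketch of that idea (the sessionCookieName attribute is real Tomcat configuration; applying it only to the plain-HTTP deployment is the hypothetical part):
<Context sessionCookieName="JSESSIONIDForHttp">
    <!-- ... -->
</Context>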

Related

ERR_HTTP2_PROTOCOL_ERROR in Chrome. Works in Firefox [duplicate]

I'm currently working on a website which triggers a net::ERR_HTTP2_PROTOCOL_ERROR 200 error in Google Chrome. I'm not sure exactly what can provoke this error; I just noticed it pops up only when accessing the website over HTTPS. I can't be 100% sure it is related, but it looks like it prevents JavaScript from being executed properly.
For instance, the following scenario happens:
I'm accessing the website in HTTPS
My Twitter feed integrated via https://publish.twitter.com isn't loaded at all
I can notice in the console the ERR_HTTP2_PROTOCOL_ERROR
If I remove the code to load the Twitter feed, the error remains
If I access the website in HTTP, the Twitter feed appears and the error disappears
Google Chrome is the only web browser triggering the error: it works well on both Edge and Firefox.
(NB: I tried with Safari, and I have a similar kCFErrorDomainCFNetwork 303 error.)
I was wondering if it could be related to the header returned by the server, since there is this '200' mention in the error, and a 404 / 500 page doesn't trigger anything.
The thing is, the error isn't documented at all. A Google search gives me very few results. Moreover, I noticed it appears on very recent Google Chrome releases; the error doesn't pop up on v.64.X, but it does on v.75+ (regardless of the OS; I'm working on Mac, though).
Might be related to Website OK on Firefox but not on Safari (kCFErrorDomainCFNetwork error 303) neither Chrome (net::ERR_SPDY_PROTOCOL_ERROR)
Findings from further investigation are the following:
the error doesn't pop up on the exact same page if the server returns 404 instead of 2XX
the error doesn't pop up locally with an HTTPS certificate
the error pops up on a different server (both are OVH's), which uses a different certificate
the error pops up no matter what PHP version is used, from 5.6 to 7.3 (framework used: CakePHP 2.10)
As requested, below is the returned header for the failing resource, which is the whole web page. Even though the error triggers on each page having an HTTP 200 header, those pages always load in the client's browser, but sometimes an element is missing (in my example, the external Twitter feed). Every other asset in the Network tab has a success return, except the whole document itself.
Google Chrome header (with error):
Firefox header (without error):
A curl --head --http2 request in console returns the following success:
HTTP/2 200
date: Fri, 04 Oct 2019 08:04:51 GMT
content-type: text/html; charset=UTF-8
content-length: 127089
set-cookie: SERVERID31396=2341116; path=/; max-age=900
server: Apache
x-powered-by: PHP/7.2
set-cookie: xxxxx=0919c5563fc87d601ab99e2f85d4217d; expires=Fri, 04-Oct-2019 12:04:51 GMT; Max-Age=14400; path=/; secure; HttpOnly
vary: Accept-Encoding
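For anyone wanting to reproduce that check, the command was of this shape (placeholder URL, since the real host isn't shown):
curl --head --http2 https://www.example.com/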
Trying to go deeper with the chrome://net-export/ and https://netlog-viewer.appspot.com tools tells me the request ends with a RST_STREAM:
t=123354 [st=5170] HTTP2_SESSION_RECV_RST_STREAM
--> error_code = "2 (INTERNAL_ERROR)"
--> stream_id = 1
From what I read in this other post: "In HTTP/2, if the client wants to abort the request, it sends a RST_STREAM. When the server receives a RST_STREAM, it will stop sending DATA frames to the client, thereby stopping the response (or the download). The connection is still usable for other requests, and requests/responses that were concurrent with the one that has been aborted may continue to progress.
[...]
It is possible that by the time the RST_STREAM travels from the client to the server, the whole content of the request is in transit and will arrive to the client, which will discard it. However, for large response contents, sending a RST_STREAM may have a good chance to arrive to the server before the whole response content is sent, and therefore will save bandwidth."
The described behavior is the same as the one I can observe. But that would mean the browser is the culprit, and then I wouldn't understand why it happens on two identical pages, with one having a 200 header and the other a 404 (the same goes if I disable JS).
In my case it was simple: no disk space left on the web server.
For several weeks I was also annoyed by this "bug":
net::ERR_HTTP2_PROTOCOL_ERROR 200
In my case, it occurred on images generated by PHP.
It was at the header() level, and on this one in particular:
header('Content-Length: ' . filesize($cache_file));
It obviously did not return the exact size, so I deleted it and everything works fine now.
So Chrome checks the accuracy of the data transmitted via the headers, and if it does not correspond, it fails.
EDIT
I found out why the Content-Length computed via filesize was being miscalculated: GZIP compression is active on the PHP files, so excluding the file in question fixes the problem. Put this code in the .htaccess:
SetEnvIfNoCase Request_URI ^/thumb.php no-gzip dont-vary
It works, and we keep the Content-Length header.
I was finally able to solve this error after spending 24 hours researching the things I thought were causing it. I visited just about every page on the web about it, and I am happy to say that I found the solution.
If you are using NGINX, set gzip to off and add proxy_max_temp_file_size 0; in the server block, as shown below.
server {
    ...
    gzip off;
    proxy_max_temp_file_size 0;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        ...
    }
}
Why? Because what was actually happening was that all the content was being compressed twice, and we don't want that, right?!
The fix for me was setting minBytesPerSecond in IIS to 0. This setting can be found in system.applicationHost/webLimits in IIS's Configuration Editor. By default it's set to 240.
It turns out that some web servers will cut the connection to a client if the server's data throughput to the client passes below a certain limit. This is to protect against "slow drip" denial of service attacks. However, this limit can also be triggered in cases where an innocent user requests many resources all at once (such as lots of images on a single page), and the server is forced to ration the bandwidth for each request so much that it causes one or more requests to drop below the throughput limit, which causes the server to cut the connection and shows up as net::ERR_HTTP2_PROTOCOL_ERROR in Chrome.
For example, if you request 11 GIF images all at once, and each individual GIF is 10 megabytes (11 * 10 = 110 megabytes total), and the server is only able to serve at 100 megabytes per second (per thread), the server will have to slow the throughput on the last GIF image until the first 10 are finished. If the throughput on that last GIF is slowed so much that it drops below the minBytesPerSecond limit, it will cut the connection.
I was able to resolve this by following these steps:
I used Chrome's Network Log Export tool at chrome://net-export/ to see exactly what was behind the ERR_HTTP2_PROTOCOL_ERROR error. I started the log, reproduced the error, and stopped the log.
I imported the log into the log viewer at https://netlog-viewer.appspot.com/#import, and saw an interesting event titled HTTP2_SESSION_RECV_RST_STREAM, with error code 8 (CANCEL).
I did some Googling on the term "RST_STREAM" (which appears to be an abbreviated form of "reset stream") and found a discussion between some people talking about an IIS setting called minBytesPerSecond (discussion here: https://social.msdn.microsoft.com/Forums/en-US/aeb01c46-bcdf-40ed-a417-8a3558221137). I also found another discussion where there was some debate about whether minBytesPerSecond was intended to protect against slow HTTP DoS (slow drip) attacks (discussion here: IIS 8.5 low minBytesPerSecond not working against slow HTTP POST). In any case, I learned that IIS uses minBytesPerSecond to determine whether to cancel a connection if it cannot sustain the minimum throughput. This is relevant in cases where a single user makes many requests to a large resource, and each new connection ends up starving all the other unfinished ones, to the point where some may fall below the minBytesPerSecond threshold.
To confirm that the server was canceling requests due to a minBytesPerSecond error, I checked my server's HTTPERR log at c:\windows\system32\logfiles\httperr. Sure enough, I opened the file and did a text search for "MinBytesPerSecond" and there were tons of entries for it.
So after I changed the minBytesPerSecond to 0, I was no longer able to reproduce the ERR_HTTP2_PROTOCOL_ERROR error. So, it appears that the ERR_HTTP2_PROTOCOL_ERROR error was being caused by my server (IIS) canceling the request because the throughput rate from my server fell below the minBytesPerSecond threshold.
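For reference, the same change expressed directly in applicationHost.config (rather than through the Configuration Editor) would look roughly like this sketch:
<system.applicationHost>
    <webLimits minBytesPerSecond="0" />
</system.applicationHost>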
So for all you reading this right now, if you're not using IIS, maybe there is a similar setting related to minimum throughput rate you can play with to see if it gets rid of the ERR_HTTP2_PROTOCOL_ERROR error.
I experienced a similar problem: I was getting ERR_HTTP2_PROTOCOL_ERROR on one of my HTTP GET requests.
I noticed that the Chrome update was pending, so I updated the Chrome browser to the latest version and the error was gone next time when I relaunched the browser.
I encountered this because the HTTP/2 server closed the connection when sending a big response to Chrome.
Why?
Because it is just a setting of the HTTP/2 server, named WriteTimeout.
I had this problem with an Nginx server exposing a Node.js application to the external world. Nginx compressed the files (css, js, ...) with gzip, and it turned out the Node.js server was also compressing the content with gzip. This double compression led to the problem; cancelling the Node.js compression solved the issue.
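As a sketch of what that means in an Express-based Node.js app (hypothetical setup; the point is simply not to gzip in Node when nginx already does it):
const express = require('express');
// const compression = require('compression');

const app = express();
// app.use(compression()); // removed: nginx already gzips responses,
//                         // and compressing twice triggered the error
app.get('/', (req, res) => res.send('hello'));
app.listen(3000);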
I didn't figure out what exactly was happening, but I found a solution.
The CDN feature of OVH was the culprit. I had it installed on my host service but disabled for my domain because I didn't need it.
Somehow, when I enabled it, everything worked.
I think it forces Apache to use the HTTP/2 protocol, but what I don't understand is that there was indeed an HTTP/2 mention in each of my headers, which I presume means the server was already answering using the right protocol.
So the solution for my very particular case was to enable the CDN option on all concerned domains.
If anyone understands better what could have happened here, feel free to share explanations.
I faced this error several times, and it was due to transferring large resources (larger than 3 MB) from the server to the client.
This error is currently being fixed: https://chromium-review.googlesource.com/c/chromium/src/+/2001234
But what helped me was changing these nginx settings:
gzip on;
add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
expires off;
In my case, Nginx acts as a reverse proxy for Node.js application.
We experienced this problem on pages with long Base64 strings. The problem occurs because we use CloudFlare.
Details: https://community.cloudflare.com/t/err-http2-protocol-error/119619.
Key section from the forum post:
After further testing on Incognito tabs on multiple browsers, then doing the changes on the code from a BASE64 to a real .png image, the issue never happened again, in ANY browser. The .png had around 500kb before becoming a base64, so CloudFlare has issues with huge lines of text on the same line (since base64 is a long string) as a proxy between the domain and the Heroku. As mentioned before, directly hitting the Heroku url also never triggered the issue.
The temporary hack is to disable HTTP/2 on CloudFlare.
Hope someone else can produce a better solution that doesn't require disabling HTTP/2 on CloudFlare.
In our case, the reason was an invalid header.
As mentioned in Edit 4:
take the logs
in the viewer choose Events
choose HTTP2_SESSION
Look for something similar:
HTTP2_SESSION_RECV_INVALID_HEADER
--> error = "Invalid character in header name."
--> header_name = "charset=utf-8"
By default nginx limits the upload size to 1MB.
With client_max_body_size you can set your own limit, as in
location /uploads {
    ...
    client_max_body_size 100M;
}
You can also set this on the http or server block instead (see here), as sketched below.
This fixed my issue with net::ERR_HTTP2_PROTOCOL_ERROR
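For example, the same directive at the http level would be (sketch):
http {
    client_max_body_size 100M;
    ...
}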
Just posting here to let people know that ERR_HTTP2_PROTOCOL_ERROR in Chrome can also be caused by an unexpected response to a CORS request.
In our case, the OPTIONS request was successful, but the following PUT that should upload an image to our infrastructure was denied with a 410 (because of a missing configuration allowing uploads), resulting in Chrome issuing an ERR_HTTP2_PROTOCOL_ERROR.
When checking in Firefox, the error message was much more helpful:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://www.[...] (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 410.
My recommendation would be to check an alternative browser in this case.
I'm not convinced this was the issue, but through cPanel I noticed the PHP version was 5.6, and changing it to 7.3 seemed to fix it. This was for a WordPress site. I noticed I could access images and generic PHP files, but loading WordPress itself caused the error.
It seems many issues can cause ERR_HTTP2_PROTOCOL_ERROR: in my case it was a minor syntax error in a PHP-generated header, Content-Type : text/plain. You might notice the space before the colon... that was it. It works with no problem when the colon is right next to the header name, as in Content-Type: text/plain. It only took a million hours to figure out... The error happens with Chrome only; Firefox loaded the object without complaint.
If simply restarting, e.g., Chrome Canary with a fresh profile fixes the problem, then you are surely the "victim" of a failed Chrome Variation! Yes, there are ways to opt out of being a guinea pig in Chrome's field testing.
In my case, header params cannot be set to null or an empty string:
{
    'Authorization': Authorization // Authorization can't be null or ''
}
I got the same issue (ASP, C#, HttpPostedFileBase) when posting a file that was larger than 1MB (even though the application doesn't have any limitation on file size); for me, simplifying the model class helped. If you get this issue, try removing some parts of the model and see if it helps in any way. Sounds strange, but it worked for me.
I had been experiencing this problem for a week while trying to send DELETE requests to my PHP server through AJAX. I recently upgraded my hosting plan, and I now have an SSL certificate on the host that stores the PHP and JS files. Since adding the SSL certificate I no longer experience this issue. Hoping this helps with this strange error.
I also faced this error, and I believe there can be multiple reasons behind it. Mine was that ARR was timing out.
In my case, the browser was making a request to a reverse proxy site where I had set my redirection rules, and that proxy site eventually requests the actual site. For huge data it was taking more than 2 minutes 5 seconds, while the Application Request Routing timeout for my server was set to 2 minutes. I fixed this by increasing the ARR timeout with the steps below:
1. Go to IIS
2. Click on server name
3. Click on Application Request Routing Cache in the middle pane
4. Click Server Proxy settings in right pane
5. Increase the timeout
6. Click Apply
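Equivalently, the same setting in applicationHost.config looks roughly like this (sketch; the timeout format is hh:mm:ss, and the proxy section is added by ARR):
<system.webServer>
    <proxy enabled="true" timeout="00:04:00" />
</system.webServer>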
My team saw this on a single JavaScript file we were serving up. Every other file worked fine. We switched from HTTP/2 back to HTTP/1.1 and then got either net::ERR_INCOMPLETE_CHUNKED_ENCODING or ERR_CONTENT_LENGTH_MISMATCH. We ultimately discovered that a corporate filter (Trustwave) was erroneously detecting an "infoleak" (we suspect it detected something in our file/filename that resembled a social security number). Getting corporate to tweak this filter resolved our issues.
For my situation, this error was caused by circular references in the JSON sent from the server when using an ORM for parent/child relationships. So the quick and easy solution was:
JsonConvert.SerializeObject(myObject, new JsonSerializerSettings { ReferenceLoopHandling = ReferenceLoopHandling.Ignore })
The better solution is to create DTOs that do not contain the references on both sides (parent/child).
I had another case that caused an ERR_HTTP2_PROTOCOL_ERROR that hasn't been mentioned here yet. I had created a circular reference in IoC (Unity), where class A referenced class B (through a couple of layers) and class B referenced class A. Bad design on my part, really. But I created a new interface/class for the method in class A that I was calling from class B, and that cleared it up.
I hit this issue working with Server-Sent Events. The problem was solved when I noticed that the domain name I used to initiate the connection included a trailing slash: https://foo.bar.bam/ failed with ERR_HTTP2_PROTOCOL_ERROR, while https://foo.bar.bam worked.
In my case (nginx on Windows proxying an app while serving static assets on its own), the page showed multiple assets, including 14 bigger pictures; those errors appeared for about 5 of those images exactly after 60 seconds. It was the default send_timeout of 60s making those image requests fail; increasing the send_timeout made it work.
I am not sure what causes nginx on Windows to serve those files so slowly. It is only 11.5MB of resources, yet it takes nginx almost 2 minutes to serve them, but I guess that is a subject for another thread.
In my case, the problem was that Bitdefender provided me with a local SSL certificate while the website was still without a certificate.
When I disabled Bitdefender and reloaded the page, the actual valid server SSL certificate was loaded, and the ERR_HTTP2_PROTOCOL_ERROR was gone.
In my case, it was WordPress that now requires PHP 7.4 and I was running 7.2.
As soon as I updated, the errors disappeared.
It happened again, and this time it was the ad blocker that didn't like the names of my images (yt.png, ig.png, url.png). I added a prefix and everything loaded OK.
In my case, the time on my computer (the browser client) was out of date; I synced it using the settings in Windows, and then the error went away.
I had line breaks in the Content-Security-Policy in my nginx.conf that produced this error when used in a Docker container running in Kubernetes on GCP (serving Angular, but I doubt that matters).
Putting them all back on the same line made the problem go away.
A curl -v helped diagnose it:
http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [content-security-policy], value: [script-src 'unsafe-inline' 'self....
It was much easier to edit on separate lines, but never again!
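In other words, the working form keeps each header value on a single line, roughly like this sketch (placeholder policy):
# Header values must not contain raw line breaks, or the HTTP/2 layer
# rejects the header as invalid.
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'";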

Chrome blocking cookies even with SameSite=None

I have a Flask application hosted on Heroku, embedded as an iframe on one of my websites.
Let's say a.com renders this <heroku_url>.com as an iframe.
When a user visits a.com, <heroku_url>.com is rendered and a session is created.
from flask import Flask, session, make_response

app = Flask(__name__)
app.secret_key = "..."  # required for Flask sessions (placeholder value)

@app.route("/")
def index():
    session['foo'] = 'bar'
    response = make_response("setting cookie")
    response.headers.add('Set-Cookie', 'cross-site-cookie=bar; SameSite=None; Secure')
    return response
In Chrome DevTools, I see the cookie getting blocked. It works fine in Firefox, though.
Am I setting the cookie properly?
I understand this is due to the Chrome 80 update, but I'm not sure about the workaround.
Setting the samesite attribute of the session cookie to None seems to have solved the problem.
I had to update Werkzeug (the WSGI web application library that Flask wraps) and update the session cookie, i.e.:
app.config['SESSION_COOKIE_SAMESITE'] = 'None'
app.config['SESSION_COOKIE_SECURE'] = True
However, this also depends on the user's preferences in chrome://settings/cookies.
Chrome will block the session cookies, even if samesite is set to None, if one of the options below is selected:
Block third-party cookies
Block all cookies
Block third-party cookies in Incognito (blocks in incognito mode)
You can check that your browser is treating cookies as expected via the test site at https://samesite-sandbox.glitch.me/
If all the rows contain green checks (✔️), then it's likely there is some kind of issue with the cookie itself, and I would suggest checking the Issues tab and Network tab in DevTools to confirm the set-cookie header definitely contains what it should.
If there are any red or orange crosses (✘) on the test site, then something in your browser is affecting cookies. Check that you are not blocking third-party cookies (chrome://settings/cookies) or running an extension that may do something similar.

Cypress.io on Chrome with "SameSite by default cookies" issue

We're running Cypress.io locally over http:// using Chrome, and when "SameSite by default cookies" is on (which is starting to roll out to all users), our login tests fail because the session cookie cannot be set (it is blocked because the connection is not secure). Any suggestions for a workaround? I looked into setting a Chrome flag as per:
https://docs.cypress.io/api/plugins/browser-launch-api.html#Examples
with the flag:
https://peter.sh/experiments/chromium-command-line-switches/#unsafely-treat-insecure-origin-as-secure
but couldn't find an appropriate flag. Thanks.
I ended up fixing this by simply changing the session cookie's SameSite attribute for my local/test environment from None, which requires Secure, to Lax. Hope this helps someone else!
I ran into the same issue.
My solution was going to chrome://flags (after launching Chrome from Cypress) and setting "SameSite by default cookies" to Disabled.
You can actually write an interceptor in Cypress to rewrite the headers of responses to use SameSite=None; see the sketch below. This blog explains it in more detail:
https://www.tomoliver.net/posts/cypress-samesite-problem
This technique is good if you don't have control over the server sending the response, e.g. a third-party auth server.
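A rough sketch of the idea (assuming Cypress 6+ with cy.intercept; adjust the rewrite to the attribute combination your environment needs):
// cypress/support/index.js (hypothetical location)
beforeEach(() => {
  cy.intercept('**', (req) => {
    req.continue((res) => {
      const cookies = res.headers['set-cookie'];
      if (cookies) {
        // set-cookie may be a string or an array; normalize, then rewrite
        res.headers['set-cookie'] = [].concat(cookies).map((cookie) =>
          cookie.replace(/SameSite=\w+/i, 'SameSite=None; Secure')
        );
      }
    });
  });
});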

How to get the Request Headers using the Chrome Devtool Protocol

The new Chrome versions (72+) do not send the requestHeaders.
There was a solution:
DevTools Protocol network inspection is located quite high in the network stack. This architecture doesn't let us collect all the headers that are added to the requests. So the ones we report in Network.requestWillBeSent and Network.requestIntercepted are not complete; this will stay like this for the foreseeable future.
There are a few ways to get real request headers:
• the crude one is to use proxy
• the more elegant one is to rely on Network.responseReceived DevTools protocol event. The actual headers are reported there as requestHeaders field in the Network.Response.
This worked fine with older Chrome versions but not with the latest ones. Here is a small summary I made for the versions I could test.
A solution for Chrome v67 was to add these flags to disable Site Isolation:
chrome --disable-site-isolation-trials --disable-features=IsolateOrigins,site-per-process --disable-web-security
None of this works with the latest Chrome v73.
Maybe it is caused by this:
Issue 932674: v72 broke devtools request interception inside cross-domain iframes
You can use the Fetch protocol domain that is available since m74.
The solution given does not work either; Fetch.requestPaused does not contain the request headers...
I found some info that may explain the cause:
DevTools: do not expose raw headers for cross-origin requests
DevTools: do not report raw headers and cookies for protected subresources. In case subresource request's site needs to have its document protected, don't send raw headers and cookies into the frame's renderer.
Or is it caused by it being an HTTP/2 server?
Does the HTTP/2 header frame factor into a response’s encodedDataLength? (Remote Debugging Protocol)
...headersText is undefined for HTTP/2 requests
1. How can I get the request headers using the Chrome DevTools Protocol with Chrome v73+?
2. Can a WebExtension solve this?
3. Is there another way that will be stable and last longer? Something like tshark + SSLKEYLOGFILE, which I'm attempting to avoid. Thank you.

Do Mobile Browsers send httpOnly cookies via the HTML5 Audio-Tag?

I'm trying to play some mp3 files via the HTML5 audio tag. On the desktop this works great (with Chrome), but when it comes to mobile browsers (also Chrome for Android), there seem to be some difficulties:
I protected the stream with a password, and therefore the streaming server needs to find a special authentication cookie (Spring Security remember-me). But somehow the mobile browser doesn't send this cookie when it accesses the mp3 stream via the audio tag. When I enter the stream URL directly into the address bar, everything works just fine.
While I searched for the lost cookie, I found out that the mobile browser still sends some cookies (e.g. the JSESSIONID) but not all. Further investigation (a quick PoC with PHP) revealed that the mobile browser seems to refuse to send cookies via the audio tag which have the HttpOnly flag set. So my question is:
Is this specified behaviour? Why are there differences between the mobile and the desktop versions (of Chrome), and is there a way to control the behaviour from the client side?
By looking more deeply into the HTTP packets, I found out that the Android browser doesn't request the mp3 stream itself but delegates this to Stagefright (an Android multimedia client). A quick search revealed that on old Android versions (before 4.0) Stagefright cannot handle cookies:
https://code.google.com/p/android/issues/detail?id=17553 <-- (Status: spam) WTF...
https://code.google.com/p/android/issues/detail?id=17281
https://code.google.com/p/android/issues/detail?id=10567
https://code.google.com/p/android/issues/detail?id=19958
My own tests confirmed this. The old Stagefright (Android 2.3.x) doesn't send any cookies at all; the Stagefright from a European S3 (Android 4.1.2, Stagefright 1.2) sends only the cookies which do NOT have the HttpOnly flag.
So I think everybody has to decide for themselves which solution to use:
enable httpOnly: Android has no access at all, but it's secure
disable httpOnly: less secure against XSS, but works for Android >4.0
disable cookie authentication entirely: insecure, but works for all
Note: The problem with simply disabling httpOnly is that you make your whole application vulnerable to cookie hijackers. Another possible solution would be to have a special remember-me cookie for the stream (without HttpOnly) and another remember-me cookie with HttpOnly enabled, as sketched below.
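That split could look roughly like this on the wire (hypothetical cookie names and values):
Set-Cookie: remember-me=a1b2c3; Path=/; HttpOnly
Set-Cookie: stream-remember-me=d4e5f6; Path=/stream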
I had the same problem, and disabling the HttpOnly or Secure flags on cookies didn't solve the problem in the Android 4.2 and 4.4 Chrome browser.
Finally I figured out the cause: I had a cookie whose value contained special characters such as colon (:) and pipe (|). After disabling that cookie with the special characters, the videos play fine on Android 4.2 and 4.4.
Hope this helps someone.