Chrome ignores 302 HTTPS to HTTP redirect of resource and causes CERT error - google-chrome

I have a page served via HTTPS. All content is public and there is no sensitive data on this page. I also have resources (audio MP3s) in a Google Cloud bucket that has no HTTPS support of its own and sits on a subdomain (DNS CNAME entry only); it is reachable via HTTPS, but returns a Google certificate when accessed that way. Until now I had my server respond with a 302 redirect to my subdomain, downgrading the static MP3 resources from HTTPS to HTTP, and it worked. See this request:
Request:
Request URL: https://x.com/q/ics/edit/r.6c19e2d99e220f648b3c1799ed05dc99.mp3
Request Method: GET
Status Code: 302
Remote Address: 172.217.19.115:443
Referrer Policy: no-referrer-when-downgrade
Response:
content-length: 259
content-type: text/html; charset=UTF-8
date: Wed, 24 Jun 2020 05:42:46 GMT
location: http://subdomain.x.com/ics/_resources/r.6c19e2d99e220f648b3c1799ed05dc99.mp3
server: Google Frontend
status: 302
x-cloud-trace-context: 3086726898e017b2b99858fcea43c5e0
But Chrome suddenly started ignoring the http: scheme in the Location header and resolves the HTTPS version instead, which causes ERR_CERT_COMMON_NAME_INVALID and fails the download.
I know this can be resolved by using a load balancer, but that would almost double our hosting costs (maintenance/setup/learning costs not included!), and we really do not need a load balancer with App Engine classic.
The problem first occurred a few weeks ago, went away within a day, but is now persistent!
Does anyone know why Chrome is interpreting the redirect this way? Is this required by any specification?
Update: The only solution I have found so far is to downgrade the whole page with a redirect from HTTPS to HTTP. That is a really sad solution.
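One alternative to downgrading the whole page would be to stop redirecting and have the app proxy the file itself, so the browser only ever talks HTTPS to the app. Below is a minimal sketch using Flask and requests; the route, bucket name, and path layout are placeholders rather than the actual setup, and it assumes the bucket is publicly readable via its canonical storage.googleapis.com endpoint (which serves a valid Google certificate):

# Hedged sketch: proxy the MP3 through the HTTPS app instead of redirecting.
# BUCKET_BASE is a placeholder; adjust it to the real bucket and path layout.
import requests
from flask import Flask, Response, abort

app = Flask(__name__)

# The canonical endpoint avoids the CNAME certificate mismatch entirely.
BUCKET_BASE = "https://storage.googleapis.com/YOUR_BUCKET/ics/_resources"

@app.route("/q/ics/edit/<name>.mp3")
def serve_mp3(name):
    upstream = requests.get(f"{BUCKET_BASE}/{name}.mp3", stream=True)
    if upstream.status_code != 200:
        abort(upstream.status_code)
    return Response(upstream.iter_content(chunk_size=64 * 1024),
                    content_type="audio/mpeg")

The trade-off is that the audio bytes now flow through the app instance instead of being served directly by the bucket, which may matter for egress costs and instance hours.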

Related

Chrome ignores alt-svc header and doesn't send HTTP/3 requests

I've been trying to set up a web server on localhost which supports HTTP/3. I've successfully run a Caddy server in Docker which answers GET requests with these headers:
alt-svc: h3-27=":443"; ma=2592000
content-encoding: gzip
content-length: 1521
content-type: text/html; charset=utf-8
date: Thu, 07 May 2020 07:27:44 GMT
server: Caddy
status: 200
vary: Accept-Encoding
X-DNS-Prefetch-Control: off
Even though the alt-svc header was received, I couldn't detect any h3-27 requests in the network logs of the developer tools.
I also created a CA, added it to Chrome, and signed the server's certificate with it, which Chrome accepts. I ran Chrome with the flags --enable-quic --quic-version="h3-27", as suggested in this article. I've tried the same with an nginx server based on this image and couldn't make that work either.
What am I missing?
Caddyfile:
{
    experimental_http3
}
localhost {
    root * /usr/share/caddy/
    encode zstd gzip
    templates
    file_server
    tls /etc/caddy/certs/localhost.crt /etc/caddy/certs/localhost.key
}
Caddy Output:
2020/05/07 07:23:50.939 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2020/05/07 07:23:51.252 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["127.0.0.1:2019", "localhost:2019", "[::1]:2019"]}
2020/05/07 07:23:51 [INFO][cache:0xc00088da90] Started certificate maintenance routine
2020/05/07 07:23:51 [WARNING] Stapling OCSP: no OCSP stapling for [localhost bar.localhost]: no OCSP server specified in certificate
2020/05/07 07:23:51.254 INFO http skipping automatic certificate management because one or more matching certificates are already loaded {"domain": "localhost", "server_name": "srv0"}
2020/05/07 07:23:51.254 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2020/05/07 07:23:51.255 INFO tls cleaned up storage units
2020/05/07 07:23:51.256 INFO http enabling experimental HTTP/3 listener {"addr": ":443"}
2020/05/07 07:23:51.257 INFO autosaved config {"file": "/config/caddy/autosave.json"}
2020/05/07 07:23:51.257 INFO serving initial configuration
Found the reason myself. The current version of Chrome (Version 81.0.4044.138) does not support this version of QUIC (h3-27). It could be fixed by using chrome-dev (Version 84.0.4136.5).
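Independent of the Chrome version, it can save time to first confirm, outside the browser, that the server really advertises h3-27. A small sketch with Python's requests library, assuming the Caddy setup above on localhost with a self-signed certificate (hence verify=False, acceptable only for local testing):

# Hedged sketch: fetch over HTTP/1.1 and print the advertised alt-svc header.
import requests

resp = requests.get("https://localhost", verify=False)  # self-signed local cert
print(resp.status_code)              # expect 200
print(resp.headers.get("alt-svc"))   # expect: h3-27=":443"; ma=2592000

If the header shows up here, the remaining variables are on the browser side (QUIC version support and flags), which matches the resolution above.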

HTTP request with JSON data getting bad request error

POST https://api.smtp2go.com/v3/stats/email_summary HTTP/1.1
Host: api.smtp2go.com
Content-Type: application/json
Connection: close
Content-Length: 52
{ "api_key":"api-" }
I am getting this response:
400 Bad Request
Bad Request Your browser sent a request that this server could not understand. Apache/2.4.10 (Debian) Server at us-api-1.smtp2go.com Port 80
For security reasons, you should not post your API key publicly; I would recommend removing it from this post. As for your problem, I would recommend that you double-check that your API key is active. That could be part of the problem.
If that is not the problem, I'd recommend checking out this site, which includes all API error codes: https://kb.insideview.com/hc/en-us/articles/202959833-API-Error-Codes
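Another thing worth ruling out: when a raw request is written by hand, the Content-Length must exactly match the body bytes, and a mismatch is a classic cause of a 400 Bad Request from Apache. Letting an HTTP library build the request avoids that class of error entirely. A minimal sketch with Python's requests library (the key is a placeholder):

# Hedged sketch: the library computes Content-Length and Content-Type itself.
import requests

payload = {"api_key": "api-YOUR_KEY"}  # placeholder; never post a real key
resp = requests.post("https://api.smtp2go.com/v3/stats/email_summary",
                     json=payload)
print(resp.status_code)
print(resp.text)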

Dnsmasq failing to catch 307 redirect for https?

I am currently doing some debugging on my website, which involves calling the Facebook API. I've installed dnsmasq on my Mac OS X machine to redirect all requests for facebook.com to 127.0.0.1.
This is my entry in dnsmasq.conf:
address=/facebook.com/127.0.0.1
I also have /etc/resolver/com containing nameserver 127.0.0.1.
When I turn dnsmasq on, visiting facebook.com results in a PAGE NOT FOUND error in Chrome. This shows that my dnsmasq is working.
However, I noticed that Chrome redirects http://www.facebook.com to https://www.facebook.com due to HSTS. I went to chrome://net-internals#hsts to delete facebook.com's entry.
The strange thing is, when I am debugging, I see that facebook.com is indeed returning 307 redirects for http://www.facebook.com.
This is very strange because the domain facebook.com currently resolves to 127.0.0.1 on my computer! Furthermore, when I dig deeper into the request, I see that the request itself looks valid.
Where is this 307 redirect coming from if facebook.com is unresolvable?
A 307 like this is an internal, browser-based redirect for HTTP Strict Transport Security (HSTS). It does not come from the server; it's a synthetic response created by the browser.
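A quick way to confirm that the 307 is synthetic, and to separate DNS from browser behavior, is to resolve the name outside the browser. A sketch using Python's standard library (this goes through the system resolver, which, unlike dig, honors /etc/resolver entries on macOS):

# Hedged sketch: ask the OS resolver directly, bypassing Chrome entirely.
import socket

# Should print 127.0.0.1 if the dnsmasq entry is being applied.
print(socket.gethostbyname("www.facebook.com"))

If this prints 127.0.0.1 while Chrome still shows a 307 to HTTPS, that is consistent with the redirect being generated inside the browser by HSTS rather than by any server.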

How to debug mod_perl with reverse proxy of mod_proxy?

I'm new to this, with very little background, so bear with me. I want to debug why the page from the reverse-proxied site stops loading prematurely when it requests something like a filename.json file. I noticed the request header has Content-Type: application/x-www-form-urlencoded, and the response headers are:
$WSEP:
Cache-Control:no-cache
Connection:Keep-Alive
Content-Encoding:gzip
Content-Language:en-US
Content-Length:55
Content-Type:text/html;charset=UTF-8
Date:Sun, 12 Jun 2016 04:54:18 GMT
Expires:Thu, 01 Jan 1970 00:00:00 GMT
Keep-Alive:timeout=5, max=97
Server:Apache
Vary:Accept-Encoding
X-Cnection:Close
but I'm wondering why the Content-Type is text/html when the resource is JSON. So I tried manipulating the response to application/json, and I even tried serving the file locally from my server's /var/www/html (by creating a dummy JSON file) instead of downloading it from the remote server. I was thinking that because this is a JSON file, the Content-Type should be application/json, not text/html. But the issue still exists: loading just stops when the page requests that JSON file.
Now my question is: how can I make sure that the request for that JSON file is the culprit? Or, more appropriately, how can I debug mod_perl? I tried adding
use warnings;
in the code
PerlWarn On
in the Apache config, but I still don't see any warnings/errors that would guide my debugging.
Can you give me advice on how to find the fault?
By the way, the request for that JSON file returns a 404, but I assume the proxy server should ignore that and continue serving the other files.
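To answer the "is that JSON request the culprit?" part: requesting the file directly, outside the browser, shows exactly what the proxy returns for it. A sketch with Python's requests library; the URL is a placeholder for the real proxied path:

# Hedged sketch: hit the proxied JSON URL directly and inspect the reply.
import requests

resp = requests.get("https://your-proxy-host/path/to/filename.json")  # placeholder
print(resp.status_code)                   # a 404 here matches the observation
print(resp.headers.get("Content-Type"))   # text/html vs application/json
print(resp.text[:200])                    # often an HTML error page, not JSON

Note that a 404 for a single data file does not normally stop a page from loading; whether it does here depends on how the page's own JavaScript handles the failed request.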

Why isn't the browser loading cdn file from cache?

Here is a very simple example to illustrate my question, using jQuery from a CDN to modify the page:
<html>
  <body>
    <p>Hello Dean!</p>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
    <script>$("p").html("Hello, Gabe!")</script>
  </body>
</html>
When you load this page with an internet connection, the page displays "Hello, Gabe!". When I then turn off the internet connection and reload, the page displays "Hello Dean!" with an error: jQuery is not available.
My understanding is that CDNs send long Cache-Control and Expires headers in the response, which I take to mean that the browser caches the file locally.
$ curl -s -D - https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js | head
HTTP/1.1 200 OK
Server: cloudflare-nginx
Date: Fri, 17 Apr 2015 16:30:33 GMT
Content-Type: application/javascript
Transfer-Encoding: chunked
Connection: keep-alive
Last-Modified: Thu, 18 Dec 2014 17:00:38 GMT
Expires: Wed, 06 Apr 2016 16:30:33 GMT
Cache-Control: public, max-age=30672000
But this does not appear to be happening. Can someone please explain what is going on? Also, how can I get the browser to use the copy of jQuery it has cached somewhere?
This question came up because we want to use CDNs to serve external libraries, but also want to be able to develop the page offline, like on an airplane.
I get similar behavior using Chrome and Firefox.
This has nothing to do with CDNs as such. When the browser encounters a script tag, it requests the file from the server, whether it's hosted on a CDN or on your own server. If the browser previously loaded it from the same address, the server tells it whether the file should be reloaded or not (by sending a 304 HTTP status code).
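That revalidation behavior can be observed directly: resend the request with If-Modified-Since set to the Last-Modified value from the first response, and an unchanged static asset typically comes back as a 304 with no body. A sketch against the same cdnjs URL:

# Hedged sketch: demonstrate conditional revalidation against the CDN.
import requests

url = "https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"
first = requests.get(url)
stamp = first.headers["Last-Modified"]

second = requests.get(url, headers={"If-Modified-Since": stamp})
print(second.status_code)   # typically 304: the cached copy is still valid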
What you are probably looking for is caching your application for offline use. This is possible with an HTML5 cache manifest file: you create a file listing all the files that need to be cached for explicit offline use.
Since the previous answer recommended using a cache manifest file, I just wanted to make people aware that this feature is being dropped from the web standards.
Info available from Mozilla:
https://developer.mozilla.org/en-US/docs/Web/HTML/Using_the_application_cache
It's recommended to use Service workers instead of the cache manifest.