I set up my Nginx config following a DigitalOcean guide, and HTTP/2 is now enabled.
But in Chrome's (Version 54.0.2840.98 (64-bit)) DevTools Network panel, it always shows HTTP/1.1:
NAME METHOD STATUS PROTOCOL
shell.js?v=xx.. GET 200 http/1.1
My server is running Ubuntu 16.04 LTS, which supports both ALPN and NPN, and the OpenSSL version shipped with it is 1.0.2g.
I checked HTTP/2 support with an online checker tool, and the result is:
Yeah! example.com supports HTTP/2.0. ALPN supported...
Also checking with curl is OK:
$ curl -I --http2 https://www.example.com
HTTP/2 200
server: nginx/1.10.0 (Ubuntu)
date: Tue, 13 Dec 2016 15:59:13 GMT
content-type: text/html; charset=utf-8
content-length: 5603
x-powered-by: Express
cache-control: public, max-age=0
etag: W/"15e3-EUyjnNnyevoQO+tRlVVZxg"
vary: Accept-Encoding
strict-transport-security: max-age=63072000; includeSubdomains
x-frame-options: DENY
x-content-type-options: nosniff
I also checked with the is-http2 CLI from my console:
is-http2 www.amazon.com
× HTTP/2 not supported by www.amazon.com
Supported protocols: http/1.1
is-http2 www.example.com
✓ HTTP/2 supported by www.example.com
Supported protocols: h2 http/1.1
Why doesn't Chrome recognise it?
And how can I also check this with Safari (v10.0.1)?
It will likely be one of two reasons:
You are using anti-virus software that is intercepting (MITM-ing) your HTTPS traffic and so downgrading you to HTTP/1.1. Turn off HTTPS traffic monitoring in your AV so you connect directly to the server. You can check whether this is the case by using an online tool to test your site for HTTP/2 support, since the tool connects directly and is unaffected by your local AV.
You are using older TLS ciphers, and specifically one that Chrome disallows for HTTP/2 (https://http2.github.io/http2-spec/#BadCipherSuites), as per Step 5 of the above guide. Scan your site with https://www.ssllabs.com/ssltest/ to check your TLS config and improve it.
The third reason would be lack of ALPN support in your SSL/TLS library (i.e. you are using OpenSSL 1.0.1 and need 1.0.2 or later), but you have already confirmed you have ALPN support, so I'll skip that for this answer. The check below covers both the negotiated cipher and ALPN in one go.
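As a quick sanity check from your own machine, you can also ask OpenSSL what actually gets negotiated; this is only a sketch, assuming an OpenSSL 1.0.2+ client so that -alpn is available (www.example.com stands in for your real host), and it will not reflect anything a local AV proxy does to the browser's traffic:
# Ask the server for h2 via ALPN and print what was negotiated
echo | openssl s_client -connect www.example.com:443 -alpn h2 2>/dev/null | grep -E 'ALPN|Protocol|Cipher'
# Expect "ALPN protocol: h2" plus a cipher that is not on the HTTP/2 blacklist,
# e.g. ECDHE-RSA-AES128-GCM-SHA256.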
I had the same issue. In my case it was because I had enabled TLS 1.3 in NGINX. See Why is my site not using http/2 while it is http/2 enabled
In my case, Chrome logged the following excerpt in the chrome-net-export-log.json file:
HTTP2_SESSION_RECV_INVALID_HEADER
  --> error = "Invalid character in header name."
  --> header_name = "x-xss-protection:"
  --> header_value = "1; mode=block"
After removing the trailing : from the header name, the problem was resolved.
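If you want to spot this kind of problem without digging through a net-export log, you can look at the raw response headers directly; this is just a hypothetical check using the header from above (substitute your own host):
# Force HTTP/1.1 so the malformed name is visible verbatim on the wire
curl -sI --http1.1 https://www.example.com/ | grep -i x-xss-protection
# Broken (name configured with a trailing colon):  x-xss-protection:: 1; mode=block
# Fixed:                                           x-xss-protection: 1; mode=block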
Related
I've been trying to set up a web server on localhost which supports HTTP/3. I've successfully run a Caddy server in Docker which answers GET requests with these headers:
alt-svc: h3-27=":443"; ma=2592000
content-encoding: gzip
content-length: 1521
content-type: text/html; charset=utf-8
date: Thu, 07 May 2020 07:27:44 GMT
server: Caddy
status: 200
vary: Accept-Encoding
X-DNS-Prefetch-Control: off
Even though the alt-svc header was received, I couldn't detect any h3-27 requests in the network logs of the developer tools.
I also created a CA, which I added to Chrome, and used it to sign the server's certificate, which Chrome accepts. I ran Chrome with the flags --enable-quic --quic-version="h3-27", as suggested in this article. I've tried the same with an nginx server based on this image and couldn't make it work either.
What am I missing?
Caddyfile:
{
    experimental_http3
}
localhost {
    root * /usr/share/caddy/
    encode zstd gzip
    templates
    file_server
    tls /etc/caddy/certs/localhost.crt /etc/caddy/certs/localhost.key
}
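(One thing worth double-checking in a Docker setup like this: HTTP/3 runs over QUIC, which is UDP, so UDP port 443 must be published alongside TCP 443. A hypothetical run command, with illustrative volume paths:)
docker run -d \
  -p 80:80 -p 443:443 -p 443:443/udp \
  -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
  -v $PWD/certs:/etc/caddy/certs \
  caddy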
Caddy Output:
2020/05/07 07:23:50.939 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2020/05/07 07:23:51.252 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["127.0.0.1:2019", "localhost:2019", "[::1]:2019"]}
2020/05/07 07:23:51 [INFO][cache:0xc00088da90] Started certificate maintenance routine
2020/05/07 07:23:51 [WARNING] Stapling OCSP: no OCSP stapling for [localhost bar.localhost]: no OCSP server specified in certificate
2020/05/07 07:23:51.254 INFO http skipping automatic certificate management because one or more matching certificates are already loaded {"domain": "localhost", "server_name": "srv0"}
2020/05/07 07:23:51.254 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2020/05/07 07:23:51.255 INFO tls cleaned up storage units
2020/05/07 07:23:51.256 INFO http enabling experimental HTTP/3 listener {"addr": ":443"}
2020/05/07 07:23:51.257 INFO autosaved config {"file": "/config/caddy/autosave.json"}
2020/05/07 07:23:51.257 INFO serving initial configuration
Found the reason myself. The current version of Chrome (Version 81.0.4044.138) does not support this version of QUIC (h3-27). It could be fixed by using chrome-dev (Version 84.0.4136.5).
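Independently of the browser, a curl build with HTTP/3 support can confirm the listener works; this is only a sketch, assuming such a build is installed (the -k skips verification of the locally signed certificate):
curl -kI --http3 https://localhost/
# A working setup answers with a status line beginning "HTTP/3 200".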
I have an S3 bucket fronted by a CloudFront CDN. In that bucket, I have some woff2 fonts that were automatically tagged with the content type octet-stream. When trying to load one of those fonts from a CSS file on a live production website, I get the following error:
Access to Font at 'https://cdn.example.com/fonts/my-font.woff2' from origin
'https://www.live-website.com' has been blocked by CORS policy:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'https://www.live-website.com' is therefore not allowed access.
The thing is that a curl request reveals that the Access-Control-Allow-Origin header is present:
HTTP/1.1 200 OK
Content-Type: binary/octet-stream
Content-Length: 98488
Connection: keep-alive
Date: Wed, 08 Aug 2018 19:43:01 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET
Access-Control-Max-Age: 3000
Last-Modified: Mon, 14 Aug 2017 14:57:06 GMT
ETag: "<redacted>"
Accept-Ranges: bytes
Server: AmazonS3
Age: 84847
X-Cache: Hit from cloudfront
Via: 1.1 <redacted>
X-Amz-Cf-Id: <redacted>
Everything is working fine in Firefox, so I guess that Chrome is doing an extra validation that blocks my font.
It turns out that the problem was actually with the Content-Type. As soon as I changed the content type to application/font-woff2 and invalidated the cache of those woff2 files, everything went through smoothly.
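For anyone doing the same fix from the command line, a minimal sketch with the AWS CLI (bucket name, key and distribution ID are placeholders):
# Rewrite the object's metadata in place with the correct content type
aws s3 cp s3://my-bucket/fonts/my-font.woff2 s3://my-bucket/fonts/my-font.woff2 \
  --content-type "application/font-woff2" --metadata-directive REPLACE
# Then drop the stale copy from the CloudFront edge caches
aws cloudfront create-invalidation --distribution-id EXAMPLEDISTID --paths "/fonts/my-font.woff2"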
My problem with CORS and multiple domains was that CloudFront was caching the first response (without the CORS headers), so I had to select the Origin option under Whitelist Headers in the distribution's cache behavior. And it works.
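To verify that kind of fix, a hedged check is to request the font with an explicit Origin header and confirm the CORS headers are still present on a CloudFront cache hit (hostnames below are the ones from the question):
curl -sI -H "Origin: https://www.live-website.com" https://cdn.example.com/fonts/my-font.woff2 | grep -iE 'access-control|x-cache|vary'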
I am calling WooCommerce's API through a REST client. The API responds with 200 OK, but the body (which should be JSON) only contains "1".
The request I am making is:
Method: GET
URL: https://site/wc-api/v3/products?consumer_key=consumerkey&consumer_secret=consumersecret
Header: Accept: application/json;
The response:
Response headers:
Status Code: 200 OK
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html
Date: Wed, 16 Dec 2015 02:50:14 GMT
Server: nginx/1.6.1
Transfer-Encoding: chunked
X-CF-Powered-By: WP 1.3.14
X-Powered-By: PHP/5.4.32
response body: 1
WC Version: 2.4.12
WP Version: 4.3.1
I am not sure what the issue is; I have tried HTTPS using OAuth and HTTP using Basic Auth.
Thanks.
OK, so I have found a solution...
The dev store is on a sub-domain (dev.site.com). The web server was configured incorrectly and was redirecting to the production site.
The production site was running an earlier version of WooCommerce, one that didn't have version 3 of the API (the dev site had the latest version of WooCommerce).
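A quick way to spot this kind of misconfiguration is to follow redirects from the command line and watch where the request actually lands; a hypothetical check against the hosts above:
curl -sIL https://dev.site.com/wc-api/v3/products | grep -iE '^(HTTP|location|server)'
# A 301/302 with a Location header pointing at the production host means the
# sub-domain's vhost is not the one answering.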
I have a caching issue. Chrome does not always load newer versions of site assets, most often Javascript files loaded by Require.js. Right now I've been having this problem for over 24 hours with a particular file.
If I load the page with devtools (network tab) open, the offending files typically show an HTTP 200 response, but in the "Size" column it shows "(from cache)". In the Headers details it shows "Provisional headers are shown". Wireshark shows that the file is indeed not requested from the server.
Chrome shows the Last-Modified date of the file as Sat, 06 Dec 2014 01:27:55 GMT, but my below raw request to the server clearly indicates the file has changed much more recently.
If I do a raw request myself I don't see anything in the headers returned by the server that should cause this problem:
GET /js/path/to/file.js HTTP/1.1
Host: static.mydomain.com
User-Agent: Matt
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Vary: Accept-Encoding
Content-Type: application/javascript
Accept-Ranges: bytes
ETag: "4203477418"
Last-Modified: Fri, 16 Jan 2015 18:28:30 GMT
Content-Length: 5704
Date: Fri, 16 Jan 2015 21:05:06 GMT
Server: lighttpd/1.4.33
.... data here ...
The issue has been reported by chrome users on multiple OSes with different versions of chrome, but I do not typically receive reports of caching issues on other browsers. (Right now I'm on "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36")
Edit:
The problem seems to be most offensive with files loaded by Require.js, though I have encountered it with javascript directly referenced in the page as well.
What am I missing here? Why won't chrome check for a new version of the file?
As it turns out, browsers have poorly documented behavior regarding caching when no Cache-Control header is specified in the server response (or, more precisely, when no explicit caching behavior is specified at all). In general, it seems that in this case a browser heuristically decides how long to cache the item based on the file's Last-Modified date (if declared in the response), the current date, and some heuristic factor that is not obvious from the spec.
See: https://webmasters.stackexchange.com/questions/53942/why-is-this-response-being-cached
Sadly, Google's official page on HTTP caching does not mention what happens if the header is not set: https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching
Edit:
I ran across some more specific information about the heuristics used, here: What heuristics do browsers use to cache resources not explicitly set to be cachable?
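For a rough sense of the numbers: the heuristic most browsers implement is to treat such a response as fresh for roughly 10% of the time elapsed since Last-Modified. Applying that to this case (the fetch date below is an assumption, since the original request time isn't known):
# Cached copy:  Last-Modified: Sat, 06 Dec 2014 01:27:55 GMT
# If Chrome fetched it around 15 Jan 2015, it was ~40 days old at fetch time,
# so the heuristic freshness lifetime is roughly 10% of 40 days = ~4 days,
# which is more than enough to explain "(from cache)" for over 24 hours.
curl -sI http://static.mydomain.com/js/path/to/file.js | grep -iE 'last-modified|date|cache-control|expires'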
I'm using nginx and Dojo to build an embedded UI driven by a set of JSON files. Our primary target browser is Chrome, but it should work with all modern browsers.
Changing the JSON files can change the UI drastically, and I use this to give different presentations to different users. See my previous question for the details (Configure nginx to return different files to different authenticated users with the same URI), but basically my nginx configuration is such that the same URI with different users can yield different content.
This all works very well, except when someone switches to a different user. Some browsers will grab those JSON files from their own internal cache without even checking with the server, which leaves the UI displaying the previous user's presentation. Reloading the page fixes it, but boy! would I rather the right thing happened automatically.
The obvious solution is to use the various cache headers, but they don't appear to help. I'm using the following nginx directives:
expires epoch;
etag off;
if_modified_since off;
add_header Last-Modified "";
... which yields the following response headers:
HTTP/1.1 200 OK
Server: nginx/1.4.1
Date: Wed, 24 Sep 2014 16:58:32 GMT
Content-Type: application/octet-stream
Content-Length: 1116
Connection: keep-alive
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
Accept-Ranges: bytes
This looks pretty conclusive to me, but the problem still occurs with Chrome 36 for OS X and Opera 24 for OS X (although Firefox 29 and 32 do the right thing). Chrome is content to grab files from its cache without even referring to the server.
Here's a detailed example, with headers pulled from Chrome's Network debug panel. The first time Chrome fetches /app/resources/states.json, Chrome reports
Remote Address:75.144.159.89:8765
Request URL:http://XXXXXXXXXXXXXXX/app/resources/screens.json
Request Method:GET
Status Code:200 OK
with request headers:
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Authorization:Basic dm9sdGFpcndlYjp2b2x0YWly
Cache-Control:max-age=0
Connection:keep-alive
Content-Type:application/x-www-form-urlencoded
DNT:1
Host:suitable.dyndns.org:8765
Referer:http://XXXXXXXXXXXXXXXXXXXXXX/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
X-Requested-With:XMLHttpRequest
and response headers:
Accept-Ranges:bytes
Cache-Control:no-cache
Connection:keep-alive
Content-Length:2369
Content-Type:application/octet-stream
Date:Wed, 24 Sep 2014 17:19:46 GMT
Expires:Thu, 01 Jan 1970 00:00:01 GMT
Server:nginx/1.4.1
Again, all fine and good. But, when I change the user (by restarting Chrome and then reloading the parent page), I get the following Chrome report:
Remote Address:75.144.159.89:8765
Request URL:http://suitable.dyndns.org:8765/app/resources/states.json
Request Method:GET
Status Code:200 OK (from cache)
with no apparent contact to the server.
This doesn't seem to happen with all files. A few .js files are cached, most are not; none of the .css files seem to be cached; all the .html files are cached, and all of the .json files are cached.
How can I tell the browser (I'm looking at you, Chrome!) that these files are good at the moment it requests them, but will never again be good? Is this a Chrome bug? (If so, it's strange that Opera also shows the problem.)
I believe I've found the problem. Apparently "Cache-Control: no-cache" (which only asks the browser to revalidate a stored copy before reusing it) is insufficient to tell the browser to, um, not cache the data. I added "no-store", which forbids storing the response at all:
Cache-Control:no-store, no-cache
and that did the trick. No more caching by Chrome or Opera.
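If you want to confirm the directive actually reaches clients after changing the nginx config, a quick hedged check (host and path are the ones from the question; substitute your own Basic Auth credentials):
curl -sI -u user:pass http://suitable.dyndns.org:8765/app/resources/states.json | grep -i cache-control
# Expected: Cache-Control: no-store, no-cache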
I had the same problem, with JSON being cached...
If you control the client application code, a possible workaround is to add a random-value query parameter at the end of the URL.
So instead of calling:
http://XXXXXXXXXXXXXXX/app/resources/screens.json
you call, for example:
http://XXXXXXXXXXXXXXX/app/resources/screens.json?rand=rrrrrrrrrr
where rrrrrrrrrr is some random value that is different in each call.
Then, the browser will not be able to reuse any cached values.
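The browser keys its cache on the full URL, query string included, so each distinct random value forces a fresh request. A quick way to confirm the server simply ignores the extra parameter (hypothetical value, host as in the question):
curl -s "http://XXXXXXXXXXXXXXX/app/resources/screens.json?rand=$RANDOM" | head
# Should return the same JSON as the bare URL; only the cache key changes.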