Google Cloud Identity-Aware Proxy (App Engine) - Strange web browser behavior?

I am seeing some strange behavior using App Engine with Identity-Aware Proxy in Chrome (Desktop & Mobile) / Firefox (Desktop & Mobile) / Safari (Desktop) / curl (Desktop).
I launched a static-file site on App Engine using these settings:
app.yaml:
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /(.*)
  static_files: index.html
  upload: index.html
  secure: always
index.html:
<html>
<body>
Hello World!
</body>
</html>
I then used the cloud console to enable the Identity Aware Proxy.
As expected, I was asked to sign in using the google account needed to access the page. All good.
However, sometimes I can access the site from a browser without credentials, or even from curl, which I feel should definitely not be possible.
It takes a bunch of refreshes / retries, but once it is reproduced I can reliably get the index page without authentication using Chrome, Firefox, Opera, and curl.
Questions:
Am I doing something completely stupid? Is it expected behavior to sometimes be able to access the page even in incognito/private mode, or using curl?
I know there is a default 10-minute caching header on static files served by App Engine; how does that factor in?
How does curl get mixed up in all of this? AFAIK HTTPS responses cannot be cached by anyone except the UA making the request (and internally on Google's end). Is there a cache on my computer that all of these clients share that I am not aware of?
Is this a problem on my computer/phone (i.e. once the page is cached somehow all UAs on that device can see the page without authenticating)?
Is this a problem on Google's end?
For completeness, here's the output from curl -v
curl -v https://xxxxxxxxxxxx.appspot.com
* Rebuilt URL to: https://xxxxxxxxxxxx.appspot.com/
* Trying 172.217.22.180...
* TCP_NODELAY set
* Connected to xxxxxxxxxxxx.appspot.com (172.217.22.180) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.appspot.com
* start date: Mar 28 14:17:04 2018 GMT
* expire date: Jun 20 13:24:00 2018 GMT
* subjectAltName: host "xxxxxxxxxxxx.appspot.com" matched cert's "*.appspot.com"
* issuer: C=US; O=Google Trust Services; CN=Google Internet Authority G3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7ff81780a400)
> GET / HTTP/2
> Host: xxxxxxxxxxxx.appspot.com
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< date: Fri, 20 Apr 2018 17:43:10 GMT
< expires: Fri, 20 Apr 2018 17:53:10 GMT
< etag: "8wDEQg"
< x-cloud-trace-context: 8e9c1b6803383aac532d48d9f0ac5fc2
< content-type: text/html
< content-encoding: gzip
< server: Google Frontend
< cache-control: public, max-age=600
< content-length: 54
< age: 371
< alt-svc: hq=":443"; ma=2592000; quic=51303433; quic=51303432; quic=51303431; quic=51303339; quic=51303335,quic=":443"; ma=2592000; v="43,42,41,39,35"
<
[binary gzip-compressed response body omitted]
* Connection #0 to host xxxxxxxxxxxx.appspot.com left intact
The output above SHOULD show a 302 redirect to IAP's login page, but as previously stated, it does not always do that! (Note the cache-control: public, max-age=600 and age: 371 headers above, which suggest the response was served from a shared cache rather than by the app itself.)
TL;DR Why can I access App Engine static pages protected by IAP on my computer from contexts that should not be allowed access?
Thanks!

Ah, you've run into an interesting corner case! There's some documentation of this at https://cloud.google.com/iap/docs/concepts-best-practices -- TL;DR, App Engine does some caching for static_files that interacts poorly with IAP. That page has some instructions you can apply if you want to protect your static_files. --Matthew, Google IAP Engineering
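For anyone who wants a concrete starting point: one mitigation along the lines of that page is to keep the static response out of caches by overriding the default 10-minute expiration. A minimal sketch, assuming the standard app.yaml expiration element (the "0s" value is my assumption of the strictest setting; follow the linked docs for the authoritative fix):
handlers:
- url: /(.*)
  static_files: index.html
  upload: index.html
  secure: always
  expiration: "0s"  # assumption: zero expiration replaces the default public, max-age=600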

Related

AWS S3 CORS error only in Chrome, when using MapBox loadImage function

I'm using MapBox and I'm looking to add some images to my map that are in an AWS S3 bucket.
The MapBox function that I'm using is loadImage. The loadImage docs state that "External domains must support CORS."
My JS code is similar to:
this.map.on('load', () => {
  ...
  this.map.loadImage("https://my-test-bucket.s3-us-west-1.amazonaws.com/long-uuid-here.png", (error, image) => {
    if (error) {
      console.log(error);
      throw error;
    }
    // The rest doesn't matter
    ...
  });
});
When my map loads in chrome I get the following error:
Access to fetch at 'https://my-test-bucket.s3-us-west-1.amazonaws.com/long-uuid-here.png' from origin 'https://localhost:7000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
My AWS S3 CORS config is the following:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT",
      "POST",
      "DELETE"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
Using curl -H "origin: localhost" -v "https://my-test-bucket.s3-us-west-1.amazonaws.com/long-uuid-here.png", I get the following output:
* Connected to my-test-bucket.s3-us-west-1.amazonaws.com (<IP HERE>) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=US; ST=Washington; L=Seattle; O=Amazon.com, Inc.; CN=*.s3-us-west-1.amazonaws.com
* start date: Jul 30 00:00:00 2020 GMT
* expire date: Aug 4 12:00:00 2021 GMT
* subjectAltName: host "my-test-bucket.s3-us-west-1.amazonaws.com" matched cert's "*.s3-us-west-1.amazonaws.com"
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert Baltimore CA-2 G2
* SSL certificate verify ok.
> GET /default.jpg HTTP/1.1
> Host: my-test-bucket.s3-us-west-1.amazonaws.com
> User-Agent: curl/7.64.1
> Accept: */*
> origin: localhost
>
< HTTP/1.1 200 OK
< x-amz-id-2: bLicG+33kfSamR29vMA3BnhmSV27Afooba6yU6hVOPt0mbckO5gefhXN8Ho7hgAEP58s4hKjCf0=
< x-amz-request-id: E760D53EDC5A9804
< Date: Wed, 04 Nov 2020 22:31:38 GMT
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, PUT, POST, DELETE
< Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
< Last-Modified: Tue, 11 Aug 2020 22:37:31 GMT
< ETag: "39eb0bbf2cc33ba02f53f8585004f820"
< Accept-Ranges: bytes
< Content-Type: image/jpeg
< Content-Length: 16579
< Server: AmazonS3
So, it looks like I've got the Access-Control-Allow-Origin: * header coming back from the AWS S3 server.
I don't receive any CORS error in Firefox.
Is there a problem with my AWS S3 CORS config? Why am I getting these CORS errors in Chrome v86.0.4240.80 (Official Build) (x86_64)?
Note: my bucket isn't actually named "my-test-bucket". I've changed the URL/bucket name for this question. Also, locally I am using https://localhost (set it up with a certificate since I need to use the W3C geolocation API which only works over HTTPS), if it matters
It looks like this is a caching issue with Chrome. I found an answer on this question: CORS problems with Amazon S3 on the latest Chromium and Google Canary, from @nassan, that suggests adding ?cacheblock=true as a query parameter to the GET request.
So, changing my code to:
this.map.loadImage(`${dealInfo.properties.logo}?cacheblock=true`, (error, image) => {
  ...
})
Resolves the CORS errors that occur in Chrome.
Looks like this is the issue: https://bugs.chromium.org/p/chromium/issues/detail?id=409090.
I also added crossorigin="anonymous" to my script tag.
Seems like this being a Chrome caching issue (header caching?) also explains why I could see the expected headers using curl, but Chrome complained that they weren't there.
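For reference, the crossorigin attribute mentioned above looks like this; the src here is a placeholder, not the actual MapBox URL:
<script src="https://example.com/mapbox-gl.js" crossorigin="anonymous"></script>
The idea is that the browser then fetches in CORS mode, so the cached response should carry the CORS headers Chrome later checks for.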

Chrome ignores alt-svc header and doesn't send HTTP/3 requests

I've been trying to set up a webserver on localhost which supports HTTP/3. I've successfully got a Caddy server running in Docker which answers GET requests with these headers:
alt-svc: h3-27=":443"; ma=2592000
content-encoding: gzip
content-length: 1521
content-type: text/html; charset=utf-8
date: Thu, 07 May 2020 07:27:44 GMT
server: Caddy
status: 200
vary: Accept-Encoding
X-DNS-Prefetch-Control: off
Even though the alt-svc header was received, I couldn't detect any h3-27 requests in the Network log of the developer tools.
I also created a CA, which I added to Chrome, and signed the server's certificate, which Chrome accepts. I ran Chrome with the flags --enable-quic --quic-version="h3-27", as suggested in this article. I've tried the same with an nginx server based on this image and couldn't make it work either.
What am I missing?
Caddyfile:
{
    experimental_http3
}

localhost {
    root * /usr/share/caddy/
    encode zstd gzip
    templates
    file_server
    tls /etc/caddy/certs/localhost.crt /etc/caddy/certs/localhost.key
}
Caddy Output:
2020/05/07 07:23:50.939 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2020/05/07 07:23:51.252 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["127.0.0.1:2019", "localhost:2019", "[::1]:2019"]}
2020/05/07 07:23:51 [INFO][cache:0xc00088da90] Started certificate maintenance routine
2020/05/07 07:23:51 [WARNING] Stapling OCSP: no OCSP stapling for [localhost bar.localhost]: no OCSP server specified in certificate
2020/05/07 07:23:51.254 INFO http skipping automatic certificate management because one or more matching certificates are already loaded {"domain": "localhost", "server_name": "srv0"}
2020/05/07 07:23:51.254 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2020/05/07 07:23:51.255 INFO tls cleaned up storage units
2020/05/07 07:23:51.256 INFO http enabling experimental HTTP/3 listener {"addr": ":443"}
2020/05/07 07:23:51.257 INFO autosaved config {"file": "/config/caddy/autosave.json"}
2020/05/07 07:23:51.257 INFO serving initial configuration
Found the reason myself. The current version of Chrome (Version 81.0.4044.138) does not support this version of QUIC (h3-27). It could be fixed by using chrome-dev (Version 84.0.4136.5).
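For reference, a sketch of the launch invocation once you have a build that supports the draft version; the --origin-to-force-quic-on flag is my addition (commonly suggested for localhost testing), and flag names have changed across Chrome versions:
chrome --enable-quic --quic-version="h3-27" --origin-to-force-quic-on=localhost:443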

Chrome devtools show h2 instead of quic/h3 (even if the page is using HTTP/3.0)

As a developer I am very excited about the next version of HTTP, HTTP/3. I noticed some time ago that when I opened Google.com, I could see in DevTools > Network that the protocol appeared as quic. But now when I open it I only see h2 instead. Why is that?
I also noticed that the alt-svc header indicates that the resource is also available over QUIC. But it's still loading over h2.
alt-svc: quic=":443";
I noticed the same thing using curl: the protocol used is h2, not h3. But Google has been using QUIC for years now. Why this change?
curl -v https://www.google.com/
* ALPN, offering h2
* ALPN, offering http/1.1
* ALPN, server accepted to use h2
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
> GET / HTTP/2
> Host: www.google.com
> User-Agent: curl/7.61.1
> Accept: */*
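For what it's worth, a stock curl build does not speak HTTP/3 at all, so h2 is the expected outcome from curl above. Only a curl built with HTTP/3 support (7.66+, experimental) can even attempt it, along these lines:
curl --http3 -v https://www.google.com/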

How to know from which node the file/block was retrieved when querying https://ipfs.io/ipfs/

When a file is queried from a public gateway using its hash, the gateway queries the peer nodes and gives back the content as an HTTP response.
For example,
a file added locally generated the hash QmXjFR1MiAMYprPjwLQwXXonYK52FihQVEL6a2dh3uhUey, which,
when requested as https://ipfs.io/ipfs/QmXjFR1MiAMYprPjwLQwXXonYK52FihQVEL6a2dh3uhUey, makes the gateway query my local node (which has the file) and return the retrieved content to the browser.
The question is:
is there any way (say, a response header or some other means) of knowing from which peer / remote machine the gateway retrieved the content? (In the above example, my machine's peer ID.)
I think the answer is no (as of v0.4.20). The process IPFS goes through when retrieving a block is:
Do the following at the same time:
Notify all current peers that you want the block (e.g. QmXjFR1...)
Ask the network to find providers of the block you want (e.g. QmXjFR1...)
If a peer has the block, it will send it over and you will disregard any providers that are found.
If no peer has the block and a provider is found, then the provider is added as a peer and is notified that you want the block (e.g. QmXjFR1...), at which point the peer starts sending the block over.
In theory, if you run the IPFS node you could maybe do things to determine what peer a block is coming from. But the gateway doesn't offer that information through its interface AFAIK.
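On a node you control, something like the following lists which peers advertise the block (a sketch assuming the go-ipfs CLI); note that it still doesn't tell you which peer actually delivered the data:
ipfs dht findprovs QmXjFR1MiAMYprPjwLQwXXonYK52FihQVEL6a2dh3uhUey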
Anyway, I don't see anything in the response headers. Here's an example from hitting the gateway API:
λ curl -v https://gateway.ipfs.io/api/v0/cat/QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o
* Trying 2602:fea2:2::1...
* TCP_NODELAY set
* Connected to gateway.ipfs.io (2602:fea2:2::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=ipfs.io
* start date: May 7 21:37:01 2019 GMT
* expire date: Aug 5 21:37:01 2019 GMT
* subjectAltName: host "gateway.ipfs.io" matched cert's "*.ipfs.io"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7ff002806600)
> GET /api/v0/cat/QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o HTTP/2
> Host: gateway.ipfs.io
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< server: nginx
< date: Wed, 22 May 2019 04:32:43 GMT
< content-type: text/plain
< vary: Accept-Encoding
< trailer: X-Stream-Error
< vary: Origin
< x-content-length: 12
< x-stream-output: 1
< access-control-allow-origin: *
< access-control-allow-methods: GET, POST, OPTIONS
< access-control-allow-headers: X-Requested-With, Range, Content-Range, X-Chunked-Output, X-Stream-Output
< access-control-expose-headers: Content-Range, X-Chunked-Output, X-Stream-Output
< x-ipfs-pop: gateway-bank2-sjc1
< strict-transport-security: max-age=31536000; includeSubDomains; preload
<
hello world
* Connection #0 to host gateway.ipfs.io left intact

Why doesn't Chrome browser recognize my http2 server?

I set up my nginx conf as per a DigitalOcean tutorial, and now HTTP/2 is available.
But in Chrome (Version 54.0.2840.98 (64-bit)) DevTools, it's always HTTP/1.1:
NAME             METHOD  STATUS  PROTOCOL
shell.js?v=xx..  GET     200     http/1.1
My server is running Ubuntu 16.04 LTS, which supports both ALPN & NPN, and the OpenSSL version shipped with it is 1.0.2g.
I checked http2 support with this tool site and the result is:
Yeah! example.com supports HTTP/2.0. ALPN supported...
Also checking with curl is OK:
$ curl -I --http2 https://www.example.com
HTTP/2 200
server: nginx/1.10.0 (Ubuntu)
date: Tue, 13 Dec 2016 15:59:13 GMT
content-type: text/html; charset=utf-8
content-length: 5603
x-powered-by: Express
cache-control: public, max-age=0
etag: W/"15e3-EUyjnNnyevoQO+tRlVVZxg"
vary: Accept-Encoding
strict-transport-security: max-age=63072000; includeSubdomains
x-frame-options: DENY
x-content-type-options: nosniff
I also checked with the is-http2 CLI from my console:
is-http2 www.amazon.com
× HTTP/2 not supported by www.amazon.com
Supported protocols: http/1.1
is-http2 www.example.com
✓ HTTP/2 supported by www.example.com
Supported protocols: h2 http/1.1
Why doesn't Chrome recognise it?
How can I check it also with Safari (v 10.0.1)?
It will likely be one of two reasons:
You are using anti-virus software and it is MITMing your traffic, downgrading you to HTTP/1.1. Turn off HTTPS traffic monitoring in your AV to connect directly to the server. You can check whether this is the case by using an online tool to test your site for HTTP/2 support.
You are using older TLS ciphers, specifically one that Chrome disallows for HTTP/2 (https://http2.github.io/http2-spec/#BadCipherSuites), as per step 5 of the above guide. Scan your site using https://www.ssllabs.com/ssltest/ to check your TLS config and improve it.
A third reason would be lack of ALPN support in your SSL/TLS library (e.g. you are using OpenSSL 1.0.1 and need 1.0.2 or later), but you have already confirmed you have ALPN support, so I'll skip that for this answer. A config sketch follows below.
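For reference, a minimal nginx sketch that lines up with both points above; the paths and cipher list are assumptions to adapt to your own setup:
server {
    listen 443 ssl http2;
    server_name www.example.com;

    # Assumed paths; substitute your own certificate files
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    # TLS 1.2 with AEAD (GCM) suites avoids the ciphers Chrome rejects for HTTP/2
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
}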
I had the same issue. In my case it was because I enabled TLS 1.3 in nginx. See: Why is my site not using http/2 while it is http/2 enabled
In my case, Chrome generated the following excerpt in the chrome-net-export-log.json file:
HTTP2_SESSION_RECV_INVALID_HEADER
  --> error = "Invalid character in header name."
  --> header_name = "x-xss-protection:"
  --> header_value = "1; mode=block"
After removing the trailing : from the header name, the problem was resolved.
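A sketch of the before/after, assuming the header was being set with nginx's add_header directive (your server may differ):
# Wrong: the trailing colon becomes part of the header name,
# and HTTP/2 header validation rejects ":" as an invalid character
add_header "x-xss-protection:" "1; mode=block";

# Right:
add_header X-XSS-Protection "1; mode=block";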