Google Cloud CDN serves stale content AFTER revalidation - google-cloud-cdn

I am attempting to use Google Cloud CDN with the stale-while-revalidate feature (see the Google Cloud CDN documentation). However, even though I can see asynchronous revalidation requests being made to my backend server, the CDN never serves the new content.
Here is an example of the response headers from the CDN:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Age: 5202
Cache-Control: max-age=60, public, stale-while-revalidate=86400
Content-Length: 90
Content-Type: application/json
Date: Tue, 08 Feb 2022 16:39:09 GMT
Server: Development/1.0
Via: 1.1 google
You can see that the Age is 5202 seconds even though the max-age is 60 seconds.
If more than 60 seconds have passed since the last revalidation, I can see a new revalidation request in my server logs, but the CDN continues to serve the outdated content on subsequent requests.
Here is an example of the response headers directly from my backend server:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Cache-Control: max-age=60, public, stale-while-revalidate=86400
Content-Length: 645
Content-Type: application/json
Date: Tue, 08 Feb 2022 18:09:39 GMT
Server: Development/1.0
And here is the Google Cloud Logging entry for the revalidation request. I believe it is showing that it received a 200 response as well:
{
  "insertId": "1sxvbdng2vrqnhl",
  "jsonPayload": {
    "parentInsertId": "1sxvbdng2vrqng7",
    "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry",
    "statusDetails": "response_sent_by_backend",
    "cacheId": "LAX-ba56a406"
  },
  "httpRequest": {
    "requestMethod": "GET",
    "requestUrl": "***masked request url***",
    "requestSize": "1888",
    "status": 200,
    "responseSize": "1259",
    "userAgent": "Cloud-CDN-Google (GFE/2.0)",
    "remoteIp": "***masked request ip***",
    "cacheLookup": true,
    "cacheFillBytes": "1259",
    "serverIp": "***masked request ip***",
    "latency": "0.082249s"
  },
  "resource": {
    "type": "http_load_balancer",
    "labels": {
      "backend_service_name": "test-be-service",
      "target_proxy_name": "test-lb-target-proxy",
      "project_id": "cdn-test-2-340220",
      "zone": "global",
      "url_map_name": "test-lb",
      "forwarding_rule_name": "test-lb-forwarding-rule"
    }
  },
  "timestamp": "2022-02-08T19:36:48.094836Z",
  "severity": "INFO",
  "logName": "projects/cdn-test-2-340220/logs/requests",
  "trace": "projects/cdn-test-2-340220/traces/f6a372e62f789d61c3654e50111ffcb9",
  "receiveTimestamp": "2022-02-08T19:36:48.360094896Z",
  "spanId": "d20eb70cb8d7dff4"
}
This behavior is the same whether I activate stale-while-revalidate via the Cache-Control response header or the Cloud CDN configuration interface.
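For illustration, here is a minimal sketch of an origin that opts in via the Cache-Control response header. Flask, the /data route, and the payload are assumptions for this sketch, not the actual backend described above:
import requests  # only used in other sketches; this one needs Flask
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/data")
def data():
    resp = jsonify({"message": "fresh content"})
    # Cloud CDN may cache this for 60 s, and may keep serving the stale copy
    # for up to 86400 s while it revalidates against the origin asynchronously.
    resp.headers["Cache-Control"] = "public, max-age=60, stale-while-revalidate=86400"
    return resp

if __name__ == "__main__":
    app.run(port=8080)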
Either I am not understanding how this is supposed to work, or this is a bug in Google Cloud CDN. Any help resolving this would be greatly appreciated.

Related

Lambda Edge 502 with custom response from viewer response

I'm using a URL query string to debug my viewer-request and viewer-response Lambda@Edge functions by returning the event as JSON to the frontend (so I can check for the presence/absence of certain things via an external monitoring tool).
This works fine with the viewer-request: if I go to https://example.org/?debug_viewer_request_event I get a JSON of the viewer-request event:
import json
def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    if "debug_viewer_request_event" in request["querystring"]:
        response = {
            "status": "200",
            "statusDescription": "OK",
            "headers": {
                "cache-control": [
                    {
                        "key": "Cache-Control",
                        "value": "no-cache"
                    }
                ],
                "content-type": [
                    {
                        "key": "Content-Type",
                        "value": "application/json"
                    }
                ]
            },
            "body": json.dumps(event)
        }
        return response
    # rest of viewer-request logic...
Testing with cURL:
curl -i https://example.org/?debug_viewer_request_event
HTTP/2 200
content-type: application/json
content-length: 854
server: CloudFront
date: Mon, 26 Apr 2021 06:05:28 GMT
cache-control: no-cache
x-cache: LambdaGeneratedResponse from cloudfront
via: 1.1 xxxxxxxxxxx.cloudfront.net (CloudFront)
x-amz-cf-pop: AMS50-C1
x-amz-cf-id: pU0ItvQA1-r5v3yR1Dl6Z3VpPW_EuuUCHhnOD60uLhng...
{"Records": [{"cf": {"config": {"distributionDomainName": "xxxxxxx.cloudfront.net", "distributionId": "xxxxxxx", "eventType": "viewer-request", "requestId": "pU0ItvQA1-r5v3yR1Dl6Z3VpPW_EuuUCHhnOD60uLhng...
However, when I do the same with the viewer-response I get a 502 error:
the code is the same, except that debug_viewer_request_event is debug_viewer_response_event
if I don't include the debug query string, the response is 200 OK, so I know both Lambdas are otherwise working properly (with the exception of the debug path for the viewer-response)
Here is the cURL output:
curl -i https://example.org/?debug_viewer_response_event
HTTP/2 502
content-type: text/html
content-length: 1013
server: CloudFront
date: Mon, 26 Apr 2021 06:07:39 GMT
x-cache: LambdaValidationError from cloudfront
via: 1.1 xxxxxxxxx.cloudfront.net (CloudFront)
x-amz-cf-pop: AMS50-C1
x-amz-cf-id: NqXQ-FFEsIX-fEt8IvlHFTYoQdrZSGPScq1H-KNwVWR0-xxxxxx
The Lambda function result failed validation: The function tried to add, delete, or change a read-only header
If I look at the docs, the list of "Read-only Headers for CloudFront Viewer Response Events" is:
Content-Encoding
Content-Length
Transfer-Encoding
Warning
Via
As far as I can see I'm not directly changing any of these headers, but I'm guessing that because I'm modifying the response, headers such as Content-Length are modified.
Q: Is there a way to return the viewer-response event as JSON to the frontend for debugging, or is it simply not possible because Content-Length cannot be changed?
As far as I can see I'm not directly changing any of these headers,
but I'm guessing because I'm modifying the response, headers such as
Content-Length are modified
I agree, I think your issue is that you are returning the response instead of calling
callback(null, response);
where callback should be the third argument to your Lambda handler function:
def lambda_handler(event, context, callback):
Since Content-Length is not mutable, we should assume (and I checked this is true in practice, at least for viewer-request functions) that CloudFront will generate it for you when you generate a response in the edge function.
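If the goal is only to inspect the viewer-response event without generating a whole new response, another option (untested, sketched here as an assumption) is to copy a serialized, truncated form of the event into a custom response header, which is mutable, instead of replacing the body:
import json

def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    request = event["Records"][0]["cf"]["request"]
    if "debug_viewer_response_event" in request["querystring"]:
        # Custom headers are mutable in viewer-response triggers, unlike
        # Content-Length. The header name and the 1000-byte cap are arbitrary;
        # very large events would still need truncation or CloudWatch logging.
        response["headers"]["x-debug-event"] = [
            {"key": "X-Debug-Event", "value": json.dumps(event)[:1000]}
        ]
    return response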

OAuth 2.0 Playground - Timeout Limit?

I'm using the OAuth 2.0 Playground to execute a Google Apps Script function. Using:
https://script.googleapis.com/v1/scripts/MSnRgD0GQVGwCP-h1YWmtpwV62A3zXXXX:run
The function is very simple, it just pauses for a set amount of time and then returns true.
function funSleep(intSeconds) {
  Utilities.sleep(intSeconds * 1000);
  return true;
}
If I run the function for < 55 seconds it returns:
HTTP/1.1 200 OK
Content-length: 133
X-xss-protection: 1; mode=block
X-content-type-options: nosniff
Transfer-encoding: chunked
Vary: Origin, X-Origin, Referer
Server: ESF
-content-encoding: gzip
Cache-control: private
Date: Tue, 20 Mar 2018 15:11:29 GMT
X-frame-options: SAMEORIGIN
Alt-svc: hq=":443"; ma=2592000; quic=51303431; quic=51303339; quic=51303335,quic=":443"; ma=2592000; v="41,39,35"
Content-type: application/json; charset=UTF-8
{
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.apps.script.v1.ExecutionResponse",
    "result": true
  }
}
If I run the function for > 60 seconds it returns:
Something bad happened: 500 HTTP error. Message:
500 Server Error Error: Server Error The server encountered an error and could not complete your request. Please try again in 30 seconds.
Questions:
Does the OAuth 2.0 Playground have a 1-minute timeout on REST requests? I can't find this documented anywhere, and I can't see whether it's possible to increase it.
I've also experimented with Postman and successfully called this function with a wait of up to 4 minutes. The Google documentation states the limit is 6 minutes, but I've been unable to achieve this result. Has anyone managed a REST call of this length through the Google Apps Script API?
Any help would be appreciated.
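For comparison, here is a minimal sketch of the same call made outside the Playground with Python's requests library, using a client-side timeout longer than the wait; the script ID is the one from the question and the access token is a placeholder:
import requests

SCRIPT_ID = "MSnRgD0GQVGwCP-h1YWmtpwV62A3zXXXX"  # from the question
ACCESS_TOKEN = "<oauth2_access_token>"           # placeholder

resp = requests.post(
    f"https://script.googleapis.com/v1/scripts/{SCRIPT_ID}:run",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"function": "funSleep", "parameters": [240]},
    timeout=400,  # client-side read timeout, separate from any server-side quota
)
print(resp.status_code, resp.json())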

SoapUI vs Postman posting json

I'm quite new to testing, but currently I'm testing REST services...
I really like SoapUI (I did some SOAP testing in the past), but I have a problem testing one service - a service for creating an event. I have tried it in Postman (working) and in SoapUI (not working).
When I try to run it in SoapUI I'm getting the error:
Object reference not set to an instance of an object.
Here is the code from the Postman request (getting status: 200 OK):
POST /Event.Api/v1/events HTTP/1.1
Host: 172.26.2.66
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJCT0hVU0xBViIsImZhbWlseV9uYW1lIjoiRE9LT1VQSUwiLCJQT0lEIjoiOGYyMTdiZWItZmQ3My00OThjLWFhZjktOWY0ZTk0YmRhMjIzIiwiZXhwIjoxNTE1MTUzNTg0fQ.2uB_pPXyl3wSAqonaDb5pLAwDb-BujMIU6Rdeg_73Jw
Cache-Control: no-cache
Postman-Token: 1d92a58e-c4de-f790-caca-fd983392a17e
{
  "name": "Michalova událost",
  "place": "AC",
  "start": "2018-01-05T15:00:39.164Z",
  "end": "2018-01-05T16:00:39.164Z",
  "wholeDay": true,
  "repeating": "NEOPAKUJE_SE",
  "description": "Popis michalovy události"
}
Edit: screenshot of the SoapUI request added.
And here is the request code from SoapUI. I don't understand why, but the JSON content I added in the POST window is not there:
POST http://172.26.2.66/Event.Api/v1/events HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: application/json
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJCT0hVU0xBViIsImZhbWlseV9uYW1lIjoiRE9LT1VQSUwiLCJQT0lEIjoiZWJhMWVmNjMtNWUzNS00YTU3LTljNWMtNDY3ZmJhZmI5YTIyIiwiZXhwIjoxNTE1MTU4MTI3fQ.s5dNZxV6NME1G8rbYrsuv8sb1nMiB8z1GSVGHe3auVA
Content-Length: 220
Host: 172.26.2.66
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)
and here is raw response from soapUI:
HTTP/1.1 500 Internal Server Error
Transfer-Encoding: chunked
Server: Kestrel
X-Powered-By: ASP.NET
Date: Fri, 05 Jan 2018 11:17:25 GMT
{
  "errors": [
    {
      "code": "COMMON_ERROR",
      "description": "Object reference not set to an instance of an object."
    }
  ]
}
Can anyone help? Thanks for your replies.
P.S.: I think it is something with that JSON content, because when I'm testing GET methods everything is fine and I'm getting the expected results even in SoapUI.
MJ
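One way to narrow this down would be to reproduce the Postman request from a third client and compare; here is a minimal sketch with Python's requests (the endpoint and payload are copied from the question, and the bearer token shown there has long since expired, so a fresh one is needed):
import requests

token = "<bearer_token_from_postman>"  # placeholder; the JWT in the question has expired

payload = {
    "name": "Michalova událost",
    "place": "AC",
    "start": "2018-01-05T15:00:39.164Z",
    "end": "2018-01-05T16:00:39.164Z",
    "wholeDay": True,
    "repeating": "NEOPAKUJE_SE",
    "description": "Popis michalovy události",
}

resp = requests.post(
    "http://172.26.2.66/Event.Api/v1/events",
    json=payload,  # requests serializes the body and sets Content-Type/Content-Length
    headers={"Authorization": f"Bearer {token}"},
)
print(resp.status_code, resp.text)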

QPX express API is returning error 403

I'm using the Google console to test the QPX Express API and I constantly get back this 403 error.
403 Forbidden
cache-control: private, max-age=0
content-encoding: gzip
content-length: 260
content-type: application/json; charset=UTF-8
date: Mon, 30 Mar 2015 08:42:13 GMT
expires: Mon, 30 Mar 2015 08:42:13 GMT
server: GSE
vary: Origin, X-Origin
{
  "error": {
    "errors": [
      {
        "domain": "usageLimits",
        "reason": "accessNotConfigured",
        "message": "Access Not Configured. The API (QPX Express API) is not enabled for your project. Please use the Google Developers Console to update your configuration.",
        "extendedHelp": "https://console.developers.google.com"
      }
    ],
    "code": 403,
    "message": "Access Not Configured. The API (QPX Express API) is not enabled for your project. Please use the Google Developers Console to update your configuration."
  }
}
My request is this...
POST https://www.googleapis.com/qpxExpress/v1/trips/search?key={YOUR_API_KEY}
{
  "request": {
    "passengers": {
      "adultCount": 1
    },
    "slice": [
      {
        "destination": "LCY",
        "origin": "ATH",
        "date": "2015-08-27"
      }
    ]
  }
}
as defined by the QPX docs.
My project has the QPX API enabled. What am I missing here?
The message is quite clear; the reason is indicated: accessNotConfigured.
Please review again which APIs you have enabled:
from the Google Developers Console, choose your project
choose the APIs & auth / APIs option
select the "Enabled APIs" tab
check QPX Express Airfare API and enable it
SOLVED.
The problem was that I used an Android key instead of the browser key that was required for my application.
That is why the QPX API was blocking my request.
STEPS.
Inside APIs & auth ---> Credentials,
I created a browser key and left the referrers blank, so by default any referrer is allowed.
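For completeness, here is a minimal sketch of the same search sent with Python's requests once a suitable key is in place; {YOUR_API_KEY} stays a placeholder, as in the question:
import requests

API_KEY = "{YOUR_API_KEY}"  # placeholder, as in the question

body = {
    "request": {
        "passengers": {"adultCount": 1},
        "slice": [
            {"origin": "ATH", "destination": "LCY", "date": "2015-08-27"}
        ],
    }
}

resp = requests.post(
    "https://www.googleapis.com/qpxExpress/v1/trips/search",
    params={"key": API_KEY},
    json=body,
)
print(resp.status_code)
print(resp.json())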

Google Drive API call to insert Public Share permissions on Fusiontables causes Internal Error

I have been trying to use the Google Drive API to make a Fusiontable publicly readable, and have not been able to get it to work. I am able to use the OAuth 2.0 Playground to insert public share permissions for other Google Drive documents, but for Fusiontables I get an HTTP 500 error, "Internal Error". Note that I have tried including every scope available under "Drive API v2" and "Fusion Tables API v1".
I'm aware that Google is no longer developing and supporting Fusiontables, but I'm wondering if anyone has found a workaround that allows them to get around this problem? I haven't tried legacy/deprecated versions of the API either.
Here are the actual API request format and responses from the OAuth Playground for a Fusiontable permissions insert (HTTP 500), then a Doc permissions insert (HTTP 200). The only difference between the requests is the fusiontable_id or document_id in the Request URI:
Request:
POST /drive/v2/files/<fusiontable_id or document_id>/permissions HTTP/1.1
Host: www.googleapis.com
Content-length: 33
Content-type: application/json
Authorization: Bearer <access_token>
{"role":"reader","type":"anyone"}
Fusiontable Response:
HTTP/1.1 500 Internal Server Error
Content-length: 180
X-xss-protection: 1; mode=block
X-content-type-options: nosniff
Expires: Tue, 04 Nov 2014 23:51:58 GMT
Vary: Origin,Referer,X-Origin
Server: GSE
Cache-control: private, max-age=0
Date: Tue, 04 Nov 2014 23:51:58 GMT
X-frame-options: SAMEORIGIN
Content-type: application/json; charset=UTF-8
{
  "error": {
    "code": 500,
    "message": "Internal Error",
    "errors": [
      {
        "domain": "global",
        "message": "Internal Error",
        "reason": "internalError"
      }
    ]
  }
}
Doc response:
HTTP/1.1 200 OK
Content-length: 281
X-xss-protection: 1; mode=block
X-content-type-options: nosniff
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Vary: Origin,Referer,X-Origin
Server: GSE
Etag: "M4l5RvCt2StP2jOGfgyJPGdTZTc/dgFZz37LrQjIXplUkmnh3VtemzQ"
Pragma: no-cache
Cache-control: no-cache, no-store, max-age=0, must-revalidate
Date: Wed, 05 Nov 2014 15:35:56 GMT
X-frame-options: SAMEORIGIN
Content-type: application/json; charset=UTF-8
{
  "kind": "drive#permission",
  "etag": "\"M4l5RvCt2StP2jOGfgyJPGdTZTc/dgFZz37LrQjIXplUkmnh3VtemzQ\"",
  "role": "reader",
  "type": "anyone",
  "id": "anyone",
  "selfLink": "https://www.googleapis.com/drive/v2/files/<document_id>/permissions/anyone"
}
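For reference, the same permissions insert expressed as a minimal Python sketch with the requests library; the file ID and access token are placeholders, exactly as in the masked request above:
import requests

file_id = "<fusiontable_id or document_id>"  # placeholder
access_token = "<access_token>"              # placeholder

resp = requests.post(
    f"https://www.googleapis.com/drive/v2/files/{file_id}/permissions",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"role": "reader", "type": "anyone"},
)
print(resp.status_code, resp.json())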
This appears to be due to a bug in the Drive API. I've located the internal error and have raised the issue with the engineering team. At this time there are no known workarounds.
I have good news!
I've received an email from googletables-feedback. They say that it should be working now.
My code using the Google Client JS API works fine:
var setAccess = function setAccessF() {
  gapi.client.request({
    path: '/drive/v2/files/{fileID}/permissions',
    method: 'post',
    body: {
      'value': 'anyone',
      'type': 'anyone',
      'role': 'reader'
    }
  }).then(opt_onFulfilled, opt_onRejected);
}
function opt_onRejected(e) {
  console.log(e)
}
function opt_onFulfilled(e) {
  console.log(e)
}
If you'd be OK with a temporary workaround, inserting a Fusion Table public share permission still works with the older XML-based GData API. You can check it out here, though beware of the red banner at the top of the page saying 'The deprecation period for Version 3 of the Google Documents List API is nearly at an end. On April 20, 2015, we will discontinue service for this API.'
So if you need to work around the problem now, that would keep you going till April; let's hope the Drive API bug gets fixed before then...
In my case, the document I was attempting to add the permissions to had become invalid. This might also be the issue for someone else.