Resumable upload always returns 404 - google-drive-api

I'm trying to get resumable upload working through the Google Drive v3 REST API. This feature isn't available in the Java client SDK, so I'm doing it with HttpClient calls. The GSuite service account I'm using works fine with the Java SDK to create or update files, but I need to upload larger files through the resumable API, and it always returns 404 Not Found. The docs say this means the session is no longer valid, but I have only just created the session when I make the call that returns 404, so it must be something else.
Here are the calls I'm making:
POST https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable
Authorization: Bearer [AUTH_TOKEN]
Content-Length: 38
Content-Type: application/json; charset=UTF-8
{"mimeType":"text/csv","name":"TestSheet.csv","parents":["PARENT_FOLDER_ID"]}
This always works to create the session, and I correctly get the Location: header out of the response and use it for subsequent calls. But when I go to write the first chunk of data, I do this:
PUT https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable&upload_id=[UPLOAD_ID]
Content-Length: 1048576
Content-Type: application/octet-stream
Content-Range: bytes 0-1048575/3063739
[1048576 bytes of data]
And this always returns a 404 Not Found.
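In code terms, here is roughly what I'm doing (shown as a Python sketch with the requests library; my real code uses Java HttpClient, and the token, IDs, and file sizes below are placeholders):
import json
import requests
ACCESS_TOKEN = "AUTH_TOKEN"              # placeholder OAuth token for the service account
PARENT_FOLDER_ID = "PARENT_FOLDER_ID"    # placeholder parent folder id
# Step 1: create the resumable session; the session URL comes back in the Location header
metadata = {"mimeType": "text/csv", "name": "TestSheet.csv", "parents": [PARENT_FOLDER_ID]}
init = requests.post(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json; charset=UTF-8"},
    data=json.dumps(metadata))
upload_url = init.headers["Location"]
# Step 2: PUT the first chunk to the URL taken from the Location header
with open("TestSheet.csv", "rb") as f:
    chunk = f.read(1048576)
total_size = 3063739                     # placeholder total file size
put = requests.put(
    upload_url,
    headers={"Content-Type": "application/octet-stream",
             "Content-Range": "bytes 0-%d/%d" % (len(chunk) - 1, total_size)},
    data=chunk)
print(put.status_code)                   # this is where I see 404 Not Found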
Any ideas? Thanks


Why is my request for an ASP resource not routed as expected by Azure API Management?

For my use case, I would like to use Azure APIM as a proxy.
(Edit: I'm using the "Consumption" tier, and the answer given here works with the standard tiers. I will update this if I find a solution with MS support for the Consumption tier.)
So that a
GET https://my-awesome-api.azure-api.net/default.css
fetches and returns whatever sits at:
GET https://my-backend.my-domain.com/default.css
It works fine, except for ASP files. If my resource is /default.asp, I get a 404 generated directly by the APIM (not my backend, which is not called at all). The problem is reproduced at every level (I can get /foo/default.css, but 404 on /foo/default.asp).
I've not been able to find anything in the documentation related to special handling of ASP files by default (or any other file type, for that matter). The fact that other types of resources work fine is even more puzzling.
GET /default.css -> works
GET /default.asp -> gets the Azure 404
GET /i-dont-exist.css -> gets the backend 404
GET /i-dont-exist.asp -> gets Azure 404
Azure's 404:
HTTP/1.1 404 Not Found
content-length: 103
content-type: text/html
date: Fri, 05 Apr 2019 15:35:34 GMT
vary: Origin
x-powered-by: ASP.NET
The resource you are looking for has been removed, had its name changed, or is temporarily unavailable.
Most likely your API is misconfigured. It seems you want to pass through all traffic, so you need to create an API with the Web service URL set to "https://my-backend.my-domain.com" and the Path suffix set to "/".
Underneath it, create an operation for each HTTP method you want to proxy, with the URL template set to /*.
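Once that wildcard operation is in place, every path should be proxied verbatim; you can sanity-check the pass-through with something like this (a Python sketch using the requests library and the placeholder hostnames from the question):
import requests
APIM = "https://my-awesome-api.azure-api.net"     # placeholder APIM hostname from the question
BACKEND = "https://my-backend.my-domain.com"      # placeholder backend hostname
# With the wildcard operation in place, APIM should return whatever the backend returns
# for every path, .asp included. Add an Ocp-Apim-Subscription-Key header to the APIM
# request if your API requires a subscription.
for path in ("/default.css", "/default.asp", "/foo/default.asp", "/i-dont-exist.asp"):
    via_apim = requests.get(APIM + path)
    direct = requests.get(BACKEND + path)
    print("%s -> apim=%s backend=%s" % (path, via_apim.status_code, direct.status_code))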

Unable to hardcode JSON response

I'm currently working with Angular 5.1.2 and I'm trying to get objects from HTTP requests.
In order to verify my code, I've hardcoded a JSON response and created a PythonAnywhere web service. Here's what I did:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=UTF-8
{"Computer":[{
"ip":"192.168.0.142",
"mac":"39-D7-98-9E-5A-DC",
"name":"PC-DE-JEAN-CLAUDE"
},
{
"ip":"192.168.0.50",
"mac":"4D-49-98-30-8A-F5",
"name":"LIVEBOX-684J"
}]}
However, why is my Angular app saying "No 'Access-Control-Allow-Origin' header is present on the requested resource"?
Thanks
It is related to a CORS issue. It happens when the server and the client are running on different addresses (origins). To make it work, the server needs to return Access-Control-Allow-Origin as a key:value pair in its response headers.
Access-Control-Allow-Origin: *
Specifying the value as * means the content can be accessed from any other origin.
It's one of the layers of security for Internet applications.
This is a server-side problem: the browser blocks the cross-origin response unless the server opts in via CORS. To fix it, make sure your server responds with the header Access-Control-Allow-Origin: *. Once you've verified that this fixes the problem, restrict the header to your website's URL.
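For example, if the PythonAnywhere endpoint is a Flask app (an assumption about your setup), the header can be added to every response like this:
from flask import Flask, jsonify
app = Flask(__name__)
COMPUTERS = {"Computer": [
    {"ip": "192.168.0.142", "mac": "39-D7-98-9E-5A-DC", "name": "PC-DE-JEAN-CLAUDE"},
    {"ip": "192.168.0.50", "mac": "4D-49-98-30-8A-F5", "name": "LIVEBOX-684J"},
]}
@app.route("/computers")                  # hypothetical route name
def computers():
    return jsonify(COMPUTERS)
@app.after_request
def add_cors_header(response):
    # Allow any origin while testing; later restrict this to the Angular app's URL
    response.headers["Access-Control-Allow-Origin"] = "*"
    return response
The flask-cors package achieves the same thing with less code, if you prefer.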

audio tag not working correctly when playing an mp3 stored as azure blob in chrome

We are getting a strange behavior when using Chrome to play an mp3 file stored as a blob in Azure. All other browsers seem to play the track correctly but Chrome does not allow the user to skip to other parts of the track.
To demonstrate this, open the following two URLs in Chrome - they are both the same track. The first one will let you skip to other sections; the second one won't.
http://scantopdf.eu/downloads/music/igygtrack.mp3
http://igygprodstore.blob.core.windows.net/igyg-site-blobs1/10b1122f-eb43-44fd-aa48-919d8b6955c1.mp3
Is this a Chrome issue or an Azure storage issue? Is there any HTML5 code that will play the blob correctly?
Here's what's different:
The Azure Blob Storage endpoint does not return Accept-Ranges: Bytes to your browser - that's why you can't seek.
Now, if you look closer at the response coming from Azure Storage, you'll notice an x-ms-version header with a value that looks ancient:
HTTP/1.1 200 OK
Content-Length: 13686118
Content-Type: audio/mp3
...
x-ms-version: 2009-09-19
Both old and new storage accounts default to the same API version so they don't break backwards compatibility with code out there.
Here's the version history on storage API versions:
https://msdn.microsoft.com/en-us/library/azure/dd894041.aspx
We highly recommend using version 2011-08-18 and later for scenarios that require quoted ETag values or valid Accept-Ranges response headers, since browsers and other streaming clients require these for efficient download and retries.
Here's how to talk Azure Storage into sending you Accept-Ranges: bytes
You either have to pass in an x-ms-version header with a post-August-2011 API version -
$ curl -I -s http://igygprodstore.blob.core.windows.net/igyg-site-blobs1/10b1122f-eb43-44fd-aa48-919d8b6955c1.mp3
HTTP/1.1 200 OK
Content-Length: 13686118
Content-Type: audio/mp3
...
↑ Note no Accept-Ranges header!
$ curl -I -s -H "x-ms-version: 2015-12-11" http://igygprodstore.blob.core.windows.net/igyg-site-blobs1/10b1122f-eb43-44fd-aa48-919d8b6955c1.mp3
HTTP/1.1 200 OK
Content-Length: 13686118
Content-Type: audio/mp3
...
Accept-Ranges: bytes
or you need to set the default API version for the blob service, with something like AzureBlobUtility: https://github.com/Plasma/AzureBlobUtility
C:\AzureBlobUtility\bin\Release>BlobUtility.exe -k fH00xxxxxxxxxx7w== -a baboonstorage1 -c public --setDefaultServiceVersion 2015-12-11
[2016-09-20 01:59:45] INFO Program - Updating API Version from to 2015-12-11
[2016-09-20 01:59:45] INFO Program - Updated Ok
Or, use the Storage SDK to set the default API version at storage account level:
// From http://geekswithblogs.net/EltonStoneman/archive/2014/10/09/configure-azure-storage-to-return-proper-response-headers-for-blob.aspx
// Requires the classic WindowsAzure.Storage client library
using Microsoft.WindowsAzure.Storage;
var connectionString = "DefaultEndpointsProtocol=https;AccountName={accountName};AccountKey={accountKey}";
var storageAccount = CloudStorageAccount.Parse(connectionString);
var blobClient = storageAccount.CreateCloudBlobClient();
// Read the current service properties, bump the default version, and write them back
var props = blobClient.GetServiceProperties();
props.DefaultServiceVersion = "2015-12-11";
blobClient.SetServiceProperties(props);
Use curl to make sure your changes are live -
$ curl -I -s https://baboonstorage1.blob.core.windows.net/public/test.mp3
HTTP/1.1 200 OK
Content-Length: 13686118
Content-Type: audio/mpeg
...
Accept-Ranges: bytes
...
x-ms-version: 2015-12-11
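If curl isn't handy, the same check works from Python (requests assumed; the URL is the example account above):
import requests
url = "https://baboonstorage1.blob.core.windows.net/public/test.mp3"
resp = requests.head(url)
print(resp.headers.get("Accept-Ranges"))   # expect: bytes
print(resp.headers.get("x-ms-version"))    # expect: 2015-12-11 once the default is set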
As evilSnobu pointed out, the default version of the Azure Blob Storage API doesn't return an Accept-Ranges: Bytes header in the response, which prevents the audio files from being seekable.
You can change the default Blob Storage API version with PowerShell, like this:
$context = New-AzStorageContext `
    -StorageAccountName <yourAccount> `
    -StorageAccountKey <key>
Update-AzStorageServiceProperty `
    -ServiceType Blob `
    -DefaultServiceVersion 2021-04-10 `
    -Context $context
Note: this requires the PowerShell Az module to be installed. You can also use the old AzureRM module, in which case your commands will be New-AzureStorageContext and Update-AzureStorageServiceProperty respectively.
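If Python is more convenient, the v12 azure-storage-blob SDK exposes the same setting; as far as I can tell the parameter is called target_version, but treat the exact name as an assumption and check the current SDK docs:
from azure.storage.blob import BlobServiceClient
# Placeholder account name and key, matching the PowerShell example above
blob_service = BlobServiceClient(
    account_url="https://<yourAccount>.blob.core.windows.net",
    credential="<key>")
# target_version corresponds to DefaultServiceVersion on the blob service
blob_service.set_service_properties(target_version="2021-04-10")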

"Protected" error when using "new_copy_url"

I have developed a Box App using "Web App Integrations", the option to manage a file from the Box web UI by right-clicking on it.
It is a popup integration that gets the file, modifies it, and saves it again.
Some time ago we detected it was broken, but we have not had time to check it until now. The problem lies in our last request to Box, when we want to save the modified file.
In our callback we request #overwrite_url# and #new_copy_url#, and we POST the modified file to those URLs to "save as" or "save", based on the user's selection.
The new documentation does not describe these 2 parameters, but the app management console still allows them to be requested, so I assume they are not deprecated; other than that, I have not been able to spot any difference in the documentation related to this issue.
The request we are using is:
POST /api/1.0/new_copy/dmq5esykpq30sp2kepy3b1d7mvese5ap/9721827325?new_file_name=Koala.proton.jpg HTTP/1.1
Accept: application/json
Content-Type: multipart/form-data;boundary=2iqAzMZWpgN473oDBmRGnysbfTtsD2
Cache-Control: no-cache
Pragma: no-cache
User-Agent: Java/1.7.0_45
Host: upload.box.com
Connection: keep-alive
Content-Length: 17831
--2iqAzMZWpgN473oDBmRGnysbfTtsD2
Content-Disposition: form-data; name="file"; filename="empty.dat"
Content-Type: application/octet-stream
Content-Length: 17627
And the only response I get is a 200 response with the body "restricted", without further information.
I suspect this has something to do with the deprecation of API v1, but the integrations do not use the API, and I asked the Box support mail a couple of times whether the deprecation was going to have any effect on integrations; the responses were always negative.
There are definitely changes required in order to update your integration to continue to work. Yes, V1 APIs have been deprecated, and so your old integration has stopped working.
New documentation is here. A subtle difference is that you get way more power now for these web-app integrations. Tokens don't expire after 24 hours, but follow Box's standard OAuth2 rules. The scope of your token will be the file or folder that your web-app integration is invoked on.
Fundamentally, the first step after you get the inbound request on your server is to trade in the auth_code for an Auth-Token via the OAuth2 endpoints.
See the section on auth_code. Then you will have an Auth-Token that will let you call the regular V2 APIs. To do a copy you would then:
POST https://api.box.com/2.0/files/{id}/copy (with the Bearer-token header)
See https://developers.box.com/docs/#files-copy-a-file for the documentation on how to do a copy operation. A nice thing is that you can also make any number of other API calls with that token, as long as they are within the scope of that file.
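A rough Python sketch of those two steps (requests assumed; the client credentials, IDs, and destination folder are placeholders, and the exact request fields should be double-checked against the current Box docs):
import requests
# Placeholders: fill these in from your app config and the inbound integration request
CLIENT_ID = "YOUR_CLIENT_ID"
CLIENT_SECRET = "YOUR_CLIENT_SECRET"
AUTH_CODE = "AUTH_CODE_FROM_CALLBACK"
FILE_ID = "9721827325"                    # the file your integration was invoked on
DEST_FOLDER_ID = "0"                      # "0" is the root folder
# Step 1: trade the auth_code for an access token via the OAuth2 endpoint
token_resp = requests.post(
    "https://api.box.com/oauth2/token",
    data={"grant_type": "authorization_code",
          "code": AUTH_CODE,
          "client_id": CLIENT_ID,
          "client_secret": CLIENT_SECRET})
access_token = token_resp.json()["access_token"]
# Step 2: call the V2 copy endpoint with the Bearer token
copy_resp = requests.post(
    "https://api.box.com/2.0/files/%s/copy" % FILE_ID,
    headers={"Authorization": "Bearer " + access_token},
    json={"parent": {"id": DEST_FOLDER_ID}, "name": "Koala.proton.jpg"})
print(copy_resp.status_code, copy_resp.json())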

XMLHttpRequest CORS to Google Cloud Storage only working in preflight request

I implemented browser-based resumable uploads to Google Cloud Storage using an XMLHttpRequest sent to a resumable upload URL created on the server side. This works completely fine when disabling web security, which I did during development.
But now, in the real world, CORS keeps making trouble. I tried this with other browsers too (without success), but stuck with Chrome for further testing.
Note: a fake.host entry in /etc/hosts is used to trick Chrome into avoiding localhost restrictions. Nevertheless, the same thing happens with the "real" domain of our online test server.
The request is started using a normal XMLHttpRequest call:
var xhr = this.newXMLHttpRequest();
xhr.open('PUT', url, true);
xhr.setRequestHeader('Content-Type', this.currentInputFile.type);
xhr.setRequestHeader('Content-Range', 'bytes ' + startByte + '-' + (this.currentInputFile.size - 1) + '/' + this.currentInputFile.size);
xhr.onload = function(e) {
    ...
};
...
if (startByte > 0) {
    xhr.send(this.currentInputFile.slice(startByte));
} else {
    xhr.send(this.currentInputFile);
}
The browser then successfully initiates a preflight request:
Remote Address:173.194.71.95:443
Request URL:https://www.googleapis.com/upload/storage/v1/b/my-bucket-name/o?uploadType=resumable&name=aa%20spacetestSMALL_512kb.mp4&upload_id=XXXXXXXXX
Request Method:OPTIONS
Status Code:200 OK
Request Headers:
:host:www.googleapis.com
:method:OPTIONS
:path:/upload/storage/v1/b/my-bucket-name/o?uploadType=resumable&name=aa%20spacetestSMALL_512kb.mp4&upload_id=XXXXXXXXX
:scheme:https
:version:HTTP/1.1
accept:*/*
accept-encoding:gzip,deflate
accept-language:en-US,en;q=0.8,de;q=0.6
access-control-request-headers:content-range, content-type
access-control-request-method:PUT
origin:https://fake.host
referer:https://fake.host/upload.xhtml
user-agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36
x-client-data:YYYYYY
Query String Parameters
uploadType:resumable
name:aa spacetestSMALL_512kb.mp4
upload_id:XXXXXXXXX
Response Headers
access-control-allow-credentials:true
access-control-allow-headers:content-range, content-type
access-control-allow-methods:PUT
access-control-allow-origin:https://fake.host
alternate-protocol:443:quic
content-length:0
content-type:text/html; charset=UTF-8
date:Fri, 05 Sep 2014 14:11:21 GMT
server:UploadServer ("Built on Aug 18 2014 11:58:36 (1408388316)")
status:200 OK
version:HTTP/1.1
... and starts the PUT request until all data is transferred. But afterwards Chrome silently logs an error without completing or ending the request:
XMLHttpRequest cannot load https://www.googleapis.com/upload/storage/v1/b/my-bucket-name…XXXXXXXX. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://fake.host' is therefore not allowed access.
This is what chrome logs about the PUT request:
Request URL:https://www.googleapis.com/upload/storage/v1/b/my-bucket-name/o?uploadType=resumable&name=aa%20spacetestSMALL_512kb.mp4&upload_id=XXXXXXXXX
Request Headers
Provisional headers are shown
Content-Range:bytes 0-3355302/3355303
Content-Type:video/mp4
Origin:https://fake.host
Referer:https://fake.host/upload.xhtml
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36
X-DevTools-Emulate-Network-Conditions-Client-Id:YYYYYYY
Query String
uploadType:resumable
name:aa spacetestSMALL_512kb.mp4
upload_id:XXXXXXXXX
Notably, when adding the same URL in http://client.cors-api.appspot.com/client and issuing any request, all but the OPTIONS request types fail, too. It seems like the Cloud Storage API only issues the correct response headers for OPTIONS requests, but not for PUT/POST/GET/... requests.
So am I doing something impossible? Is there something broken? Is this a bug in the Cloud Storage API? I've spent hours googling and reading SO answers, without any luck so far.
For now, I could periodically check whether the upload transferred 100% of the data and just ignore the HTTP request outcome, as the file is in fact completely uploaded to the storage bucket. But this is a rather ugly workaround which I really don't want to use if the real issue can be solved.
When requesting a resumable upload url, you MUST include the origin the browser will send when trying to use that upload url, or else the subsequent uploading will fail, just as is happening in the question (the OPTIONS call will look good, but the PUT will not).
It must exactly match the browser's origin (which you can get as location.origin in javascript).
This is the step "Initiating a resumable upload session" in this documentation:
https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload
If you're requesting the resumable upload URL on the server side, you'll probably need the client side (the browser) to pass you its origin (e.g. location.origin).
FWIW, I was using Google's Cloud Storage library for Python for this step and needed to add the origin like this:
myblob.create_resumable_upload_session(mycontenttype, origin=browserorigin)
Note that you definitely do not need to set up CORS for your bucket.
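Expanding that one-liner into a fuller server-side sketch (Flask and the endpoint/field names here are purely illustrative; the bucket name is the placeholder from the question):
from flask import Flask, jsonify, request
from google.cloud import storage
app = Flask(__name__)
client = storage.Client()                  # uses the server's service-account credentials
bucket = client.bucket("my-bucket-name")   # placeholder bucket from the question
@app.route("/create-upload-url", methods=["POST"])
def create_upload_url():
    # The browser posts its own origin (location.origin) plus the file name and type
    body = request.get_json()
    blob = bucket.blob(body["name"])
    upload_url = blob.create_resumable_upload_session(
        content_type=body["contentType"],
        origin=body["origin"])             # must match the Origin the browser will send
    return jsonify({"uploadUrl": upload_url})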
Since this question remains unanswered and still gets a fair number of view I'll try to post something definitive here.
The 'Access-Control-Allow-Origin' header returned in the response to any PUT requests to upload data is always set to the origin given in the initial POST request used to initiate the upload, as per the current docs:
When using the resumable upload protocol, the Origin from the first (start upload) request is always used to decide the Access-Control-Allow-Origin header in the response, even if you use a different Origin for subsequent request. Therefore, you should either use the same origin for the first and subsequent requests, or if the first request has a different origin than subsequent requests, use the XML API with the CORS configuration set to *.
This means you must send an initial POST request before any PUT requests that send data, and any subsequent PUT requests must have the same 'origin' as the initial POST.
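At the REST level, the session-initiating POST therefore has to carry the browser's origin, roughly like this (a Python sketch against the JSON API; the token, bucket, and origin are placeholders):
import requests
ACCESS_TOKEN = "SERVER_SIDE_OAUTH_TOKEN"   # placeholder token obtained on the server
BROWSER_ORIGIN = "https://fake.host"       # must equal location.origin in the browser
init = requests.post(
    "https://www.googleapis.com/upload/storage/v1/b/my-bucket-name/o",
    params={"uploadType": "resumable", "name": "aa spacetestSMALL_512kb.mp4"},
    headers={"Authorization": "Bearer " + ACCESS_TOKEN,
             "Origin": BROWSER_ORIGIN,     # this Origin decides the later Access-Control-Allow-Origin
             "X-Upload-Content-Type": "video/mp4"})
upload_url = init.headers["Location"]      # hand this back to the browser for its PUT requests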
Regarding the CORS configuration set in GCS, this only applies to calls to the XML API, from the current docs:
Note: CORS configuration applies only to XML API requests. For JSON API requests, Cloud Storage always returns the Access-Control-Allow-Origin header with the origin of the request.