I have been running the GDrive API v3 from our Node.js API for a while now (1+ year), and everything has been functional for creating (drive.files.create) and reading (drive.files.get) DOCX files through a service account. Everything has been working perfectly fine.
Today I am trying to extend our infrastructure to handle some native Google Docs files (rather than the existing DOCX stored within GDrive) using the same service account, but the GDrive API is now returning a 403 "Forbidden" error for these files specifically. I can't figure out why there would be a difference in permissions between two file formats that are owned by the same service account and accessed through the same OAuth token system I set up originally. The service account itself created the file, so I am a little shocked that it can't then access that same file via the GDrive API.
My question is: is it not possible to use the GDrive API to access Google Docs files, even when the files are within a GDrive folder? I can't seem to find any definitive information online on whether we specifically have to use the Google Docs API to access Docs files (which seems silly if true). If it is possible, is there an alternative SCOPE that I need to assign to this service account to access Google Docs items specifically? Right now the only SCOPE assigned is https://www.googleapis.com/auth/drive, which according to the documentation should be enough to access all of our GDrive files.
I can share some code, but there is not much more to it than what I explained above. I would like to avoid entirely redoing my GDrive OAuth permissions if possible, as that took me a long time the first time around, but if that is the only way, I would love to hear suggestions.
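For what it's worth, here are the two call shapes I understand to be involved, sketched as raw HTTP via Rust's reqwest purely for concreteness (our real code uses the Node.js client; client, file_id, and token are placeholders, and the export MIME type is just one example):

    // A binary file stored in Drive (e.g. DOCX) downloads via files.get
    // with alt=media. This is the path that has been working for us.
    let get_url = format!(
        "https://www.googleapis.com/drive/v3/files/{}?alt=media",
        file_id
    );

    // A native Google Docs file has no binary blob of its own; the Drive
    // docs point at files.export with a target MIME type for those.
    let export_url = format!(
        "https://www.googleapis.com/drive/v3/files/{}/export?mimeType={}",
        file_id,
        "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
    );

    // Same bearer token, same https://www.googleapis.com/auth/drive scope.
    let resp = client.get(&export_url).bearer_auth(&token).send();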
Thanks!
Please note: this issue seems distinct from other "quota exceeded" issues I've seen. It's not bad creds, and it's not a file that has been downloaded a lot.
2020/02/21 15:44:49 ERROR : test.mkv: Failed to copy: multpart copy: failed to open source: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
I'm only receiving this error through the API.
Other files in this shared drive are accessible with the same credentials.
This doesn't affect the file for other users (i.e., I tested with an actual Google account [rather than a Service Account] and was able to download the file via the UI).
So what could be going on? I'm used to 403s from Google APIs when the user doesn't have permission, and I have even seen this "reason" in a scenario where I simply hadn't granted permissions yet. But in this case, the permissions seem just fine [they're all set at the Drive level, anyway].
Important context:
The files that are having this problem were copied from... public sources. Those files likely have exceeded their download quota. But these are copies that I made into my own drive.
To be more specific, I have copied files into my Drive via the web UI before, and those files have never presented an issue.
However, these new files that are having issues were copied via the Drive API. They say they're owned by my Service Account, and I can't find ANY linkage back to the original file they were copied from. Just like when I copy them by hand, they have their own file IDs, different from the ones they were copied from.
Things I'm suspicious of:
Maybe the copy performed via the UI and the copy performed via the API are somehow different?
Maybe my API "copy" of the file is somehow more strongly linked to the original file it was copied from?
Otherwise, I'm out of ideas. Any suggestions would be highly appreciated.
EDIT:
Manual copy method: find the file, add a star to it, navigate to the Drive web UI, go to Starred items, right-click the file, "Make a Copy".
New API method: use Rust to do this:
use std::collections::HashMap;

// POST https://www.googleapis.com/drive/v3/files/{fileId}/copy
let url = format!("https://www.googleapis.com/drive/v3/files/{}/copy", file_id);
let auth_hdr_val = format!("Bearer {}", token.as_str());

// Both flags so the call works against Shared Drives
// (supportsTeamDrives is the older name for the same switch).
let mut query_params = HashMap::new();
query_params.insert("supportsAllDrives", true);
query_params.insert("supportsTeamDrives", true);

// The request body sets the parent folder of the new copy.
let mut body = HashMap::new();
body.insert("parents", [destination_parent_dir]);

client
    .post(&url)
    .query(&query_params)
    .header(reqwest::header::AUTHORIZATION, auth_hdr_val)
    .json(&body)
    .send()
Nothing too wild, just calling the v3 Drive copy API with a parent directory specified, which is a Shared Drive. (It also seems to happen if I copy via the API into My Drive first, then manually move the file to the Shared Drive.)
Google Drive has undocumented API limits. It appears that one of them is a per-user, per-file limitation.
I switched the service account that I was using to download the particularly problematic files, and they downloaded successfully.
It appears that the "new" copy method was not the problem; rather, another service was indexing "new" files in Google Drive and bumped up against this limit. The service account used for downloading was the same one used for indexing.
So, avoid hitting these limits, or use different service accounts for different roles, etc.
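In code terms the fix was trivial; the point is just one credential per role, so the indexer's traffic can't burn the downloader's per-user, per-file allowance. A hypothetical sketch (token_for_key is a stand-in for however you mint an access token from a key file):

    // One key file per role: quota damage from one service stays contained.
    let indexer_token = token_for_key("indexer-sa.json")?;
    let downloader_token = token_for_key("downloader-sa.json")?;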
I know you found a workaround for your issue, but it seems the issue you were having was previously reported by other people on the Issue Tracker:
"downloadQuotaExceeded": The download quota for this file has been exceeded..
Receiving downloadQuotaExceeded error in python client.
You can hit the ☆ next to the issue number in the top left of the issue page, as it lets Google know more people are encountering this, making it more likely to be looked at sooner.
I'm trying to implement a shared filespace on an existing server that has its own login/authentication system. I'd prefer to use Google Drive or OneDrive as the actual backing file store, since then the files could (ideally) be edited in the browser and automatically update for everyone else with access to the shared area.
Diagrammatically (red link = existing authorization, red dashed line = desired auth in a session):
The need for this odd scheme stems from usage by a group where almost the entire membership is replaced each year (a university committee). It was decided we don't want to just use a Google Drive folder, since these end up in people's personal Google Drive folders, files get lost or cleared out, etc. Additionally, we already pass through this website quite regularly, so it makes sense as an official repository of committee documents.
However, this leaves me stuck with the issue in the diagram. I don't think the server can create new, temporary tokens for a logged-in user to pass to the Google Drive API to edit files. The question really is: can I get the Google API to do this for me somehow by only talking to the server? If not, what's the best way to implement this?
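The shape I'm imagining, if it's even possible, is the server acting as a pure proxy: it checks our own session, then calls Drive with a credential only it holds. A rough sketch in Rust/reqwest just to make it concrete, with verify_session and service_account_token as hypothetical helpers (and this only covers raw file access, not the in-browser editing I'd ideally want):

    // Hypothetical proxy endpoint: users authenticate to our server, never
    // to Google; the server holds the only Google credential.
    async fn proxy_download(
        session_id: &str,
        file_id: &str,
        client: &reqwest::Client,
    ) -> Result<bytes::Bytes, Box<dyn std::error::Error>> {
        verify_session(session_id)?; // our existing login/auth system
        let token = service_account_token().await?; // server-held credential
        let url = format!(
            "https://www.googleapis.com/drive/v3/files/{}?alt=media",
            file_id
        );
        let resp = client.get(&url).bearer_auth(token).send().await?;
        Ok(resp.bytes().await?)
    }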
Thanks!
We are having issues where sometimes a file that a user can access is not returned when the user issues a files.list. This can happen in several ways. For example, new members of a Google group will not see previously shared files, as described in this question. Moreover, according to Google documentation there are other limits on sharing which can prevent shared files from appearing in the "Shared with me" view. Finally, a user can issue a files.delete on a file she doesn't own, and the file will disappear from files.list but will still exist.
What can a user do via the SDK alone to cause a file which she can access via files.get to appear in the list of files retrieved via files.list? We are using a service account which impersonates users; the user never authenticates to Google via a browser. A link in an email that the user needs to click won't work for us, unfortunately. Accessing the file via the Google Drive UI has the desired effect, but the analogous files.get call does not.
The Google Calendar API explicitly exposes a CalendarList interface where a user can issue an insert to add an existing calendar to her list. The Google Drive SDK, by contrast, seems to offer a hybrid Files/FilesList interface with some of the functionality missing (there is nothing like FilesList.insert) and some of it mixed together (issuing a delete as a non-owner acts like FilesList.delete, but issuing it as the owner acts like Files.delete).
If we can't manage the user's files list programmatically then it is not useful for our service. We could ignore the files.list call entirely and just recursively perform children.list queries on all shared folders, but this is incredibly expensive (unless someone knows how to issue a single query that returns all the Files resources in a folder and not just the IDs of those resources; see the sketch below).
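(On that last point: as far as I can tell, files.list itself accepts a q parameter, so a single request like the following should return full Files resources for a folder's children rather than just IDs. Rust/reqwest is used only to make the request concrete, and folder_id/token are placeholders; it still wouldn't fix the visibility problem.)

    // files.list with a q filter returns full Files resources, not just
    // the child IDs that children.list returns.
    let resp = client
        .get("https://www.googleapis.com/drive/v2/files")
        .query(&[("q", format!("'{}' in parents", folder_id))])
        .bearer_auth(&token)
        .send();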
Any help would be appreciated. We've been trying this many different ways and have been frustrated at every turn. Thanks!
I'm developing an application using the Google Drive API (Java version). The application saves files on Google Drive, mimicking a file system (i.e. it has a folder tree). I started by using the files.list() method to retrieve all the existing files on the drive, but the response got slower as the number of files increased (after a couple of hundred).
The Java Google API client hardcodes the response timeout to 20 seconds. I changed the code to load one folder at a time recursively instead (using files.list().setQ("'folderId' in parents")). This method beats the timeout problem, but it consistently misses about 2% of the files in my folders (the same files are missing each time). I can see those files through the Google Drive web interface, and even through the Google Drive API if I search for the file name directly with files.list().setQ("title='filename'").
I'm assuming that the "in parents" search uses some inexact indexing which may only be updated periodically. I need a file listing that's more robust and accurate.
Any ideas?
Could you use the paging mechanism to issue multiple queries, where each query asks for only a small number of results?
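Something along these lines, sketched over raw HTTP with Rust/reqwest and serde_json since it maps onto any client (folder_id, token, and client are placeholders; the v2 Java client spells these setMaxResults and setPageToken):

    // Page through the folder a chunk at a time instead of one huge query.
    let mut page_token: Option<String> = None;
    loop {
        let mut params = vec![
            ("q", format!("'{}' in parents", folder_id)),
            ("pageSize", "100".to_string()),
        ];
        if let Some(t) = &page_token {
            params.push(("pageToken", t.clone()));
        }
        let page: serde_json::Value = client
            .get("https://www.googleapis.com/drive/v3/files")
            .query(&params)
            .bearer_auth(&token)
            .send()
            .await?
            .json()
            .await?;
        // ... process page["files"] here ...
        match page["nextPageToken"].as_str() {
            Some(t) => page_token = Some(t.to_string()),
            None => break,
        }
    }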