I'm trying to subscribe to changes in a Team Drive folder or its direct children. I get one notification with X-Goog-Resource-State: sync, but no notifications after that. I've tried renaming files, editing file contents, and adding/removing files; nothing triggers additional notifications.
I'm doing this with the Ruby API:
drive = Google::Apis::DriveV3::DriveService.new
drive.authorization = ... # JSON auth for a service account with access to the drive
file_id = ... # ID of the folder to watch
channel = {
  id: 'test',
  type: 'web_hook',
  address: 'https://example.com/notifications',
}
drive.watch_file(file_id, channel, supports_team_drives: true)
I have verified that this authorization can list and access files within the Team Drive without problems. I get no errors from the watch_file call, and the "sync" event arrives successfully, but no "update" or similar events follow.
Am I missing something? How can I debug this?
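In case it helps frame an answer, here is the equivalent subscription against the whole changes feed that I could try instead, since the push-notification docs also support changes.watch, which fires for any change the authorization can see. This is a rough sketch in Python (rather than my Ruby setup) with google-api-python-client; creds and the channel values are placeholders:

from googleapiclient.discovery import build

drive = build('drive', 'v3', credentials=creds)  # creds: assumed to already exist

# Get a starting point for the changes feed.
start = drive.changes().getStartPageToken(supportsAllDrives=True).execute()

# Subscribe to all changes visible to this account.
# Passing driveId='...' here would scope this to one shared drive.
drive.changes().watch(
    pageToken=start['startPageToken'],
    supportsAllDrives=True,
    includeItemsFromAllDrives=True,
    body={
        'id': 'test-changes',  # unique channel id
        'type': 'web_hook',
        'address': 'https://example.com/notifications',
    },
).execute()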
Related
I have been trying to use the Google API to create files in a folder that was shared with me by another user (I made sure I have edit permissions on it). When I was using the files.create method with supportsAllDrives=True, I got the following error message:
{
    "errorMessage": "<HttpError 404 when requesting https://www.googleapis.com/upload/drive/v3/files?supportsTeamDrives=true&alt=json&uploadType=multipart returned \"File not found: 1aLcUoiiI36mbCt7ZzWoHr8RN1nIPlPg7.\". Details: \"[{'domain': 'global', 'reason': 'notFound', 'message': 'File not found: 1aLcUoiiI36mbCt7ZzWoHr8RN1nIPlPg7.', 'locationType': 'parameter', 'location': 'fileId'}]\">",
    "errorType": "HttpError",
    "requestId": "fc549b9e-9590-4ab4-8aaa-f5cea87ba4b6",
    "stackTrace": [
        "  File \"/var/task/lambda_function.py\", line 154, in lambda_handler\n    upload_file(service, download_path, file_name, file_name, folder_id, 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')\n",
        "  File \"/var/task/lambda_function.py\", line 78, in upload_file\n    file = service.files().create(\n",
        "  File \"/opt/python/googleapiclient/_helpers.py\", line 131, in positional_wrapper\n    return wrapped(*args, **kwargs)\n",
        "  File \"/opt/python/googleapiclient/http.py\", line 937, in execute\n    raise HttpError(resp, content, uri=self.uri)\n"
    ]
}
After a bit of digging, I found that 'Shared Drives' is different from 'Shared with me', and all the APIs I have found so far apply only to 'Shared Drives'. supportsTeamDrives=True has been deprecated, and I was not able to find a replacement parameter in the docs. There is a parameter sharedWithMe=True for the files.list API, but I'm not sure how to use it in my code, because files.create doesn't accept the folder ID of a 'Shared with me' folder anyway. Any suggestions are appreciated in advance!
My current code:
def upload_file(service, file_name_with_path, file_name, description, folder_id, mime_type):
    media_body = MediaFileUpload(file_name_with_path, mimetype=mime_type)
    body = {
        'name': file_name,
        'title': file_name,
        'description': description,
        'mimeType': mime_type,
        'parents': [folder_id]
    }
    file = service.files().create(
        supportsAllDrives=True,
        supportsTeamDrives=True,
        body=body,
        media_body=media_body).execute()
Modified answer to include more details:
You are correct: 'Shared Drives' is different from 'Shared with me'. First off, you need to get the ID of the folder that was shared with you; for this you can use files.list (see the files.list sketch after the notes below). To upload files to that folder, or any type of folder, you can use the modified code below:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.http import MediaFileUpload
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from google.oauth2 import credentials, service_account

# Scopes required by this endpoint -> https://developers.google.com/drive/api/v3/reference/files/create
SCOPES = ['https://www.googleapis.com/auth/drive']

"""
To upload/create a file in a 'Shared with me' folder, this script has the following configured:

1. Project:
   * Create a project
   * Enable the Google Workspace API the service account will be using: https://developers.google.com/workspace/guides/create-project

2. Consent screen:
   * Configure the consent screen for the application
   * Create credentials for your service account, depending on the type of application to be used with: https://developers.google.com/workspace/guides/create-credentials#create_a_service_account
   Once your service account is created, you are taken back to the credentials list (https://console.cloud.google.com/apis/credential); click on the created service account, then click on 'Advanced settings' and copy your client ID.

3. Scopes:
   * Collect the scopes needed for your service account/application:
   https://developers.google.com/identity/protocols/oauth2/scopes

4. Grant access to user data to a service account in Google Workspace: https://admin.google.com/ac/owl/domainwidedelegation
   * In the "Client ID" field, paste the client ID from your service account
   * In the "OAuth Scopes" field, enter a comma-delimited list of the scopes required by your application. This is the same set of scopes you defined when configuring the OAuth consent screen.
   * Click Authorize.

5. In your code you need to impersonate the account the folder was shared with; if it was your account, you add your account here:

   credentials = service_account.Credentials.from_service_account_file(
       SERVICE_ACCOUNT_FILE, scopes=SCOPES)
   delegated_creds = credentials.with_subject('user@domain.info')
"""

def main():
    SERVICE_ACCOUNT_FILE = 'drive.json'  # Service account credentials from step 2
    credentials = service_account.Credentials.from_service_account_file(
        SERVICE_ACCOUNT_FILE, scopes=SCOPES)
    delegated_creds = credentials.with_subject('user@domain.xyz')
    service = build('drive', 'v3', credentials=delegated_creds)
    media = MediaFileUpload(
        'xfiles.jpg',
        mimetype='image/jpeg',
        resumable=True
    )
    request = service.files().create(
        media_body=media,
        # 1Gb0BH1NFz3... is the ID of the 'Shared with me' folder to upload this file to
        body={'name': 'xfile new pic', 'parents': ['1Gb0BH1NFz30eau8SbwMgXYXDjTTITByE']}
    )
    response = None
    while response is None:
        status, response = request.next_chunk()
        if status:
            print("Uploaded %d%%." % int(status.progress() * 100))
    print("Upload Complete!")

if __name__ == '__main__':
    main()
Where:
parents is the ID of the folder shared with you.
See the files.create reference linked in the code comments above for more documentation details.
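For the files.list step mentioned at the top of this answer, a minimal sketch to find the ID of a folder shared with you might look like this. The query string is my assumption, based on the documented sharedWithMe search term, and service is the delegated Drive client built in the code above:

# Sketch: list folders shared with the impersonated account to find the target ID.
results = service.files().list(
    q="sharedWithMe and mimeType = 'application/vnd.google-apps.folder'",
    fields='files(id, name)',
).execute()
for folder in results.get('files', []):
    print(folder['name'], folder['id'])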
After a chat with a Google Workspace API specialist, it turns out there is no API available to perform the above task. For clarity, refer to the picture below showing where my target folder lies.
Difference between 'Shared Drive' and 'Shared with me' (image)
Here's the response from the Support Agent:
I reviewed your code and everything was done perfectly, so I spoke to our Drive Specialists, and they have explained to me that "Shared with me" is more than anything a label, and because you are not the owner of the file (like you would be if it were in "My Drive") nor the co-owner (if it were located in "Shared Drive"), it does not allow you to use any type of API in order to automate file creation or deletion or anything for that matter.
In this case you can either make a copy on your Drive and automate it there, and just update the file that was shared with you every now and then, or just ask the user to move it to the "Shared Drive" and access it from there.
I confess I'm a little disappointed that there is no API way to add/delete/edit in another user's folder despite having permissions to do so. My understanding as a developer is that the CLI is the most powerful way to interact with any service; the GUI comes second, as a more visually appealing medium. Often, when we cannot perform a task using the GUI, we turn to the CLI for higher granularity and precision.
But this was a completely upside-down scenario! I fail to understand how I am able to access the shared folder and make additions and deletions through the GUI, but unable to do the same using a script. I understand now that 'Shared with me' is just a label and not a 'location' through which I can access the folder, but I would have assumed there was some other API way to access a folder that belongs to another user (using the person's username/ID for identification, the folder path as the target, verifying whether I have permission to make said changes, returning an error if I don't, and finally executing the call).
If someone's able to explain to me if there is a specific reason why this is not made available to end users, I would love to learn about it please.
EDIT
I'm a bit late posting the solution here, but the issue turned out to be that the Google Workspace service account used by my API did not have write permissions to the Shared Drive I was trying to query. Once the service account was given the required edit permissions, my code worked perfectly.
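For reference, granting that access can be done from the Shared Drive's "Manage members" UI. Programmatically, a rough sketch (not the exact code I used; manager_service, the drive ID and the address below are placeholders, where manager_service is assumed to be a Drive client authorized as a member who can manage the drive):

# Hypothetical sketch: add the service account as a member of the Shared Drive.
manager_service.permissions().create(
    fileId='SHARED_DRIVE_ID',        # a shared drive ID is a valid fileId here
    supportsAllDrives=True,
    body={
        'type': 'user',
        'role': 'fileOrganizer',     # or 'writer', depending on what is needed
        'emailAddress': 'my-sa@my-project.iam.gserviceaccount.com',
    },
).execute()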
I have two Revit model files, A and B, where B is linked into A. I want to upload the files to BIM360 Docs via the Autodesk.Forge API and keep them linked, so I can see the combined model in the Forge Model viewer when I subsequently view model A.
I have the two files in a zip file, but from what I understand, I shouldn't upload the zip file, but rather upload A and B separately, then create a relationship between them.
I can upload the files without problems, and I've then tried to link them via this code (using the NON-encoded version ids for A and B):
public async Task SetLinkedFileRelationship(string projectId, string versionId, string linkedVersionId)
{
    BaseAttributesExtensionObject baseAttribute = new BaseAttributesExtensionObject("auxiliary:autodesk.core:Attachment", "1.0");
    CreateRefDataMeta meta = new CreateRefDataMeta(baseAttribute);
    CreateRefData createRefData = new CreateRefData(CreateRefData.TypeEnum.Versions, linkedVersionId, meta);
    CreateRef createRef = new CreateRef(new JsonApiVersionJsonapi(JsonApiVersionJsonapi.VersionEnum._0), createRefData);
    VersionsApi versionsApi = new VersionsApi { Configuration = { AccessToken = _token.AccessToken } };
    await versionsApi.PostVersionRelationshipsRefAsync(projectId, versionId, createRef);
}
...which produces this response:
status: 400
code: FUNCTION_NOT_SUPPORTED
detail: BIM360 currently does not support the creation of refs.
So apparently I can't create the link between A and B like this. Is there another way to accomplish what I want, or is this currently just not possible in BIM360? I know you can do it via the BIM360 Docs web page (using the Upload file -> Linked Files button), but is it possible when I upload the model files via the API? If so, what is the recipe?
Please keep in mind that my question is for uploading to BIM360 Docs - using the Autodesk.Forge API (v2). I'm aware of this post: BIM360 Docs: Setting up external references between files (Upload Linked Files), but that is targeted at manually composing requests. I'd like to be able to use the v2 API.
I believe this post should help: https://forge.autodesk.com/blog/bim360-docs-setting-external-references-between-files-upload-linked-files
As part of a communications effort to a large user base, I need to send upwards of 75,000 emails per day. The emails of the users I'm contacting are stored in a CSV file. I've been using Postman Runner to send these requests via SendGrid (Email API), but with such a large volume, my computer either slows way down or Postman completely crashes before the batch completes. Even if it doesn't crash, it takes upwards of 3 hours to send this many POST requests via Runner.
I'd like to upload the CSV containing the emails into a Cloud Storage bucket and then access the file from a Cloud Function that sends a POST request for each email. This way, all the processing is handled by GCP rather than my personal machine. However, I can't seem to get the Cloud Function to read the CSV data line by line. I've tried using createReadStream() from the Cloud Storage Node.js client library along with csv-parser, but can't get this solution to work. Below is what I tried:
const sendGridMail = require('@sendgrid/mail');
const { Storage } = require('@google-cloud/storage');
const fs = require('fs');
const csv = require('csv-parser');

exports.sendMailFromCSV = (file, context) => {
    console.log(` Event: ${context.eventId}`);
    console.log(` Event Type: ${context.eventType}`);
    console.log(` Bucket: ${file.bucket}`);
    console.log(` File: ${file.name}`);
    console.log(` Metageneration: ${file.metageneration}`);
    console.log(` Created: ${file.timeCreated}`);
    console.log(` Updated: ${file.updated}`);

    const storage = new Storage();
    const bucket = storage.bucket(file.bucket);
    const remoteFile = bucket.file(file.name);
    console.log(remoteFile);

    let emails = [];

    fs.createReadStream(remoteFile)
        .pipe(csv())
        .on('data', function (row) {
            console.log(`Email read: ${row.email}`);
            emails.push(row.email);
            // send email using the SendGrid helper library
            const msg = {
                to: [{
                    "email": row.email
                }],
                from: "fakeemail@gmail.com",
                template_id: "fakeTemplate",
            };
            sendGridMail.send(msg).then(() =>
                context.status(200).send(file.body))
                .catch(function (err) {
                    console.log(err);
                    context.status(400).send(file.body);
                });
        })
        .on('end', function () {
            console.table(emails);
        });
};
The Cloud Function is currently triggered by an upload to the Cloud Storage bucket.
Is there a way to build a solution to this problem without loading the whole file into memory? Is Cloud Functions the right path to be moving down, or would it be better to use App Engine or some other tool? I'm willing to try any GCP solution that moves this process to the cloud.
A Cloud Function's memory can be used as a temporary directory, /tmp. Thus, you can download the CSV file from the Cloud Storage bucket into that directory as a local file, and then process it as if it were handled from a local drive (see the sketch after the list below).
At the same time, you should keep in mind two main restrictions:
Memory - up to 2 GB for everything, including files written to /tmp
Timeout - no more than 540 seconds per invocation
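A minimal sketch of that approach, assuming the Python runtime (the question uses Node.js, but the idea is the same; the function and column names are placeholders):

# Download the uploaded CSV to /tmp, then parse it with the standard csv module.
import csv
from google.cloud import storage

def handle_upload(event, context):
    local_path = '/tmp/' + event['name'].replace('/', '_')
    storage.Client().bucket(event['bucket']).blob(event['name']) \
        .download_to_filename(local_path)
    with open(local_path, newline='') as f:
        for row in csv.DictReader(f):
            print(row['email'])  # send one email per row here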
I personally would create a solution based on a combination of a few GCP resources; a sketch follows this outline.
The first Cloud Function is triggered by a 'finalize' event - when the CSV file is saved in the bucket. This function reads the file and, for every record, composes a Pub/Sub message with relevant details (enough to send an email). That message is posted to a Pub/Sub topic.
The Pub/Sub topic is used to transfer all messages from the first Cloud Function and to trigger the second Cloud Function.
The second Cloud Function is triggered by a Pub/Sub message, which contains all the details necessary to compose and send an email. As there may be 75K records in the source CSV file (for example), you should expect 75K invocations of the second function.
That may be enough at a high level. The Pub/Sub paradigm guarantees at-least-once delivery (a message may be delivered more than once), so if you need no more than one email per address, some additional resources may be required to achieve idempotent behaviour.
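A rough sketch of that pipeline, again assuming the Python runtime; the topic name, the project environment variable, and the message shape are my assumptions:

import base64
import csv
import io
import json
import os

from google.cloud import pubsub_v1, storage

TOPIC = 'email-jobs'  # assumed topic name

def fan_out_csv(event, context):
    # First function: fired by google.storage.object.finalize on the bucket.
    blob = storage.Client().bucket(event['bucket']).blob(event['name'])
    reader = csv.DictReader(io.StringIO(blob.download_as_text()))
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(os.environ['GCP_PROJECT'], TOPIC)
    futures = [
        publisher.publish(topic_path, json.dumps({'email': row['email']}).encode())
        for row in reader
    ]
    for f in futures:
        f.result()  # make sure everything is flushed before the function exits

def send_one_email(event, context):
    # Second function: fired once per Pub/Sub message; a failure retries
    # only this one message, not the whole batch.
    payload = json.loads(base64.b64decode(event['data']))
    print('would send to', payload['email'])  # call the mail provider here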
Basically, you would have to download the file locally to the Cloud Function instance to be able to read it in this way.
Now, there are multiple options to work around this.
The most basic/simplest is to provision a Compute Engine machine and run this operation from there, if it is a one-off event.
If you need to do this more frequently (i.e. daily), you can use an online tool to convert your CSV file into JSON and import it into Firestore; you can then read the emails from Firestore a lot faster.
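If you take the Firestore route, reading the addresses back might look like this (a sketch; the collection and field names are assumptions):

from google.cloud import firestore

db = firestore.Client()
emails = [doc.to_dict()['email'] for doc in db.collection('emails').stream()]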
I am trying to insert an image (PNG) into a Google Slides presentation using the Slides API. I do this by first uploading the image to the user's Drive, obtaining the URL, passing that along to the Slides API via the correct request, and then deleting the image file.
What used to work as of a few weeks ago:
image_url = '%s&access_token=%s' % (
    drive_service.files().get_media(fileId=image_file_id).uri,
    creds.token)
However, there have been changes to the Drive API such that URLs constructed this way no longer work.
I am having difficulty figuring out the new correct URL to use here. The options as per the doc that describes the change are:
Use webContentLink -- Downloads
Use webViewLink -- View
Use exportLinks -- Export
I use code that looks like this to get these links:
upload = drive_service.files().create(
    body={'name': 'My Image File'},
    media_body=media_body,
    fields='webContentLink, id, webViewLink').execute()
image_url = upload.get('webContentLink')
I have tried both #1 and #2 and get the following error:
"Invalid requests[0].createImage: The provided image is in an unsupported format."
I have also been receiving the following error intermittently:
"Invalid requests[0].createImage: Access to the provided image was forbidden."
I verified that I am able to download / view the image from the URLs generated in #1 and #2. I didn't try #3 since I am not trying to export to a different format.
What would be the best way to go about figuring out the correct URL to use?
From your script, I think that the reason for your issue is the change described in "Upcoming changes to the Google Drive API and Google Picker API" (see References below). Because of that change, the access_token query parameter can no longer be used. Under this situation, when image_url = '%s&access_token=%s' % (drive_service.files().get_media(fileId=image_file_id).uri, creds.token) is used, the login page is returned instead of the image, and that error occurs. So, as a workaround, how about the following flow?
Flow:
Upload a PNG file.
Publicly share the PNG file by creating a permission.
Insert the PNG file to Slides.
Close the shared PNG file by deleting the permission.
When the image file is put into the Slides, even if the file's permission is later deleted, the image is not removed from the Slides. This workaround takes advantage of that behavior.
Sample script:
For the above flow, the sample Python script is as follows. Please set the variables uploadFilename, presentation_id and pageObjectId.
uploadFilename = './sample.png'  # Please set the filename with the path.
presentation_id = '###'  # Please set the Google Slides ID.
pageObjectId = '###'  # Please set the page ID of the Slides.

drive = build('drive', 'v3', credentials=creds)
slides = build('slides', 'v1', credentials=creds)

# 1. Upload a PNG file from the local PC.
file_metadata = {'name': uploadFilename}
media = MediaFileUpload(uploadFilename, mimetype='image/png')
upload = drive.files().create(body=file_metadata, media_body=media, fields='webContentLink, id, webViewLink').execute()
fileId = upload.get('id')
url = upload.get('webContentLink')

# 2. Share the uploaded PNG file publicly by creating a permission.
drive.permissions().create(fileId=fileId, body={'type': 'anyone', 'role': 'reader'}).execute()

# 3. Insert the PNG file into the Slides.
body = {
    "requests": [
        {
            "createImage": {
                "url": url,
                "elementProperties": {
                    "pageObjectId": pageObjectId
                }
            }
        }
    ]
}
slides.presentations().batchUpdate(presentationId=presentation_id, body=body).execute()

# 4. Delete the permission. By this, the shared PNG file is closed.
drive.permissions().delete(fileId=fileId, permissionId='anyoneWithLink').execute()
Note:
From your script, I thought that you might be using google-api-python-client with Python, so I proposed a Python sample script.
In this case, the scopes for using the Slides API and the Drive API are required. Please be careful about this.
In the case of Google Apps Script, you can see a sample script here.
References:
Upcoming changes to the Google Drive API and Google Picker API
Permissions: create
Permissions: delete
If I misunderstood your question and this was not the direction you want, I apologize.
I was running into the same error even when using the flow of granting temporary permissions and then removing them after calling .createImage() or .replaceAllShapesWithImage().
I also ran into this error when creating permissions for a folder containing those images: "Invalid requests[0].replaceAllShapesWithImage: Access to the provided image was forbidden." I'm not sure why the permissions do not propagate to the images...
Following Kos' comment, switching to the JPG file type worked for me.
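If converting to JPG also helps in your case, a minimal sketch with Pillow (my assumption; any image library would do):

from PIL import Image

# JPEG has no alpha channel, so flatten the PNG to RGB before saving.
img = Image.open('sample.png').convert('RGB')
img.save('sample.jpg', 'JPEG', quality=95)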
Edit:
It appears I am also required to set the scope to 'https://www.googleapis.com/auth/drive' in order for it to work, which isn't ideal, but is sufficient for now.
Edit 2:
Never mind, it appears to be inconsistent. I am running into the permissions access error again. Deleting my token.pickle does not seem to fix it either.
When using this line of code in a Google Apps Script
var user = folders[n].getOwner().getEmail()
I get an error saying I am not authorized to perform such an action (your wording may vary; I am translating from Italian).
What gives? I am just retrieving a piece of information, namely the owner of a folder.
When the script processes a folder I own, the error does not arise; it arises when the script encounters a folder that is not mine. The thing is, this line of code is there precisely to spot folders which are not mine, to avoid calling methods that would correctly raise an error, like setTrashed. The script looks for empty folders to delete, but of course I cannot delete folders I do not own. And yes, I am on Google Apps for Business; does that make a difference?
There isn't any specific warning about file.getOwner().getEmail(), but there is for Class Session:
In limited-privilege executions (such as in response to onOpen or onEdit), we only return the identity of the active user if both the user and the script owner are part of the same domain. This is to protect the privacy of consumer users, who may not want their email address exposed.
I have no problem with this in a consumer account.
The following function is an excerpt from a gist I posted for a previous question. It wraps the call to .getEmail() (or getUserLoginId() if you prefer) in a try ... catch block, so it avoids errors for users crossing Apps Domains.
function getFileInfo(file, fileType) {
  var fileInfo = {
    id: file.getId(),
    name: file.getName(),
    size: file.getSize(),
    type: (fileType == "file") ? docTypeToText_(file.getFileType()) : "folder", // docTypeToText_ is defined in the gist
    created: file.getDateCreated(),
    description: file.getDescription(),
    owner: file.getOwner()
  }
  try {
    fileInfo.owner = file.getOwner().getEmail() //.getUserLoginId()
  } catch(e) {
    // Possible permission problem
    fileInfo.owner = "unknown";
  }
  return fileInfo;
}
UPDATE: Since this was first posted, something has changed. Now my consumer account encounters the aforementioned error when trying to access getOwner() for a file shared from another account. (March 3, 2013)