Python Google Drive API file-delete() method broken - google-drive-api

I cannot get the Google Drive files().delete() method to work via the Python API; it appears to have no effect.
Some details about my setup:
Ubuntu 16.04
Python 3.5.2 (default, Nov 12 2018, 13:43:14)
google-api-python-client (1.7.9)
google-auth (1.6.3)
google-auth-httplib2 (0.0.3)
google-auth-oauthlib (0.3.0)
Below is a Python script that reproduces the problem:
"""
googdrive17.py
This script should delete files named 'hello.txt'
Ref:
https://developers.google.com/drive/api/v3/quickstart/python
https://developers.google.com/drive/api/v3/reference/files
Demo (Ubuntu):
sudo apt install python3-pip
sudo pip3 install --upgrade google-api-python-client
sudo pip3 install --upgrade google-auth-httplib2
sudo pip3 install --upgrade google-auth-oauthlib
python3 googdrive17.py
"""
import pickle
import os.path
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
# I s.declare a very permissive scope (for training only):
SCOPES = ['https://www.googleapis.com/auth/drive']
creds = None
# The file token.pickle stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first time.
if os.path.exists('token.pickle'):
with open('token.pickle', 'rb') as fh:
creds = pickle.load(fh)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server()
# Save the credentials for the next run
with open('token.pickle', 'wb') as token:
pickle.dump(creds, token)
# I s.create a file so I can upload it:
with open('/tmp/hello.txt','w') as fh:
fh.write("hello world\n")
# From my laptop, I s.upload a file named hello.txt:
drive_service = build('drive', 'v3', credentials=creds)
file_metadata = {'name': 'hello.txt'}
media = MediaFileUpload('/tmp/hello.txt', mimetype='text/plain')
create_response = drive_service.files().create(body=file_metadata,
media_body=media,
fields='id').execute()
file_id = create_response.get('id')
print('new /tmp/hello.txt file_id:')
print(file_id)
# Q: With googleapiclient, how to filter files list()-response?
# A1: https://developers.google.com/drive/api/v3/reference/files/list
# A2: https://developers.google.com/drive/api/v3/search-files
list_response = drive_service.files().list(
orderBy = "createdTime desc",
q = "name='hello.txt'",
pageSize = 22,
fields = "files(id, name)"
).execute()
items = list_response.get('files', [])
if items:
for item in items:
print('I will try to delete this file:')
print(u'{0} ({1})'.format(item['name'], item['id']))
del_response = drive_service.files().delete(fileId=item['id'])
print('del_response.body:')
print( del_response.body)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash()
print('trash_response.body:')
print( trash_response.body)
else:
print('hello.txt not found in your google-drive account.')
When I run the script I see output similar to that listed below:
$ python3 googdrive17.py
new /tmp/hello.txt file_id:
1m8nKOfIeB0E5t60F_-9bKwIJds8PSvYY
I will try to delete this file:
hello.txt (1m8nKOfIeB0E5t60F_-9bKwIJds8PSvYY)
del_response.body:
None
I will try to delete this file:
hello.txt (1Ow4fcUBgEYUy3ezYScDKlLSMbp-hyOLT)
del_response.body:
None
I will try to delete this file:
hello.txt (1TiUrLgQdY1Cb9w0UWHjnmj7HZBaFsKcp)
del_response.body:
None
I will try to emptyTrash:
trash_response.body:
None
$
I see that two of the API calls work well:
files.list()
files.create()
Two calls appear broken:
files.delete()
files.emptyTrash()
Perhaps, though, I call them incorrectly?

How about this modification?
First, the official documentation for the Files: delete method and the Files: emptyTrash method says the following:
If successful, this method returns an empty response body.
So when the file has been deleted and the trash has been cleared, the returned del_response and trash_response are empty.
Modified script:
From your question I understand that files.list() and files.create() work, so I would like to propose modifications for files.delete() and files.emptyTrash(). Please modify your script as follows.
From:
for item in items:
    print('I will try to delete this file:')
    print(u'{0} ({1})'.format(item['name'], item['id']))
    del_response = drive_service.files().delete(fileId=item['id'])
    print('del_response.body:')
    print(del_response.body)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash()
print('trash_response.body:')
print(trash_response.body)
To:
for item in items:
    print('I will try to delete this file:')
    print(u'{0} ({1})'.format(item['name'], item['id']))
    del_response = drive_service.files().delete(fileId=item['id']).execute()  # Modified
    print('del_response.body:')
    print(del_response)
print('I will try to emptyTrash:')
trash_response = drive_service.files().emptyTrash().execute()  # Modified
print('trash_response.body:')
print(trash_response)
execute() was added for drive_service.files().delete() and drive_service.files().emptyTrash().
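To see why the original script printed None, note that the client library builds request objects lazily: files().delete(...) only constructs a request object whose body attribute is the request payload (empty for a DELETE), and nothing is sent until execute() is called. A minimal stand-in (not the real googleapiclient classes, just an illustration of the pattern) sketches that behavior:

```python
class FakeHttpRequest:
    """Stand-in for googleapiclient's HttpRequest (illustration only)."""

    def __init__(self):
        self.body = None   # a DELETE request carries no payload
        self.sent = False

    def execute(self):
        # The HTTP call happens here; Drive returns an empty body on success.
        self.sent = True
        return ''


req = FakeHttpRequest()   # like drive_service.files().delete(fileId=...)
print(req.body)           # None - nothing has been sent yet
result = req.execute()    # the deletion would actually happen here
print(repr(result))       # '' - the documented empty response body
```

So printing del_response.body without execute() shows the (empty) request payload, not a server response.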
References:
Files: delete
Files: emptyTrash
If this was not the result you want, I apologize.

Related

AWS API re-deployment using ansible

I have an existing API in my AWS account. Now I am trying to use Ansible to redeploy the API after introducing any resource policy changes.
According to AWS, I need to use the CLI command below to redeploy the API:
- name: deploy API
  command: >
    aws apigateway update-stage --region us-east-1 \
        --rest-api-id <rest-api-id> \
        --stage-name 'stage' \
        --patch-operations op='replace',path='/deploymentId',value='<deployment-id>'
Above, the 'deploymentId' from the previous deployment will be different after every deployment, which is why I am trying to store it in a variable so the redeployment steps can be automated.
I can get the previous deployment information using the CLI below:
- name: Get deployment information
  command: >
    aws apigateway get-deployments \
        --rest-api-id 123454ne \
        --region us-east-1
  register: deployment_info
And output looks like this:
deployment_info.stdout_lines:
- '{'
- ' "items": ['
- ' {'
- ' "id": "abcd",'
- ' "createdDate": 1228509116'
- ' }'
- ' ]'
- '}'
I was using deployment_info.items.id as deploymentId but couldn't make it work. Now I am stuck on what the Ansible expression should be to extract the id from the output and use it as deploymentId in the deployment command.
How can I use this id for deploymentId in the deployment command?
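One reason the dotted access fails is that deployment_info.stdout is a plain string, so it has to be parsed as JSON before the id can be reached. A sketch of the extraction in plain Python (the stdout string below just mirrors the example output above; in Ansible itself the equivalent would be the from_json filter, e.g. "{{ (deployment_info.stdout | from_json)['items'][0]['id'] }}" stored via set_fact):

```python
import json

# What `aws apigateway get-deployments` printed (per the output above)
stdout = '{"items": [{"id": "abcd", "createdDate": 1228509116}]}'

data = json.loads(stdout)
deployment_id = data["items"][0]["id"]
print(deployment_id)  # abcd
```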
I created a small Ansible module which you might find useful:
#!/usr/bin/python
# Creates a new deployment for an API GW stage
# See https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html
# Based on https://github.com/ansible-collections/community.aws/blob/main/plugins/modules/aws_api_gateway.py

# TODO needed?
# from __future__ import absolute_import, division, print_function
# __metaclass__ = type

import json
import traceback

try:
    import botocore
except ImportError:
    pass  # Handled by AnsibleAWSModule

from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry


def main():
    argument_spec = dict(
        api_id=dict(type='str', required=True),
        stage=dict(type='str', required=True),
        deploy_desc=dict(type='str', required=False, default='')
    )
    module = AnsibleAWSModule(
        argument_spec=argument_spec,
        supports_check_mode=True
    )
    api_id = module.params.get('api_id')
    stage = module.params.get('stage')
    client = module.client('apigateway')

    # Update stage if not in check_mode
    deploy_response = None
    changed = False
    if not module.check_mode:
        try:
            deploy_response = create_deployment(client, api_id, **module.params)
            changed = True
        except (botocore.exceptions.ClientError, botocore.exceptions.EndpointConnectionError) as e:
            msg = "Updating api {0}, stage {1}".format(api_id, stage)
            module.fail_json_aws(e, msg)

    exit_args = {"changed": changed, "api_deployment_response": deploy_response}
    module.exit_json(**exit_args)


retry_params = {"retries": 10, "delay": 10, "catch_extra_error_codes": ['TooManyRequestsException']}


# @AWSRetry.jittered_backoff(**retry_params)
def create_deployment(client, rest_api_id, **params):
    result = client.create_deployment(
        restApiId=rest_api_id,
        stageName=params.get('stage'),
        description=params.get('deploy_desc')
    )
    return result


if __name__ == '__main__':
    main()

How do I solve “AttributeError: 'Resource' object has no attribute 'documents'” error?

I have been working on this for the last 3 days and I don't have any idea. There are similar questions, but I didn't find my answer in them.
This is the error. The file is created; I want to use its ID and edit the Docs file in Drive:
File ID: 1U4XUrAhMk1WFAKE_IDqmQcteYqmIWPMFEd
Traceback (most recent call last):
  File "main.py", line 61, in <module>
    main()
  File "main.py", line 53, in main
    service.documents()
AttributeError: 'Resource' object has no attribute 'documents'
My goal:
create a Docs file in Google Drive
insert a table into it
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
import sys
from gdoctableapppy import gdoctableapp

# If modifying these scopes, delete the file token.pickle.
SCOPES = ["https://www.googleapis.com/auth/drive"]


def main():
    """Shows basic usage of the Drive v3 API.
    Prints the names and ids of the first 10 files the user has access to.
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists("token.pickle"):
        with open("token.pickle", "rb") as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open("token.pickle", "wb") as token:
            pickle.dump(creds, token)

    service = build("drive", "v3", credentials=creds)
    serviceDoc = build("docs", "v1", credentials=creds)

    # Call the Drive v3 API
    # Create Google Docs file in folder
    file_metadata = {
        "name": sys.argv[1],
        "parents": ["Folder ID"],
    }
    file = service.files().create(body=file_metadata, fields="id").execute()
    print("File ID: %s" % file.get("id"))
    DOCUMENT_ID = file.get("id")
    requests = [{"insertTable": {"rows": 2, "columns": 2, "location": {"index": 1}}}]
    result = (
        service.documents()
        .batchUpdate(documentId=DOCUMENT_ID, body={"requests": requests})
        .execute()
    )
    return


if __name__ == "__main__":
    main()
The reason you are encountering this error is that your service variable is built for the Drive API, which has no documents() method.
Use serviceDoc instead:
result = (
    serviceDoc.documents()
    .batchUpdate(documentId=DOCUMENT_ID, body={"requests": requests})
    .execute()
)
In addition:
I noticed that mimeType is not part of your file_metadata when you create the Docs file. If you create files without a specific mimeType, your newly created file will be application/octet-stream. See Create Files.
If you want to create a Google Docs file using the Drive API, please add "mimeType": "application/vnd.google-apps.document" to your file_metadata.
Sample:
file_metadata = {
    "name": sys.argv[1],
    "mimeType": "application/vnd.google-apps.document",
    "parents": ["Folder ID"]
}
Reference:
Google Workspace and Drive MIME Types

Fail to load a .pth file (pre-trained neural network) using torch.load() on google colab

My Google Drive is linked to my Google Colab notebook. Using the PyTorch library, torch.load($PATH) fails to load this 219 MB file (a pre-trained neural network, https://drive.google.com/drive/folders/1-9m4aVg8Hze0IsZRyxvm5gLybuRLJHv-), which is in my Google Drive. However, it works fine when I do it locally on my computer. The error I get on Google Colab (settings: Python 3.6, PyTorch 1.3.1) is:
state_dict = torch.load(model_path)['state_dict']
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 303, in load
    return _load(f, map_location, pickle_module)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 454, in _load
    return legacy_load(f)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 380, in legacy_load
    with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar, \
  File "/usr/lib/python3.6/tarfile.py", line 1589, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib/python3.6/tarfile.py", line 1619, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python3.6/tarfile.py", line 1482, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python3.6/tarfile.py", line 2297, in next
    tarinfo = self.tarinfo.fromtarfile(self)
  File "/usr/lib/python3.6/tarfile.py", line 1092, in fromtarfile
    buf = self.fileobj.read(BLOCKSIZE)
OSError: [Errno 5] Input/output error
Any help would be much appreciated!
Large files on Drive are automatically scanned for viruses; every time you attempt to download a large file you have to pass through this scan, which makes it hard to reach the download link directly.
You could download the file using the Drive API and then pass it to torch; it shouldn't be hard to implement in Python. I've made a sample of how to download your file and pass it to torch.
from __future__ import print_function
import torch
import pickle
import os.path
import io
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.http import MediaIoBaseDownload

url = "https://drive.google.com/file/d/1RwpuwNPt_r0M5mQGEw18w-bCfKVwnZrs/view?usp=sharing"

# If modifying these scopes, delete the file token.pickle.
SCOPES = (
    'https://www.googleapis.com/auth/drive',
)


def main():
    """Downloads a file from Drive and loads it with torch."""
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    drive_service = build('drive', 'v2', credentials=creds)
    file_id = '1RwpuwNPt_r0M5mQGEw18w-bCfKVwnZrs'
    request = drive_service.files().get_media(fileId=file_id)
    # fh = io.BytesIO()
    fh = open('file', 'wb')
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while done is False:
        status, done = downloader.next_chunk()
        print("Download %d%%." % int(status.progress() * 100))
    fh.close()
    torch.load('file')


if __name__ == '__main__':
    main()
To run it you'll first have to:
Enable the Drive API for your account
Install the Google Drive API libraries
This takes no more than 3 minutes and is properly explained in the Quickstart Guide for the Google Drive API; just follow steps 1 and 2, then run the provided sample code from above.
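The while-loop in the sample above is the standard MediaIoBaseDownload pattern: each next_chunk() call appends one chunk to the file handle and reports progress. A stand-in downloader (not the real googleapiclient class, just an illustration of the control flow) shows the same shape:

```python
import io


class FakeDownloader:
    """Mimics MediaIoBaseDownload's next_chunk() loop (illustration only)."""

    def __init__(self, fh, data, chunksize=4):
        self._fh = fh
        self._data = data
        self._pos = 0
        self._chunksize = chunksize

    def next_chunk(self):
        end = min(self._pos + self._chunksize, len(self._data))
        self._fh.write(self._data[self._pos:end])   # append one chunk
        self._pos = end
        progress = self._pos / len(self._data)
        return progress, self._pos >= len(self._data)


fh = io.BytesIO()
downloader = FakeDownloader(fh, b"0123456789")
done = False
while done is False:
    progress, done = downloader.next_chunk()
    print("Download %d%%." % int(progress * 100))
print(fh.getvalue())  # b'0123456789'
```

The real class works the same way, except the chunks come over HTTP from the Drive media endpoint.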
It worked by uploading the file directly to Google Colab instead of loading it from Google Drive, using:
from google.colab import files
uploaded = files.upload()
I guess this solution is similar to the one proposed by @Yuri.

Google Docs API for creating a blank document does not create anything. How do I make it create a blank Google Docs file?

I am trying to create a Google Docs file using the API with Python.
I have followed every instruction on their API Guides and Reference page, including creating their quickstart script:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly']


def main():
    SCOPES = ['https://www.googleapis.com/auth/drive.file']
    """Shows basic usage of the Drive v3 API.
    Prints the names and ids of the first 10 files the user has access to.
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('drive', 'v3', credentials=creds)

    # Call the Drive v3 API
    results = service.files().list(
        pageSize=10, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])
    if not items:
        print('No files found.')
    else:
        print('Files:')
        for item in items:
            print(u'{0} ({1})'.format(item['name'], item['id']))

    title = 'My Document'
    body = {
        'title': title
    }
    doc = service.files() \
        .create(body=body).execute()
    print('Created document with title: {0}'.format(
        doc.get('title')))


if __name__ == '__main__':
    main()
I expected a Google Docs file to be created, but instead the script returned: Created document with title: None.
It returns no errors, but clearly something is missing, since the file is not created.
I am quite frustrated because I spent 9 hours trying to get Google Drive's own script to work. The code is a direct copy-paste from the Google Drive and Docs API documentation, except that I changed the scope from SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly'] to SCOPES = ['https://www.googleapis.com/auth/drive.file'], because with the former it was crashing, and the API documentation advises using the latter scope when creating files.
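For what it's worth, the silent None here likely comes from a v2/v3 field rename: Drive v2 metadata used title, but Drive v3 uses name, so a v3 files().create() call simply ignores the unrecognized title key and creates an untitled file. A sketch of metadata the v3 endpoint understands (the name value is just an example):

```python
# Drive v2 called this field 'title'; Drive v3 renamed it to 'name'.
body_v2 = {'title': 'My Document'}   # the 'title' key is ignored by v3

# A Google Docs file created via Drive v3 also needs an explicit mimeType:
body_v3 = {
    'name': 'My Document',
    'mimeType': 'application/vnd.google-apps.document',
}
print('title' in body_v3)  # False - v3 does not use this key
```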
Edit:
Current script:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/documents']


def main():
    """Shows basic usage of the Docs API.
    Prints the title of a sample document.
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('docs', 'v1', credentials=creds)

    title = 'My Document'
    body = {
        'title': title
    }
    doc = service.documents() \
        .create(body=body).execute()
    print('Created document with title: {0}'.format(
        doc.get('title')))
    return


if __name__ == '__main__':
    main()
I get the following error:
Traceback (most recent call last):
  File "create-teamwork-sops.py", line 137, in <module>
    main()
  File "create-teamwork-sops.py", line 131, in main
    .create(body=body).execute()
  File "C:\Python27\lib\site-packages\googleapiclient\_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "C:\Python27\lib\site-packages\googleapiclient\http.py", line 855, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://docs.googleapis.com/v1/documents?alt=json returned "Request had insufficient authentication scopes.">
Note: every time the value of SCOPES is changed, the file token.pickle needs to be deleted. On the next run the script will ask you to log into Google Drive again and will create a new token.pickle file, which allows the new scope to be taken into account.
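That manual delete step can be automated: the cached credentials record the scopes they were granted, so a small helper can drop the stale token whenever the requested scopes change. This helper is hypothetical (not part of the Google libraries); it assumes the pickled credentials expose a .scopes attribute, as google.oauth2.credentials.Credentials does:

```python
import os
import pickle


def drop_stale_token(token_path, requested_scopes):
    """Delete the cached token if it was granted different scopes.

    Returns True when the token file was removed, False otherwise.
    """
    if not os.path.exists(token_path):
        return False
    with open(token_path, 'rb') as fh:
        creds = pickle.load(fh)
    granted = set(getattr(creds, 'scopes', None) or [])
    if granted != set(requested_scopes):
        os.remove(token_path)   # forces a fresh login on the next run
        return True
    return False
```

Calling drop_stale_token('token.pickle', SCOPES) right before the os.path.exists('token.pickle') check would make scope changes take effect without remembering to delete the file by hand.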
Working script:
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

SCOPES = ['https://www.googleapis.com/auth/drive']


def main():
    """Shows basic usage of the Docs API.
    Prints the title of a sample document.
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    # service = build('docs', 'v1', credentials=creds)
    service = build('drive', 'v3', credentials=creds)

    # title = 'My Document'
    # body = {
    #     'title': title
    # }
    # doc = service.documents() \
    #     .create(body=body).execute()
    # print('Created document with title: {0}'.format(
    #     doc.get('title')))

    # get folder ID
    page_token = None
    while True:
        response = service.files().list(q="mimeType = 'application/vnd.google-apps.folder'",
                                        spaces='drive',
                                        fields='nextPageToken, files(id, name)',
                                        pageToken=page_token).execute()
        for file in response.get('files', []):
            # Process change
            print('Found file: %s (%s)' % (file.get('name'), file.get('id')))
            if file.get('name') == "SOPs":
                folder_id = file.get('id')
                break
        page_token = response.get('nextPageToken', None)
        if page_token is None:
            break

    # create Google Docs file in folder
    file_metadata = {
        'name': 'my doc 2',
        'parents': [folder_id]
    }
    # media = MediaFileUpload('files/photo.jpg',
    #                         mimetype='image/jpeg',
    #                         resumable=True)
    file = service.files().create(body=file_metadata,
                                  # media_body=media,
                                  fields='id').execute()
    print('File ID: %s' % file.get('id'))
    return


if __name__ == '__main__':
    main()
You want to create a new Google Document using the Docs API.
You want to put the newly created Google Document in a specific folder.
You want to achieve this using google-api-python-client with Python.
This is how I understood your question. If my understanding is correct, unfortunately, when a new Google Document is created by the Docs API, the Document is placed in the root folder. So when you want to create the new Document directly in a specific folder, please use the Drive API. The modified script is as follows.
From:
body = {
    'title': title
}
To:
body = {
    'name': title,
    'mimeType': 'application/vnd.google-apps.document',
    'parents': ['### folder ID ###']
}
Please set the folder ID to 'parents': ['### folder ID ###'].
Note:
Of course, after the new Document has been created in the root folder by the Docs API, the file can be moved to the specific folder using the Drive API. In that case, however, 2 API calls are used, which is why I proposed the modification above.
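For completeness, that two-call variant would first documents().create(...) and then move the file with Drive's files().update(...), using the addParents and removeParents parameters. A sketch of just the update arguments (the helper and the IDs are hypothetical, for illustration):

```python
def build_move_call(file_id, dest_folder_id, current_parents):
    """Keyword arguments for drive_service.files().update(...) that move
    a file from its current parents into dest_folder_id."""
    return {
        'fileId': file_id,
        'addParents': dest_folder_id,
        'removeParents': ','.join(current_parents),
        'fields': 'id, parents',
    }


kwargs = build_move_call('doc-id', 'folder-id', ['root-id'])
print(kwargs['addParents'])  # folder-id
```

In real code you would pass these as drive_service.files().update(**kwargs).execute(), after fetching the file's current parents with files().get(fileId=..., fields='parents').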
If you want to create the new Google Document using the Docs API instead, please modify as follows. This modified script supposes that you are already able to set and get values for a Google Document using the Docs API.
From:
doc = service.files() \
    .create(body=body).execute()
To:
serviceForDocs = build('docs', 'v1', credentials=creds)
doc = serviceForDocs.documents().create(body=body).execute()
References:
Method: documents.create of Docs API
Files: create of Drive API

How do I solve "AttributeError: 'Resource' object has no attribute 'documents'" error?

I'm programming a script to create multiple Google Docs files.
I have followed the Google Docs API Guides:
Quickstart (https://developers.google.com/docs/api/quickstart/python)
and Creating and managing documents (https://developers.google.com/docs/api/how-tos/documents)
from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request


def main():
    """Shows basic usage of the Drive v3 API.
    Prints the names and ids of the first 10 files the user has access to.
    """
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)

    service = build('drive', 'v3', credentials=creds)

    # Call the Drive v3 API
    results = service.files().list(
        pageSize=10, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])
    if not items:
        print('No files found.')
    else:
        print('Files:')
        for item in items:
            print(u'{0} ({1})'.format(item['name'], item['id']))

    print("Creating blank Google Docs file")
    title = 'My Document'
    body = {
        'title': title
    }
    doc = service.documents() \
        .create(body=body).execute()
    print('Created document with title: {0}'.format(
        doc.get('title')))
    return


if __name__ == '__main__':
    main()
Note:
The indentation around print("Creating blank Google Docs file") and return may not display correctly here. That block should be indented once (directly inside main), not twice; it is not part of the "else: print('Files:')" branch.
Instead of creating a Google Docs file I get the following error:
Traceback (most recent call last):
  File "create-teamwork-sops.py", line 234, in <module>
    main()
  File "create-teamwork-sops.py", line 227, in main
    doc = service.documents() \
AttributeError: 'Resource' object has no attribute 'documents'