I am developing an add-on for Google Drive.
As part of this add-on's functionality, I would like to incorporate a Google VM server that performs some processing on a user's Google Drive files (i.e., you click on a file through the add-on, the add-on sends a request with a download link to the server, the server downloads the file, and finally responds with some helpful information about the file). Both the Apps Script for the add-on and the VM instance are connected to the same Google Cloud project.
I am struggling with Google's OAuth 2.0 system and how I can connect the authorization of the add-on and the VM instance together.
Currently, when users open the add-on for the first time, they are brought to the standard authorization screen.
Once they authorize, my add-on has access to all the expected scopes, including read access to Google Drive files.
Now I want my server to have access to them as well. Unfortunately, I do not understand how to do this.
I have tried simply requesting the URL returned from file.getDownloadUrl() in Python. While the request returns a status code of 200, I cannot seem to get the file to download.
I have also looked into the Google Drive API for Python (I am running a Flask server). Unfortunately, it appears that I need an entirely new authorization flow to make it work.
Any clarity on this issue would be greatly appreciated. Frankly, I find Google's documentation on this matter very scattered and confusing, so even knowing the right place to look would be extremely helpful.
Much Thanks!
EDIT
I am adding some additional code to help provide some clarity. This is currently how I make a request to my server from the add-on:
var route = 'http://exampleurl.com/process';
var data = {
  'oAuthToken': ScriptApp.getOAuthToken(),
  'stateToken': ScriptApp.newStateToken().withTimeout(120).createToken(),
  'fileId': e.parameters.fileId,
  'fileType': e.parameters.fileMimeType
};
var options = {
  'method': 'post',
  'payload': data
};
var response = UrlFetchApp.fetch(route, options);
This code successfully sends information to my vm instance running my server.
Now, I need to authorize the server to download the file specified by fileId.
When developing, I closely followed this tutorial to set up OAuth 2.0 access to the Drive API. Here are two key routes:
@app.route('/google/login')
@no_cache
def login():
    session = OAuth2Session(CLIENT_ID, CLIENT_SECRET,
                            scope=AUTHORIZATION_SCOPE,
                            redirect_uri=AUTH_REDIRECT_URI)
    uri, state = session.create_authorization_url(AUTHORIZATION_URL)
    flask.session[AUTH_STATE_KEY] = state
    flask.session.permanent = True
    return flask.redirect(uri, code=302)

@app.route('/google/auth')
@no_cache
def google_auth_redirect():
    req_state = flask.request.args.get('state', default=None, type=None)
    if req_state != flask.session[AUTH_STATE_KEY]:
        response = flask.make_response('Invalid state parameter', 401)
        return response
    session = OAuth2Session(CLIENT_ID, CLIENT_SECRET,
                            scope=AUTHORIZATION_SCOPE,
                            state=flask.session[AUTH_STATE_KEY],
                            redirect_uri=AUTH_REDIRECT_URI)
    oauth2_tokens = session.fetch_access_token(
        ACCESS_TOKEN_URI,
        authorization_response=flask.request.url)
    flask.session[AUTH_TOKEN_KEY] = oauth2_tokens
    return flask.redirect(BASE_URI, code=302)
Is there a way to plug the two tokens I generate from the add-on into this OAuth flow? It appears that Google isn't anticipating this setup, given that I am required to provide a redirect URL, which wouldn't make much sense with my add-on/server stack.
Currently, you can successfully send the access token from the Apps Script project (retrieved with getOAuthToken()) to the Flask server.
Since you already have the access token, you don't need to go through the whole OAuth process defined there (using the application credentials to request the access token, providing user consent, redirecting, etc.). Sending the access token along with the API request is actually the last step in the OAuth flow.
You just need to use the token to build the service, and the server will be able to access the file.
Using access token to build service:
import flask
from googleapiclient.discovery import build
import google.oauth2.credentials

ACCESS_TOKEN = flask.request.form['oAuthToken']  # Token sent from Apps Script via UrlFetchApp
creds = google.oauth2.credentials.Credentials(ACCESS_TOKEN)
drive_service = build('drive', 'v3', credentials=creds)  # Build the Drive service
Using Drive service to download files:
After you've built the Drive service to access the API, you can use this code sample to download Google Documents, or this one for other files (both examples include a tab with a Python sample).
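If it helps, here is a minimal sketch of both download paths in one place. The helper name download_file and the choice of PDF as the export format are my own assumptions, not taken from the linked samples, and error handling is omitted:

import io
from googleapiclient.http import MediaIoBaseDownload

def download_file(drive_service, file_id, mime_type):
    if mime_type.startswith('application/vnd.google-apps'):
        # Google Docs/Sheets/Slides have no raw bytes; they must be
        # exported to a concrete format (PDF here, as an example)
        request = drive_service.files().export_media(
            fileId=file_id, mimeType='application/pdf')
    else:
        # Binary files (plain PDFs, images, etc.) can be fetched directly
        request = drive_service.files().get_media(fileId=file_id)

    fh = io.BytesIO()
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while not done:
        status, done = downloader.next_chunk()  # downloads in chunks
    return fh.getvalue()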
Note:
You might need to install the Google client library if you haven't done so.
Reference:
Using OAuth 2.0 to Access Google APIs
Related
I'm trying to download a Google doc to PDF or Sheet to XLS given an ID programmatically from the CLI.
Steps I've tried so far:
- Tried to contact support, but I can't see a (?) help icon
- Googled for 10 minutes... I think the Google Drive API does this (not sure)
- Enabled the Google Drive API
- Signed up for a GCP project
- Navigated through the UI to enable the API
- Tried the GET API, which returns a 400 "Invalid field selection" error when using the fields parameter with the ID of the document
I'm a bit stuck now and I am not sure how to proceed. Any suggestions?
Warning: hopefully informative wall of text ahead! I've also uploaded the full Jupyter Notebook for you to clone and run here since, as you've realized, putting this sort of stuff together can be challenging.
Since we're going to be exporting files via the Google Drive API, we need credentials for that scope as detailed in https://developers.google.com/drive/api/v3/reference/files/export#auth.
However, first we need to choose an authentication method as detailed in https://developers.google.com/identity/protocols/oauth2#scenarios.
Since you mentioned creating a GCP project, I assume you're interested in using a GCP service account, as detailed in https://developers.google.com/identity/protocols/oauth2#serviceaccount. You can create a service account at https://console.developers.google.com/apis/credentials, or as explained in https://developers.google.com/identity/protocols/oauth2/service-account#creatinganaccount.
Make sure to enable domain-wide delegation for that service account while creating it, and grant it the https://www.googleapis.com/auth/drive scope under https://admin.google.com/ac/owl/domainwidedelegation; otherwise you won't be able to impersonate other users, including yourself, and download their files.
We then use the SERVICE_ACCOUNT_FILE we just downloaded and the SCOPES we defined to create a Credentials object.
However, you'll need to first install the Python bindings for the Google API as per https://developers.google.com/drive/api/v3/quickstart/python (pip3 install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib)
With that, the following should be enough to authenticate to the API:
from googleapiclient.discovery import build
from google.oauth2 import service_account
SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'credentials.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
# Remember, you must have created credentials.json with domain-wide delegation!
credentials = credentials.with_subject('user@example.com')
# We then build a drive_v3 service using the credentials we just created
service = build('drive', 'v3', credentials=credentials)
We can access the files resource as shown in https://developers.google.com/drive/api/v3/reference/files/get and request the metadata of a file to which user@example.com has access (https://docs.google.com/document/d/fileId/edit). In your case, fileId=141g8UkQfdMQSTfIn475gHj1ezZVV16f5ONDxpWrrvts.
files = service.files()
print(files.get(fileId='1U3eMevKxTwDxzvOBUsqa36zvwBzKPVYOFgy3k_9vxb8').execute())

{'kind': 'drive#file', 'id': '1U3eMevKxTwDxzvOBUsqa36zvwBzKPVYOFgy3k_9vxb8', 'name': 'empty', 'mimeType': 'application/vnd.google-apps.document'}
We access the files resource again, but this time to export the file as detailed in https://developers.google.com/resources/api-libraries/documentation/drive/v3/python/latest/drive_v3.files.html#export. This could also be achieved using https://developers.google.com/drive/api/v3/manage-downloads. Valid MIME types are listed in https://developers.google.com/drive/api/v3/ref-export-formats.
fconr = files.export(fileId='1U3eMevKxTwDxzvOBUsqa36zvwBzKPVYOFgy3k_9vxb8',
                     mimeType='application/vnd.openxmlformats-officedocument.wordprocessingml.document')
fcont = fconr.execute()
print('{}...'.format(fcont[:10]))

file = open("/tmp/sample.doc", "wb")
file.write(fcont)
file.close()

b'MN\xc30\x10\x85O\xc0\x1d"'...
As you can see, fcont contains a binary blob that corresponds to the document and of which I'm showing the first 10 bytes. Finally, the blob is saved to sample.doc.
ls -alh1 /tmp/sample.doc
-rw-rw-r-- 1 jdsalaro jdsalaro 6,0K Jan 20 23:38 /tmp/sample.doc
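Since you asked about PDF and XLS specifically, the same files.export() call works for those targets as well. A minimal sketch, reusing the files resource from above; the file IDs below are placeholders for your own document and spreadsheet IDs:

doc_pdf = files.export(fileId='your-document-id',
                       mimeType='application/pdf').execute()

sheet_xlsx = files.export(
    fileId='your-spreadsheet-id',
    mimeType='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
).execute()

# Both calls return binary blobs, which can be written straight to disk
with open('/tmp/sample.pdf', 'wb') as f:
    f.write(doc_pdf)
with open('/tmp/sample.xlsx', 'wb') as f:
    f.write(sheet_xlsx)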
As mentioned above, I encourage you to experiment with the Jupyter notebook once you've created the service account with domain-wide delegation, have saved it to credentials.json and have granted it the https://www.googleapis.com/auth/drive scope.
I'm working on a performance evaluation app in Google App Maker. One of the challenges with our current tool is that it doesn't sync with our G Suite directory when a person's manager changes or when a person has a name change; their existing evaluations stay linked to the person's old name, and we have to update them manually.
In my new app, I have an Employees datasource that includes a relation to the evaluation itself that was initially populated via the Directory API. Reading the documentation here, it seems as though I should be able to set up a watch on the Users resource to look for user updates and parse through them to make the appropriate name and manager changes in my Employees datasource. What I can't figure out, though, is what the receiving URL should be for the watch request.
If anyone has done this successfully within Google App Maker, or even solely within a Google Apps Script, I'd love to know how you did it.
EDITED TO ADD:
I created a silly little GAS test function to see if I can get @dimu-designs' solution below to work. Unfortunately, I just get a Bad Request error. Here's what I have:
function setUserWatch() {
  var optionalArgs = {
    "event": "update"
  };
  var resource = {
    "id": "10ff4786-4363-4681-abc8-28166022425b",
    "type": "web_hook",
    "address": "https://script.google.com/a/.../...hXlw/exec"
  };
  AdminDirectory.Users.watch(resource);
}
Address is the current web app URL.
EDITED TO ADD MORE:
The (in)ability to use GAS to receive webhooks has been an active issue/feature request since Sep 2014 -- https://issuetracker.google.com/issues/36761910 -- which @dimu-designs has been on top of for some time.
This is a more comprehensive answer.
Google supports push notifications across many of their APIs. However, there are many subtle (and not so subtle) differences between them. Some that leverage webhooks send their data payloads primarily as HTTP headers, for example the Drive API and Calendar API. Others mix their payloads across HTTP headers and a POST body (e.g., the AdminDirectory API). And it gets even crazier, with some APIs utilizing different mechanisms altogether (e.g., the Gmail API leverages Cloud Pub/Sub).
There are nuances to each but your goal is to leverage AdminDirectory push notifications in a GAS app. To do that you need a GAS Web App whose URL can serve as a web-hook endpoint.
STEP 1 - Deploy A Stand-Alone Script As A Web App
Let's start with the following template script and deploy it as a Web App from the Apps Script Editor menu Publish > Deploy As Web App:
/** HTTP GET request handler */
function doGet(e) {
  return ContentService.createTextOutput("GET message");
}

/** HTTP POST request handler */
function doPost(e) {
  return ContentService.createTextOutput("POST message");
}
STEP 2 - Verify/Validate Domain Ownership And Add/Register Domain
NOTE: As of August 2019, GAS Web App URLs can no longer be verified using this method. Google Cloud Functions may be a viable alternative.
With the web app deployed you now have to verify and register the domain of the receiving url, which in this case is also the web app url. This url takes the following form:
https://script.google.com/macros/s/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/exec
Technically you cannot own a GAS web app url's domain. Thankfully the App Script Gods at Google do provide a mechanism to verify and register a GAS web app url.
From the Apps Script Editor menu select Publish > Register in Chrome Web Store. Registering a published web app with the Chrome Web Store also validates the URL's domain (no need to fiddle with the search console).
Once validated you need to add the "domain" via the Domain verification page in the API Console. The "domain" is everything in the url sans the 'exec', so you'll add a string that looks like this:
https://script.google.com/macros/s/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
STEP 3 - Make a watch request
For this step the AdminSDK/Directory API service should be enabled both for your App Script project and in the API Console.
Create a function that generates a watch request (this can be retooled for other event types):
function startUpdateWatch() {
  var channel = AdminDirectory.newChannel(),
      receivingURL = "https://script.google.com/macros/s/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/exec",
      gSuiteDomain = "[business-name].com",
      event = "update";

  channel.id = Utilities.getUuid();
  channel.type = "web_hook";
  channel.address = receivingURL + "?domain=" + gSuiteDomain + "&event=" + event;
  channel.expiration = Date.now() + 21600000; // max of 6 hours in the future; the watch must be renewed before expiration to keep notifications coming

  AdminDirectory.Users.watch(
    channel,
    {
      "domain": gSuiteDomain,
      "event": event
    }
  );
}
Note that Directory API push notifications have an expiration, the max being 6 hours from when the watch starts, so the watch must be renewed periodically to ensure notifications keep being sent to the endpoint URL. Typically you can use a time-based trigger to call this function every 5 hours or so.
STEP 4 - Update doPost(e) trigger to handle incoming notifications
Unlike the push mechanisms of other APIs, the Directory API sends a POST body along with its notifications, so the doPost(e) method is guaranteed to be triggered when a notification is sent. Tailor the doPost(e) trigger to handle incoming events and re-deploy the web app:
function doPost(e) {
  switch (e.parameter.event) {
    case "update":
      // do update stuff
      break;
    case "add":
      break;
    case "delete":
      break;
  }
  return ContentService.createTextOutput("POST message");
}
There is one caveat to keep in mind: push notifications for update events only tell you that the user's data was updated; they won't tell you exactly what changed. But that's a problem for another question.
Note that there are a ton of details I left out but this should be enough to get you up and running.
You can do this with GAS and the Admin SDK. The Directory API supports notifications (note this is scheduled to be deprecated, so I am not sure what is replacing this functionality). You can then set up a Gmail script to do what you need to do with the notification.
UPDATE: There are also PUSH notifications from the Directory API.
I was able to set up push notifications for a local resource (a spreadsheet) using a Heroku-based Node.js app as an intermediary API. The Node app captures the custom request headers and builds the payload to be consumed by the doPost(e) function of the GAS web app.
The code for constructing a watch request is simple:

// get the unique id
var channelId = Utilities.getUuid();

// build the resource object
var resource = {
  "id": channelId,
  "type": "web_hook",
  "address": "https://yourapp.herokuapp.com/drivesub"
};

// watch the resource
Drive.Files.watch(resource, fileId);
The challenge is to get that domain address verified. There are ways to verify a standalone (not file-bound!) GAS web app; however, as previous posters have mentioned, the Apps Script web app can't access custom headers.
After you've enabled the Pub/Sub API and created the topic and subscription, go to APIs & Services -> Credentials -> Domain verification. It gives you a few options for verifying your domain, including serving an HTML file. Download the file generated by Google. Thankfully, Heroku makes it very easy to deploy a Node app:
https://devcenter.heroku.com/articles/getting-started-with-nodejs
After your domain is verified you can make your subscription push data to the endpoint URL on Heroku.
I simply created a js file for route handlers, with one handler specifically for domain verification:

handlers.verifyDomain = function(callback) {
  // Synchronously read the static html file; the async fs.readFile()
  // doesn't make the file available to the router callback
  var file = fs.readFileSync('./google/google_file.html');
  callback(200, file);
};
Then include the handler in your router object like so:
var router = {
  "google_file.html": handlers.verifyDomain
};
Finally, in your server starting function, get the path from the URL (there are multiple ways of doing that) and execute the handler:

var routeHandler = router[path];
routeHandler(function(statusCode, file) {
  // do something with the status code and file
});
Go back to the domain verification tool and load the HTML page to verify the URL. After it's verified, the only remaining step is creating the GAS web app and posting to it.
Back to the Node app. Note that my endpoint is https://yourapp.herokuapp.com/drivesub.

// The code for posting data to the GAS web app. Of course, you need to
// update your router with router['driveSub'] = handlers.driveSub
handlers.driveSub = function(data, callback) {
  var headers = data.headers;
  var options = {
    method: "POST",
    uri: your_gas_app_url,
    json: {"headers": headers}
  };
  // Use NPM to install the request module
  request.post(options, function(err, httpResponse, body) {
    console.log(err, body);
  });
  callback(200, data);
};
Apps Script app - don't forget to publish it.
function doPost(req) {
  var postData = req.postData["contents"];
  var headers = JSON.parse(postData)["headers"];
  // take action based on the forwarded headers
  // (doPost should return a TextOutput rather than a bare number)
  return ContentService.createTextOutput("OK");
}
Unfortunately you cannot, at least not solely using Apps Script.
Admin Directory push notifications require a webhook URL endpoint to receive notifications. You might think deploying a GAS web app and using its URL as an endpoint would be sufficient. But the thing with Admin Directory push notifications is that their data payload resides in custom HTTP headers, which cannot be accessed from a GAS web app. (This also holds true for push notifications across other APIs, including the Drive and Calendar APIs.)
You can however leverage Google Cloud Functions (a GCP service) in tandem with GAS, but you'll have to know your way around Node.js.
EDIT
After giving this some thought, and reviewing your requirements I believe there is a way to pull this off just using GAS.
You can set up a unique push notification channel for a given event per user/domain (the 'update' event in your use case) by setting the event parameter when initializing the watch. Thus the GAS web app will only be triggered when an update event occurs; you don't really need to rely on the HTTP header to determine the event type.
If you want to track multiple events, you simply create a unique channel per event and use the same GAS Web app endpoint for each one. You differentiate between events by checking the event parameter sent in the POST request. This should remove the need for middle-man services such as Heroku or Google Cloud Functions.
I'm trying to use what Google terms a 'domain-wide delegation' service account: https://developers.google.com/admin-sdk/directory/v1/guides/delegation
The specific API I'm trying to access with this delegation is: https://developers.google.com/apps-script/api/
Here's the code:
from google.oauth2 import service_account
import googleapiclient.discovery
import json
import os

SCOPES = ['https://www.googleapis.com/auth/script.projects', 'https://www.googleapis.com/auth/drive']
SERVICE_KEY = json.loads(os.environ['SERVICE_KEY'])

credentials = service_account.Credentials.from_service_account_info(SERVICE_KEY, scopes=SCOPES)
delegated_credentials = credentials.with_subject('fred.bloggs@my-gapps-domain.com')
script = googleapiclient.discovery.build('script', 'v1', credentials=delegated_credentials)
response = script.projects().get(scriptId='<myscriptId>').execute()
print(json.dumps(response))
This fails with:
google.auth.exceptions.RefreshError: ('unauthorized_client: Client is unauthorized to retrieve access tokens using this method.', u'{\n "error" : "unauthorized_client",\n "error_description" : "Client is unauthorized to retrieve access tokens using this method."\n}')
I'm pretty sure I've followed all the steps at https://developers.google.com/api-client-library/python/auth/service-accounts, including authorizing the 'https://www.googleapis.com/auth/script.projects' scope with the client ID of the service account json key I downloaded.
Note: I was able to successfully get this particular snippet to work by skipping with_subject, going into the Script dashboard as the user, and 'sharing' the script project.
Unfortunately, that still doesn't allow uploading a new set of files (as 'sharing' doesn't give the ability to delete). It does at least confirm my calling code is correct, albeit not authenticating properly with the JSON service key.
To clarify:
The script in question is what I believe is termed 'standalone' (not a web app)
It's owned by a bot user I've set up just like a regular G Suite user (as I didn't want scripts in regular users' Google Drives)
The script started in a Google Cloud Project that seemed to be automatically created, and with 'No organisation'. I then created a new project manually within the organisation, and moved the script to that project.
There's an official Google Apps Script client now, so I asked there too: https://github.com/google/clasp/issues/225#issuecomment-400174500 - although they use the JavaScript API (via TypeScript), the principles should be the same.
I am trying to run a script off of my Google Drive through the JavaScript Google Drive API. This works fine, but only if I sign into my account in the popup that opens. I wish to sign into the same account every time, so I was wondering if there is any way to automate the login and spare users from having to enter that login information.
In short, you would have to log in at least once, every time the Google Identity Provider JSON Web Token expires. I am not sure how long these tokens last with the Google Drive API, but typically they may be valid for anywhere from a single request to days.
Here is the documentation for Google API OAuth 2.0:
https://developers.google.com/identity/protocols/OAuth2
Refresh the access token, if necessary. Access tokens have limited lifetimes. If your application needs access to a Google API beyond the lifetime of a single access token, it can obtain a refresh token. A refresh token allows your application to obtain new access tokens.

Note: Save refresh tokens in secure long-term storage and continue to use them as long as they remain valid. Limits apply to the number of refresh tokens that are issued per client-user combination, and per user across all clients, and these limits are different. If your application requests enough refresh tokens to go over one of the limits, older refresh tokens stop working.
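To make the quoted guidance concrete, here is a minimal Python sketch of exchanging a stored refresh token for a fresh access token using the google-auth library. All credential values below are placeholders from your own OAuth client and consent flow; the same exchange applies whatever language your client is written in:

from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request

# Placeholder values: your app's client credentials and the refresh
# token you obtained during the initial consent flow
creds = Credentials(
    token=None,  # no current access token, so refresh() will fetch one
    refresh_token='stored-refresh-token',
    token_uri='https://oauth2.googleapis.com/token',
    client_id='your-client-id',
    client_secret='your-client-secret',
)
creds.refresh(Request())  # exchanges the refresh token for a new access token
print(creds.token)        # the fresh access token, usable until it expires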
Google has provided a quickstart guide for implementing user sign-in via Google APIs. Google uses the OAuth2 protocol, in which you must register with Google as a client application. Once registered, you will be issued a Client ID, which you typically provide to your application in some form of application initialization.
Here is a link to their quickstart guide, which will help you get started:
https://developers.google.com/drive/v3/web/quickstart/js
Note that this is a basic example that does not demonstrate how you might persist a JSON Web Token so that the user does not have to log in on every request. Below, I outline a simple, though incomplete, approach to managing authentication in JavaScript and Angular to get you moving in the right direction.
For example, in Angular:
// Configures the required variables before Running an Instance of the App
angular.module("yourModuleName").config(configureApp);
AND
// Executed when the App Instance launches, allowing you to connect to Google APIs when the App starts
angular.module("yourModuleName").run(runApp);
Where configureApp and runApp are JS functions that handle application initialization in the AngularJS framework. The code in the following example retrieves the app's Google Client ID from the app's own REST API. This is just an example of where you could retrieve these credentials from, and most likely not the most secure one:
var configureApp = function($http, $window) {
  // Fetch the CLIENT ID from your own REST API (or any other mechanism you may choose)
  var httpPromise = $http.get("http://myApp.myDomain.com/config/googleClient");

  // Handle the response from the above GET request
  httpPromise.then(
    // Handle success
    function(response) {
      // Store the CLIENT ID in local storage, for example
      $window.localStorage.setItem("GOOGLE_API_CLIENT_ID", response.data.clientId);

      // Set up the app-wide variables for the Google API
      // Client ID and API key from the Developer Console
      var CLIENT_ID = response.data.clientId;

      // Array of API discovery doc URLs for APIs used by the quickstart
      var DISCOVERY_DOCS = ["https://www.googleapis.com/discovery/v1/apis/drive/v3/rest"];

      // Authorization scopes required by the API; multiple scopes can be
      // included, separated by spaces.
      var SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly';

      // Do more initialization configuration
    });
};
var runApp = function() {
  // Initialize the API client
  gapi.client.init({
    discoveryDocs: DISCOVERY_DOCS,
    clientId: CLIENT_ID,
    scope: SCOPES
  }).then(function() {
    // Listen for sign-in state changes.
    gapi.auth2.getAuthInstance().isSignedIn.listen(updateSigninStatus);

    // Handle the initial sign-in state.
    updateSigninStatus(gapi.auth2.getAuthInstance().isSignedIn.get());

    authorizeButton.onclick = handleAuthClick;
    signoutButton.onclick = handleSignoutClick;
  });
};
Which function to use with Angular depends on the app lifecycle stage you need to target in an AngularJS app. This approach can be applied in other JS frameworks like React and Backbone.
To highlight another perspective from the documentation, updateSigninStatus would be a great place to capture the JSON Web Token returned by Google's Authorization request at which point you could store this token in the browser's window.localStorage for re-use.
You then could reuse the token whenever the Google API requires authentication. Tokens typically have an expiration. Until the token expires, you would be able to prevent the API from displaying a login modal.
This does mean you would still have to manage the logic behind the Authorization process using this approach, monitoring any response from Google requesting a token refresh or re-authentication.
Auth0 is a great authentication and authorization plugin available in many languages for connecting with Google and many other OAuth2 identity providers. The Google Drive API uses Google's own identity provider service to confirm the identity of your app's users, in tandem with your registered app's Client ID.
Here are links that I found when implementing Authorization for a project that required me to implement Authorization using the Google Identity Provider:
https://jwt.io/
https://auth0.com/
Best practices for authentication and authorization in Angular without breaking RESTful principles?
https://thinkster.io/tutorials/angularjs-jwt-auth
You are saying that all users log in to the same Google account?
In that case you have 2 options.
1/ Write a server application that has a stored refresh token. Create an endpoint that allows an authenticated user to request an access token (a minimal sketch follows below).
2/ Embed a refresh token in your JavaScript, but make sure that only authenticated users can load the JS.
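For option 1, a minimal Flask sketch of such an endpoint. The route name, the user_is_authenticated() check, and the load_stored_refresh_token() helper are all hypothetical placeholders you would replace with your own auth and storage:

import flask
from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request

app = flask.Flask(__name__)

@app.route('/api/google-access-token')
def google_access_token():
    # Replace with however you authenticate your own users (hypothetical helper)
    if not user_is_authenticated(flask.request):
        flask.abort(401)

    # Build credentials from the refresh token you stored server-side
    # (load_stored_refresh_token is a hypothetical helper)
    creds = Credentials(
        token=None,
        refresh_token=load_stored_refresh_token(),
        token_uri='https://oauth2.googleapis.com/token',
        client_id='your-client-id',
        client_secret='your-client-secret',
    )
    creds.refresh(Request())  # exchange the refresh token for a fresh access token
    return flask.jsonify({'access_token': creds.token})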
I'm developing an AppEngine application which lists files from my Google Drive using the Google Drive API. I'm getting a 500 error when one of my users tries to run my app:
HttpError: https://www.googleapis.com/drive/v2/files?q=%27[ID sanitized]%27+in+parents&alt=json&maxResults=5 returned "Invalid Credentials"
The code pulls files from a specific google drive folder. The folder and files are shared with the user, and the app is authorized to use the Google Drive API.
At some point I figured that I should start from scratch, so I revoked the authorization from that user's account. It didn't seem to make a difference, and now, to add insult to injury, it appears that my app no longer asks that particular user for authorization.
Does anybody have any recommendations at this point? My code uses the Google API client for Python and the decorators (with oauth_required) for handling authorization:
CLIENT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secrets.json')
MISSING_CLIENT_SECRETS_MESSAGE = """
<h1>Warning: Please configure OAuth 2.0</h1>
<p>
To make this sample run you will need to populate the client_secrets.json file
found at:
</p>
<p>
<code>%s</code>.
</p>
<p>with information found on the APIs Console.
</p>
""" % CLIENT_SECRETS
http = httplib2.Http(memcache)
service = build("plus", "v1", http=http)
files_service = build("drive", "v2", http=http)
decorator = oauth2decorator_from_clientsecrets(
    CLIENT_SECRETS,
    scope='https://www.googleapis.com/auth/plus.me https://www.googleapis.com/auth/drive.readonly',
    message=MISSING_CLIENT_SECRETS_MESSAGE)
[...]
class MainPage(webapp2.RequestHandler):
    @decorator.oauth_required
    def get(self):
        [...]
        try:
            kb_param = {}
            kb_param["maxResults"] = 5
            kb_param["q"] = "'[sanitized]' in parents"
            kb_files = files_service.files().list(**kb_param).execute(http=http)
            template_values = {
                'kb_files': kb_files["items"]
            }
Any insight would be greatly appreciated :)
Thanks!
Rick.
The token might be revoked, or the credentials might actually be wrong.
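One quick way to tell which of the two it is: ask Google's tokeninfo endpoint about the exact access token your app is sending. A minimal sketch using the requests library; the token value is a placeholder:

import requests

# Paste the exact access token your app sends to the Drive API
resp = requests.get('https://www.googleapis.com/oauth2/v3/tokeninfo',
                    params={'access_token': 'ya29.the-token-to-check'})

if resp.ok:
    info = resp.json()
    print('Token is valid; scopes:', info.get('scope'))
    print('Seconds until expiry:', info.get('expires_in'))
else:
    # Revoked, expired, or otherwise invalid tokens come back as an error
    print('Token invalid:', resp.status_code, resp.text)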