Google Drive REST API returning different set of results to their Python library - google-apis-explorer

Using the Files:list live API correctly returns all the files in the drive (there's 5 of them). The REST API URL (showing the parameters) is as follows:
https://developers.google.com/drive/api/v3/reference/files/list?apix_params=%7B%22corpora%22%3A%22drive%22%2C%22driveId%22%3A%22[our-drive-id]%22%2C%22includeItemsFromAllDrives%22%3Atrue%2C%22includeTeamDriveItems%22%3Atrue%2C%22q%22%3A%22trashed%20%3D%20false%20and%20mimeType%20%3D%20%27application%2Fvnd.google-apps.document%27%22%2C%22supportsAllDrives%22%3Atrue%2C%22supportsTeamDrives%22%3Atrue%7D
Using the Python library returns only a subset of the files (3 of them). AFAICT, I'm passing in the same parameters.
The Python code is returning just the 3 files that were migrated from another storage solution. It's not finding the 2 files that were created directly in Google Drive.
Here's the Python code:
def list_doc_ids_in_drive(self, drive_id: str):
    query = self._service.files().list(
        pageSize=1000,
        includeItemsFromAllDrives=True, supportsAllDrives=True,
        includeTeamDriveItems=True, supportsTeamDrives=True,
        corpora='drive',
        driveId=drive_id,
        q="trashed = false and mimeType = 'application/vnd.google-apps.document'"
    )
    results = query.execute()
    items = results.get('files', [])
    return items
The Python code is using the https://www.googleapis.com/auth/drive permission, which I think should be sufficient to find all 5 files. Has anyone else experienced an incomplete set of results when using the Python client library vs. the REST API?

False alarm! Turns out the REST API and the Python client library are both returning the same data... once I'd deleted my Python pickle'd TOKEN file and re-authenticated to the Google Drive service...
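Incidentally, the snippet above only ever fetches a single page. If the drive grows past pageSize files, you would also need to follow nextPageToken. A minimal sketch of a paging loop (the list_all_doc_ids name is mine; the parameters follow the code above, minus the deprecated teamDrive aliases):

```python
def list_all_doc_ids(service, drive_id):
    """Hypothetical paging helper: runs the same query as above but follows
    nextPageToken so drives with more than pageSize files are fully listed."""
    files = []
    page_token = None
    while True:
        results = service.files().list(
            pageSize=1000,
            includeItemsFromAllDrives=True, supportsAllDrives=True,
            corpora='drive',
            driveId=drive_id,
            pageToken=page_token,
            q="trashed = false and mimeType = 'application/vnd.google-apps.document'"
        ).execute()
        files.extend(results.get('files', []))
        page_token = results.get('nextPageToken')
        if not page_token:
            return files
```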

Related

Posting multiple JSON files at once

I am working with an API (Track-pod) and uploading JSON files to their server using a Google Apps Script. I know this question has probably already been answered, but I have searched Google extensively and couldn't find an answer, or maybe I just wasn't typing in the right keywords. Each JSON file that I am uploading contains information on customers for the company I am working for. The way I am doing it is like so:
for each (var item in array) {
    option.payload = JSON.stringify(item);
    UrlFetchApp.fetch(url, option);
}
In my code the array is an array of objects, one per customer. I was wondering if I have to keep making individual requests, or whether there is a way to upload all the JSON files at once, or at least make it faster.
To save some time you can use UrlFetchApp.fetchAll(). It takes an array of requests as its parameter and, if I remember correctly, can perform up to 10 requests at the same time.
Don't forget to check the destination endpoint's rate limits so you don't overload it.
Reference : https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app#fetchAll(Object)
Stéphane

How to execute 2 different REST API JSON payloads in PowerShell

I am new to PowerShell and have been working on a task.
I have 2 JSON payloads which I got from 2 different REST APIs in App Center.
One JSON payload creates an app in App Center, and the other adds that app to a team in App Center, where users are grouped into a single team.
How can I use these 2 JSON payloads in my PowerShell script and execute them? (Not sure if I elaborated my question clearly.)

Data acquisition of AWS IoT Analytics

I would like to obtain the latest data from an IoT Analytics dataset in a Lambda function.
If you use getDatasetContent from the AWS SDK's IoTAnalytics client, only a link for downloading the file is returned.
The data itself cannot be retrieved directly.
I would like to know how to obtain the contents of an IoT Analytics dataset from Lambda.
Hi and welcome to Stack Overflow!
If I understand your question correctly, you are asking how to get the data from an IoT Analytics Dataset using a Lambda function?
You are correct that get_dataset_content only returns the URI, but it is simple to then fetch the actual content. For example, in Python it would look like this:
# Code fragment to retrieve content from an IoT Analytics dataset
import codecs
import csv
import urllib.request

import boto3

iota = boto3.client('iotanalytics')
response = iota.get_dataset_content(datasetName='my_data_set', versionId='$LATEST')
contentState = response['status']['state']
if contentState == 'SUCCEEDED':
    url = response['entries'][0]['dataURI']
    stream = urllib.request.urlopen(url)
    reader = csv.DictReader(codecs.iterdecode(stream, 'utf-8'))
    for record in reader:
        # Process the record as desired; you can refer to columns in the CSV
        # as record['column_name'] via the DictReader iterator
        pass
Note that this code is specifically looking at the most recent results using the $LATEST version - you can also look for the $LATEST_SUCCEEDED version.
There's more documentation here for Boto (the AWS Python SDK), but you can use the same approach in all the other SDK-supported languages.
Hope that helps,
Roger
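One more thing worth guarding against: if the dataset content is still being generated when the Lambda runs, get_dataset_content can report CREATING rather than SUCCEEDED. A small polling sketch (the helper name and the retry count/delay are my own choices, not AWS defaults):

```python
import time

def wait_for_data_uri(client, dataset_name, attempts=10, delay=2.0):
    """Poll get_dataset_content until the latest run reports SUCCEEDED,
    then return the presigned dataURI. Raises TimeoutError otherwise."""
    for _ in range(attempts):
        response = client.get_dataset_content(datasetName=dataset_name,
                                              versionId='$LATEST')
        if response['status']['state'] == 'SUCCEEDED':
            return response['entries'][0]['dataURI']
        time.sleep(delay)
    raise TimeoutError('dataset content for %s not ready' % dataset_name)
```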

weird file listing response differences between v2 and v3

I am using the google-drive-sdk with our company-made device. We upload pictures made by our device to google drive. After that I try to list the files with a GET request to https://www.googleapis.com/drive/v2/files to get thumbnailLink and webContentLink. Everything is working fine except that when I switch to v3 I don't get the response I should. The documentation says I should get a metadata response like https://developers.google.com/drive/v3/reference/files
but I only get: id, kind, name and mimeType. What am I doing wrong?
As stated in Migrate to Google Drive API v3 documentation, there are changes on how fields were returned.
Full resources are no longer returned by default. You need to use the fields query parameter to request specific fields to be returned. If left unspecified only a subset of commonly used fields are returned.
You can see examples on Github. This SO question might also help.
In v3 all the queries are parametric, so you can pass parameters like:
var request = gapi.client.drive.files.list({
    'pageSize': 10,
    'fields': 'files,kind,nextPageToken'
});
This block of code will return all the information for every file, just like v2.
If you are sending a GET request, then to fetch all the information you can try GET https://www.googleapis.com/drive/v3/files?fields=files%2Ckind%2CnextPageToken&key={YOUR_API_KEY}
Suppose you need owners and permissions only; then set:
var request = gapi.client.drive.files.list({
    'pageSize': 10,
    'fields': 'files(owners,permissions),kind,nextPageToken'
});
For a GET request, use GET https://www.googleapis.com/drive/v3/files?fields=files(owners%2Cpermissions)%2Ckind%2CnextPageToken&key={YOUR_API_KEY}
For reference, you can use the Google Developers documentation on fetching the file list.
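The same trick applies from Python's google-api-python-client: pass fields= to files().list(). A small helper for assembling the fields value (the helper name is mine; the field names come from the examples above):

```python
def build_fields_param(file_fields):
    """Assemble a Drive v3 'fields' value that requests only the given
    per-file sub-fields, plus kind and nextPageToken for paging."""
    return 'kind,nextPageToken,files({})'.format(','.join(file_fields))

# e.g. service.files().list(pageSize=10,
#                           fields=build_fields_param(['owners', 'permissions']))
```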

Where do I find the Google Places API Client Library for Python?

It's not under the supported libraries here:
https://developers.google.com/api-client-library/python/reference/supported_apis
Is it just not available with Python? If not, what language is it available for?
Andre's answer points you at a correct place to reference the API. Since your question was python specific, allow me to show you a basic approach to building your submitted search URL in python. This example will get you all the way to search content in just a few minutes after you sign up for Google's free API key.
ACCESS_TOKEN = <Get one of these following the directions on the places page>
import urllib

def build_URL(search_text='', types_text=''):
    base_url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'  # Can change json to xml to change output type
    key_string = '?key=' + ACCESS_TOKEN  # First thing after the base_url starts with ? instead of &
    query_string = '&query=' + urllib.quote(search_text)
    sensor_string = '&sensor=false'  # Presumably you are not getting location from device GPS
    type_string = ''
    if types_text != '':
        type_string = '&types=' + urllib.quote(types_text)  # More on types: https://developers.google.com/places/documentation/supported_types
    url = base_url + key_string + query_string + sensor_string + type_string
    return url

print(build_URL(search_text='Your search string here'))
This code will build and print a URL searching for whatever you put in the last line, replacing "Your search string here". You need to build one of those URLs for each search. In this case I've printed it so you can copy and paste it into your browser address bar, which will give you (in the browser) the same JSON text object your program will get when it submits that URL. I recommend using the Python requests library to fetch it within your program, and you can do that simply by taking the returned URL and doing this:
response = requests.get(url)
Next up you need to parse the returned response JSON, which you can do by converting it with the json library (look for json.loads for example). After running that response through json.loads you will have a nice python dictionary with all your results. You can also paste that return (e.g. from the browser or a saved file) into an online JSON viewer to understand the structure while you write code to access the dictionary that comes out of json.loads.
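For instance, combining the fetch and parse steps might look like this (a sketch; results, name, and formatted_address are fields from the Places text-search response, and the parse_places helper name is mine):

```python
import json

def parse_places(raw_json):
    """Pull (name, formatted_address) pairs out of a Places
    textsearch JSON reply."""
    data = json.loads(raw_json)
    return [(place['name'], place.get('formatted_address', ''))
            for place in data.get('results', [])]

# response = requests.get(build_URL(search_text='coffee'))
# for name, address in parse_places(response.text):
#     print(name, address)
```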
Please feel free to post more questions if part of this isn't clear.
Somebody has written a wrapper for the API: https://github.com/slimkrazy/python-google-places
Basically it's just HTTP with JSON responses. It's easier to access through JavaScript but it's just as easy to use urllib and the json library to connect to the API.
Ezekiel's answer worked great for me and all of the credit goes to him. I had to change his code in order for it to work with Python 3. Below is the code I used:
def build_URL(search_text='', types_text=''):
    base_url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
    key_string = '?key=' + ACCESS_TOKEN
    query_string = '&query=' + urllib.parse.quote(search_text)
    type_string = ''
    if types_text != '':
        type_string = '&types=' + urllib.parse.quote(types_text)
    url = base_url + key_string + query_string + type_string
    return url
The changes: urllib.quote became urllib.parse.quote, and the sensor parameter was removed because Google is deprecating it.