Limit in Google Knowledge Graph: weird behaviour - google-apis-explorer

According to the reference:
limit: Limits the number of entities to be returned. Maximum is 500. Default is 20. Requests with high limits have a higher chance of timing out.
However, I'm seeing some weird behaviour. If I query for, say, https://www.nasa.gov/ without setting a limit (which defaults to 20), I get this response:
{
  "error": {
    "code": 429,
    "message": "Over resource limits. Try a more restrictive request.",
    "status": "RESOURCE_EXHAUSTED"
  }
}
Now if I try with a limit of 19 or 21, I don't get any error at all.
I do realise the error says "RESOURCE_EXHAUSTED", but that doesn't seem to be the actual problem here.
Note: a limit of 10 or 15 also gives the same error.
Here is a URL to test with:
https://developers.google.com/knowledge-graph/reference/rest/v1/?apix_params=%7B%22query%22%3A%22https%3A%2F%2Fwww.nasa.gov%2F%22%7D

As the Google Knowledge Graph Search API documentation notes:
Warning: This API is not suitable for use as a production-critical service. Your product should not form a critical dependence on this API.
I assume the API is deemed unsuitable for production-critical services precisely because of errors like this one. I unfortunately don't know why the error occurs, but to keep your program from crashing and mitigate the effects of the error, I suggest wrapping the query in a try/except block. In Python:
import googleapiclient.discovery
from googleapiclient.errors import HttpError

try:
    # Build the Knowledge Graph Search client and run the query.
    service = googleapiclient.discovery.build('kgsearch', 'v1', developerKey=YOUR_API_KEY)
    search = service.entities().search(languages="en", limit=20, prefix=True, query="nasa")
    search_result = search.execute()
except HttpError as error:
    # Catch API errors (such as the 429 above) instead of crashing.
    print("error while querying kgsearch:", error)
Note: the error no longer occurs for your query (as you pointed out in the comments), but the issue is still relevant: my program queried "data" and hit exactly the same error you described. I know this doesn't directly answer your question, but it was my way of working around it.
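Since the failure is intermittent (a 429 that appears and disappears depending on the limit value), another option is to retry with exponential backoff rather than giving up on the first attempt. This is only a sketch of that pattern, not something the API documentation prescribes; YOUR_API_KEY is a placeholder as above, and the retry count and delays are arbitrary:
import time
import googleapiclient.discovery
from googleapiclient.errors import HttpError

def kg_search_with_backoff(query, retries=4, base_delay=1.0):
    """Retry a Knowledge Graph search on HTTP 429, backing off exponentially."""
    service = googleapiclient.discovery.build('kgsearch', 'v1', developerKey=YOUR_API_KEY)
    for attempt in range(retries):
        try:
            return service.entities().search(query=query, limit=20).execute()
        except HttpError as error:
            if error.resp.status != 429 or attempt == retries - 1:
                raise  # a different error, or out of retries
            time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...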

Related

How to handle "Unexpected EOF at target" error from API calls?

I'm creating a Forge application which needs to get version information from a BIM 360 hub. Sometimes it works, but sometimes (usually after the code has already been run once this session) I get the following error:
Exception thrown: 'Autodesk.Forge.Client.ApiException' in mscorlib.dll
Additional information: Error calling GetItem: {
  "fault": {
    "faultstring": "Unexpected EOF at target",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
    }
  }
}
The above error will be thrown from a call to an API, such as one of these:
dynamic item = await itemApi.GetItemAsync(projectId, itemId);
dynamic folder = await folderApi.GetFolderAsync(projectId, folderId);
var folders = await projectApi.GetProjectTopFoldersAsync(hubId, projectId);
Where the APIs are initialized as follows:
ItemsApi itemApi = new ItemsApi();
itemApi.Configuration.AccessToken = Credentials.TokenInternal;
The Ids (such as 'projectId', 'itemId', etc.) don't seem to be any different when this error is thrown and when it isn't, so I'm not sure what is causing the error.
I based my application on the .Net version of this tutorial: http://learnforge.autodesk.io/#/datamanagement/hubs/net
But I adapted it so I can retrieve multiple nodes asynchronously (for example, all of the nodes a user has access to) without changing the jstree, to allow extracting information in the background without disrupting the user's workflow. The main change I made was to add another route on the server side that calls "GetTreeNodeAsync" (from the tutorial) asynchronously on the root of the tree, then calls it on each of the returned children, then on each of their children, and so on. The function waits until all of the nodes are processed using Task.WhenAll, then returns data from each of the nodes to the client.
This means there can be many API calls running asynchronously, and there might be duplicate calls if a node was already opened in the jstree and then its information is requested for the background extraction, or if the background extraction happens more than once. This seems to be when the error is most likely to occur.
I was wondering if anyone else has encountered this error, whether you know what I can do to avoid it, and how to recover when it is caught. Currently, once this error occurs, it seems that every subsequent API call throws it as well, and the only way I've found to fix it is to rerun the code (I use Visual Studio, so I just rerun the server and client, and my browser launches automatically).
Those are sporadic errors from our Apigee router due to latency issues in the authorization process that we are currently looking into internally.
When they occur, please cease all your upcoming requests, wait a few minutes, and retry. Take a look at stuff like this or this to help you out.
Our existing reports calling out similar errors also seem to point to concurrency as one of the factors leading up to the issue, so you might want to limit your concurrent requests and see if that mitigates it.
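If you want to cap concurrency, the usual pattern is a semaphore in front of the API calls. Here is a minimal sketch of the idea in Python's asyncio (the question's code is C#, where SemaphoreSlim plays the same role); fetch_item and the limit of 5 are illustrative assumptions, not Forge SDK names:
import asyncio

REQUEST_SLOTS = asyncio.Semaphore(5)  # at most 5 requests in flight at once

async def fetch_item(item_api, project_id, item_id):
    # Once 5 calls are already running, further callers wait here, so a
    # burst of tree-node lookups is spread out instead of hitting the
    # router all at once.
    async with REQUEST_SLOTS:
        return await item_api.get_item(project_id, item_id)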

Model derivative translate job giving status code 409 (CONFLICT)

How can I fix status code 409 for a translate job?
There are two problems I am facing:
1. Sometimes the API returns error status code 409 (Conflict).
2. Sometimes it continuously reports an in-progress status and never completes or fails.
Once either of the above occurs, any subsequent job requests start failing with error code 409.
We are using the Node.js API to submit the translate job with the following code:
let translateResult = derivativesAPI.translate(job, { 'xAdsForce': true }, forgeSvc.requestOAuth2TwoLeggedOBJ(), accessToken);
First, try to delete the manifest for the stuck/pending request file.
If that doesn't work, the last option is to delete the bucket containing the pending/stuck translation request and then try again.
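For the first step, the manifest can be deleted over plain HTTP (the Node.js SDK exposes a similar deleteManifest method). A sketch, where the URN and token are placeholders and the token is assumed to carry the data:write scope:
import requests

def delete_manifest(urn, access_token):
    """Delete the manifest of a stuck translation so the job can be resubmitted."""
    url = ("https://developer.api.autodesk.com/modelderivative/v2/designdata/"
           + urn + "/manifest")
    response = requests.delete(url, headers={"Authorization": "Bearer " + access_token})
    response.raise_for_status()
    return response.json()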
As per the documentation, 409 means:
The request conflicts with a previous request that is still in progress
As you mentioned, a previous request failed but is still pending on our system, which causes this conflict. Is that happening consistently with one file, or at random? When it fails (or hangs), what does the manifest look like? Finally, can you share a problematic URN?
EDIT: the file is working now and we'll keep investigating this.
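Since the second problem is a job that reports in-progress forever, it also helps to poll the manifest with a hard timeout instead of waiting indefinitely. A sketch of that idea, again with placeholder credentials; the timeout and interval values are arbitrary choices of mine:
import time
import requests

def get_manifest(urn, access_token):
    """Fetch the manifest; its "status" and "progress" fields show the job state."""
    url = ("https://developer.api.autodesk.com/modelderivative/v2/designdata/"
           + urn + "/manifest")
    response = requests.get(url, headers={"Authorization": "Bearer " + access_token})
    response.raise_for_status()
    return response.json()

def wait_for_translation(urn, access_token, timeout=600, interval=10):
    """Poll until the job finishes one way or the other, instead of waiting forever."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        manifest = get_manifest(urn, access_token)
        if manifest["status"] in ("success", "failed", "timeout"):
            return manifest
        time.sleep(interval)
    raise TimeoutError("translation still pending; consider deleting the manifest")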

Unchecked runtime.lastError while running storage.set: QUOTA_BYTES_PER_ITEM quota exceeded

I am getting this exception on my background.html page. I don't know what it means. Can anyone explain this exception and tell me how to resolve it?
The exception details are
Unchecked runtime.lastError while running storage.set: QUOTA_BYTES_PER_ITEM quota exceeded
Thank you.
This error occurs when you use chrome.storage.sync.set to store more than 8,192 bytes in a single item, because chrome.storage.sync enforces a QUOTA_BYTES_PER_ITEM limit of 8,192.
Use chrome.storage.local.set instead of chrome.storage.sync.set to save large data,
as chrome.storage.local allows a total QUOTA_BYTES of 5,242,880.
See https://developer.chrome.com/extensions/storage
Also, if you still want to use chrome.storage.sync.set, you can surface the error with code like the following (note that set takes the items to store as its first argument):
chrome.storage.sync.set(items, function() {
    var error = chrome.runtime.lastError;
    if (error) {
        alert(error.message);
    }
});
If you are getting the same warning with chrome.storage.local too, then:
Reason: the data you are trying to store is larger than local storage allows, i.e. QUOTA_BYTES of 5,242,880.
Solution: request the unlimitedStorage permission in your manifest.json file:
"permissions": [
.....
"unlimitedStorage",
.....
],
For more regarding permissions:
1) https://developer.chrome.com/extensions/storage#property-managed
2) https://developer.chrome.com/extensions/permission_warnings#nowarning
As outlined by wOxxOm in his comment above, the answer is covered in the chrome.storage documentation.
Moreover, it's always good practice to implement error handling and check runtime.lastError. If everything is all right, it will be undefined; if there is a problem, it will be non-empty, and chrome.runtime.lastError.message will explain what's wrong.
Chrome now checks that chrome.runtime.lastError is actually evaluated. If it isn't, it treats this as an unhandled exception and throws the error you are seeing.

Drive API files.list returning nextPageToken with empty item results

In the last week or so we got a report of a user missing files in the file list in our app. We were a bit confused at first because they said they only had a couple of files that matched our query string, but with a bit of work we were able to reproduce their issue by adding a large number of files to our Google Drive. Previously we had been assuming people would have fewer than 100 files, and hadn't been doing paging, to avoid multiple files.list requests.
After switching to paging, we noticed that one of our test accounts was sending hundreds and hundreds of files.list requests, and most of the responses did not contain any files but did contain a nextPageToken. I'll update as soon as I can get a screenshot, but the client was sending enough requests to heat the computer up and drain the battery fairly quickly.
We also found that the exact query, even when it matches the same files, can have a drastic effect on the number of requests needed to retrieve our full file list. For example, switching '=' to 'contains' in the query param significantly reduces the number of requests made, but we don't see any guarantee that this is a reasonable and generalizable solution.
Is this the intended behavior? Is there anything we can do to reduce the number of requests that we are sending?
We're using the following code to retrieve files created by our app, and it is what's causing the issue:
runLoad: function(pageToken)
{
    gapi.client.drive.files.list(
    {
        'maxResults': 999,
        'pageToken': pageToken,
        'q': "trashed=false and mimeType='" + mime + "'"
    }).execute(function (results)
    {
        this.filePageRequests++;
        if (results.error || !results.nextPageToken || this.filePageRequests >= MAX_FILE_PAGE_REQUESTS)
        {
            this.isLoading(false);
        }
        else
        {
            this.runLoad(results.nextPageToken);
        }
    }.bind(this));
}
It is, but probably shouldn't be, the correct behaviour.
It generally occurs when using the drive.file scope. What (I think) is happening is that the API layer fetches all files, then removes those that are outside the current scope/query and returns the remainder to your client app. In theory, a particular page of files could have no files in scope, and so the returned items array is empty.
As you've seen, it's a horribly inefficient way of doing it, but that seems to be the way it is. You simply have to keep following the next page link until it's null.
As to "Is there anything we can do to reduce the number of requests that we are sending?"
You're already setting maxResults to 999, which is the obvious step. Just be aware that I have seen this value trigger internal errors (timeouts?) which manifest themselves as 500 errors. You might want to sacrifice efficiency for reliability and stick to the default of 100, which seems to be better tested.
I don't know if the code you posted is your actual code or just a simplified illustration, but you need to make sure you are handling 401 errors (auth expiry) and 500 errors (sometimes recoverable with a retry).
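For what it's worth, here is the same keep-following-the-token loop sketched with the Python client (the question uses the JS client, but the logic is identical). The Drive v2 service object and the simple retry cap are assumptions on my part:
from googleapiclient.errors import HttpError

def list_all_files(service, query, max_retries=3):
    """Follow nextPageToken until it's absent, tolerating empty pages and 500s."""
    files, page_token, retries = [], None, 0
    while True:
        try:
            results = service.files().list(
                q=query, maxResults=100, pageToken=page_token).execute()
        except HttpError as error:
            retries += 1
            if error.resp.status == 500 and retries <= max_retries:
                continue  # sometimes recoverable with a retry
            raise
        retries = 0
        files.extend(results.get('items', []))  # a page may legitimately be empty
        page_token = results.get('nextPageToken')
        if not page_token:
            return files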

Google Maps API RadarSearch not working for London

If I go to the following location:
https://maps.googleapis.com/maps/api/place/radarsearch/json?location=51.5112139,-0.1198244&types=lodging&radius=3200&sensor=false&key=yourKey
I get the error:
{
  "debug_info": [],
  "html_attributions": [],
  "results": [],
  "status": "UNKNOWN_ERROR"
}
Is there any reason for this?
When I look up Bristol (51.454513,-2.58791), Ipswich (52.056736,1.14822) or Edinburgh (55.953252,-3.188267), I get a normal JSON file back, full of data.
I don't know how Google works internally.
To me, UNKNOWN_ERROR simply means "We have an error with your request, and we don't want you to know what the exact error is." Most of the time it comes from an internal exception-handling process.
Honestly, I don't know what happens inside Google.
However, if you reduce your radius from 3200 metres to around 2000 metres, it works fine:
https://maps.googleapis.com/maps/api/place/radarsearch/json?location=51.5112139,-0.11982439999997041&types=lodging&radius=2100&sensor=false&key=key
My guess is that there are too many results, and Google cannot handle that many.
I don't know why, but the URL works sometimes and sometimes doesn't.
The "UNKNOWN_ERROR" status may be an internal Google error.
To prevent the error temporarily, shorten the radius.
And report this to the gmaps-api-issues tracker:
https://code.google.com/p/gmaps-api-issues/
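If you want to automate that workaround, you can halve the radius and retry whenever the response status is UNKNOWN_ERROR. A sketch of that idea in Python; the halving factor, the minimum radius, and the use of the requests library are my own choices, not anything the API prescribes:
import requests

RADAR_URL = "https://maps.googleapis.com/maps/api/place/radarsearch/json"

def radar_search(location, types, radius, key, min_radius=500):
    """Retry a radar search with a progressively smaller radius on UNKNOWN_ERROR."""
    while radius >= min_radius:
        params = {"location": location, "types": types,
                  "radius": radius, "sensor": "false", "key": key}
        data = requests.get(RADAR_URL, params=params).json()
        if data["status"] != "UNKNOWN_ERROR":
            return data
        radius //= 2  # shrink the search area and try again
    raise RuntimeError("radar search kept failing down to the minimum radius")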
Ranking by distance seems to have some issues when combined with radius. Could that be a factor in your situation?