Model derivative translate job giving status code 409 (CONFLICT) - autodesk-forge

How do I fix status code 409 for a translate job?
There are two types of problems I am facing:
1. Sometimes the API returns error status code 409 (Conflict).
2. Sometimes it continuously reports an "in progress" status and never completes or fails.
Once either of these occurs, all subsequent job requests start failing with status code 409.
We are using the Node.js API to submit the translation job with the following code:
let translateResult = derivativesAPI.translate(job, { 'xAdsForce': true }, forgeSvc.requestOAuth2TwoLeggedOBJ(), accessToken);

First, try deleting the manifest for the stuck/pending file.
If that doesn't work, the last option is to delete the bucket containing the pending/stuck translation request and then try again.
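A minimal sketch of that recovery flow with the `forge-apis` Node.js SDK, based on the call shown in the question. The `deleteManifest` signature, the 404 check, and the client/credential objects here are assumptions about your setup, not a verified implementation:

```javascript
// Sketch: clear a stuck translation by deleting its manifest, then resubmit.
// `derivativesApi`, `oauthClient`, and `credentials` are assumed to be set up
// as in the question (forge-apis DerivativesApi + two-legged OAuth).
async function resetAndRetranslate(derivativesApi, oauthClient, credentials, urn, job) {
  try {
    // Remove the manifest of the stuck/pending translation first.
    await derivativesApi.deleteManifest(urn, oauthClient, credentials);
  } catch (err) {
    // A 404 here just means there was no manifest to delete; anything else is real.
    if (!err || err.statusCode !== 404) throw err;
  }
  // Resubmit the job, forcing retranslation of any previous output.
  return derivativesApi.translate(job, { xAdsForce: true }, oauthClient, credentials);
}
```

Only if this still 409s would deleting the whole bucket be worth trying, since that also destroys every other object in it.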

As per documentation, the 409 means:
The request conflicts with a previous request that is still in progress
As you mentioned, a previous request failed but is pending on our system, which causes this conflict. Is that happening consistently with one file, or randomly? When it fails (or hangs), what does the manifest look like? Finally, can you share a problematic URN?
EDIT: the file is working now and we'll keep investigating this.

Related

How to handle "Unexpected EOF at target" error from API calls?

I'm creating a Forge application which needs to get version information from a BIM 360 hub. Sometimes it works, but sometimes (usually after the code has already been run once this session) I get the following error:
Exception thrown: 'Autodesk.Forge.Client.ApiException' in mscorlib.dll
Additional information: Error calling GetItem: {
"fault":{
"faultstring":"Unexpected EOF at target",
"detail": {
"errorcode":"messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
}
}
}
The above error will be thrown from a call to an API, such as one of these:
dynamic item = await itemApi.GetItemAsync(projectId, itemId);
dynamic folder = await folderApi.GetFolderAsync(projectId, folderId);
var folders = await projectApi.GetProjectTopFoldersAsync(hubId, projectId);
Where the apis are initialized as follows:
ItemsApi itemApi = new ItemsApi();
itemApi.Configuration.AccessToken = Credentials.TokenInternal;
The Ids (such as 'projectId', 'itemId', etc.) don't seem to be any different when this error is thrown and when it isn't, so I'm not sure what is causing the error.
I based my application on the .Net version of this tutorial: http://learnforge.autodesk.io/#/datamanagement/hubs/net
But I adapted it so I can retrieve multiple nodes asynchronously (for example, all of the nodes a user has access to) without changing the jstree. I did this to allow extracting information in the background without disrupting the user's workflow. The main change I made was to add another route on the server side that calls "GetTreeNodeAsync" (from the tutorial) asynchronously on the root of the tree, then on each of the returned children, then on each of their children, and so on. The function waits until all of the nodes are processed using Task.WhenAll, then returns data from each of the nodes to the client.
This means that there could be many API calls running asynchronously, and there might be duplicate API calls if a node was already opened in the jstree and then its information is requested for the background extraction, or if the background extraction happens more than once. This seems to be when the error is most likely to happen.
I was wondering if anyone else has encountered this error, and whether you know what I can do to avoid it, or how to recover when it is caught. Currently, after this error occurs, it seems that every other API call throws this error as well, and the only way I've found to fix it is to rerun the code (I use Visual Studio, so I just rerun the server and client, and my browser launches automatically).
Those are sporadic errors from our Apigee router due to latency issues in the authorization process that we are currently looking into internally.
When they occur, please cease all your upcoming requests, wait a few minutes, and retry. Take a look at stuff like this or this to help you out.
And our existing reports calling out similar errors seem to point to concurrency as one of the factors leading up to the issue, so you might also want to limit your concurrent requests and see if that mitigates the issue.
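Both suggestions (pause-and-retry, and capping concurrency) can be sketched generically. This is illustrative JavaScript rather than the question's .NET, and the limit and delay values are arbitrary assumptions:

```javascript
// Simple concurrency limiter: at most `maxConcurrent` wrapped calls run at once.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => { active--; next(); });
  };
  return (fn) => new Promise((resolve, reject) => {
    queue.push({ fn, resolve, reject });
    next();
  });
}

// Pause-and-retry wrapper, per the advice above: wait, then try again.
async function withRetry(fn, attempts = 3, pauseMs = 60000) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        // Back off before retrying instead of hammering the router.
        await new Promise((resolve) => setTimeout(resolve, pauseMs));
      }
    }
  }
  throw lastErr;
}
```

Wrapping each `GetItemAsync`-style call in the limiter (e.g. `limit(() => withRetry(call))`) keeps the duplicate background-extraction traffic from piling up concurrently.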

Google Cloud Function: lazy loading not working

I deployed a Google Cloud Function with lazy loading that loads data from Google Datastore. The last update time of my function is 7/25/18, 11:35 PM. It worked well last week.
Normally, if the function is called within about 30 minutes of the last call, it does not need to load the data from Google Datastore again. But I found that lazy loading has not been working since yesterday, even when the time between two calls is less than 1 minute.
Does anyone meet the same problem? Thanks!
Cloud Functions can fail for several reasons, such as uncaught exceptions and internal process crashes. It is therefore necessary to check the log files and HTTP response error messages to verify the root cause of the issue and determine whether the function is being restarted and generating function execution timeouts, which could explain why your function is not working.
I suggest you take a look at the Reporting Errors documentation, which explains the process required to return a function error, in order to validate the exact error message thrown by the service and return the error in the recommended way. Keep in mind that when errors are returned correctly, the function instance that returned the error is labelled as behaving normally, avoiding the cold starts that lead to higher latency, and making the function available to serve future requests if need be.
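For context, lazy loading in a Cloud Function usually relies on a module-scope variable surviving between invocations on the same warm instance, along the lines of this sketch (the `loadFromDatastore` loader is a hypothetical stand-in for your Datastore query):

```javascript
// Module-scope cache: survives between invocations on a warm instance,
// but is lost whenever the instance is recycled (e.g. after a crash,
// timeout, or redeploy) — which is why caching can silently stop working.
let cachedData = null;

async function getData(loadFromDatastore) {
  if (cachedData === null) {
    // Cold start (or recycled instance): load once, reuse on later calls.
    cachedData = await loadFromDatastore();
  }
  return cachedData;
}
```

If your function follows this pattern, repeated reloads within a minute suggest each invocation is landing on a fresh (or crashed-and-restarted) instance, which is what the log check above should reveal.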

JMeter: how to stop only one thread, not all the threads

I have 50 users in my ThreadGroup with a 50-second ramp-up (50 rows in my .csv config file). After a certain HTTP request I would like to test for a certain condition and, if it passes, continue to the next HTTP requests. I sort of read on Google that a BeanShell Assertion with the code
String response = SampleResult.getResponseDataAsString();
if (response.contains("\"HasError\":true")) {
    SampleResult.setStopThread(true);
}
should resolve my problem. But the problem is that this function actually stops the entire test execution, including all remaining users (where I might have more values in the .csv file to test). Is there any convenient way not to stop the entire test? If anybody has faced this problem, please advise.
You can set a thread to stop on a Sampler error by configuring it in the Thread Group component: select 'Stop Thread' in the 'Action to be taken after a Sampler error' section.
To ensure that you get a Sampler error, configure a Response Assertion.

Autodesk DM API: Is Retry appropriate here?

I've got an application that's been working for a long time.
Recently we created a new app/keys for it, and it's behaving strangely.
(I did figure out the scope requirements had been put in place. I am requesting bucket:create bucket:read data:read data:write).
When I upload a file to a bucket, I've traditionally made a call to get the object details afterwards, to verify that it has successfully uploaded.
With the new key, I am intermittently getting this error:
GetObjectDetails: InternalServerError {"fault":{"faultstring":"Execution of ServiceCallout servicecallout-auth-acm-request failed. Reason: timeout occurred servicecallout-auth-acm-request","detail":{"errorcode":"steps.servicecallout.ExecutionFailed"}}}
Is this something I should be re-trying with a sleep in between? or is it indicative of something wrong with the upload?
(FYI - putting in a retry seems to have resolved this for me, but I still don't know if that's the right answer - and whether this issue might happen on other calls).
It could be that the service requires a slight delay between a put object and a get, so I would suggest either using a timer or a retry as you mentioned. However, a successful response from the upload should be enough to ensure your object has been placed in the bucket without the need to double-check.
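The retry-with-delay being discussed could look roughly like this. It is sketched in JavaScript rather than the question's .NET, and `getObjectDetails` is a placeholder for the actual SDK call:

```javascript
// Sketch: verify an upload by polling object details, pausing between tries.
async function verifyUpload(getObjectDetails, tries = 3, delayMs = 2000) {
  let lastErr;
  for (let i = 0; i < tries; i++) {
    try {
      return await getObjectDetails(); // success: the upload is visible
    } catch (err) {
      lastErr = err;
      if (i < tries - 1) {
        // Give the service a moment before checking again.
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastErr; // still failing after all tries: likely a real problem
}
```

Treating a transient 500 as retryable while still surfacing persistent failures matches the answer's point that the upload's own success response is the authoritative signal.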

Box API 2.0 Uploading files with conflict name returns 200

After uploading a file whose name conflicts with an existing one, the server still responds with HTTP status code 201 Created. I had to parse the response body to know whether it was really created or not. It seems to me that I should be able to know the result of the operation just from the status code, so I am wondering if this is intended behavior.
The following is the response I get
{
  "total_count": 1,
  "entries": [
    {
      "type": "error",
      "status": 409,
      "code": "item_name_in_use",
      "context_info": {
        "conflicts": [
          {
            "type": "file",
            "id": "2990420477",
            "sequence_id": "0",
            "etag": "1f64ca909178de30bc682a4ca2d14444719cf9a2",
            "name": "Extensions.pdf"
          }
        ]
      },
      "help_url": "http://developers.box.com/docs/#errors",
      "message": "Item with the same name already exists",
      "request_id": "1389504407503c7c1e8183c"
    }
  ]
}
We are in the process of changing this from a 200 to a 202. Later this week (or possibly tonight) we'll roll out a change to make upload statuses be 202's, to show that the upload request has been accepted. I'll post a bit more on our blog to explain more details.
The basic logic is that uploads can be sent in bulk, and the API call has to return you an array of upload statuses (stati?). If you only upload one file, you'll get an array of 1, and you'll have to dig into the array to see if you were successful or not. If you upload a group of files, then you'll be digging into the array to find out the status of each file.
You might ask: Why not collapse the status when there is only one file? Our thought there is that you'd have to implement 2 different code paths to deal with single vs bulk-upload, and it would be easier to just write the code once to handle uploads either way.
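Digging into that array can be done once, generically, for both single and bulk uploads. A sketch (the function name is my own, not part of the Box SDK) that treats each entry with `"type": "error"` as a failure:

```javascript
// Split a Box upload response into successes and failures, per the
// bulk-status design described above: one entry per uploaded file.
function splitUploadResults(response) {
  const ok = [];
  const failed = [];
  for (const entry of response.entries) {
    if (entry.type === 'error') {
      // e.g. status 409 / code "item_name_in_use" from the question.
      failed.push({ code: entry.code, status: entry.status, message: entry.message });
    } else {
      ok.push(entry);
    }
  }
  return { ok, failed };
}
```

With this shape, the single-file and bulk cases share one code path, which is exactly the rationale given above for returning an array.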
Hope that helps. Let us know if you see unexpected behavior after we flip the error code over from the 200 to the 202.