I have the following scenario: two Revit files, ModelA.rvt and ModelB.rvt, which are cross-referenced together, zipped, and uploaded twice under different object keys (ModelA.zip, ModelB.zip). The zip files are identical, very small (4 MB), and contain both files. Both are uploaded successfully in a loop using:
PUT https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey/objects/:objectName
Files are overwritten with token scope data:write, and in case of a model update the POST job is called with x-ads-force = true. I then call the POST job twice in a loop: once with ModelA.rvt as rootFilename for ModelA.zip, and once with ModelB.rvt for ModelB.zip. Both POST jobs complete successfully.
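For reference, the job body I'm posting looks like this (a minimal Python sketch; the payload shape follows the standard Model Derivative v2 job format, and the bucket name here is a placeholder):

```python
import base64

def make_job_payload(bucket_key, object_name, root_filename):
    # The source URN is the unpadded URL-safe base64 of the OSS object ID.
    object_id = f"urn:adsk.objects:os.object:{bucket_key}/{object_name}"
    urn = base64.urlsafe_b64encode(object_id.encode()).decode().rstrip("=")
    return {
        "input": {
            "urn": urn,
            "compressedUrn": True,          # the object is a zip archive
            "rootFilename": root_filename,  # which file inside the zip to translate
        },
        "output": {"formats": [{"type": "svf", "views": ["2d", "3d"]}]},
    }

# One job per zip, each naming its own root file:
job_a = make_job_payload("my-bucket", "ModelA.zip", "ModelA.rvt")
job_b = make_job_payload("my-bucket", "ModelB.zip", "ModelB.rvt")
```

compressedUrn must be true for zip uploads, and rootFilename selects the entry to translate; the only difference between the two jobs is the object key and the root file.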
Right after that I poll the manifest of both zip files every 10 seconds. ModelB.zip is translated 100% within a few seconds, but ModelA.zip never finishes (a few hours so far); it just hangs for no apparent reason. On Friday I thought it was a temporary issue, but it still persists.
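The polling loop is essentially this (a sketch; get_manifest stands in for the GET .../designdata/{urn}/manifest call, injected as a callable so the loop itself can be exercised without the network):

```python
import time

def wait_for_translation(get_manifest, interval=10, timeout=3600):
    # get_manifest: zero-argument callable returning the manifest as a dict.
    deadline = time.time() + timeout
    while time.time() < deadline:
        manifest = get_manifest()
        # "success", "failed" and "timeout" are terminal states;
        # "pending" / "inprogress" mean keep polling.
        if manifest.get("status") in ("success", "failed", "timeout"):
            return manifest
        time.sleep(interval)
    raise TimeoutError("translation did not finish within the allotted time")
```

With a sensible timeout, a job that hangs like ModelA.zip at least fails loudly instead of being polled forever.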
I tried this scenario three times, each time with a different set of files, both today and three days ago, with the same result. This one is the simplest case, and all the files are already present in the cloud. I still have no idea what is going on.
Another odd thing: when I list the bucket objects, the zip files are never present, while other files with non-zip extensions are.
Does anyone have a clue what is causing this, or what a possible workaround could be? This is a serious issue, because it undermines the usability and reliability of the whole API.
The linked Revit files need to be in one zip file with the new v2 API. See this post for more details: http://adndevblog.typepad.com/cloud_and_mobile/2016/07/translate-referenced-files-by-derivative-api.html
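A minimal sketch of that packaging step, assuming the host model and its links just need to sit together at the zip root (file names taken from the question):

```python
import os
import zipfile

def pack_linked_models(zip_path, file_paths):
    # Store the host model and every linked file flat at the archive root
    # so the cross-references can resolve during translation.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in file_paths:
            zf.write(path, arcname=os.path.basename(path))

# e.g. pack_linked_models("ModelA.zip", ["ModelA.rvt", "ModelB.rvt"])
```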
Related
I ran a model derivative job and the status came back: failed. After drilling through the return values, it said that two of the linked dwg files were missing. I added the dwg files, re-zipped and re-uploaded the zip. When I try to run the job, it keeps coming back with the initial failed status. Am I missing something?
Assuming you have buckets, use the x-ads-force header on the POST Job endpoint; if you pass true, it will translate the file again.
In hindsight one could say this is obvious, but it isn't spelled out in any documentation anywhere. Essentially, one needs to DELETE the failed manifest and run a new job. There doesn't seem to be any retry mechanism.
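A sketch of that delete-then-resubmit sequence (session stands in for an authenticated requests-style client; the manifest DELETE endpoint is DELETE .../designdata/{urn}/manifest):

```python
BASE = "https://developer.api.autodesk.com/modelderivative/v2/designdata"

def retranslate(session, urn, job_payload):
    # A failed manifest blocks re-runs: delete it first, then resubmit
    # the job with x-ads-force so the file is translated again.
    session.delete(f"{BASE}/{urn}/manifest")
    return session.post(f"{BASE}/job", json=job_payload,
                        headers={"x-ads-force": "true"})
```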
I am trying to gather all of the files and folders that are descendants of a given folder.
To do this I use files.list() with q="'FOLDER_ID' in parents" and trashed=false, where FOLDER_ID is the ID of the folder I am interested in. As I process the results I keep track of all the folders returned by the request, then repeat the files.list() call using the new folders in the q parameter. I combine multiple folders in one request by using or, and repeat this until no new folders are returned.
Example:
Initial Request: q="('FOLDER_ID' in parents) and trashed=false"
All Subsequent Requests: q="('FOLDER_ID_1' in parents or 'FOLDER_ID_2' in parents or 'FOLDER_ID_3' in parents ...) and trashed=false"
(For more information about how to create queries see Drive REST API - Search for Files)
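The query construction described above can be sketched as (note the operator is in parents):

```python
def build_children_query(folder_ids):
    # Combine several parent-folder IDs into one files.list q expression.
    clause = " or ".join(f"'{fid}' in parents" for fid in folder_ids)
    return f"({clause}) and trashed=false"

# e.g. build_children_query(["FOLDER_ID_1", "FOLDER_ID_2"])
```

Keep in mind the q string has a length limit, so very wide levels of the tree may need to be split across several requests.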
Sometimes this returns all the folders it should, and other times some are left out. This doesn't happen if I remove the q parameter: every single file and folder is returned, none missing.
After some testing and trial and error, I discovered that if I am not receiving all the folders I should be, sending a request with no q seems to "fix" the problem. The next time I run my application using q, all the correct folders are returned.
Other Information:
It is not a permissions issue; I am using the drive.readonly scope.
It is not a pageSize issue as I have tried different values for this and get different results.
It is not a pageToken issue as I make sure to send a request again with the given nextPageToken when it exists.
I am running this on a folder that has a little under 4,000 descendant folders in it and a little under 25,000 descendant files in it.
I feel like this must be a bug related to using multiple folders in the q parameter in a single request, considering that I can perform the exact same process and will get different results seemingly randomly.
I suggest you abandon the approach you've taken. Making so many calls to Drive will take forever and may give you quota problems.
It's much simpler to fetch all the folders in a single query, build an in-memory hierarchy of the folder IDs you're interested in, and then run a second set of queries to fetch files with those parents.
Alternatively, if these files are being created by an application, make them all children of a common dummy parent folder that you can query against.
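A sketch of that in-memory hierarchy, assuming each folder dict carries its id and parents the way files.list can return them:

```python
from collections import defaultdict

def descendant_folder_ids(folders, root_id):
    # folders: flat list of dicts with "id" and "parents" (lists of parent IDs),
    # e.g. one files.list query filtered to the folder MIME type.
    # Returns root_id plus the id of every folder nested beneath it.
    children = defaultdict(list)
    for f in folders:
        for parent in f.get("parents", []):
            children[parent].append(f["id"])
    seen, stack = {root_id}, [root_id]
    while stack:
        for child in children[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen
```

You then batch the returned IDs into a handful of files.list queries instead of one request per folder level.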
I ran into a similar issue when looking for all files a given user owns, e.g.:
'example.user#company.com' in owners and trashed=false
I have about 5,000 files and can usually iterate through all of them via pagination. Some days, however (like today), I only get fewer than 100 results with the query above. When I rewrite my code to fetch files for a given parent ID and then recursively iterate through the sub-folders, I get all the files. Afterwards the original query succeeds again as well.
It looks like some kind of caching issue on the Google Drive server to me.
While trying to import some Android projects into Eclipse, I have noticed that every file in the project is 0 bytes after they are imported. These projects are stored on Drive, so there is some chance of reverting them back to the previous version.
Reverting files to previous versions is easy to do when you've got a few files - you simply do it through a browser. However, I have hundreds of files and I need to fetch one revision back for each. I have been able to download a number of files by hand thus far, but there has to be a better way.
I have asked Google support and actually got a response back, but it's clear that there is no built-in functionality to do this. So I have started looking at the Drive API but I can see that there might be a bit of a learning curve.
Wondering if anyone has run into this before? Ideally I would like to identify one folder and for each file underneath, fetch the last version of the file. If anyone has a good approach for this, I would love to hear it.
thanks!
The pseudocode (sketched here with the Python client against the v2 API) to do what you want is:
# get the id of the folder https://developers.google.com/drive/v2/reference/files/list
fid = service.files().list(q="title = 'foo'").execute()['items'][0]['id']
# get the children of that folder https://developers.google.com/drive/v2/reference/children/list
children = service.children().list(folderId=fid, maxResults=999).execute()['items']
# for each child,
for child in children:
    # get its revisions https://developers.google.com/drive/v2/reference/revisions/list
    revisions = service.revisions().list(fileId=child['id']).execute()['items']
    # iterate, or take revisions[-2] (one version back), whatever works best for you,
    # and use its downloadUrl to fetch the file
With each call that you make, you'll need to provide an access token. For something like this, you can generate an access token using the oauth playground https://developers.google.com/oauthplayground/
You'll also need to register a project at the cloud/api console https://code.google.com/apis/console/
So yes, it's a disproportionate amount of learning for something fairly simple. It's a few minutes' work for somebody familiar with Drive, and I would guess three days for somebody who isn't. You might want to throw it up on freelancer.com.
I'm considering using Google Drive push notifications to replace our current polling process.
I started playing with it, but I have 2 major problems:
Watching changes:
When watching for Drive changes, I get a notification with the new change ID. But when I try to query it using driveService.changes().get(changeId), I intermittently get a 404. Am I doing something wrong here?
Watching files:
When watching for file changes on a folder, I want to know about new files added to that folder, so I expected that when adding/removing files, "x-goog-resource-state" would hold "add"/"remove" while "x-goog-changed" would contain "children".
In reality, "x-goog-changed" does contain "children", but "x-goog-resource-state" is always "update", and there is no extra information about the added/deleted file.
Regarding deleted files, I know I can detect those by watching each file once I have it, but is there a way to get updates about new files in a certain folder?
I was working on a similar project a few months ago. There are two things you can do to monitor changes on Google Drive :
Set Notification Push using : changes().watch()
Set Notification Push using : files().watch()
The 1st case sends you a request for everything that happens on the Drive you are monitoring, with very little information on what exactly has changed.
The 2nd case is less 'spammy', and you get to decide which folder to monitor.
However, the tags on the change type are not accurate. When I was using files().watch() I tested all the use-cases and compared the headers of each case.
My conclusions are:
for a new file (or folder) creation inside yourfolder (yourfolder/newfile) the headers contain:
'X-Goog-Changed': 'properties'
'X-Goog-Resource-State': 'update'
which is the same when you move a file to yourfolder, or when you start following an existing file in your folder.
you get 'X-Goog-Resource-State': 'add' when you share with a user
as you can see, the header tags are not accurate/unique.
Also, note that the push-notification channel will not send you requests for files nested deeper inside yourfolder (yourfolder/folder/files), and the channel will expire at some point.
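Based on the observed combinations above (which are not guaranteed by the API docs), a receiving webhook could classify notifications roughly like this sketch:

```python
def classify_drive_notification(headers):
    # Map push-notification headers onto a rough event type,
    # following the empirically observed header combinations.
    state = headers.get("X-Goog-Resource-State")
    changed = headers.get("X-Goog-Changed", "")
    if state == "sync":
        return "channel created"           # initial handshake message
    if state == "add":
        return "shared with a user"        # observed on sharing, not creation
    if state == "update" and "children" in changed:
        return "child added/removed/updated"
    return state or "unknown"
```

Because the tags are ambiguous, in practice you still have to re-list the folder's children after such a notification to find out what actually changed.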
If you still have any questions, or want to know how to implement the code, let me know : )
The timestamps provided by drive.changes.list and drive.revisions.list sometimes do not match. They are close, but off by a few seconds.
We were trying to use the changes API and then pick the revision whose timestamp matches the one listed in the changes response. We do this instead of picking the head revision because our app does some processing to mark a changed file as processed.
Example output demonstrating the issue is as below:
With the changes API I get back:
"modifiedDate": "2013-07-27T12:58:31.854Z",
With the revisions API
GET https://www.googleapis.com/drive/v2/files/0AnwTzqT0JeG7dDFuQmtfbTNzWTd5eWNobllJa014aGc/revisions?key={YOUR_API_KEY}
This is what I get back from drive.revisions.list
"modifiedDate": "2013-07-27T12:58:29.152Z",
Is this a bug? It's preventing us from trying to make a changes call, and then trying to pick the version of the file corresponding to a change.
Changes.list() shows aggregated changes across the whole Drive. It can't record changes to every file every single second, so it can't be as accurate as the file's revision timestamp. This is not a bug; changes should be treated as a mere reference for what's going on in the Drive. FYI, you might want to use push notifications instead: they monitor file revisions and tell you whenever changes are made to the file.
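Given that skew, one workaround is to match a change to the revision with the nearest modifiedDate within a small tolerance, rather than requiring exact equality; a sketch (timestamps taken from the question, tolerance chosen arbitrarily):

```python
from datetime import datetime, timedelta

def parse_rfc3339(ts):
    # Parse Drive's RFC 3339 timestamps, e.g. "2013-07-27T12:58:31.854Z".
    # (Assumes fractional seconds are present, as in the examples above.)
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ")

def match_revision(change_time, revisions, tolerance=timedelta(seconds=10)):
    # Pick the revision whose modifiedDate is closest to the change's
    # modifiedDate, accepting it only if the gap is within the tolerance.
    target = parse_rfc3339(change_time)
    best = min(revisions,
               key=lambda r: abs(parse_rfc3339(r["modifiedDate"]) - target))
    if abs(parse_rfc3339(best["modifiedDate"]) - target) <= tolerance:
        return best
    return None
```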