I'm looking to allow certain users to clone a collection, and all of its child collections and folders, to a new 'parent' collection.
I think I've got the overall design down, but after reading about many timeout issues, I wanted to see if anyone here has any 'gotchas' to steer clear of. I'm going to allow them to select a single collection from a Listbox and give them the chance to name the new 'parent' collection and the folder it should belong to.
However, what concerns me is iterating through parent > Collection1 > Collection2 > Collection3, and so on, given those timeout issues.
I'm thinking of feeding the folders into arrays, then creating the new folder and copying the docs from the source to the new folder. Sound reasonable? Anyone already invented this wheel?
I don't think there are any gotchas here. But depending on how many collections and files you're copying, you might hit some of the various quotas we're limited to.
I don't know if this fits your use case, but you might have to divide your workload into smaller chunks and do it in multiple runs, setting your script to continue later (depending on the quota, the next day) using the Script Services API.
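As a minimal sketch of that chunked approach in Apps Script: PropertiesService and ScriptApp are real Apps Script services, but copyFolderChunk() and the cursor bookkeeping are hypothetical placeholders for your own copy logic.

// Sketch: resumable, chunked copy in Google Apps Script.
// copyFolderChunk() is a hypothetical helper that copies up to `size` items
// starting at `cursor` and returns how many it actually processed.
function copyNextChunk() {
  var props = PropertiesService.getScriptProperties();
  var cursor = Number(props.getProperty('cursor') || 0);
  var total = Number(props.getProperty('total') || 0);

  var processed = copyFolderChunk(cursor, 50);
  cursor += processed;
  props.setProperty('cursor', String(cursor));

  if (cursor < total) {
    // Schedule the next run instead of risking the execution-time limit.
    ScriptApp.newTrigger('copyNextChunk')
        .timeBased()
        .after(60 * 1000) // continue in a minute; stretch this if quota-bound
        .create();
  }
}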
When using the Forge Data Management API endpoint projects/:project_id/folders/:folder_id/search, we have two problems:
It seems that we sometimes have to wait several minutes (hours?) after a model is uploaded until it can be found by the search.
We often get error 429 "Too Many Requests", even though we make very few calls (fewer than 10 within an hour).
These issues make the endpoint hard to use in production code. Is there anything we can do to improve the success rate? Is Autodesk going to improve the endpoint?
This question is related to How to find cloud Item id of a Revit model?
We are aware of those issues and are working on improving things in those areas. However, I cannot say exactly when those improvements will become available.
In the meantime, depending on your workflow, there are two things that could be of help:
Use webhooks in order to be notified about new files being added to BIM 360/ACC
Use the folder/contents endpoint to find the file you need; it supports filtering just like the folder/search endpoint. You would have to iterate through subfolders if you wanted to look for items in them as well, as sketched below. Newly added files should show up here straight away.
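To illustrate that traversal, here is a rough JavaScript sketch; the folder/contents endpoint is part of the Data Management API, but the project/folder IDs and token handling are placeholders, and real code would also need to follow pagination links.

// Sketch: recursively collect items via the folder/contents endpoint.
// projectId, folderId and token are placeholders you must supply.
async function listItemsRecursive(projectId, folderId, token) {
  const base = 'https://developer.api.autodesk.com/data/v1';
  const res = await fetch(
      `${base}/projects/${projectId}/folders/${folderId}/contents`,
      { headers: { Authorization: `Bearer ${token}` } });
  const { data = [] } = await res.json();
  let items = data.filter(entry => entry.type === 'items');
  // Recurse into subfolders to pick up their items as well.
  for (const folder of data.filter(entry => entry.type === 'folders')) {
    items = items.concat(await listItemsRecursive(projectId, folder.id, token));
  }
  return items;
}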
I'm using Autodesk Forge to integrate with our remodeling tool. In particular, I need to count objects of different families and types and determine which room they actually belong to. I use the Model Derivative API for this purpose. To keep the room/area information, I convert .rvt files to .nwc files as suggested here. However, when I retrieve data with GET /modelderivative/v2/designdata/{urn}/metadata/{guid}/properties, I face the following problems from time to time:
Room information sometimes disappears from Objects for some reason
Objects disappear from result data for some reason (but they seem to exist when I browse them in A360)
I have no idea what the reason for this could be.
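For reference, the call in question looks roughly like this in JavaScript; the urn, guid and token are placeholders, and the 202 handling reflects the documented behaviour of the properties endpoint while extraction is still in progress.

// Sketch: fetch the property database for a translated model.
// urn is the base64-encoded design URN; guid identifies the metadata view.
async function getProperties(urn, guid, token) {
  const url = 'https://developer.api.autodesk.com/modelderivative/v2' +
      `/designdata/${urn}/metadata/${guid}/properties`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  if (res.status === 202) {
    return null; // extraction still running; retry after a delay
  }
  return (await res.json()).data;
}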
I have no explanation for the disappearance of room data or objects for you.
If you can provide a reproducible case demonstrating that, I will gladly pass it on to the development team for analysis.
If you are interested in an immediate reliable solution and full control, which I assume is the case, I would suggest following the second bullet item in the advice provided by Eason in the previous answer that you refer to above:
Extract all the room information and object relationships you are interested in via the Revit API, store that data somewhere yourself, and use it later on wherever you like to your heart's content.
Then you will be completely safe and independent of all other components and their unpredictable behaviour.
If the only information that you need is the room containing each family instance, I can even implement a suitable Revit add-in for you.
Another suggestion that might help, if that is indeed the data you require: determine that information in a Revit add-in and attach it to each family instance in your own personal shared parameter. That will ensure it remains intact through the translation process. AFAIK, all shared parameter data is retained, independent of other behaviour.
I want to provide a user with the ability to cache up to 2,600+ items, by groupings (categories of books, individual books, or possibly even just chapters of a certain book if they don't want the whole book). It is not possible, as far as I can tell, to precache all of these items, because there are 2,600+ of them and will be more in the future; the service worker will time out even with under a couple hundred. And since service workers either get all or none on install (if I understand correctly), do I need to use multiple service workers (with different ids?), or am I thinking about this wrong?
What I am thinking is something like...
<iron-ajax></iron-ajax>
<template is="dom-repeat" items="...">
  <platinum-sw-register auto-register clients-claim skip-waiting>
    <platinum-sw-cache default-cache-strategy="fastest"
        cache-config-file="../someGenerator.php"></platinum-sw-cache>
  </platinum-sw-register>
</template>
In other words:
1. Get a list of wanted URLs via iron-ajax (based upon what the user enables for cache)
2. Iterate through the URLs as groups via dom-repeat
3. Create a service worker with a customized cache-config for the URL group
4. Repeat 2 and 3 until done, then present a toast
That someGenerator.php would return a JSON config setup for the particular group of URLs.
My app is a single-page app, with neon-animated-pages: one page representing categories, one for book listings, one for the table of contents of each book, and then one for each chapter's contents. All of the data is obtained via iron-ajax.
Here are some links to demonstrate the issues:
The App
A large non-functional cache-config generated
I suspect that, in order to avoid service worker errors due to redundancy, or overwriting existing caches, I will need to assign individual ids and include them in the generated cache-configs. Does that sound right?
No, I don't think that's the right approach. <dom-repeat> and creating multiple service workers isn't going to accomplish what you want.
It does look like you're bumping into some service worker-imposed timeouts during your install handler due to the delays in fetching the JSON configuration and performing all of the precaching. Taking a step back, are you sure that you need that entire set of URLs precached?
<platinum-sw> will give you runtime caching as well, so that when a browser loads a given URL when there's a network connection available, the resources will be automatically added to the cache and available offline during subsequent return visits.
There are other approaches that would use either window.caches to cache resources from within your controlled page, or something like postMessage() to communicate a list of additional URLs to cache from your controlled page to your service worker (see the sketch below). Both of those approaches would involve going beyond the default functionality you get from using <platinum-sw> and digging into the internals a bit.
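As a rough illustration of the window.caches approach: the Cache Storage API calls below are standard, but the cache name and URLs are placeholders for whatever grouping the user selects.

// Sketch: cache a user-selected group of URLs from the controlled page.
// 'book-cache-v1' and the urls array are placeholders.
async function cacheBookGroup(urls) {
  const cache = await caches.open('book-cache-v1');
  await cache.addAll(urls); // fetches and stores each URL
}

// Usage: run after the user opts a group into offline access.
cacheBookGroup(['/data/book-1/ch-1.json', '/data/book-1/ch-2.json'])
    .then(() => console.log('Group cached for offline use'))
    .catch(err => console.error('Caching failed:', err));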
I need to fetch a list of all the files in a user's box account, such that the list of files can then be displayed in a table view (iOS).
I have successfully implemented this by recursively calling /folders/{folder id}/items on all the folders in my user's Box.
However, while this works, it's kind of dirty, seeing as how a request is made for each of the user's folders, which could be quite a large number.
Is there any way to get a list of all the files (it's no issue if folders are included, I can ignore those manually) available?
I tried implementing this using search, but I couldn't identify a value for the query parameter that returned everything.
Any help would be appreciated.
Help me, Obi-Wan Kenobi. You're my only hope.
What you are looking for (a recursive call through a Box account) is not available. We have enterprise customers with bajillions of files and millions of folders. Recursively asking for everything would take too long.
What we generally recommend is that you ask for as little as you can, and that you use multiple threads and anticipate what you'll need just a little bit, so that you can deliver a high-performance user-interface to your end-users.
For example, ?fields=item_collection is expensive to retrieve and can add a lot to a payload. It can double, or 10x, the time it takes to get a payload back from the Box API. Most UIs don't need to show all the items inside every folder, so they are better off asking for ?fields= with only the attributes they actually need.
You can make your application responsive by making the smallest possible call. Of course there is a balance. Mobile networks have high latency, and sometimes the next API call to show some extra thing is slow. But for a folder tree, you can get high performance by retrieving only the current level, displaying it, and then starting to fetch one level down while the user is looking at the first level.
Same goes for displaying thumbnails. If a user drills into a folder and starts looking at thumbnails for pictures, there's a good chance they'll want to see other thumbnails in that same folder. Your app should anticipate that, and start to pull one or two extras down in the background. Yes, it means more API calls, but your users will give your app a higher rating for being fast.
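For illustration, fetching a single level with a minimal field list might look like this in JavaScript; the endpoint and the fields/limit/offset parameters are standard Box API v2, but the token and folder id are placeholders.

// Sketch: fetch one folder level with a minimal field list.
// folderId and token are placeholders; page through with offset/limit.
async function fetchFolderLevel(folderId, token, offset = 0) {
  const url = `https://api.box.com/2.0/folders/${folderId}/items` +
      `?fields=name,type&limit=100&offset=${offset}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const page = await res.json();
  // page.entries holds this level; compare offset + entries.length with
  // page.total_count to decide whether to fetch another page.
  return page;
}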
My real-time document allows the user to edit the file name within the editor (much like Google's own apps). I represent this as a collaborative string so all collaborators see the file renames as soon as possible.
I'm trying to determine the best and most efficient way to keep this collaborative string in sync with the actual file name. There are two scenarios to consider:
In Editor Changes
If a user edits the document name within the editor, we need to use the Drive API to push that change out to the file on Google Drive. To avoid race conditions, it is best if only one of the collaborators pushes the change out. The easiest way to do this seems to be checking whether the rename event was local.
I also found it best to add a delay so we are not pushing the rename out to the Drive API with every character change. If a few seconds pass with no more name changes, it pushes the change out. This all seems to work well.
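A minimal sketch of that debounce, assuming Drive API v3's files.update endpoint and placeholder fileId/token values:

// Sketch: debounce local rename events, then push the final name to Drive.
// fileId and token are placeholders.
let renameTimer = null;
function onLocalRename(newName, fileId, token) {
  clearTimeout(renameTimer);
  renameTimer = setTimeout(() => {
    fetch(`https://www.googleapis.com/drive/v3/files/${fileId}`, {
      method: 'PATCH',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ name: newName })
    });
  }, 3000); // wait a few seconds of inactivity before pushing
}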
External Changes
The harder one, and the one I would like advice on, is the case when the file name is changed externally. For example, if the user renamed the file within the Drive interface itself. We want this change to update our collaborative string to match.
My application is entirely client-side, so I can't use webhook push notifications. My only solution is to poll the file name every X seconds (currently set to 10). But this presents the following problems:
It is API intensive. If you have 4 collaborators who keep the screen open for 8 hours, that is 11,520 API calls. If my app has lots of users with lots of documents, I could see how this might push me past my API limits.
To avoid race conditions (and reduce API calls), we only want one collaborator to check for changes and update the collaborative string if the file name has changed. But how do you pick one when collaborators might join or exit at any time? Currently, each collaborator checks, whenever the set of collaborators changes, whether they are the "leader": the collaborator whose session id is the highest. This seems to work, but it all feels fairly hacky. Also, if collaborators join close together, I wonder whether a race condition could cause multiple collaborators to think they are the leader.
Is there an easier way? A Realtime API function I am missing?
It would be ideal if the Realtime API just provided a method that stored the document name. Any time the Realtime API checks for mutations, it could grab the latest document name.
I think you've identified the options. There isn't any built in functionality currently to sync it via the Realtime API specifically.
Personally, I'd probably back off the poll time a lot; it's probably not critical that the title is always exactly up to date, so polling every few minutes is probably sufficient and would greatly reduce your QPS.
In terms of identifying a "leader", I can't think of anything better than something deterministic based on the session id. As long as each collaborator rechecks on every session join/leave event, I don't think there should be any issues.
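To make that concrete, here is a sketch of deterministic leader election using the Realtime API's collaborator list; getCollaborators(), sessionId, isMe and the COLLABORATOR_JOINED/COLLABORATOR_LEFT events are part of that API, while updatePollingRole() is a hypothetical hook for starting or stopping the poll.

// Sketch: the collaborator with the highest session id is the leader.
function amLeader(doc) {
  const collaborators = doc.getCollaborators();
  const me = collaborators.find(c => c.isMe);
  return collaborators.every(c => c.sessionId <= me.sessionId);
}

// Recheck on every join/leave so leadership transfers deterministically.
doc.addEventListener(gapi.drive.realtime.EventType.COLLABORATOR_JOINED,
    () => updatePollingRole(amLeader(doc)));
doc.addEventListener(gapi.drive.realtime.EventType.COLLABORATOR_LEFT,
    () => updatePollingRole(amLeader(doc)));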