I am following the steps from this tutorial, trying to compare the parameter values of the same element across two different revisions of the same model inside ACC, using the Data Exchange API. The steps I followed are:
I created a Data Exchange inside ACC Docs (which corresponds to REVISION_1) from a Revit model, Model A;
I changed Revit Model A by adding a new window to it, and then uploaded the same model to ACC Docs again (which corresponds to REVISION_2);
With my exchangeId and collectionId in hand, I successfully collected the snapshot revisions from the Data Exchange by calling the following in Postman:
https://developer.api.autodesk.com/exchange/v1/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/snapshots:exchange/revisions
After that, I collected the assets that changed from REVISION_1 to REVISION_2 by calling:
https://developer.api.autodesk.com/exchange/v1/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/assets:sync?filters=exchange.snapshot.fromRevision==<REVISION_1>;exchange.snapshot.toRevision==<REVISION_2>
Again, the results are as expected: only the data that changed between the two versions is returned.
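For reference, the same filtered call can also be scripted; here is a minimal sketch (assuming Node 18 or newer for the global fetch; the token, IDs and revision values are placeholders):

// Minimal sketch of the filtered assets:sync call (all IDs and the token are placeholders).
async function getChangedAssets() {
  const base = 'https://developer.api.autodesk.com/exchange/v1';
  const filters = 'exchange.snapshot.fromRevision==<REVISION_1>;exchange.snapshot.toRevision==<REVISION_2>';
  const url = base + '/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/assets:sync?filters=' + filters;
  const response = await fetch(url, {
    headers: { Authorization: 'Bearer <ACCESS_TOKEN>' },
  });
  console.log(await response.json());
}

getChangedAssets();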
So, my main question is: since we now have two revisions, how do I retrieve the original data from REVISION_1 only? I've tried several request options without success.
By the way, it is curious that retrieving the assets from the current Data Exchange by calling
https://developer.api.autodesk.com/exchange/v1/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/assets:sync now shows only information from REVISION_1, and the elements modified in REVISION_2 seem to have disappeared. Does anyone know why this happens?
I've tried the following requests without success:
https://developer.api.autodesk.com/exchange/v1/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/assets:sync?filters=exchange.snapshot.fromRevision==<REVISION_1>
https://developer.api.autodesk.com/exchange/v1/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/assets:sync?filters=exchange.snapshot.toRevision==<REVISION_1>
https://developer.api.autodesk.com/exchange/v1/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/exchange.snapshot.fromRevision==latest-1
https://developer.api.autodesk.com/exchange/v1/collections/<COLLECTION_ID>/exchanges/<EXCHANGE_ID>/exchange.snapshot.toRevision==latest-1
I expected to retrieve the assets only from REVISION_1, but it seems that, once a new revision such as REVISION_2 exists, the data from the previous revision is somehow overwritten.
I'm currently playing around with the Couchbase Sync Gateway and have built a demo app.
What is the intended behavior if a user logs in with the same username on a different device (which has an empty database) or if he deleted the local database?
I'm expecting that all the data from the server should get synced back to the clients.
Is this correct?
My problem is that if I delete the database or log in from a different device, nothing gets synced.
OK, I figured it out, and it's exactly how I thought it would be.
If I log in from a different device, I get all the data synced automatically.
My problem was the missing sync function. I thought it would use a default and route all documents to the public channel automatically.
I'm now using the following simple sync function:
"sync": `function (doc, oldDoc) {
channel('!');
access('demo#example.com', '*');
}`
This simply routes all documents to the public channel and grants my demo user access to all channels.
I don't think this should be used in production, but it's a good starting point for playing around.
Now everything is working fine.
Edit: I've now found the missing info:
https://docs.couchbase.com/sync-gateway/current/configuration-properties.html#databases-this_db-sync
If you don't supply a sync function, Sync Gateway uses the following default sync function
...
The channels property is an array of strings that contains the names of the channels to which the document belongs. If you do not include a channels property in a document, the document does not appear in any channels.
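For reference, the default sync function shown in the elided part of that doc is (if I remember it correctly) essentially just:

function (doc) {
  // Route the document to whatever channels its own `channels` property lists.
  channel(doc.channels);
}

That matches the behavior quoted above: a document without a channels property is not routed to any channel, so clients never pull it.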
I have 2 Firestore collections: crews/{crew}/clients and crews/{crew}/pros. When a new client registers and a new document is created, I want to search the pros collection for pros working in the matching sector and living within 5 km of the new client, and send a notification to those filtered pros. In order to implement that in Cloud Functions,
I installed geofirestore using npm and saved crews/{crew}/pros like this:
https://i.stack.imgur.com/YwAFO.png
but after executing this function, I got an error message in the Cloud Functions console like this:
Error: Registration token(s) provided to sendToDevice() must be a non-empty string or a non-empty array
Is there anything wrong with my firestore data structure? Thank you.
I found that this data structure was correct, because I could get the notification to work with it. I then tried another structure, like this:
https://i.stack.imgur.com/vtB5X.jpg
However, it gave me another error.
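For what it's worth, the first sendToDevice() error usually just means the value passed in as the registration token was empty or undefined, rather than the Firestore structure itself being invalid. A minimal sketch of guarding against that (the trigger path, the fcmToken field name and the notification payload are all assumptions here, and the geofirestore radius query is omitted):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.notifyPros = functions.firestore
  .document('crews/{crew}/clients/{clientId}')
  .onCreate(async (snap, context) => {
    // The geofirestore "within 5 km" query would go here; this sketch just reads all pros.
    const pros = await admin.firestore()
      .collection(`crews/${context.params.crew}/pros`)
      .get();

    // Keep only non-empty token strings so sendToDevice() never receives an empty value.
    const tokens = pros.docs
      .map(doc => doc.get('fcmToken'))
      .filter(token => typeof token === 'string' && token.length > 0);

    if (tokens.length === 0) {
      console.log('No valid registration tokens found; skipping notification.');
      return null;
    }

    const payload = {
      notification: { title: 'New client nearby', body: 'A new client registered in your sector.' },
    };
    return admin.messaging().sendToDevice(tokens, payload);
  });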
Objects (files) disappear from my bucket after a day. The translated data is still there, since I immediately get "success" for the translation status if I upload the same file again.
Is there some time-limit parameter? I can't find any here: https://forge.autodesk.com/en/docs/data/v2/reference/http/buckets-:bucketKey-objects-:objectName-PUT/
Well, if you get the details of the bucket using the following URL:
https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey/details
Take note of the policyKey property. I have a hunch that, if you have been following the Autodesk tutorials, you have created a bucket with the transient policyKey, which marks any object that has existed for 24 hours for deletion.
Extra note: based on my past experience, files that are marked for deletion are not deleted immediately.
See also:
Bucket retention policy: https://forge.autodesk.com/en/docs/data/v2/developers_guide/retention-policy/
Creating a bucket: https://forge.autodesk.com/en/docs/data/v2/reference/http/buckets-POST/
(Take note of policyKey in the request body.)
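If the transient policy is indeed the culprit, the fix is to create (and upload to) a bucket with a longer retention policy. A rough sketch of that POST (Node 18+ fetch; the access token and bucket key are placeholders):

// Create a bucket whose objects are kept until you delete them.
async function createPersistentBucket() {
  const response = await fetch('https://developer.api.autodesk.com/oss/v2/buckets', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer <ACCESS_TOKEN>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      bucketKey: 'my-unique-bucket-key',  // must be globally unique and lowercase
      policyKey: 'persistent',            // 'transient' = 24 h, 'temporary' = 30 days, 'persistent' = until deleted
    }),
  });
  console.log(response.status, await response.json());
}

createPersistentBucket();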
I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I keep a simple tally which increments by 1 on every URL fetch, and that gives me the number I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints within a single getData() call (they should all be the same).
I can't post the code here (client project), but it's substantially the same framework as the Data Connector code in Google's docs. I have caching and backoff implemented.
Does anyone have any ideas, or has anyone experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the request is for semantic type detection, it will feature sampleExtraction: true.
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimension/metric configurations, then GDS might fire a separate getData call for each of them.
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom connector is built to retrieve data via API calls to third-party services (data which is agnostic to the request.fields property sent by GDS), then these API calls are multiplied by N+1 (where N = the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I came up with a workaround using the cache.
The graph's getData request (typically requesting more fields than the search filters) is the only one allowed to query the API endpoint. Before it starts doing so, it stores a key in the cache: "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It then retrieves the API responses, paginating in a loop, and buffers the results.
Once it finishes, it deletes the cache key "cache_{hashOfReportParameters}_building" and caches the final merged results it buffered inside "cache_{hashOfReportParameters}_final".
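Roughly like this (assuming the same cache wrapper as above, and that it exposes a remove()-style method; bufferedResults is a hypothetical array holding the merged rows):

if (enableCache) {
  // Publish the merged results first, then release the "building" lock.
  cache.putString('cache_{hashOfReportParameters}_final', JSON.stringify(bufferedResults));
  cache.remove('cache_{hashOfReportParameters}_building');
  Logger.log('Cache built and published.');
}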
When it comes to the filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for requests that look like search filters / widgets going after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (the date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
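A minimal sketch of that hashing step (the exact set of "moving parts" is up to your connector; request is the getData() request object):

// Hash everything that can change the underlying data set, but NOT request.fields,
// so the graph and its filters end up with the same hash.
var movingParts = JSON.stringify({
  dateRange: request.dateRange,
  configParams: request.configParams
});
var hashOfReportParameters = Utilities.base64EncodeWebSafe(
  Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, movingParts)
);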
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final". In case that fails, it's always a good idea to have a backup plan, which is to allow the call to traverse the API again; we have encountered roughly a 2% error rate when retrieving data we had cached.
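Something along these lines (fetchAllPagesFromApi is a hypothetical helper standing in for your paginated API traversal):

var rows;
var cached = enableCache ? cache.getString('cache_{hashOfReportParameters}_final') : null;
if (cached !== null) {
  rows = JSON.parse(cached);             // cache hit: reuse the merged result
} else {
  rows = fetchAllPagesFromApi(request);  // cache miss (or the ~2% failure case): hit the API again
}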
With the cached result (or the buffered API responses), you just transform the response into the schema GDS needs (which differs between graphs and filters).
As you start implementing this, you'll notice yet another problem: the Apps Script cache is limited to a maximum of 100 KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have encountered similar needs in the past and have come up with a smart solution: split the big chunk you need cached across multiple cache keys, and glue them back together into one object when retrieving it.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
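The idea behind that helper is roughly the following (a simplified sketch, not the linked library's actual API; cache here is a plain Apps Script CacheService instance):

// Store a large string across several keys, each below the ~100 KB per-key limit.
function putLarge(cache, key, value, ttlSeconds) {
  var chunkSize = 90 * 1024;
  var chunkCount = Math.ceil(value.length / chunkSize);
  for (var i = 0; i < chunkCount; i++) {
    cache.put(key + '_' + i, value.substr(i * chunkSize, chunkSize), ttlSeconds);
  }
  cache.put(key + '_chunks', String(chunkCount), ttlSeconds);
}

// Reassemble the chunks; treat any missing chunk as a full cache miss.
function getLarge(cache, key) {
  var chunkCount = Number(cache.get(key + '_chunks'));
  if (!chunkCount) return null;
  var parts = [];
  for (var i = 0; i < chunkCount; i++) {
    var part = cache.get(key + '_' + i);
    if (part === null) return null;
    parts.push(part);
  }
  return parts.join('');
}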
I cannot share the final solution we implemented, as it is too specific to a client, but I hope this at least gives you a good idea of how to approach the problem.
Caching the full API result is a good idea in general, to avoid round trips and server load for no good reason, as long as near-real-time data is good enough for your needs.
I'd like to have a list of just the current titles of all questions on one of the smaller (fewer than 10,000 questions) Stack Exchange sites. I tried the interactive utility here: https://api.stackexchange.com/docs/questions. It both reports the result as JSON at the bottom and produces the requesting URL at the top. For example:
https://api.stackexchange.com/2.2/questions?order=desc&sort=activity&tagged=apples&site=cooking
returns this JSON in my browser:
{"items":[{"tags":["apples","crumble"],"owner":{ ...
...
...],"has_more":true,"quota_max":300,"quota_remaining":252}
What is the quota? It was 10,000 in one search on one site, but suddenly it's only 300 here.
I won't be doing this very often. What I'd like is the quickest way to edit that URL (or a similar one) so I can get a list of all of the titles on a small site. I don't understand how to use paging, and I don't need any of the other fields. I don't care if I get them, but I'm thinking that if I exclude them I can get more at once.
If I need to script it, Python (2.7) is my preferred (only) language.
quota_max is the number of requests your application is allowed per day. 300 is the default for an unregistered application. This used to be mentioned directly on the page describing throttles, but seems to have been removed. Here is historical information describing the default.
To increase this to 10,000, you need to register an application and then authenticate by passing an access token in your script.
To get all titles on a site, you can use a Python library to help:
StackAPI (the answer below uses this library; disclaimer: I wrote it)
Py-StackExchange
SEAPI
StackPy
Assuming you have registered your application and authenticated, we can proceed.
First, install StackAPI (documentation):
pip install stackapi
This code will then grab the 10,000 most recent questions (max_pages * page_size) from the site hardwarerecs. Each page costs one API hit, so the more items per page, the fewer API calls.
from stackapi import StackAPI
SITE = StackAPI('hardwarerecs')
SITE.page_size = 100
SITE.max_pages = 100
# Filter to only get question title and link
filter = '!BHMIbze0EQ*ved8LyoO6rNjkuLgHPR'
questions = SITE.fetch('questions', filter=filter)
In the questions variable is a dictionary that looks very similar to the API output, except that the library did all the paging for you. Your data is in questions['data'] and, in this case, contains a list of dictionaries that look like this:
[
...
{u'link': u'http://hardwarerecs.stackexchange.com/questions/29/sound-board-to-replace-a-gl2200-in-a-house-of-worship-foh-setting',
u'title': u'Sound board to replace a GL2200 in a house-of-worship FOH setting?'},
{ u'link': u'http://hardwarerecs.stackexchange.com/questions/31/passive-gps-tracker-logger',
u'title': u'Passive GPS tracker/logger'}
...
]
This result set is limited to only the title and the link because of the filter we applied. You can find the appropriate filter by adjusting what fields you want in the web UI and copying the filter field.
The hardwarerecs value passed when creating SITE is the first part of the site's domain URL. Alternatively, you can find it by looking at the api_site_parameter for your site at the /sites endpoint.