Why does my forge bucket not show any objects? - autodesk-forge

I have followed this tutorial and successfully uploaded my file to: https://developer.api.autodesk.com/oss/v2/buckets/timmyisabucket/objects/audobon_arch.rvt
I can verify the upload by calling https://developer.api.autodesk.com/modelderivative/v2/designdata/dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6dGltbXlpc2FidWNrZXQvYXVkb2Jvbl9hcmNoLnJ2dA==/metadata/c63a6682-a73c-a2a8-a08c-dfeee25781f4/properties, which successfully returns all the object properties.
However, when I ask the api to list all the objects inside the bucket, it simply returns an empty list!
The endpoint I'm calling: https://developer.api.autodesk.com/oss/v2/buckets/timmyisabucket/objects
The response:
{
"items": []
}
Where am I going wrong?
Thanks

Just to close this off, the helpful comment by Xiaodong Liang led me to the fact that my bucket was created with the incorrect retention policy of "Transient", meaning that everything in it gets deleted after 24 hours.
It should have been Temporary or Persistent.
Retention policy
Transient
Think of this type of storage as a cache. Use it for
ephemeral results. For example, you might use this for objects that
are part of producing other persistent artifacts, but otherwise are
not required to be available later.
Objects older than 24 hours are removed automatically. Each upload of
an object is considered unique, so, for example, if the same rendering
is uploaded multiple times, each of them will have its own retention
period of 24 hours.
Temporary
This type of storage is suitable for artifacts produced for
user-uploaded content where after some period of activity, the user
may rarely access the artifacts.
When an object has reached 30 days of age, it is deleted.
Persistent
Persistent storage is intended for user data. When a file
is uploaded, the owner should expect this item to be available for as
long as the owner account is active, or until he or she deletes the
item.
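For anyone hitting the same thing: the fix is to create the bucket with a non-transient policyKey in the first place. A minimal sketch (Node 18+ with global fetch; the access token is a placeholder) against the OSS buckets POST endpoint:

// Sketch: create a bucket with a "persistent" retention policy instead of
// the default "transient". accessToken is a placeholder for a valid token.
async function createPersistentBucket(accessToken) {
  const res = await fetch('https://developer.api.autodesk.com/oss/v2/buckets', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      bucketKey: 'timmyisabucket',   // bucket keys must be globally unique
      policyKey: 'persistent',       // 'transient' | 'temporary' | 'persistent'
    }),
  });
  return res.json();
}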

Related

Couchbase Sync-Gateway Multiple Clients

I'm currently playing around with the Couchbase Sync-Gateway and have built a demo app.
What is the intended behavior if a user logs in with the same username on a different device (which has an empty database), or if they delete the local database?
I'm expecting that all the data from the server should get synced back to the clients.
Is this correct?
My problem is that if I delete the database or log in from a different device, nothing gets synced.
OK, I figured it out and it's exactly how I thought it would be.
If I log in from a different device, I get all the data synced automatically.
My problem was the missing sync function. I thought it would use a default and route all documents to the public channel automatically.
I'm now using the following simple sync function:
"sync": `function (doc, oldDoc) {
channel('!');
access('demo#example.com', '*');
}`
This will simply route all documents to the public channel and grant my demo-user access to it.
I think this shouldn't be used in production but it's a good starting point for playing around.
Now everything is working fine.
Edit: I've now found the missing info:
https://docs.couchbase.com/sync-gateway/current/configuration-properties.html#databases-this_db-sync
If you don't supply a sync function, Sync Gateway uses the following default sync function
...
The channels property is an array of strings that contains the names of the channels to which the document belongs. If you do not include a channels property in a document, the document does not appear in any channels.
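For context, my understanding of those docs is that the default sync function simply routes each document to whatever its channels property says, roughly:

// Default behaviour when no sync function is configured (per the linked docs):
// documents go to the channels listed in their `channels` property.
// A document without a `channels` property ends up in no channel at all,
// which is why nothing appeared to sync for me.
function (doc, oldDoc) {
  channel(doc.channels);
}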

Object disappears in Bucket (Autodesk Forge)

Objects (files) disappear from Buckets after a day. The translated data is still there, since I immediately get "success" on the translation status if I upload the same file again.
Is there some time-limit param? I can't find any: https://forge.autodesk.com/en/docs/data/v2/reference/http/buckets-:bucketKey-objects-:objectName-PUT/
Well, if you get the details of the bucket using the following URL:
https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey/details
Take note of the policyKey property. I have a hunch that if you have been following the Autodesk tutorials, you created your bucket with the transient policyKey, which marks all models that have existed for 24 hours for deletion.
Extra note: in my experience, files that are marked for deletion are not deleted immediately.
See also:
Bucket retention policy: https://forge.autodesk.com/en/docs/data/v2/developers_guide/retention-policy/
Creating a bucket: https://forge.autodesk.com/en/docs/data/v2/reference/http/buckets-POST/
^Take note of policyKey in the request body
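A quick sketch of that check (Node 18+ with global fetch; the access token and bucket key are placeholders):

// Sketch: fetch bucket details and inspect the retention policy.
async function getBucketPolicy(accessToken, bucketKey) {
  const res = await fetch(
    `https://developer.api.autodesk.com/oss/v2/buckets/${bucketKey}/details`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const details = await res.json();
  console.log(details.policyKey); // 'transient' objects are purged after 24 hours
  return details.policyKey;
}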

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the correct number I expect to see per getData() call.
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers of requests for different endpoints within a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining the semantic type for your fields. If the query is for semantic type detection, the request will feature sampleExtraction: true:
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimensions/metrics configuration then GDS might fire multiple getData calls for each of them.
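If you want to act on that, one option is to short-circuit getData for semantic detection and serve a small canned sample instead of hitting the API. A sketch, where buildSampleResponse / buildRealResponse are hypothetical helpers and the exact location of the sampleExtraction flag should be confirmed against the reference above:

// Sketch: avoid a real API round-trip when Data Studio is only sampling data
// for semantic type detection. The exact nesting of sampleExtraction on the
// request object should be checked against the reference documentation.
function getData(request) {
  var isSampleExtraction =
    request.sampleExtraction === true ||
    (request.scriptParams && request.scriptParams.sampleExtraction === true);

  if (isSampleExtraction) {
    // Hypothetical helper: return a tiny hand-built sample matching the
    // requested fields instead of calling the third-party API.
    return buildSampleResponse(request.fields);
  }

  // Hypothetical helper: the normal path that calls the API (with caching).
  return buildRealResponse(request);
}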
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter retrieves data from third-party services via API calls that are agnostic to the request.fields property sent by GDS, then these API calls are multiplied by N+1 (where N = the number of widgets / search filters your report is implementing).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's request for getData (typically requesting more fields than the search filters) will be the only one allowed to query the API endpoint. Before it starts doing so, it stores a key in the cache, "cache_{hashOfReportParameters}_building", set to true:
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve API responses, paginating in a loop, and buffer the results.
Once it has finished, it will delete the cache key "cache_{hashOfReportParameters}_building" and cache the final merged results it buffered so far under "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for anything that looks like a search filter / widget that is after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total Requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
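A sketch of how that hash could be computed in Apps Script (which request properties you include is up to your report; here I only assume dateRange and configParams):

// Sketch: build a cache-key hash from everything that can change the data,
// e.g. the date range and any connector config parameters.
function hashOfReportParameters(request) {
  var parts = JSON.stringify({
    dateRange: request.dateRange,
    configParams: request.configParams
  });
  var digest = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, parts);
  // Convert the signed byte array to a hex string usable inside a cache key.
  return digest.map(function (b) {
    return ((b & 0xFF) + 0x100).toString(16).slice(1);
  }).join('');
}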
Now the best part, as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case we fail, it's always a good idea to have a backup plan, which would be to allow it to traverse the API again. We have encountered a roughly 2% error rate retrieving data we cached...
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
As you start implementing this, you'll notice yet another problem... Google's cache is limited to a maximum of 100KB per key. There is, however, no limit on the number of keys you can cache... and fortunately others have encountered similar needs in the past and have come up with a smart solution: splitting the one big chunk you need cached into multiple cache keys, and gluing them back together into one object when retrieving it.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
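The gist of that approach, as a rough sketch (not the linked implementation itself), using the standard CacheService methods:

// Sketch of the chunking idea: split a large string value across several
// cache keys of < 100KB each, and stitch them back together on read.
var CHUNK_SIZE = 90 * 1024; // stay comfortably below the ~100KB per-key limit

function putLargeValue(cache, key, value, expirySeconds) {
  var chunks = [];
  for (var i = 0; i < value.length; i += CHUNK_SIZE) {
    chunks.push(value.substring(i, i + CHUNK_SIZE));
  }
  cache.put(key + '_count', String(chunks.length), expirySeconds);
  chunks.forEach(function (chunk, index) {
    cache.put(key + '_' + index, chunk, expirySeconds);
  });
}

function getLargeValue(cache, key) {
  var count = Number(cache.get(key + '_count'));
  if (!count) return null;
  var parts = [];
  for (var i = 0; i < count; i++) {
    var chunk = cache.get(key + '_' + i);
    if (chunk === null) return null; // a chunk expired: treat it as a cache miss
    parts.push(chunk);
  }
  return parts.join('');
}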
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general: it avoids unnecessary round trips and server load, as long as near-realtime data is good enough for your needs.

Storing data in FIWARE Object Storage

I'm building an application that stores files into the FIWARE Object Storage. I don't quite understand the correct way of storing files into the storage.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows 2 ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The 1st version confuses me, because when I first looked at the only Node.js module that works with Object Storage (repo: fiware-object-storage), it seemed to use the 1st version. Since the module was making calls to the old (v1.1) API instead of the presumably newest one (v2.0) referenced by the Python example, I'm not sure whether that is an outdated way of doing it or not.
As I played more with the module, I realised it didn't work and the code was a total mess, so I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage which I sent to fiware-lab-help@lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Do I need to worry about the auth token expiring? I presume it is not necessary to authenticate again every time we interact with the storage. The authentication should happen once when the server starts up (we create an instance) and the library keeps the token internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant is just a one-time deal; later you can use it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section
describes how to get a valid token assuming an identity management
system compatible with OpenStack Keystone is being used. If the
username, password and tenant details are known, only step 3 is
required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1. It is commented out in the example. It should be removed from the documentation.
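For a Node.js rewrite, that "version 2" upload is just a raw-body PUT with the X-Auth-Token header. A rough sketch (Node 18+ with global fetch; the names are placeholders, mirroring the Python snippet in the question):

// Sketch of the "version 2" upload in Node: PUT the raw object body to
// <storageUrl>/<container>/<object> with the Keystone token in X-Auth-Token.
async function storeObject(storageUrl, token, containerName, objectName, body) {
  const res = await fetch(`${storageUrl}/${containerName}/${objectName}`, {
    method: 'PUT',
    headers: { 'X-Auth-Token': token },
    body: body, // raw contents, exactly as in "version 2" of the Python example
  });
  if (!res.ok) {
    throw new Error(`Object upload failed: ${res.status}`);
  }
}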
(1) The token will be valid for some period of time. This could be an hour or a day, depending on the setup. This period of time should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed.
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
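For the token question (1), a simple approach is to cache the token together with its expiry and re-authenticate lazily. A sketch, where authenticate() stands in for whatever Keystone call your library makes and the expiry field name depends on the Keystone version you are talking to:

// Sketch: keep the auth token with its expiry and re-authenticate only when
// it is missing or about to expire. authenticate() is a placeholder.
let cachedToken = null;

async function getToken() {
  const soon = Date.now() + 60 * 1000; // refresh a minute before expiry
  if (cachedToken && cachedToken.expiresAt > soon) {
    return cachedToken.value;
  }
  const auth = await authenticate(); // placeholder: POST credentials to Keystone
  cachedToken = {
    value: auth.tokenId,                // placeholder field names; adjust to
    expiresAt: Date.parse(auth.expires) // whatever your auth response returns
  };
  return cachedToken.value;
}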
I solved my troubles and created an NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge

AngularJS form wizard save progress

I have a service in AngularJS that generates all the steps needed and the current state of each step (done, current, show, etc.), plus an associated directive that uses the service and displays its data. But there are 2 steps that are divided into 4 and 3 sub-steps respectively:
Step one
Discounts
Activities
Duration
Payment Length
Step two
Identification
Personal data
Payment
How can I "save" the state of my form in case the person leaves the site and comes back later? Is it safe to use localStorage? I'm no providing support for IE6 or 7. I thought of using cookies, but that can end up being weak (or not)
Either local storage or cookies should be fine. I doubt this will be an issue, but keep in mind that both have a size limit. Also, it goes without saying that the form state will only be restored if the user returns on the same browser, and without having deleted cookies / local storage.
Another option could be to save the information server side. If the user is signed in, you can make periodic AJAX calls with the data and store the state on the server. When the user finishes all steps, you can make an AJAX call telling the server to delete any saved data it might have. This allows you to restore state even if the user returns on a different browser, as long as he is signed in.
Regardless of what direction you go with this, you can use jQuery's serialize method to serialize the form into a string and save it using your choice of storage.
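A minimal sketch of the localStorage route in AngularJS (the storage key and model layout are placeholders): save the model whenever it changes and restore it when the controller loads.

// Sketch: persist the wizard model in localStorage and restore it on load.
// 'wizardState' and the model shape are placeholders for your own service.
var app = angular.module('wizardApp', []);

app.controller('WizardCtrl', function ($scope) {
  var STORAGE_KEY = 'wizardState';

  // Restore any previously saved progress.
  var saved = localStorage.getItem(STORAGE_KEY);
  $scope.wizard = saved ? angular.fromJson(saved) : { step: 1, data: {} };

  // Save on every change (deep watch), so leaving the site loses nothing.
  $scope.$watch('wizard', function (newValue) {
    localStorage.setItem(STORAGE_KEY, angular.toJson(newValue));
  }, true);

  // Clear the saved state once the user completes all the steps.
  $scope.finish = function () {
    localStorage.removeItem(STORAGE_KEY);
  };
});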