Does anyone know if it is possible to get all the bucket files' data in some kind of array or similar? I'm thinking of building a viewer where you can load a different view, containing a different model, when the user clicks on the desired model (thumbnail).
Yes, if I do not misunderstand your requirement. You can get all your buckets with the GET buckets API; you will get a bucket array like this:
{
"items": [
{
"bucketKey": "mybucket1",
"createdDate": 1508056179005,
"policyKey": "persistent"
},
{
"bucketKey": "mybucket2",
"createdDate": 1502411682779,
"policyKey": "transient"
},
{
"bucketKey": "mybucket3",
"createdDate": 1502420840319,
"policyKey": "transient"
}
]
}
Then you can iterate over all these buckets and get all the files under each bucket with the GET buckets/:bucketKey/objects API, which will give you an array of items like this:
{
"items": [
{
"bucketKey": "mybucket1",
"objectKey": "mytestbim1.rvt",
"objectId": "urn:adsk.objects:os.object:mybucket1/mytestbim1.rvt",
"sha1": "248205b7609ca95c04e4d60fee2ad7b6bd9a2uy2",
"size": 17113088,
"location": "https://developer.api.autodesk.com/oss/v2/buckets/mybucket1/objects/mytestbim1.rvt"
},
{
"bucketKey": "mybucket1",
"objectKey": "mytestbim2.rvt",
"objectId": "urn:adsk.objects:os.object:mybucket1/mytestbim2.rvt",
"sha1": "248205b7609ca95c04e4d60fee2ad7b6bd8a2322",
"size": 17113088,
"location": "https://developer.api.autodesk.com/oss/v2/buckets/mybucket1/objects/mytestbim2.rvt"
}
]
}
The most important value is "objectId": base64-encode it to obtain the URN. With this URN you can get all the derivatives, and you can also load it in the Forge Viewer once the model has been translated to SVF.
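If it helps, here is a rough Python sketch of that flow (the requests-based calls and the token placeholder are my assumptions; pagination of the OSS endpoints is omitted):

import base64
import requests

ACCESS_TOKEN = "..."  # placeholder: a 2-legged OAuth token with bucket:read and data:read scopes
BASE = "https://developer.api.autodesk.com/oss/v2"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Get all buckets (pagination omitted for brevity)
buckets = requests.get(f"{BASE}/buckets", headers=HEADERS).json()["items"]

models = []
for bucket in buckets:
    # 2. Get all objects (files) under each bucket
    objects = requests.get(
        f"{BASE}/buckets/{bucket['bucketKey']}/objects", headers=HEADERS
    ).json()["items"]
    for obj in objects:
        # 3. Base64-encode the objectId to get the URN used by the Model Derivative API / Viewer
        urn = base64.urlsafe_b64encode(obj["objectId"].encode()).decode().rstrip("=")
        models.append({"name": obj["objectKey"], "urn": urn})

# "models" can now back the thumbnail list in your viewer page
print(models)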
We have a code example, the Forge Node.js Boilerplate; you can check project 5 to see if that is something you are interested in.
Hope it helps.
I want to know how to use NGSI-LD to upload an image even though these static files are not stored in the Orion Context Broker or Mongo. Is there a way to configure NGSI-LD to forward the images to an AWS S3 bucket or another location?
As you correctly identified, binary files are not a good candidate for context data, and should not be held directly within a context broker. The usual paradigm would be as follows:
Imagine you have a number plate reader library linked to Kurento and wish to store the images of vehicles as they pass. In this case the event from the media stream should cause two separate actions:
Upload the raw image to a storage server
Upsert the context data to the context broker including an attribute holding the URI of the stored image.
Doing things this way means you can confirm that the image is safely stored, and then send the following:
{
"vehicle_registration_number": {
"type": "Property",
"value": "X123RPD"
},
"image_download": {
"type": "Property",
"value": "http://example.com/url/to/image"
}
}
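As a rough illustration of those two actions, here is a minimal Python sketch (the S3 bucket name, the local Orion-LD endpoint, and the on_plate_read hook are all assumptions for the example):

import boto3      # assumed: AWS credentials already configured
import requests

ORION_LD = "http://localhost:1026"   # assumed context broker endpoint
BUCKET = "vehicle-images"            # assumed S3 bucket for the raw images

def on_plate_read(image_bytes: bytes, vrn: str, camera_id: str) -> None:
    # 1. Upload the raw image to a storage server (here: S3)
    key = f"{camera_id}/{vrn}.jpg"
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=image_bytes)
    image_url = f"https://{BUCKET}.s3.amazonaws.com/{key}"

    # 2. Upsert the context data, including the URI of the stored image
    entity = {
        "id": f"urn:ngsi-ld:Camera:{camera_id}",
        "type": "Camera",
        "vehicle_registration_number": {"type": "Property", "value": vrn},
        "image_download": {"type": "Property", "value": image_url},
        "@context": "https://smartdatamodels.org/context.jsonld",
    }
    requests.post(
        f"{ORION_LD}/ngsi-ld/v1/entityOperations/upsert",
        json=[entity],
        headers={"Content-Type": "application/ld+json"},
    )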
The alternative would be to simply include some link back to the source file somehow as metadata:
{
"vehicle_registration_number": {
"type": "Property",
"value": "X123RPD",
"origin": {
"type": "Property",
"value": "file://localimage"
}
}
}
Then, if you have a registration on vehicle_registration_number which somehow links back to the server holding the original file, it could upload the image after the context broker has been updated (and then do another upsert).
Option one is simpler. Option two would make more sense if the registration is narrower. For example, only upload images of VRNs for cars whose speed attribute is greater than 70 km/h.
Ontologically you could say that Device has a relationship to a Photograph which would mean that Device could have an additional latestRecord attribute:
{
"latestRecord": {
"type": "Relationship",
"object": "urn:ngsi-ld:CatalogueRecordDCAT-AP:0001"
}
}
And create a separate entity holding the details of the Photograph itself, using a standard data model such as CatalogueRecordDCAT-AP, which is defined here. Attributes such as source and sourceMetadata help define the location of the raw file.
{
"id": "urn:ngsi-ld:CatalogueRecordDCAT-AP:0001",
"type": "CatalogueRecordDCAT-AP",
"dateCreated": "2020-11-02T21:25:54Z",
"dateModified": "2021-07-02T18:37:55Z",
"description": "Speeding Ticket",
"dataProvider": "European open data portal",
"location": {
"type": "Point",
"coordinates": [
36.633152,
-85.183315
]
},
"address": {
"streetAddress": "2, rue Mercier",
"addressLocality": "Luxembourg",
"addressRegion": "Luxembourg",
"addressCountry": "Luxembourg",
"postalCode": "2985",
"postOfficeBoxNumber": ""
},
"areaServed": "European Union and beyond",
"primaryTopic": "Public administration",
"modificationDate": "2021-07-02T18:37:55Z",
"applicationProfile": "DCAT Application profile for data portals in Europe",
"changeType": "First version",
"source": "http://example.com/url/to/image"
"sourceMetadata": {"type" :"jpeg", "height" : 100, "width": 100},
"#context": [
"https://smartdatamodels.org/context.jsonld"
]
}
I am trying to get the dbIds of Navisworks items which have a specific property.
The NWDs are translated to SVF2 using the default settings.
When I query the property DB, I get different dbIds than the dbIds that are returned from the {urn}/metadata/{guid}/properties endpoint.
Query with results:
Snippet of the corresponding json data:
[{
"objectid": 11,
"name": "3D Solid",
"externalId": "0/1/0",
"properties": {
"Neanex Connector": {
"ibcNAME": "Cone-1",
"ibcGUID": "6453c4c067d1476db9c68c4066e291c4"
}
}
},
{
"objectid": 2,
"name": "SomeSolids-1.nwd",
"externalId": "0",
"properties": {
"Neanex Connector": {
"ibcNAME": "SomeSolids-1",
"ibcGUID": "2613704afaeb4e68bcb1600a737df0b7"
}
}
},
{
"objectid": 27,
"name": "3D Solid",
"externalId": "1/5/0",
"properties": {
"Neanex Connector": {
"ibcNAME": "Wedge-2",
"ibcGUID": "25d5fb2daebe4a3fb3eb1c671012d5f3"
}
}
},
{
"objectid": 3,
"name": "SomeSolids-2.nwd",
"externalId": "1",
"properties": {
"Neanex Connector": {
"ibcNAME": "SomeSolids-2",
"ibcGUID": "425212b48a45457f9f9d5192e84bb0a4"
}
}
}]
Summary:

SQLite dbId | External ID | json dbId | GUID
2           | 0           | 2         | 2613704afaeb4e68bcb1600a737df0b7
6           | 0/1/0       | 11        | 6453c4c067d1476db9c68c4066e291c4
15          | 1           | 3         | 425212b48a45457f9f9d5192e84bb0a4
27          | 1/5/0       | 27        | 25d5fb2daebe4a3fb3eb1c671012d5f3
Which dbIds are the correct ones?
Why the differences?
Regards
Wolfgang
As far as I know, the dbIds in the SQLite DB will be SVF dbIds. If you want to get SVF dbIds after translating to SVF2, you will need to call the APIs below with the extra header x-ads-derivative-format: fallback. Otherwise, the objectIds you get from the properties API will be SVF2 dbIds.
https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/urn-metadata-guid-GET/
https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/urn-metadata-guid-properties-GET/
Two crucial pieces of information:
There are 'old' SVF dbIds, and newer SVF2 dbIds, see:
https://forge.autodesk.com/blog/model-derivative-svf2-enhancements-part-1-viewer
https://forge.autodesk.com/blog/model-derivative-svf2-enhancements-part-2-metadata
https://forge.autodesk.com/blog/temporary-workaround-mapping-between-svf1-and-svf2-ids
https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/job-POST/#headers > x-ads-derivative-format
The dbIds in the SQLite properties DB are always 'old' SVF dbIds
Using the header x-ads-derivative-format: fallback works fine, and you always get the 'old' SVF dbIds.
Note:
If you use this header with one derivative (URN), you must use it consistently across the following endpoints, whenever you reference the same derivative.
POST job (for OBJ output)
GET {urn}/metadata/{guid}
GET {urn}/metadata/{guid}/properties
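For example, retrieving the properties with SVF (not SVF2) ids might look like this (a rough Python sketch; the urn and guid placeholders must come from your own derivative):

import requests

ACCESS_TOKEN = "..."   # placeholder: token with data:read scope
URN = "<base64-encoded design urn>"
GUID = "<metadata view guid returned by GET {urn}/metadata>"

resp = requests.get(
    f"https://developer.api.autodesk.com/modelderivative/v2/designdata/{URN}/metadata/{GUID}/properties",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        # Request the 'old' SVF ids so the objectids match the SQLite property DB
        "x-ads-derivative-format": "fallback",
    },
)
print(resp.json())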
I'm currently trying to extract data out of Log Analytics through its REST API. I have been successful at using a Copy Data activity to store the response in an Azure Data Lake Gen 2 account.
The format is roughly similar to the example from the Log Analytics API Reference Page.
{
"tables": [
{
"name": "PrimaryResult",
"columns": [
{
"name": "Category",
"type": "string"
},
{
"name": "count_",
"type": "long"
}
],
"rows": [
[
"Administrative",
20839
],
[
"Recommendation",
122
],
[
"Alert",
64
],
[
"ServiceHealth",
11
]
]
}
]
}
My dataset is much larger, with more columns, more values, etc., but the principles are the same.
What I am trying to do is generate a new JSON file that holds the same table, but as multiple documents in the same file, e.g.:
[{
"Category": "Administrative",
"count_": 20839
},
{
"Category": "Recommendation",
"count_": 122
},
{
"Category": "Alert",
"count_": 64
},
{
"Category": "ServiceHealth",
"count_": 11
}]
The output of this would be stored back into the data lake and then ideally could be used as a source for a copy activity to go into an Azure SQL Database.
I have tried accomplishing this using the Data Flow flatten transformation, but haven't been successful so far: when trying to map the column names, it doesn't see the individual column names, only the level of the document where the column names are defined.
How would I go about flattening the dataset so it appears as desired? Is this an unrealistic expectation of Data flows or is this task more suitable for something like Azure Databricks?
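For reference, the transformation I'm after is easy to express outside Data Flows; a rough Python sketch like this (e.g. in a Databricks notebook, assuming the raw response has already been copied to a file) produces the desired shape:

import json

# Assumed: the Copy Data activity wrote the raw Log Analytics response to this file
with open("loganalytics_response.json") as f:
    response = json.load(f)

table = response["tables"][0]
column_names = [c["name"] for c in table["columns"]]

# Pair each row's values with the column names to build one document per row
documents = [dict(zip(column_names, row)) for row in table["rows"]]

with open("flattened.json", "w") as f:
    json.dump(documents, f, indent=2)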
Our customer is trying to show an IFC model through the method https://developer.api.autodesk.com/oss/v2/buckets/bucket_name/objects/model_name. The IFC model is huge, about 100 km of road design. He can't show it in Forge; he gets an error that the model is empty, although it is 120 MB. The model is in IFC4. Do you have any documentation on supported IFC classes, or on the model size that can be shown in Forge? He has tried to divide the model into smaller parts, but nothing happens.
Milan Nemec, Graitec
A few things to check:
Did you call the conversion job to translate the IFCs to SVF first? Using this endpoint here?
Is the conversion completed? Try here to query the progress.
And if everything works out the manifest response should contain SVF derivatives similar to:
{
"type": "manifest",
"hasThumbnail": "true",
"status": "success",
"progress": "complete",
"region": "US",
EDIT
For IFC models exported from Revit, try switching to the IFC loader by specifying this in your job payload:
{
"input": {
"urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c2JzYjIzMzMvc2JiLmlmYw"
},
"output": {
"formats": [
{
"type": "svf",
"views": ["3d", "2d"],
"advanced": {
"switchLoader": true
}
}
]
}
}
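If useful, here is a rough Python sketch of submitting that job and polling the manifest (the token and scopes are placeholders; error handling omitted):

import requests

ACCESS_TOKEN = "..."  # placeholder: 2-legged token with data:read, data:write, data:create scopes
BASE = "https://developer.api.autodesk.com/modelderivative/v2/designdata"
AUTH = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

job = {
    "input": {"urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c2JzYjIzMzMvc2JiLmlmYw"},
    "output": {
        "formats": [
            {"type": "svf", "views": ["3d", "2d"], "advanced": {"switchLoader": True}}
        ]
    },
}

# Submit the translation job
requests.post(f"{BASE}/job", headers={**AUTH, "Content-Type": "application/json"}, json=job)

# Check the manifest; repeat until "status" is "success" or "failed"
manifest = requests.get(f"{BASE}/{job['input']['urn']}/manifest", headers=AUTH).json()
print(manifest["status"], manifest.get("progress"))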
I am trying to create a domain and upload sample data which looks like this:
[
{
"type": "add",
"id": "1371964",
"version": 1,
"lang": "eng",
"fields": {
"id": "1371964",
"uid": "1200983280",
"time": "2013-12-23 13:00:26",
"orderid": "1200983280",
"callerid": "66580662",
"is_called": "1",
"is_synced": "1",
"is_sent": "1",
"allcaller": [
{
"sno": "1085770",
"uid": "1387783883.30547",
"lastfun": null,
"callduration": "00:00:46",
"request_id": "1371964"
}
]
}
}]
When I upload the sample data while creating a domain, CloudSearch does not accept it.
If I remove the allcaller array, it accepts it without issue.
If CloudSearch does not allow object arrays, how should I format this JSON?
Just found out after searching the AWS forums: CloudSearch does not allow nested JSON (object arrays) :(
https://forums.aws.amazon.com/thread.jspa?messageID=405879
Time to try Elasticsearch.
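That said, if staying on CloudSearch is a requirement, one workaround (a rough sketch, not an official recommendation) is to flatten the nested objects into multi-value top-level fields before uploading, since CloudSearch does support array-valued literal fields:

import json

with open("sample.json") as f:
    batch = json.load(f)

for doc in batch:
    fields = doc["fields"]
    # Hoist each nested allcaller object into flat, array-valued fields,
    # e.g. allcaller_sno, allcaller_callduration, which CloudSearch can index
    for call in fields.pop("allcaller", []):
        for key, value in call.items():
            if value is not None:
                fields.setdefault(f"allcaller_{key}", []).append(value)

with open("sample_flat.json", "w") as f:
    json.dump(batch, f, indent=2)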