Autodesk - Model Properties API - Out of Memory

Autodesk Construction Cloud - Model Properties API.
I'm trying to index and query an uploaded Navisworks model. The model size is 241 MB.
When trying to index the properties, the indexing fails as follows:
"indexes": [
{
...
"type": "INDEX",
"state": "FAILED",
...
"errors": [
{
"type": "RuntimeError",
"title": "OutOfMemoryException",
"detail": "Exception of type 'System.OutOfMemoryException' was thrown."
}
]
}
]
Does the indexing have a size limit on the Autodesk backend?
What can be done in such a case?
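For reference, the status above is what polling the index resource returns - a minimal TypeScript sketch, assuming the v2 Model Properties endpoint layout (projectId, indexId, and token are placeholders):

const BASE = "https://developer.api.autodesk.com/construction/index/v2";

async function getIndexState(projectId: string, indexId: string, token: string) {
  // GET the index resource; a failed run surfaces the errors array shown above.
  const res = await fetch(`${BASE}/projects/${projectId}/indexes/${indexId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const index = await res.json();
  return { state: index.state, errors: index.errors };
}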

Related

FIWARE - Orion Context Broker as Context Provider

I'm having a hard time understanding how context providers work in the Orion Context Broker.
I followed the examples in the step-by-step guide written by Jason Fox. However, I still do not fully understand what happens in the background and how exactly the context broker creates the POST request from the registration. Here is what I am trying to do:
I have a WeatherStation that provides sensor data for a neighborhood:
{
  "id": "urn:ngsi-ld:WeatherStation:001",
  "type": "Device:WeatherStation",
  "temperature": {
    "type": "Number",
    "value": 20.5,
    "metadata": {}
  },
  "windspeed": {
    "type": "Number",
    "value": 60.0,
    "metadata": {}
  }
}
Now I would like the WeatherStation to be a context provider for all buildings:
{
  "id": "urn:ngsi-ld:building:001",
  "type": "Building"
}
Here is the registration that I am trying to use:
{
  "id": null,
  "description": "Random Weather Conditions",
  "provider": {
    "http": {
      "url": "http://localhost:1026/v2"
    },
    "supportedForwardingMode": "all"
  },
  "dataProvided": {
    "entities": [
      {
        "id": "null",
        "idPattern": ".*",
        "type": "Building",
        "typePattern": null
      }
    ],
    "attrs": [
      "temperature",
      "windspeed"
    ],
    "expression": null
  },
  "status": "active",
  "expires": null,
  "forwardingInformation": null
}
The context broker accepts both entities and the registration without any error.
Since I have a multi-tenant setup, I use one fiware_service for the complete neighborhood, but every building would later have a separate fiware_servicepath. Hence, the weather station has a different service path than the building, although I also tried putting them both on the same path.
For now I used the same headers for all entities:
{
  "fiware-service": "filip",
  "fiware-servicepath": "/testing"
}
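For reference, this is roughly how I create the registration with those headers (a sketch; broker URL and header values as above):

// Sketch: POSTing the registration with the multi-tenant headers.
declare const registration: unknown; // the registration payload shown above

await fetch("http://localhost:1026/v2/registrations", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "fiware-service": "filip",
    "fiware-servicepath": "/testing",
  },
  body: JSON.stringify(registration),
});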
Here is the log of the context broker (version: 3.1.0):
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[212]: Request forwarded (regId: 614cd2b511c25270060d873a): POST http://localhost:1026/v2/op/query, request payload (87 bytes): {"entities":[{"idPattern":".*","type":"Building"}],"attrs":["temperature","windspeed"]}, response payload (2 bytes): [], response code: 200
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[130]: Request received: POST /v2/op/query?options=normalized%2Ccount&limit=1000, request payload (55 bytes): {"entities": [{"idPattern": ".*", "type": "Building"}]}, response code: 200
The log says that it receives the request and forwards it as expected. However, as I understand it, this would simply point to the same building entity again; it is effectively circular forwarding. I also cannot tell anything about the headers of the request.
I do not understand how the forwarded request from the building can actually query the weather station for information. When I query my building, I still only receive the entity with no properties of its own:
{
  "id": "urn:ngsi-ld:building:001",
  "type": "Building"
}
I also tried varying the url of the registration, but with no success.
Is this scenario actually possible with the current implementation? It would be very useful.
Is there any example of this that also includes the headers?
I know that I could simply use a reference, but that would put more work on the user.
Thanks for any help on this.
It is messy, but you could achieve this via a subscription. Hold the weather station as a separate entity in the context broker and poll or push updates into the entity. The subscription would fire whenever the data changes and make two NGSI requests:
1. Find all entities which have a Relationship servicedBy=WeatherStationX
2. Run an upsert on all entities to add a Property to each entity:
{
  "temperature": {
    "type": "Property",
    "value": 7,
    "unitCode": "CEL",
    "observedAt": "XXXXX",
    "providedBy": "WeatherStation1"
  }
}
Where observedAt comes either from the payload of the weather station or the notification timestamp.
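As a rough sketch, the notification handler could make those two NGSI-LD calls like this (the broker URL, the servicedBy relationship, and the Building type are illustrative; @context headers omitted for brevity):

const BROKER = "http://localhost:1026/ngsi-ld/v1";

async function propagate(stationId: string, temperature: number, observedAt: string) {
  // 1. Find all entities with a Relationship servicedBy=<station>.
  const q = encodeURIComponent(`servicedBy=="${stationId}"`);
  const res = await fetch(`${BROKER}/entities?type=Building&q=${q}`);
  const buildings: { id: string }[] = await res.json();

  // 2. Upsert the temperature Property onto each of them.
  const payload = buildings.map((b) => ({
    id: b.id,
    type: "Building",
    temperature: {
      type: "Property",
      value: temperature,
      unitCode: "CEL",
      observedAt,
      providedBy: { type: "Relationship", object: stationId },
    },
  }));
  await fetch(`${BROKER}/entityOperations/upsert?options=update`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}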
Within the existing IoT Agents, provisioning the link attribute allows a device to propagate measures to a second entity (e.g. this Thermometer entity is measuring temperature for an associated Building entity):
{
  "entity_type": "Device",
  "resource": "/iot/d",
  "protocol": "PDI-IoTA-UltraLight",
  ..etc
  "attributes": [
    {
      "object_id": "l",
      "name": "temperature",
      "type": "Float",
      "metadata": {
        "unitCode": {"type": "Text", "value": "CEL"}
      }
    }
  ],
  "static_attributes": [
    {
      "name": "controlledAsset",
      "type": "Relationship",
      "value": "urn:ngsi-ld:Building:001",
      "link": {
        "attributes": ["temperature"],
        "name": "providedBy",
        "type": "Building"
      }
    }
  ]
}
At the moment the logic just links directly one-to-one, but it would be possible to raise a PR to check for an array and update multiple entities in an upsert - the relevant section of code is here.

Converting JSON Table Array Data in Azure Data Factory from Log Analytics REST API JSON Response to Multiple JSON Documents in the Same File

I'm currently trying to extract data out of Log Analytics through its REST API. I have been successful at using a Copy Data activity to store the response in an Azure Data Lake Gen 2 account.
The format is roughly similar to the example from the Log Analytics API Reference Page.
{
  "tables": [
    {
      "name": "PrimaryResult",
      "columns": [
        {
          "name": "Category",
          "type": "string"
        },
        {
          "name": "count_",
          "type": "long"
        }
      ],
      "rows": [
        [
          "Administrative",
          20839
        ],
        [
          "Recommendation",
          122
        ],
        [
          "Alert",
          64
        ],
        [
          "ServiceHealth",
          11
        ]
      ]
    }
  ]
}
My dataset is much larger, with more columns, more values, etc., but the principles are the same.
What I am trying to do is generate a new JSON file that holds the table data as multiple JSON documents in the same file, e.g.:
[
  {
    "Category": "Administrative",
    "count_": 20839
  },
  {
    "Category": "Recommendation",
    "count_": 122
  },
  {
    "Category": "Alert",
    "count_": 64
  },
  {
    "Category": "ServiceHealth",
    "count_": 11
  }
]
The output of this would be stored back into the data lake and then ideally could be used as a source for a copy activity to go into an Azure SQL Database.
I have tried accomplishing this using Data Flows flattening, but haven't been successful so far: when trying to map the column name, it doesn't see individual column names, just the level of the document where the column names are defined.
How would I go about flattening the dataset so it appears as desired? Is this an unrealistic expectation of Data Flows, or is this task more suitable for something like Azure Databricks?
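For comparison, the transformation I'm after is just zipping each row with the column names - a minimal TypeScript sketch of the logic (e.g. as a pre-processing step in an Azure Function or a Databricks job instead of a Data Flow; type names are mine):

interface Column { name: string; type: string; }
interface Table { name: string; columns: Column[]; rows: unknown[][]; }
interface LogAnalyticsResponse { tables: Table[]; }

function flatten(response: LogAnalyticsResponse): Record<string, unknown>[] {
  // Pair each row value with the column name at the same index.
  return response.tables.flatMap((table) =>
    table.rows.map((row) =>
      Object.fromEntries(table.columns.map((col, i) => [col.name, row[i]]))
    )
  );
}
// flatten(apiResponse) yields [{ Category: "Administrative", count_: 20839 }, ...]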

Autodesk Forge IFC import failed

Our customer is trying to show an IFC model through the endpoint https://developer.api.autodesk.com/oss/v2/buckets/bucket_name/objects/model_name. The IFC model is huge, about 100 km of road design. He can't show it in Forge; he gets an error that the model is empty, although the file is 120 MB. The model is in IFC4. Do you have any documentation on supported IFC classes, or on the model size limit for viewing in Forge? He tried dividing the model into smaller parts, but nothing happened.
Milan Nemec, Graitec
A few things to check:
Did you call the conversion job to translate the IFCs to SVF first? Using this endpoint here?
Is the conversion completed? Try here to query the progress.
And if everything works out, the manifest response should contain SVF derivatives similar to:
{
  "type": "manifest",
  "hasThumbnail": "true",
  "status": "success",
  "progress": "complete",
  "region": "US",
  ...
}
EDIT
For IFC models exported from Revit, try switching to the IFC loader by specifying this in your job payload:
{
  "input": {
    "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6c2JzYjIzMzMvc2JiLmlmYw"
  },
  "output": {
    "formats": [
      {
        "type": "svf",
        "views": ["3d", "2d"],
        "advanced": {
          "switchLoader": true
        }
      }
    ]
  }
}

How to view an NWD of point cloud data with the Forge Viewer

I read point cloud data with Navisworks, output an NWD, and converted it to SVF.
The results were as follows:
{
  "type": "warning",
  "code": "Navisworks-PointUnsupported",
  "message": "Point and snappoint are not supported"
},
{
  "type": "warning",
  "code": "Navisworks-EmptyFile",
  "message": "There is no geometry that can be displayed"
}
Does Forge not support point cloud data?
I would be happy just viewing the point cloud data as a single object in the Forge Viewer; is that impossible? I don't even need the attribute information, I just want to look at it...

Autodesk Forge Viewer get bucket files to display multiple views

Does anyone know if it is possible to get all the bucket files' data in some kind of array or similar? I'm thinking of building a viewer where you can load a different view, containing a different model, when the user clicks on the desired model (thumbnail).
Yes, if I do not misunderstand your requirement. You can get all your buckets with the GET buckets API; you will get a bucket array like this:
{
  "items": [
    {
      "bucketKey": "mybucket1",
      "createdDate": 1508056179005,
      "policyKey": "persistent"
    },
    {
      "bucketKey": "mybucket2",
      "createdDate": 1502411682779,
      "policyKey": "transient"
    },
    {
      "bucketKey": "mybucket3",
      "createdDate": 1502420840319,
      "policyKey": "transient"
    }
  ]
}
Then you can iterate over these buckets to get all the files under each bucket with the GET buckets/:bucketKey/objects API; it will provide you an array of items like this:
{
  "items": [
    {
      "bucketKey": "mybucket1",
      "objectKey": "mytestbim1.rvt",
      "objectId": "urn:adsk.objects:os.object:mybucket1/mytestbim1.rvt",
      "sha1": "248205b7609ca95c04e4d60fee2ad7b6bd9a2uy2",
      "size": 17113088,
      "location": "https://developer.api.autodesk.com/oss/v2/buckets/mybucket1/objects/mytestbim1.rvt"
    },
    {
      "bucketKey": "mybucket1",
      "objectKey": "mytestbim2.rvt",
      "objectId": "urn:adsk.objects:os.object:mybucket1/mytestbim2.rvt",
      "sha1": "248205b7609ca95c04e4d60fee2ad7b6bd8a2322",
      "size": 17113088,
      "location": "https://developer.api.autodesk.com/oss/v2/buckets/mybucket1/objects/mytestbim2.rvt"
    }
  ]
}
The most important value is the "objectId": after base64 encoding, it becomes the URN. You can get all the derivatives with this URN, and you can also load the URN in the Forge Viewer after it has been translated to SVF.
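Putting it together, a minimal Node.js/TypeScript sketch (pagination and error handling omitted; assumes a valid 2-legged token):

// Sketch: gather every object across all buckets and derive its Viewer URN.
const OSS = "https://developer.api.autodesk.com/oss/v2";

async function listAllModels(token: string) {
  const headers = { Authorization: `Bearer ${token}` };
  const { items: buckets } = await (await fetch(`${OSS}/buckets`, { headers })).json();

  const models: { objectKey: string; urn: string }[] = [];
  for (const bucket of buckets) {
    const { items } = await (
      await fetch(`${OSS}/buckets/${bucket.bucketKey}/objects`, { headers })
    ).json();
    for (const obj of items) {
      // Base64-encode the objectId (unpadded) to get the URN the Viewer expects.
      const urn = Buffer.from(obj.objectId).toString("base64").replace(/=+$/, "");
      models.push({ objectKey: obj.objectKey, urn });
    }
  }
  return models; // e.g. feed these into your thumbnail list
}

Once a model is translated, each entry can be loaded in the viewer with Autodesk.Viewing.Document.load("urn:" + urn, onSuccess, onFailure).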
We have a code example, the Forge Node.js Boilers; you can check project 5 to see if that is something you are interested in.
Hope it helps.