FIWARE Upload Image

I want to know how to use NGSI-LD to upload an image, even though static files like these are not stored in the Orion Context Broker or MongoDB. Is there a way to configure NGSI-LD to forward the images to an AWS S3 bucket or another location?

As you correctly identified, binary files are not a good candidate for context data, and should not be held directly within a context broker. The usual paradigm would be as follows:
Imagine you have a number plate reader library linked to Kurento and wish to store the images of vehicles as they pass. In this case the event from the media stream should cause two separate actions:
1. Upload the raw image to a storage server.
2. Upsert the context data to the context broker, including an attribute holding the URI of the stored image.
Doing things this way means you can confirm that the image is safely stored, and then send the following:
{
"vehicle_registration_number": {
"type": "Property",
"value": "X123RPD"
},
"image_download": {
"type": "Property",
"value": "http://example.com/url/to/image"
}
}
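For instance, the two actions could be plain HTTP requests - a sketch, where the bucket, the pre-signed URL and the entity id are hypothetical:
PUT https://my-bucket.s3.amazonaws.com/plates/X123RPD.jpg?X-Amz-Signature=... HTTP/1.1
Content-Type: image/jpeg

<binary image data>

POST /ngsi-ld/v1/entityOperations/upsert HTTP/1.1
Content-Type: application/ld+json

[{
"id": "urn:ngsi-ld:Vehicle:X123RPD",
"type": "Vehicle",
"vehicle_registration_number": { "type": "Property", "value": "X123RPD" },
"image_download": { "type": "Property", "value": "https://my-bucket.s3.amazonaws.com/plates/X123RPD.jpg" },
"@context": ["https://smartdatamodels.org/context.jsonld"]
}]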
The alternative would be simply to include a link back to the source file as metadata:
{
"vehicle_registration_number": {
"type": "Property",
"value": "X123RPD",
"origin": {
"type": "Property",
"value": "file://localimage"
}
}
}
Then, if you have a registration on vehicle_registration_number which links back to the server holding the original file, that server could upload the image after the context broker has been updated (and then do another upsert).
Option one is simpler. Option two would make more sense if the registration is narrower. For example, only upload images of VRNs for cars whose speed attribute is greater than 70 km/h.
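For option two, the trigger could be an NGSI-LD subscription along these lines (a sketch - the Vehicle type, the q filter and the notification endpoint are assumptions):
{
"description": "Notify the image server when a speeding vehicle's plate is read",
"type": "Subscription",
"entities": [{ "type": "Vehicle" }],
"watchedAttributes": ["vehicle_registration_number"],
"q": "speed>70",
"notification": {
"endpoint": {
"uri": "http://image-server.example.com/notify",
"accept": "application/json"
}
}
}
The notified server can then upload the image and perform the follow-up upsert described above.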
Ontologically you could say that Device has a relationship to a Photograph which would mean that Device could have an additional latestRecord attribute:
{
"latestRecord": {
"type": "Relationship",
"object": "urn:ngsi-ld:CatalogueRecordDCAT-AP:0001"
}
}
You would then create a separate entity holding the details of the Photograph itself, using a standard data model such as CatalogueRecordDCAT-AP, which is defined here. Attributes such as source and sourceMetadata help define the location of the raw file.
{
"id": "urn:ngsi-ld:CatalogueRecordDCAT-AP:0001",
"type": "CatalogueRecordDCAT-AP",
"dateCreated": "2020-11-02T21:25:54Z",
"dateModified": "2021-07-02T18:37:55Z",
"description": "Speeding Ticket",
"dataProvider": "European open data portal",
"location": {
"type": "Point",
"coordinates": [
36.633152,
-85.183315
]
},
"address": {
"streetAddress": "2, rue Mercier",
"addressLocality": "Luxembourg",
"addressRegion": "Luxembourg",
"addressCountry": "Luxembourg",
"postalCode": "2985",
"postOfficeBoxNumber": ""
},
"areaServed": "European Union and beyond",
"primaryTopic": "Public administration",
"modificationDate": "2021-07-02T18:37:55Z",
"applicationProfile": "DCAT Application profile for data portals in Europe",
"changeType": "First version",
"source": "http://example.com/url/to/image"
"sourceMetadata": {"type" :"jpeg", "height" : 100, "width": 100},
"#context": [
"https://smartdatamodels.org/context.jsonld"
]
}

Related

FIWARE, NGSI-LD - Understand the @context

I am creating a data model for a particular application and did not start from any base model. Given that, is the @context below sufficient?
"#context": [
"https://schema.lab.fiware.org/ld/context",
"https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context-v1.3.jsonld"
]
My data model is not complicated; these are the only properties and entity that are at all "complex":
"address": {
"type": "Property",
"value": {
"streetAddress": "",
"postalCode": "",
"addressLocality": "",
"addressCountry": ""
}
},
"location": {
"type": "Point",
"coordinates": [
,
]
},
{
"id": "urn:ngsi-ld:MeasurementSensor:",
"type": "MeasurementSensor",
"measurementVariable": {
"type": "Property",
"value": "Temperature"
},
"measurementValue": {
"type": "Property",
"value": 32.0,
"unitCode": "ÂșC",
"observedAt": "2022-05-10T11:09:00.000Z"
},
"refX": {
"type": "Relationship",
"object": "urn:ngsi-ld:"
},
"#context": [
"https://schema.lab.fiware.org/ld/context",
"https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context-v1.3.jsonld"
]
}
If you are using your own custom vocabulary, you should declare your types and properties in your own JSON-LD @context. For instance:
{
"#context": [
{
"MeasurementSensor": "https://example.org/my-types/MesaurementSensor"
},
"https://schema.lab.fiware.org/ld/context",
"https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context-v1.3.jsonld"
]
}
It also seems you are not using URNs properly; you should check that. unitCode is broken as well, since it must follow the UN/CEFACT unit codes (for degrees Celsius that is "CEL").
Nonetheless, I would not recommend defining your own vocabulary for sensors, given that there are existing vocabularies such as SAREF or W3C SOSA that can and should be reused.
I'm not a data model expert but I do know a thing or two about NGSI-LD and NGSI-LD brokers.
The @context you use is an array of "https://schema.lab.fiware.org/ld/context" and v1.3 of the core context.
"https://schema.lab.fiware.org/ld/context" is in turn an array of "https://fiware.github.io/data-models/context.jsonld" and v1.1 of the core context.
And "https://fiware.github.io/data-models/context.jsonld" doesn't define any of the three terms you are using, so there is no need to supply any context for them. The terms will be expanded using the default URL of the core context (the value of the @vocab member of the core context defines the default URL).
An NGSI-LD broker has the core context built in, so you don't need to pass it. Do yourself a favor and get faster responses by not passing the core context to the broker.
And, if you need a user context, pass it in the HTTP Link header instead.
Host it somewhere (an NGSI-LD broker offers that service), so you don't force the poor broker to parse the @context in each and every request.
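For example, a query against the broker using a hosted user context could look like this (the context URL is hypothetical):
GET /ngsi-ld/v1/entities?type=MeasurementSensor HTTP/1.1
Host: broker.example.com
Accept: application/json
Link: <https://example.org/my-context.jsonld>; rel="http://www.w3.org/ns/json-ld#context"; type="application/ld+json"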
Lastly, do follow Jose Manuel's recommendations and use standard names for your attributes (and a valid value for unitCode).

How to create a Design Automation workitem with a composite design Revit file with nested references

Given the following situation, where "->" is an Xref reference in either overlay or attachment mode:
TOPHOST.rvt -> LINKA.rvt -> LINKA1.rvt
I know that I can use .../:version_id/relationships/refs to retrieve references from TOPHOST.rvt, which includes the reference to LINKA.rvt.
I can repeat this with a query for LINKA.rvt, which will return the reference to LINKA1.rvt.
This way, I can gather all information necessary to create a workitem for design automation, following this guide on how to include links (see "Host RVT file with linked models").
This works for versions that are not marked as isCompositeDesign (not documented in versions/:version_id, but isCompositeDesign is a key in attributes.extension.data with a boolean value). For composite versions, the .../:version_id/relationships/refs API returns empty data, i.e. no references!
This is a huge problem, as in active projects, items are isCompositeDesign=true most of the time.
How can I get the reference information necessary to create a Design Automation workitem in scenarios with composite designs?
Update Apr. 28, 2023
It seems related to a known issue, FDM-3977. I will update here once our engineering team gets back.
====================
If your target version URN shows that it's a composite design in its attributes.extension.data.isCompositeDesign, like the one below, then according to Why an RVT model is (sometimes) downloaded as ZIP from BIM 360, you should get a ZIP file that contains the host and all linked RVTs when downloading the host RVT file via GET buckets/wip.dm.prod/objects/XXXX.rvt. Isn't that what you want?
{
"type": "versions",
"id": "urn:adsk.wipprod:fs.file:vf.UTLEaKw?version=4",
"attributes": {
"name": "test.rvt",
"displayName": "test.rvt",
//...
"versionNumber": 4,
"mimeType": "application/vnd.autodesk.r360",
"storageSize": 111297725,
"fileType": "rvt",
"extension": {
"type": "versions:autodesk.bim360:C4RModel",
"version": "1.1.0",
"schema": {
"href": "https://developer.api.autodesk.com/schema/v1/versions/versions:autodesk.bim360:C4RModel-1.1.0"
},
"data": {
"modelVersion": 3,
"isCompositeDesign": true,
"mimeType": "application/vnd.autodesk.r360",
"compositeParentFile": "test.rvt",
//..
"modelType": "multiuser",
//..
"processState": "PROCESSING_COMPLETE",
"extractionState": "SUCCESS",
"splittingState": "NOT_SPLIT",
"reviewState": "NOT_IN_REVIEW",
"revisionDisplayLabel": "4",
"sourceFileName": "test.rvt",
"conformingStatus": "NONE"
}
}
},
"relationships": {
//...
"storage": {
"data": {
"type": "objects",
"id": "urn:adsk.objects:os.object:wip.dm.prod/XXXX.rvt"
},
"meta": {
"link": {
"href": "https://developer.api.autodesk.com/oss/v2/buckets/wip.dm.prod/objects/XXXX.rvt"
}
}
}
}
}

FIWARE - Orion Context Broker as Context Provider

I'm having a hard time understanding how context providers work in the Orion Context Broker.
I followed the examples in the step-by-step guide written by Jason Fox. However, I still do not quite get what happens in the background and how the context broker creates the POST request from the registration. Here is what I am trying to do:
I do have a WeatherStation that provides sensor data for a neighborhood.
{
"id": "urn:ngsi-ld:WeatherStation:001",
"type": "Device:WeatherStation",
"temperature": {
"type": "Number",
"value": 20.5,
"metadata": {}
},
"windspeed": {
"type": "Number",
"value": 60.0,
"metadata": {}
}
}
Now I would like the WeatherStation to be a context provider for all buildings.
{
"id": "urn:ngsi-ld:building:001",
"type": "Building"
}
Here is the registration that I try to use.
{
"id": null,
"description": "Random Weather Conditions",
"provider": {
"http": {
"url": "http://localhost:1026/v2"
},
"supportedForwardingMode": "all"
},
"dataProvided": {
"entities": [
{
"id": "null",
"idPattern": ".*",
"type": "Building",
"typePattern": null
}
],
"attrs": [
"temperature",
"windspeed"
],
"expression": null
},
"status": "active",
"expires": null,
"forwardingInformation": null
}
The context broker accepts both entities and the registration without any error.
Since I have a multi-tenant setup, I use one fiware_service for the complete neighborhood, but every building would later have a separate fiware_servicepath. Hence the weather station has a different service path than the building, although I also tried putting them both on the same path.
For now I used the same headers for all entities.
{
"fiware-service": "filip",
"fiware-servicepath": "/testing"
}
Here is the log of the context broker (version: 3.1.0):
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[212]: Request forwarded (regId: 614cd2b511c25270060d873a): POST http://localhost:1026/v2/op/query, request payload (87 bytes): {"entities":[{"idPattern":".*","type":"Building"}],"attrs":["temperature","windspeed"]}, response payload (2 bytes): [], response code: 200
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[130]: Request received: POST /v2/op/query?options=normalized%2Ccount&limit=1000, request payload (55 bytes): {"entities": [{"idPattern": ".*", "type": "Building"}]}, response code: 200
The log says that it receives the request and forwards it as expected. However, as I understand it, the forward simply points to the same building entity again, so it is somehow circular forwarding. I also cannot tell anything about the headers of the request.
I do not understand how the forwarded request from the building can actually query the weather station for information. When I query my building I still only receive the entity with no properties of its own:
{
"id": "urn:ngsi-ld:building:001",
"type": "Building"
}
I also tried to vary the url of the registration but with no success.
Is this scenario actually possible with the current implementation? It would be very useful.
Is there any example for this including also the headers?
I know that I could simply use a reference, but that would put more work on the user.
Thanks for any help on this.
It is messy, but you could achieve this via a subscription. Hold the weather station as a separate entity in the context broker and poll or push updates into the entity. The subscription would fire whenever the data changes and make two NGSI requests:
Find all entities which have a Relationship servicedBy=WeatherStationX
Run an upsert on all entities to add a Property to each entity:
{
"temperature" : {
"type" : "Property",
"value" : 7,
"unitCode": "CEL",
"observedAt": "XXXXX",
"providedBy": "WeatherStation1"
}
}
Where observedAt comes either from the payload of the weather station or the notification timestamp.
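A sketch of the two requests in NGSI v2 terms, assuming the Building entities carry a servicedBy relationship and reusing the tenant headers from the question:
GET /v2/entities?q=servicedBy==urn:ngsi-ld:WeatherStation:001&attrs=id
fiware-service: filip
fiware-servicepath: /testing

POST /v2/op/update
fiware-service: filip
fiware-servicepath: /testing
{
"actionType": "append",
"entities": [
{
"id": "urn:ngsi-ld:building:001",
"type": "Building",
"temperature": {
"type": "Number",
"value": 7,
"metadata": {
"unitCode": { "type": "Text", "value": "CEL" },
"providedBy": { "type": "Text", "value": "urn:ngsi-ld:WeatherStation:001" }
}
}
}
]
}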
Within the existing IoT Agents, provisioning the link attribute allows a device to propagate measures to a second entity (e.g. this Thermometer entity is measuring temperature for an associated Building entity):
{
"entity_type": "Device",
"resource": "/iot/d",
"protocol": "PDI-IoTA-UltraLight",
..etc
"attributes": [
{"object_id": "l", "name": "temperature", "type":"Float",
"metadata":{
"unitCode":{"type": "Text", "value" :"CEL"}
}
}
],
"static_attributes": [
{
"name": "controlledAsset",
"type": "Relationship",
"value": "urn:ngsi-ld:Building:001",
"link": {
"attributes": ["temperature"],
"name": "providedBy",
"type": "Building"
}
}
]
}
At the moment the logic just links directly one-to-one, but it would be possible to raise a PR to check for an array and update multiple entities in an upsert - the relevant section of code is here.

Enable multi-read regions for an Azure Cosmos DB account only when creating it for the PROD environment (ARM template)

I am creating an Azure Cosmos DB account using an ARM template and want to enable multi-read regions for Cosmos DB only if the environment name is "PROD". I am using the same template across all my other environments.
Any suggestions? Refer to the sample script below: the highlighted location should only be used if my environment name is PROD.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"environmentName": {
"type": "String",
"metadata": {
"description": "dev,dev1,qa,prod,etc"
}
},
-------------------
--------------------
-------------------
"resources": [
{
"type": "Microsoft.DocumentDB/databaseAccounts",
"apiVersion": "2015-04-08",
"name": "[parameters('cosmosDBName')]",
"location": "[resourceGroup().location]",
"tags": {
"Environment": "[parameters('environmentName')]",
"Project": "DevOps",
"CreatedBy": "ARMTemplate",
"description": "Azure Cosmos DBName"
},
"properties": {
"name": "[parameters('cosmosDBName')]",
"databaseAccountOfferType": "[variables('cosmosdbOfferType')]",
"locations": [
{
"locationName": "[resourceGroup().location]",
"failoverPriority": 0
},
**{
"locationName": "Central US",
"failoverPriority": 1
}**
]
}
}
]
There are multiple possibilities here, including logical functions, deployment conditions, or even passing the regions in as a parameter, any of which could help solve your particular situation.
The most flexible option would be to pass the regions into the template via a parameter. This makes the template more reusable in case environment names change or new ones are added; it does not rely on a specific environment name being set and gives more flexibility. You can then easily set up the correct regions to be passed through in your deployment pipeline or parameter files for each environment.
If you don't want to go down that route, there are other options, such as using conditions to selectively deploy a resource with the correct setup. For your example this leads to code duplication, as you would need the resource element twice with different setups and a condition tag determining which one runs. This is not ideal due to the duplication, but might be useful on certain occasions.
Finally, there is the option of using logical functions to generate the locations required. This is similar to passing the regions into the template, except that you generate the required regions with a function. It is slightly less flexible than passing the regions in, but if you really need to derive this from an environment name it is probably the way to go.
I have described each of the options above in a little more detail below, with a few script examples. Please note the examples have not been tested, so there may be typos or minor amendments needed, but they should generally be along the lines of what you need.
Passing regions to the template
Using this method you would have something similar to the following. Add a comma-separated regions list as part of your parameters, e.g.
"regionsList": {
"type": "string",
"defaultValue": "Central US",
"metadata": {
"description": "Comma separated region list"
}
},
You could then set up a variable that generates your list of locations, using the copy function to dynamically set the locations based on the list passed in, e.g.
"variables": {
"regionArray": "[split(parameters('regionsList'), ',')]",
"locations": {
"copy": [
{
"name": "values",
"count": "[length(variables('regionArray'))]",
"input": {
"locationName": "[variables('regionArray')[copyIndex('values')]]",
"failoverPriority": "[copyIndex('values')]"
}
}
]
},
Once that is set up, you only need to reference the variable within the locations property on the resource, e.g.
"locations": "[variables('locations').values]",
So your resource section would look something like this if you go down that route:
"resources": [
{
"type": "Microsoft.DocumentDB/databaseAccounts",
"apiVersion": "2015-04-08",
"name": "[parameters('cosmosDBName')]",
"location": "[resourceGroup().location]",
"tags": {
"Environment": "[parameters('environmentName')]",
"Project": "DevOps",
"CreatedBy": "ARMTemplate",
"description": "Azure Cosmos DBName"
},
"properties": {
"name": "[parameters('cosmosDBName')]",
"databaseAccountOfferType": "[variables('cosmosdbOfferType')]",
"locations": "[variables('locations').values]",
}
} ]
Deploy Conditions
For a conditional deployment you can set a deploy condition on a resource. It works like an if statement: if the value is true the resource is deployed, and if it is false it is not. In your case you would have something like this on the resource section itself.
"condition": "[equals(parameters('environmentName'), 'PROD')]"
As this is at the resource level, though, you would need to list your resource twice with different locations/setup and an opposite condition on each: one with a condition of environment equal to PROD (this one with the additional location), and the other with environment not equal to PROD, as sketched below.
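A sketch of the two opposite conditions, using the equals() and not() template functions - the PROD copy of the resource (the one with the additional location) would carry:
"condition": "[equals(parameters('environmentName'), 'PROD')]",
and the non-PROD copy the inverse:
"condition": "[not(equals(parameters('environmentName'), 'PROD'))]",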
As mentioned above, this causes duplication and is not ideal; I would not go with it unless there are significant differences in the template for the PROD environment, and even then there are better ways.
Logical Functions
Logical functions allow you to transform values before using them, and include things like if statements. In this case you could use them to determine the locations required for your resource based on the environment name passed in. This is similar to passing the regions to the template, but without actually passing them in, and is slightly less flexible because of that.
You could then set up variables that generate your list of locations, using the copy function to dynamically set the locations based on the generated list, e.g.
"variables": {
"regionsList" : "[if(equals(parameters('environmentName'), 'PROD'), 'East US,Central US','East US')]"
"regionArray": "[split(variables('regionsList'), ',')]",
"locations": {
"copy": [
{
"name": "values",
"count": "[length(variables('regionArray'))]",
"input": {
"locationName": "[variables('regionArray')[copyIndex('values')]]",
"failoverPriority": "[copyIndex('values')]"
}
}
]
},
Once that is set up, you only need to reference the variable within the locations property on the resource, e.g.
"locations": "[variables('locations').values]",
So your resource section would look something like this:
"resources": [
{
"type": "Microsoft.DocumentDB/databaseAccounts",
"apiVersion": "2015-04-08",
"name": "[parameters('cosmosDBName')]",
"location": "[resourceGroup().location]",
"tags": {
"Environment": "[parameters('environmentName')]",
"Project": "DevOps",
"CreatedBy": "ARMTemplate",
"description": "Azure Cosmos DBName"
},
"properties": {
"name": "[parameters('cosmosDBName')]",
"databaseAccountOfferType": "[variables('cosmosdbOfferType')]",
"locations": "[variables('locations').values]",
}
} ]
With all of these options it's also good to understand what the features actually do and are for. Please see below for links to the docs on the features mentioned above.
ARM Copy
ARM Logical Functions
ARM Deployment Conditions

Defining query parameters for basic CRUD operations in Loopback

We are using Loopback successfully so far, but we want to add query params to our API documentation.
In our swagger.json file, we might have something that looks like this:
{
"swagger": "2.0",
"info": {
"version": "1.0.0",
"title": "poc-discovery"
},
"basePath": "/api",
"paths": {
"/Users/{id}/accessTokens/{fk}": {
"get": {
"tags": [
"User"
],
"summary": "Find a related item by id for accessTokens.",
"operationId": "User.prototype.__findById__accessTokens",
"parameters": [
{
"name": "fk",
"in": "path",
"description": "Foreign key for accessTokens",
"required": true,
"type": "string",
"format": "JSON"
},
{
"name": "id",
"in": "path",
"description": "User id",
"required": true,
"type": "string",
"format": "JSON"
},
{
"name":"searchText",
"in":"query",
"description":"The Product that needs to be fetched",
"required":true,
"type":"string"
},
{
"name":"ctrCode",
"in":"query",
"description":"The Product locale needs to be fetched. Example=en-GB, fr-FR, etc.",
"required":true,
"type":"string"
}
],
I am 99% certain the swagger.json information gets generated dynamically via information from the .json files in the /server/models directory.
I am hoping that I can add the query params that we accept for each model in those .json files. What I want to avoid is having to modify swagger.json directly.
What is the best approach to add our query params so that they show up in our docs? Very confused as to how to best approach this.
After a few hours of tinkering, I'm afraid there is no straightforward way to achieve this, as the swagger spec generated here is a representation of remoting metadata for model methods, along with model data from the model.json files.
Thus, updating remoting metadata for built-in model methods would be challenging, and it might not be fully supported by the method implementations.
The right approach here, IMO, is to:
- create a remoteMethod wrapper around the built-in method for which you want the additional params injected, with the required HTTP mapping data, and
- disable the REST endpoint for the built-in method using MyModel.disableRemoteMethod(<methodName>, <isStatic>).
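A sketch of both steps in a model script (LoopBack 2/3 style - the model, method and filter mapping are illustrative):
// common/models/product.js
module.exports = function (Product) {
  // Wrapper around the built-in find() that exposes the extra query params
  Product.findWithSearch = function (searchText, ctrCode, cb) {
    // Map the custom params onto a standard LoopBack filter
    Product.find({ where: { name: { like: searchText }, locale: ctrCode } }, cb);
  };

  Product.remoteMethod('findWithSearch', {
    description: 'Find products by search text and locale',
    http: { path: '/search', verb: 'get' },
    accepts: [
      { arg: 'searchText', type: 'string', required: true, http: { source: 'query' } },
      { arg: 'ctrCode', type: 'string', required: true, http: { source: 'query' } }
    ],
    returns: { arg: 'data', type: ['Product'], root: true }
  });

  // Hide the built-in endpoint that the wrapper replaces
  Product.disableRemoteMethod('find', true);
};
The extra params then appear in the generated swagger.json automatically, since the spec is built from the remoting metadata.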