Fiware Orion MQTT notification not working (anymore) - fiware

I don’t know where to look anymore; maybe someone has an idea what’s going wrong?
I created an MQTT subscription on my Orion Context Broker:
{
  "description": "Subscription to notify of all WaterQualityObserved changes",
  "subject": {
    "entities": [
      {
        "idPattern": ".*",
        "type": "WaterQualityObserved"
      }
    ],
    "condition": {
      "attrs": []
    }
  },
  "notification": {
    "mqtt": {
      "url": "mqtt://127.0.0.1:1883",
      "topic": "water-quality-observed-changed"
    }
  }
}
I have both my Orion Context Broker and Mosquitto MQTT broker running locally in Docker containers.
I get this when listing the subscriptions in my Orion CB:
[
  {
    "id": "633bf12fe929777b6a60242b",
    "description": "MQTT subscription to notify of all WaterQualityObserved changes",
    "status": "active",
    "subject": {
      "entities": [
        {
          "idPattern": ".*",
          "type": "WaterQualityObserved"
        }
      ],
      "condition": {
        "attrs": []
      }
    },
    "notification": {
      "timesSent": 3,
      "lastNotification": "2022-10-04T08:47:55.000Z",
      "attrs": [],
      "onlyChangedAttrs": false,
      "attrsFormat": "normalized",
      "mqtt": {
        "url": "mqtt://127.0.0.1:1883",
        "topic": "water-quality-observed-changed",
        "qos": 0
      },
      "lastFailure": "2022-10-04T08:47:55.000Z",
      "failsCounter": 3,
      "covered": false
    }
  }
]
As you can see, “timesSent” increases when I PATCH the entity, but “failsCounter” increases with it and “lastFailure” matches “lastNotification”, so every notification attempt seems to fail.
The strange thing is it worked before!
Any idea what I’m doing wrong?
Thanks.
Guy

The "The strange thing is it worked before!" sentence make me think it has to do with connectivity between container. I'd suggest to review all the involved connectivity (Orion -> MQTT broker, MQTT broker -> your MQTT subscriber). If that doesn't help, a re-deploy of all the docker containers could help.

Related

ARM template for Data Factory connector in Logic Apps with Managed Identity

I have a Logic App that uses the Azure Data Factory action "Create a pipeline run" that works perfectly.
This is what the Logic App looks like:
The authentication method to Azure Data Factory that I use is "System assigned" managed identity.
After creating and testing the Logic App, I now want to create an ARM template to save it in the code repository for deployment. However, I'm struggling to get the authentication part of the ARM template to work: I'm not sure what the syntax should be, and I can't find anything in the Microsoft documentation.
In the Logic App resource I have added:
"identity": {
"type": "SystemAssigned"
}
This is how the connections part of the Logic app resource looks like:
"$connections": {
"value": {
"azuredatafactory": {
"connectionId": "[parameters('connections_azuredatafactory_externalid')]",
"connectionName": "[parameters('connections_azuredatafactory_name')]",
"connectionProperties": {
"authentication": {
"type": "ManagedServiceIdentity"
}
},
"id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/francecentral/managedApis/azuredatafactory')]"
}
}
}
And this is what the connector resource looks like (I think I'm missing something here):
{
  "type": "Microsoft.Web/connections",
  "apiVersion": "2016-06-01",
  "name": "[parameters('connections_azuredatafactory_name')]",
  "location": "francecentral",
  "kind": "V1",
  "properties": {
    "displayName": "[parameters('connections_azuredatafactory_displayname')]",
    "alternativeParameterValues": {},
    "parameterValueSet": {
      "name": "managedIdentityAuth",
      "values": {}
    },
    "statuses": [
      {
        "status": "Ready"
      }
    ],
    "api": {
      "id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/francecentral/managedApis/azuredatafactory')]"
    }
  }
}
The error message I get when trying to deploy this through Visual Studio 2022 is:
Template deployment returned the following errors:
Resource Microsoft.Logic/workflows 'logic-d365-dwh-01-ip-dev-rxlse' failed with message '{
  "error": {
    "code": "WorkflowManagedIdentityConfigurationInvalid",
    "message": "The workflow connection parameter 'azuredatafactory' is not valid. The API connection 'azuredatafactory' is not configured to support managed identity."
  }
}'
Anyone who knows what the problem could be?
1) I created an Azure Logic App with 3 actions (HTTP request, create ADF pipeline run, response).
Here is the reference image:
2) Then, to connect to ADF, I used a system-assigned managed identity and gave the Logic App access to create pipelines in ADF.
Here is the reference image:
Then I tested it in the portal and it succeeded.
Then I exported the ARM template and downloaded it.
Then, in Visual Studio, I created a new project of type Azure Resource Group and edited logicapp.json and the Logic App parameters file based on the template.
Then I deployed it and it succeeded.
ARM template code I used for reference:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workflows_so1LP_name": {
      "defaultValue": "so1LP",
      "type": "String"
    },
    "connections_azuredatafactory_1_externalid": {
      "defaultValue": "/subscriptions/<subscription-id>/resourceGroups/so1/providers/Microsoft.Web/connections/azuredatafactory-1",
      "type": "String"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.Logic/workflows",
      "apiVersion": "2017-07-01",
      "name": "[parameters('workflows_so1LP_name')]",
      "location": "centralus",
      "identity": {
        "type": "SystemAssigned"
      },
      "properties": {
        "state": "Enabled",
        "definition": {
          "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
          "contentVersion": "1.0.0.0",
          "parameters": {
            "$connections": {
              "defaultValue": {},
              "type": "Object"
            }
          },
          "triggers": {
            "manual": {
              "type": "Request",
              "kind": "Http",
              "inputs": {}
            }
          },
          "actions": {
            "Create_a_pipeline_run": {
              "runAfter": {},
              "type": "ApiConnection",
              "inputs": {
                "host": {
                  "connection": {
                    "name": "@parameters('$connections')['azuredatafactory_1']['connectionId']"
                  }
                },
                "method": "post",
                "path": "/subscriptions/@{encodeURIComponent('<subscription id>')}/resourcegroups/@{encodeURIComponent('so1')}/providers/Microsoft.DataFactory/factories/@{encodeURIComponent('sodf1')}/pipelines/@{encodeURIComponent('sopipeline')}/CreateRun",
                "queries": {
                  "x-ms-api-version": "2017-09-01-preview"
                }
              }
            },
            "Response": {
              "runAfter": {
                "Create_a_pipeline_run": [
                  "Succeeded"
                ]
              },
              "type": "Response",
              "kind": "Http",
              "inputs": {
                "statusCode": 200
              }
            }
          },
          "outputs": {}
        },
        "parameters": {
          "$connections": {
            "value": {
              "azuredatafactory_1": {
                "connectionId": "[parameters('connections_azuredatafactory_1_externalid')]",
                "connectionName": "azuredatafactory-1",
                "connectionProperties": {
                  "authentication": {
                    "type": "ManagedServiceIdentity"
                  }
                },
                "id": "/subscriptions/<subscription-id>/providers/Microsoft.Web/locations/centralus/managedApis/azuredatafactory"
              }
            }
          }
        }
      }
    }
  ],
  "outputs": {}
}
Here is the reference image:
NOTE: I am using a free subscription, so I don't have any restrictions; in your case you may have restrictions that are causing the issue.
A second reason may be the direction of the managed identity access: check that you granted access both from the Logic App to ADF and from ADF to the Logic App. Skipping one of the two grants can produce this error during ARM template deployment, so grant access in both directions.
Here are some images for reference for logic app to ADF:
Go to "access control" of logic app.
Select owner as role.
Select managed identity as data factory.
Here are some images for reference for ADF to logic app:
Go to "access control" of data factory.
Select owner as role.
Select managed identity as logic app.
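The same two grants can be scripted with the Azure CLI instead of the portal; a sketch, where the principal IDs, resource group and resource names are placeholders:

# Grant the Logic App's system-assigned identity the Owner role on ADF
az role assignment create \
  --assignee-object-id <logic-app-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory-name>"

# And the reverse grant, from ADF's identity onto the Logic App
az role assignment create \
  --assignee-object-id <adf-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Owner" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Logic/workflows/<logic-app-name>"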
Did you try using "parameterValueType": "Alternative" instead of "parameterValueSet"?
{
  "type": "Microsoft.Web/connections",
  "apiVersion": "2016-06-01",
  "name": "[parameters('connections_azuredatafactory_name')]",
  "location": "francecentral",
  "kind": "V1",
  "properties": {
    "displayName": "[parameters('connections_azuredatafactory_displayname')]",
    "customParameterValues": {},
    "parameterValueType": "Alternative",
    "api": {
      "id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/francecentral/managedApis/azuredatafactory')]"
    }
  }
}
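If Visual Studio keeps failing, it may also be worth deploying the same files from the Azure CLI to rule out tooling issues; a sketch with placeholder names:

az deployment group create \
  --resource-group <resource-group> \
  --template-file logicapp.json \
  --parameters @logicapp.parameters.json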

Failed to retrieve function source code when deploying a cloud function from a repository on a different project

I am trying to deploy a Cloud Function from a Cloud Source Repository located in a different project, but I am getting the following error: Failed to retrieve function source code (see the full proto below).
Project-A contains the cloud function and service accounts listed below.
Project-B contains the source repository.
I have successfully deployed the function on Project-B.
I've tried giving the following service accounts the Source Repository Administrator role on the cloud source repository, but that did not help.
{project_A_number}@cloudservices.gserviceaccount.com
{project_A_number}-compute@developer.gserviceaccount.com
{project_A_number}@cloudbuild.gserviceaccount.com
Project-A@appspot.gserviceaccount.com
I have also tried disabling the Cloud Functions API on Project-A and turning it back on again.
I am not sure what is going wrong - if anyone has a clue as to where to further look, I would appreciate it - thanks in advance!
The deployment creates two entries in monitoring - a NOTICE followed by an ERROR:
The ERROR log:
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {
      "code": 5,
      "message": "Failed to retrieve function source code"
    },
    "authenticationInfo": {
      "principalEmail": "***@***.**"
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "resourceName": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs"
  },
  "insertId": "-vmfbt4cd54",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "function_name": "pubsub-to-gcs",
      "region": "europe-west1",
      "project_id": "Project-A"
    }
  },
  "timestamp": "2021-10-20T12:21:45.352043Z",
  "severity": "ERROR",
  "logName": "projects/Project-A/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cm9ldHotbGlmZS1kYXRhLXRlc3QvZXVyb3BlLXdlc3QxL3B1YnN1Yi10by1nY3MvVEhFbUQtLTZITWM",
    "producer": "cloudfunctions.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2021-10-20T12:21:45.781856467Z"
}
The NOTICE log (logged right before the ERROR):
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "authenticationInfo": {
      "principalEmail": "***@****.**"
    },
    "requestMetadata": {
      "callerIp": "35.205.252.75",
      "callerSuppliedUserAgent": "google-cloud-sdk gcloud/360.0.0 command/gcloud.functions.deploy invocation-id/917d697431e84b91bfa2bd9f9cc4f302 environment/devshell environment-version/None interactive/True from-script/False python/3.7.3 term/screen (Linux 5.4.144+),gzip(gfe),gzip(gfe)",
      "requestAttributes": {
        "time": "2021-10-20T12:21:44.909430Z",
        "auth": {}
      },
      "destinationAttributes": {}
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "authorizationInfo": [
      {
        "resource": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
        "permission": "cloudfunctions.functions.update",
        "granted": true,
        "resourceAttributes": {}
      }
    ],
    "resourceName": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
    "request": {
      "@type": "type.googleapis.com/google.cloud.functions.v1.UpdateFunctionRequest",
      "function": {
        "timeout": "60s",
        "status": "UNKNOWN",
        "serviceAccountEmail": "Project-A@appspot.gserviceaccount.com",
        "availableMemoryMb": 256,
        "name": "projects/Project-A/locations/europe-west1/functions/pubsub-to-gcs",
        "runtime": "python39",
        "labels": {
          "deployment-tool": "cli-gcloud"
        },
        "entryPoint": "pubsub-to-gcs",
        "updateTime": "2021-10-20T12:21:40.149Z",
        "sourceRepository": {
          "url": "https://source.developers.google.com/projects/Project-B/repos/my-repo/moveable-aliases/master/paths/my-folder"
        },
        "httpsTrigger": {},
        "ingressSettings": "ALLOW_ALL",
        "versionId": "1"
      },
      "updateMask": "eventTrigger,httpsTrigger,runtime,sourceRepository"
    },
    "resourceLocation": {
      "currentLocations": [
        "europe-west1"
      ]
    }
  },
  "insertId": "1xdbim3e16pgu",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "function_name": "pubsub-to-gcs",
      "region": "europe-west1",
      "project_id": "Project-A"
    }
  },
  "timestamp": "2021-10-20T12:21:44.650257Z",
  "severity": "NOTICE",
  "logName": "projects/Project-A/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cm9ldHotbGlmZS1kYXRhLXRlc3QvZXVyb3BlLXdlc3QxL3B1YnN1Yi10by1nY3MvVEhFbUQtLTZITWM",
    "producer": "cloudfunctions.googleapis.com",
    "first": true
  },
  "receiveTimestamp": "2021-10-20T12:21:45.832588036Z"
}
Turns out it wasn't an IAM issue: I had tried deploying the function from the UI, but that's not possible when deploying from a source repo in a different project.
Deploying using gcloud functions deploy solved the issue.
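For reference, a sketch of the CLI deployment, reconstructed from the NOTICE log above (names and flags mirror the question and may need adjusting):

gcloud functions deploy pubsub-to-gcs \
  --project Project-A \
  --region europe-west1 \
  --runtime python39 \
  --trigger-http \
  --source https://source.developers.google.com/projects/Project-B/repos/my-repo/moveable-aliases/master/paths/my-folder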

FIWARE - Orion Context Broker as Context Provider

I'm having a hard time understanding how context providers work in the Orion Context Broker.
I followed the examples in the step-by-step guide written by Jason Fox. However, I still do not exactly understand what happens in the background and how the context broker creates the POST request from the registration. Here is what I am trying to do:
I do have a WeatherStation that provides sensor data for a neighborhood.
{
  "id": "urn:ngsi-ld:WeatherStation:001",
  "type": "Device:WeatherStation",
  "temperature": {
    "type": "Number",
    "value": 20.5,
    "metadata": {}
  },
  "windspeed": {
    "type": "Number",
    "value": 60.0,
    "metadata": {}
  }
}
Now I would like the WeatherStation to be a context provider for all buildings.
{
  "id": "urn:ngsi-ld:building:001",
  "type": "Building"
}
Here is the registration that I am trying to use:
{
  "id": null,
  "description": "Random Weather Conditions",
  "provider": {
    "http": {
      "url": "http://localhost:1026/v2"
    },
    "supportedForwardingMode": "all"
  },
  "dataProvided": {
    "entities": [
      {
        "id": "null",
        "idPattern": ".*",
        "type": "Building",
        "typePattern": null
      }
    ],
    "attrs": [
      "temperature",
      "windspeed"
    ],
    "expression": null
  },
  "status": "active",
  "expires": null,
  "forwardingInformation": null
}
The context broker accepts both entities and the registration without any error.
Since I have a multi-tenant setup, I use one fiware_service for the complete neighborhood, but every building would later have a separate fiware_servicepath. Hence, the weather station has a different servicepath than the building, although I also tried putting them both on the same path.
For now I used the same headers for all entities.
{
  "fiware-service": "filip",
  "fiware-servicepath": "/testing"
}
Here is the log of the context broker (version: 3.1.0):
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[212]: Request forwarded (regId: 614cd2b511c25270060d873a): POST http://localhost:1026/v2/op/query, request payload (87 bytes): {"entities":[{"idPattern":".*","type":"Building"}],"attrs":["temperature","windspeed"]}, response payload (2 bytes): [], response code: 200
INFO#2021-09-23T19:17:17.944Z logTracing.cpp[130]: Request received: POST /v2/op/query?options=normalized%2Ccount&limit=1000, request payload (55 bytes): {"entities": [{"idPattern": ".*", "type": "Building"}]}, response code: 200
The log says that it receives the request and forwards it as expected. However, as I understand it, the forwarded query simply points to the same Building entity again, so the forwarding is circular. I also cannot tell anything about the headers of the forwarded request.
I do not understand how the forwarded request from the building can actually query the weather station for information. When I query my building, I still only receive the entity with no properties of its own:
{
  "id": "urn:ngsi-ld:building:001",
  "type": "Building"
}
I also tried varying the url of the registration, but with no success.
Is this scenario actually possible with the current implementation? It would be very useful.
Is there any example for this that also includes the headers?
I know that I could simply use a reference, but that would put more work on the user.
Thanks for any help on this.
It is messy, but you could achieve this via a subscription. Hold the weather station as a separate entity in the context broker and poll or push updates into the entity. The subscription would fire whenever the data changes and make two NGSI requests:
Find all entities which have a Relationship servicedBy=WeatherStationX
Run an upsert on all entities to add a Property to each entity:
{
  "temperature": {
    "type": "Property",
    "value": 7,
    "unitCode": "CEL",
    "observedAt": "XXXXX",
    "providedBy": "WeatherStation1"
  }
}
Where observedAt comes either from the payload of the weather station or the notification timestamp.
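A minimal sketch of those two requests with curl, assuming an NGSI-LD broker (e.g. Orion-LD) on localhost:1026, a servicedBy Relationship on each Building, and illustrative entity ids and values:

# 1. Find all Buildings serviced by the weather station
curl -G 'http://localhost:1026/ngsi-ld/v1/entities' \
  --data-urlencode 'type=Building' \
  --data-urlencode 'q=servicedBy=="urn:ngsi-ld:WeatherStation:001"'

# 2. Upsert the Property onto every Building returned
curl -X POST 'http://localhost:1026/ngsi-ld/v1/entityOperations/upsert?options=update' \
  -H 'Content-Type: application/json' \
  -d '[{"id": "urn:ngsi-ld:Building:001", "type": "Building",
        "temperature": {"type": "Property", "value": 7, "unitCode": "CEL",
                        "observedAt": "2021-09-23T19:17:17.944Z"}}]'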
Within the existing IoT Agents, provisioning the link attribute allows a device to propagate measures to a second entity (e.g. this Thermometer entity is measuring temperature for an associated Building entity)
{
  "entity_type": "Device",
  "resource": "/iot/d",
  "protocol": "PDI-IoTA-UltraLight",
  ..etc
  "attributes": [
    {
      "object_id": "l",
      "name": "temperature",
      "type": "Float",
      "metadata": {
        "unitCode": {"type": "Text", "value": "CEL"}
      }
    }
  ],
  "static_attributes": [
    {
      "name": "controlledAsset",
      "type": "Relationship",
      "value": "urn:ngsi-ld:Building:001",
      "link": {
        "attributes": ["temperature"],
        "name": "providedBy",
        "type": "Building"
      }
    }
  ]
}
At the moment the logic just links directly one-to-one, but it would be possible to raise a PR to check for an Array and update multiple entities in an upsert - the relevant section of code is here.

Google App Engine ERROR Throttling refreshCfg with Gcloud MySQL instance

Continuous failed connection attempt errors are occurring in Google Cloud MySQL running on Google App Engine with a public IP.
These are some of the logs:
receiveTimestamp resource.labels.module_id resource.labels.project_id resource.labels.version_id resource.labels.zone resource.type severity textPayload timestamp
2021-06-08T05:48:43.497385728Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 80.802µs ago 2021-06-08T05:48:43.494284Z
2021-06-08T05:19:08.394840567Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 42.519µs ago 2021-06-08T05:19:08.391909Z
2021-06-08T05:13:42.889911567Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 73.279µs ago 2021-06-08T05:13:42.888659Z
2021-06-08T04:47:07.470804269Z zzzzzzzzzz xxxxxxx 4 eu5 gae_app ERROR Throttling refreshCfg(xxxxxxx:europe-west1:yyyyyyyyy): it was only called 85.928µs ago 2021-06-08T04:47:07.467377Z
I tried several different configurations of max_connections, pool_size and pool_timeout, with no success.
I have consulted this previous issue.
And this documentation.
Some help would be appreciated.
More information: the error is always preceded by this NOTICE entry in the log record:
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {},
"authenticationInfo": {
"principalEmail": "bbbbbbbb#appspot.gserviceaccount.com",
"serviceAccountDelegationInfo": [
{
"firstPartyPrincipal": {
"principalEmail": "app-engine-appserver#prod.google.com"
}
}
]
},
"requestMetadata": {
"callerIp": "2600:1900:2001:12::8",
"requestAttributes": {
"time": "2021-06-09T05:59:27.400680Z",
"auth": {}
},
"destinationAttributes": {}
},
"serviceName": "cloudsql.googleapis.com",
"methodName": "cloudsql.instances.connect",
"authorizationInfo": [
{
"resource": "instances/aaaaaaaaaaa",
"permission": "cloudsql.instances.connect",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "instances/aaaaaaaaa",
"request": {
"project": "bbbbbbb",
"#type": "type.googleapis.com/google.cloud.sql.v1beta4.SqlInstancesCreateEphemeralCertRequest",
"instance": "zzzzzzzzz",
"body": {}
},
"response": {
"#type": "type.googleapis.com/google.cloud.sql.v1beta4.SslCert",
"kind": "sql#sslCert"
}
},
"insertId": "-rgtsssssssssss",
"resource": {
"type": "cloudsql_database",
"labels": {
"region": "europe-west1",
"project_id": "bbbbbbbb",
"database_id": "aaaaaaaaaaaaaaa"
}
},
"timestamp": "2021-06-09T05:59:27.381352Z",
"severity": "NOTICE",
"logName":
"projects/demosmf/logs/cloudaudit.googleapis.com%2Factivity",
"receiveTimestamp": "2021-06-09T05:59:27.746071609Z"
I think it has something to do with the management of SSL certificates.
I have verified that the application certificates are valid and have not expired.
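For reference, one way to list the instance's client certificates and their expiry from the CLI (a sketch; the instance name is a placeholder):

gcloud sql ssl-certs list --instance=<instance-name>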
This error has been reported via Google's Public Issue Tracker.
You can follow that thread to track progress.

Fiware Orion Context Broker: Restrictions on subscriptions

Does Orion support restrictions on subscriptions? E.g., I want to receive context updates only when temperature > 30.
That functionality is not implemented in NGSIv1, but it is planned for NGSIv2 (see the condition field in "Subscriptions" in the NGSIv2 draft specification). However, it has not yet been implemented in the latest Orion version at the time of writing (0.25.0).
EDIT: this functionality has finally been implemented in Orion 0.27.0, e.g.:
POST /v2/subscriptions
...
{
  "subject": {
    "entities": [
      {
        "idPattern": ".*",
        "type": "device"
      }
    ],
    "condition": {
      "attributes": [ "temperature" ],
      "expression": {
        "q": "temperature>30"
      }
    }
  },
  "notification": {
    "callback": "http://foo.bar:5050/notify",
    "attributes": []
  },
  "expires": "2050-04-05T14:00:00.00Z"
}
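For completeness, the subscription above can be created with curl against a local Orion (host and port are assumptions):

# Save the payload above as subscription.json, then:
curl -X POST 'http://localhost:1026/v2/subscriptions' \
  -H 'Content-Type: application/json' \
  -d @subscription.json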