adding nested OPC-UA Variable results in "String cannot be coerced to a nodeId" - fiware

Error: String cannot be coerced to a nodeId
Hi,
I was busy setting up a connection between the Orion Broker and a PLC with an OPC-UA server, using the OPC-UA IoT Agent.
I managed to set up all parts and I am able to receive (test) data, but I am unable to follow the tutorial with regard to adding an entity to the Orion Broker using a JSON file:
curl http://localhost:4001/iot/devices -H "fiware-service: plcservice" -H "fiware-servicepath: /demo" -H "Content-Type: application/json" -d @add_device.json
The expected result would be an entity added to the Orion Broker with the supplied data, but this only results in an error message:
{"name":"Error","message":"String cannot be coerced to a nodeId : ns*4:s*MAIN.mainVar"}
Suspected error
Is it possible that the IoT Agent does not work nicely with nested variables?
Steps taken
double-checked availability of the OPC UA data:
the OPC data changes every second and can be seen in the broker log
reduced the complexity of the setup to include only the Broker and the IoT Agent
Additional information:
add_device.json file:
{
"devices": [
{
"device_id": "plc1",
"entity_name": "PLC1",
"entity_type": "plc",
"attributes": [
{
"object_id": "ns*4:s*MAIN.mainVar",
"name": "main",
"type": "Number"
}
],
"lazy": [
],
"commands" : []
}
]
}
Config of the IoT Agent (from localhost:4081/config):
{
"config": {
"logLevel": "DEBUG",
"contextBroker": {
"host": "orion",
"port": 1026
},
"server": {
"port": 4001,
"baseRoot": "/"
},
"deviceRegistry": {
"type": "memory"
},
"mongodb": {
"host": "iotmongo",
"port": "27017",
"db": "iotagent",
"retries": 5,
"retryTime": 5
},
"types": {
"plc": {
"service": "plcservice",
"subservice": "/demo",
"active": [
{
"name": "main",
"type": "Int16"
},
{
"name": "test1",
"type": "Int16"
},
{
"name": "test2",
"type": "Int16"
}
],
"lazy": [],
"commands": []
}
},
"browseServerOptions": null,
"service": "plc",
"subservice": "/demo",
"providerUrl": "http://iotage:4001",
"pollingExpiration": "200000",
"pollingDaemonFrequency": "20000",
"deviceRegistrationDuration": "P1M",
"defaultType": null,
"contexts": [
{
"id": "plc_1",
"type": "plc",
"service": "plcservice",
"subservice": "/demo",
"polling": false,
"mappings": [
{
"ocb_id": "test1",
"opcua_id": "ns=4;s=test.TestVar.test1",
"object_id": null,
"inputArguments": []
},
{
"ocb_id": "test2",
"opcua_id": "ns=4;s=test.TestVar.test2",
"object_id": null,
"inputArguments": []
},
{
"ocb_id": "main",
"opcua_id": "ns=4;s=MAIN.mainVar",
"object_id": null,
"inputArguments": []
}
]
}
]
}
}

I'm one of the maintainers of the iotagent-opcua repo. We have identified and fixed the bug you were hitting; please update your agent to the latest version (1.4.0).
In case you haven't heard about it yet: starting from 1.3.8 we introduced a new configuration property called "relaxTemplateValidation", which lets you use previously forbidden characters (e.g. = and ;). I suggest you have a look at the configuration examples provided.
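As a rough sketch (the property name comes from the release notes above; the exact configuration file and section it belongs to may differ between agent versions), you would enable the relaxed validation in the agent configuration:
"relaxTemplateValidation": true
and could then provision the device using the real OPC UA node id directly in object_id, e.g.:
"object_id": "ns=4;s=MAIN.mainVar"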

Related

Parse json response from curl in github action

I have the curl request shown below, and I have also attached the response of that curl request. From the response, how can I fetch the value under "status.azure.resource_name" and store it in a variable? I'm new to GitHub Actions and facing some challenges; if this were a programming API I could have resolved it.
Request:
curl --location --request PUT $URL \
--header "$AUTH_HEADER" \
--header 'Content-Type: application/json' \
--data-raw "$PAYLOAD"
Response:
{
"labels": {},
"spec": {
"mysql": {
"version": "8.0",
"sku": {
"name": "GP_Gen5_4"
},
"storage_profile": {
"storage_mb": 5120
}
},
"key_vault": {
"access_policies": [
{
"name": "test",
"type": "group",
"project": null
}
]
}
},
"type": "azure-mysql",
"name": "mysql",
"id": "1234",
"created_at": "2012-03-04T10:00:05+00:00",
"updated_at": "2012-03-04T10:00:05+00:00",
"project": {
"id": "ae3dfa99",
"name": "Test",
"url": "www.google.com",
"geography": "in"
},
"links": {
"key_vault": {
"endpoint": {
"url": "",
"description": "test.",
"display_name": "test"
},
"azure_portal": {
"url": "test",
"description": "Link to the resource in the Azure portal.",
"display_name": "Key Vault Azure Portal"
}
},
"azure_portal": {
"url": "test",
"description": "Link to the resource in the Azure portal.",
"display_name": "Azure Portal"
},
"endpoint": {
"url": "test",
"description": "Azure resource endpoint.",
"display_name": "test"
}
},
"url": "test",
"tags": {
"cost_center_id": "471000",
"customer": "internal",
"product_group": "internal",
"environment_type": "test",
"budget_category": "",
"team": ""
},
"spiffe_id": "test",
"status": {
"ready": false,
"state": "reconciling",
"deployment": {
"steps": {}
},
"azure": {
"resource_name": "test",
"id": null,
"subscription_id": "test",
"resource_group": "test"
},
"key_vault": {
"access_policies": []
}
}
}
There are multiple ways to achieve that.
You can use jq directly and read the variable in a bash step, as sketched below.
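For example, a minimal sketch of the jq approach (assuming the response is written to response.json; the step id and output name are illustrative):
- name: call API and extract resource name
  id: extract
  run: |
    # call the API and save the JSON response locally
    curl --location --request PUT "$URL" \
      --header "$AUTH_HEADER" \
      --header 'Content-Type: application/json' \
      --data-raw "$PAYLOAD" -o response.json
    # pull the nested property out with jq and expose it as a step output
    echo "resource_name=$(jq -r '.status.azure.resource_name' response.json)" >> "$GITHUB_OUTPUT"
- run: echo "${{ steps.extract.outputs.resource_name }}"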
Alternatively, you can use one of the existing open-source actions from the marketplace:
- name: get nested property
  id: format_script
  uses: notiz-dev/github-action-json-property@release
  with:
    path: 'yourjson.json'
    prop_path: 'status.azure.resource_name'
- run: echo ${{ steps.format_script.outputs.prop }}
Or download just one property from the whole JSON if you don't need the file itself:
- uses: senmu/download-json-property-action@v1.0.0
  with:
    url: 'https://httpbin.org/json'
    property_path: status.azure.resource_name

Fiware Context Provider hello world not working

I'm trying to register a Context Provider as the source for several KPIs.
So far, it seems the registration might be working, as GET http://{{orion}}/v2/registrations returns something similar to what I set at creation time:
{
// on each registration a new id is returned: "id": "60991a887032541f4539a71d",
"description": "City Inhabitants",
"dataProvided": {
"entities": [
{
"id": "city.inhabitants"
//,"type": "KeyPerformanceIndicator"
}
]
},
"provider": {
"http": {
"url": "http://myhost/v2/inhabitants"
}
//, "legacyForwarding": false //perhaps there's a bug, cause although unset or set to false, broker still returns true.
}
}
However GET http://{{orion}}/v2/entities/city.inhabitants results in:
{
"error": "BadRequest",
"description": "Service not found. Check your URL as probably it is wrong."
}
GET http://{{orion}}/v2/entities?type=KeyPerformanceIndicator returns [] and the context provider is not being invoked in any way.
I'm coding a Node.js application from scratch to better understand what's going on (https://github.com/FIWARE/tutorials.Context-Providers), and I'm using the tutorial containers as the actual FIWARE broker, so all the stores/shelves/products are working, but not my KPI.
Using the tutorial application, where a microservice is listening on the http://context-provider:3000/random/weatherConditions POST endpoint, the following registration:
curl -L -X POST 'http://localhost:1026/v2/registrations' \
-H 'Content-Type: application/json' \
--data-raw '{
"description": "Get Weather data for KPI 1",
"dataProvided": {
"entities": [
{
"id" : "urn:ngsi-ld:KPI:010",
"type": "KeyPerformanceIndicator"
}
],
"attrs": [
"temperature", "relativeHumidity"
]
},
"provider": {
"http": {
"url": "http://context-provider:3000/random/weatherConditions"
},
"legacyForwarding": false
},
"status": "active"
}'
will forward and retrieve data for the following request:
curl -L -X GET 'http://localhost:1026/v2/entities/urn:ngsi-ld:KPI:010'
or
curl -L -X GET 'http://localhost:1026/v2/entities/?type=KeyPerformanceIndicator'
Returns:
{
"id": "urn:ngsi-ld:KPI:010",
"type": "KeyPerformanceIndicator",
"temperature": {
"type": "Number",
"value": 11,
"metadata": {}
},
"relativeHumidity": {
"type": "Number",
"value": 39,
"metadata": {}
}
}
The context broker forwards the request to the http://context-provider:3000/random/weatherConditions POST endpoint with the following payload:
{
"entities": [
{
"id": "urn:ngsi-ld:KPI:010",
"type": "KeyPerformanceIndicator"
}
],
"attrs": [
"temperature",
"relativeHumidity"
]
}
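The context provider is then expected to reply with the requested entities in NGSI-v2 format, roughly along these lines (a sketch; the values are illustrative):
[
  {
    // illustrative values only
    "id": "urn:ngsi-ld:KPI:010",
    "type": "KeyPerformanceIndicator",
    "temperature": { "type": "Number", "value": 11 },
    "relativeHumidity": { "type": "Number", "value": 39 }
  }
]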

Openshift: Unresolved Image

I'm stuck with OpenShift (Origin) and need some help.
Let's say I want to add a Grafana deployment via the CLI to a newly started cluster.
What I do:
Upload a template to my OpenShift cluster (oc create -f openshift-grafana.yml)
Pull the necessary image from Docker Hub (oc import-image --confirm grafana/grafana)
Build a new app based on my template (oc new-app grafana)
These steps create the deployment config and the routes.
But then I'm not able to start a deployment via the CLI.
# oc deploy grafana
grafana deployment #1 waiting on image or update
# oc rollout latest grafana
Error from server (BadRequest): cannot trigger a deployment for "grafana" because it contains unresolved images
In the OpenShift web console, the image is there and even the link is working. There I can click "deploy" and it works. But nevertheless I'm not able to roll out a new version via the command line.
The only way it works is editing the deployment YAML so that OpenShift recognizes a change and starts a deployment based on "config change" (hint: I was not changing the image or image name).
There is nothing special in my template; it was just an export via oc export from a working config.
Any hint would be appreciated, I'm pretty much stuck.
Thanks.
I had this same issue and I solved it by adding:
lastTriggeredImage: >-
  mydockerrepo.com/repo/myimage@sha256:xxxxxxxxxxxxxxxx
under:
triggers:
  - type: ImageChange
    imageChangeParams:
of the DeploymentConfig YAML. It looks like if it doesn't know what the last triggered image is, it won't be able to resolve it.
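Putting the pieces together, the relevant trigger section of the DeploymentConfig would look roughly like this (a sketch only; the container, image-stream and repository names are taken from the question and the template below and will differ in your setup):
triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
        - grafana
      from:
        kind: ImageStreamTag
        name: grafana-img:latest
      # illustrative digest; use the image your stream actually resolved
      lastTriggeredImage: >-
        mydockerrepo.com/repo/myimage@sha256:xxxxxxxxxxxxxxxx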
Included below is the template you can use as a starter. Just be aware that the Grafana image appears to require being run as root, else it will not start up. This means you have to override the default security model of OpenShift and allow running images as root in the project. This is not recommended. The Grafana images should be fixed so that they do not require being run as root.
To enable running as root, you would need to run as a cluster admin:
oc adm policy add-scc-to-user anyuid -z default -n myproject
where myproject is the name of the project you are using.
I applied it to the default service account, but it would be better to create a separate service account, apply it to that, and then change the template so that only Grafana runs as that service account.
It is possible that the intent is that you override the default settings through the grafana.ini file so that it uses your mounted emptyDir directories, in which case it isn't an issue. I didn't attempt to provide any override config.
The template for Grafana would then be as follows. Note that I have used JSON because I find it easier to work with, and also to avoid indentation getting mangled and making YAML impossible to use.
Before you use this template, you should obviously create the corresponding config map, whose name is of the form ${APPLICATION_NAME}-config, where ${APPLICATION_NAME} is grafana unless you override it when using the template. The key in the config map should be grafana.ini, with the config file contents as its value.
{
"apiVersion": "v1",
"kind": "Template",
"metadata": {
"name": "grafana"
},
"parameters": [
{
"name": "APPLICATION_NAME",
"value": "grafana",
"from": "[a-zA-Z0-9]",
"required": true
}
],
"objects": [
{
"apiVersion": "v1",
"kind": "ImageStream",
"metadata": {
"name": "${APPLICATION_NAME}-img",
"labels": {
"app": "${APPLICATION_NAME}"
}
},
"spec": {
"tags": [
{
"name": "latest",
"from": {
"kind": "DockerImage",
"name": "grafana/grafana"
}
}
]
}
},
{
"apiVersion": "v1",
"kind": "DeploymentConfig",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"replicas": 1,
"selector": {
"app": "${APPLICATION_NAME}",
"deploymentconfig": "${APPLICATION_NAME}"
},
"template": {
"metadata": {
"labels": {
"app": "${APPLICATION_NAME}",
"deploymentconfig": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"containers": [
{
"name": "grafana",
"image": "${APPLICATION_NAME}-img:latest",
"imagePullPolicy": "Always",
"livenessProbe": {
"failureThreshold": 3,
"httpGet": {
"path": "/",
"port": 3000,
"scheme": "HTTP"
},
"periodSeconds": 10,
"successThreshold": 1,
"timeoutSeconds": 1
},
"ports": [
{
"containerPort": 3000,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/etc/grafana",
"name": "grafana-1"
},
{
"mountPath": "/var/lib/grafana",
"name": "grafana-2"
},
{
"mountPath": "/var/log/grafana",
"name": "grafana-3"
}
]
}
],
"volumes": [
{
"configMap": {
"defaultMode": 420,
"name": "${APPLICATION_NAME}-config"
},
"name": "grafana-1"
},
{
"emptyDir": {},
"name": "grafana-2"
},
{
"emptyDir": {},
"name": "grafana-3"
}
]
}
},
"test": false,
"triggers": [
{
"type": "ConfigChange"
},
{
"imageChangeParams": {
"automatic": true,
"containerNames": [
"grafana"
],
"from": {
"kind": "ImageStreamTag",
"name": "${APPLICATION_NAME}-img:latest"
}
},
"type": "ImageChange"
}
]
}
},
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"ports": [
{
"name": "3000-tcp",
"port": 3000,
"protocol": "TCP",
"targetPort": 3000
}
],
"selector": {
"deploymentconfig": "${APPLICATION_NAME}"
},
"type": "ClusterIP"
}
},
{
"apiVersion": "v1",
"kind": "Route",
"metadata": {
"name": "${APPLICATION_NAME}",
"labels": {
"app": "${APPLICATION_NAME}",
"type": "monitoring"
}
},
"spec": {
"host": "",
"port": {
"targetPort": "3000-tcp"
},
"to": {
"kind": "Service",
"name": "${APPLICATION_NAME}",
"weight": 100
}
}
}
]
}
For me, the image name under 'from' was incorrect:
triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
        - alcatraz-ha
      from:
        kind: ImageStreamTag
        name: 'alcatraz-haproxy:latest'
        namespace: alcatraz-ha-dev
    type: ImageChange
I had name: 'alcatraz-ha:latest' so it could not find the image
Make sure that spec.triggers.imageChangeParams.from.name exists as an image stream:
triggers:
  - imageChangeParams:
      from:
        kind: ImageStreamTag
        name: 'myapp:latest' # Does "myapp" exist if you run oc get is ????!!!
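To check, you can list the image streams in the project; the output is illustrative only and the columns vary by OpenShift version:
$ oc get is
NAME      IMAGE REPOSITORY                          TAGS      UPDATED
myapp     172.30.1.1:5000/myproject/myapp           latest    2 hours ago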

Bluemix Workload Scheduler process creation REST API does not accept queryParameters and headers in restfulStep

I am using the Bluemix Workload Scheduler REST API to create processes with a Scheduled Trigger that has a oneTimeProperty and a startDate.
Additionally, the JSON I am sending also has a restfulStep.
The issue I have is that no matter how I provide the "queryParameters" and "headers" for the restfulStep, they are not accepted/configured in the process after the successful process creation.
Here is the JSON I am using:
{
"name": "my process name",
"processlibraryid": 1234,
"processstatus": true,
"triggers": [
{
"name": "Scheduled Trigger",
"triggerType": "OnceTrigger",
"oneTimeProperty": {
"startDate": "TIMEVALUE"
}
}
],
"steps": [
{
"restfulStep": {
"agent": "AGENTNAME}",
"action": {
"uri": "MYCUSTOMURL",
"contentType": "application/json",
"method": "POST",
"verifyHostname": true,
"queryParameters": [
["param1", "value1"],
["param2", "value2"]
],
"headers": [
["param3", "param4"]
],
"numberOfRetries": 3,
"retryIntervalSeconds": 30
},
"authdata": {
"username": "USERNAME",
"password": "PASSWORD"
},
"input": {
"input": "",
"isFile": false
}
}
}
]
}
The issue has been fixed with the last Workload Scheduler upgrade.
Could you try using a JSON like the following?
{
"name": "myname",
"processlibraryid": <1234>,
"processstatus": false,
"triggers": [
{
"name": "Scheduled Trigger",
"triggerType": "OnceTrigger",
"oneTimeProperty": {
"startDate": "2016-12-16T10:30:43.218Z"
}
}
],
"steps": [
{
"restfulStep": {
"agent": "<MY_AGENT_NAME>",
"action": {
"uri": "<MY_URL>",
"contentType": "application/json",
"method": "GET",
"verifyHostname": true,
"queryParameters": [
["param1", "value1"],
["param2", "value2"]
],
"headers": [
["Accept", "application/json"],
["User-Agent", "Mozilla/5.0 "]
],
"numberOfRetries": 3,
"retryIntervalSeconds": 30
},
"authdata": {
"username": "USERNAME",
"password": "PASSWORD"
},
"input": {
"input": "",
"isFile": false
}
}
}
]
}
Regards
Andrea I
Your JSON is correct, but there is a little bug in the Workload Scheduler service.
A fix will be released by the end of December.
As a workaround, you could use Application Lab to create your RESTful step. In addition, you could append the queryParameters to your URI address (see the sketch below).
At the moment, there is no workaround for headers.
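For example, the query-parameter workaround would make the action look roughly like this (a sketch reusing the placeholder URL and parameter names from the question):
"uri": "MYCUSTOMURL?param1=value1&param2=value2",
"queryParameters": []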
If you find other issues while using the service, do not hesitate to post your comments.
Thanks!
Andrea I

Orion JSON Bad Request

I'm currently trying to set up a subscription connecting Orion and Cosmos. All data sent to Orion is being updated without any issue. But when posting to http://xxx.xxx.xx.xx:1026/v1/subscribeContext I'm getting the following error:
{
"subscribeError": {
"errorCode": {
"code": "400",
"reasonPhrase": "Bad Request",
"details": "JSON Parse Error"
}
}
}
This is the JSON string I'm sending:
{
"entities": [
{
"type": "Location",
"isPattern": "false",
"id": "Device-1"
}
],
"reference": "http://52.31.144.170:5050/notify",
"duration": "PT10S",
"notifyConditions": [
{
"type": "ONCHANGE",
"condValues": [
"position"
]
}
],
"attributes": [
"position"
]
}
The entity that updates correctly in Orion is:
{
"type": "Location",
"isPattern": "false",
"id": "Device-1",
"attributes": [
{
"name": "position",
"type": "coords",
"value": "24,21",
"metadatas": [
{
"name": "location",
"type": "string",
"value": "WGS84"
}
]
},
{
"name": "id",
"type": "device",
"value": "1"
}
]
}
I've tried many different examples from Read the Docs and other answers on Stack Overflow, without success.
Which is the correct format? Should I call /subscribeContext before or after updating Orion with /contextEntities?
The Orion Context Broker version is 0.26.1.
Thank you in advance.
Taking into account that the same payload works fine when sent using curl (see this execution session), I tend to think that some issue in the client (maybe hidden by a programming framework?) is causing the problem.
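For reference, a minimal curl invocation of that endpoint would look roughly like this (a sketch assuming the subscription payload above is saved locally as subscription.json and the broker runs on localhost):
# send the NGSIv1 subscription payload from a local file
curl http://localhost:1026/v1/subscribeContext \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d @subscription.json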