I have the following configuration for an active lightweightm2m-iotagent attribute (a temperature sensor value). FIWARE's IoT Agent turns IPSO objects into lazy attributes by default, but I added a mapping to make it an active attribute, as described in the documentation:
types: {
    'Type': {
        service: 'service',
        subservice: '/service',
        commands: [],
        lazy: [],
        active: [
            {
                "name": "t",
                "type": "number"
            }
        ],
        lwm2mResourceMapping: {
            "t": {
                "objectType": 3303,
                "objectInstance": 0,
                "objectResource": 5700
            }
        }
    },
According to the documentation for the iotagent-node-lib:
NGSI queries to the context broker will be resolved in the Broker database.
However, when I query my active attribute in Orion, Orion also queries the lightweightm2m-iotagent, requesting a bogus /3303/0/0 path which doesn't even exist in the IPSO definition.
curl -H "Fiware-service: service" -H "Fiware-servicepath: /service" http://172.17.0.1:1026/v2/entities/entity1:Type/attrs/t/value
How can I set up the configuration to get the behavior stated in the documentation, resolving a query for an active attribute in the broker database and avoiding these bogus queries?
Maybe the IoT Agent is not recognizing the active attribute as such, and it may be related to the static configuration of the types via "config.js"; this kind of configuration is not commonly used and may contain some errors (probably including the one you have found). Please try provisioning the device through the API instead, as explained in: https://github.com/telefonicaid/lightweightm2m-iotagent/blob/master/docs/deviceProvisioning.md. If that works, then maybe we should flag the static attribute configuration as buggy.
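As a minimal sketch of what such a provisioning call might look like for this case (the agent's northbound host/port and the placement of the LWM2M mapping under internal_attributes are assumptions based on the guide linked above, so double-check them against your deployment):
curl -X POST 'http://localhost:4041/iot/devices' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: service' \
  -H 'fiware-servicepath: /service' \
  -d '{
    "devices": [
        {
            "device_id": "entity1",
            "entity_name": "entity1:Type",
            "entity_type": "Type",
            "attributes": [
                { "name": "t", "type": "number" }
            ],
            "internal_attributes": {
                "lwm2mResourceMapping": {
                    "t": { "objectType": 3303, "objectInstance": 0, "objectResource": 5700 }
                }
            }
        }
    ]
}'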
I'm setting up an InnoDB Cluster using mysqlsh. This is in Kubernetes, but I think this question applies more generally.
When I use cluster.configureInstance() I see messages that includes:
This instance reports its own address as node-2:3306
However, the nodes can only find each other through DNS at an address like node-2.cluster:3306. The problem comes when adding instances to the cluster; they try to find the other nodes without the qualified name. Errors are of the form:
[GCS] Error on opening a connection to peer node node-0:33061 when joining a group. My local port is: 33061.
It is using node-n:33061 rather than node-n.cluster:33061.
If it matters, the "DNS" is set up as a headless service in Kubernetes that provides consistent addresses as pods come and go. It's very simple, and I named it "cluster" to create addresses of the form node-n.cluster. I don't want to cloud this question with detail I don't think matters, however, as surely other configurations require the instances in the cluster to use DNS as well.
I thought that setting localAddress when creating the cluster and adding the nodes would solve the problem. Indeed, after I added that to the createCluster options, I can look in the database and see
| group_replication_local_address | node-0.cluster:33061 |
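For reference, the calls looked roughly like this in mysqlsh (cluster and account names here are illustrative, not my exact values):
var cluster = dba.createCluster('mycluster', {localAddress: 'node-0.cluster:33061'});
cluster.addInstance('clusteradmin@node-1.cluster:3306', {localAddress: 'node-1.cluster:33061'});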
After I create the cluster and look at the topology, it seems that the local address setting has no effect whatsoever:
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "node-0:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "node-0:3306": {
                "address": "node-0:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.29"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "node-0:3306"
}
And adding more instances continues to fail with the same communication errors.
How do I convince each instance that the address it needs to advertise is different? I will try other permutations of the localAddress setting, but it doesn't look like it's intended to fix the problem I'm having. How do I reconcile the address the instance reports for itself with the address that's actually useful for other instances to find it?
Edit to add: Maybe it is a Kubernetes thing? Or a Docker thing at any rate. There is an environment variable set in the container:
HOSTNAME=node-0
Does the containerized MySQL use that? If so, how do I override it?
Apparently this value has to be set at startup. For my setup, passing the option
--report-host=${HOSTNAME}.cluster
when starting the MySQL instances resolved the issue.
Specifically for Kubernetes, an example is at https://github.com/adamelliotfields/kubernetes/blob/master/mysql/mysql.yaml
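As one possible sketch of wiring this up in a StatefulSet (image, names and the headless-service suffix "cluster" are illustrative; this uses the downward API instead of interpolating ${HOSTNAME} in a shell wrapper), the pod name can be exposed as an env var and appended to the flag:
containers:
  - name: mysql
    image: mysql/mysql-server:8.0
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    args:
      - "--report-host=$(POD_NAME).cluster"  # expands to e.g. --report-host=node-0.cluster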
I'm playing around with FIWARE Orion, IoT Agent JSON, and IoT Agent OPC UA. I'm wondering: since all the IoT Agents connect to Orion and map different IoT protocols into NGSI, is it possible for devices using different protocols to communicate with each other without adding any additional application logic?
Let's consider an MQTT device A and an OPC UA server B. For example, is it possible for:
B to report its measurements to the Orion Context Broker and A to subscribe to that attribute? Something like
B-->IoT Agent OPC UA-->Orion-->IoT Agent JSON-->mosquitto-->A
(I tried to make a context provider registration. However, the URL of the B entity attributes (orion:1026/v2/B/attrs/XXX) obviously doesn't work, since Orion will send a POST to orion:1026/v2/B/attrs/XXX/op/query, which doesn't exist, and the provided attribute is not provisioned at the IoT Agent JSON... I feel like I'm taking the totally wrong direction.)
A and B to access the same entity and report their measurements to that entity in Orion? Since A and B both need their own IoT Agents, the same entity cannot be provisioned at each agent because of duplication...
Is it a really bad idea to try to mix devices from several protocols into one entity? Thank you so much for answering my doubts in advance!
Each NGSI entity should map to something that has state in the real world. In your case your data models should be based on Device and you should have a separate OPC-UA-based Device entity and a second separate JSON Device entity. These are the low level entities within your system, which would hold readings from the IoT Devices and would also hold additional data (such as battery-level or a link to the documentation or whatever).
If you want to model the state of a second aggregated entity then you can do that as well - just subscribe to changes in context in the device and Upsert the values and metadata over to the other entity.
curl --location --request POST 'http://localhost:1027/v2/subscriptions/' \
--header 'Content-Type: application/json' \
--header 'fiware-service: openiot' \
--data-raw '{
    "description": "Notify Subscription Listener of Lamp context changes",
    "subject": {
        "entities": [
            {
                "idPattern": "Lamp.*"
            }
        ],
        "condition": {
            "attrs": ["luminosity"]
        }
    },
    "notification": {
        "http": {
            "url": "http://tutorial:3000/device/subscription/luminosity"
        },
        "attrs": ["luminosity", "controlledAsset", "supportedUnits"]
    },
    "throttling": 5
}'
Sample code to do the work from the listening endpoint ( /device/subscription/luminosity ) can be found here - it is a tutorial which is still a work-in-progress, so the full documentation is currently missing.
// Handler for the subscription notifications: for each entity in the payload,
// upsert a device "shadow" entity and, if the watched attribute is present,
// copy its value over to the linked entity (e.g. the controlled asset).
function shadowDeviceMeasures(req, res) {
    const attrib = req.params.attrib;

    async function copyAttributeData(device, index) {
        await upsertDeviceEntityAsLD(device);
        if (device[attrib]) {
            await upsertLinkedAttributeDataAsLD(device, 'controlledAsset', attrib);
        }
    }

    // Fire-and-forget over every entity in the notification, then acknowledge.
    req.body.data.forEach(copyAttributeData);
    res.status(204).send();
}
The point here is that you can (and should) think of data entities at different levels -
I have a thermometer Device - it is sending temperature readings. It has a temperature attribute
I have a Building - it has a thermometer in it - the Building has a temperature attribute and metadata.providedBy linking back to the Device
Depending on your use case you may only need to consider entities at one layer or you may need to use both.
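As an illustrative sketch of that second layer (entity IDs, attribute values and the service header are hypothetical), the Building entity could be created with a temperature attribute whose providedBy metadata links back to the Device:
curl --location --request POST 'http://localhost:1027/v2/entities' \
--header 'Content-Type: application/json' \
--header 'fiware-service: openiot' \
--data-raw '{
    "id": "urn:ngsi-ld:Building:001",
    "type": "Building",
    "temperature": {
        "type": "Number",
        "value": 21.5,
        "metadata": {
            "providedBy": {
                "type": "Relationship",
                "value": "urn:ngsi-ld:Device:thermometer001"
            }
        }
    }
}'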
I live in an enterprise environment where most of our production domains are currently non-routable (e.g. .local).
I tried extending the schema, but the non-routable domain cannot be verified, and I don't think the default .onmicrosoft domain can be either. My enterprise allows me to easily create subdomains, so I attached and verified one for testing purposes and ran into the same verified-domain error.
Per the documentation, I should be able to either use the ID of my domain name or just the schema name and get 8 random alpha characters added. Neither approach works in this case.
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
    "id": "idmdomain.sub.domain.net_Owners",
    "description": "Owners of the group",
    "targetTypes": [
        "Group"
    ],
    "properties": [
        {
            "name": "PrimaryOwners",
            "type": "String"
        },
        {
            "name": "SecondaryOwners",
            "type": "String"
        }
    ]
}
Message Received:
{
    "code": "BadRequest",
    "message": "Your organization must own the namespace idmdomain.sub.domain.net as a part of one of the verified domains.",
    "request-id": "1c7363f9-d54b-408a-8b29-2c0d2a94280a",
    "date": "2018-03-22T21:47:22"
}
From the documentation:
If you already have a vanity .com,.net, .gov, .edu or a .org domain that you have verified with your tenant, you can use the domain name along with the schema name to define a unique name, in this format {domainName}_{schemaName}.
For example, if your vanity domain is contoso.com, you can define an id of, contoso_mySchema. This is the preferred option.
So in your example, idmdomain.sub.domain.net_Owners should simply be domain_Owners. It shouldn't include idmdomain, sub, net, or any . characters.
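For example, assuming contoso.com were one of your verified domains, the request would only change the id (everything else stays as in your original call):
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
    "id": "contoso_Owners",
    "description": "Owners of the group",
    "targetTypes": [
        "Group"
    ],
    "properties": [
        {
            "name": "PrimaryOwners",
            "type": "String"
        },
        {
            "name": "SecondaryOwners",
            "type": "String"
        }
    ]
}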
Thank you Marc for pointing me in the correct direction. Even though my app had the correct delegated permissions (Directory.AccessAsUser.All), I now understand that I needed to execute this change in the user context instead of the application context, as application-only is not supported.
For those that come behind me: {domainName}_{schemaName} works if you validate your domain; if you don't, and you just use the schema name, then the generated prefix works as documented. I recommend reviewing the two links below, as they were what finally unlocked the puzzle for me.
This helped me understand how it works (authentication vs authorization):
https://developer.microsoft.com/en-us/graph/docs/concepts/rest
This helped me set up Postman to quickly validate:
https://blogs.msdn.microsoft.com/softwaresimian/2017/10/05/using-postman-to-call-the-graph-api-using-azure-active-directory-aad/
I should add, for the Postman route, a few changes...
Auth URL
https://login.microsoftonline.com/yourtenantid/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL
https://login.microsoftonline.com/yourtenantid/oauth2/token
Scope = Directory.AccessAsUser.All
I need to modify a registered device in IoTAgent UltraLight. By modify I mean adding some attributes and deleting others.
I also want to update the entity in Orion CB.
Is it possible to do that? How can I do that?
IoTAs (and the IoTA library in general) expose a north provisioning interface for device creation. The core idea is that when you provision a device in the IoTA (directly, or through the IoTA Manager), an entity is automatically created in the Context Broker. This north provisioning interface also allows retrieval, removal and update.
That being said, the south interface of the IoTAs is designed to only accept measures and command execution results from the devices. Thus, if a new attribute comes into play and you provide values for that new attribute via an IoTA, a new attribute will not be appended at the Context Broker; this information will simply be dropped.
In order to accept data regarding new attributes, first you'll have to use the above-mentioned provisioning interface of the IoTAs, in particular the update device operation, in order to provision such new attribute; that will automatically append a new attribute to the entity at Context Broker level. From then on, values for the new attribute sent to the IoTA will be updated in the Context Broker.
Such an update request looks like:
PUT http://iota_host:iota_port/iot/devices/<dev_id>?protocol=<protocol_type>
Fiware-Service: <service>
Fiware-ServicePath: <subservice>
{
    "entity_type": <entity_type>,
    "attributes": [ <new_active_attrs_if_any> ],
    "lazy": [ <new_lazy_attrs_if_any> ],
    "commands": [ <new_commands_if_any> ],
    "static_attributes": [ <new_static_attrs_if_any> ]
}
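As an illustrative example (host, port, protocol identifier, service headers and the new attribute are hypothetical values, not taken from your setup), appending a new active attribute humidity to a device dev001 could look like:
curl -X PUT 'http://localhost:4041/iot/devices/dev001?protocol=PDI-IoTA-UltraLight' \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: myservice' \
  -H 'Fiware-ServicePath: /myservicepath' \
  -d '{
    "attributes": [
        { "object_id": "h", "name": "humidity", "type": "Number" }
    ]
}'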
Sadly, already existing attributes cannot be removed, for the time being.
TL;DR
How do I migrate to the upcoming 0.8-style ACL permissions for nodes and services while running Consul 0.7.3?
Current setup
I am currently running an ACL-enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that (Consul) nodes/services are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new ACL behaviour "pre 0.8" using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled, and I have configured an acl_master_token:
"{acl_master_token":"<token>}"
and a token for the agents:
"{acl_token":"<token>}"
which all agents use / are configured with.
I have these ACL defaults:
{
    "acl_datacenter": "stable",
    "acl_default_policy": "deny",
    "acl_down_policy": "deny"
}
and my Consul config looks like this:
{
    "datacenter": "stable",
    "data_dir": "/consul/data",
    "ui": true,
    "dns_config": {
        "allow_stale": false
    },
    "log_level": "INFO",
    "node_name": "dwconsul",
    "client_addr": "0.0.0.0",
    "server": true,
    "bootstrap": true,
    "acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, nor can the nodes/agents register with the master.
Question
What is exactly needed to get the following:
All agents can see all nodes and all services and all KVs
Anonymous sees nothing: no KV, services or nodes (that's what becomes possible with 0.8)
I looked at https://www.consul.io/docs/internals/acl.html "ACL Changes Coming in Consul 0.8" but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path with this particular interest; a lot of people are interested in this, and you would help all of them :)
It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
On your Consul servers configure them with an acl_agent_token that has a policy that can write to any node, like this: node "" { policy = "write" }.
On your Consul agents, configure them with a similar one to the servers to keep things open, or you can give them a token with a more specific policy that only lets them write to some allowed prefix.
Note this gets set as the acl_agent_token which is used for internal registration operations. The acl_agent_master_token is used as kind of an emergency token to use the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
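As a rough sketch using the legacy ACL HTTP API (endpoint and fields as in pre-1.4 Consul; the token values here are placeholders), you could create such an agent token with the master token and then reference the returned ID in the server/agent config:
curl -X PUT 'http://127.0.0.1:8500/v1/acl/create?token=<acl_master_token>' \
  -d '{
    "Name": "agent-token",
    "Type": "client",
    "Rules": "node \"\" { policy = \"write\" }"
  }'
The call returns an ID, which then goes into the configuration as
{"acl_agent_token": "<returned_id>"}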
For "all agents can see all nodes and all services and all KVs" you'd add node read privileges to whatever token you are giving to your agents via the acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so you want to be careful with what you bind to (usually only loopback). Or don't set acl_token at all and make callers pass in a token with each request.
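Again as a sketch with the legacy ACL API (same placeholders as above), the read-only token for the agents could be created like this and then either set as acl_token or handed to callers per request:
curl -X PUT 'http://127.0.0.1:8500/v1/acl/create?token=<acl_master_token>' \
  -d '{
    "Name": "agent-read-token",
    "Type": "client",
    "Rules": "node \"\" { policy = \"read\" } service \"\" { policy = \"read\" } key \"\" { policy = \"read\" }"
  }'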