Modify device - IoTAgentUL - fiware

I need to modify a registered device in the IoT Agent UltraLight. By modify I mean adding some attributes and deleting others.
I also want to update the entity in Orion CB.
Is it possible to do that? How can I do that?

IoTAs (and the IoTA library in general) expose a north provisioning interface for device creation. The core idea is that when you provision a device in the IoTA (directly, or through the IoTA Manager), an entity is automatically created in the Context Broker. That north provisioning interface allows for retrieval, removal and update, too.
That being said, the south interface of the IoTAs is designed to accept only measures and command execution results from the devices. Thus, if a new attribute comes into play and you provide values for it via an IoTA, the new attribute will not be appended at the Context Broker; that information will simply be dropped.
In order to accept data regarding new attributes, you'll first have to use the above-mentioned provisioning interface of the IoTAs, in particular the update device operation, to provision the new attribute; that will automatically append a new attribute to the entity at Context Broker level. From then on, values for the new attribute sent to the IoTA will be updated in the Context Broker.
Such an update request looks like:
PUT http://iota_host:iota_port/iot/devices/<dev_id>?protocol=<protocol_type>
Fiware-Service: <service>
Fiware-ServicePath: <subservice>
{
  "entity_type": <entity_type>,
  "attributes": [ <new_active_attrs_if_any> ],
  "lazy": [ <new_lazy_attrs_if_any> ],
  "commands": [ <new_commands_if_any> ],
  "static_attributes": [ <new_static_attrs_if_any> ]
}
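For instance, a hypothetical call appending a new active attribute to an already provisioned device could look like this (the device id sensor01, north port 4041, protocol label, service headers and the attribute mapping are illustrative values, not taken from the question):
curl -X PUT 'http://iota_host:4041/iot/devices/sensor01?protocol=PDI-IoTA-UltraLight' \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: myservice' \
  -H 'Fiware-ServicePath: /mysubservice' \
  -d '{
    "attributes": [
      { "object_id": "h", "name": "humidity", "type": "Number" }
    ]
  }'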
Sadly, for the time being, already existing attributes cannot be removed.

Related

CAS Multifactor Authentication Provider Selection

I am working with the cas-overlay-template project, version 6.1.4. I have implemented two MFA providers on my CAS, Google Authenticator and CAS Simple. Both are working; I have tested them separately and got the results I expected.
Until now, I have been activating MFA by modifying the cas.properties file and adding this property: cas.authn.mfa.globalProviderId=mfa-gauth when I wanted to use Google, or cas.authn.mfa.globalProviderId=mfa-simple when I used CAS itself.
Well, the CAS documentation mentions that it is possible to enable a provider-selection menu, if more than one provider is resolved, just by adding this property: cas.authn.mfa.provider-selection-enabled=true. So, my configuration is the following:
cas.authn.mfa.provider-selection-enabled=true
cas.authn.mfa.globalProviderId=mfa-gauth
cas.authn.mfa.globalProviderId=mfa-simple
But when I try to log in with any user (I'm using the default one, casuser:Mellon), CAS doesn't show me a menu in which I can select the MFA provider; it goes directly to the mfa-simple provider.
What am I doing wrong?
Well, the CAS documentation mentions that it is possible to enable a provider-selection menu, if more than one provider is resolved, just by adding this property:
So far so good.
So, my configuration is the following:
That's the problem. You are not resolving/triggering more than one provider. You start with mfa-gauth and then override it with mfa-simple. In CAS 6.1.x, globalProviderId only accepts a single identifier; it's not a list or a container of any kind that accepts more than one value. This has been addressed for the upcoming release.
At the moment, to resolve more than one provider you will need to assign the MFA providers to a registered service definition, like so:
{
  "@class": "org.apereo.cas.services.RegexRegisteredService",
  "serviceId": "^(https|imaps)://.*",
  "name": "Example",
  "id": 1,
  "description": "This service definition defines a service.",
  "evaluationOrder": 1,
  "multifactorPolicy": {
    "@class": "org.apereo.cas.services.DefaultRegisteredServiceMultifactorPolicy",
    "multifactorAuthenticationProviders": [ "java.util.LinkedHashSet", [ "mfa-duo", "mfa-gauth" ] ]
  }
}
This means provider selection can be enabled on a per-application basis. Alternatively, you can write a small Groovy script to return more than one provider back to CAS, allowing the selection menu to display the corresponding menu items.
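If you are not using the JSON service registry yet, a sketch of the wiring for the definition above (the path is illustrative, and it assumes the cas-server-support-json-service-registry module is on the classpath; the file itself follows the name-id.json naming convention, e.g. Example-1.json):
cas.service-registry.json.location=file:/etc/cas/services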
Read this post for full details.

Can IoT device with different protocols use IoT Agents to "talk" with each other?

I'm playing around with FIWARE Orion, IoT Agent JSON, and IoT Agent OPC UA. I'm wondering: since all the IoT Agents connect to Orion and map different IoT protocols into NGSI, is it possible for devices using different protocols to communicate with each other without adding any additional application logic?
Let's consider an MQTT device A and an OPC UA server B. For example, is it possible that:
B reports its measurements to the Orion Context Broker, and A subscribes to that attribute? Something like:
B-->IoT Agent OPC UA-->Orion-->IoT Agent JSON-->mosquitto-->A
(I tried to make a context provider registration. However, the URL of the B entity attributes (orion:1026/v2/B/attrs/XXX) obviously doesn't work, since Orion will send a POST to orion:1026/v2/B/attrs/XXX/op/query, which doesn't exist, and the provided attribute is not provisioned at the IoT Agent JSON... I feel like I'm heading in totally the wrong direction.)
A and B access the same entity and report their measurements to that entity in Orion? Since A and B both need their own IoT Agents, the same entity cannot be provisioned at each agent because it would be a duplicate...
Is it a super bad idea to try to mix devices of several protocols into one entity? Thank you so much for answering my doubts in advance!
Each NGSI entity should map to something that has state in the real world. In your case your data models should be based on Device, and you should have a separate OPC UA-based Device entity and a second, separate JSON Device entity. These are the low-level entities within your system, which would hold readings from the IoT devices and would also hold additional data (such as battery level, a link to the documentation, or whatever).
If you want to model the state of a second, aggregated entity then you can do that as well - just subscribe to changes of context in the device and upsert the values and metadata over to the other entity.
curl --location --request POST 'http://localhost:1027/v2/subscriptions/' \
--header 'Content-Type: application/json' \
--header 'fiware-service: openiot' \
--data-raw '{
  "description": "Notify Subscription Listener of Lamp context changes",
  "subject": {
    "entities": [
      {
        "idPattern": "Lamp.*"
      }
    ],
    "condition": {
      "attrs": ["luminosity"]
    }
  },
  "notification": {
    "http": {
      "url": "http://tutorial:3000/device/subscription/luminosity"
    },
    "attrs": ["luminosity", "controlledAsset", "supportedUnits"]
  },
  "throttling": 5
}'
Sample code to do the work from the listening endpoint (/device/subscription/luminosity) can be found here - it is a tutorial which is still a work in progress, so the full documentation is currently missing.
// Handler for the subscription notifications: for each device in the
// notification payload, mirror its reading onto the entity it is linked to.
function shadowDeviceMeasures(req, res) {
    // The attribute to shadow comes from the route, e.g. /device/subscription/luminosity
    const attrib = req.params.attrib;
    async function copyAttributeData(device) {
        // Upsert the low-level Device entity itself ...
        await upsertDeviceEntityAsLD(device);
        // ... and, if this notification carries the attribute, copy its value
        // over to the entity referenced by the controlledAsset relationship.
        if (device[attrib]) {
            await upsertLinkedAttributeDataAsLD(device, 'controlledAsset', attrib);
        }
    }
    req.body.data.forEach(copyAttributeData);
    res.status(204).send();
}
The point here is that you can (and should) think of data entities at different levels -
I have a thermometer Device - it is sending temperature readings. It has a temperature attribute
I have a Building - it has a thermometer in it - the Building has a temperature attribute and metadata.providedBy linking back to the Device
Depending on your use case you may only need to consider entities at one layer or you may need to use both.
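As a sketch of those two layers in NGSI v2 (entity ids, values and attribute names are invented for illustration):
{
  "id": "urn:ngsi-ld:Device:thermometer001",
  "type": "Device",
  "temperature": { "type": "Number", "value": 21.5 },
  "controlledAsset": {
    "type": "Relationship",
    "value": "urn:ngsi-ld:Building:store001"
  }
}
{
  "id": "urn:ngsi-ld:Building:store001",
  "type": "Building",
  "temperature": {
    "type": "Number",
    "value": 21.5,
    "metadata": {
      "providedBy": { "type": "Relationship", "value": "urn:ngsi-ld:Device:thermometer001" }
    }
  }
}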

HasMany relation entry point unauthorized on Loopback

New to Loopback, I tried to make a simple API with a user model and a todo model.
The user model, named Todoer, is based on the built-in User model. Creating a todoer, login, logout, etc. all work like a charm.
The Todo model is based on PersistedModel with no special ACLs on it for the moment.
I made a Belongs To relation from the Todo model to the Todoer model to have ownership.
I also made a HasMany relation from Todoer to Todo, to be able to retrieve all the todos of a user through the endpoint GET /Todoer/{id}/todos.
With a todoer logged in, with the right token and id, I can easily get responses from Todoer endpoints reserved for logged-in users, like GET /Todoer/{id} for example, so I'm sure the authentication mechanism is working well.
But each time I hit GET /Todoer/{id}/todos, I only obtain an error message telling me I'm not authorized, even though I'm sure I passed the right token and the Todoer id obtained at login.
Even if I add a blanket ACL allowing everything to everyone on the Todoer model, the same thing happens.
What did I miss? I can't figure it out...
Thank you for your help...
You need to take into account the ACLs of the built-in User model. You are actually running into its general DENY ACL rule, which takes priority over your general ALLOW ACL rule (docs on ACL rule precedence).
You can write a more specific ACL rule to get past it (docs on accessing related models).
{
  "accessType": "*",
  "principalType": "ROLE",
  "principalId": "$everyone",
  "permission": "ALLOW",
  "property": "__get__todos"
}
Another option, which in this case might be more convenient and safer, is to use the dynamic $owner role on Todo itself (docs on dynamic roles).
{
  "accessType": "*",
  "principalType": "ROLE",
  "principalId": "$owner",
  "permission": "ALLOW"
}
If you want to understand what is happening regarding ACLs in your application, it's very useful to set the DEBUG environment variable to loopback:security:* to enable quite extensive security logging.
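For example, when starting the application from the project root (assuming the standard node . entry point of a LoopBack project):
DEBUG=loopback:security:* node .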

Consul 0.8 ACL migration - how to migrate

TL;DR
How do I migrate my pre-0.8 ACL permissions on a 0.7.3 stack to the new 0.8-style enforcement?
Current setup
I am currently running an ACL enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that nodes / services (in Consul) are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new ACLs "pre-0.8" using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled, and I have configured an acl_master_token:
{"acl_master_token": "<token>"}
and a token for the agents:
{"acl_token": "<token>"}
which all agents use / are configured with.
I have these ACL defaults:
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny"
}
and my Consul config looks like this:
{
  "datacenter": "stable",
  "data_dir": "/consul/data",
  "ui": true,
  "dns_config": {
    "allow_stale": false
  },
  "log_level": "INFO",
  "node_name": "dwconsul",
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap": true,
  "acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, and the nodes/agents cannot register at the master.
Question
What exactly is needed to get the following:
All agents can see all nodes and all services and all KVs
Anonymous sees nothing: no KV, no services, no nodes (that's what becomes possible with 0.8)
I looked at https://www.consul.io/docs/internals/acl.html ("ACL Changes Coming in Consul 0.8") but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path with this particular interest; a lot of people are interested in this. You help all of them :)
It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
On your Consul servers, configure an acl_agent_token with a policy that can write to any node, like this: node "" { policy = "write" }.
On your Consul agents, configure a token similar to the servers' to keep things open, or give them a token with a more specific policy that only lets them write to some allowed prefix.
Note this gets set as the acl_agent_token which is used for internal registration operations. The acl_agent_master_token is used as kind of an emergency token to use the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
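Put together, the ACL-related part of a server config could look something like this (token values are placeholders; the agent token must carry the node-write policy above):
{
  "acl_datacenter": "stable",
  "acl_master_token": "<master-token>",
  "acl_agent_token": "<agent-token-with-node-write-policy>",
  "acl_enforce_version_8": true
}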
For "all agents can see all nodes and all services and all KVs" you'd add node read privileges to whatever token you are giving to your agents via the acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so you want to be careful what you bind to (usually only loopback). Or don't set acl_token at all and make callers pass a token with each request.
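For reference, a token carrying those read rules can be minted with the legacy (pre-1.4) ACL endpoint of that era, along these lines (master token and rule string are illustrative):
curl -X PUT 'http://127.0.0.1:8500/v1/acl/create?token=<acl_master_token>' \
  -d '{
    "Name": "agent-read-token",
    "Type": "client",
    "Rules": "node \"\" { policy = \"read\" } service \"\" { policy = \"read\" } key \"\" { policy = \"read\" }"
  }'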

HTTP ReST: update large collections: better approach than JSON PATCH?

I am designing a web service to regularly receive updates to lists. At this point, a list can still be modeled as a single entity (/lists/myList) or as an actual collection with many resources (/lists/myList/entries/<ID>). The lists are large (millions of entries) and the updates are small (often fewer than 10 changes).
The client will get web service URLs and lists to distribute, e.g.:
http://hostA/service/lists: list1, list2
http://hostB/service/lists: list2, list3
http://hostC/service/lists: list1, list3
It will then push lists and updates as configured. It is likely but undetermined if there is some database behind the web service URLs.
I have been researching and it seems an HTTP PATCH using the JSON Patch format is the best approach.
Context and examples:
Each list has an identifying name, a priority and millions of entries. Each entry has an ID (determined by the client) and several optional attributes. Example to create a list "requiredItems" with priority 1 and two list entries:
PUT /lists/requiredItems
Content-Type: application/json
{
  "priority": 1,
  "entries": {
    "1": {
      "color": "red",
      "validUntil": "2016-06-29T08:45:00Z"
    },
    "2": {
      "country": "US"
    }
  }
}
For updates, the client would first need to know what the list currently looks like on the server. For this I would add a "revision" property to the list entity.
Then, I would query this attribute:
GET /lists/requiredItems?property=revision
Then the client would see what needs to change between the revision on the server and the latest revision known by the client and compose a JSON patch. Example:
PATCH /lists/requiredItems
Content-Type: application/json-patch+json
[
  { "op": "test", "path": "/revision", "value": 3 },
  { "op": "add", "path": "/entries/3", "value": { "color": "blue" } },
  { "op": "remove", "path": "/entries/1" },
  { "op": "remove", "path": "/entries/2/country" },
  { "op": "add", "path": "/entries/2/color", "value": "green" },
  { "op": "replace", "path": "/revision", "value": 10 }
]
Questions:
This approach has the drawback of slightly less client support, due to the rarely used HTTP verb PATCH. Is there a more compatible approach without sacrificing HTTP semantics (idempotency, et cetera)?
Modelling the individual list entries as separate resources and using PUT and DELETE (perhaps with ETag and/or If-Match) seems an option (PUT /lists/requiredItems/entries/3, DELETE /lists/requiredItems/entries/1, PUT /lists/requiredItems/revision), but how would I make sure all those operations are applied when the network drops in the middle of an update chain? Is an HTTP PATCH allowed to work on multiple resources?
Is there a better way to 'version' the lists, perhaps implicitly also improving how they are updated? Note that the client determines the revision number.
Is it correct to query the revision number with GET /lists/requiredItems?property=revision? Should it be a separate resource like /lists/requiredItems/revision? If it should be a separate resource, how would I update it atomically (i.e. the list and revision are both updated or both not updated)?
Would it work in JSON patch to first test the revision value to be 3 and then update it to 10 in the same patch?
This approach has the drawback of slightly less client support, due to the rarely used HTTP verb PATCH.
As far as I can tell, PATCH is really only appropriate if your server is acting like a dumb document store, where the action is literally "please update your copy of the document according to the following description".
So if your resource really just is a JSON document that describes a list with millions of entries, then JSON-Patch is a great answer.
But if you are expecting that the patch will, as a side effect, update an entity in your domain, then I'm suspicious.
Is an HTTP PATCH allowed to work on multiple resources?
RFC 5789:
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources
I'm not keen on querying the revision number; it doesn't seem to have any clear advantage over an ETag/If-Match approach. Some obvious disadvantages: the caches between you and the client don't know that the list and the version number are related; a cache will happily tell a client that version 12 of the list is version 7, or vice versa.
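A minimal sketch of that alternative (ETag values invented for the example): the client echoes the ETag from its last GET in an If-Match header, and the server answers 412 Precondition Failed if the list has moved on in the meantime:
GET /lists/requiredItems

200 OK
ETag: "7"

PATCH /lists/requiredItems
If-Match: "7"
Content-Type: application/json-patch+json

[
  { "op": "add", "path": "/entries/3", "value": { "color": "blue" } }
]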
Answering my own question. My first bullet point may be opinion-based and, as has been pointed out, I've asked many questions in one post. Nevertheless, here's a summary of what was answered by others (VoiceOfUnreason) and my own additional research:
ETags are HTTP's resource 'hashes'. They can be combined with If-Match headers to build a versioning system. However, ETag headers are normally not used to declare the ETag of a resource that is being created (PUT) or updated (POST/PATCH); the server storing the resource usually determines the ETag. I've not found anything explicitly forbidding this, but many implementations may assume the server determines the ETag and get confused when it is provided with PUT or PATCH.
A separate revision resource is a valid alternative to ETags for versioning. This resource must be updated at the same time as the resource it is the revision of.
It is not enforceable at the HTTP level to have commit/rollback transactions, unless the transaction itself is modeled as a ReST resource, which would make things much more complicated.
However, some properties of PATCH allow it to be used for this:
An HTTP PATCH must be atomic and can operate on multiple resources. RFC 5789:
The server MUST apply the entire set of changes atomically and never provide (e.g., in response to a GET during this operation) a partially modified representation. If the entire patch document cannot be successfully applied, then the server MUST NOT apply any of the changes.
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH. PATCH is neither safe nor idempotent
A JSON Patch document can consist of multiple operations on multiple resources, and either all must be applied or none, making it an implicit transaction. RFC 6902: "Operations are applied sequentially in the order they appear in the array."
Thus, the revision can be modeled as a separate resource and still be updated at the same time. Querying the current revision is a simple GET. Committing a transaction is a single PATCH request containing first a test of the revision, then the operations on the resource(s) and finally the operation to update the revision resource.
The server can still choose to publish the revision as ETag of the main resource.
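For example (revision value invented for illustration), the same number would then show up both as the revision resource and as the ETag of the main resource:
GET /lists/requiredItems/revision

200 OK
Content-Type: application/json

10

GET /lists/requiredItems

200 OK
ETag: "10"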