I need to know all the available Fiware-ServicePath values for a specific Fiware-Service.
For example: I have the following:
Fiware-Service: MyCompany
Fiware-ServicePath: /app1
Fiware-ServicePath: /app2
Fiware-ServicePath: /app3
What I want is a service that returns something like this:
[
  { "Service": "/app1" },
  { "Service": "/app2" },
  { "Service": "/app3" }
]
Thanks!
The Orion Context Broker API doesn't allow getting a list of service paths (at least in the current version, i.e. Orion 1.7.0). As a workaround, you can get the list if you have access to the DB, for example by running this query:
> db.entities.aggregate([{$group: {_id: "$_id.servicePath"}}])
A possibility would be to wrap the above query with a REST service (using lightweight frameworks such as Flask in Python) and offer the information in a format like the one you suggest. It shouldn't be too difficult.
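For example, a minimal sketch of such a wrapper, assuming Flask and pymongo (the connection details and the database name are assumptions you'd adapt to your deployment; with multitenancy Orion uses one database per service, e.g. orion-mycompany):

# Minimal sketch: expose the aggregation above as a REST endpoint.
# Assumes Flask and pymongo; host, port and DB name depend on your deployment.
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017/")
db = client["orion"]  # with multitenancy, the DB is usually "orion-<service>"

@app.route("/service-paths")
def service_paths():
    # Same aggregation as the mongo shell query above
    cursor = db.entities.aggregate([{"$group": {"_id": "$_id.servicePath"}}])
    return jsonify([{"Service": doc["_id"]} for doc in cursor])

if __name__ == "__main__":
    app.run(port=5000)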
I'm actually lost in the sea of information. I have found a lot of resources like this page, but NONE work for some reason. I have already created an AD app for Power BI using the Power BI embedding setup tool. I chose the "Embed Power BI for your organization's internal users (for enterprises)" option.
I didn't know what I should have given as the home page URL, so I typed a localhost address, which I think is what's messing everything up.
Then I gave it all the permissions it needs. At the end I got this:
My problem is that I want to refresh a specific dataset with an HTTP request action in a Logic App. The link looks like this:
https://api.powerbi.com/v1.0/myorg/groups/{G_id}/datasets/{D_id}/refresh
But I found out that I need a token for it, because the request returns a 403 Forbidden error. So I read the docs and got lost. I tried this page, which suggests this request:
POST: https://login.microsoftonline.com/common/oauth2/token
data: {
grant_type: password
scope: openid
resource: https://analysis.windows.net/powerbi/api
client_id: {Client ID} (got this from Azure Active Directory app)
username: {PBI Account Username} (I used the email and the user from the picture above, but it didn't work)
password: {PBI Account Password} (I used the email and the user from the picture above, but it didn't work)
}
When I tried it, it returned this error:
{
"error": "invalid_request",
"error_description": "AADSTS900144: The request body must contain the following parameter: 'grant_type'.\r\nTrace ID: 247iop60-42-407f-a184-1e15e500\r\nCorrelation ID: f3ca10-d034b7-13-50747a3e\r\nTimestamp: 2022-08-17 11:40:05Z",
"error_codes": [
900144
],
"timestamp": "2022-08-17 11:40:05Z",
"trace_id": "2473f960-3a42-407f-a184-1e15eb24d500",
"correlation_id": "f35cca10-d034-4eb7-9113-507642647a3e",
"error_uri": "https://login.microsoftonline.com/error?code=900144"
}
Maybe I'm doing something wrong in the Postman app:
I tried to reproduce the same request in my environment and got the same error, as shown below:
POST: https://login.microsoftonline.com/common/oauth2/token
data: {
grant_type: password
scope: openid
resource: https://analysis.windows.net/powerbi/api
client_id: {Client ID}
username: {Username}
password: {Username}
}
Response:
To resolve the error, you need to pass the parameters as x-www-form-urlencoded, like below:
Make sure to pass the app secret in the client_secret parameter.
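For reference, roughly the same request from Python (requests encodes a dict passed via data= as x-www-form-urlencoded; the placeholders in angle brackets are values you have to supply):

# Sketch of the token request with an x-www-form-urlencoded body.
import requests

token_url = "https://login.microsoftonline.com/common/oauth2/token"
payload = {
    "grant_type": "password",
    "resource": "https://analysis.windows.net/powerbi/api",
    "client_id": "<application-client-id>",
    "client_secret": "<application-client-secret>",  # the app secret
    "username": "<pbi-account-username>",
    "password": "<pbi-account-password>",
}

# data= sends the body as application/x-www-form-urlencoded
response = requests.post(token_url, data=payload)
response.raise_for_status()
access_token = response.json()["access_token"]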
I got the tokens for Power BI successfully, like below:
After generating the token, try refreshing the dataset with the HTTP request action in a Logic App.
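As a rough illustration of that follow-up call (Python again; note that the documented refresh endpoint ends in /refreshes, and the IDs and token below are placeholders):

import requests

access_token = "<access token from the request above>"
group_id = "<workspace-id>"
dataset_id = "<dataset-id>"

# Trigger the dataset refresh; 202 Accepted means the refresh was queued.
refresh_url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
    f"/datasets/{dataset_id}/refreshes"
)
resp = requests.post(refresh_url, headers={"Authorization": f"Bearer {access_token}"})
print(resp.status_code)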
I'm playing around with FIWARE Orion, IoT Agent JSON, and IoT Agent OPC UA. Since all the IoT Agents connect to Orion and map different IoT protocols to NGSI, I'm wondering whether it is possible for devices using different protocols to communicate with each other without adding any additional application logic.
Let's consider an MQTT device A and an OPC UA server B. For example, is it possible for:
B to report its measurements to the Orion Context Broker and A to subscribe to that attribute? Something like
B-->IoT Agent OPC UA-->Orion-->IoT Agent JSON-->mosquitto-->A
(I tried to make a context provider registration. However, the URL of the B entity's attributes (orion:1026/v2/B/attrs/XXX) obviously doesn't work, since Orion will send a POST to orion:1026/v2/B/attrs/XXX/op/query, which doesn't exist, and the provided attribute is not provisioned at the IoT Agent JSON... I feel like I'm going in totally the wrong direction.)
A and B to access the same entity and report their measurements to that entity in Orion? Since A and B both need their own IoT Agents, and the same entity cannot be provisioned at each agent because of duplication...
Is it a really bad idea to try to mix devices of several protocols in one entity? Thank you so much for answering my doubts in advance!
Each NGSI entity should map to something that has state in the real world. In your case your data models should be based on Device, and you should have a separate OPC UA-based Device entity and a second, separate JSON Device entity. These are the low-level entities within your system, which would hold readings from the IoT devices and would also hold additional data (such as battery level or a link to the documentation or whatever).
If you want to model the state of a second, aggregated entity then you can do that as well - just subscribe to changes of context in the device and upsert the values and metadata over to the other entity.
curl --location --request POST 'http://localhost:1027/v2/subscriptions/' \
--header 'Content-Type: application/json' \
--header 'fiware-service: openiot' \
--data-raw '{
"description": "Notify Subscription Listener of Lamp context changes",
"subject": {
"entities": [
{
"idPattern": "Lamp.*"
}
],
"condition": {
"attrs": ["luminosity"]
}
},
"notification": {
"http": {
"url": "http://tutorial:3000/device/subscription/luminosity"
},
"attrs": ["luminosity", "controlledAsset", "supportedUnits"]
},
"throttling": 5
}'
Sample code to do the work from the listening endpoint ( /device/subscription/luminosity ) can be found here - it is a tutorial which is still a work-in-progress, so the full documentation is currently missing.
function shadowDeviceMeasures(req, res) {
    // Attribute to shadow comes from the route, e.g. /device/subscription/luminosity
    const attrib = req.params.attrib;

    async function copyAttributeData(device, index) {
        // Mirror the Device entity itself as linked data
        await upsertDeviceEntityAsLD(device);
        if (device[attrib]) {
            // Copy the watched attribute over to the entity referenced by controlledAsset
            await upsertLinkedAttributeDataAsLD(device, 'controlledAsset', attrib);
        }
    }

    // The notification payload contains the entities that triggered the subscription
    req.body.data.forEach(copyAttributeData);
    res.status(204).send();
}
The point here is that you can (and should) think of data entities at different levels -
I have a thermometer Device - it is sending temperature readings. It has a temperature attribute
I have a Building - it has a thermometer in it - the Building has a temperature attribute and metadata.providedBy linking back to the Device
Depending on your use case you may only need to consider entities at one layer or you may need to use both.
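As a very rough sketch of that second, aggregated layer (plain NGSI-v2 here rather than the NGSI-LD upserts used in the tutorial code above; the entity id, attribute values and the Orion port are illustrative assumptions):

import requests

ORION = "http://localhost:1026"  # the tutorial above runs Orion on 1027
HEADERS = {"fiware-service": "openiot", "fiware-servicepath": "/"}

# Building entity whose temperature attribute carries a providedBy
# metadata item linking back to the Device that supplied the reading.
payload = {
    "actionType": "append",  # upsert semantics: create or update
    "entities": [
        {
            "id": "urn:ngsi-ld:Building:001",
            "type": "Building",
            "temperature": {
                "type": "Number",
                "value": 21.5,
                "metadata": {
                    "providedBy": {
                        "type": "Relationship",
                        "value": "urn:ngsi-ld:Device:thermometer001"
                    }
                }
            }
        }
    ]
}

# NGSI-v2 batch operation endpoint; returns 204 No Content on success.
resp = requests.post(f"{ORION}/v2/op/update", headers=HEADERS, json=payload)
print(resp.status_code)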
I would like to use Twilio Autopilot to build a WhatsApp chatbot. I need to know whether a user has already used our service before. This means that on initiation I get the data from an external source; I can then use Functions to further specify the chatbot logic.
I am wondering what the best options are in the Twilio environment to get that external data loaded into the event or Memory. I see that webhooks are only for diagnostics, and I don't know what the async capabilities of Functions are. Could someone elaborate a bit on the pros and cons of the different methods?
Thanks
You can take a look at the Autopilot Request. These request parameters are provided to your application (via a webhook associated with a task), where you can add additional logic to see whether this is a new user or a returning user and return the appropriate Autopilot Actions, one of which is Remember.
One approach would be to set a webhook on the Assistant's assistant_initiation that determines whether the user has used the system before (more info).
The webhook can then reply with a JSON Remember + Redirect:
Example -
{
"actions": [
{
"remember": {
"isNewuser": "true"
}
},
{
"redirect": "task://newUserTask"
}
]
}
OR
{
"actions": [
{
"remember": {
"isNewuser": "false"
}
},
{
"redirect": "task://oldUserTask"
}
]
}
Also since "isNewuser": "true" will be on the assistant's memory you can use that info on any following task until the session (4 Hours) expires.
I have the following configuration for an active lightweightm2m-iotagent attribute (a temperature sensor value). FIWARE's IoT Agent turns IPSO objects into lazy attributes, but I add a mapping to make it an active attribute, as in the documentation:
types: {
'Type': {
service: 'service',
subservice: '/service',
commands: [],
lazy: [],
active: [
{
"name": "t",
"type": "number"
}
],
lwm2mResourceMapping: {
"t": {
"objectType": 3303,
"objectInstance": 0,
"objectResource": 5700
}
}
},
According to the documentation for the iotagent-node-lib:
NGSI queries to the context broker will be resolved in the Broker database.
However, when I query my active attribute in Orion, Orion also queries the lightweightm2m-iotagent, requesting a bogus /3303/0/0 path which doesn't even exist in the IPSO definition.
curl -H "Fiware-service: service" -H "Fiware-servicepath: /service" http://172.17.0.1:1026/v2/entities/entity1:Type/attrs/t/value
How can I set up the configuration to get the behavior stated in the documentation, resolving a query for an active attribute in the broker database and avoiding these bogus queries?
Maybe the IoT Agent is not recognizing the active attribute as such, and it may be related to the static configuration of the types via config.js; this kind of configuration is not commonly used and may contain some errors (probably including the one you have found). Please try provisioning the device through the API, as explained in: https://github.com/telefonicaid/lightweightm2m-iotagent/blob/master/docs/deviceProvisioning.md. If that works, maybe we should flag the static attribute configuration as buggy.
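As a rough example of provisioning through the API (Python; the agent's north port 4041, the device id/entity name and the internal_attributes payload are assumptions taken from the config above, so check the linked guide for the exact format):

import requests

IOTA = "http://localhost:4041"  # IoT Agent provisioning (north) port
HEADERS = {
    "fiware-service": "service",
    "fiware-servicepath": "/service"
}

payload = {
    "devices": [
        {
            "device_id": "entity1",
            "entity_name": "entity1:Type",
            "entity_type": "Type",
            "attributes": [
                {"object_id": "t", "name": "t", "type": "number"}
            ],
            # Mirrors the lwm2mResourceMapping from config.js
            # (IPSO Temperature object 3303, resource 5700).
            "internal_attributes": {
                "lwm2mResourceMapping": {
                    "t": {
                        "objectType": 3303,
                        "objectInstance": 0,
                        "objectResource": 5700
                    }
                }
            }
        }
    ]
}

resp = requests.post(f"{IOTA}/iot/devices", headers=HEADERS, json=payload)
print(resp.status_code)  # 201 Created if the device was provisioned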
TL;DR
How do I migrate to the pre-0.8 ACL permissions on 0.7.3?
Current setup
I am currently running an ACL enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that nodes/services (Consul) are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new "pre 0.8" ACLs using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled, and I have configured an acl_master_token:
"acl_master_token": "<token>"
and a token for the agents:
"{acl_token":"<token>}"
which all agents use / are configured with.
I have these ACL defaults:
{
"acl_datacenter": "stable",
"acl_default_policy": "deny",
"acl_down_policy": "deny"
}
and my Consul config looks like this:
{
"datacenter": "stable",
"data_dir": "/consul/data",
"ui": true,
"dns_config": {
"allow_stale": false
},
"log_level": "INFO",
"node_name": "dwconsul",
"client_addr" : "0.0.0.0",
"server": true,
"bootstrap": true,
"acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, nor can the nodes/agents register with the master.
Question
What exactly is needed to get the following:
All agents can see all nodes, all services, and all KVs
Anonymous sees nothing: no KV, services, or nodes (that's what is possible with 0.8)
I looked at https://www.consul.io/docs/internals/acl.html "ACL Changes Coming in Consul 0.8" but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path with this particular interest; a lot of people are interested in this. You'd be helping all of them :)
It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
On your Consul servers configure them with an acl_agent_token that has a policy that can write to any node, like this: node "" { policy = "write" }.
On your Consul agents, configure them with a similar token to keep things open, or you can give them a token with a more specific policy that only lets them write to some allowed node prefix.
Note that this gets set as the acl_agent_token, which is used for internal registration operations. The acl_agent_master_token is used as a kind of emergency token for the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
For "all agents can see all nodes and all services and all KVs" you'd add node read privileges to whatever token you are giving to your agents via the acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so you want to be careful with what you bind to (usually only loopback). Or don't set acl_token at all and make callers pass in a token with each request.
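If it helps, here is a rough sketch of creating those two tokens through the legacy ACL API (PUT /v1/acl/create, as it exists in 0.7/0.8) using the master token; the resulting IDs would then be set as acl_agent_token and acl_token in the agent configuration:

import requests

CONSUL = "http://127.0.0.1:8500"
MASTER_TOKEN = "<acl_master_token>"  # placeholder

def create_token(name, rules):
    # Legacy ACL endpoint; returns {"ID": "<new token>"}
    resp = requests.put(
        f"{CONSUL}/v1/acl/create",
        headers={"X-Consul-Token": MASTER_TOKEN},
        json={"Name": name, "Type": "client", "Rules": rules},
    )
    resp.raise_for_status()
    return resp.json()["ID"]

# Goes into acl_agent_token: lets the agent register any node.
agent_registration_token = create_token(
    "agent-registration", 'node "" { policy = "write" }'
)

# Goes into acl_token: read access to nodes, services and KV.
agent_default_token = create_token(
    "agent-default",
    'node "" { policy = "read" } '
    'service "" { policy = "read" } '
    'key "" { policy = "read" }',
)

print(agent_registration_token, agent_default_token)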