Receiving empty JSON messages from edge device

I'm sending messages from a Particle Photon microcontroller to an Azure IoT Hub. I monitor the outgoing messages from the Particle portal and they seem to be just fine. Below is a recent sample:
{
    "name": "*****",
    "data": "{\"eventSentUtcTime\": \"2017-11-03 10:42:00\", \"machine\": \"x10\", \"eventType\": \"coffeeMaintenance\", \"data\": {\"category\": \"MillingPlantCoffee\", \"count\": \"24868\"}",
    "ttl": 60,
    "published_at": "2017-11-03T09:42:39.233Z",
    "coreid": "*****",
    "userid": "*****",
    "version": 37,
    "public": false,
    "productID": 1427
}
But when I check the incoming messages on the Azure IoT Hub side, they're empty except for the schema. I'm using the Device Explorer from Azure's GitHub for monitoring.
03.11.2017 10:42:09> Device: [*****], Data:[{"data":{"count":"","category":""},"eventType":"","machine":"","eventSentUtcTime":""}]
I double-checked the incoming messages in a SQL database, which also shows empty JSON messages except for the given schema.
data,eventType,machine,eventSentUtcTime,EventProcessedUtcTime,PartitionId,EventEnqueuedUtcTime,IoTHub
Record,,,,2017-11-03T10:01:26.8295948Z,1,2017-11-03T10:01:25.7270000Z,Record
The access policy I'm using has all permissions checked. I don't know where the problem lies.

It seems like your issue is on the Particle connector side, assuming you are following this tutorial. Note that the escaped data string in your sample is not valid JSON: it is missing a closing }, so a downstream parser would fail to extract the fields and emit empty values for the schema.
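As a quick sanity check, you can watch what IoT Hub actually receives on the wire. This is a sketch assuming you have the Azure CLI with the IoT extension installed; the hub and device names are placeholders:

# Monitor the raw messages arriving at the hub for one device.
# --hub-name and --device-id are placeholders for your own names.
az iot hub monitor-events --hub-name my-iot-hub --device-id my-photon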


Zabbix API to get the details of traffic used by the applications

I would like to get the details/usage of network traffic used by each application (not by host).
I tried, and I was able to get the list of applications running on a host by using:
{
    "jsonrpc": "2.0",
    "method": "application.get",
    "params": {
        "output": "extend",
        "hostids": "10107"
    },
    "auth": "02axxxxxxx6e1023exxx252cd2xx70",
    "id": 1
}
but I need the network traffic consumption details.
There is no native way to do it.
The application.get API retrieves the application list from the template/host; Zabbix uses applications purely as a grouping mechanism for items, not as a per-process traffic accounting entity.
See Configuration -> Templates -> Pick one -> Applications
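To see what an application grouping actually exposes, you can pull the items attached to it with item.get. This is a sketch: the endpoint URL and the applicationids value are placeholder assumptions, and what comes back are metric definitions (for example interface counters defined on the host), not per-application traffic:

# Fetch the items grouped under one application on host 10107.
# http://zabbix.example.com and applicationids "1234" are placeholders.
curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0",
    "method": "item.get",
    "params": {
        "output": ["itemid", "name", "key_", "lastvalue"],
        "hostids": "10107",
        "applicationids": "1234"
    },
    "auth": "02axxxxxxx6e1023exxx252cd2xx70",
    "id": 2
  }'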

Unable to extend schema within a verified subdomain directory

I live in an enterprise environment where most of our production domains are currently non-routable (e.g. .local).
I tried extending the schema, but the non-routable domain cannot be verified, and I don't think the default .onmicrosoft domain can be verified either. My enterprise allows me to easily create subdomains, so I attached one and verified it for testing purposes, but I ran into the same verified-domain error.
Per the documentation, I should be able to either use the ID of my domain name or just the schema name and get 8 random alpha characters added. Neither approach works in this case.
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
    "id": "idmdomain.sub.domain.net_Owners",
    "description": "Owners of the group",
    "targetTypes": [
        "Group"
    ],
    "properties": [
        {
            "name": "PrimaryOwners",
            "type": "String"
        },
        {
            "name": "SecondaryOwners",
            "type": "String"
        }
    ]
}
Message Received:
{
    "code": "BadRequest",
    "message": "Your organization must own the namespace idmdomain.sub.domain.net as a part of one of the verified domains.",
    "request-id": "1c7363f9-d54b-408a-8b29-2c0d2a94280a",
    "date": "2018-03-22T21:47:22"
}
From the documentation:
If you already have a vanity .com, .net, .gov, .edu or .org domain that you have verified with your tenant, you can use the domain name along with the schema name to define a unique name, in this format: {domainName}_{schemaName}.
For example, if your vanity domain is contoso.com, you can define an id of contoso_mySchema. This is the preferred option.
So in your example, idmdomain.sub.domain.net_Owners should simply be domain_Owners. It shouldn't include idmdomain, sub, net, or any '.' characters.
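A corrected version of the request from the question would then look like this (a sketch; it assumes domain is the verified vanity domain label and that the bearer token was obtained in a user context, as discussed below):

# Same request as above, with the id reduced to {domainName}_{schemaName}.
# <access-token> is a placeholder for a delegated-permission token.
curl -s -X POST https://graph.microsoft.com/v1.0/schemaExtensions \
  -H 'Authorization: Bearer <access-token>' \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "domain_Owners",
    "description": "Owners of the group",
    "targetTypes": ["Group"],
    "properties": [
        { "name": "PrimaryOwners", "type": "String" },
        { "name": "SecondaryOwners", "type": "String" }
    ]
  }'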
Thank you Marc for pointing me in the correct direction. Even though my app had the correct delegated permissions (Directory.AccessAsUser.All), I now understand that I needed to execute this change in the user context instead of the application context, as application-only is not supported.
For those who come behind me: {domainName}_{schemaName} works if you validate your domain; if you don't, and you just leave the schema name, then the generated guid works as documented. I recommend reviewing the two links below, as they were what finally unlocked the puzzle for me.
This helped me understand how it all works (authentication vs. authorization):
https://developer.microsoft.com/en-us/graph/docs/concepts/rest
This helped me set up Postman to quickly validate:
https://blogs.msdn.microsoft.com/softwaresimian/2017/10/05/using-postman-to-call-the-graph-api-using-azure-active-directory-aad/
I should add a few changes for the Postman route...
Auth URL
https://login.microsoftonline.com/yourtenantid/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL
https://login.microsoftonline.com/yourtenantid/oauth2/token
Scope = Directory.AccessAsUser.All
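If you prefer curl over Postman, a rough equivalent is the resource owner password grant against the same token URL. This is a sketch under clear assumptions: the flow only works for accounts without MFA, and the tenant id, client id, username, and password are all placeholders:

# Obtain a user-context (delegated) token for Microsoft Graph.
# yourtenantid, <app-client-id>, <user>, and <password> are placeholders.
curl -s -X POST https://login.microsoftonline.com/yourtenantid/oauth2/token \
  -d 'grant_type=password' \
  -d 'client_id=<app-client-id>' \
  -d 'resource=https://graph.microsoft.com' \
  -d 'username=<user>@yourtenant.onmicrosoft.com' \
  -d 'password=<password>'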

Sensu client subscriptions non-responding

I have set up sensu-server and sensu-client successfully, and all is working except one thing. In my dashboard I can see that there are alerts for both the mysql and web ports, but I have given only the "mysql" subscription right now in the client.json file on my client system. I removed the "webserver" subscription from client.json (which I added initially, before replacing it with "mysql"), but the checks associated with the "webserver" subscription are still displayed. Why is this, and how do I display only the checks associated with the given subscription? Here is my client.json:
{
    "client": {
        "name": "sensuclient2",
        "address": "127.0.0.1",
        "keepalive": {
            "thresholds": {
                "warning": 60,
                "critical": 120
            },
            "handlers": ["default", "mailer", "sns"]
        },
        "subscriptions": [
            "mysql"
        ]
    }
}
It's possible Uchiwa is showing older events, from before the change you made to your client configuration file (I went through that once myself!). Try deleting the events; if the server is no longer scheduling those checks, the events won't come up again.
You can either use sensu-cli to delete the events:
sensu-cli event delete sensuclient2 check_http
https://github.com/agent462/sensu-cli
Or make an API call...
curl -s -i -X DELETE http://yourhost:yourport/events/sensuclient2/check_http
https://sensuapp.org/docs/1.1/api/events-api.html#eventsclientcheck-delete
If the checks do come back, you should review both the server-side and client-side check definitions and the client configuration.
Also, the simplest is the best, as #vishal.k himself reminded me:
you can always delete the events using Uchiwa's interface. :)
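To confirm the stale events are actually gone, you can list the client's current events through the same API (a sketch, reusing the placeholder host and port from above):

# List all current events for the client; an empty list means
# nothing stale is left for sensuclient2.
curl -s http://yourhost:yourport/events/sensuclient2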

Context Broker, ONTIMEINTERVAL subscription immediately sends request to reference

The problem is that even if I set condValues to PT10S, when I send the subscription request to the Context Broker it calls the reference URL right away, not after 10 seconds, and then continues to send requests every 10 seconds.
My question: is there a way to avoid the first, initial request?
Here is the body of the request that I send to the server where the Context Broker is installed:
{
    "entities": [{
        "type": "Cycle",
        "isPattern": "false",
        "id": "someid"
    }],
    "attributes": [
        ...
    ],
    "reference": "someurl",
    "duration": "P1M",
    "notifyConditions": [{
        "type": "ONTIMEINTERVAL",
        "condValues": [
            "PT10S"
        ]
    }]
}
At the present moment (Orion 1.1) the initial notification cannot be avoided. However, being able to configure that behaviour would be an interesting feature to develop in the future and, consequently, a GitHub issue was created about it some time ago.
In addition, note that ONTIMEINTERVAL subscriptions are no longer supported, so you should avoid using them:
ONTIMEINTERVAL subscriptions have several problems (introduce state in CB, thus making horizontal scaling configuration much harder, and makes it difficult to introduce pagination/filtering). Actually, they aren't really needed, as any use case based on ONTIMEINTERVAL notification can be converted to an equivalent use case in which the receptor runs queryContext at the same frequency (and taking advantage of the features of queryContext, such as pagination or filtering).
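For instance, the subscription in the question could be replaced by the receiver polling queryContext itself every 10 seconds. A minimal sketch, assuming the default Orion port and reusing the placeholder host and entity id from the question:

# Poll queryContext at the same frequency the subscription used (PT10S).
# <contextBroker-host> is a placeholder.
while true; do
  curl -s http://<contextBroker-host>:1026/v1/queryContext \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json' \
    -d '{
      "entities": [{
          "type": "Cycle",
          "isPattern": "false",
          "id": "someid"
      }]
    }'
  sleep 10
done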
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).
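With that feature, the initial notification can be suppressed at subscription time. A sketch using the NGSIv2 API and the skipInitialNotification option (option name per the documentation section referenced above; double-check the exact syntax there, and note the host and notification URL are placeholders):

# Create an NGSIv2 subscription without the initial notification.
curl -s -X POST 'http://<contextBroker-host>:1026/v2/subscriptions?options=skipInitialNotification' \
  -H 'Content-Type: application/json' \
  -d '{
    "subject": {
        "entities": [{ "id": "someid", "type": "Cycle" }]
    },
    "notification": {
        "http": { "url": "http://someurl" }
    },
    "expires": "2040-01-01T00:00:00.000Z"
  }'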

Setting up S3QL with FIWARE Object Storage GE (Openstack Swift)

I am trying to set up S3QL with the Object Storage GE, and there seems to be only one piece of information missing.
I successfully installed S3QL thanks to this pretty good tutorial: https://dmsimard.com/2014/09/29/s3ql-a-filesystem-over-http-with-swift/
Now I am stuck when trying to mount an object-container 'test' that I created in region 'Lannion2'.
The URL syntax requires a 'region' to be defined (swiftks://<hostname>[:<port>]/<region>:<container>), but I have no clue how this maps to the FIWARE stack. When trying the following command, s3ql seems to succeed in connecting and authenticating with Keystone, but cannot find the region.
mkfs.s3ql swiftks://cloud.lab.fiware.org:4730/Lannion2:test --backend-options no-ssl
Enter backend login:
Enter backend passphrase:
Results in:
No accessible object storage service found in region Lannion2 (available regions: )
Unfortunately, no available regions are listed in the response. Authentication works correctly, as mistyping the login or passphrase results in an authentication error.
Is there any documentation about the naming of regions in Keystone / the FIWARE cloud?
Authenticate to Keystone via:

POST http://cloud.lab.fi-ware.org:4730/v2.0/tokens
Content-Type: application/json

{"auth": {"passwordCredentials": {"username": "", "password": ""}, "tenantId": "***"}}
In the response, you should receive a list of endpoints, including a swift endpoint. There should be an entry there that looks like:
{"adminURL": "", "region": "Lannion2", "internalURL": ":8080/v1/AUTH_", "id": "", "publicURL": "/v1/AUTH_"}