How to check if IOTA is receiving data from device - fiware

I suspect I have a configuration problem with the southbound traffic. The device is already provisioned with the IoT Agent (IOTA). How do I check whether the agent is receiving measurements?

First, look in the agent's log file to rule out errors. If you do not find anything, try changing the log severity to a more verbose level such as DEBUG.
I have never used this particular FIWARE IoT Agent, but with others such as the IoT Agent for Ultralight, setting the log severity to DEBUG makes the reception of each message show up in the log.
If you need help changing the log severity, start with the agent's configuration page here.
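For example, with the Ultralight or JSON agents the log level can typically be raised through an environment variable or the agent's config.js before starting it; the exact names below come from those agents and may differ for yours, so double-check its configuration page:
export IOTA_LOG_LEVEL=DEBUG   # or set logLevel: 'DEBUG' in the agent's config.js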
On the other hand, if your agent is attached to an Orion Context Broker, you should be able to validate that data is arriving by observing how your entities change as measurements come in. For this you can use the Orion API.
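As a sketch, assuming Orion listens on localhost:1026 and your devices were provisioned under a given service and service path (host, port and header values below are placeholders for your own deployment), you can list the entities and watch their attribute values change as measurements arrive:
curl 'http://localhost:1026/v2/entities' \
  -H 'fiware-service: myservice' \
  -H 'fiware-servicepath: /myservicepath'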
EDIT:
All FIWARE IoT Agents share a core library called node-lib, and that component defines the interfaces to manage device provisioning and similar operations.
There are HTTP operations to create, fetch and delete devices, and in particular to update a device. Check the Apiary blueprint here for more information.
As an example:
Getting all devices
curl -X GET [your_agent_host]:[port]/iot/devices
Update a device
curl -X POST [your_agent_host]:[port]/iot/devices/{device_id} \
  -H 'Content-Type: application/json' \
  -d '{
    "attributes": [
      {
        "object_id": "attr_id",
        "name": "attr_name",
        "type": "attr_type"
      },
      ...
    ]
  }'
I think there is no way to perform a partial update on a single device; you must specify all the attributes again.
Greetings and I hope I have been of some help!

Related

Can we receive webhooks from Foundry when a dataset updates?

Can Foundry send webhook calls to an arbitrary external endpoint when a dataset updates (e.g. a new transaction is committed, or the jobspec is updated)?
I checked around and couldn't find any hooks; maybe there is something at the Actions level, but that is downstream from transactions.
Alternatively, you can work around it by having a downstream transform that exists only to build once the upstream has new transactions, and use it to make the HTTP requests from your transform code via the driver. I understand it's not the same as what you requested, but it's the best workaround I can think of.
You'll have to ask via your support channel to have whatever endpoint you want whitelisted in the Foundry configuration. This will likely have to be applied by Palantir, and it will raise security questions, so you should reach out internally via your support mechanisms.
Once you do that, something like this should work.
from transforms.api import transform_df, Input, Output
import requests

@transform_df(
    Output("output dataset"),
    _noop_df=Input("your upstream"),
)
def transform(_noop_df):
    # Fire the HTTP call whenever the upstream dataset has a new build.
    requests.post('http://stackoverflow.com', headers={'X-Custom': 'Test'}, data='a=1&b=2')
    # Write the result of the request into the output if you want; here we just pass the input through.
    return _noop_df
How to get Serilog json-formatted logs to appear correctly in Datadog

I have been asked to implement a centralized monitoring and logging system using DataDog that will receive information from various services and applications, some running as Windows Services on virtual machines and some running inside a Kubernetes cluster. In order to implement the logging aspect so that DataDog can correctly ingest the logs, I'm using Serilog to do the logging.
My plan is currently to write the logs to the console in json format and have the DataDog agent installed on each server or k8s node capture and ship them to DataDog. This works, at least for the k8s node where I've implemented it so far. (I'm trying to avoid using the custom Serilog sink for DataDog as that's discouraged in the DataDog documentation).
My problem is that I cannot get logs ingested correctly on the DataDog side. DataDog expects the JSON to contain a property called Message, but Serilog names this property RenderedMessage (if I use JsonFormatter(renderMessage: true)) or #m (if I use RenderedCompactJsonFormatter()).
How can I get my logs shipped to DataDog and ingested correctly on the DataDog end?
Answering my own question.
The DataDog logging page has a Configuration section. On that page, the "Preprocessing for JSON logs" section allows you to specify alternate property names for a few of the major log message properties. If you add #m to the Message attributes section and #l to the Status attributes section, you will correctly ingest JSON messages from the RenderedCompactJsonFormatter formatter. If you add RenderedMessage and Level respectively, you will correctly ingest messages from the JsonFormatter(renderMessage: true) formatter. You can specify multiple attributes in each section, so you can support both formats simultaneously.
If you use the Console sink, just apply a corresponding colored theme to it.

No Traces in Azure API Response

The flag Ocp-Apim-Trace has been set to true.
The API response displays this information under the Trace tab:
Trace location was not specified in the response or trace log is not available.
Yet no traces are available. How does one resolve this?
To enable tracing, you need to include "Ocp-Apim-Trace" and "Ocp-Apim-Subscription-Key" in the request headers.
If the API does not require a subscription, you can still get the admin subscription key in the developer portal; this enforces that only an admin can get the tracing log. To get the admin subscription key if you are an admin, go to Developer Portal -> Profile -> find your target API and copy the key.
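As a sketch, something like the following should work; the host, API path and key are placeholders for your own instance:
curl -i 'https://<your-apim-name>.azure-api.net/<your-api-path>' \
  -H 'Ocp-Apim-Subscription-Key: <admin-subscription-key>' \
  -H 'Ocp-Apim-Trace: true'
The -i flag prints the response headers, so you can see what APIM returns alongside the body.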
The Ocp-Apim-Trace feature enables you to specify whether or not APIM should generate a trace file on blob storage.
Setting the header to 'true' within Postman, for example, will give you back an HTTP header in the response called Ocp-Apim-Trace-Location.
This will contain the URL to your trace file, which you can open in any browser.
You might want to install a plugin/extension to be able to format JSON files properly in order to make it easy to read.
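Or, from the command line, you can fetch and pretty-print the trace directly; the URL is whatever came back in the Ocp-Apim-Trace-Location header:
curl -s '<url-from-Ocp-Apim-Trace-Location>' | python -m json.tool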
Just setting the flag Ocp-Apim-Trace to true will not suffice.
One needs to set the subscription key as well, as per this doc:
https://learn.microsoft.com/en-us/azure/api-management/api-management-advanced-policies#Trace
So, for APIs which do not have a subscription key, I am not sure how one can get the traces.

WSO2 IS 5.1.0 as OAuth/OIDC IdP response with different claims on UserInfo endpoint

Does anyone know why I obtain different JSON responses when I call the /userinfo endpoint? Specifically:
When I make the call with curl from the command line, like $ curl -k -H "Authorization: Bearer 2bcea7cc9d7e4b63fd2257aa31116512" https://localhost:9443/oauth2/userinfo?schema=openid, I obtain as response the JSON: {"sub":"asela","name":"asela","preferred_username":"asela","given_name":"asela","family_name":"asela"}
If I make the call with a Java client (a library that implements the Authorization Code flow), the /userinfo call returns a JSON like {"sub":"asela#carbon"} without all the other claims.
The claims for the service defined in WSO2 IS are the default ones. Thanks for any help.
I have tried this and got the same issue that you have faced. As I mentioned in my previous comment, the issue occurs due to a claim mapping problem. Normally we get the user's attributes from the “http://wso2.org/claims” dialect, but when we call the OpenID userInfo endpoint, it provides the user's attributes from “http://wso2.org/oidc/claim”. However, not all the claims in http://wso2.org/claims are defined in http://wso2.org/oidc/claim (e.g. Mobile, Address, Organization), so we have to define the required claims in the http://wso2.org/oidc/claim dialect if they are not already there.
You can check these claims from the Identity Server Management Console. To do this, log into Management Console > Main > List (under Claims).
Then you can go through the two claim dialects and add the required claims to the http://wso2.org/oidc/claim dialect. To add a new claim, go to Management Console > Main > Add (under Claims) > Add New Claim. See the attached screenshot of defining a sample claim; there you need to map the exact Mapped Attribute & Claim URI with the http://wso2.org/claims dialect.
Hope this will be helpful.
WSO2 IS normally returns the claims that are configured under the “http://wso2.org/oidc/claim” claim dialect, and those claims should be returned in the response, so make sure you have defined claim values in the user's profile. You can follow [1] & [2] for more details. If you still cannot get the correct response, please attach your SP configuration and claim configuration for further analysis.
[1] http://xacmlinfo.org/2015/03/09/openid-connect-support-with-resource-owner-password-grant-type/
[2] http://shanakaweerasinghe.blogspot.com/2016/01/get-user-profile-for-oauth-token-using.html

Is there a way to configure a "Topic exchange" to send the non routed messages to a queue?

With the default behaviour, when a message is not routed it is lost.
You could create a queue that receives all messages by using # as the routing key in the binding, then create a process that handles the non-routed messages. The process will have to connect to the queue, receive all messages and somehow know whether they have been routed or not. What you need to do is call the management plugin CLI to return all the bindings for the exchange, parse that result to get the list of bindings, and ignore any incoming message which matches one of those bindings. Then you can process only the ones that never got routed in the first place. You could even publish them to another queue for a worker process to consume. See the sketch below.
Have a look at this for information on the management plugin CLI.
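As a rough sketch with the management plugin's rabbitmqadmin tool (exchange and queue names below are placeholders; add credentials and vhost flags as your setup requires):
# create a catch-all queue and bind it to the topic exchange with the '#' routing key
rabbitmqadmin declare queue name=catch_all durable=true
rabbitmqadmin declare binding source=my_topic_exchange destination=catch_all routing_key='#'
# list the existing bindings so your process can filter out messages that were already routed
rabbitmqadmin list bindings source destination routing_key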
If you prefer to use rabbitmqctl, you could use
sudo rabbitmqctl report
to get a report that would need to be parsed to get all the bindings. See here.
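If you only need the bindings rather than the whole report, a narrower command may be enough (column names as per the rabbitmqctl documentation):
sudo rabbitmqctl list_bindings source_name destination_name routing_key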