Periodically update ContextBroker from an external API - fiware

I have installed a FIWARE platform with Context Broker and Cygnus. Everything is working properly. I would like the Context Broker to update itself automatically every hour by fetching data from an API on an external server (a JSON API that returns data via a GET request).
Is this possible? How can it be done?
Every hour:
Context Broker sends a request to get the weather data
Context Broker updates the "weather" entity with the data returned
Thanks

In general, Orion Context Broker expects context producers to push data.
The only case in which Orion pulls data is in context provider scenarios, and it does so only in a transient way, i.e. it gets the data from the context provider and sends it to the client in the response, but the data is not stored in the context database managed by Orion.
In addition, you could have a look at the FIWARE Device Simulator. This is a powerful and flexible tool that can use an external source of data, acting as a bridge between your source of data and the Orion Context Broker. From its documentation:
external: Information about an external source from which to load, to transform and to register data into a Context Broker instance.
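If a push-based workaround is acceptable, the usual approach is a small script run by cron (or a systemd timer) every hour that pulls from the weather API and pushes the result into Orion with a regular NGSIv2 update. Below is a minimal sketch, assuming NGSIv2, an already-created "weather" entity, and a hypothetical external weather endpoint and attribute names:

    #!/usr/bin/env python3
    """Hourly bridge: pull weather JSON from an external API, push it to Orion."""
    import requests

    ORION = "http://localhost:1026"                              # your Orion instance
    WEATHER_API = "https://api.example.com/weather?city=Madrid"  # hypothetical endpoint

    def main():
        # 1. Pull the data from the external API (plain GET returning JSON)
        data = requests.get(WEATHER_API, timeout=10).json()

        # 2. Push it to Orion as an NGSIv2 attribute update on the "weather" entity
        attrs = {
            "temperature": {"value": data["temp"], "type": "Number"},
            "humidity": {"value": data["humidity"], "type": "Number"},
        }
        resp = requests.post(f"{ORION}/v2/entities/weather/attrs", json=attrs, timeout=10)
        resp.raise_for_status()  # Orion answers 204 No Content on success

    if __name__ == "__main__":
        main()

Scheduled with a crontab entry such as 0 * * * * /usr/bin/python3 /opt/weather_bridge.py, every hourly update also flows to Cygnus through your existing subscription, so persistence keeps working unchanged.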

Related

Is there a way to push Grafana alerts into MongoDB using a webhook

I have configured alerts in Grafana, and I want to push the data (in JSON format) to MongoDB using a webhook.
The issue is that when alerts are triggered, no data is sent to the DB. (It may be due to a different JSON format; I'm not sure.)
We configured the endpoint by taking https://webhook.site/ as a reference.
There we triggered the test alert, and based on the JSON format it produced, we created the API endpoint and configured it in Grafana under Contact Points -> URL.
The test notification works now.
But when real metrics like CPU or memory reach the threshold, no data is sent to the DB. Why is that?
Does every metric send a different JSON format?
If so, how do I configure for that?
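One way to take the payload-format question out of the equation is to make the endpoint schema-agnostic: accept whatever JSON Grafana posts and insert it into MongoDB verbatim, then inspect the stored documents to see how the real alerts differ from the test one. A minimal sketch with Flask and pymongo (connection string, database name, and route are all illustrative):

    from flask import Flask, jsonify, request
    from pymongo import MongoClient

    app = Flask(__name__)
    # Hypothetical connection string, database and collection names
    alerts = MongoClient("mongodb://localhost:27017")["grafana"]["alerts"]

    @app.route("/grafana-webhook", methods=["POST"])
    def grafana_webhook():
        payload = request.get_json(force=True)  # accept any JSON body Grafana sends
        alerts.insert_one(payload)              # store it verbatim, no schema assumed
        return jsonify({"status": "stored"}), 200

    if __name__ == "__main__":
        app.run(port=5000)

Since no fields are assumed, a payload that differs from the test alert's shape still gets stored, which makes it easy to diagnose what the real alerts actually send.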

Different notification URLs in Orion subscriptions with the same Cygnus instance

Is it possible to create several Orion subscriptions, changing only the Cygnus notification URL, while using the same instance of Cygnus?
As mentioned in the official documentation of the Orion Context Broker:
The URL where to send notifications is defined with the url sub-field. Here we are using the URL of the accumulator-server.py program, previously started. Only one URL can be included per subscription. However, you can have several subscriptions on the same context elements (i.e. same entity and attribute) without any problem.
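So yes: since each subscription carries exactly one URL, you simply create one subscription per notification URL, all of them pointing at the same Cygnus instance (for example, at different ports or paths it serves). A minimal NGSIv2 sketch in Python; the entity, attribute, ports and paths are illustrative:

    import requests

    ORION = "http://localhost:1026"

    def subscribe(notify_url):
        """Create a subscription on Room1/temperature notifying notify_url."""
        sub = {
            "subject": {
                "entities": [{"id": "Room1", "type": "Room"}],
                "condition": {"attrs": ["temperature"]},
            },
            "notification": {"http": {"url": notify_url}},
        }
        r = requests.post(f"{ORION}/v2/subscriptions", json=sub, timeout=10)
        r.raise_for_status()
        return r.headers["Location"]  # path of the newly created subscription

    # Two subscriptions on the same entity/attribute, different notification URLs
    subscribe("http://cygnus:5050/notify")
    subscribe("http://cygnus:5081/notify")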

Using Web Services to Load Customer Orders using .NET

I would like to create web service(s) that I can publish to an external-facing network to allow our customers' teams to send us CRUD operations on customer orders.
What would be the best practice in this case, using Microsoft or open-source technologies, to serve the customer requests?
Option 1:
The web service accepts XML/JSON data
Stores the data locally in a file
A background task picks up the file and attempts to load the data
Send an email for records that failed
The drawback here is that the response from the web service will not be real-time, and validation will be limited.
Option 2:
The web service accepts XML/JSON data
Attempt data load
Respond immediately with whether the load succeeded or failed
The drawback here is whether the infrastructure can handle it if the volume of orders increases severalfold in the near future.
I am open to using REST with WCF or Web API, and any other helpful technologies that can scale as demand grows.
Have you tried message queueing?
Basically, in this architecture, a client application (called the producer) submits a message to the message broker (the message queue), and another application (called the consumer) connects to the broker and subscribes to the messages to be processed.
The message can be just simple information or a task that will be processed by another application.
An application can act as both producer and consumer.
There are many message queue implementations; one of them is RabbitMQ.
Here is a complete intro: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Since the communication goes through a middleman (the message queue), it will not provide an immediate response. But you don't need to send the processing result (i.e. order processing in your case) by email, since the application can subscribe to the result message.
It is well suited to handling a huge load of processes, and as always you can start small (even free) and scale up in the future.
Take a look at the pricing details at https://www.cloudamqp.com/, which provides RabbitMQ as a service.
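For illustration, here is a minimal producer/consumer sketch using Python and the pika client; the queue name, connection details, and the "data load" step are placeholders for your actual order processing:

    import json
    import pika  # RabbitMQ client for Python

    # --- Producer: the web service drops the order onto the queue and returns ---
    def submit_order(order: dict) -> None:
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.queue_declare(queue="orders", durable=True)  # queue survives broker restarts
        ch.basic_publish(
            exchange="",
            routing_key="orders",
            body=json.dumps(order),
            properties=pika.BasicProperties(delivery_mode=2),  # persistent message
        )
        conn.close()

    # --- Consumer: a background worker loads orders into the database ---
    def run_worker() -> None:
        conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        ch = conn.channel()
        ch.queue_declare(queue="orders", durable=True)

        def on_message(channel, method, properties, body):
            order = json.loads(body)
            print("loading order", order)  # replace with the real data load
            channel.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success

        ch.basic_consume(queue="orders", on_message_callback=on_message)
        ch.start_consuming()

Because a message is acknowledged only after a successful load, a crashed worker simply leaves the order on the queue for the next one, and you scale by running more workers.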
I do this using ActiveMQ as a central message broker (you can also use Azure Service Bus if you have an Azure subscription) and pre-baked domain objects. For your scenario it might look like this:
The web service accepts XML/JSON data
yes, you have a REST service that accepts multipart requests, let's say JSON as it's easier to work with on the client side. Before you can send messages, it's usually best to convert the incoming client message into a domain message, so all message consumers know the exact format to expect and can therefore validate the message. I usually create these using xsd.exe on Windows with an XSD file that describes the format of the object; xsd.exe turns that XSD into a C# class. It's then just a matter of taking the JSON fields and populating an Order instance. That Order then gets sent as a message to the broker. At that point you're in guaranteed-messaging land, as JMS will take care of delivery and ActiveMQ will take care of message persistence.
Stores the data locally in a file
rather than a file, you convert the incoming JSON into a domain class, e.g. an Order instance. You'll never see JSON or XML beyond this point, as it's all domain classes from here.
A background task picks up the file and attempts to load the data
yes, the broker has routes defined in its Camel config that tell it, for example, to send messages coming in on the /client topic to the /orders topic. The task is set up as a durable topic subscriber, so it automatically gets that Order domain object.
Send an email for records that failed
if the Order object contains information about the client (email etc.), the task can send the email on failure, but a better pattern is to route the failed Order to the /error topic, where a different task, again a durable topic subscriber, picks it up and logs/sends email/audits etc.
if the volume of orders increases severalfold in the near future
you can cluster the brokers and run multiple Order consumers. If you separate the failure handling into another route, all the order task has to do is process the order and route the message to either the /error or /success topic depending on the outcome. So each route provides a small piece of the puzzle and you can scale up the pieces if the puzzle gets too big.
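The answer above is JMS/C#-flavoured; purely as an illustration of the success/error routing idea, here is a sketch using Python's stomp.py client against ActiveMQ's default STOMP port (credentials, topic names, and the Order fields are placeholders):

    import json
    import stomp  # stomp.py; ActiveMQ listens for STOMP on port 61613 by default

    conn = stomp.Connection([("localhost", 61613)])
    conn.connect("admin", "admin", wait=True)

    # Incoming client JSON already validated and mapped onto the domain Order
    # shape; the field names here are hypothetical.
    order = {"orderId": "A-1001", "customer": "ACME", "lines": [{"sku": "X", "qty": 2}]}

    try:
        # ... attempt the data load here ...
        conn.send(destination="/topic/success", body=json.dumps(order))
    except Exception:
        # Route the failed Order to the error topic, where a durable
        # subscriber logs it, audits it, or emails the client.
        conn.send(destination="/topic/error", body=json.dumps(order))

    conn.disconnect()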

Azure API Management - User Metadata

I am using Azure API Management to provide an API gateway for some APIs. To set up a policy for a particular API, I have used a Property (Named Value) to store user metadata, which I then assign to a variable when processing the incoming request body. When adding a new user, I need to add metadata for the new user to the JSON. The property value has grown and exceeded the limit now, so I cannot add any more info to it. I am wondering what the best way is to store my large metadata so that it is accessible in an API Management policy?
Update 1:
I have switched the authentication process from Azure to Auth0, so I can add the user metadata to the Auth0 app_metadata; in the Azure policies I then validate the JWT from Auth0 and obtain the token claim (app_metadata), as explained in this article. By doing so I can solve the large user metadata (JSON) issue; however, this doesn't solve the other, unrelated user metadata stored in other Properties (Named Values), and moreover the API gateway inbound policies are growing into a huge bunch of logic that is not easy to manage and maintain.
At this stage I am looking for a solution that handles all the API gateway inbound policies in a better, more manageable environment, i.e. C#. So my two cents is to implement the API gateway inbound policies in a new .NET API and call this new API from the existing API gateway inbound policies, so that it plays a bridge role between the Azure API gateway and the existing API. However, I'm still not sure whether this is achievable, and whether the existing API can be called via the new API directly or has to be called via the Azure API gateway in some way.
At this point you have to either store it in multiple variables or hardcode it directly in the policy.
After more research I ended up with this solution, which basically suggests storing the user metadata in Azure Cosmos DB and calling the Cosmos API from the API Management policy to access the metadata; the Cosmos API call can also be cached in the policy.
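For illustration, the caching part of that pattern looks roughly like the inbound fragment below: check the policy cache first, and only call the metadata backend on a miss. The URL points at a hypothetical thin wrapper in front of Cosmos DB (calling the Cosmos REST API directly from a policy also needs signed authorization headers, omitted here), and the cache key and duration are placeholders:

    <inbound>
        <!-- Hypothetical cache key and backend URL; adapt to your setup -->
        <cache-lookup-value key="user-metadata" variable-name="userMetadata" />
        <choose>
            <when condition="@(!context.Variables.ContainsKey(&quot;userMetadata&quot;))">
                <send-request mode="new" response-variable-name="metadataResponse" timeout="10" ignore-error="false">
                    <set-url>https://my-metadata-api.example.net/user-metadata</set-url>
                    <set-method>GET</set-method>
                </send-request>
                <set-variable name="userMetadata" value="@(((IResponse)context.Variables[&quot;metadataResponse&quot;]).Body.As&lt;string&gt;())" />
                <cache-store-value key="user-metadata" value="@((string)context.Variables[&quot;userMetadata&quot;])" duration="3600" />
            </when>
        </choose>
    </inbound>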

Cannot see POI on Wirecloud Map Viewer

We are using the Map Viewer of the FIWARE Wirecloud mashup to show, as POIs on a world map, the locations of the 3D printers registered in our FIWARE project. The GE implementation instance used is the "FIWARE Lab Mashup Portal", and the POIs are created and retrieved from the "FIWARE Lab Global Instance" of the Orion Context Broker (NGSI server URL: https://orion.lab.fiware.org:1026/).
The application was working fine, but several months ago the POIs suddenly disappeared from the map.
After looking over all the related questions on Stack Overflow and other resources about this problem, we did the following:
Update the version of the NGSI source operator from v3.0.3 to v3.0.5,
Change the NGSI proxy URL from http://ngsiproxy.lab.fiware.org to https://ngsiproxy.lab.fiware.org, and also
Select the option "Use the FIWARE credentials of the workspace owner" to make the mashup public for all users on the web page where it is embedded.
The mashup started to work perfectly.
But last week we noticed that the mashup again failed to show points of interest.
We did some checking:
There isn't a newer version of the NGSI source operator available in the Marketplace; we are using the latest version, v3.0.5. The same applies to the "NGSI Entity to PoI" operator and the "Map Viewer" widget.
There are no changes to the NGSI server URL (https://orion.lab.fiware.org:1026/) or the NGSI proxy URL (https://ngsiproxy.lab.fiware.org).
And finally, we have checked the data in the public instance of the Orion Context Broker through a curl request, and the connection to Orion and the returned JSON seem right.
What might be happening?
We have looked over all the previous similar questions on Stack Overflow and other sources, but this time the answers don't help us.
Thank you in advance for your help.
There was a temporary problem with the global Orion Context Broker: it was not sending notifications (queries and other operations were working well). The Context Broker team is checking the global instance, and it should be operational again shortly.
NOTE: check the URL of the Context Broker; it should be http://orion.lab.fiware.org:1026/ (without the "s" of https).