I have configured alerts in Grafana, and I want to push the alert data (in JSON format) to MongoDB using a webhook.
The issue is that when alerts are triggered, no data is sent to the DB. (It may be due to a different JSON format; I'm not sure.)
We built the endpoint by taking https://webhook.site/ as a reference.
There we triggered a test alert, and based on the JSON format it produced, we created the API endpoint and configured its URL in Grafana -> Contact Points -> URL.
The test notification now works.
But when real metrics like CPU and memory reach their thresholds, the alert does not send data to the DB. Why is that?
Does every metric send a different JSON format?
If so, how do I handle that?
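For what it's worth, one way to sidestep format differences between the test notification and real alerts is to have the webhook endpoint store the entire JSON body it receives instead of mapping individual fields. Below is a minimal sketch of such a receiver, assuming Flask, pymongo, and a local MongoDB; the route, database, and collection names are only illustrative.

```python
# Minimal webhook receiver that stores whatever JSON body Grafana posts.
# Assumes Flask and pymongo are installed and MongoDB is running on localhost:27017.
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
alerts = MongoClient("mongodb://localhost:27017")["alerting"]["grafana_alerts"]

@app.route("/grafana-webhook", methods=["POST"])
def grafana_webhook():
    payload = request.get_json(force=True, silent=True)
    if payload is None:
        # The body was empty or not valid JSON.
        return jsonify({"error": "expected a JSON body"}), 400
    # Store the whole payload rather than assuming specific fields, so that
    # differences between the test notification and real alerts don't break the insert.
    alerts.insert_one(dict(payload))
    return jsonify({"status": "stored"}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

If the inserts still fail only for real alerts, logging the raw request body before inserting should show whether the payload shape actually differs from the test notification.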
I have an incoming stream of data from a message broker. The data is in JSON format. I also have a RESTful API, built with Flask, from which I can get certain information. How can I run the server for the API and store the incoming data in a dictionary at the same time?
You can try using Redis to cache the data from the message broker in a key-value format.
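A rough sketch of how the two parts can run side by side, assuming redis-py and Flask; get_next_message() is a hypothetical stand-in for whatever client your message broker provides:

```python
# Sketch of the idea: a background thread consumes messages from the broker and
# caches them in Redis, while the Flask API reads from Redis on each request.
import json
import threading

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def consume_broker():
    while True:
        message = get_next_message()  # hypothetical: blocks until a broker message arrives
        # Key-value caching, as suggested above; the key choice depends on your data.
        cache.set(message["device_id"], json.dumps(message))

@app.route("/devices/<device_id>")
def get_device(device_id):
    raw = cache.get(device_id)
    if raw is None:
        return jsonify({"error": "unknown device"}), 404
    return jsonify(json.loads(raw))

if __name__ == "__main__":
    threading.Thread(target=consume_broker, daemon=True).start()
    app.run(port=5000)
```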
I would like to create web service(s) that I can publish to an external-facing network to allow our customers' team to send us CRUD operations on customer orders.
What would be the best practice in this case, using Microsoft or open-source technologies, to serve the customer requests?
Option1:
The web service accepts data XML/JSON
Stores the data locally in a file
A task picks up the file and attempts to load the data in the background
Send an email for records that failed
The drawback here is that the response from the web service will not be real-time, and validation will be limited.
Option2:
The web service accepts data XML/JSON
Attempt data load
Respond immediately if load was success or failure
The drawback here is that if the volume of orders increases severalfold in the near future, the infrastructure may not be able to handle it.
I am open to using REST with WCF or Web API, and any other helpful technologies that can scale as demand grows.
Have you tried message queueing?
Basically, in this architecture, there is a client application (called the producer) that submits a message to the message broker (message queue), and there is another application (called the consumer) that connects to the broker and subscribes to the messages to be processed.
The message can be just simple information or a task that will be processed by another application.
The application can act both as producer and consumer.
There is a lot of message queue software; one example is RabbitMQ.
Here is a complete intro to it: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
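To make the producer/consumer roles concrete, here is a minimal sketch using RabbitMQ's Python client, pika; the host, queue name, and message fields are illustrative, and the two halves would normally run as separate processes:

```python
# Minimal producer/consumer sketch for RabbitMQ using pika.
# Assumes RabbitMQ is running on localhost; the "orders" queue name is illustrative.
import json
import pika

# --- producer: the web service submits an order message and returns immediately ---
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=json.dumps({"order_id": 123, "action": "create"}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message on disk
)
connection.close()

# --- consumer: a separate process picks orders up and processes them ---
def handle_order(ch, method, properties, body):
    order = json.loads(body)
    print("processing", order)
    ch.basic_ack(delivery_tag=method.delivery_tag)

consumer_conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
consumer_ch = consumer_conn.channel()
consumer_ch.queue_declare(queue="orders", durable=True)
consumer_ch.basic_consume(queue="orders", on_message_callback=handle_order)
consumer_ch.start_consuming()
```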
Since the communication goes through a middleman (the message queue), it will not provide an immediate response. But you don't need to send the processing result (i.e. order processing, in your case) by email, since the application can subscribe to a message carrying the result.
It is well suited to handling a huge processing load. As always, you can start small (even for free) and scale up in the future.
Take a look at the pricing details at https://www.cloudamqp.com/, which provides RabbitMQ as a service.
I do this using ActiveMQ as a central message broker (you can also use Azure Service Bus if you have an Azure subscription) and pre-baked domain objects. For your scenario it might look like:
The web service accepts data XML/JSON
Yes, you have a REST service that accepts multipart requests, let's say JSON, as it's easier to work with on the client side. Before you can send messages, it's usually best to convert the incoming client message to a domain message, so all message consumers know the exact format to expect and can therefore validate the message. I usually create these using xsd.exe on Windows, with an XSD file that describes the format of the object; xsd.exe turns that XSD into a C# class. It's then just a matter of taking the JSON fields and populating an Order class. That Order then gets sent as a message to the broker. At that point you're in guaranteed-messaging land, as JMS will take care of that and ActiveMQ will take care of message persistence.
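Just to make that step concrete in a language-neutral way (the answer above does it in C# with JMS), here is a rough sketch of the same idea over the STOMP connector that ActiveMQ also exposes, using Python and stomp.py; the port, credentials, destination, and Order fields are assumptions:

```python
# Rough sketch: convert the client's JSON into a domain Order and hand it to ActiveMQ.
# Assumes ActiveMQ's STOMP connector on localhost:61613 and the stomp.py library;
# destination name, credentials, and Order fields are illustrative.
import json
from dataclasses import asdict, dataclass

import stomp

@dataclass
class Order:  # the "domain class" every consumer agrees on
    order_id: str
    customer_email: str
    items: list

def submit_order(client_json: dict) -> None:
    # Basic validation/mapping before anything enters the messaging layer.
    order = Order(
        order_id=client_json["orderId"],
        customer_email=client_json["email"],
        items=client_json["items"],
    )
    conn = stomp.Connection([("localhost", 61613)])
    conn.connect("admin", "admin", wait=True)
    conn.send(
        destination="/topic/client",     # the broker routes /client -> /orders (see below)
        body=json.dumps(asdict(order)),
        headers={"persistent": "true"},  # ask the broker to persist the message
    )
    conn.disconnect()
```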
Stores the data locally in a file
Rather than a file, you convert the incoming JSON to a domain class, e.g. an Order instance. You'll never see JSON or XML beyond this point, as it's all domain classes from here.
A task picks up the file and attempts to load the data in the background
Yes, the broker has routes defined in a Camel config that tell it to, for example, send messages coming in on the /client topic to the /orders topic. The task is set up as a durable topic subscriber, so it automatically gets that Order domain object.
Send an email for records that failed
If the Order object contains information about the client (email etc.), then the task can send the email on failure, but a better pattern is to route the failed Order to the /error topic, where a different task, again a durable topic subscriber, picks it up and logs it, sends the email, audits it, etc.
if the volume of orders increases severalfold in the near future
You can cluster the brokers and run multiple Order consumers. If you separate the failure handling into another route, all the order task has to do is process the order and route the message to either the /error or /success topic depending on the outcome. So each route provides a small piece of the puzzle, and you can scale up the pieces if the puzzle gets too big.
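And a rough sketch of the consuming side of that flow, in the same Python/STOMP style rather than the C#/JMS setup described above; process() is a hypothetical stand-in for the real order handling, the destinations mirror the ones mentioned above, and a durable subscription would additionally need a client id and subscription name, omitted here:

```python
# Rough sketch of an order consumer that routes results to /success or /error.
# Assumes ActiveMQ's STOMP connector on localhost:61613 and a recent stomp.py
# (frame-based listener API); credentials and destinations are illustrative.
import json
import time

import stomp

class OrderListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_message(self, frame):
        order = json.loads(frame.body)
        try:
            process(order)  # hypothetical business logic: load the order into the database
            self.conn.send(destination="/topic/success", body=frame.body)
        except Exception:
            # Don't handle the failure here; hand it to the error route instead.
            self.conn.send(destination="/topic/error", body=frame.body)

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("order-task", OrderListener(conn))
conn.connect("admin", "admin", wait=True)
conn.subscribe(destination="/topic/orders", id="order-task", ack="auto")

while True:
    time.sleep(1)  # keep the process alive; the listener runs on the connection's thread
```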
I installed a FIWARE platform with ContextBroker and Cygnus. Everything is working properly. I would like ContextBroker to update itself automatically every hour by fetching data from an API on an external server (a JSON API that returns data for a GET request).
Is it possible? How can I do it?
Every hour:
ContextBroker requests the weather data
ContextBroker updates the "weather" entity with the returned data
Thanks
In general, Orion Context Broker expects context producers to push data.
The only case in which Orion pulls data is in context provider scenarios, and it does so only in a transient way, i.e. it gets the data from the context provider and sends it to the client in the response, but the data is not stored in the context database managed by Orion.
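In practice that means putting the "every hour" part outside Orion: a small scheduled job (cron, for example) that pulls the external weather API and pushes the result into Orion as a context producer. A rough sketch, assuming NGSI v2, Orion on localhost:1026, and an illustrative weather URL, entity id, and attribute names:

```python
# Rough sketch of an hourly context producer: fetch weather from an external API
# and push it into Orion (NGSI v2). URL, entity id, and attribute names are illustrative.
import requests

ORION = "http://localhost:1026/v2"
WEATHER_API = "https://api.example.com/weather?city=Paris"  # hypothetical external API

def update_weather_entity():
    weather = requests.get(WEATHER_API, timeout=10).json()
    attrs = {
        "temperature": {"value": weather["temp"], "type": "Number"},
        "humidity": {"value": weather["humidity"], "type": "Number"},
    }
    # Update the existing "weather" entity; create it first with POST /v2/entities if needed.
    resp = requests.post(f"{ORION}/entities/urn:ngsi-ld:Weather:001/attrs", json=attrs, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    # Run this script from cron every hour, e.g.:  0 * * * * python update_weather.py
    update_weather_entity()
```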
In addition, you could have a look at the FIWARE Device Simulator. This is a powerful and flexible tool which allows using an external source of data, acting as a bridge between your source of data and the Orion Context Broker. From its documentation:
external: Information about an external source from which to load, to transform and to register data into a Context Broker instance.
I'm currently implementing an IoT solution that has a bunch of sensors sending information in JSON format through a gateway.
I was reading about doing this on Azure but couldn't quite figure out how the JSON schema and Event Hubs work together to display the info in PowerBI.
Can I create a schema, upload it to PowerBI, and then connect it to my device?
There are multiple sides to this. To start with, IoT ingestion in Azure is done through Event Hubs, as you've mentioned. If your gateway is able to make a RESTful call to the Event Hubs endpoint, Event Hubs will get this data and store it temporarily for the specified retention period. Then Stream Analytics will consume the data from Event Hubs and will enable you to do further processing and divert the data to different outputs. In your case, you can set one of the outputs to be a PowerBI dashboard, which you can authorize with an organizational account (more on that later), and the output will automatically be tied to PowerBI.

The data schema part is interesting: the JSON itself defines the data table schema to be used on the PowerBI side, and it propagates from Event Hubs to Stream Analytics to PowerBI with the first JSON message sent. Once the schema is there it is fixed, and the rest of the data being streamed in should be in the same format.
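So from the gateway's point of view, all that matters is posting JSON events with a consistent set of fields into Event Hubs, since the fields of the first event become the table schema downstream. A minimal sketch using the azure-eventhub Python package, with an illustrative event hub name and payload:

```python
# Minimal sketch of sending a sensor reading to Azure Event Hubs.
# Assumes the azure-eventhub package; the connection string and hub name are placeholders.
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event hubs namespace connection string>",
    eventhub_name="sensors",
)
batch = producer.create_batch()
# Every event should keep the same fields, since the first one fixes the schema
# that Stream Analytics and PowerBI will use.
batch.add(EventData(json.dumps({"deviceId": "sensor-01", "temperature": 22.4, "humidity": 51})))
producer.send_batch(batch)
producer.close()
```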
If you don't have an organizational account at hand to use with PowerBI, you can register your domain under Azure Active Directory and use that account since it is considered within your org.
There may be a way of altering the schema afterwards using the PowerBI REST API. Kindly find the links below. I haven't tried it myself, though.
https://msdn.microsoft.com/en-us/library/mt203557.aspx
Stream analytics with powerbi
Hope this helps, let me know if you need further info.
One way to achieve this is to send your data to Azure Event Hubs, read it, and send it to PowerBI with Stream Analytics. Listing all the steps here would be too long. I suggest that you take a look at a series of blog posts I wrote describing how I built a demo similar to what you're trying to achieve. That should give you enough info to get started.
http://guyb.ca/IoTAzureDemo
Our architecture uses a Push Engine to send data to the browser.
Could anybody please tell me what the use of a Push Engine is?
(Why is it required, as the same thing can be achieved using normal AJAX programming?)
Please guide me.
Let's say you're visiting a website, and the website is updated continuously. Your browser needs to keep updating the data that you're viewing, meaning that the browser needs to keep communicating with the server to get the updates.
You can use AJAX to make requests every few seconds, each time fetching more data from the server. The problem is that you need to make a lot of AJAX calls, and you open a connection (a socket) for each one, so it ends up being a very slow process. If the interval between the requests is large, you will have a delay between the updates on the server and the updates in your browser.
To solve that, we can manipulate the HTTP calls: keep the request (the connection) open, and continuously send data. That way, when the server wants to send something to the client (browser), there's an open connection, and it doesn't need to wait for the next AJAX call by the browser.
HTTP servers have a timeout on requests, so just before the request times out, the browser will close it and make a new one.
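That "keep the request open" technique is usually called long polling. A minimal server-side sketch, using Flask and an in-memory queue purely for illustration:

```python
# Minimal long-polling sketch: the server holds each request open until new data
# arrives (or a timeout passes) instead of answering every poll immediately.
import queue
from flask import Flask, jsonify

app = Flask(__name__)
updates = queue.Queue()  # elsewhere in the app, new data is put() on this queue

@app.route("/updates")
def poll_updates():
    try:
        data = updates.get(timeout=25)  # block for up to 25 seconds waiting for data
        return jsonify({"update": data})
    except queue.Empty:
        return "", 204                  # nothing happened; the browser polls again
```

The browser requests /updates, and as soon as it gets a response (or the empty 204 after the timeout), it immediately issues the next request, so there is almost always one open connection the server can push data down.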
Another (better) method is using the XMPP protocol, which is used in chats like Facebook's and MSN's.
AJAX is a pull method: it requires the client to connect to the server. If you have some information that you want to display live, for example a live score in a football game, the AJAX call has to be made at regular intervals, even when there is no data waiting on the server. A Push Engine is the reverse: the client and server maintain a connection, and the server pushes data when there is data to be sent.