Read CSV file using Prometheus

I have data in CSV format and I want to load that data into Prometheus. Can anyone help me?
Does any exporter or API exist for this?

You need an exporter that exposes the data from the CSV files over HTTP in Prometheus format, so that Prometheus can scrape it.
See, for example, https://github.com/stohrendorf/csv-prometheus-exporter and check whether it does what you want.
Alternatively, you can expose the raw data directly from your application with a Prometheus client library, or use a client library to build your own customised exporter.
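For the client-library route, here is a minimal sketch in Python using prometheus_client; the CSV path, the sensor/value column layout, the metric name and the port are all assumptions for illustration, not part of the question.

# pip install prometheus-client
import csv
import time

from prometheus_client import Gauge, start_http_server

CSV_PATH = "data.csv"  # hypothetical file with "sensor,value" columns

reading = Gauge("csv_reading", "Value read from the CSV file", ["sensor"])

def refresh_from_csv(path):
    # Re-read the CSV and update one gauge child per row.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            reading.labels(sensor=row["sensor"]).set(float(row["value"]))

if __name__ == "__main__":
    start_http_server(9101)  # Prometheus scrapes http://host:9101/metrics
    while True:
        refresh_from_csv(CSV_PATH)
        time.sleep(15)  # refresh roughly once per scrape interval

Add a scrape job for that port in prometheus.yml and each row shows up as a csv_reading{sensor="..."} time series.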

Related

How do we name the files that are streamed via Firehose?

I'm building an architecture using boto3, and I want to dump data in JSON format from an API to S3. What blocks my way right now is, first, that Firehose does NOT support JSON; my current workaround is to not compress the records, but the result is still different from a plain JSON file. I would still like a better option to make the files more compatible.
Second, the file names can't be customized. All the data I collect will eventually be queried through Athena, so can boto3 do the naming?
Answering a couple of the questions you have. First, if you stream JSON into Firehose, it will write JSON to S3. JSON is the data structure of the file; compression is just the file type. Compressing JSON doesn't make it something else; you'll just need to decompress it before consuming it.
Re: file naming, you shouldn't care about that. Let the system name the files whatever it likes. If you define the Athena table with the location, you'll be able to query it, and when new files are added you'll be able to query them immediately.
Here is an AWS tutorial that walks you through this process: JSON stream to S3 with Athena query.
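To illustrate the first point, a minimal boto3 sketch (the delivery stream name and record fields are placeholders); appending a newline to each record keeps the objects in S3 newline-delimited, which is the layout Athena reads most easily.

import json
import boto3

firehose = boto3.client("firehose")

def send_record(payload):
    # Send one JSON object to Firehose; the trailing newline keeps the
    # resulting S3 objects newline-delimited for Athena.
    firehose.put_record(
        DeliveryStreamName="my-delivery-stream",  # placeholder name
        Record={"Data": (json.dumps(payload) + "\n").encode("utf-8")},
    )

send_record({"device_id": "sensor-1", "temperature": 21.5})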

Polymer: how to handle CSV file upload, convert to JSON, and send to server

I want to handle a requirement in Polymer web components where a user can upload a CSV file from the UI, and the CSV file is parsed to JSON and sent to the server. I searched and found vaadin-upload and looked over its API, but I am not sure how to receive the CSV file, convert it to JSON, and send it to the server. Can anyone show a JSFiddle of vaadin-upload, or any other web component, that handles this scenario?
First of all, I am wondering why you would not simply do the conversion on the server side.
In that case, you would indeed be able to use vaadin-upload directly.
Here is a snippet that would upload all files to the example.com server, and only allow CSV files.
<vaadin-upload target="https://example.com/upload" method="POST" accept="text/csv">
</vaadin-upload>
There are plenty of resources on how to convert CSV files to JSON, from small snippets to Node libraries.
If you really wanted to do the conversion client side, then I would suggest creating an element that embeds a vaadin-upload and converts the Files array to JSON before manually calling the uploadFiles method.
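If you go the server-side route, here is a minimal sketch of the receiving end in Python; Flask, the /upload path, and the form field name "file" (vaadin-upload's default formDataName) are assumptions here, and your server stack may differ.

# pip install flask
import csv
import io

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload_csv():
    # Receive the file posted by <vaadin-upload>, parse it as CSV,
    # and return (or store) the rows as JSON.
    uploaded = request.files["file"]  # "file" is the assumed form field name
    text = io.TextIOWrapper(uploaded.stream, encoding="utf-8")
    rows = list(csv.DictReader(text))  # each CSV row becomes a JSON object
    return jsonify(rows)

if __name__ == "__main__":
    app.run(port=8080)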

MarkLogic Java API batch upload files (.csv)

I'm trying out the MarkLogic Java API and want to bulk upload some files with the .csv extension.
I'm not sure what to use, since the Java API only supports JSON, XML, and TXT files.
How do I batch upload files using the MarkLogic Java API? Do I convert everything to JSON?
Do I convert everything to JSON?
Yes, that is a common way to do it.
If you would like additional examples of how you can wrangle CSV with the Java Client API, check out OpenCSVBatcherExample and JacksonDatabindTest.testDatabindingThirdPartyPojoWithMixinAnnotations. The first demonstrates converting the csv to XML and using a custom REST extension. The second example (well, unit test...) demonstrates converting the csv to JSON and using the batch upload (Bulk Writes) capabilities Justin linked to.
If you have CSV files on your filesystem, I’d start with mlcp, as suggested above. It will handle all of the parsing and splitting into multiple transactions/batches for you. Take a look at the mlcp documentation for more details and some example configurations.
If you’d like more control over the parsing and splitting logic than mlcp gives you out-of-the-box or you’re getting CSV from some other source (i.e. not files on the filesystem), you can use the Java Client API. The Java Client API allows you to efficiently write batches using a WriteSet. Take a look at the “Bulk Writes” example.
According to your reply to Justin, you cannot use mlcp because it is a command-line tool and you need to integrate it into a web portal.
Well, mlcp is released as open source software under the Apache 2 licence, so if you are happy with that licence, you have the source to integrate.
But what I see as your main problem statement is more specific:
How can I create multiple XML or JSON documents from a CSV file [allowing the use of the Java API to then upload them as documents in MarkLogic]?
With that specific problem statement:
1) have a look at SplitDelimitedTextReader.java from the mlcp source
2) try some Java libraries for this purpose, such as http://jsefa.sourceforge.net/quick-tutorial.html (a rough sketch of the row-to-document split is shown below)
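As an illustration of the splitting step only (shown in Python for brevity; file names and paths are hypothetical), each CSV row becomes its own JSON document, which you would then write to MarkLogic, for example with the Java Client API bulk writes mentioned above.

import csv
import json
from pathlib import Path

def split_csv_to_json(csv_path, out_dir):
    # Turn each CSV row into its own JSON document, ready to be uploaded
    # to MarkLogic (e.g. with the Java Client API's WriteSet).
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            doc = out / f"record-{i}.json"  # one document per row
            doc.write_text(json.dumps(row, indent=2))

split_csv_to_json("input.csv", "json-docs")  # hypothetical paths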

BigQuery table to be extracted as JSON on the local machine

I have an idea of how to extract table data to Cloud Storage using the bq extract command, but I would rather like to know whether there are any options to extract a BigQuery table as newline-delimited JSON to a local machine.
I can extract table data to GCS via the CLI and also download JSON data from the web UI, but I am looking for a solution using the bq CLI to download table data as JSON to my local machine. Is that even possible?
You need to use Google Cloud Storage for your export job. Exporting data from BigQuery is explained here; check also the variants for different path syntaxes.
Then you can download the files from GCS to your local storage.
The gsutil tool can help you download the files from GCS to your local machine.
You first need to export to GCS, then transfer the files to your local machine.
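For reference, here is a rough sketch of those two steps with the Python client libraries; the project, dataset, table, and bucket names are placeholders, and the same can be done with bq extract and gsutil cp.

# pip install google-cloud-bigquery google-cloud-storage
from google.cloud import bigquery, storage

# Step 1: export the table to GCS as newline-delimited JSON (placeholder names throughout).
bq_client = bigquery.Client()
job_config = bigquery.ExtractJobConfig()
job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
extract_job = bq_client.extract_table(
    "my-project.my_dataset.my_table",
    "gs://my-bucket/export/table-*.json",
    job_config=job_config,
)
extract_job.result()  # wait for the export job to finish

# Step 2: download the exported files from GCS to the local machine.
gcs_client = storage.Client()
for blob in gcs_client.list_blobs("my-bucket", prefix="export/"):
    blob.download_to_filename(blob.name.split("/")[-1])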
If you use the bq CLI tool, you can set the output format to JSON and redirect the output to a file. This way you can achieve some export locally, but it has certain other limits.
This exports the first 1000 rows as JSON:
bq --format=prettyjson query --n=1000 "SELECT * from publicdata:samples.shakespeare" > export.json
It's possible to extract data without using GCS, directly to your local machine, using the bq CLI.
Please see my other answer for details: BigQuery Table Data Export

Get JSON from Azure Storage Blob

I want to get a JSON file from Azure Storage Blob through the browser.
I used Stream Analytics, and it outputs a JSON file to the blob container. Now I need to get the information inside the JSON file in order to show the IoT device status in real time.
I tried to use JSONP, but I don't know how to add the callback method to the JSON file without downloading it. Is there any way to add the callback method?
Or is there another way to get the information inside the container?
For this particular scenario, I'd recommend Power BI. Stream Analytics now has direct output to Power BI, and you can pretty much customize the dashboard for your real-time IoT needs.
You can refer to this article for a step-by-step walkthrough of Stream Analytics + Power BI.
Coming back to your question: you need to download the blob to access its content. Stream Analytics output to Blob storage is usually for archiving or later predictive-analysis scenarios.
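If you do download it programmatically, here is a minimal sketch with the Python azure-storage-blob SDK; the connection string, container, and blob names are placeholders, and the line-by-line parsing assumes the line-separated JSON output format from Stream Analytics.

# pip install azure-storage-blob
import json

from azure.storage.blob import BlobServiceClient

# Placeholders: use your storage account's connection string, container, and blob path.
service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="streamanalytics-output", blob="devices.json")

# Stream Analytics typically writes line-separated JSON, so parse record by record.
content = blob.download_blob().readall().decode("utf-8")
for line in content.splitlines():
    if line.strip():
        record = json.loads(line)
        print(record)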
If you still prefer not to use Power BI, I'd either route the Stream Analytics output to an Event Hub and read the data from there in real time, or alternatively save the data into a NoSQL DB like DocumentDB on Azure and read it from there. I can recommend Highcharts if you want to use custom gauges etc. to visualize the data.
Hope this helps.