How to pull metrics into Prometheus using client_golang - JSON

I am trying to write a JSON exporter in Go using client_golang.
I could not find any useful example for this. I have a service ABC that produces JSON output over HTTP. I want to use client_golang to expose these metrics to Prometheus.

Take a look at the godoc for the Go client; it is very detailed and contains plenty of examples. The one for the Collector interface is probably the most relevant here:
https://godoc.org/github.com/prometheus/client_golang/prometheus#example-Collector
Essentially, you would implement the Collector interface, which contains two methods: Describe and Collect.
Describe simply sends descriptions (*prometheus.Desc) of the possible metrics of your Collector over the given channel. These include the metric name, label names, and help string.
Collect creates actual metrics that match the descriptions from Describe and populates them with data. So in your case, it would GET the JSON from your service, unmarshal it, and write the values into the relevant metrics.
In your main function, you then have to register your collector and start the HTTP server, like this:
// Register the custom collector, then expose all registered metrics at /metrics.
prometheus.MustRegister(NewCustomCollector())
http.Handle("/metrics", promhttp.Handler())
log.Fatal(http.ListenAndServe(":8080", nil))

You mean you want to write an exporter for your own service in Go? The exporters listed on the Prometheus exporters page are all good examples, and many of them are written in Go; you could pick a simple one like the Redis exporter to see how it is implemented.
Basically, what you need to do is:
Define your own Exporter type
Implement the prometheus.Collector interface; poll the JSON data from your service and build metrics from it
Register your Exporter with prometheus.MustRegister
Start an HTTP server and expose a metrics endpoint for Prometheus to scrape

Is there a way to write response data to a file in Karate?

In my Karate tests I need to write response IDs to text files (or any other format, such as JSON). I was wondering whether Karate has any capability to do this; I haven't seen it in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also, the file will be written only to the current "build" directory, which will be target for Maven projects and for the stand-alone JAR.
value can be any data type, and Karate will write the bytes (or plain text) out. There is no built-in support for any other format.
Here is an example.
EDIT: for others coming across this answer in the future, the right thing to do is:
don't write files in the first place; you never need to do this. This question is typically asked by inexperienced folks who think that the only way to "save" a response before validation is to write it to a file. Please don't waste your time: just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; that is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses into the log file (typically target/karate.log) and into the HTML report.
see if karate.write() works for you as per this answer
write a custom Java class (or a JS function that uses the JVM) to do what you want, using Karate's Java interop; see the sketch below this list
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
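For the Java-interop option, a minimal sketch of such a helper could look like the class below; the package and class names are made up for the example, so adapt them (and the target path) to your project.

package demo; // hypothetical package, pick your own

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileUtils {

    // Write the given text to the given path, creating parent directories if needed.
    public static void writeText(String path, String content) throws Exception {
        Path target = Paths.get(path).toAbsolutePath();
        Files.createDirectories(target.getParent());
        Files.write(target, content.getBytes(StandardCharsets.UTF_8));
    }
}

In a feature file you would then call it via Java interop, for example: * def FileUtils = Java.type('demo.FileUtils') followed by * eval FileUtils.writeText('target/response.json', karate.pretty(response)) (assuming a JSON response).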
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint where the upstream system sends some basic data as a JSON payload via POST/PUT, Karate constructs the subsequent payload file and stores it in a specific folder, and this newly created payload file is then exposed through another GET call.

How to write an object to the GCP object store with x-goog-if-generation-match from a Cloud Function

I'd like to write an object to the GCP object store while using the x-goog-if-generation-match feature. Using the @google-cloud/storage npm library, the File object does not seem to have an option for setting the required object generation.
What are the alternatives?
As you noticed, the @google-cloud/storage npm library doesn't support generation and metageneration preconditions.
As an alternative, you may use either the Storage XML API or the Storage JSON API, which do support them. Depending on which one you use, you'll be able to set preconditions via HTTP headers or query string parameters. You'll find the whole list of those here.
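For illustration, here is a minimal Java sketch of the header-based approach against the XML API (so not the Node library from the question); the bucket, object name, payload, and the way you obtain the OAuth access token are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GenerationMatchUpload {

    public static void main(String[] args) throws Exception {
        String bucket = "my-bucket";            // placeholder
        String object = "my-object.json";       // placeholder
        long expectedGeneration = 1234567890L;  // the generation you read earlier; 0 means "object must not exist"
        String accessToken = "REPLACE_ME";      // obtain via your auth library; placeholder

        // XML API upload: PUT https://storage.googleapis.com/<bucket>/<object>
        // The x-goog-if-generation-match header makes the write conditional on
        // the object's current generation.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://storage.googleapis.com/" + bucket + "/" + object))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .header("x-goog-if-generation-match", Long.toString(expectedGeneration))
                .PUT(HttpRequest.BodyPublishers.ofString("{\"hello\":\"world\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 412) {
            // Precondition failed: the generation changed since you read it, so re-read and retry.
            System.out.println("Generation mismatch, re-read the object and try again");
        } else {
            System.out.println("Upload status: " + response.statusCode());
        }
    }
}

With the JSON API, the equivalent is the ifGenerationMatch query string parameter.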
Another alternative is to use some kind of optimistic locking:
get the generation id
write object
get the generation id again
repeat until the generation after the write is the one created by your own write (note that GCS generation numbers increase but are not sequential integers, so you cannot literally check for "generation before + 1"; compare against the generation returned by the write instead)

How to force Search and Promote to return JSON format instead of XML?

In our application we send requests to S&P like this:
http://SearchAndPromoteUrl/?parameter1=1&parameter2=2
My current understanding is that S&P is an external cloud service. Sometimes we collect statistics and send them to the server; the server analyzes our feed, and we can then get information from this S&P server.
1. Where can I find the valid parameters that go after http://SearchAndPromoteUrl/?
2. What do I need to do to get JSON instead of XML? (The S&P server should return JSON; right now ours returns XML.)
This article says that it is possible.
Related information: http://microsite.omniture.com/t2/help/en_US/snp/8.15.0/SPguide.pdf
First I'd suggest using the updated documentation - https://marketing.adobe.com/resources/help/en_US/snp/8.16.0/c_getting_started.html
The recommended way to integrate with S&P is to use the search form they provide. This form takes care of the request and parameters for you.
UPDATE: Here are more details: https://marketing.adobe.com/resources/help/en_US/snp/t_copying_the_html_code_of_the_search_form_into_the_pages_of_your_website.html
Yes, you can use JSON instead of the standard XML response, and it's quite simple: just write your own presentation template using JSON instead of XML (see https://marketing.adobe.com/resources/help/en_US/snp/8.16.0/c_about_templates.html)
Have a nice day.

wso2 API Manager and BAM - How to control API invocation?

How can I retrieve the number of API invocations? I know the data has to be somewhere, because WSO2 BAM shows pie charts with similar data.
I would like to get that number in a mediation sequence; is that possible? Might this be achieved via a DB lookup?
The way API usage monitoring in WSO2 API Manager works is that there is an API handler (org.wso2.carbon.apimgt.usage.publisher.APIUsageHandler) that gets invoked for each request and response passing through the API gateway. In this handler, all pertinent information with regard to API usage is published to the WSO2 BAM server. The WSO2 BAM server persists this data in the Cassandra database that is shipped with it. Then there is a BAM Toolbox, packaged with the required analytic scripts written in Apache Hive, that can be installed on the BAM server. These scripts summarize the data periodically and persist the summarized data to an SQL database. The graphs and charts shown in the API Publisher web application are created from the summarized data in that SQL database.
Now, if what you require is extractable from these summarized SQL tables, then I suppose the process is very straightforward. You could use the DBLookup mediator for this. But if some dimension of the data which you need has been lost in the summarizing, then you will have a little more work to do.
You have two options.
The easiest approach, which involves no coding at all, would be to write a custom Hive script that suits your requirement and summarizes the data to an SQL table. Then, as before, use a DBLookup mediator to read the data. You can look at the existing Hive scripts shipped with the product to get an idea of how they are written.
If you don't want BAM in the picture, you can still do it with minimal coding, as follows. The implementation class which performs the publishing is org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher. This class implements the interface org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataPublisher. The interface has three instance methods, as follows.
public void init()
public void publishEvent(RequestPublisherDTO requestPublisherDTO)
public void publishEvent(ResponsePublisherDTO responsePublisherDTO)
The init() method runs just once, during server startup. This is where you can add all the logic needed to bootstrap your class.
publishEvent(RequestPublisherDTO) is where you publish request events and publishEvent(ResponsePublisherDTO) is where you publish response events. The DTO objects are encapsulated representations of the request and response data, respectively.
What you will have to do is write a new implementation of this interface and configure it as the value of the DataPublisherImpl property in api-manager.xml. To make things easier you can simply extend the existing org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher, add your logic for persisting usage data to an SQL database inside init(), publishEvent(RequestPublisherDTO) and publishEvent(ResponsePublisherDTO), and at the end of each method call the respective superclass method.
E.g. the overriding init() will call super.init(). This way you are only adding the code necessary for your requirement, and leaving the BAM stat collection to the superclass.
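A rough sketch of such an implementation could look like the following; the package of the DTO classes and the exact superclass behaviour can vary between API Manager versions, so treat this as an outline rather than drop-in code.

package com.example.apimgt; // hypothetical package

import org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher;
import org.wso2.carbon.apimgt.usage.publisher.dto.RequestPublisherDTO;
import org.wso2.carbon.apimgt.usage.publisher.dto.ResponsePublisherDTO;

public class SqlUsageDataPublisher extends APIMgtUsageDataBridgeDataPublisher {

    @Override
    public void init() {
        // Bootstrap your own resources here, e.g. a JDBC connection pool.
        super.init(); // keep the default BAM publishing working
    }

    @Override
    public void publishEvent(RequestPublisherDTO requestPublisherDTO) {
        // Persist whatever request-side usage data you need to your SQL database.
        super.publishEvent(requestPublisherDTO);
    }

    @Override
    public void publishEvent(ResponsePublisherDTO responsePublisherDTO) {
        // Persist response-side usage data (e.g. response times) to your SQL database.
        super.publishEvent(responsePublisherDTO);
    }
}

You would then set the fully qualified name of this class as the value of the DataPublisherImpl property in api-manager.xml, as described above.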

Writing data to MySQL from Hadoop Reducer

I am experimenting with Hadoop MapReduce and in my tests I am able to store the output of the reducers in HBase. However, I want to write the data to a MySQL database instead of HBase. The mappers would still read their input data from HBase. I have found this, but it requires using MySQL for both input and output, while I need it only for output. Also, the above link uses some deprecated classes from the org.apache.hadoop.mapred package, for which the newer org.apache.hadoop.mapreduce package is now available; however, I have not been able to find any tutorial using the new package so far.
I have found this, but it requires using MySQL for both input and output, while I need it only for output.
The InputFormat (DBInputFormat) is independent of the OutputFormat (DBOutputFormat). It should be possible to read from HBase in the Mapper and write to a DB in the Reducer.
With the new MR API, set Job#setInputFormatClass and Job#setOutputFormatClass; with the old MR API, set JobConf#setInputFormat and JobConf#setOutputFormat, as appropriate for the input/output formats you need. The two formats need not be the same: it would also be possible to read from XML in a Mapper and write to a queue in a Reducer, if required.
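For the output side, a rough sketch of the new-API job setup could look like this; the JDBC driver, connection details, and table/column names are placeholders, and the HBase input configuration is left as you already have it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

public class HBaseToMySqlJob {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // JDBC connection details for the output database (placeholders).
        DBConfiguration.configureDB(conf,
                "com.mysql.jdbc.Driver",
                "jdbc:mysql://dbhost:3306/mydb",
                "user", "password");

        Job job = Job.getInstance(conf, "hbase-to-mysql");
        job.setJarByClass(HBaseToMySqlJob.class);

        // Input side: keep whatever HBase input you already use, e.g.
        // TableMapReduceUtil.initTableMapperJob(...) from the HBase MR API.

        // Output side: write each reducer output key as a row of the
        // (hypothetical) api_stats table, into the listed columns.
        job.setOutputFormatClass(DBOutputFormat.class);
        DBOutputFormat.setOutput(job, "api_stats", "api_name", "hit_count");

        // The output key must implement DBWritable (a sketch of such a record
        // class follows further below); the value is ignored by DBOutputFormat.
        job.setOutputKeyClass(ApiStatRecord.class);
        job.setOutputValueClass(NullWritable.class);
        job.setReducerClass(ApiStatReducer.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}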
Also, the above link uses some deprecated classes from the org.apache.hadoop.mapred package, for which the newer org.apache.hadoop.mapreduce package is now available; however, I have not been able to find any tutorial using the new package so far.
If you are comfortable with the old API, then go ahead and use it. There is not much difference in functionality between the new and the old API. There are two versions of DBInputFormat (and DBOutputFormat), one for the old and one for the new API. Make sure you don't mix the old/new formats with the old/new MR API.
Here is a tutorial on the new API.
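For completeness, here is a rough sketch of the record type and reducer for the output side with the new API; DBOutputFormat writes the reducer's output key to the database and ignores the value, and the table/column names are the same invented ones as above.

import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// Record for the hypothetical api_stats(api_name, hit_count) table.
public class ApiStatRecord implements DBWritable {

    private String apiName;
    private long hitCount;

    public ApiStatRecord() {
    }

    public ApiStatRecord(String apiName, long hitCount) {
        this.apiName = apiName;
        this.hitCount = hitCount;
    }

    @Override
    public void write(PreparedStatement statement) throws SQLException {
        // Parameter order must match the field names passed to DBOutputFormat.setOutput(...).
        statement.setString(1, apiName);
        statement.setLong(2, hitCount);
    }

    @Override
    public void readFields(ResultSet resultSet) throws SQLException {
        // Only used if the same class is ever read back through DBInputFormat.
        apiName = resultSet.getString(1);
        hitCount = resultSet.getLong(2);
    }
}

// Reducer that sums counts per key and emits one row per key; adapt the input
// key/value types to whatever your HBase mapper produces.
class ApiStatReducer extends Reducer<Text, LongWritable, ApiStatRecord, NullWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long total = 0;
        for (LongWritable value : values) {
            total += value.get();
        }
        context.write(new ApiStatRecord(key.toString(), total), NullWritable.get());
    }
}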