Breaking down one large prometheus.yml configuration file?

I am using Prometheus for our monitoring and I have a lot of configs (our prometheus.yml main config file is 8000+ lines long).
I would like to divide it into logical groupings so that it becomes much more readable.
I came to know that Prometheus doesn't support this directly and that people use configuration management systems like Ansible instead.
Has anyone done this with their Prometheus config file? If so, how did you do it?

Assuming you have lots of nodes to scrape with different labels and such, Prometheus supports file-based service discovery, which you can use to organize targets according to your needs. I would go with the following in prometheus.yml:
- job_name: 'dummy' # it's mandatory
  file_sd_configs:
    - files:
        - /etc/prometheus/file_sd/*.json
Each JSON file can then contain a logical grouping of targets.
example.json
[
  {
    "targets": ["host:port"],
    "labels": {
      "job": "job_name",
      "environment": "test_env",
      "service": "test_service"
    }
  }
]
Here is a nice blog post about it: https://www.robustperception.io/using-json-file-service-discovery-with-prometheus
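Since the files entry takes a glob, you can keep one JSON file per logical group and Prometheus will pick up additions and edits to those files without a restart. A sketch with made-up file names and targets:

/etc/prometheus/file_sd/web.json
[
  {
    "targets": ["web-01:9100", "web-02:9100"],
    "labels": { "job": "node", "team": "web", "environment": "prod" }
  }
]

/etc/prometheus/file_sd/db.json
[
  {
    "targets": ["db-01:9104"],
    "labels": { "job": "mysql", "team": "db", "environment": "prod" }
  }
]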


Create REST API without using any backend

How do you create a REST API without using any backend technologies such as Node or Express? I.e. is it possible to create an API using only a client-side framework such as React or Vue, with no server side involved?
I would like to create a very simple REST API which I can host either on GitHub Pages or GitLab Pages.
I would like a live REST API that I can access via my own domain such as http://username.github.io - not a fake one that you can create using json-server or My JSON Server - and that would be able to do the following.
Here is a description of what the API should do.
/quotes.txt - Read the contents of the quotes.txt file and display it
/quotes.json - Read the contents of the quotes.txt file and convert it to JSON format
/random.txt - Read the contents of the quotes.txt file and display a random quote in txt format
/random.json - Read the contents of the quotes.txt file and display a random quote in JSON format
Output of the API
/quotes.txt
To be or not to be that is the question - Author Unknown
All your dreams can come true if you have the courage to pursue them - Walt Disney
Stand up for what you believe in even if you are standing alone -
/quotes.json
[
  {
    "quote": "To be or not to be that is the question",
    "author": "Author Unknown"
  },
  {
    "quote": "All your dreams can come true if you have the courage to pursue them",
    "author": "Walt Disney"
  },
  {
    "quote": "Stand up for what you believe in even if you are standing alone",
    "author": ""
  }
]
/random.txt
All your dreams can come true if you have the courage to pursue them - Walt Disney
/random.json
{
  "quote": "All your dreams can come true if you have the courage to pursue them",
  "author": "Walt Disney"
}
You could use json-server to serve a JSON file. It supports "canonical" ways to express RESTful requests. This tool can be used on a dev machine or on an intranet server. Here's a blog post about how to set it up.
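For example (a quick sketch, assuming json-server is installed from npm and a made-up db.json): a file like

{
  "quotes": [
    { "id": 1, "quote": "To be or not to be that is the question", "author": "Author Unknown" }
  ]
}

started with json-server --watch db.json gives you GET /quotes, GET /quotes/1, POST /quotes and so on, on port 3000 by default.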
My JSON Server is a service that does exactly the same thing for you on the Internet. There are also competing services with similar functionality, e.g. myjson.com. I'm sure you will find more if you search online.
P.S. I'm sure you'll need to do some massaging to actually translate .txt -> .json. I've never seen services that work against .txt files rather than .json ones.
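Since GitHub/GitLab Pages only serve static files, one way to do that massaging is to pre-generate the JSON at build time. A rough sketch in Node, assuming a quotes.txt in the "quote - author" format shown in the question (the file names and the separator handling are my assumptions):

// build-quotes.js - turn quotes.txt into quotes.json before publishing
const fs = require('fs');

const lines = fs.readFileSync('quotes.txt', 'utf8')
  .split('\n')
  .map(line => line.trim())
  .filter(line => line.length > 0);

const quotes = lines.map(line => {
  // split on the last " - "; lines with a trailing "-" and no author get author ""
  const idx = line.lastIndexOf(' - ');
  if (idx === -1) {
    return { quote: line.replace(/\s*-\s*$/, ''), author: '' };
  }
  return {
    quote: line.slice(0, idx).trim(),
    author: line.slice(idx + 3).trim()
  };
});

fs.writeFileSync('quotes.json', JSON.stringify(quotes, null, 2));

The /random.* endpoints can't be truly dynamic on a static host, but picking a random entry from quotes.json in the browser gets the same effect.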
Ugh. I really like question edits that invalidate the answers.

Azure Functions: How do I control development/production/staging app settings?

I've just started experimenting with Azure functions and I'm trying to understand how to control the app settings depending on environment.
In .NET Core you could have appsettings.json, appsettings.Development.json, etc., and as you moved between environments the config would change with you.
However, from looking at the Azure Functions documentation, all I can find is that you can set up config in the Azure portal; I can't see anything about setting up config in the solution itself.
So what is the best way to manage settings per environment?
Thanks in advance :-)
The best way, in my opinion, is using a proper build and release system, like VSTS.
What I've done in one of my solutions is create an ARM template of my Function App and deploy it using a release pipeline with VSTS Release Management.
This way you can just add a value to the template.json, like the one below.
"appSettings": [
// other entries
{
"name": "MyValue",
"value": "[parameters('myValue')]"
}
You will need another file, called parameters.json, which will hold the values. This file looks like this (at the moment).
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name": {},
    "storageName": {},
    "location": {},
    "subscriptionId": {}
  }
}
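To tie that back to the MyValue app setting above: the template declares the parameter and the parameters file (or a VSTS override) supplies the per-environment value. A sketch, reusing the hypothetical myValue name from the snippet above:

In template.json:
"parameters": {
  "myValue": {
    "type": "string"
  }
}

In parameters.json:
"parameters": {
  "myValue": {
    "value": "value-for-this-environment"
  }
}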
Back in VSTS you can just change/override the values of these parameters in the portal.
By using such a workflow you will get a professional CI/CD implementation where no one has to bother themselves with the actual secrets. They are only known to the system administrators.

HTTP ReST: update large collections: better approach than JSON PATCH?

I am designing a web service to regularly receive updates to lists. At this point, a list can still be modeled as a single entity (/lists/myList) or an actual collection with many resources (/lists/myList/entries/<ID>). The lists are large (millions of entries) and the updates are small (often less than 10 changes).
The client will get web service URLs and lists to distribute, e.g.:
http://hostA/service/lists: list1, list2
http://hostB/service/lists: list2, list3
http://hostC/service/lists: list1, list3
It will then push lists and updates as configured. It is likely but undetermined if there is some database behind the web service URLs.
I have been researching and it seems an HTTP PATCH using the JSON Patch format is the best approach.
Context and examples:
Each list has an identifying name, a priority and millions of entries. Each entry has an ID (determined by the client) and several optional attributes. Example to create a list "requiredItems" with priority 1 and two list entries:
PUT /lists/requiredItems
Content-Type: application/json
{
  "priority": 1,
  "entries": {
    "1": {
      "color": "red",
      "validUntil": "2016-06-29T08:45:00Z"
    },
    "2": {
      "country": "US"
    }
  }
}
For updates, the client would first need to know what the list looks like now on the server. For this I would add a property "revision" to the list entity.
Then, I would query this attribute:
GET /lists/requiredItems?property=revision
Then the client would see what needs to change between the revision on the server and the latest revision known by the client and compose a JSON patch. Example:
PATCH /lists/requiredItems
Content-Type: application/json-patch+json
[
  { "op": "test", "path": "/revision", "value": 3 },
  { "op": "add", "path": "/entries/3", "value": { "color": "blue" } },
  { "op": "remove", "path": "/entries/1" },
  { "op": "remove", "path": "/entries/2/country" },
  { "op": "add", "path": "/entries/2/color", "value": "green" },
  { "op": "replace", "path": "/revision", "value": 10 }
]
Questions:
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH. Is there a more compatible approach without sacrificing HTTP compatibility (idempotency et cetera)?
Modelling the individual list entries as separate resources and using PUT and DELETE (perhaps with ETag and/or If-Match) seems an option (PUT /lists/requiredItems/entries/3, DELETE /lists/requiredItems/entries/1, PUT /lists/requiredItems/revision), but how would I make sure all those operations are applied when the network drops in the middle of an update chain? Is an HTTP PATCH allowed to work on multiple resources?
Is there a better way to 'version' the lists, perhaps implicitly also improving how they are updated? Note that the client determines the revision number.
Is it correct to query the revision number with GET /lists/requiredItems?property=revision? Should it be a separate resource like /lists/requiredItems/revision? If it should be a separate resource, how would I update it atomically (i.e. the list and revision are both updated or both not updated)?
Would it work in JSON patch to first test the revision value to be 3 and then update it to 10 in the same patch?
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH.
As far as I can tell, PATCH is really only appropriate if your server is acting like a dumb document store, where the action is literally "please update your copy of the document according to the following description".
So if your resource really just is a JSON document that describes a list with millions of entries, then JSON-Patch is a great answer.
But if you are expecting that the patch will, as a side effect, update an entity in your domain, then I'm suspicious.
Is a HTTP PATCH allowed to work on multiple resources?
RFC 5789
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources
I'm not keen on querying the revision number; it doesn't seem to have any clear advantage over using an ETag/If-Match approach. Some obvious disadvantages - the caches between you and the client don't know that the list and the version number are related; a cache will happily tell a client that version 12 of the list is version 7, or vice versa.
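For illustration, a minimal sketch of that ETag/If-Match flow (the ETag value "3" is just an example; the server decides how it generates ETags):

GET /lists/requiredItems
=> 200 OK, ETag: "3"

PATCH /lists/requiredItems
If-Match: "3"
Content-Type: application/json-patch+json
[ ...operations... ]
=> 200 OK (with a fresh ETag) if the precondition held, or 412 Precondition Failed if the list changed in the meantime, so the client knows to re-fetch and re-diff.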
Answering my own question. My first bullet point may be opinion-based and, as has been pointed out, I've asked many questions in one post. Nevertheless, here's a summary of what was answered by others (VoiceOfUnreason) and my own additional research:
ETags are HTTP's resource 'hashes'. They can be combined with If-Match headers to have a versioning system. However, ETag-headers are normally not used to declare the ETag of a resource that is being created (PUT) or updated (POST/PATCH). The server storing the resource usually determines the ETag. I've not found anything explicitly forbidding this, but many implementations may assume that the server determines the ETag and get confused when it is provided with PUT or PATCH.
A separate revision resource is a valid alternative to ETags for versioning. This resource must be updated at the same time as the resource it is the revision of.
It is not semantically enforceable at the HTTP level to have commit/rollback transactions, unless the transaction itself is modelled as a ReST resource, which would make things much more complicated.
However, some properties of PATCH allow it to be used for this:
An HTTP PATCH must be atomic and can operate on multiple resources. RFC 5789:
The server MUST apply the entire set of changes atomically and never provide (e.g., in response to a GET during this operation) a partially modified representation. If the entire patch document cannot be successfully applied, then the server MUST NOT apply any of the changes.
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH. PATCH is neither safe nor idempotent
JSON PATCH can consist of multiple operations on multiple resources and all must be applied or none must be applied, making it an implicit transaction. RFC 6902: Operations are applied sequentially in the order they appear in the array.
Thus, the revision can be modeled as a separate resource and still be updated at the same time. Querying the current revision is a simple GET. Committing a transaction is a single PATCH request containing first a test of the revision, then the operations on the resource(s) and finally the operation to update the revision resource.
The server can still choose to publish the revision as ETag of the main resource.

How to expose OpenShift environment variables in a JSON config file

I have installed node-push-server. The configuration is loaded from a JSON file like this:
{
  "webPort": 8000,
  "mongodbUrl": "mongodb://username:password@localhost/database",
  "gcm": {
    "apiKey": "YOUR_API_KEY_HERE"
  },
  "apn": {
    "connection": {
      "gateway": "gateway.sandbox.push.apple.com",
      "cert": "/path/to/cert.pem",
      "key": "/path/to/key.pem"
    },
    "feedback": {
      "address": "feedback.sandbox.push.apple.com",
      "cert": "/path/to/cert.pem",
      "key": "/path/to/key.pem",
      "interval": 43200,
      "batchFeedback": true
    }
  }
}
How can I set the environment variables for my application in this JSON file?
I don't think it's possible. You should be able to change all these settings in the code though. For example in node you can do: process.env.OPENSHIFT_VARIABLENAME to read an environment variable.
Example for MongoDB connection string from docs:
//provide a sensible default for local development
mongodb_connection_string = 'mongodb://127.0.0.1:27017/' + db_name;

//take advantage of openshift env vars when available:
if (process.env.OPENSHIFT_MONGODB_DB_URL) {
  mongodb_connection_string = process.env.OPENSHIFT_MONGODB_DB_URL + db_name;
}
As an alternative, there is a quick and easy deployable gear called AeroGear Push that might serve your needs.
Config files can be awkward because including them in your source repo isn't always a good move.
OpenShift deployments are mostly git push-driven, so there are several options for helping you correctly resolve your configs on the server.
Configuring your service using ENV vars is the most common approach, but since this one requires a flat file, you'll need to find a way to update the file with the correct values.
If you know what keys and values are needed, you should be able to write a script that updates the example json, or merges two json objects to produce a flat config file including the strings node-pushserver will expect.
It looks like mongodbUrl, webPort, (and domain?) would need to be populated with OpenShift-provided values (when available). config-multipaas might be able to help with that.
I would probably implement the config bootstrapping / merging work as a build step, allowing you to prep the config file and start node-pushserver in its usual way.
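A sketch of what such a build step could look like in Node (the file names and the appended database name are made up; OPENSHIFT_MONGODB_DB_URL and OPENSHIFT_NODEJS_PORT are the variables the MongoDB and Node.js cartridges provide):

// build_config.js - run from a build/deploy hook, before starting node-pushserver
const fs = require('fs');

// start from the checked-in template (the JSON shown in the question)
const config = JSON.parse(fs.readFileSync('config.template.json', 'utf8'));

// override with OpenShift-provided values when they are available
if (process.env.OPENSHIFT_MONGODB_DB_URL) {
  // the env var ends with "host:port/", so append a database name (assumed here)
  config.mongodbUrl = process.env.OPENSHIFT_MONGODB_DB_URL + 'pushserver';
}
if (process.env.OPENSHIFT_NODEJS_PORT) {
  config.webPort = parseInt(process.env.OPENSHIFT_NODEJS_PORT, 10);
}

// write the flat file that node-pushserver expects to read
fs.writeFileSync('config.json', JSON.stringify(config, null, 2));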

Does anyone know of a webservice for looking up definitions of words that would be able to return results in JSON?

I found http://words.bighugelabs.com/api.php but nothing like this for definitions/dictionary.
Ideally I'd grab a dictionary file and build my own API for this, but this is for a demo and we need something short-term that can be called from within a javascript function.
wiktionary.org provides an API (the standard MediaWiki api.php, which Wikipedia exposes as well). For example:
http://en.wikipedia.org/w/api.php?action=query&list=search&srsearch=Television&format=json
gives back
{
  "query": {
    "searchinfo": { "totalhits": 208862 },
    "search": [
      {
        "ns": 0,
        "title": "Television",
        "snippet": "<span class='searchmatch'>Television<\/span> (TV) is a widely used telecommunication medium for transmitting and receiving moving images , either monochromatic (\"black <b>...<\/b> ",
        "size": 28228,
        "wordcount": 3566,
        "timestamp": "2009-10-02T15:09:56Z"
      },
      ...
    ]
  },
  "query-continue": { "search": { "sroffset": 10 } }
}
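Since you need to call this from a JavaScript function on another domain, note that api.php also accepts a callback parameter, so a JSONP-style request works without any server of your own. A small sketch (the callback and variable names are made up):

// JSONP: api.php wraps the JSON response in the named callback function
function handleResults(data) {
  var hits = data.query.search;
  for (var i = 0; i < hits.length; i++) {
    console.log(hits[i].title + ': ' + hits[i].snippet);
  }
}

var script = document.createElement('script');
script.src = 'http://en.wikipedia.org/w/api.php?action=query&list=search' +
  '&srsearch=Television&format=json&callback=handleResults';
document.body.appendChild(script);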
I think this is what you are looking for:
bighugelabs API - JSON format
aonaware services - XML format
Not sure if it would fit your needs, but answers.com has webmaster tools that offer various services, including dictionary lookup. Don't know if any can be called from javascript.
At short notice you could set up a reverse-proxy on your server that lets you AJAX your favorite dictionary website and then 'scrape' the definitions from the document that is returned. It's obviously not a long term solution but for a one time thing, you probably won't get into trouble.
This is a web service that has several dictionaries:
http://services.aonaware.com/DictService/DictService.asmx
P.S. I did not notice the JSON part of your question.