Alert on change in /etc/passwd (or any other file) from Dynatrace

We are using Dynatrace to monitor all our infrastructure, and we want to monitor some specific files on our servers (e.g. /etc/passwd), but there is no specific monitoring for that. The Dynatrace agent is running on all our servers.
Does anyone know how to achieve this or has implemented some solution for this?
Thanks.

As per my understanding, the answer is no. Dynatrace supports custom plugins in Python, and I have written a few, but they are executed once per minute to send metrics.
I don't think writing a custom plugin would be a good fit for this use case, though.
From the triggers perspective, you can go to Settings > "Anomaly Detection" and check whether there is an option there, but I am quite sure there is no such configuration.
You can raise an RFE for this -- it is actually a good requirement, not just from the /etc/passwd perspective but for other things that could be monitored the same way.
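If you did go the per-minute custom plugin route, the detection part is straightforward: hash the file each run and compare against the previous digest. This is only a generic sketch of that check (the state-file location is illustrative, and reporting the result into Dynatrace would still need the actual plugin SDK, which is not shown):

```python
import hashlib
import json
import os

STATE_FILE = "file_watch_state.json"  # illustrative location for the baseline

def file_digest(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_changed(path, state_file=STATE_FILE):
    """Compare the file's current digest to the last recorded one.

    Returns True if the file changed since the previous check. On the
    very first check there is no baseline yet, so it records one and
    returns False.
    """
    state = {}
    if os.path.exists(state_file):
        with open(state_file) as f:
            state = json.load(f)
    current = file_digest(path)
    previous = state.get(path)
    state[path] = current
    with open(state_file, "w") as f:
        json.dump(state, f)
    return previous is not None and previous != current
```

Run once per minute, a True result is the point where the plugin would raise its metric/event.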

Related

Configure on-call rotation with Zabbix

Is it possible to create a rotation of on-call personnel with Zabbix?
What I want to achieve is that emails with issues only go to a specific person during one week, and that this changes automatically when the week ends.
Thanks
That can not be done with the built-in capabilities. You would have to use external scripts as Zabbix alertscripts, or script things using the Zabbix API.
There is a feature request in Zabbix to do exactly this.
https://support.zabbix.com/browse/ZBXNEXT-537
Until then, you could take a look at "iLert" or "OpsGenie".
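The external-script route mentioned above can be quite small. A hedged sketch of the rotation logic an alertscript could use (the roster addresses are made up; actually reassigning Zabbix media or user groups would go through the Zabbix API, which is not shown here):

```python
import datetime

# Hypothetical weekly roster: index into it by ISO week number, so the
# on-call recipient flips automatically when the week ends.
ROSTER = ["alice@example.com", "bob@example.com", "carol@example.com"]

def on_call(today=None, roster=ROSTER):
    """Return the address that should receive alerts this week."""
    today = today or datetime.date.today()
    week = today.isocalendar()[1]
    return roster[week % len(roster)]
```

An alertscript would call `on_call()` and forward the alert body to that address only.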

What is the recommended way to watch for changes in a Couchbase document?

I want to use Couchbase but I want to implement change tracking in a few areas similar to the way RethinkDB does it.
There appear to be a handful of ways to have changes pushed to me from a Couchbase server:
DCP
TAP
XDCR
Which one is the correct choice, or is there a better method?
UPDATE
Thanks @Kirk! It looks like DCP does not have a 100% production-ready API today (5/19/2015). Your blog reference helped me decide to use XDCR today and migrate to DCP as soon as an official API is ready.
For XDCR this GitHub Repo has been helpful.
Right now the only fully supported way is XDCR as Kirk mentioned already. If you want to save time implementing it, you might want to base your code on this: https://github.com/couchbaselabs/couchbase-capi-server - it implements server side of the XDCR protocol (v1). The ElasticSearch plugin is based on this CAPI server, for example. XDCR is a good choice if your application is a server/service that can wait for incoming connections, so Couchbase (or the administrator) controls how and when Couchbase replicates data to your service.
Depending on what you want to accomplish, DCP might end up being a better choice later, because it's conceptually different from XDCR. Any DCP-based solution would be pull-based (from your code's side), so you have more fine-grained, programmatical, control over how and when to connect to a Couchbase bucket, and how to distribute your connections across different processes if necessary. For a more in-depth example of using DCP, take a look at the Couchbase-Kafka connector here: https://github.com/couchbase/couchbase-kafka-connector
DCP is the proper choice for this if how it works fits your use case and you can write an application to consume the stream as there is no official API...yet. Here is a blog post about doing this in java by one of the Couchbase Solutions Engineers, http://nosqlgeek.blogspot.de/2015/05/dcp-magic.html
TAP is basically deprecated at this point. It is still in the product, but DCP is far superior to it in almost every way.
XDCR could be used, as it uses DCP, but you'd have to write a plug-in for XDCR. So you'd be better off writing one directly to consume the DCP stream.
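If you do take the XDCR route, the receiving side is essentially a CouchDB-compatible (CAPI) HTTP endpoint that Couchbase pushes document batches to. This is only a sketch of the `_bulk_docs` data path (a real target, such as couchbase-capi-server, must also implement `_pre_replicate`, `_revs_diff`, and the rest of the protocol):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # docs pushed to us by the (hypothetical) replicator

class CapiSketchHandler(BaseHTTPRequestHandler):
    """Sketch of one endpoint of a CAPI-style XDCR target."""

    def do_POST(self):
        if self.path.endswith("/_bulk_docs"):
            length = int(self.headers["Content-Length"])
            body = json.loads(self.rfile.read(length))
            received.extend(body.get("docs", []))
            self.send_response(201)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b"[]")  # per-doc results, empty in the sketch
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def start_server():
    """Bind to an ephemeral port and serve in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), CapiSketchHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def push_docs(port, docs):
    """Imitate the replicator pushing one batch of docs."""
    req = urllib.request.Request(
        "http://127.0.0.1:%d/default/_bulk_docs" % port,
        data=json.dumps({"docs": docs}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The appeal of this model is exactly what the answer above describes: Couchbase controls when and how it connects, and your service just waits for batches.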

Asterisk - Using realtime to store extension contexts

I would like to be able to set up an Asterisk service where users can self-register and create their own numbers. I was hoping to use extension contexts to achieve the actual partitioning of accounts. However, the only way I can see to do this is by editing the extensions.conf file and manually restarting the service.
Does anybody have any suggestions on how to achieve this by using Realtime? I have seen various patches, etc but they are all very old and never made it into a stable release.
There is no need for any patches.
All of that changed in the Asterisk core as of version 1.4.
http://www.voip-info.org/wiki/view/Asterisk+RealTime+Extensions
But that will not work very efficiently.
Very likely you need to hire an expert who can do it correctly via DB lookups in the dialplan.
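One common way to get dialplan decisions out of flat files, short of Realtime, is an AGI script backed by a database. A rough sketch of the lookup side (the routing table and all names here are hypothetical; in practice `route()` would query the same tables your self-registration site writes to):

```python
# Hypothetical per-user routing table; in practice this would be a DB
# query against the tables your registration front-end maintains.
EXTENSIONS = {
    "1001": "SIP/alice",
    "1002": "SIP/bob",
}

def parse_agi_env(lines):
    """Parse the 'agi_var: value' header block Asterisk sends on stdin,
    which ends at the first blank line."""
    env = {}
    for line in lines:
        line = line.strip()
        if not line:
            break
        key, _, value = line.partition(": ")
        env[key] = value
    return env

def route(env, table=EXTENSIONS):
    """Return the dial target for the requested extension, if any."""
    return table.get(env.get("agi_extension"))
```

From extensions.conf you would invoke something like `AGI(route.py)` and have the script set a channel variable to dial; adding a new user then only touches the database, with no reload of extensions.conf.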

Adding centralized configuration to our servers

As our systems grow, there are more and more servers and services (different types, and multiple instances of the same type that require minor config changes). We are looking for a "centralized configuration" solution, preferably an existing one and nothing we need to develop from scratch.
The idea is something like this: a service goes up, it knows a single piece of data (its type+location+version+serviceID or something like that) and contacts some central service that will give it its proper config (file, object or whatever).
If the service that goes online can't find the config service, it will either use a cached config or refuse to initialize (the behavior should probably be specified in the startup parameters it gets from whatever is bringing it online).
The config service should be highly available, i.e. a cluster of servers (ZooKeeper keeps sounding like a perfect candidate).
The service should preferably support the concept of inheritance, allowing a global configuration file for the type of service and then specific overrides or extensions for each instance of the service by its ID. It should also support something like config versioning, keeping different configurations of the same service type for different versions, since we want to rely more and more on side-by-side rollout of services.
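The inheritance requirement can be modeled as layered merging: global defaults for the service type, then version-level and instance-level overrides applied on top. A minimal sketch of that resolution step (layer names are illustrative):

```python
def deep_merge(base, override):
    """Recursively merge override into base, returning a new dict.
    Nested dicts are merged key by key; anything else is replaced."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def resolve_config(layers):
    """Apply layers in order, e.g. global -> type -> version -> instance."""
    config = {}
    for layer in layers:
        config = deep_merge(config, layer)
    return config
```

A service asking for its config would have the central service run something like `resolve_config([global_cfg, type_cfg, version_cfg, instance_cfg])` and hand back the result.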
The other side of the equation is that there is a config admin tool that connects to the same centralized config service, and can review and update all the configurations based on the requirements above.
I know that if I modify the core requirement from the service pulling config data to having the data pushed to it, I can use something like Puppet or Chef to manage everything. To be honest, I have little experience with these two systems (our IT team has more), but from my investigation they did not seem to be the right tools for this job.
Are there any systems similar to the one I describe above that anyone has integrated with?
I've only had experience with home-grown solutions, so my answer may not solve your issue but may help someone else. We've used web servers and SVN robots quite successfully for configuration management. This solution would not mean that you have to "develop from scratch", but it is not a turn-key solution either.
We had multiple web servers, each refreshing its configurations from an SVN repository on a synchronized per-minute basis. The clients would make requests to the servers with /type=...&location=...&version=... style HTTP arguments. Those values could then be used in the views when necessary to customize the configurations. We did this both with Spring XML files that were being reloaded live and with standard field=value property files.
Our system was pull-only, although we could trigger a pull via JMX if necessary.
Hope this helps somewhat.
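The request pattern described above can be sketched as a tiny client helper (the base URL and parameter names are illustrative, matching the ones in the answer):

```python
from urllib.parse import urlencode

def config_url(base, service_type, location, version):
    """Build the type/location/version query URL a client would fetch
    its rendered configuration from."""
    query = urlencode({
        "type": service_type,
        "location": location,
        "version": version,
    })
    return "%s?%s" % (base, query)
```

The server side would use those parameters to select and template the right configuration view before returning it.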
Config4* (of which I am the maintainer) can provide you with most of the capabilities you are looking for out-of-the-box, and I suspect you could easily build the remaining capabilities on top of it.
Read Chapters 2 and 3 of the "Getting Started" manual to get a feel for Config4*'s capabilities (don't worry, they are very short chapters). Doing that should help you decide how well Config4* meets your needs.
You can find links to PDF and HTML versions of the manuals near the end of the main page of the Config4* website.

What is the experience with Google 'Omaha' (their auto-update engine for Chrome)?

Google has open-sourced the auto update mechanism used in Google Chrome as Omaha.
It seems quite complicated and difficult to configure for anybody who isn't Google. What is the experience using Omaha in projects? Can it be recommended?
We use Omaha for our products. Initially there was quite a bit of work to change hardcoded URLs and strings. We also had to implement the server ourselves, because there was not yet an open source implementation. Today, I would use omaha-server.
There are no regrets with ditching our old client update solution and going with Omaha.
Perhaps you can leverage the Courgette algorithm, which is the update mechanism used in Google Chrome. It is really easy to use and apply to your infrastructure. Currently it only works for Windows operating systems: Windows users of Chrome receive updates as small diffs, unlike Mac and Linux users, who still download the full package.
You can find the source code here in the Chromium SVN repository. It is a differential-compression algorithm for applying small updates to Google Chrome instead of sending the whole distribution every time. Rather than pushing the whole 10 MB to the user, you can push just the diff of the changes.
More information on how Courgette works can be found here and the official blog post about it here.
It works like this:
server:
    hint = make_hint(original, update)
    guess = make_guess(original, hint)
    diff = bsdiff(concat(original, guess), update)
    transmit hint, diff
client:
    receive hint, diff
    guess = make_guess(original, hint)
    update = bspatch(concat(original, guess), diff)
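The core idea above (ship a small delta the client applies to the version it already has) can be imitated with the standard library's preset-dictionary support in zlib. To be clear, this is only a crude stand-in: real Courgette first disassembles the executables so that shifted addresses diff well, and uses bsdiff underneath.

```python
import zlib

def make_diff(original, update):
    """Delta-encode update against original by compressing it with the
    original as a preset dictionary, so shared byte runs cost almost
    nothing. A crude stand-in for bsdiff, not the real algorithm."""
    compressor = zlib.compressobj(zdict=original)
    return compressor.compress(update) + compressor.flush()

def apply_diff(original, diff):
    """Reconstruct the update on the client from original + diff,
    using the same preset dictionary."""
    decompressor = zlib.decompressobj(zdict=original)
    return decompressor.decompress(diff)
```

The server sends only `make_diff(original, update)`; the client, which already has `original`, rebuilds `update` locally.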
When you check out the source, you can compile it as an executable (right-click > Compile in Visual Studio) and use the application in that form for testing:
Usage:
courgette -dis <executable_file> <binary_assembly_file>
courgette -asm <binary_assembly_file> <executable_file>
courgette -disadj <executable_file> <reference> <binary_assembly_file>
courgette -gen <v1> <v2> <patch>
courgette -apply <v1> <patch> <v2>
Or you can include it within your application and do the updates from there. You can imitate the Omaha auto-update environment by creating your own service that periodically checks for updates and runs Courgette.
I've been using Omaha in various projects since 2016. The projects had between a handful and millions of update clients. Target operating systems were mostly Windows, but also some Linux devices and (via Sparkle) macOS.
Omaha is difficult to set up because it requires you to edit Google's C++ implementation. You also need a corresponding server. The standard implementation is omaha-server and does not come from Google. However, in return it also supports Sparkle for automatic updates on Mac (hence why I mentioned Sparkle above).
While setting up the above components is difficult, once they are configured they work extremely well. This is perhaps not surprising given that Google uses Omaha to update millions (billions?) of devices.
To help others get started with Omaha, I wrote a tutorial that gives a quick overview of how it works.
UPDATE
Customizing Google Omaha isn't that easy, especially if you have no knowledge of C++, Python or COM.
Updates aren't published that frequently
crystalnix/omaha is managed by the community, and they try to merge the main repo into theirs; additional features are implemented and basic things are fixed
google/omaha is more active and changes from Google are added, but not frequently
To implement manual updates in any language, you can use the COM classes
Summary
Google Omaha is still alive, but in a lazy way
bugs are fixed, but do not expect hotfixes
Google Omaha fits Windows client apps, supported from Windows Vista upwards
the server side I'm using also supports Sparkle for cross-platform support
feedback and crashes are also supported on the server
feedback is sent with Google protocol buffers
crash handling is done with Breakpad
I personally would go for Google Omaha instead of implementing my own solution. However, we will discuss this internally.
In the .NET world you might want to take a look at ClickOnce deployment.
An auto-update mechanism is something I'd personally code myself, and always have in the past. Unless you have a multi-gigabyte application and want to upload only bits and pieces, just rely on your own code/installer. That said, I've not looked at Google's open-source library at all and didn't even know it existed. I can't imagine it offering anything superior to what you could code yourself, and with your own code you aren't bound by any licensing restrictions.