I would like to be able to set up an Asterisk service where users can self-register and create their own numbers. I was hoping to use extension contexts to achieve the actual partitioning of accounts. However, the only way I can see to do this is by editing the extensions.conf file and manually restarting the service.
Does anybody have any suggestions on how to achieve this using Realtime? I have seen various patches, etc., but they are all very old and never made it into a stable release.
There is no need for patches. Realtime extensions have been part of the Asterisk core since version 1.4:
http://www.voip-info.org/wiki/view/Asterisk+RealTime+Extensions
But that will not perform very well. Very likely you will need to hire an expert to do it properly via database lookups in the dialplan.
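For a self-registration flow, your signup code would simply write dialplan rows into the Realtime table. Below is a minimal sketch of the idea, assuming the conventional Realtime extensions columns (context, exten, priority, app, appdata) described on the voip-info page; the table and column names must match whatever you map in extconfig.conf, and sqlite3 is used here only to keep the sketch self-contained (Asterisk itself would read the table via ODBC/MySQL):

import sqlite3

def register_user(db, user_id, number):
    # Give each user their own context so accounts stay partitioned.
    context = "user-%s" % user_id
    rows = [
        (context, number, 1, "Dial", "SIP/%s" % user_id),
        (context, number, 2, "Hangup", ""),
    ]
    db.executemany(
        "INSERT INTO extensions (context, exten, priority, app, appdata) "
        "VALUES (?, ?, ?, ?, ?)", rows)
    db.commit()

db = sqlite3.connect("dialplan.db")  # stand-in for your real database
db.execute("CREATE TABLE IF NOT EXISTS extensions "
           "(context TEXT, exten TEXT, priority INTEGER, app TEXT, appdata TEXT)")
register_user(db, "alice", "1001")

Because Realtime looks the rows up per call, new numbers become dialable without editing extensions.conf or restarting the service.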
We are using Dynatrace to monitor all our infrastructure, and we want to monitor some specific files on our servers (e.g. /etc/passwd), but there is no specific monitoring for that. The Dynatrace agent is running on all our servers.
Does anyone know how to achieve this, or has anyone implemented a solution for it?
Thanks.
As per my understanding, the answer would be no. Dynatrace supports custom plugins in Python, and I have written a few, but they are executed once per minute to send metrics.
That said, I don't think writing a custom plugin would be a good fit for this use case.
From the triggering perspective, you can go to Settings > Anomaly Detection and check whether there is an option, but I am quite sure there is no such option for this kind of configuration.
You can raise an RFE for this -- it is actually a good requirement, not just from the /etc/passwd perspective but for other files that could be monitored.
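If you did go the custom-plugin route anyway, the core of it would just be a per-minute checksum comparison. Here is a minimal, Dynatrace-agnostic sketch (the state-file path is illustrative, and wiring the result into the plugin SDK as a metric or event is left out):

import hashlib
import json

STATE_FILE = "/var/tmp/file_watch_state.json"  # illustrative location
WATCHED = ["/etc/passwd"]

def file_digest(path):
    # Hash the file contents so any modification is detectable.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def changed_files():
    try:
        with open(STATE_FILE) as f:
            old = json.load(f)
    except (OSError, ValueError):
        old = {}
    current = {path: file_digest(path) for path in WATCHED}
    with open(STATE_FILE, "w") as f:
        json.dump(current, f)
    # Files whose digest differs from the previous run.
    return [p for p in WATCHED if p in old and old[p] != current[p]]

print(changed_files())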
I'm playing around in .NET Core and attempting to make use of the user secret store; some details are here: https://docs.asp.net/en/latest/security/app-secrets.html
I'm getting along with it well enough when working locally, but I'm having trouble understanding how this could be used effectively in a team environment, or if I wanted to work on this project from more than one computer.
The store itself (at least by default) keeps its configuration JSON file within the user's AppData folder (on Windows). This feature is good if you're uploading the project to GitHub and want to hide your API keys, connection strings, etc. This is all great when it's just me, on one machine, working on a project. But how does this work in a team environment, or on multiple machines? The only thing I can think of is to find the configuration file, check it into a private repo, and make sure to replace it in the correct directory when changes occur.
Is there another way to manage this that I'm not aware of?
As you already know, the Secret Manager tool provides another method to avoid checking sensitive data into source control by adding this layer of control.
So, where should we store sensitive configuration instead? The location should obviously be separate from your source code and, more importantly, secure. It could be in a separate private repository, protected fileshare, document management system, etc.
Rather than finding and sharing the exact configuration file, however, I would suggest keeping a script (e.g. .bat file) that you would run on each machine to set your secrets. For example:
dotnet user-secrets set MySecret1 ValueOfMySecret1 --project c:\work\WebApp1
dotnet user-secrets set MySecret2 ValueOfMySecret2 --project c:\work\WebApp1
This would be more portable between machines and avoid the hassle of knowing where to find and copy the config files themselves.
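If you need to check what a given machine currently has, the same tool can also list the stored secrets:

dotnet user-secrets list --project c:\work\WebApp1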
Also, for these settings, consider whether you need them to be the same across all developers in your team. For local development, I would normally want to have control to install, use, and name resources differently than others in my team. Of course, this depends on your situation and preferences, and I see reasons to share them too.
I've been playing with IBM Bluemix (liking it a lot so far) and we are considering using it for production. What I'm not totally clear on is what happens when runtime environments or services get updated. I assume this happens quite frequently.
Will the new version always be backward compatible? If so, is this guaranteed somewhere in the terms of service?
What I am trying to avoid is to put production code on the platform and then having to update it constantly (or having it break) due to runtime or service updates.
Does anyone have any experience? Have past updates always been backward compatible?
Mark
While I don't believe there is a guarantee that the buildpacks will always be backwards compatible, you will always be able to select the previous buildpack version.
Try running the 'cf buildpacks' command and have a look at the buildpack names and the version info encoded therein; I think you'll see what I mean.
When buildpacks are updated they won't be used for your application until you restage it, so you have some control over when to pick up the updates as well. This gives you a chance to test it on non-production versions of the app.
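For example, a flow like the following (the app name and the versioned buildpack name are placeholders; use whatever 'cf buildpacks' actually lists in your region):

cf buildpacks                            # list buildpacks and the version info in their names
cf push myapp -b liberty-for-java_v2_7   # pin a specific buildpack version (name is illustrative)
cf restage myapp                         # restage when you are ready to pick up an update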
As our systems grow, there are more and more servers and services (different types, plus multiple instances of the same type that require minor config changes). We are looking for a "centralized configuration" solution, preferably an existing one rather than something we need to develop from scratch.
The idea is something like this: a service goes up, knows a single piece of data (its type+location+version+serviceID, or something like that), and contacts some central service that will give it its proper config (file, object, or whatever).
If a service that goes online can't find the config service, it will either use a cached config or refuse to initialize (the behavior should probably be specified in the startup parameters it gets from whoever or whatever is bringing it online).
The config service should be highly available, i.e. a cluster of servers (ZooKeeper keeps sounding like a perfect candidate).
The service should preferably support the concept of inheritance, allowing a global configuration file for the type of service and then specific overrides or extensions for each instance of the service by its ID. Also, it should support something like config versioning, keeping different configurations of the same service type for different versions, since we want to rely more and more on side-by-side rollout of services.
The other side of the equation is that there is a config admin tool that connects to the same centralized config service, and can review and update all the configurations based on the requirements above.
I know that if I change the core requirement from the service pulling config data to having the data pushed to it, I could use something like Puppet or Chef to manage everything. To be honest, I have little experience with these two systems (our IT team has more), but from my investigation they did NOT seem to be the right tools for this job.
Are there any systems similar to the one I describe above that anyone has integrated with?
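To make the pull-plus-inheritance idea concrete, here is a rough sketch of what I imagine the client side doing, assuming ZooKeeper and the Python kazoo client (the znode layout and the merge rule are just illustrative):

from kazoo.client import KazooClient
import json

def fetch_config(zk, service_type, version, instance_id):
    # Read the global config for this type+version, then overlay the
    # per-instance overrides -- a simple two-level inheritance.
    base = "/config/%s/%s" % (service_type, version)
    data, _ = zk.get(base + "/global")
    config = json.loads(data.decode("utf-8"))
    override = "%s/instances/%s" % (base, instance_id)
    if zk.exists(override):
        data, _ = zk.get(override)
        config.update(json.loads(data.decode("utf-8")))
    return config

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")  # the HA cluster
zk.start()
print(fetch_config(zk, "billing", "2.1", "billing-7"))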
I've only had experience with home-grown solutions, so my answer may not solve your issue but may help someone else. We've used web servers and SVN robots quite successfully for configuration management. This solution would not mean that you have to "develop from scratch", but it is not a turn-key solution either.
We had multiple web servers, each refreshing its configurations from an SVN repository on a synchronized per-minute basis. The clients would make requests to the servers with /type=...&location=...&version=... style HTTP arguments. Those values could then be used in the views when necessary to customize the configurations. We did this both with Spring XML files that were being reloaded live and with standard field=value property files.
Our system was pull-only, although we could trigger a pull via JMX if necessary.
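Translated to Python with the requests library, the client side was roughly this (host and path are illustrative; the parameter names match the URL scheme above):

import requests

params = {"type": "billing", "location": "us-east", "version": "2.1"}
resp = requests.get("http://config.internal/config", params=params)
resp.raise_for_status()
config_text = resp.text  # Spring XML or field=value properties, as described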
Hope this helps somewhat.
Config4* (of which I am the maintainer) can provide you with most of the capabilities you are looking for out-of-the-box, and I suspect you could easily build the remaining capabilities on top of it.
Read Chapters 2 and 3 of the "Getting Started" manual to get a feel for Config4*'s capabilities (don't worry, they are very short chapters). Doing that should help you decide how well Config4* meets your needs.
You can find links to PDF and HTML versions of the manuals near the end of the main page of the Config4* website.
Google has open-sourced the auto update mechanism used in Google Chrome as Omaha.
It seems quite complicated and difficult to configure for anybody who isn't Google. What is the experience using Omaha in projects? Can it be recommended?
We use Omaha for our products. Initially there was quite a bit of work to change hardcoded URLs and strings. We also had to implement the server ourselves, because there was not yet an open source implementation. Today, I would use omaha-server.
There are no regrets with ditching our old client update solution and going with Omaha.
Perhaps you can leverage the Courgette algorithm, which is the update mechanism used in Google Chrome. It is really easy to use and apply to your infrastructure. Currently it only works for Windows operating systems. Windows users of Chrome receive updates in small chunks, unlike Mac and Linux users, who still receive the full-size download.
You can find the source code here in the Chromium SVN repository. It is a compression algorithm used to apply small updates to Google Chrome instead of shipping the whole distribution every time. Rather than push the whole 10 MB to the user, you can push just the diff of the changes.
More information on how Courgette works can be found here and the official blog post about it here.
It works like this:
server:
hint = make_hint(original, update)
guess = make_guess(original, hint)
diff = bsdiff(concat(original, guess), update)
transmit hint, diff
client:
receive hint, diff
guess = make_guess(original, hint)
update = bspatch(concat(original, guess), diff)
When you check out the source, you can compile it as an executable (right-click > Compile in Visual Studio) and use the application in that form for testing:
Usage:
courgette -dis <executable_file> <binary_assembly_file>
courgette -asm <binary_assembly_file> <executable_file>
courgette -disadj <executable_file> <reference> <binary_assembly_file>
courgette -gen <v1> <v2> <patch>
courgette -apply <v1> <patch> <v2>
Or, you can include it within your application and do the updates from there. You can imitate the Omaha auto-update environment by creating your own service that periodically checks for updates and runs Courgette.
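A minimal sketch of such a service, assuming a hypothetical JSON endpoint that advertises the latest version and a patch URL (all names, URLs, and paths here are illustrative):

import json
import subprocess
import urllib.request

CURRENT_VERSION = "1.2.3"  # version of the installed app
VERSION_URL = "https://updates.example.com/latest.json"  # hypothetical endpoint
APP_EXE = r"C:\Program Files\MyApp\app.exe"
NEW_EXE = r"C:\Program Files\MyApp\app.new.exe"

def check_and_apply():
    # Ask the update service what the latest version is.
    meta = json.load(urllib.request.urlopen(VERSION_URL))
    if meta["version"] == CURRENT_VERSION:
        return  # already up to date
    # Download the Courgette patch and rebuild the new binary from the old one:
    # courgette -apply <v1> <patch> <v2>
    patch_file, _ = urllib.request.urlretrieve(meta["patch_url"])
    subprocess.run(["courgette", "-apply", APP_EXE, patch_file, NEW_EXE], check=True)
    # Swapping NEW_EXE into place on the next restart is left out here.

check_and_apply()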
I've been using Omaha in various projects since 2016. The projects had between a handful and millions of update clients. Target operating systems were mostly Windows, but also some Linux devices and (via Sparkle) macOS.
Omaha is difficult to set up because it requires you to edit Google's C++ implementation. You also need a corresponding server. The standard implementation is omaha-server, which does not come from Google; in return, however, it also supports Sparkle for automatic updates on Mac (hence why I mentioned Sparkle above).
While setting up the above components is difficult, once they are configured they work extremely well. This is perhaps not surprising, given that Google uses Omaha to update millions (billions?) of devices.
To help others get started with Omaha, I wrote a tutorial that gives a quick overview of how it works.
UPDATE
Customizing Google Omaha isn't that easy, especially if you have no knowledge of C++, Python, or COM.
Updates aren't published that frequently.
crystalnix/omaha is managed by the community; they try to merge the main repo into theirs, implement additional features, and fix basic things.
google/omaha is more active, and changes from Google are added, but not frequently.
To implement manual updates in any language, you can use the COM classes.
Summary
Google Omaha is still alive, but moving lazily.
Bugs are fixed, but do not expect hotfixes.
Google Omaha is a good fit for Windows client apps, supporting Windows Vista and upwards.
The server side I'm using also supports Sparkle, for cross-platform support.
Feedback and crash reports are also supported on the server.
Feedback is sent using Google protocol buffers.
Crash handling is done with Breakpad.
I personally would go for Google Omaha instead of implementing my own solution; however, we will discuss this internally.
In the .NET world you might want to take a look at ClickOnce deployment.
An auto-update mechanism is something I'd personally code myself, and always have in the past. Unless you have a multi-gigabyte application and want to upload only bits and pieces, just rely on your own code/installer. That said, I've not looked at Google's open source library at all and didn't even know it existed. I can't imagine it offering anything superior to what you could code yourself, and with your own code you aren't bound by any licensing restrictions.