Monitor changes to settings in app.config

When I use typesafe application settings, I have something like this in my app.exe.config file:
<setting name="IntervalTimeout" serializeAs="String">
<value>10</value>
</setting>
The Settings.Designer.vb does something like this:
<Global.System.Configuration.ApplicationScopedSettingAttribute(), _
 Global.System.Diagnostics.DebuggerNonUserCodeAttribute(), _
 Global.System.Configuration.DefaultSettingValueAttribute("10")> _
Public ReadOnly Property IntervalTimeout() As Integer
    Get
        Return CType(Me("IntervalTimeout"), Integer)
    End Get
End Property
And I can access the setting with something like this:
Settings.IntervalTimeout
What is the best practice for monitoring changes to this setting from the outside? That is, the setting is used by an application that runs as a service, and when somebody opens the corresponding config file and changes IntervalTimeout to, say, 30, the service application gets notified and does whatever I tell it to do to reconfigure itself.
Usually, I would have used something like this during initialization:
AddHandler Settings.PropertyChanged, New PropertyChangedEventHandler(AddressOf Settings_PropertyChanged)
And then implement an event handler that checks the name of the property that has changed and reacts appropriately.
However, the PropertyChanged event only gets fired when the Set method is called on a property, not when somebody changes the data directly within the file. Of course that would have been heaven, but it's understandable that this is not the way it works. ;-)
Most probably, I'd implement a FileSystemWatcher to monitor changes to the file, then reload the configuration, find out what changed, and do whatever is necessary to reflect those changes.
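For illustration, a watcher along those lines might look roughly like this (a sketch only; the exact paths, handler names and the My.Settings.Reload() call are my assumptions, not code from the actual service):

' Sketch: watch the exe's .config file and reload settings when it is saved.
' Assumes the generated Settings class derives from ApplicationSettingsBase,
' which exposes Reload(); ConfigurationManager.RefreshSection may also be
' needed depending on how the values are read.
Private watcher As System.IO.FileSystemWatcher

Private Sub StartConfigWatcher()
    Dim configPath As String = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile
    watcher = New System.IO.FileSystemWatcher( _
        System.IO.Path.GetDirectoryName(configPath), _
        System.IO.Path.GetFileName(configPath))
    watcher.NotifyFilter = System.IO.NotifyFilters.LastWrite
    AddHandler watcher.Changed, AddressOf OnConfigFileChanged
    watcher.EnableRaisingEvents = True
End Sub

Private Sub OnConfigFileChanged(ByVal sender As Object, ByVal e As System.IO.FileSystemEventArgs)
    ' Note: FileSystemWatcher may raise several Changed events for one save.
    ' Reload the settings from disk, compare old and new values (e.g.
    ' IntervalTimeout) and reconfigure the service accordingly.
    My.Settings.Reload()
End Sub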
Does anyone know a better way? Or even a supported one?

ASP.NET monitors the Web.config file for changes. It does this using the System.Web.FileChangesMonitor class (which is marked internal and sealed), which in turn uses another internal class, System.Web.FileMonitor. Have a look at them in Reflector. It might be too complex for your problem, but I'd certainly look into it if I had a similar requirement.
This looks pretty complex. Bear in mind that ASP.NET shuts down the old application and starts a new one once it detects a change. App.config wasn't architected to detect changes while the application is running. Perhaps reconsider why you need this?
An easier approach is to restart the service once you've made the change.

How do I track down the source definition of a custom hook event in a Mediawiki extension?

Here's an example:
https://phabricator.wikimedia.org/diffusion/EPFM/browse/master/?grep=BeforeFreeTextSubst
A MediaWiki extension where Hooks::run( 'PageForms::BeforeFreeTextSubst', ...) gets invoked, but there's no other record or trace of where it's defined. If there were some mapping of strings/names to functions, it would be registered somewhere else, and if it were a function name, it should show up somewhere else.
I'm seeing this with a few other function hook events.
There isn't any "source definition" other than where the hook is run from. That is where the hook is defined; it may or may not actually be hooked onto anywhere. A hook definition is nothing more than a name and a set of parameters that are passed to hook callbacks.
To help find out where the hook is actually used, you can use the (new) codesearch tool:
https://codesearch.wmflabs.org/extensions/?q=BeforeFreeTextSubst
(It looks like this one is not used by any extension that is in Wikimedia source control.)
Are you trying to find the functions that get called when the hook is run? The situation is a bit chaotic there. There are two mechanisms for registering hook handlers:
the $wgHooks global (this is the normal way of registering hooks);
and the Hooks::register method (sometimes used for registering hooks dynamically).
$wgHooks is normally set via the extension.json file, but can be set dynamically, too.
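For illustration only (the handler class and method names below are made up), registering a handler for this hook would typically look like one of these:
// In the subscribing extension's extension.json:
//   "Hooks": {
//       "PageForms::BeforeFreeTextSubst": "MyExtensionHooks::onBeforeFreeTextSubst"
//   }
// Or set $wgHooks dynamically from PHP:
$wgHooks['PageForms::BeforeFreeTextSubst'][] = 'MyExtensionHooks::onBeforeFreeTextSubst';
// Or use the method mentioned above:
Hooks::register( 'PageForms::BeforeFreeTextSubst', 'MyExtensionHooks::onBeforeFreeTextSubst' );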
The quickest way to find out what hooks are registered is to run maintenance/shell.php and type in $wgHooks. This will miss hooks registered via the other method, and hooks which are conditionally registered (e.g. only for API calls), but it still works 99% of the time. Otherwise, you'll have to grep for it, as Sam said.

Changing a value in a .config file based on a user's selection in an InstallShield 2013 install

Sorry - I'm a total newbie with InstallShield. I've inherited an InstallShield 2013 project that presents the user with a dialog that lets them select a SQL Server and, based on their selection, sets a value in a config file. That's not working, so I opened the project in InstallShield and looked in the Text File Changes under System Configuration, and there's nothing there that would do this. So how do I figure out where this is happening (or not happening, in my case), and then how do I get it to work? I need to set both the data source and the initial catalog in a file called server.config.
So how do I determine what the user selected and then save that in this file? It looks like I can set up a Text File Change, but how do I access the values selected by the user? And how can I figure out where the "code" is that is supposed to be doing this?
Thanks,
Ben
I would try to track this from the dialog and controls in question, or by following the value through a verbose log. Since you say it doesn't work today, there will probably be an interruption in the flow I describe below, and since you don't know the full state of the installation project, it may be hard to identify. So search from what you know.
Top down: what gets configured
First, find the dialog that you fill out as a user making the selection. Then figure out the property that the particular control is associated with. Now you've got a thread; pull on it.
Search in the direct editor for references to the property. If the property is named MYCONFIG search for just that: MYCONFIG. You'll probably find some sort of use that looks like [MYCONFIG] instead, which is typically a format string specifying to use the value of MYCONFIG. You may also have to search all the files related to your project, as Custom Action implementations can be code stored outside of your InstallShield project.
The use may be in a ControlEvent, CustomAction, or some other table. If it's in a ControlEvent, it may be used to set another property. Ditto if it's in a CustomAction that sets properties (type 51) which may be easier to understand in the Custom Actions and Sequences view. In that case, also search for the property that gets set.
If you find it in a table like ISSearchReplace*, ISXml*, or IniFile, it's probably part of the Text File Changes, XML File Changes, or INI File Changes, and that view should make it easier to understand.
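Purely as a sketch of what such a change can look like (the placeholder tokens and property names here are hypothetical; many SQL login dialogs associate their controls with properties such as IS_SQLSERVER_SERVER and IS_SQLSERVER_DATABASE, but check which properties your dialog actually uses):
<!-- server.config as shipped, containing placeholder tokens -->
<connectionSettings dataSource="##SERVER##" initialCatalog="##DATABASE##" />
<!-- Text File Change entries (Find What -> Replace With):
     ##SERVER##    ->  [IS_SQLSERVER_SERVER]
     ##DATABASE##  ->  [IS_SQLSERVER_DATABASE]  -->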
Maybe that thread dead-ends somewhere. A property gets set, but never referenced. So try to search from the other end.
Bottom up: what gets written
If there are text file changes, xml file changes, ini file changes, or custom actions that reference the file you need updated, see where they get their information. Try to follow it back. If they're well written, you should be able to identify the property (noting that one called CustomActionData comes from a property matching the name of the custom action it's used in), and then trace that further back using the same ideas as above, but in the other direction.
Where's the problem?
If the threads don't connect, that's probably the problem. It's also possible that a custom action lacks permissions but doesn't report a failure, or that the file name or path got misconfigured somewhere along the way. Look for small things like that if everything looks like it should work but doesn't.
It turns out that I misunderstood the problem and the project was never set up to change that value, so all I had to do was set up a Text File Change and it works perfectly. Thanks #Michael Urman for the thorough response - I really appreciate it!

Verify a Tif with ApprovalTests

I have been asked to update a system where header information gets injected into a tif via a 3rd party console application. I don't need to worry about that bit.
The part I have been asked to look at is the merge process that generates the header information.
The current file generated by the process is assumed to be correct before I make any changes, so I want to add it as an approved result; from that I can then check that the changes I make alter the file as expected.
I thought this would be a good opportunity to look at using ApprovalTests.
The problem I have is that, for whatever reason, the links to the videos are considered corruptible (possibly they'd show me kittens jumping into boxes or something, which would stop me working - which ironically means my work slows down because I cannot see any help videos).
What I have been looking at is the Approvals.Verify and Approvals.VerifyFile extensions.
But what appears to be happening is confusing me.
Using VerifyFile creates a received file, but the contents of the file are just a single line containing the name of the file I asked it to verify.
Using Verify(new FileInfo("FileNameHere")) does not appear to generate the received file that I need to flag as approved, but the test does come back saying that it cannot find the approved tif file.
I am probably using VerifyFile completely wrong and might be looking at using Verify wrong as well.
useful info?
Might be useful to know that, as this is a legacy application running as a Windows service, I have wrapped the service in a harness that allows me to call the routines, so the files are physically being written elsewhere on the machine, outside of my control (well, there is a config, but a successful call to the service generates a file in a fixed location). I have tried copying that into the unit test project, but that doesn't appear to help.
Verify(File) and VerifyFile(string) are both meant to verify an existing file. As such, they merely set the received file to the file you pass in. You will still need to move/approve/create the approved file.
Here is the pseudo code and process.
[UseReporter(typeof(DiffReporter), typeof(ClipboardReporter))]
public void TestTiff()
{
    string tif = YourProcessToCreateTifFile();
    Approvals.VerifyFile(tif);
}
[Note: if you don't have an image diff installed, like TortoiseDiff, you might want to use the FileLauncherReporter]
Run this, and once you get the result, move the file over by pasting your clipboard into a cmd window.
It will move the temporary tif to your test directory with the name ClassName.TestTiff.approved.tif
After that the test should pass until something changes.
Happy Testing!

Managing configuration in Erlang application

I need to distribute some sort of static configuration through my application. What is the best practice to do that?
I see three options:
Call application:get_env directly whenever a module requires to get configuration value.
Plus: simpler than other options.
Minus: how to test such modules without bringing the whole application thing up?
Minus: how to start a certain module with a different configuration (if required)?
Pass the configuration (retrieved from application:get_env), to application modules during start-up.
Plus: modules are easier to test, you can start them with different configuration.
Minus: lot of boilerplate code. Changing the configuration format requires fixing several places.
Hold the configuration inside separate configuration process.
Plus: a more-or-less type-safe approach. Easier to track where a certain parameter is used and to change those places.
Minus: need to bring up the configuration process before running the modules.
Minus: how to start a certain module with a different configuration (if required)?
Another approach is to transform your configuration data into an Erlang source module that makes the configuration data available through exports. Then you can change the configuration at any time in a running system by simply loading a new version of the configuration module.
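A minimal sketch of what such a generated module could look like (the module name and keys are invented for illustration):
%% app_config.erl - generated from your configuration data
-module(app_config).
-export([max_widgets/0, interval_timeout/0]).

%% Values are baked in at generation time; regenerate and hot-load the
%% module (e.g. with c:l(app_config)) to change them in a running system.
max_widgets() -> 1000.
interval_timeout() -> 30.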
For static configuration in my own projects, I like option (1). I'll show you the steps I take to access a configuration parameter called max_widgets in an application called factory.
First, we'll create a module called factory_env which contains the following:
-module(factory_env).
-export([get_env/2, set_env/2]).

-define(APPLICATION, factory).

get_env(Key, Default) ->
    case application:get_env(?APPLICATION, Key) of
        {ok, Value} -> Value;
        undefined   -> Default
    end.

set_env(Key, Value) ->
    application:set_env(?APPLICATION, Key, Value).
Next, in a module that needs to read max_widgets we'll define a macro like the following:
-define(MAX_WIDGETS, factory_env:get_env(max_widgets, 1000)).
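A hypothetical call site using that macro might then look like this:
%% ?MAX_WIDGETS expands to factory_env:get_env(max_widgets, 1000)
can_add_widget(CurrentCount) ->
    CurrentCount < ?MAX_WIDGETS.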
There are a few nice things about this approach:
Because we used application:set_env/3 and application:get_env/2, we don't actually need to start the factory application in order to have our tests pass.
max_widgets gets a default value, so our code will still work even if the parameter isn't defined.
A second module could use a different default value for max_widgets.
Finally, when we are ready to deploy, we'll put a sys.config file in our priv directory and load it with -config priv/sys.config during startup. This allows us to change configuration parameters on a per-node basis if desired. This cleanly separates configuration from code - e.g. we don't need to make another commit in order to change max_widgets to 500.
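For example, a minimal sys.config for the factory application above could look like this (the value 500 is just an illustration):
%% priv/sys.config
[
 {factory, [
     {max_widgets, 500}
 ]}
].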
You could use a process (a gen_server maybe?) to store your configuration parameters in its state. It should expose a get/set interface. If a value hasn't been explicitly set, it should retrieve a default value.
-export([get/1, set/2]).
...
get(Param) ->
    gen_server:call(?MODULE, {get, Param}).
...
handle_call({get, Param}, _From, State) ->
    Reply = case lookup(Param, State#state.params) of
                undefined ->
                    %% Fall back to the application environment
                    application:get_env(...);
                Value ->
                    {ok, Value}
            end,
    {reply, Reply, State}.
...
You could then easily mock up this module in your tests. It will also be easy to update the process with some new configuration at run-time.
You could use pattern matching and tuples to associate different configuration parameters to different modules:
set({ModuleName, ParamName}, Value) ->
...
get({ModuleName, ParamName}) ->
...
Put the process under a supervision tree, so it's started before all the other processes which are going to need the configuration.
Oh, I'm glad nobody suggested parametrized modules so far :)
I'd do option 1 for static configuration. You can always test by setting options via application:set_env/3,4. The reason you want to do this is that your tests of the application will need to run the whole application anyway at some time. And the ability to set test-specific configuration at that point is really neat.
The application controller runs by default, so it is not a problem that you need to go the application way (you need to do that anyway, too!).
Finally, if a process needs specific configuration, say so in the configuration data! You can store any Erlang term; in particular, you can store a term which lets you override configuration parameters for a specific node.
For dynamic configuration, you are probably better off using a gen_server or the newest gproc features that let you store such dynamic configuration.
I've also seen people use a .hrl (Erlang header file) where all the configuration is defined, and include it at the start of any file that needs configuration.
It makes for very concise configuration lookups, and you get configuration of arbitrary complexity.
I believe you can also reload configuration at runtime by performing hot code reloading of the module. The disadvantage is that if you use configuration in several modules and reload only one of them, only that one module will get its configuration updated.
However, I haven't actually checked if it works like that, and I couldn't find definitive documentation on how .hrl and hot code reloading interact, so make sure to double-check this before you actually use it.
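As a sketch (the file and macro names are invented for illustration), such a header and its use might look like:
%% app_config.hrl
-define(MAX_WIDGETS, 1000).
-define(DB_OPTS, [{host, "localhost"}, {port, 5432}]).

%% In any module that needs configuration:
%% -include("app_config.hrl").
%% ...then refer to ?MAX_WIDGETS and ?DB_OPTS in the code.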

How to override a Magento function in app/code/core/Mage/Core/functions.php

I need to override a function in this file:
app/code/core/Mage/Core/functions.php
The problem is that this is so core that there’s no class associated with it, probably because Core isn’t even a module. Does anybody know how to override a function in a file without a class?
Any help would be appreciated.
Copying the file to app/code/local/Mage/Core/functions.php should be avoided for the following reasons:
The entire file has to be copied over, making it harder for us to identify what changes have been made.
Future upgrades could introduce new features that would not be available unless someone remembers to copy across the new version of that file and implement the changes again.
Future upgrades could address bugs in core that we would miss unless someone remembers to copy across the new version of that file and implement the changes again.
With respect to points 2 & 3, each upgrade could change the way things work, which means revisiting what changes we need to make. In some cases this will be true for overridden methods as well, but at least we can easily identify where those changes affect us.
What do you do if another person wants to use the same technique? Being able to identify what is core code and what is ours becomes more and more complex.
Keeping our code together as a “module” becomes more difficult, as copying in the core file effectively locks it into being “guaranteed” to run on the version of the software that the original code was copied from. It also means reusing this work is a lot more difficult.
Identifying why the code was changed is much harder, as it is outside our namespacing, i.e. all development related to “Example_Module” is in the namespace:
/app/code/local/Example/Module
whereas code copied to app/code/local/Mage only indicates that we have made a change to support an unknown feature etc.
Also, Magento occasionally releases patches which fix bugs - these will only patch files inside core, leaving your copied file without the fix.
What I would suggest instead is that you write your own function to do what you want and override the function to call your new function instead.
Maybe I did not understand your question right, but why not just copy this file into
app/code/local/Mage/Core/functions.php
and modify it there in any way you want?
As mentioned by #tweakmag, there are disadvantages to creating a folder structure and copying the entire model or controller for a single function override, the most important being:
"Future upgrades could introduce new features that would not be available unless someone remembers to copy across the new version of that file and implement the changes again."
Thus a solution can be to extend the core class (model or controller) and write only the method you want to override, instead of copying all the methods.
For example, if you want to, say, override the method getProductCollection in Mage_Catalog_Model_Category, here are the steps:
Create a custom Namespace/Module folder with an etc folder in it
Register the module in app/etc/modules by creating Namespace_Module.xml
Set up the config.xml in Namespace/Module/etc/:
<?xml version="1.0"?>
<config>
    <modules>
        <Namespace_Module>
            <version>1.0</version>
        </Namespace_Module>
    </modules>
    <global>
        <models>
            <catalog>
                <rewrite>
                    <category>Namespace_Module_Model_Category</category>
                </rewrite>
            </catalog>
        </models>
    </global>
</config>
Create a Model folder in Namespace/Module and a PHP file Category.php
Now the main difference here is that we extend Magento's core class instead of writing all the methods:
<?php
/**
 * Catalog category model
 */
class Namespace_Module_Model_Category extends Mage_Catalog_Model_Category
{
    public function getProductCollection()
    {
        // Include your custom code here!
        $collection = Mage::getResourceModel('catalog/product_collection')
            ->setStoreId($this->getStoreId())
            ->addCategoryFilter($this);
        return $collection;
    }
}
The getProductCollection method is overridden so it'll be called instead of the method defined in the core model class.
Also an important point is:
When you override any methods in models, you should make sure that the data type of the return value of that method matches the data type of the base class method. Since model methods are called from several core modules, we should make sure that it doesn't break other features!
1. UOPZ
There's a way to do it without having to copy and maintain the functions.php file from core, but it involves the uopz extension (pecl install uopz). You can then rename Magento's function (from foo to foo_uopzOLD, for example) and define your own (https://secure.php.net/uopz).
It works and is very useful for Magento - usually you'll bump into something you cannot change, and uopz is very helpful in such cases.
pros: it works ;) and you don't have to redo it every time you update Magento (if you do it right - since you can still call foo_uopzOLD inside, you can keep some backward compatibility... in some cases).
cons: it's a little bit implicit
2. Composer post-install-cmd
If you don't like the above but you use Composer, you can patch any file you want:
"scripts": {
    "post-install-cmd": "patch -p0 < change-core-functions.patch"
}
pros: explicit (when the patch fails, composer install fails), and since it's explicit you can revisit and fix the patch every time you upgrade Magento
cons: you have a modified core file, so you'd probably want to add it to .gitignore
3. Ugly solution (uglier than those above)
If none of the above is possible for you (really, give Composer a try - there's no excuse for not using it), the only way I can think of is to:
- create app/code/local/Mage/Core/functions.php
- define this one function you need
- load the original app/code/core/Mage/Core/functions.php
- surround every function foo() {...} in it with
if (!function_exists("foo")) {
    function foo() {...}
}
hold on to your chair and
eval this SOB ;)