J2EE Application/Bean Configuration Best Practices?

What is the best way to manage sets of properties applied to EJBs, and to vary them easily between machines/environments (e.g. DEV, TEST, PROD)? For example, is there a way to configure EJB properties on the application server itself, which would guarantee that they can vary by machine/environment?
Specifically:
1) I have a singleton EJB which needs certain properties set per environment. Is there an annotation (or set of annotations) that tells the EJB container where to look up those properties and applies them to the bean automatically?
2) What is the best way to manage different property sets, e.g. dev, test, and prod, so that the J2EE app stays portable between servers and the properties specific to each server can be managed seamlessly?
If there are any good documentation links, let me know. I've Googled around and found nothing that addresses the points above directly.

I use a singleton helper class, PropertiesHelper, which has a Properties member and reads from an XML configuration file on the first property access attempt. The helper encapsulates the entire set of configuration settings and prevents them from being read from disk more than once.
The file location is specified in my Java opts and read in by PropertiesHelper using System.getProperty().
As for system property annotations, I don't believe Java supports this natively, but if you're so inclined you may want to look at AOP/dependency-injection frameworks like Google Guice, which handle this kind of cross-cutting concern better.
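To make that concrete, here is a minimal sketch of the pattern. The original is Java (java.util.Properties plus System.getProperty()), but it is rendered here in C# to match the other examples on this page, with the file path taken from an environment variable and a simple <entry key="..."> XML layout assumed:

using System;
using System.Collections.Generic;
using System.Xml.Linq;

public sealed class PropertiesHelper
{
    // Created lazily on first access; the file is read from disk only once.
    private static readonly Lazy<PropertiesHelper> instance =
        new Lazy<PropertiesHelper>(() => new PropertiesHelper());

    private readonly Dictionary<string, string> properties =
        new Dictionary<string, string>();

    private PropertiesHelper()
    {
        // The config location comes from the environment, so it can vary
        // per machine/environment without touching the application itself.
        string path = Environment.GetEnvironmentVariable("APP_CONFIG_FILE");
        foreach (XElement entry in XDocument.Load(path).Root.Elements("entry"))
        {
            properties[entry.Attribute("key").Value] = entry.Value;
        }
    }

    public static string Get(string key)
    {
        return instance.Value.properties[key];
    }
}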

Related

.Net Core 1.0.0, multiple projects, and IConfiguration

TL;DR version:
What's the best way to use the .Net Core IConfiguration framework so values in a single appsettings.json file can be used in multiple projects within the solution? Additionally, I need to be able to access the values without using constructor injection (explanation below).
Long version:
To simplify things, let's say we have a solution with 3 projects:
Data: responsible for setting up the ApplicationDbContext, generates migrations
Services: class library with some business logic
WebApi: REST API with a Startup.cs
With this architecture, we have to use a work-around for the "add-migration" issue that remains in Core 1.0.0. Part of this work-around means that we have an ApplicationDbContextFactory class that must have a parameterless constructor (no DI) in order for the migration CLI to use it.
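For context, the factory this work-around forces on you ends up looking something like the following sketch (assuming EF Core 1.0 with the SQL Server provider; the exact interface the CLI expects is omitted here). Note the hard-coded connection string:

using Microsoft.EntityFrameworkCore;

public class ApplicationDbContextFactory
{
    // The migrations CLI needs to construct this class with no arguments,
    // so nothing can be injected and the connection string lives here.
    public ApplicationDbContext Create()
    {
        var builder = new DbContextOptionsBuilder<ApplicationDbContext>();
        builder.UseSqlServer("Server=...;Database=...;Trusted_Connection=True;");
        return new ApplicationDbContext(builder.Options);
    }
}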
Problem: Right now we have connection strings living in two places:
ApplicationDbContextFactory for the migration work-around
in the WebApi's "appsettings.json" file
Prior to .NET Core, we could use ConfigurationManager to pull connection strings for all the projects in a solution from one web.config file, based on the startup project. How do we use this new IConfiguration framework to store connection strings in one place and use them all over the solution? Additionally, I can't inject into the ApplicationDbContextFactory class's constructor... so that further complicates things (more so since they changed how the [FromServices] attribute works).
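For reference, the new framework can at least be built by hand, without constructor injection; a minimal sketch, assuming appsettings.json can be found in the consuming project's working directory (which is itself part of the problem here):

using System.IO;
using Microsoft.Extensions.Configuration;

public static class SharedConfiguration
{
    // Builds the configuration manually, so no DI container is involved.
    public static IConfigurationRoot Load()
    {
        return new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();
    }
}

// Usage: var conn = SharedConfiguration.Load()["ConnectionStrings:DefaultConnection"];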
Side note: I would like to avoid implementing an entire DI middleware just to get attribute injection, since Core includes its own DI framework. If I can avoid that and still access the appsettings.json values, that would be ideal.
If I need to add code let me know; this post was already long enough, so I'll hold off on examples until requested. ;)

Azure: can we check if a setting exists before trying to read it?

I currently use RoleEnvironment.GetConfigurationSettingValue(propertyName) to get the value of a setting defined in my WebRole config file (csdef + cscfg). Ok, sounds right.
This works well if the setting exists, but fails with an exception if the setting is not defined in the csdef and the cscfg.
I'm migrating an existing app to Azure, and it has many configuration settings in web.config. In my code, to read a setting value, I'd like to test: if it exists in the WebRole config (csdef + cscfg), read it from there; otherwise, read it with ConfigurationManager from web.config.
This would save me from migrating every setting out of my web.config, while still allowing me to customize individual settings once the app is deployed.
Is there a way to do this?
I don't want to wrap GetConfigurationSettingValue in a try/catch (and read from web.config when I enter the catch) because it's really ugly (and, above all, not efficient!).
Thanks!
Update for 1.7 Azure SDK.
The CloudConfigurationManager class has been introduced. This allows a single GetSetting call to look in your cscfg first and then fall back to web.config if the key is not found.
http://msdn.microsoft.com/en-us/LIBRARY/jj157248
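Usage is a single call (a minimal sketch; the setting name is just a placeholder):

using Microsoft.WindowsAzure;

// Looks in the .cscfg first, then falls back to web.config/app.config;
// returns null instead of throwing when the key is not found anywhere.
string value = CloudConfigurationManager.GetSetting("StorageConnectionString");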
Pre 1.7 SDK
The simple answer is no (that I know of).
The more interesting approach is to consider configuration as a dependency. I have found it beneficial to treat configuration settings as a dependency so that the backing implementation can be changed over time. That implementation may be a fake for testing, or something more complex like switching from .config/.cscfg to a database implementation for multi-tenant solutions.
Given such a configuration wrapper, you can write that TryGetSetting as an internal method against whatever your source of configuration settings is. If this feature is ever added to the RoleEnvironment members, you would only have to change that internal implementation.
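A sketch of what that wrapper could look like (the names are hypothetical; pre-1.7, the RoleEnvironment-backed implementation still has to swallow the exception internally, but callers never see it and the backing source can be swapped out later):

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public interface IConfigurationProvider
{
    bool TryGetSetting(string name, out string value);
}

public class RoleEnvironmentConfigurationProvider : IConfigurationProvider
{
    public bool TryGetSetting(string name, out string value)
    {
        try
        {
            value = RoleEnvironment.GetConfigurationSettingValue(name);
            return true;
        }
        catch (RoleEnvironmentException)
        {
            // Not defined in the csdef/cscfg: fall back to web.config.
            value = ConfigurationManager.AppSettings[name];
            return value != null;
        }
    }
}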

Do any "major" frameworks make use of monkey-patching/open classes

I am curious about the usage of the feature known as open classes or monkey-patching in languages like e.g. Ruby, Python, Groovy etc. This feature allows you to make modifications (like adding or replacing methods) to existing classes or objects at runtime.
Does anyone know if major frameworks (such as Rails/Grails/Zope) make (extensive) use of this capability in order to provide services to the developer? If so, please provide examples.
Rails does this to a (IMHO) ridiculous extent.
.NET allows something similar via extension methods.
LINQ, specifically, relies heavily on extension methods "monkey-patched" onto the IEnumerable interface.
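The mechanism, sketched below, is not monkey-patching in the strict sense - the extension method is resolved statically and cannot replace an existing member - but it reads as if IEnumerable<T> had grown a new method. A made-up example:

using System;
using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    // Appears as a new instance method on every IEnumerable<T>, LINQ-style.
    public static IEnumerable<T> WhereNot<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        return source.Where(item => !predicate(item));
    }
}

// Usage: new[] { 1, 2, 3, 4 }.WhereNot(n => n % 2 == 0) yields 1, 3.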
An example of its use on the Java platform (since you mentioned Groovy) is load-time weaving with something like AspectJ and JVM instrumentation. In this particular case, however, you have the option of using compile-time weaving instead. Interestingly, one of my recent SO questions was about problems with this load-time weaving, with some recommending compile-time weaving as the only reliable option.
An example of AspectJ using load-time (runtime) weaving to provide a helpful service to the developer is Spring's @Configurable annotation, which allows you to use dependency injection on objects not instantiated by Spring's BeanFactory.
You specifically mentioned modifying a method (or how it works), and an example of that in use is an aspect which intercepts an HTTP request before it is sent to the handler (some controller method or doPost, etc.) and checks whether the user is authorized to access that resource. The aspect could then decide to return a redirect-to-login response prematurely. While not modifying the contents of the method per se, you are still modifying the way the method works by changing the return value it would otherwise give.

Why would you want Dependency Injection without configuration?

After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice: with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take Spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. A given service may, for instance, have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When you look at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to this role; it is really the reason-of-existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there are always going to be exceptions, but then you just handle the exceptions instead.
I'm on the same path as krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it makes sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something the DI framework should already know :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
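A trimmed-down sketch of that lookup scheme (a hypothetical container; real frameworks add lifetimes, constructor injection, configuration file parsing, and so on):

using System;
using System.Collections.Generic;

public class SimpleContainer
{
    private readonly Dictionary<Type, Type> overrides = new Dictionary<Type, Type>();

    // Only the rare exceptions need an explicit override.
    public void Override<TKey, TImpl>() where TImpl : TKey
    {
        overrides[typeof(TKey)] = typeof(TImpl);
    }

    public T Resolve<T>()
    {
        // No override found: the key class itself is the default implementation.
        Type type = overrides.TryGetValue(typeof(T), out Type impl) ? impl : typeof(T);
        return (T)Activator.CreateInstance(type);
    }
}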
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit test code.
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject!
Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config. Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
You don't even need to use a DI framework to apply the dependency injection pattern. You can simply make use of static factory methods for creating your objects, if you don't need configurability apart from recompiling code.
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without recompiling code, you'll want something you can configure via text or XML files.
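As a sketch (all names hypothetical), the no-framework version is just:

public interface IEmailService
{
    void SendEmailTo(string address);
}

public class SmtpEmailService : IEmailService
{
    public void SendEmailTo(string address) { /* send via SMTP */ }
}

public static class ServiceFactory
{
    // The single place that decides which implementation is used;
    // changing it means editing this method and recompiling.
    public static IEmailService CreateEmailService()
    {
        return new SmtpEmailService();
    }
}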
I'll second the use of DI for testing. Right now I only really consider using DI for testing, as our application doesn't require any configuration-based flexibility - and it's also far too large to consider converting at the moment.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling the injection of components into your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (the best-practices guidance for building composite applications in .NET). Following these practices, each module exposes its implementation types inside a shared container. This way each module has clear responsibilities regarding who provides the implementation for a given interface. I think it will become clear enough once you understand how PRISM works.
When you use inversion of control you are helping to make your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to zip the file and then email it.
public class ZipProcessor : IFileProcessor
{
    private readonly IZipService zipService;
    private readonly IEmailService emailService;

    // The services arrive as interfaces, so the processor never
    // creates (or even names) a concrete implementation.
    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        this.zipService = zipService;
        this.emailService = emailService;
    }

    public void Process(string fileName)
    {
        zipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        emailService.SendEmailTo(/* ... */);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes to do this for you? Obviously you wouldn't, but that's only a lead-up to my point :-)
In addition to not implementing the zipping and emailing itself, why should the class know which class implements each service? If you pass interfaces to the constructor of this processor, then it never needs to create an instance of a specific class; it is given everything it needs to do the job.
Using a DI container, you can configure which classes implement certain interfaces and then just ask the container to create an instance for you; it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
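The registration side is container-specific; in Castle Windsor, for instance, it might look like this (ZipService and SmtpEmailService are hypothetical implementations of the interfaces above):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();
container.Register(
    Component.For<IZipService>().ImplementedBy<ZipService>(),
    Component.For<IEmailService>().ImplementedBy<SmtpEmailService>(),
    Component.For<ZipProcessor>());

var processor = container.Resolve<ZipProcessor>();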
So now not only have you cleanly separated the class's functionality from the shared functionality, but you have also prevented the consumer/provider from having any explicit knowledge of each other. This makes the code easier to understand, because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor, your mocked services will merely assert that the class attempted to send an email, rather than really trying to send one.
//Mock the ZIP
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));
//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(/* ... */));
//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");
//Assert it used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So, in short, you would want to do it to:
1. Prevent consumers from knowing explicitly which provider implements the services they need, which means there's less to understand at once when you read the code.
2. Make unit testing easier.
Pete

Singleton for Application Configuration

In all my projects until now, I have used the singleton pattern to access the application configuration throughout the application. Lately I have seen a lot of articles saying not to use the singleton pattern, because it does not promote testability and it hides the component's dependencies.
My question is: what is the best way to store application configuration so that it is easily accessible throughout the application, without passing the configuration object all over the application?
Thanks in Advance
Madhu
I think an application configuration is an excellent use of the Singleton pattern. I tend to use it myself, to prevent having to reread the configuration each time I access it, and because I like the configuration to be strongly typed (i.e., not having to convert non-string values each time). I usually build some backdoor methods into my Singleton to support testability - i.e., the ability to inject an XML configuration so I can set it in my test, and the ability to destroy the Singleton so that it gets recreated when needed. Typically these are private methods that I access via reflection so that they are hidden from the public interface.
EDIT: We live and learn. While I think application configuration is one of the few places to use a Singleton, I don't do this any more. Typically, now, I will create an interface and a standard class implementation, using static Lazy<T> backing fields for the configuration properties. This gives me the "initialize once" behavior for each property, with a design that is better for testability.
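A sketch of that shape (property and key names are hypothetical; this reads from app.config via ConfigurationManager):

using System;
using System.Configuration;

public interface IAppConfiguration
{
    int MaxRetries { get; }
}

public class AppConfiguration : IAppConfiguration
{
    // Parsed once, on first access, then cached; callers get a typed value.
    private static readonly Lazy<int> maxRetries =
        new Lazy<int>(() => int.Parse(ConfigurationManager.AppSettings["MaxRetries"]));

    public int MaxRetries
    {
        get { return maxRetries.Value; }
    }
}

Tests can then supply their own IAppConfiguration implementation instead of poking at a singleton through reflection.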
Use dependency injection to inject the single configuration object into any classes that need it. This way you can use a mock configuration for testing or whatever you want... you're not explicitly going out and getting something that needs to be initialized with configuration files. With dependency injection, you are not passing the object around either.
For that specific situation I would create one configuration object and pass it around to those who need it.
Since it is the configuration, it should be used only in certain parts of the app and should not necessarily be omnipresent.
However, if you haven't had problems using singletons and don't need to test that hard, you can keep doing what you've done until today.
Read the discussions about why they are considered harmful. I think most of the problems come when a lot of resources are held by the singleton.
For the app configuration I think it would be safe to keep it as it is.
The singleton pattern seems to be the way to go. Here's a Setting class that I wrote that works well for me.
If any component relies on configuration that can be changed at runtime (for example theme support for widgets), you need to provide some callback or signaling mechanism to notify about the changed config. That's why it is not enough to pass only the needed parameters to the component at creation time (like color).
You then need either to provide access to the config from inside the component (pass the complete config to the component), or to make a component factory that stores references to the config and all the components it creates, so it can apply the changes when needed.
The former has the big downside that it clutters the constructors or bloats the interface, though it is perhaps the fastest approach for prototyping. If you take the "Law of Demeter" into account, it is a big no-no because it violates encapsulation.
The latter has the advantage that components keep their specific interface where components only take what they need, and as a bonus gives you a central place for refactoring (the factory). In the long run code maintenance will likely benefit from the factory pattern.
Also, even if the factory was a singleton, it would likely be used in far fewer places than a configuration singleton would have been.
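A sketch of the factory variant (all types hypothetical): the factory holds references to the config and to every component it has created, so one call can push a change out to all of them.

using System.Collections.Generic;

public class ThemeConfig
{
    public string Color { get; set; }
}

public class Widget
{
    private string color;
    public Widget(string color) { this.color = color; }
    public void SetColor(string color) { this.color = color; }
}

public class WidgetFactory
{
    private readonly ThemeConfig config;
    private readonly List<Widget> createdWidgets = new List<Widget>();

    public WidgetFactory(ThemeConfig config)
    {
        this.config = config;
    }

    public Widget CreateWidget()
    {
        // Each widget receives only the value it needs (here, a color).
        var widget = new Widget(config.Color);
        createdWidgets.Add(widget);
        return widget;
    }

    // Called when the config changes at runtime, e.g. the user picks a theme.
    public void ApplyThemeChange()
    {
        foreach (var widget in createdWidgets)
        {
            widget.SetColor(config.Color);
        }
    }
}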
Here is an example done using Castle.Core's DictionaryAdapter and StructureMap:
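A sketch of what that combination typically looks like (IAppSettings and its keys are hypothetical): Castle's DictionaryAdapterFactory generates a strongly typed wrapper over ConfigurationManager.AppSettings, and StructureMap hands it out wherever IAppSettings is requested.

using System.Configuration;
using Castle.Components.DictionaryAdapter;
using StructureMap;

public interface IAppSettings
{
    string SmtpHost { get; }
    int MaxRetries { get; }
}

public static class ConfigurationBootstrap
{
    public static Container BuildContainer()
    {
        // Generates an IAppSettings implementation backed by app.config/web.config.
        var settings = new DictionaryAdapterFactory()
            .GetAdapter<IAppSettings>(ConfigurationManager.AppSettings);

        return new Container(x => x.For<IAppSettings>().Use(settings));
    }
}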