Akka default vs runtime configuration

I read the Akka v2.3.11 docs (Java, not Scala) and am still a bit confused about how configuration works. In section 2.9.2 ("Akka and JAR bundling") it states:
Akka’s configuration approach relies heavily on the notion of every module/jar having its own reference.conf file, all of these will be discovered by the configuration and loaded. Unfortunately this also means that if you put/merge multiple jars into the same jar, you need to merge all the reference.confs as well. Otherwise all defaults will be lost and Akka will not function.
Being brand new to Akka and actors, but having been a Java developer for 10+ years, I have never once before seen a reference.conf file in any JAR. So what is Akka talking about here? Are they insinuating that if my Akka project uses, say, Guice and Guava, I need to define reference.conf files for each of these?!?
Also, can someone confirm my understanding of application.conf vs reference.conf? My understanding is that you are supposed to define a reference.conf that contains default Akka configs, and then also define an application.conf that overrides it? If that's true, why use reference.conf in the first place? Why not just use an application.conf? I'm so confused.

Akka uses the Typesafe Config library; have a look at the documentation (https://github.com/typesafehub/config/blob/master/README.md), which explains the difference between reference.conf and application.conf.
Basically, reference.conf ships inside the Akka JARs and carries the library's defaults; as a user you shouldn't touch it (though you can read it as a reference) unless you are merging multiple JARs (not necessarily Akka JARs) that use the Typesafe Config library into one JAR, in which case the reference.conf files must be merged too. Your own overrides go in application.conf, which only needs the keys you change. And no: libraries like Guice and Guava don't use Typesafe Config, so there is no reference.conf to write or merge for them.
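To make the layering concrete: reference.conf carries a library's complete defaults, application.conf carries only the keys you override, and ConfigFactory.load() stacks them for you. A minimal sketch in Java (akka.actor.provider is one of the defaults Akka's own reference.conf defines):

    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;

    public class ConfigDemo {
        public static void main(String[] args) {
            // load() merges, in decreasing order of precedence:
            //   1. system properties
            //   2. application.conf on the classpath (your overrides)
            //   3. every reference.conf on the classpath (library defaults)
            Config config = ConfigFactory.load();

            // Resolved from akka-actor.jar's reference.conf unless your
            // application.conf (or a -D system property) overrides it.
            System.out.println(config.getString("akka.actor.provider"));
        }
    }

If you do build a fat JAR, resource transformers such as the Maven Shade plugin's AppendingTransformer (or sbt-assembly's MergeStrategy.concat) can concatenate the reference.conf files from all bundled JARs so the defaults survive the merge.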

Is it possible to write a dual pass checkstyle check?

I have two situations I need a Checkstyle check for. Let's say I have a bunch of objects with the annotation @BusinessLogic. I want to do a first pass through all *.java files, creating a Set of the fully qualified class names of these objects. Let's say ONE of the classes here is MyBusinessLogic. NEXT, and as part of a custom Checkstyle checker, I want to go through and fail the build if any line of code says "new MyBusinessLogic()" anywhere. We want to force DI when objects are annotated with @BusinessLogic. Is this possible with Checkstyle? I am not sure Checkstyle does a dual pass.
Another option I am considering is some Gradle plugin that scans all Java files and writes the list of classes annotated with @BusinessLogic to a file, then running Checkstyle after that, where my checker reads in that file.
My next situation: I have a library delivered as a JAR. In that JAR there are also classes annotated with @BusinessLogic, and I need to make sure those are added to my list of classes that must not be newed up manually but only created via dependency injection.
Follow-up question from the previous question here, after reading through the Checkstyle docs:
How do I enforce this pattern via Gradle plugins?
thanks,
Dean
Is it possible to write a dual pass checkstyle check?
Possible, yes, but not officially supported. Support is tracked at https://github.com/checkstyle/checkstyle/issues/3540, but it hasn't been agreed on.
Multi-file validation is possible with FileSetChecks (still not officially supported), but it becomes harder with TreeWalker checks. This is because TreeWalker doesn't chain finishProcessing through to the checks. You can implement your own TreeWalker that chains finishProcessing through to its AbstractCheck implementations.
You will have to do everything in one pass with this method. Record every new XXX expression and every class annotated with @YYY. In the finishProcessing method, correlate the two sets and print a violation for each match, roughly as sketched below.
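A rough sketch of what the check side could look like, under the assumptions above: it uses AnnotationUtil and FullIdent as found in recent Checkstyle versions, the class and message names are made up, and the finishProcessing() hook is exactly the non-standard part your custom TreeWalker would have to call:

    import java.util.HashSet;
    import java.util.Set;

    import com.puppycrawl.tools.checkstyle.api.AbstractCheck;
    import com.puppycrawl.tools.checkstyle.api.DetailAST;
    import com.puppycrawl.tools.checkstyle.api.FullIdent;
    import com.puppycrawl.tools.checkstyle.api.TokenTypes;
    import com.puppycrawl.tools.checkstyle.utils.AnnotationUtil;

    public class ForceDiForBusinessLogicCheck extends AbstractCheck {

        // Accumulated across files; only works because the same check
        // instance sees every file in the run.
        private final Set<String> annotatedClasses = new HashSet<>();
        private final Set<String> instantiatedClasses = new HashSet<>();

        @Override
        public int[] getDefaultTokens() {
            return new int[] {TokenTypes.CLASS_DEF, TokenTypes.LITERAL_NEW};
        }

        @Override
        public int[] getAcceptableTokens() {
            return getDefaultTokens();
        }

        @Override
        public int[] getRequiredTokens() {
            return getDefaultTokens();
        }

        @Override
        public void visitToken(DetailAST ast) {
            if (ast.getType() == TokenTypes.CLASS_DEF) {
                if (AnnotationUtil.containsAnnotation(ast, "BusinessLogic")) {
                    annotatedClasses.add(
                        ast.findFirstToken(TokenTypes.IDENT).getText());
                }
            }
            else {
                // name of the type after "new"; simple names only here,
                // a real check would resolve imports/packages
                instantiatedClasses.add(
                    FullIdent.createFullIdent(ast.getFirstChild()).getText());
            }
        }

        // Hypothetical hook: the stock TreeWalker never calls this, your
        // custom TreeWalker must invoke it after the last file.
        public void finishProcessing() {
            for (String name : instantiatedClasses) {
                if (annotatedClasses.contains(name)) {
                    // real reporting needs the file/line captured during
                    // visitToken; println is a placeholder
                    System.err.println(
                        "@BusinessLogic class must be injected, not newed: " + name);
                }
            }
        }
    }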
I have a library delivered as a jar
Checkstyle does not support reading JARs or bytecode. You can always create a hard-coded list as an alternative; the only other way is to build your own reader into Checkstyle. If you go with a list, a small standalone scanner can generate it for you (sketch below).
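A sketch of such a scanner, run before Checkstyle, assuming your @BusinessLogic annotation is runtime-retained and on the scanner's classpath:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class BusinessLogicJarScanner {
        public static void main(String[] args) throws Exception {
            File jarFile = new File(args[0]);
            try (JarFile jar = new JarFile(jarFile);
                 URLClassLoader loader = new URLClassLoader(
                         new URL[] {jarFile.toURI().toURL()})) {
                Enumeration<JarEntry> entries = jar.entries();
                while (entries.hasMoreElements()) {
                    String entry = entries.nextElement().getName();
                    if (!entry.endsWith(".class")) {
                        continue;
                    }
                    String className = entry
                            .substring(0, entry.length() - ".class".length())
                            .replace('/', '.');
                    try {
                        Class<?> clazz = loader.loadClass(className);
                        // BusinessLogic is your annotation; it must have
                        // RetentionPolicy.RUNTIME for this check to work
                        if (clazz.isAnnotationPresent(BusinessLogic.class)) {
                            System.out.println(className);
                        }
                    }
                    catch (Throwable ignored) {
                        // classes with missing dependencies can't be
                        // loaded; skip them
                    }
                }
            }
        }
    }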

Conflicting jackson-jaxrs provider in WildFly with EAR deployment

We have a Java EE 7 application running on WildFly 9, consisting of an exploded EAR deployment which contains several WAR files, some JARs at EAR level, and a lib folder containing 3rd party JARs. (I know this is not how one would do it today, but it is what it is.)
One of the WARs contains a JAX-RS REST service, which GETs and POSTs a data object containing a Java 8 OffsetDateTime. Since JSON-B was not yet available, we used @JsonSerialize/@JsonDeserialize from jackson-databind in order to marshal it to and from JSON.
This worked quite well until, due to a change in another WAR, the jackson-jaxrs dependency ended up in the lib folder at EAR level. The marshalling then stopped working: the container tried to set the JSON date string directly into the OffsetDateTime type, and when reading the object it wrote out all the internal fields of the Java 8 date instead of the formatted string.
I assumed that the processing of the above-mentioned annotations didn't take place, and thus the server tried to map the field like other simple types. When I deleted the JARs belonging to the jackson-jaxrs dependency, everything worked fine again. The application server then probably used its own version of this very JAR from its modules folder.
So, my question is: what is the difference between having the jackson-jaxrs JAR in the EAR's lib folder in addition to the system-provided module, versus only the latter? Why are the annotations not considered in the first case when de/serializing?
WildFly 9 bundles Jackson 1.9 as a base module, and its annotations live in the org.codehaus.jackson package.
I suspect that the library added recently is the (more recent) Jackson 2.x, whose annotations live in the com.fasterxml.jackson package.
If that's the case, upgrading your code to the Jackson 2.x annotations (ideally the same version as the one you get from the EAR) should solve the problem.
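To make the package difference concrete: the 1.x and 2.x annotations are different types in different packages, so a provider built against one silently ignores the other. A sketch with Jackson 2.x coordinates (the serializer here is a made-up stand-in for the one from the question):

    // Jackson 2.x coordinates; the Jackson 1.x equivalents live under
    // org.codehaus.jackson.map.annotate and are *different types*, so a
    // 1.x provider does not recognize these annotations at all.
    import java.io.IOException;
    import java.time.OffsetDateTime;

    import com.fasterxml.jackson.core.JsonGenerator;
    import com.fasterxml.jackson.databind.JsonSerializer;
    import com.fasterxml.jackson.databind.SerializerProvider;
    import com.fasterxml.jackson.databind.annotation.JsonSerialize;

    public class Event {

        @JsonSerialize(using = OffsetDateTimeIsoSerializer.class)
        private OffsetDateTime createdAt;

        // Minimal custom serializer standing in for the one from the
        // question; writes the formatted string instead of letting the
        // mapper dump OffsetDateTime's internal fields.
        public static class OffsetDateTimeIsoSerializer
                extends JsonSerializer<OffsetDateTime> {
            @Override
            public void serialize(OffsetDateTime value, JsonGenerator gen,
                    SerializerProvider serializers) throws IOException {
                gen.writeString(value.toString()); // ISO-8601
            }
        }
    }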
Alternatively, isolating your subdeployment from the Jackson JAR present in the EAR might work, but it can get messy with transitive dependencies. See class loading in WildFly.
EDIT: as you confirmed, there are two different versions running. If you can afford it, aligning the versions would most definitely help solve the problem.
Short of that, you might need to isolate each subdeployment so it only sees the expected version. See this answer for an example (which isolates the entire deployment from the base module); a sketch of the descriptor approach follows.
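For the isolation route, the usual tool is a jboss-deployment-structure.xml in the EAR's META-INF. A sketch, assuming you want one WAR to drop the Jackson 1.x provider and use the server's Jackson 2.x RESTEasy provider instead (the module names are the ones shipped with WildFly 9 and the subdeployment name is made up; verify both against your installation):

    <jboss-deployment-structure>
        <sub-deployment name="rest-service.war"> <!-- your WAR's name -->
            <exclusions>
                <!-- keep the Jackson 1.x provider out of this subdeployment -->
                <module name="org.jboss.resteasy.resteasy-jackson-provider"/>
                <module name="org.codehaus.jackson.jackson-jaxrs"/>
            </exclusions>
            <dependencies>
                <!-- use the Jackson 2.x provider instead -->
                <module name="org.jboss.resteasy.resteasy-jackson2-provider"
                        services="import"/>
            </dependencies>
        </sub-deployment>
    </jboss-deployment-structure>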

How does c3p0's JdbcProxyGenerator work (metaprogramming in Java‽)?

I've been using c3p0 with Hibernate for a couple of years. When looking at exception stack traces, I see classes such as com.mchange.v2.c3p0.impl.NewProxyPreparedStatement in the stack. I went looking for the source code for these classes and came across the curious com.mchange.v2.c3p0.codegen package.
In particular, it looks like JdbcProxyGenerator is doing metaprogramming in Java. I'm having a hard time understanding the codegen mechanism and why it is used. The built JAR contains these generated classes, so I'm assuming they are produced during the build, perhaps as part of a two-phase build. The codegen package does not appear in the generated JAR.
Any insight would be appreciated, just for my own curiosity. Thanks!
Yes, you are absolutely right.
c3p0 uses code generation to generate non-reflective proxy implementations of large JDBC interfaces, "Java bean" classes with lots of properties, and some classes containing debug and logging flags (to set up conditional compilation within the build).
You can always see the generated classes by typing ant codegen in the source distribution and then looking at the build/codebase directory. The latest binary distribution of c3p0 (0.9.2-pre2) includes the generated sources in a src.jar file, which you can also find as a Maven artifact at http://repo1.maven.org/maven2/com/mchange/c3p0/0.9.2-pre2-RELEASE/c3p0-0.9.2-pre2-RELEASE-sources.jar
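For a feel of what the generator emits: the generated classes are conceptually just exhaustive delegates. A loose sketch of their shape (not the actual generated source; the real proxies wrap every method of the interface and add pool bookkeeping):

    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Hand-simplified sketch of the *shape* of a generated proxy; the real
    // NewProxyPreparedStatement covers every method of the JDBC interface,
    // which is exactly why it is generated rather than written by hand.
    class NewProxyPreparedStatementSketch {

        private final PreparedStatement inner;

        NewProxyPreparedStatementSketch(PreparedStatement inner) {
            this.inner = inner;
        }

        public boolean execute() throws SQLException {
            try {
                // plain, non-reflective delegation: no java.lang.reflect.Proxy,
                // no per-call reflection overhead
                return inner.execute();
            }
            catch (SQLException e) {
                // the real proxies also notify the pool about the failure so
                // broken Connections can be invalidated
                throw e;
            }
        }
    }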
I hope this helps!

Azure: can we check if a setting exists before trying to read it?

I currently use RoleEnvironment.GetConfigurationSettingValue(propertyName) to get the value of a setting defined in my WebRole config file (csdef + cscfg). Ok, sounds right.
This works well if the setting exists, but fails with an exception if the setting is not defined in the csdef and the cscfg.
I'm migrating an existing app to Azure which has many configuration settings in web.config. In my code, to read a setting value, I'd like to test: if it exists in the WebRole config (csdef + cscfg), read it from there; otherwise read it with ConfigurationManager from web.config.
This would save me from migrating all the settings from my web.config, and would allow customizing a setting once the app is already deployed.
Is there a way to do this?
I don't want to wrap GetConfigurationSettingValue in a try/catch (and read from web.config when I enter the catch), because that's a really ugly approach (and, above all, not efficient!).
Thanks!
Update for the 1.7 Azure SDK.
The CloudConfigurationManager class has been introduced. It allows a single GetSetting call to look in your cscfg first and then fall back to web.config if the key is not found.
http://msdn.microsoft.com/en-us/LIBRARY/jj157248
Pre 1.7 SDK
The simple answer is no (that I know of).
The more interesting topic is to consider configuration as a dependency. I have found it beneficial to treat configuration settings as a dependency so that the backing implementation can be changed over time. That implementation may be a fake for testing or something more complex, like switching from .config/.cscfg to a database implementation for multi-tenant solutions.
Given such a configuration wrapper, you can write that TryGetSetting as an internal method for whatever your source of configuration options is. If this feature is ever added to the RoleEnvironment members, you would only have to change that internal implementation, as sketched below.
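The original context here is .NET, but the wrapper idea is language-agnostic. A minimal sketch in Java (all names made up) of a layered source with a non-throwing lookup:

    import java.util.Optional;

    // Code depends on this interface, never on the concrete source, so the
    // backing implementation (cscfg, web.config, a database, a test fake)
    // can be swapped without touching callers.
    interface Settings {
        Optional<String> tryGet(String key);
    }

    class LayeredSettings implements Settings {

        private final Settings primary;   // e.g. the cscfg-backed source
        private final Settings fallback;  // e.g. the web.config-backed source

        LayeredSettings(Settings primary, Settings fallback) {
            this.primary = primary;
            this.fallback = fallback;
        }

        @Override
        public Optional<String> tryGet(String key) {
            Optional<String> value = primary.tryGet(key);
            return value.isPresent() ? value : fallback.tryGet(key);
        }
    }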

Handling properties in Scala

I'd like to know the most efficient way of handling properties in Scala. I'm tired of having a gazillion property files, XML files, and other types of configuration files in Java, and I wonder if there is a "best practice" for handling these more efficiently in Scala.
Why would you have a gazillion property files?
I'm still using Apache Commons Digester, which works perfectly well in Scala. It's basically a very easy way of making a user-defined XML document map to method calls on a user-defined configurator class (roughly as sketched below). I find it extremely useful when I want to parse configuration data (as opposed to application properties).
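For the curious, the Digester mapping style looks roughly like this (Java API, Digester 3 coordinates, element and class names made up; the same calls work from Scala):

    import java.io.File;

    import org.apache.commons.digester3.Digester;

    public class DigesterDemo {

        // The "configurator" class the XML gets mapped onto.
        public static class AppConfig {
            private String name;
            public void setName(String name) { this.name = name; }
            public String getName() { return name; }
        }

        public static void main(String[] args) throws Exception {
            Digester digester = new Digester();
            // element patterns -> method calls on the object stack
            digester.addObjectCreate("config", AppConfig.class);
            digester.addBeanPropertySetter("config/name", "name");

            // parses <config><name>demo</name></config>
            AppConfig config = digester.parse(new File("config.xml"));
            System.out.println(config.getName());
        }
    }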
For application properties, you might either use a dependency injection framework (like Spring) or just plain old property files. I'd also be interested to see if Scala offers anything on top of this, though.
EDIT: Typesafe Config gives you a simple and powerful solution for configuration: https://github.com/typesafehub/config
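A taste of what it buys you over flat .properties files (HOCON syntax, keys made up): nesting and substitution in one file, with single-key overrides from the command line:

    # application.conf (HOCON)
    app {
      host = "localhost"
      port = 8080
      # reuse values instead of repeating them
      url = "http://"${app.host}":"${app.port}
    }
    # override one key at launch: -Dapp.port=9090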
ORIGINAL (possibly not very useful):
Quoting from "Programming in Scala":
"In Scala, you can configure via Scala code itself."
Scala's runtime linking allows classes to be swapped at runtime, and the general philosophy of these languages tends to favour convention over configuration. If you don't want to deal with a gazillion property files, just don't have them.
Check out Configgy, which looks like a neat little library. It includes nesting and change notification. It also includes a logging library.
Unfortunately, it didn't compile for me on the Mac instances I tried. Let us know if you have better luck and what you think...
Update: solved Mac compilation problems. See this post.