Conflicting jackson-jaxrs provider in WildFly with EAR deployment

We have a Java EE 7 application running on WildFly 9, consisting of an exploded EAR deployment which contains several WAR files, some JARs at EAR level, and a lib folder containing 3rd-party JARs. (I know this is not how one would do it today, but it is like it is.)
One of the WARs contains a JAX-RS REST service, which GETs and POSTs a data object containing a Java 8 OffsetDateTime. Since JSON-B is not yet available, we used @JsonSerialize/@JsonDeserialize from jackson-databind in order to marshal it to and from JSON.
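For illustration, a minimal sketch of such a data object against jackson-databind 2.x; the class, field, and serializer names here are invented, not the actual code from the question:

    import java.io.IOException;
    import java.time.OffsetDateTime;
    import java.time.format.DateTimeFormatter;

    import com.fasterxml.jackson.core.JsonGenerator;
    import com.fasterxml.jackson.core.JsonParser;
    import com.fasterxml.jackson.databind.DeserializationContext;
    import com.fasterxml.jackson.databind.JsonDeserializer;
    import com.fasterxml.jackson.databind.JsonSerializer;
    import com.fasterxml.jackson.databind.SerializerProvider;
    import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
    import com.fasterxml.jackson.databind.annotation.JsonSerialize;

    public class Event {

        // The custom (de)serializers keep the wire format an ISO-8601 string
        // instead of letting the provider introspect OffsetDateTime's fields.
        @JsonSerialize(using = OffsetDateTimeSerializer.class)
        @JsonDeserialize(using = OffsetDateTimeDeserializer.class)
        public OffsetDateTime createdAt;

        public static class OffsetDateTimeSerializer extends JsonSerializer<OffsetDateTime> {
            @Override
            public void serialize(OffsetDateTime value, JsonGenerator gen, SerializerProvider provider)
                    throws IOException {
                gen.writeString(value.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
            }
        }

        public static class OffsetDateTimeDeserializer extends JsonDeserializer<OffsetDateTime> {
            @Override
            public OffsetDateTime deserialize(JsonParser p, DeserializationContext ctxt)
                    throws IOException {
                return OffsetDateTime.parse(p.getValueAsString(), DateTimeFormatter.ISO_OFFSET_DATE_TIME);
            }
        }
    }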
This worked quite well until, due to a change in another WAR, the jackson-jaxrs dependency ended up in the lib folder at EAR level. The marshalling then stopped working: when deserializing, the container tried to set the date string from the JSON directly into the OffsetDateTime field, and when serializing, it wrote all the internal fields of the Java 8 date instead of the formatted string.
I assumed that the processing of the above-mentioned annotations didn't take place and thus the server tried to map it like other simple types. When I deleted the JARs belonging to the jackson-jaxrs dependency, everything worked fine again. The application server then probably used its own version of this very JAR from its modules folder.
So, my question is: what is the difference between having the jackson-jaxrs JAR in the EAR's lib folder in addition to the system-provided module, and having only the latter? Why are the annotations not considered in the first case when de/serializing?

WildFly 9 bundles Jackson 1.9 as a base module, and the annotations are in the org.codehaus.jackson package.
I suspect that the library added recently is the (more recent) Jackson 2.x, where the annotations are now in the com.fasterxml.jackson package.
If that's the case, upgrading to Jackson 2.x (ideally the same version as the one you get from the EAR) should solve the problem.
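The package move makes the split visible: the annotations have the same name in both generations but live in different packages, so a provider built against one package silently ignores annotations from the other.

    // Jackson 1.x (WildFly 9 base module) annotation:
    //   import org.codehaus.jackson.map.annotate.JsonSerialize;
    // Jackson 2.x annotation, same simple name, different package:
    //   import com.fasterxml.jackson.databind.annotation.JsonSerialize;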
Alternatively, isolating your subdeployment from the Jackson JAR present in the EAR might work, but it can be messy with transitive dependencies. See class loading in WildFly.
EDIT: as you confirmed, there are two different versions running. If you can afford it, aligning the versions would most definitely help solve the problem.
Short of that, you might need to isolate each subdeployment so it only sees the expected version. See this answer for an example (which isolates the entire deployment from the base module), and the sketch below.
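As a rough sketch (module names taken from a typical WildFly 9 install; verify them against your modules directory), a jboss-deployment-structure.xml that excludes the Jackson 1.x based RESTEasy provider and imports the Jackson 2.x one might look like this:

    <jboss-deployment-structure>
        <deployment>
            <exclusions>
                <module name="org.jboss.resteasy.resteasy-jackson-provider"/>
            </exclusions>
            <dependencies>
                <module name="org.jboss.resteasy.resteasy-jackson2-provider" services="import"/>
            </dependencies>
        </deployment>
    </jboss-deployment-structure>

The services="import" attribute matters for JAX-RS providers, since they are discovered via META-INF/services.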

Related

.Net Core 1.0.0, multiple projects, and IConfiguration

TL;DR version:
What's the best way to use the .Net Core IConfiguration framework so values in a single appsettings.json file can be used in multiple projects within the solution? Additionally, I need to be able to access the values without using constructor injection (explanation below).
Long version:
To simplify things, let's say we have a solution with 3 projects:
Data: responsible for setting up the ApplicationDbContext, generates migrations
Services: class library with some business logic
WebApi: REST API with a Startup.cs
With this architecture, we have to use a work-around for the "add-migration" issue that remains in Core 1.0.0. Part of this work-around means that we have an ApplicationDbContextFactory class that must have a parameterless constructor (no DI) in order for the migration CLI to use it.
Problem: Right now we have connection strings living in two places:
in ApplicationDbContextFactory for the migration work-around
in the WebApi's "appsettings.json" file
Prior to .Net Core, we could use ConfigurationManager to pull connection strings for all solution projects from one web.config file based on the startup project. How do we use this new IConfiguration framework to store connection strings in one place that need to be used all over the solution? Additionally, I can't inject into the ApplicationDbContextFactory class's constructor... so that further complicates things (more so since they changed how the [FromServices] attribute works).
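For context, this is the kind of file in question; a minimal appsettings.json using the conventional "ConnectionStrings" section (the connection string name and value here are placeholders):

    {
      "ConnectionStrings": {
        "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=MyAppDb;Trusted_Connection=True;"
      }
    }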
Side note: I would like to avoid implementing an entire DI middleware just to get attribute injection, since Core includes its own DI framework. If I can avoid that and still access appsettings.json values, that would be ideal.
If I need to add code let me know; this post is already long enough, so I'll hold off on examples until requested. ;)

Akka default vs runtime configuration

I read the Akka v2.3.11 docs (Java, not Scala) and am still a bit confused about how configuration works. In section 2.9.2 ("Akka and JAR bundling") it states:
Akka’s configuration approach relies heavily on the notion of every module/jar having its own reference.conf file, all of these will be discovered by the configuration and loaded. Unfortunately this also means that if you put/merge multiple jars into the same jar, you need to merge all the reference.confs as well. Otherwise all defaults will be lost and Akka will not function.
Being brand new to Akka and actors, but having been a Java developer for 10+ years, I have never once before seen a reference.conf file in any JAR. So what is Akka talking about here? Are they insinuating that if my Akka project uses, say, Guice and Guava, I need to define reference.conf files for each of these?!?
Also, can someone confirm my understanding of application.conf vs reference.conf? My understanding is that you are supposed to define a reference.conf that contains default Akka configs, and then also define an application.conf that overrides it? If that's true, why use reference.conf in the first place? Why not just use an application.conf? I'm so confused.
Akka uses the Typesafe Config library; have a look at the documentation at https://github.com/typesafehub/config/blob/master/README.md, which explains the difference between reference.conf and application.conf.
Basically, reference.conf comes with the Akka libraries and the user shouldn't touch it (but can use it as a reference), unless merging multiple JARs (not necessarily Akka JARs) that use the Typesafe Config library into one JAR, in which case the reference.confs have to be merged as well.
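A small illustration of the override chain (the setting shown is Akka's real loglevel key; treat the exact values as an example): each library ships its defaults in its own reference.conf, and your application.conf, placed at the root of your classpath, wins on any conflicting key.

    # reference.conf -- shipped inside akka-actor's JAR, library defaults:
    akka {
      loglevel = "INFO"
    }

    # application.conf -- in your application's classpath, your overrides:
    akka {
      loglevel = "DEBUG"
    }

This also explains the merging warning: if several JARs are repackaged into one, only one file can survive at the path reference.conf, so the individual files have to be concatenated (for example with the Maven Shade plugin's AppendingTransformer) or the other libraries' defaults are lost.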

Jackson 1.x to 2.x and the meaning of backwards compatibility

By necessity, I need to upgrade from Jackson 1.x to 2.x. After reading the notes on the release, I thought it would be fine to upgrade, so long as I made the necessary code changes:
http://wiki.fasterxml.com/JacksonRelease20
However, I realized after the fact that I still need to be able to deserialize data serialized with 1.x versions, in the event that we have pre-upgrade data flowing back into the service, which is guaranteed to happen.
Is Jackson 2.x suited to this or not? I understand that 2.x requires recompile, but can it still handle the old serialized format?
So, your case is that data serialized with Jackson 1 will be read by Jackson 2. This shouldn't be a problem at all, since both understand the JSON format.
There is a possibility that you have customizations based on annotations and type hierarchies; even if this is the case, almost everything that is supported in Jackson 1 should be supported in Jackson 2 (this is where backwards compatibility plays a role).
In the remote case that you have something that can only be deserialized with Jackson 1, you can still do a rolling upgrade in your project. The Jackson guys did an amazing job for this scenario when they changed all package names from the old org.codehaus.jackson to com.fasterxml.jackson: both versions can live on your classpath, allowing you to upgrade things based on priorities, or incrementally.
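A minimal sketch of that coexistence (the class and routing logic are invented for the example): because the two generations live in different packages, one class can hold both mappers side by side.

    import java.util.Map;

    public class TransitionalReader {

        // Jackson 1.x mapper, from the org.codehaus.jackson artifacts:
        private final org.codehaus.jackson.map.ObjectMapper jackson1 =
                new org.codehaus.jackson.map.ObjectMapper();

        // Jackson 2.x mapper, from the com.fasterxml.jackson artifacts:
        private final com.fasterxml.jackson.databind.ObjectMapper jackson2 =
                new com.fasterxml.jackson.databind.ObjectMapper();

        // Route pre-upgrade payloads through Jackson 1 until everything is migrated.
        public Map<?, ?> read(String json, boolean legacyPayload) throws Exception {
            return legacyPayload
                    ? jackson1.readValue(json, Map.class)
                    : jackson2.readValue(json, Map.class);
        }
    }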
I have experience with the 3 scenarios I mentioned, since our projects used to use Jackson 1 and we have now moved all of them to the latest and greatest.
Hope this helps,
Jose Luis

Xtext project creation concerns

Before I begin I must admit that I am new to Xtext and the designing of DSLs. Some of my questions on this matter may be somewhat "less than intelligent".
I have created an Xtext project using the IDE, and I am simultaneously using one of the sample projects provided with Xtext as a guide to what I need to do in my language. I am seeing a lot of warnings that are making me nervous.
Apparently, when the development environment creates a new project, it somehow configures that project to use the Java 5 libraries. I am using Java 6, and as a result I get warnings saying that my project is configured for Java 5 and there is no Java 5 on my system (which there isn't!).
I have tried altering the build path so that it uses the Java 6 libraries, but this generates a number of other warnings -- including warnings that the Java 6 referenced in my manifest.mf file is invalid!
Then there are the "plugin.xml" warnings. Apparently, the build.properties file references a file called "plugin.xml" which is not created when the IDE creates the project. I have no idea whether or not this file is important enough to create, and I have no idea what should go into it.
Frankly, I hate warnings. Warnings tend to lead to future problems in what I produce. I like clean compiles and clean deployments. I would like to eliminate these warnings before they start screwing me up down the road (like putting in Java 6 classes that would break against a Java 5 library).
Has anyone been able to eliminate these warnings reliably? Please advise.
For the JDK warning, you simply switch the target environment in the Manifest.MF to one matching your preferred JDK ('JavaSE-1.6' in your case).
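Concretely, that is the standard OSGi execution-environment header in META-INF/MANIFEST.MF:

    Bundle-RequiredExecutionEnvironment: JavaSE-1.6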
The warning regarding the missing plugin.xml will be gone as soon as you have run the grammar generator for the first time, as it will produce such a file.

How does c3p0's JdbcProxyGenerator work (metaprogramming in Java‽)?

I've been using c3p0 with Hibernate for a couple of years. When looking at exception stack traces, I see classes such as com.mchange.v2.c3p0.impl.NewProxyPreparedStatement in the stack. I went looking for the source code for these classes and came across the curious com.mchange.v2.c3p0.codegen package.
In particular, it looks like JdbcProxyGenerator is metaprogramming in Java. I'm having a hard time understanding the codegen mechanism and why it is used. The built JAR contains these generated classes, so I'm assuming they are produced during the build, perhaps as part of a two-phase build. The codegen package itself does not appear in the generated JAR.
Any insight would be appreciated, just for my own curiosity. Thanks!
Yes, you are absolutely right.
c3p0 uses code generation to generate non-reflective proxy implementations of large JDBC interfaces, "java bean" classes with lots of properties, and some classes containing debug and logging flags (to set up conditional compilation within the build).
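To give an idea of what that means (a purely illustrative sketch, not c3p0's actual generated source, and deliberately not implementing the full interface), a non-reflective proxy is simply a class that hand-delegates each method of the wrapped interface; writing that out for the dozens of methods on the big JDBC interfaces is exactly the boilerplate the codegen step automates:

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Illustrative only: delegates directly, with no java.lang.reflect.Proxy
    // or InvocationHandler in the call path.
    class ProxyPreparedStatementSketch {
        private final PreparedStatement inner;

        ProxyPreparedStatementSketch(PreparedStatement inner) {
            this.inner = inner;
        }

        public ResultSet executeQuery() throws SQLException {
            return inner.executeQuery();
        }

        public int executeUpdate() throws SQLException {
            return inner.executeUpdate();
        }

        // ...and so on for the many remaining methods of PreparedStatement,
        // which is why these classes are generated rather than hand-written.
    }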
You can always see the generated classes by typing ant codegen in the source distribution and then looking at the build/codebase directory. The latest binary distribution of c3p0 (0.9.2-pre2) includes the generated sources in a src.jar file, which you can also find as a Maven artifact at http://repo1.maven.org/maven2/com/mchange/c3p0/0.9.2-pre2-RELEASE/c3p0-0.9.2-pre2-RELEASE-sources.jar
I hope this helps!