How does c3p0's JdbcProxyGenerator work (metaprogramming in Java‽)?

I've been using c3p0 with Hibernate for a couple of years. When looking at exception stack traces, I see classes such as com.mchange.v2.c3p0.impl.NewProxyPreparedStatement in the stack. I went looking for the source code for these classes and came across the curious com.mchange.v2.c3p0.codegen package.
In particular, it looks like JdbcProxyGenerator is metaprogramming in Java. I'm having a hard time understanding the codegen mechanism and why it is used. The built jar contains these generated classes, so I'm assuming they are produced during the build, perhaps as part of a two-phase build. The codegen package itself does not appear in the generated jar.
Any insight would be appreciated, just for my own curiosity. Thanks!

Yes, you are absolutely right.
c3p0 uses code generation to generate non-reflective proxy implementations of the large JDBC interfaces, "java bean" classes with lots of properties, and some classes containing debug and logging flags (to set up conditional compilation within the build).
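To make that concrete, here is a minimal, hypothetical Java sketch (not c3p0's actual generated code) contrasting the reflective JDK dynamic-proxy approach with the kind of plain delegating class a code generator can emit once per interface:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ProxyStyles {

    // Reflective style: one generic handler covers any interface, but every
    // call goes through Method.invoke(), which is comparatively slow.
    static Connection reflectiveProxy(final Connection inner) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method m, Object[] args)
                            throws Throwable {
                        // pool bookkeeping would go here
                        return m.invoke(inner, args);
                    }
                });
    }

    // Generated style: the codegen step emits an ordinary class per JDBC
    // interface, with one delegating method per interface method -- no
    // reflection at call time. Writing out the ~50 methods of Connection by
    // hand is exactly the tedium the generator automates. (Sketch only; this
    // class does not actually implement all of Connection here.)
    static final class GeneratedStyleConnection /* implements Connection */ {
        private final Connection inner;

        GeneratedStyleConnection(Connection inner) { this.inner = inner; }

        public Statement createStatement() throws SQLException {
            // pool bookkeeping would go here
            return inner.createStatement();
        }
        // ...and so on for every other Connection method.
    }
}
```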
You can always see the generated classes by running ant codegen in the source distribution and then looking at the build/codebase directory. The latest binary distribution of c3p0 (0.9.2-pre2) includes the generated sources in a src.jar file, which you can also find as a Maven artifact at http://repo1.maven.org/maven2/com/mchange/c3p0/0.9.2-pre2-RELEASE/c3p0-0.9.2-pre2-RELEASE-sources.jar
I hope this helps!

Related

Akka default vs runtime configuration

I read the Akka v2.3.11 docs (Java, not Scala) and am still a bit confused about how configuration works. In section 2.9.2 ("Akka and JAR bundling") it states:
Akka’s configuration approach relies heavily on the notion of every module/jar having its own reference.conf file, all of these will be discovered by the configuration and loaded. Unfortunately this also means that if you put/merge multiple jars into the same jar, you need to merge all the reference.confs as well. Otherwise all defaults will be lost and Akka will not function.
Being brand new to Akka and actors, but having been a Java developer for 10+ years, I have never once seen a reference.conf file in any JAR. So what is Akka talking about here? Are they insinuating that if my Akka project uses, say, Guice and Guava, I need to define reference.conf files for each of these?!?
Also, can someone confirm my understanding of application.conf vs reference.conf? My understanding is that you are supposed to define a reference.conf that contains default Akka configs, and then also define an application.conf that overrides it? If that's true, why use reference.conf in the first place? Why not just use an application.conf? I'm so confused.
Akka uses the Typesafe Config library; have a look at the documentation at https://github.com/typesafehub/config/blob/master/README.md, which explains the difference between reference.conf and application.conf.
Basically, reference.conf ships inside the Akka jars, and users shouldn't touch it (though they can use it as a reference) unless they are merging multiple jars (not necessarily Akka jars) that use the Typesafe Config library into one jar, in which case the reference.conf files have to be merged as well.
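As a minimal sketch of how that layering behaves (the key my-app.pool-size is made up for illustration, not a real Akka setting): a library ships a default in its reference.conf, and your application.conf override wins when ConfigFactory.load() merges the two:

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ConfigDemo {
    public static void main(String[] args) {
        // load() gathers every reference.conf on the classpath (library
        // defaults) and layers application.conf on top, so values in
        // application.conf win wherever keys collide.
        Config config = ConfigFactory.load();

        // Resolves to the library's reference.conf default unless your
        // application.conf overrides the key.
        int poolSize = config.getInt("my-app.pool-size");
        System.out.println("pool-size = " + poolSize);
    }
}
```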

Is there a consistent way to deal with libraries being externalized in Scala 2.11?

I have painfully come across the fact that scala.util.parsing and scala.swing are apparently no longer bundled with Scala 2.11. Each time, I had to google for the right line to add to an sbt configuration, or for the right link to download the jar file from.
In case there are other libraries that moved out, how am I supposed to know these things? Or am I supposed to rely only on questions from people hitting the same problem on Stack Overflow? The Scala Swing project on GitHub does not even document this information.
I like creating Eclipse projects on the fly and making them depend on other projects in the same workspace, without going through sbt, and it is annoying to run into these disappearing-library cases on every computer/workspace where I do this.
The modularization (what you call externalizing) has been discussed for a good while on the scala-users mailing list. But the canonical place to find this information is the release notes. While you may not want to read all of those, I would strongly advise reading at least the release notes for each major version of any language you use. Case in point, the release notes for Scala 2.11.0:
Modularization
The core Scala standard library jar has shed 20% of its bytecode. The modules for xml, parsing, swing as well as the (unsupported) continuations plugin and library are available individually or via scala-library-all. Note that this artifact has weaker binary compatibility guarantees than scala-library – as explained above. The compiler has been modularized internally, to separate the presentation compiler, scaladoc and the REPL. We hope this will make it easier to contribute. In this release, all of these modules are still packaged in scala-compiler.jar. We plan to ship them in separate JARs in 2.12.x.

Checkstyle and Findbugs for changed files only on Jenkins (and/or Hudson)

We work with a lot of legacy code and are thinking about introducing some metrics for new code. Is it possible to have Findbugs and Checkstyle run on changed files only instead of the complete project?
It would be nice to ensure that only files meeting a minimum quality bar get checked in, while leaving the existing code base untouched and unevaluated for now, so as not to confuse people with thousands of issues.
In theory, it would be possible. You would use a shell script to parse the SVN (or whatever SCM) change logs after a given start date, identify the .java files in those change sets, and build two inputs from them:
The Findbugs Maven Plugin expects a comma-separated list of class (or package) names for its onlyAnalyze parameter, so you'll have to translate file names to fully qualified class names (this gets tricky when you're dealing with inner classes; see the sketch below).
The Maven Checkstyle Plugin is even worse: it expects a configuration file for its packageNamesLocation parameter. Unfortunately, only packages are allowed, not individual files, so you'll have to translate file names to packages.
In the above examples I assume that you are using Maven. I am pretty sure similar things can be done with Ant, but I don't know the details.
I myself would probably use a Groovy script instead of a shell script to achieve the above results.
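For what it's worth, here is a rough sketch, in plain Java rather than Groovy, of the file-name-to-class-name translation onlyAnalyze needs. It assumes the conventional src/main/java source root and the hypothetical paths shown, and it ignores the inner-class problem mentioned above:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ChangedClasses {

    // Turn "module/src/main/java/com/example/Foo.java" into "com.example.Foo".
    static String toClassName(String path) {
        return path
                .replaceFirst("^.*src/main/java/", "") // drop the source root
                .replaceFirst("\\.java$", "")          // drop the extension
                .replace('/', '.');                    // path -> package form
    }

    public static void main(String[] args) {
        // In real use these would come from the parsed SCM change log.
        List<String> changed = Arrays.asList(
                "module/src/main/java/com/example/Foo.java",
                "module/src/main/java/com/example/util/Bar.java");

        // Comma-separated list in the form the onlyAnalyze parameter expects.
        String onlyAnalyze = changed.stream()
                .map(ChangedClasses::toClassName)
                .collect(Collectors.joining(","));

        System.out.println(onlyAnalyze); // com.example.Foo,com.example.util.Bar
    }
}
```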
Findbugs also has Ant tasks that can diff two Findbugs result files and report just the deltas, i.e. only the new bugs; see
http://findbugs.sourceforge.net/manual/datamining.html

What are common conventions for using namespaces in Clojure?

I'm having trouble finding good advice and common practices for the use of namespaces in Clojure. I realize that namespaces are not the same as Java packages so I'm trying to tease out the conventions in Clojure, which seem surprisingly hard to determine.
I think I have a pretty good idea how to split functions into clj files and even roughly how I'd want to organize those files into directories. But beyond that I'm having trouble finding the mechanics for my dev environment. Some inter-related questions:
Do I use the same uniqueness conventions for Clojure namespaces as I would normally use for Java packages? [ie backwards-company-domain.project.subsystem]
Should I save my files in a directory structure that matches my namespaces? [ala Java]
If I have multiple namespaces, do I need to compile all of my code into a jar and add it to my classpath to make it accessible?
Should each namespace compile to one jar? Or should I create a single jar that contains clj code from many namespaces?
Thanks...
Regarding your first question: I guess it's ok if you think it helps, but many Clojure projects don't do so -- cf. Compojure (using a top-level compojure ns and various compojure.* ns's for specific functionality), Ring, Leiningen... Clojure itself uses clojure.* (and clojure.contrib.* for contrib libraries), but that's a special case, I suppose.
As for the second: yes! You absolutely must do so, or else Clojure won't be able to find your namespaces. Also note that you mustn't use the underscore in namespace names or the hyphen in filenames, and wherever you use a hyphen in a namespace name, you must use an underscore in the filename (so the ns my.cool-project is defined in a file called cool_project.clj in a directory called my).
On your third question: you need to make sure all your stuff is on the classpath, but it doesn't matter whether it's in a jar, multiple jars, or a mixture of jars and directories on the filesystem... As long as it obeys the correct naming conventions (your point no. 2) you should be fine.
However, do not compile things ahead-of-time if there's no particular reason to do so -- this may prevent your code from being portable across various versions of Clojure without providing any benefits besides a slightly improved loading time.
You'll still need to use AOT compilation sometimes, notably in some Java interop scenarios -- the documentation of the relevant functions / macros always mentions that. There are examples of things requiring AOT in clojure.contrib; I've never needed it, so I can't provide much in the way of details.
On your fourth question: I'd say you should use jars for functional units of code. E.g. Compojure and Ring are packaged as single jars containing many namespaces which together compose the whole package. Also, clojure.contrib is notably packaged as a single jar with multiple unrelated libraries; but that again may be a special case.
On the other hand, a single jar containing all of your project's code together with its dependencies might occasionally be useful for deployment. Check out the Leiningen build tool and its 'uberjar' facility if you think that sort of thing may be useful to you.
Regarding 1: strictly speaking, not necessary, though many Java projects have dropped that convention as well, especially for internal projects or private APIs. Do avoid single-segment namespaces though, which would result in classfiles being generated in the default package.
Regarding 2: yes.
Regarding 3 & 4: packaging and AOT compilation are entirely orthogonal to the question of namespace conventions.

Missing Linq namespaces (Linq to sql, compact framework)

I spent this morning trying to figure out where the System.Linq.Expressions namespace is. Here is what I did:
1. In VS 2008, created a new C#/Smart Device/Windows Mobile 6 Professional SDK/.NET CF v3.5 Class Library.
2. Used SqlMetal (in Program Files/Microsoft SDKs/Windows/v6.0A/Bin) to generate the data context.
3. Added the data context .cs file to the project.
4. Compiled, and got many errors about missing namespaces: System.Data.Linq, System.Data.Linq.Mapping, System.Linq.Expressions.
5. After some research, added a reference to System.Data.Linq.dll from c:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5 (the DLL was not listed directly when I chose Add Reference, so I used the Browse tab to finally locate it; that copy is for the full framework).
6. Compiled again, got fewer errors, but the System.Linq.Expressions namespace is still missing.
The documentation says System.Linq.Expressions is in System.Core.dll, but my System.Core.dll (located in Program Files\Microsoft.NET\SDK\CompactFramework\v3.5\WindowsCE) seems to contain far fewer namespaces than the documentation describes.
Thanks in advance!
The Compact Framework does not support LINQ to SQL. The documentation for System.Data.Linq confirms this: every object there is completely devoid of the "supported in the CF" icon. For comparison, look at the docs for DataTable, which is supported; you'll see a little icon by each supported method/property.
You cannot "add" support by simply referencing a desktop assembly as you did in your step 5. The CF cannot consume full-framework assemblies, for a variety of reasons.
Dynamic code generation (Reflection.Emit) is not available in the .NET CF. This means that a lot of features that depend on it are not available, including the DLR (Dynamic Language Runtime, and hence languages like IronRuby) and LINQ to SQL.
If you just want Linq.Expressions and you are doing your own stuff with it, i.e. not trying to get LINQ to SQL working, then you can use the System.Linq.Expressions implementation from the db4o guys.
I am using it in my project with LINQ to Objects.
db4o linq implementation