The absolute uri: http://www.opensymphony.com/sitemesh/decorator cannot be resolved in either web.xml or the jar files deployed with this application - maven-jetty-plugin

I am using the jetty-maven-plugin and got this error when upgrading from Jetty 8 to Jetty 9.
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-maven-plugin</artifactId>
<version>9.4.0.v20161208</version>

Jetty 9 could not find sitemesh*.jar even though it was there.
You have to add the code below to your jetty-context.xml to make it work:
<Configure class="org.eclipse.jetty.maven.plugin.JettyWebAppContext">
<Call name="setAttribute">
<Arg>org.eclipse.jetty.server.webapp.WebInfIncludeJarPattern</Arg>
<Arg>^$|.*/sitemesh-[^/]*\.jar$</Arg>
</Call>
</Configure>
Add the names of any other JARs you want Jetty to scan to this pattern, as in the sketch below.
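For example, to also scan a hypothetical my-taglib JAR, you could extend the pattern in the second <Arg> (the extra entry is only an illustration):
<Arg>^$|.*/sitemesh-[^/]*\.jar$|.*/my-taglib-[^/]*\.jar$</Arg>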

How to include/copy static files inside final docker image using fabric8 tools

I'm using the Fabric8 Maven tool chain to build and deploy my Camel app on top of OpenShift. I'm using the Camel Boot approach... My Maven profile performs the following goals: clean install docker:build fabric8:json fabric8:appl.
Everything is OK, except that I'm serving a static file (index.html) using Jetty as part of the Camel route app. That file is located in $MY_PROJECT_DIR/src/main/resources, so it goes onto the app's classpath after a normal mvn build. But when using the fabric8 build workflow, my app (the Camel route) can't find that static content on the classpath.
How can I tell the fabric8 plugins to copy my static content into /deployments of the final build image, so that my Camel endpoints can refer to it on the filesystem? I'm looking for something like maven-resources-plugin.
Well, digging into the source code I discovered you have two options to achieve this...
hawt-app-maven-plugin
If you are using hawt-app-maven-plugin [1], like me, you can use the hawt-app.source config property.
During the package/build process, all the contents of the directory specified by hawt-app.source (which defaults to src/main/hawt-app) will be copied to ${project.build.directory}/hawt-app/.
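A minimal sketch, assuming hawt-app.source can be set as a regular Maven property in the pom.xml (the directory name here is only an illustration):
<properties>
<hawt-app.source>${project.basedir}/src/main/static-content</hawt-app.source>
</properties>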
docker-maven-plugin
Using fabric8's docker-maven-plugin assembly configuration [2], you can pass a custom Maven assembly descriptor, like this one:
project's pom.xml
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>${docker.maven.plugin.version}</version>
<configuration>
<images>
<image>
<name>${docker.image}</name>
<build>
<from>${docker.from}</from>
<assembly>
<basedir>/deployments</basedir>
<!-- descriptorRef>hawt-app</descriptorRef -->
<descriptor>${project.basedir}/src/main/resources/hawt-app-custom-assembly.xml</descriptor>
</assembly>
</build>
</image>
</images>
</configuration>
</plugin>
hawt-app-custom-assembly.xml
<assembly ...>
<id>hawt-app</id>
<fileSets>
<fileSet>
<includes>
<include>bin/*</include>
</includes>
<directory>${project.build.directory}/hawt-app</directory>
<outputDirectory>.</outputDirectory>
<fileMode>0755</fileMode>
</fileSet>
<fileSet>
<includes>
<include>lib/*</include>
</includes>
<directory>${project.build.directory}/hawt-app</directory>
<outputDirectory>.</outputDirectory>
<fileMode>0644</fileMode>
</fileSet>
<!-- assembly extension... -->
<fileSet>
<includes>
<include>static-content/*</include>
</includes>
<directory>${project.basedir}/src/main/resources</directory>
<outputDirectory>.</outputDirectory>
<fileMode>0644</fileMode>
</fileSet>
</fileSets>
</assembly>
[1] https://github.com/fabric8io/fabric8/tree/master/hawt-app-maven-plugin
[2] https://maven.fabric8.io/#fabric8:build
[3] http://maven.apache.org/plugins/maven-assembly-plugin/assembly.html

SecurityException when running plain JUnit + Mockito in Eclipse RCP Project

I have an Eclipse RCP Project with multiple plugins. I am writing plain JUnit tests (no dependencies to Eclipse/UI) as separate fragments to the plugin-under-test.
When using Mockito and trying to mock an interface from another plugin (which is exported correctly; I can use the interface in my code), I get a SecurityException related to class signing:
org.mockito.exceptions.base.MockitoException:
Mockito cannot mock this class: interface ch.sbb.polar.client.communication.inf.service.IUserService
Mockito can only mock visible & non-final classes.
If you're not sure why you're getting this error, please report to the mailing list.
at org.mockito.internal.runners.JUnit45AndHigherRunnerImpl$1.withBefores(JUnit45AndHigherRunnerImpl.java:27)
[...]
Caused by: org.mockito.cglib.core.CodeGenerationException: java.lang.reflect.InvocationTargetException-->null
at org.mockito.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:238)
[...]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[...]
Caused by: java.lang.SecurityException: Signers of 'ch.sbb.polar.client.communication.inf.service.IUserService$$EnhancerByMockitoWithCGLIB$$a8bfe723' do not match signers of other classes in package
at java.lang.ClassLoader.checkPackageSigners(ClassLoader.java:361)
at java.lang.ClassLoader.defineClass(ClassLoader.java:295)
... 40 more
When I run the tests as "JUnit Plugin tests", i.e. with an OSGi environment, everything works as expected. But I'd like to use the plain JUnit execution because of speed; in the class under test, I don't need the OSGi environment.
Does anybody know a way to do that?
As is mentioned in the comments, the root cause is that the Eclipse Orbit package of Mockito (which I had added to my target platform) is signed, and because of a bug in the underlying CGLIB, you cannot mock unsigned classes/interfaces with a signed Mockito.
See https://code.google.com/p/mockito/issues/detail?id=393 for the most detailed description. The bug is fixed in CGLIB head, but the fix has not yet appeared in a release. Mockito only uses released versions as dependencies, so the fix is not yet in Mockito, and I don't know when it will land.
Workaround: Provide unsigned Mockito in separate bundle
The workaround is to package the Mockito JAR (and its dependencies) in its own bundle and export the necessary API packages.
When using Maven Tycho, JUnit, Hamcrest, and Mockito, the only way I was able to make this work and resolve all dependency/classpath/classloader issues correctly was the following:
Create a Maven module with the following entries in its pom.xml:
<packaging>eclipse-plugin</packaging>
[...]
<dependencies>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>1.10.19</version>
</dependency>
</dependencies>
[...]
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>copy-test-libs</id>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>lib</outputDirectory>
<stripVersion>true</stripVersion>
<includeScope>runtime</includeScope>
</configuration>
</execution>
</executions>
</plugin>
Use the following entries in the MANIFEST.MF:
Bundle-ClassPath: lib/mockito-core.jar,
lib/objenesis.jar
Export-Package: org.mockito,
org.mockito.runners
Require-Bundle: org.junit;bundle-version="4.11.0";visibility:=reexport,
org.hamcrest.library;bundle-version="1.3.0";visibility:=reexport,
org.hamcrest.core;bundle-version="1.3.0";visibility:=reexport
And finally, in your unit test fragment, add this new bundle as a dependency (see the sketch below).
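For example, assuming the new bundle's symbolic name is com.example.mockito (a hypothetical name), the fragment's MANIFEST.MF would gain an entry like:
Require-Bundle: com.example.mockito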
I ran into this same issue and was able to resolve it by using a more recent Orbit repository which pulls Mockito 2.x:
http://download.eclipse.org/tools/orbit/downloads/drops/R20181128170323/?d
This repository contains Mockito 2.23.0 which uses Byte Buddy instead of CGLIB.
In my target, I simply pull mockito-core 2.23.0 and Byte Buddy Java Agent 1.9.0 from the Orbit repository above.
<unit id="org.mockito" version="2.23.0.v20181106-1534"/>
<unit id="org.mockito.source" version="2.23.0.v20181106-1534"/>
<unit id="net.bytebuddy.byte-buddy-agent" version="1.9.0.v20181106-1534"/>
<unit id="net.bytebuddy.byte-buddy-agent.source" version="1.9.0.v20181106-1534"/>

Missing Java3D on tomcat cartridge

I have a small gear with the Tomcat cartridge. When I try to run a WAR that generates images with Java3D, I get the following exception:
Caused by: java.lang.ClassNotFoundException: javax.media.j3d.Node
As a first step I tried to add Java3D to the classpath by adding this to my pom.xml:
<dependency>
<groupId>java3d</groupId>
<artifactId>j3d-core-utils</artifactId>
<version>1.3.1</version>
<scope>compile</scope>
</dependency>
This added the following artifacts to the final WAR:
[INFO] +- java3d:j3d-core-utils:jar:1.3.1:compile
[INFO] | +- java3d:vecmath:jar:1.3.1:compile
[INFO] | \- java3d:j3d-core:jar:1.3.1:compile
When I deployed the adjusted WAR, the following exception was raised:
Caused by: java.lang.UnsatisfiedLinkError: no J3D in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
As far as I understand the exception, it says there are no native Java3D libraries on java.library.path. So I installed Java3D from the suggested link and also updated j3dcore.jar, j3dutils.jar, and vecmath.jar. catalina.sh was updated as well:
export LD_LIBRARY_PATH=/var/lib/openshift/<my-application-id>/app-root/data/j3d-1_5_2-linux-amd64/lib/amd64
I suppose there is no X11 server to work with, so Java3D has to run in headless mode. It can be set in catalina.sh like this:
JAVA_OPTS=${JAVA_OPTS}" -Djava.awt.headless=true"
Now it seems that all Java3D classes and *.so libraries are found, but there is another problem:
java.awt.HeadlessException
at sun.java2d.HeadlessGraphicsEnvironment.getDefaultScreenDevice(HeadlessGraphicsEnvironment.java:64)
The problem is that the Java3D class Canvas3D can't work in headless mode. The only way around it would be to connect to some X11 server with a screen, which could be done with export DISPLAY=:0.0.
As far as I was able to test, there is no X11 server providing a screen to which Java3D could connect. Because of that, it's not possible to run Java3D on the OpenShift platform with the Tomcat cartridge.
Thanks for your help.
Have you tried adding it to your pom.xml so it gets installed via Maven? Or add the .jar file to your project manually: http://mvnrepository.com/artifact/java3d/j3d-core-utils/1.3.1
You might require more than just the core package.
Since you are deploying a WAR file and not using Maven, I think you would need to download the JAR files and embed them in your WAR file as libraries.
You might also check out this article: https://www.openshift.com/kb/kb-e1087-how-to-include-libraries-jar-files-in-your-java-application-without-using-maven
It looks like there is also a .so file that you would need to include with something like -Djava.library.path (a rough sketch follows).
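For example, reusing the unpack location from the question above (the exact path will differ per gear), catalina.sh could get a line like:
JAVA_OPTS=${JAVA_OPTS}" -Djava.library.path=/var/lib/openshift/<my-application-id>/app-root/data/j3d-1_5_2-linux-amd64/lib/amd64"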
Here is the file with the jars & .so file on java.net http://download.java.net/media/java3d/builds/release/1.5.2/j3d-1_5_2-linux-amd64.zip
After speaking with the DevOps team, it does not seem that package is installed on the servers.

Akka configuration overwriting

I'm trying to override the Akka configuration in my application. I have created an additional lib for the application, which also has an application.conf file since it uses Akka. So I have two of them:
application.conf in my lib:
my-conf {
something = 1
}
application.conf in my app, which uses the lib:
something-else = "foo"
my-conf {
something = 1000
}
When I run the app from IntelliJ IDEA, everything is fine and the lib configuration is overridden. To load the config in my app I'm using a simple ConfigFactory.load() call.
But when I create a jar of my app with mvn clean compile assembly:single and try to run it with this command: java -Xmx4048m -XX:MaxPermSize=512M -Xss256K -classpath myApp.jar com.myapp.example.MyMain I get this error:
Caused by: com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'something-else'
So I decided to rename the conf file in my app and load it like this:
val defConfig = ConfigFactory.load()
val myConfig = ConfigFactory.load("myconf")
val combined = myConfig.withFallback(defConfig)
val config = ConfigFactory.load(combined)
It finds the missing settings, but unfortunately the config from my app doesn't override the config in my lib.
In my lib I load the config in the default way: val settings = ConfigFactory.load()
Also, my-conf.something is an important setting, and I'd like to override it from my app.
What am I doing wrong? Thanks in advance!
Also, I thought there could be an issue with how I create the jar:
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.4</version>
<configuration>
<archive>
<manifest>
<mainClass>com.myapp.example.MyMain</mainClass>
</manifest>
</archive>
<finalName>loadtest</finalName>
<appendAssemblyId>false</appendAssemblyId>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
<executions>
<execution>
<id>dist-assembly</id>
<phase>package</phase>
<goals>
<goal>assembly</goal>
</goals>
</execution>
</executions>
</plugin>
Straight from the Akka documentation:
If you are using Maven to package your application, you can also make use of the Apache Maven Shade Plugin support for Resource Transformers to merge all the reference.confs on the build classpath into one.
This resolved the issue for me.
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
<resource>reference.conf</resource>
</transformer>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<manifestEntries>
<Main-Class>akka.Main</Main-Class>
</manifestEntries>
</transformer>
</transformers>
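For context, a minimal sketch of where that <transformers> fragment sits in the shade plugin configuration (the version shown is only an example):
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.3</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<!-- the <transformers> block shown above goes here -->
</configuration>
</execution>
</executions>
</plugin>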
As far as I understand, your library should provide a file called reference.conf. According to https://github.com/typesafehub/config:
libraries should use a Config instance provided by the app, if any,
and use ConfigFactory.load() if no special Config is provided.
Libraries should put their defaults in a reference.conf on the
classpath.
So, I suggest putting a reference.conf into your library first, to make it clear that it is the default configuration; then you won't need withFallback, since typesafe-config will handle it for you. A sketch of that layout is below.
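A minimal sketch of that layout, reusing the values from the question (the file paths are the usual Maven resource locations):
The library's src/main/resources/reference.conf:
my-conf {
something = 1
}
The application's src/main/resources/application.conf:
something-else = "foo"
my-conf {
something = 1000
}
With these in place, a plain ConfigFactory.load() resolves my-conf.something to 1000, because application.conf always overrides reference.conf.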
Update: I don't remember exactly how maven-assembly-plugin works; it may combine all of the jar files and resources into a single file, meaning that lib/src/main/resources/application.conf would be overwritten by app/src/main/resources/application.conf in your case - yet another reason to use reference.conf.
That's right! Just to add a bit more information related to that reference.conf: go to the
Akka documentation: http://akka.io/docs/?_ga=1.90177882.150089464.1402497958
pick the version you are using, and look for General -> Configuration.
Inside that page, look for 'Listing of the Reference Configuration'; that's all the content you may need for that reference.conf. In my case I just copied it all.
Hope it helps to save some time!

maven-release-plugin: Perform fails with 'working directory "...workspace\target\checkout\workspace" does not exist!'

I have a Maven project that fails when release:perform is called, though release:prepare works as expected.
I have found a bug report (below) which certainly seems to resemble the issue I have, but I'm not entirely sure I understand the problem:
MRELEASE-516
The last few lines of output I get:
[INFO] Executing: cmd.exe /X /C "p4 -d E:\hudson\jobs\myHudsonJob\workspace\target\checkout -p 1.1.1.1:1111: client -d myProjectWorkspace-MavenSCM-E:\hudson\jobs\myHudsonJob\workspace\target\checkout"
[INFO] Executing goals 'deploy'...
[WARNING] Base directory is a file. Using base directory as POM location.
[WARNING] Maven will be executed in interactive mode, but no input stream has been configured for this MavenInvoker instance.
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error executing Maven.
Working directory "E:\hudson\jobs\myHudsonJob\workspace\target\checkout\workspace" does not exist!
From reading the bug report, the possible cause of the error seems to be related to my modules' structure, which I've tried to outline below:
/workspace
|
|+ pom.xml (root pom whose parent is the build pom,
| calling release:perform on this pom)
| [Modules: moduleA and moduleB]
|
|- moduleA
|+ pom.xml (parent is also build pom)
|+ build/pom.xml (the build pom - no custom parent)
|- moduleB
|+ pom.xml (parent is build pom)
From the error it seems that the root pom should be in some common directory inside 'workspace', but I tried that and it doesn't work, nor does it make sense why I would need it.
What does the warning 'Base directory is a file' want me to do instead? It then decides that the base directory is workspace, which means the working directory is not found... any ideas?
Thanks in advance.
EDIT:
Having checked the SCM configuration, it all looks OK to me... in each module and the root pom I have:
<scm>
<connection>
scm:perforce:1.1.1.1:1111://rootToDirectoryContainingRelevantPom
</connection>
<developerConnection>
scm:perforce:1.1.1.1:1111://rootToDirectoryContainingRelevantPom
</developerConnection>
</scm>
EDIT 2:
Maybe I have hit MRELEASE-261?
I got this working by using a newer version of the release plugin. The Maven super POM defines a dependency on v2.0 of the release plugin. If you don't override this, then that version will be used.
You can specify a newer version when you run the plugin
mvn org.apache.maven.plugins:maven-release-plugin:2.2.1:perform
Or you can override the dependency version in your pom
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.2.1</version>
</plugin>
I'm not sure you're facing MRELEASE-516 (which is about release:prepare). However, I wonder if you have the correct <scm> information in each POM. Can you confirm this?
Working directory "E:\hudson\jobs\myHudsonJob\workspace\target\checkout\workspace" does not exist!
I just saw the above line in your log. It looks like you have a screwy path setting somewhere. Do you override the workspace somewhere? Check your configuration and try to eliminate as many of the optional settings as possible.
In my case the same symptoms turned out to be a result of a bug in maven-release-plugin:2.2.1. See MRELEASE-705.
So to get rid of the error, I had to put this into the parent pom:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.0</version>
</plugin>
</plugins>
</build>
This error was occurring for us
Working directory E:\Data\myproject\target\checkout does not exist!
We're in the middle of a large transition of server tools, and Maven's release:prepare appeared to be failing silently, claiming that the tags and version-number changes had been pushed without error. However, after some research, it turned out these things had only been committed to the local git repository, not pushed - even though release:prepare was executing commands to perform the push, it never reported a failure, even with the Maven -e and -X command line parameters.
We're using Maven 3.3.9, maven release plugin 2.5.3, and git client 2.9.2.
Our end solution was to add (or correct, as your case may be) a definition in Maven's ~\.m2\settings.xml file for our git server (origin master), including a username and password with privileges for pushing tags (as well as pushing to master). The id in the server definition needed to be the git server's hostname:
<servers>
<server>
<id>git-server</id>
<username>dan</username>
<password>changeit</password>
</server>
</servers>
With this update, the tag completes on the server and the checkout occurs successfully.