Dropwizard integration test: config resource file not found

I am trying to assemble a Dropwizard integration test with the following app rule:
public static final DropwizardAppRule<MyConfiguration> RULE = new DropwizardAppRule<MyConfiguration>(
        MyApplication.class, ResourceHelpers.resourceFilePath("config_for_test.yml"));
When I run the test I get the following error:
java.lang.IllegalArgumentException: resource config_for_test.yml not found.
The full stack trace is:
java.lang.ExceptionInInitializerError
at sun.misc.Unsafe.ensureClassInitialized(Native Method)
at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:142)
at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1082)
at java.lang.reflect.Field.getFieldAccessor(Field.java:1063)
at java.lang.reflect.Field.get(Field.java:387)
at org.junit.runners.model.FrameworkField.get(FrameworkField.java:69)
at org.junit.runners.model.TestClass.getAnnotatedFieldValues(TestClass.java:156)
at org.junit.runners.ParentRunner.classRules(ParentRunner.java:215)
at org.junit.runners.ParentRunner.withClassRules(ParentRunner.java:203)
at org.junit.runners.ParentRunner.classBlock(ParentRunner.java:163)
at org.junit.runners.ParentRunner.run(ParentRunner.java:308)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: resource config_for_test.yml not found.
at io.dropwizard.testing.ResourceHelpers.resourceFilePath(ResourceHelpers.java:23)
at de.emundo.sortimo.resource.TaskAdditionTest.<clinit>(TaskAdditionTest.java:28)
... 18 more
Caused by: java.lang.IllegalArgumentException: resource ../config_for_test.yml not found.
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)
at com.google.common.io.Resources.getResource(Resources.java:197)
at io.dropwizard.testing.ResourceHelpers.resourceFilePath(ResourceHelpers.java:21)
... 19 more
According to another Stack Overflow entry I should just use the parent folder, ../config_for_test.yml. However, this does not solve the problem.

OK, I just found a solution on my own. Dropwizard looks in ${basedir}/src/test/resources for the configuration file, because ResourceHelpers resolves the resource name against the test classpath and Maven copies src/test/resources onto that classpath by default. When the application is run normally, on the other hand, the working directory is just ${basedir}. Hence, either of the following approaches helps:
1. Use ../../../config_for_test.yml as the resource path for the integration test.
2. Move the config file to the ${basedir}/src/test/resources directory.
I used approach 2 and furthermore moved the corresponding config_for_release.yml to src/main/resources in order to have a symmetric file structure. This, however, leaves the basedir directory empty when the program is run in normal mode (with the arguments server config_for_release.yml). One could adapt the argument to server src/main/resources/config_for_release.yml, but since I do not like such long paths at application startup I chose a different solution: I use the Maven resources plugin to copy the file to the target folder:
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <version>2.7</version>
  <executions>
    <execution>
      <id>copy-resources</id>
      <!-- here the phase you need -->
      <phase>validate</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${basedir}/target</outputDirectory>
        <resources>
          <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
            <includes>
              <include>**/*.yml</include>
            </includes>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
The program is then started with server target/config_for_release.yml. I use the target folder mainly so that the config file is hidden inside the corresponding folder of Eclipse's Package Explorer and I do not accidentally open the wrong configuration file.
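For completeness, a minimal sketch of the resulting test class with the config file in src/test/resources; the smoke-test method and its assertion are hypothetical additions, not part of the original setup:
import io.dropwizard.testing.ResourceHelpers;
import io.dropwizard.testing.junit.DropwizardAppRule;
import org.junit.Assert;
import org.junit.ClassRule;
import org.junit.Test;

public class TaskAdditionTest {

    // Resolves config_for_test.yml from the test classpath,
    // i.e. from src/test/resources after approach 2 above.
    @ClassRule
    public static final DropwizardAppRule<MyConfiguration> RULE =
            new DropwizardAppRule<>(MyApplication.class,
                    ResourceHelpers.resourceFilePath("config_for_test.yml"));

    @Test
    public void applicationStartsUp() {
        // Hypothetical smoke test: the rule has started the app on a local port.
        Assert.assertTrue(RULE.getLocalPort() > 0);
    }
}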

Related

AWS nginx/1.16.1 - "413 Request Entity Too Large" and Spring boot jar

I have a Java project and I am getting the following error when trying to upload a 2 MB file on AWS EBS:
<html>
<head>
<title>413 Request Entity Too Large</title>
</head>
<body>
<center>
<h1>413 Request Entity Too Large</h1>
</center>
<hr>
<center>nginx/1.16.1</center>
</body>
</html>
This question looks like a duplicate; however, the old questions and their accepted answers do not work with the new EBS/nginx version.
I added the file in every suggested folder structure, but no solution works with nginx/1.16.1.
Inside nginx.conf, add:
client_max_body_size 20M;
Likewise add
client_max_body_size 20M;
inside sites-available/default or your active config file. If you are using a separate config for a reverse proxy, that may not be the correct config file; check that you update client_max_body_size in the right file. Once the change is made correctly, reload the Nginx service:
$ sudo systemctl reload nginx.service
I would also suggest a test without the proxy; there may be limitations on the Tomcat server that you have to change.
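To run that proxy-bypass test, a small probe like the following can help (a sketch; the host, port, and path are hypothetical). It POSTs a ~2 MB body straight to the application server port, skipping nginx:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class UploadProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: the app server port (e.g. 8080),
        // not the nginx proxy on port 80.
        URL url = new URL("http://my-host:8080/upload");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/octet-stream");

        byte[] payload = new byte[2 * 1024 * 1024]; // ~2 MB, like the failing upload
        conn.setFixedLengthStreamingMode(payload.length);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload);
        }
        // A 413 here means the app server itself rejects the body;
        // any other status points at nginx as the culprit.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}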
After spending a few hours I finally resolved the issue.
Issue 1:
Spring Boot by default ignores extra folders, so whatever files and directory structure you copy will not be deployed to EBS. In short, .ebextensions is ignored while the jar is created. To fix this, I added the following plugin to the POM:
<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.6</version>
    <executions>
      <execution>
        <id>prepare</id>
        <phase>package</phase>
        <configuration>
          <tasks>
            <unzip src="${project.build.directory}/${project.build.finalName}.jar" dest="${project.build.directory}/${project.build.finalName}" />
            <copy todir="${project.build.directory}/${project.build.finalName}/" overwrite="false">
              <fileset dir="./" includes=".ebextensions/**"/>
            </copy>
            <zip compress="false" destfile="${project.build.directory}/${project.build.finalName}.jar" basedir="${project.build.directory}/${project.build.finalName}"/>
          </tasks>
        </configuration>
        <goals>
          <goal>run</goal>
        </goals>
      </execution>
    </executions>
  </plugin>
</plugins>
Issue 2:
The following configuration worked for me.
Path: .ebextensions/nginx/conf.d (at the project root)
File: proxy.config
files:
  "/etc/nginx/conf.d/01_proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 50M;
      client_body_buffer_size 16k;
container_commands:
  01_reload_nginx:
    command: "sudo service nginx reload"

How should the Fabric8 CD-Pipeline work on OpenShift (without ImageStreams)?

I'm struggling to get the F8 CD-Pipeline to work on OpenShift. I use a Jenkinsfile downloaded from the F8 Jenkinsfile Library for Maven builds with the "CanaryReleaseAndStage" steps. The stage deploy step there looks like the following:
stage('Rollout Staging') {
    kubernetesApply(environment: envStage)
}
I looked up the implementation of kubernetesApply() from the Kubernetes Pipeline Plugin. If no file parameter is present in the call (like here) it applies the Kubernetes/OpenShift resources defined in file "target/classes/META-INF/fabric8/openshift.yml", which is generated upon build.
In this file (which is also uploaded as an artifact to Nexus, so I can easily fetch it) there are three resources defined:
A Service
A Deployment config, containing a Docker image reference (without tag), also containing a ConfigChange trigger listening for an ImageStreamTag 'my-project:latest'
A Route
... but no ImageStream. However, in the build log I see that an image stream definition apparently got generated into a different file:
[INFO] F8: Found tag on ImageStream my-project tag: sha256:c15b56841387a7e0aea960020ccf2efb48f21bd4d12d826e2cd04a94f4d9d748
[INFO] F8: ImageStream my-project written to /home/jenkins/workspace/my-project-dir/target/my-project-is.yml
But I'm afraid that one never gets applied to Kubernetes. Hence there is no image stream in the staging project.
In this configuration the staging deployment cannot even deploy the pod. If I add an image stream manually to the staging project, it deploys but is never updated when new builds occur.
I've updated to the newest fabric8/jenkins image 2.2.331, but it does not seem to work there either.
My pom.xml (the parts essential for f8 building):
<project ...>
  <modelVersion>4.0.0</modelVersion>
  <groupId>my.package</groupId>
  <artifactId>myproject</artifactId>
  <version>1.0-SNAPSHOT</version>
  <properties>
    <fabric8.mode>openshift</fabric8.mode>
    <fabric8.build.strategy>docker</fabric8.build.strategy>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>io.fabric8</groupId>
        <artifactId>fabric8-maven-plugin</artifactId>
        <version>3.2.28</version>
        <configuration>
          <images>
            <image>
              <name>fabric8/my-project</name>
              <build>
                <dockerFileDir>${project.basedir}/src</dockerFileDir>
                <dockerFile>Dockerfile</dockerFile>
              </build>
            </image>
          </images>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>resource</goal>
              <goal>build</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
So I'd like to know:
How is the CD pipeline supposed to work regarding updates to the staging deployments here?
Why is this image stream definition created if it is not applied? Am I missing some configuration here, maybe?
Thanks for any insight!
Any chance of seeing your pom.xml and full build? It sounds like you are using the fabric8-maven-plugin, right? Is it doing an S2I binary build and generating an image stream? It just sounds like something is going wrong and the generated image stream isn't being included in your target/classes/META-INF/fabric8/openshift.yml, maybe?
I wonder if something is going wrong in the order of your Maven goals (e.g. typically fabric8:resource runs first, then fabric8:build, which then adds the ImageStream into the generated YAML files).

Google Closure using ES5 strict mode even though I specified non-strict mode (in minify-maven-plugin configuration)

I'm using the com.samaxes.maven minify-maven-plugin to minify a collection of JS source files written using some of the ES6 features Google Closure supports. Here's the relevant configuration in my POM:
<!-- minify-maven-plugin: Minification using Google Closure -->
<plugin>
  <groupId>com.samaxes.maven</groupId>
  <artifactId>minify-maven-plugin</artifactId>
  <version>1.7.6</version>
  <executions>
    <!-- Creation of the common-[version].js file -->
    <execution>
      <id>common-minify</id>
      <phase>prepare-package</phase>
      <configuration>
        <charset>UTF-8</charset>
        <jsSourceDir>.</jsSourceDir>
        <jsSourceFiles>
          ...
        </jsSourceFiles>
        <jsFinalFile>./js/common-${project.version}.js</jsFinalFile>
        <jsEngine>CLOSURE</jsEngine>
        <closureLanguageIn>ECMASCRIPT6</closureLanguageIn>
        <closureLanguageOut>ECMASCRIPT5</closureLanguageOut>
      </configuration>
      <goals>
        <goal>minify</goal>
      </goals>
    </execution>
    <!-- 2 other similarly configured executions are here. -->
    ...
  </executions>
</plugin>
The problem is, when I run the Maven goal with this configuration, I get the following error message:
[INFO] Creating the merged file [common-1.8.24.js].
[INFO] Creating the minified file [common-1.8.24.min.js].
Jan 03, 2017 12:03:06 PM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: common-1.8.24.js:5577: ERROR - object literals cannot contain duplicate keys in ES5 strict mode
supportsDataForwarding: function () {
^
This looks to me like Google Closure is trying to compile using ES5 Strict mode, even though I specified the non-strict ECMASCRIPT5 mode in my <closureLanguageOut> option (see doc here). Why is it not disabling strict mode?
I had the same problem and found a way to keep the minify-maven-plugin from failing the build when it complains about ES5 strict mode:
<plugin>
  <groupId>com.samaxes.maven</groupId>
  <artifactId>minify-maven-plugin</artifactId>
  <version>1.7.6</version>
  <executions>
    <execution>
      <id>default-minify</id>
      <phase>process-resources</phase>
      <configuration>
        <charset>UTF-8</charset>
        <closureWarningLevels>
          <es5Strict>OFF</es5Strict>
        </closureWarningLevels>
        ...
      </configuration>
      <goals>
        <goal>minify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
You may fine-tune it further using the following documentation: How to tell the Closure Compiler which warnings you want. Hope this helps :)
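For background, the plugin options map onto the Closure Compiler's own Java API roughly as follows (a sketch, assuming a Closure Compiler version of that era; the es5Strict group in the plugin corresponds to DiagnosticGroups.ES5_STRICT):
import com.google.javascript.jscomp.CheckLevel;
import com.google.javascript.jscomp.CompilerOptions;
import com.google.javascript.jscomp.CompilerOptions.LanguageMode;
import com.google.javascript.jscomp.DiagnosticGroups;

public class ClosureConfigSketch {
    public static CompilerOptions options() {
        CompilerOptions options = new CompilerOptions();
        // Equivalent of <closureLanguageIn> / <closureLanguageOut>:
        options.setLanguageIn(LanguageMode.ECMASCRIPT6);
        options.setLanguageOut(LanguageMode.ECMASCRIPT5);
        // Equivalent of <closureWarningLevels><es5Strict>OFF</es5Strict>:
        // turn off the ES5-strict checks so duplicate object-literal keys
        // no longer fail the build.
        options.setWarningLevel(DiagnosticGroups.ES5_STRICT, CheckLevel.OFF);
        return options;
    }
}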

MiniDFSCluster gives ioexception

I am trying to run a test in Hadoop. I have the following code:
System.setProperty("test.build.data", "/folder");
config = new Configuration();
cluster = new MiniDFSCluster(config, 1, true, null);
but new MiniDFSCluster(config, 1, true, null) throws an exception:
java.io.IOException: Cannot run program "du": CreateProcess error=2, The system cannot find the file specified.
at java.lang.ProcessBuilder.start(ProcessBuilder.java:470)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DU.<init>(DU.java:53)
at org.apache.hadoop.fs.DU.<init>(DU.java:63)
at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.<init>(FSDataset.java:333)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.<init>(FSDataset.java:689)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:302)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:417)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:280)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:124)
at ebay.Crawler.TestAll.testinit(TestAll.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:599)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:232)
at junit.framework.TestSuite.run(TestSuite.java:227)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:81)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified.
at java.lang.ProcessImpl.<init>(ProcessImpl.java:92)
at java.lang.ProcessImpl.start(ProcessImpl.java:41)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:463)
... 33 more
Could someone please give me a hint on how to solve this?
Thank you very much.
It looks like the du command is either not present on the system or not on the PATH. If you are using Hadoop on Windows, Cygwin has to be installed. In any case, which du will give you the location of the du binary.
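As a quick sanity check, a sketch like the following reproduces the call Hadoop's DU helper makes (du -sk), so it fails in the same way if du is missing from the PATH:
import java.io.IOException;

public class DuCheck {
    public static void main(String[] args) {
        try {
            // org.apache.hadoop.fs.DU shells out to `du -sk <dir>`;
            // if this fails with CreateProcess error=2, so will MiniDFSCluster.
            int exit = new ProcessBuilder("du", "-sk", ".").start().waitFor();
            System.out.println("du is on the PATH (exit code " + exit + ")");
        } catch (IOException | InterruptedException e) {
            System.out.println("du is not available: " + e.getMessage());
        }
    }
}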
I suspect you're using a Cloudera distribution of Hadoop. Version 1.0.0 of 'vanilla' Hadoop does work on Windows - at least creating and writing to a file does.
If you need to run unit tests in a local Windows environment, try using Maven profile properties to set a version of 1.0.0 in your local Maven config, and specify the 'remote' version in the POM. The global setting will override the POM-specific one.
settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <profiles>
    <profile>
      <id>windows</id>
      <properties>
        <hadoop.version>1.0.0</hadoop.version>
      </properties>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>windows</activeProfile>
  </activeProfiles>
</settings>
pom.xml
<properties>
  <hadoop.version>0.20.2-cdh3u2</hadoop.version>
</properties>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>${hadoop.version}</version>
</dependency>

User and project specific settings in Maven

We develop multiple branches of a project concurrently. Each developer has multiple working copies, each working copy uses its own DB schema. (There will typically be a working copy per branch, but sometimes even more than one working copy per branch.) We need to let Maven know the DB credentials (for the db-migration plugin, for unit tests, for the dev instance of the servlet).
We can't put the credentials in the pom.xml because each developer might use different DB schema names. We can't put the credentials in settings.xml because each developer uses more than one schema.
Where do we put the credentials?
For example, http://code.google.com/p/c5-db-migration/ describes that the DB credentials need to be present in pom.xml, but I would like to externalize them to a file that's not under revision control.
You could put them into a properties file inside the project directory that is excluded from source control.
With Maven it's possible to read properties from an external file by using a <build><filters><filter> element as instructed here.
Read the following answers:
How to read an external properties file in Maven
Reading properties file from Maven POM file
Read a file into a Maven property
or just:
<project>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>properties-maven-plugin</artifactId>
        <version>1.0</version>
        <executions>
          <execution>
            <phase>initialize</phase>
            <goals>
              <goal>read-project-properties</goal>
            </goals>
            <configuration>
              <files>
                <!-- this is the external, uncommitted properties file -->
                <file>dev.properties</file>
              </files>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
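As a usage sketch (the file name and keys such as db.url, db.user, and db.password are hypothetical): with dev.properties excluded from version control, unit tests can read the same file directly, so Maven and the tests share one source of credentials:
import java.io.FileInputStream;
import java.util.Properties;

public class DbCredentials {
    // Loads the same dev.properties the properties-maven-plugin reads,
    // so unit tests and the build agree on the DB credentials.
    public static Properties load() throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("dev.properties")) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties props = load();
        System.out.println("Connecting to " + props.getProperty("db.url"));
    }
}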