How to set up the Robot Framework standalone JAR with SwingLibrary?

I'm using Robot Framework with SwingLibrary to test a Java Swing based application. Since I'm not used to Python and also don't want to set up a Python environment, I decided to go with the Robot Framework standalone JAR (current version 2.8.4).
My problem is the setup in combination with SwingLibrary (version 1.8.0). I don't know where to put the library such that it gets recognized by Robot.
So far, I have the following test case (mytest.txt):
*** Settings ***
Library    SwingLibrary

*** Test Cases ***
MyTestCase
    Start Application    MyApp
I tried putting the standalone JAR together with the test case in a folder, and created a subfolder (called Lib) where I put the SwingLibrary JAR (and later also the extracted JAR contents).
I added the SwingLibrary JAR as well as my own application to the classpath and tried executing Robot the following way:
java -Xbootclasspath/a:Lib/swinglibrary-1.8.0.jar:Lib/MyApp.jar -jar robotframework-2.8.4.jar mytest.txt
and also with
java -jar robotframework-2.8.4.jar mytest.txt
I always get one of the following errors:
[ WARN ] Imported library 'SwingLibrary' contains no keywords
==============================================================================
Mytest
==============================================================================
MyTestCase | FAIL |
No keyword with name 'Start Application' found.
or
[ ERROR ] Error in file 'mytest.txt': Importing test library 'SwingLibrary' failed: ImportError: No module named SwingLibrary

You can use the standalone jar without the -jar option, allowing you to specify the classpath in the standard manner. The main class for the standalone jar is org.robotframework.RobotFramework, so the syntax would be
java -cp robotframework-2.8.4.jar:Lib/swinglibrary-1.8.0.jar:Lib/MyApp.jar org.robotframework.RobotFramework mytest.txt
This is slightly more verbose, but it's standard and avoids any oddities caused by using the non-standard -Xbootclasspath option.
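Note that the colon classpath separator above is for Linux/macOS; on Windows the separator is a semicolon, so the equivalent command would look like this:
java -cp robotframework-2.8.4.jar;Lib\swinglibrary-1.8.0.jar;Lib\MyApp.jar org.robotframework.RobotFramework mytest.txt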

Related

Can a jlinked runtime be deployed with javapackager?

The instructions for javapackager just above Example 2-1 in the Java SE Deployment Guide (Self-Contained Application Packaging) state that a JAR file is required in the -deploy command.
If I use a modular jar, I get this error message:
Exception: java.lang.Exception: Error: Modules are not allowed in srcfiles: [dist\tcdmod.jar].
If I use the equivalent non-modular jar, the resulting package includes the complete runtime. But I want to use the reduced runtime I made with jlink that is in the /dist folder.
Can the javapackager command deploy with a jlink-generated runtime?
How?
The section titled "Customization of the JRE" makes no mention of the javapackager command.
The next section, "Packaging for Modular Applications", has the following line:
Use the Java Packager tool to package modular applications as well as non-modular applications.
Is the Java Packager tool distinct from javapackager? There are no examples using javapackager in this section.
Here is the javapackager command that I used:
javapackager -deploy -native -outdir packages -outfile ToneCircleDrone -srcdir dist -srcfiles tcdplain.jar -appclass com.adonax.tanpura.TCDLaunch -name "ToneCircleDrone" -title "ToneCircleDrone test"
The instructions in the javapackager documentation make no mention of the scenario where a jlink runtime is used. There is a Bundler argument -Bruntime but it is only used to point to an installed runtime other than the system default, AFAIK.
The javapackager provided with JDK 9 and up uses jlink to generate the JRE image:
For self-contained applications, the Java Packager for JDK 9 packages applications with a JDK 9 runtime image generated by the jlink tool. To package a JDK 8 or JDK 7 JRE with your application, use the JDK 8 Java Packager.
https://docs.oracle.com/javase/9/tools/javapackager.htm#JSWOR719
You can even pass arguments to jlink using -BjlinkOptions=<options>
Additionally, -Bruntime is only valid for packages deployed using -deploy -native jnlp
For compiling a modular application, instead of -srcdir, use --module-path <dir>, and then specify the main module using -m <module name>.
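As a rough, untested sketch, the command from the question might be adapted for a modular build like this (the module name com.adonax.tanpura is an assumption, and whether -appclass is still needed alongside -m may depend on your JDK version):
javapackager -deploy -native -outdir packages -outfile ToneCircleDrone --module-path dist -m com.adonax.tanpura -appclass com.adonax.tanpura.TCDLaunch -name "ToneCircleDrone" -title "ToneCircleDrone test" -BjlinkOptions=compress=2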
EDIT: While there is no documentation on -BjlinkOptions, it is present in the javapackager source
jdk.packager/jdk.packager.internal.legacy.JLinkBundlerHelper
https://github.com/teamfx/openjfx-10-dev-rt/blob/bf971fe212e9bd14b164e4c1058bc307734e11b1/modules/jdk.packager/src/main/java/jdk/packager/internal/legacy/JLinkBundlerHelper.java#L96
Example usage: -BjlinkOptions=compress=2 will make javapackager run jlink with the --compress=2 flag, generating the JRE image with ZIP compression.
Additionally, running javapackager with the flag -Bverbose=true will show you exactly which arguments are being passed to jlink, with a line in the output something like this:
userArguments = {strip-debug=1 compress=2}

Adding Spark CSV dependency to Zeppelin

I'm running an EMR cluster with Spark on AWS.
Spark version is 1.6.
When running the following command:
proxy = sqlContext.read.load("/user/zeppelin/ProxyRaw.csv",
                             format="com.databricks.spark.csv",
                             header="true",
                             inferSchema="true")
I get the following error:
Py4JJavaError: An error occurred while calling o162.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at
http://spark-packages.org
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
How can I solve this? I assume I should add a package but how do I install it and where?
There are many ways to add packages in Zeppelin:
One of them is to change the conf/zeppelin-env.sh configuration file, adding the package you need (com.databricks:spark-csv_2.10:1.4.0 in your case) to the submit options, since Zeppelin uses the spark-submit command under the hood:
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.4.0"
But let's say that you don't actually have access to that configuration. You can then use dynamic dependency loading via the %dep interpreter (deprecated):
%dep
z.load("com.databricks:spark-csv_2.10:1.4.0")
This will require that you load the dependencies before launching or restarting the interpreter.
Another way is to add the dependency you need via the interpreter dependency manager, as described in the following link: Dependency Management for Interpreter.
Well,
First you need to download the CSV lib from the Maven repository:
https://mvnrepository.com/artifact/com.databricks/spark-csv_2.10/1.5.0
Check the Scala version that you are using, whether it is 2.10 or 2.11.
When you call spark-shell, spark-submit, or pyspark (or even Zeppelin), you need to add the --jars option with the path to your lib.
Like this:
pyspark --jars /path/to/jar/spark-csv_2.10-1.5.0.jar
Then you can call it as you did above.
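The same --jars option works when launching through spark-submit; for example (the script name here is just a placeholder):
spark-submit --jars /path/to/jar/spark-csv_2.10-1.5.0.jar my_script.py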
You can see another closely related question here: How to add third party java jars for use in pyspark

JUnit tests are failing when upgraded from JDK 1.6 to JDK 1.8 in Maven, but they work in Eclipse

I am using JUnit for code coverage in my project. For the DB I am using DbUnit as a mock DB. When I run JUnit from the Eclipse UI the tests pass, but they fail when run through Maven.
The above setup runs fine on JDK 1.6.25 using Maven, and it started failing when upgraded to 1.8.51. I updated the Maven compiler plugin, but it doesn't help. I am using the following versions: JUnit 4.7, DbUnit 2.4.8, HSQLDB 2.0.0, Maven 2.2.1.
Issue:
-> All test cases which ran fine on Java 1.6 started failing on migrating to JDK 1.8.51.
-> Due to this we faced build failures and also reduced code coverage.
Root Cause:
-> JUnit uses Java reflection to get the test methods from test classes. In Java 1.6 the test method order was returned the same as declared in the source file.
-> But from Java 7 onwards the method order returned by the JVM is not the same as in the source file; it is effectively random.
-> Since our test cases depend on each other, the order change caused them to start failing.
For example, the test cases below use the same data (mock DB) for execution:
-> AddOperationTestCase()
-> EditOperationTestCase()
-> DeleteOperationTestCase()
If delete runs first due to the JVM's random order, the data for add and edit won't be available and they will fail.
Solution:
-> I tried to find options in JUnit and the Surefire plugin to maintain the same order as in the source file, but I could not find a feasible one there.
-> I identified the class in the JUnit library which returns the order of execution and overrode it to run in source-file order.
-> As of now I have added this annotation wrapper to the failing classes, and the build is running successfully.
Link to the wrapper class:
https://somethingididnotknow.wordpress.com/2014/03/07/run-junit-tests-in-order/
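For reference, if you can upgrade to JUnit 4.11 or later, the built-in @FixMethodOrder annotation gives a deterministic order (by method name, not source-file order) without a custom wrapper. A minimal sketch with made-up test names:
import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// NAME_ASCENDING runs test methods in lexicographic order of their names,
// so the numeric prefixes enforce Add -> Edit -> Delete.
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OperationOrderTest {

    @Test
    public void test1AddOperation() { /* create the shared mock DB data */ }

    @Test
    public void test2EditOperation() { /* edits the data created above */ }

    @Test
    public void test3DeleteOperation() { /* deletes the shared data last */ }
}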

JRuby log4j integration

I am currently working on integrating a Java application, General Architecture for Text Engineering (GATE), with a Rails application using JRuby. While integrating JRuby with log4j, I am getting the following error:
0 [main] DEBUG Main.class - Hello world
gate/Gate.java:80:in `<clinit>': java.lang.NoClassDefFoundError: org/apache/log4j/Logger (NativeException)
from gateapp/Main.java:86:in `main'
from test.rb:12
test.rb is the name of the Ruby program.
I tried importing all the Apache log4j libraries and included the class file in the test.rb file.
When I run the Java program alone it runs fine. But when I generate the JAR file and include it in the Ruby file (test.rb), I get the java.lang.NoClassDefFoundError: org/apache/log4j/Logger (NativeException) error. How do I deal with this problem?
You need to make sure the log4j JAR file is on your classpath. One way to do this is to set the CLASSPATH variable in your environment. Another way would be to call require in your Ruby code, like:
require "/some/path/MyStuff.jar"
Here is my config to set it up with the Couchbase Java SDK:
include Java

def setup_log4j
  # Tell spymemcached (used by the Couchbase SDK) to log through log4j
  java.lang.System.setProperty("net.spy.log.LoggerImpl", "net.spy.memcached.compat.log.Log4JLogger")

  # Configure a file appender for the current Rails environment
  fa = Java::OrgApacheLog4j::FileAppender.new
  fa.setName("FileLogger")
  fa.setFile("./log/#{Rails.env}.log")
  fa.setLayout(Java::OrgApacheLog4j::PatternLayout.new("%d %-5p [%c{1}] %m%n"))
  fa.setThreshold(Java::OrgApacheLog4j::Level::INFO)
  fa.setAppend(true)
  fa.activateOptions

  Java::OrgApacheLog4j::Logger.getRootLogger().addAppender(fa)
end
Just beware that I required the log4j.jar file earlier.
It's worth mentioning that there is a project named log4jruby.

How to register a JDBC driver using jruby-complete.jar?

I'm trying to write a script that is executed with the jruby-complete.jar like so:
java -cp derby.jar; -Djdbc.drivers=org.apache.derby.jdbc.EmbeddedDriver -jar jruby-complete.jar -S my_script.rb
I'm using JVM 1.6.0_11 and JRuby 1.4.
In my JRuby script I attempt to connect to the database like this:
connection = Java::com.sql.DriverManager.getConnection("jdbc:derby:path_to_my_DB")
This throws a java.sql.SQLException: "No suitable driver found" exception.
I've tried manually loading the driver into the class loader using Class.forName which gives me the same error.
It looks to me like the class loader being used by the DriverManager is not the same as the current thread's. I've tried setting the current thread's class loader using:
JThread = java.lang.Thread
...
class_loader = JavaLang::URLClassLoader.new(
  [JavaLang::URL.new("jar:file:/derby.jar!/")].to_java(JavaLang::URL),
  JRuby.runtime.jruby_class_loader)
JThread.currentThread().setContextClassLoader(class_loader)
But this doesn't help.
Any ideas?
OK I downloaded jruby-complete.jar and had a go....
This seems to work for me:
java -classpath c:\ruby\db-derby-10.5.3.0-bin\lib\derby.jar;jruby-complete-1.4.0.jar org.jruby.Main -S derby.rb
When using the -jar switch, the -classpath option is ignored (and maybe the CLASSPATH shell variable is too). But on the above line, we put both required JARs on the classpath and pass the class name to execute (i.e. org.jruby.Main). The script being passed in is as per my other answer.
Another option (which I have not tried) would be to alter the jruby-complete.jar manifest file to specify a classpath, as described here:
Adding Classes to the JAR File's Classpath
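In that case you would add a Class-Path entry to the jar's META-INF/MANIFEST.MF, something like the following (untested; the path is resolved relative to the location of jruby-complete.jar):
Class-Path: derby.jar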
First, make sure your driver jar is not corrupted (this made me waste a couple of days one time).
Second, read this about JRuby/Java class loaders: JRuby Wiki
Third (because I haven't played with jruby-complete), try this simple script and then see if you can adapt it as you need.
require 'java'
require 'C:\ruby\db-derby-10.5.3.0-bin\lib\derby.jar' # adjust for your machine

include_class "java.sql.DriverManager"

# Instantiating the embedded driver registers it with the DriverManager
derby = org.apache.derby.jdbc.EmbeddedDriver.new
connection = DriverManager.getConnection("jdbc:derby:derbyDB;create=true")