Polymer async tests time out in Sauce Labs, but not locally

I have created several tests for a Polymer web component.
The tests are written according to the guidelines provided in Polycast #36.
When I run the tests with Web-Component-Tester locally, all is fine and my tests pass.
When I run the tests in a local browser (Chrome and Firefox), all is fine.
I've also set up Travis-CI and Sauce Labs for automated testing.
Travis can run WCT locally in its shell. This works perfectly and my tests pass.
However, when WCT is run with the Sauce plugin enabled and the tests run on Sauce Labs browsers, only the async tests fail.
My test waits for a JS event to be fired.
I presume the event is never received.
The output from WCT is not really helpful; it just complains that done() is never called.
Does anyone else experience the same problems with WCT and Sauce Labs?
If so, does anyone have a solution for these async tests?
Edit 1:
I should add that my component wraps a native WebSocket.
The async tests wait for WebSocket events that the component refires after it catches the underlying WebSocket event.

The issue might have something to do with Sauce Labs' SSL bumping.
I could not figure out why the external connection fails when run in a Sauce browser.
The solution is to stub the WebSocket connection so that only the wrapper gets tested, instead of the wrapper plus a live WebSocket, as sketched below.
An additional advantage of stubbing is, of course, that the tests run much faster.
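Roughly, the stub can look like this. This is a minimal sketch, assuming the component creates its socket via the global WebSocket constructor (synchronously, when attached) and wires its handler through the socket's onmessage property; FakeWebSocket, my-socket-wrapper, and the message-received event are placeholder names, and suite/test/setup/teardown come from WCT's bundled Mocha:

// Placeholder stub: records instances so the test can drive them directly.
var RealWebSocket = window.WebSocket;

function FakeWebSocket(url) {
  this.url = url;
  this.readyState = FakeWebSocket.OPEN;
  FakeWebSocket.instances.push(this);
}
FakeWebSocket.OPEN = 1;
FakeWebSocket.instances = [];
FakeWebSocket.prototype.send = function () {};
FakeWebSocket.prototype.close = function () {};

suite('my-socket-wrapper', function () {
  setup(function () {
    FakeWebSocket.instances = [];
    window.WebSocket = FakeWebSocket;   // install the stub before the element connects
  });
  teardown(function () {
    window.WebSocket = RealWebSocket;   // restore the real constructor
  });

  test('refires message events', function (done) {
    var el = document.createElement('my-socket-wrapper');
    document.body.appendChild(el);
    el.addEventListener('message-received', function () {
      done();                           // completes without any external connection
    });
    // Simulate a server push by invoking the handler the component installed.
    FakeWebSocket.instances[0].onmessage({ data: 'hello' });
  });
});

Because nothing leaves the browser, Sauce Labs' proxy never sees a connection, and the test exercises only the wrapper's refiring logic.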

Related

How to run an Espresso instrumentation test precondition on the PC JVM?

I'm using the Espresso library for Android instrumentation tests. I also need to run preconditions before the tests (creating an account on a web server, and maybe other data in the future).
I realized that instrumentation test code actually runs on the phone's JVM. I cannot run the preconditions from the phone, because I'm planning to use a jar-bundled custom test framework where those preconditions are already implemented.
Is there a way to run preconditions on the PC JVM in an Android instrumentation test?

How would I run integration tests from vivet/googleApi?

I am trying to work out how to integrate this shared class library from GitHub into my code. For starters I just want to run the integration tests, but I cannot work out how to get the test runner to run them.
I created a console application in my main project and added a reference to GoogleMapsApiTest in the console project, but I am not sure how to call the tests from there.
GoogleAPIClassLibrary
I had to download the GUI test runner and build it from GitHub. Link to project
Now I can at least run the tests. I am still not sure how to use the library, but this should at least show how it is supposed to work.
I was able to run the unit tests by downloading the NUnit source code at the link in my post and then browsing to the output DLL of the class project to load the tests. Apparently the GUI test runner is no longer available for download, so hopefully this will help someone else who runs into a need for running tests in NUnit.
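If building the GUI runner from source is a hurdle, the console runner bundled with NUnit 2.x releases can load the same assembly from the command line. A minimal sketch, assuming the test project builds to GoogleMapsApi.Test.dll (a placeholder name):
nunit-console.exe bin\Debug\GoogleMapsApi.Test.dll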

Can't debug my Activator/Play application with Eclipse after migrating to Play 2.3.8

I'm migrating my project from Play 2.2.3 to 2.3.8. It has gone well so far, with a few bumps along the road. The only remaining issue is that I just can't run Activator/Play in debug mode. It runs fine without debug, though.
I'm using:
activator -jvm-debug 9999 run
My app runs fine and the Eclipse debugger binds to port 9999 as expected, but unfortunately it never stops at a breakpoint.
My impression is that debugging is only activated for the JVM that runs Activator, not for the JVM that runs my app, although I have no evidence for this, since my knowledge of Activator isn't advanced enough (I just read somewhere that Activator starts a new JVM for each app).
This is because Activator/Play 2.3.8 forks the run process without copying the debug options. You can disable forking in your build.sbt:
fork in run := false
Or alternatively you can specify the Java debugging options there:
javaOptions in run +=
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=9999"
With the latter you don't need -jvm-debug on the activator command line.
I set the
fork in run := false
in my build.sbt.
It did not stop at my breakpoint in the run() method of my Controller subclass, although later breakpoints are hit when I request a URL and a RESTful service is invoked.
Might this just be a timing issue? I am probably not fast enough to attach the debugger, since Eclipse's Standard (Socket Attach) method requires Play to start before the listener is there. I am wondering how you debug the methods that get hit during startup.
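One standard JDWP approach for that: keep forking enabled (javaOptions only takes effect for a forked run) and change suspend=n to suspend=y in the option shown above, so the forked JVM waits for a debugger before executing any application code:
javaOptions in run +=
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=9999"
The app JVM then blocks at launch until an Eclipse "Remote Java Application" configuration attaches to port 9999, so breakpoints in startup code can be hit as well.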

JaCoCo Sling JUnit integration-test execution

One of our test classes extends RemoteBaseTest, but JaCoCo ignores it completely.
How can I make JaCoCo work with Sling integration testing?
For unit tests everything works as expected.
We are using Adobe CQ 5.6.1.
I see that this issue has been resolved: sling-issue-tracker-2810
but I'm unsure how to implement it. Is it even included in the latest CQ version yet?
If not, how do I add it manually?
I don't know what RemoteBaseTest is, but I assume you are running a JUnit "proxy" test which talks to the Sling server-side JUnit test subsystem and causes the actual tests to run on your CQ server.
If that's correct, the actual test code doesn't run in the client JVM that's running RemoteBaseTest; it runs in the server JVM that's running CQ. So it's on the server JVM that you need to set up JaCoCo to collect coverage data.
If you're running some tests on the client JVM (like common JUnit tests) and some on the server JVM via the Sling testing tools, JaCoCo has functions to merge coverage data coming from different JVMs. We have this as a work in progress in https://issues.apache.org/jira/browse/SLING-1803, which is not fully integrated into Sling yet but should be adaptable to any version of CQ.
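To collect that server-side data, one option is to start the CQ server's JVM with the JaCoCo agent attached and fetch the resulting .exec file after the test run. A minimal sketch, where the agent jar and output paths are placeholders:
-javaagent:/path/to/jacocoagent.jar=destfile=/path/to/jacoco-it.exec,append=false,output=file
destfile, append, and output are standard JaCoCo agent options; the server-side .exec file can then be merged with the client-side one when building the combined report.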

Hudson + JUnit + embedded GlassFish, how to provide domain configuration?

I'm using NetBeans and GlassFish 3.0.1 to create an EJB3 application. I have written a few unit tests, which run via JUnit and make use of the embedded GlassFish. Whenever I run these tests on my development machine (from within NetBeans), it's all good.
Now I would like to let Hudson run those tests. At the moment it fails with a lookup failure on a resource (in this case the datasource of a JPA persistence unit):
[junit] SEVERE: Exception while invoking class org.glassfish.persistence.jpa.JPADeployer prepare method
[junit] java.lang.RuntimeException: javax.naming.NamingException: Lookup failed for 'mvs_devel' in SerialContext
After searching around and trying to learn about this, I believe it is related to the embedded GlassFish not having been configured with resources. In other words it's missing a domain.xml file. Right?
Two questions:
Why does it work with NetBeans on my dev box? What magic does NetBeans do in the background?
How should I provide the file? Where does the embedded GlassFish on the Hudson box expect it?
Hudson is using the same Ant build-scripts (created by NetBeans).
I've read this post about instanceRoot and the EmbeddedFileSystemBuilder, but I don't understand enough of it. Is this needed for every TestCase (embedded GlassFish gets started/stopped for each bean under test)? Is it part of EJBContainer.createEJBContainer()? Again, why is it not necessary to do this when running the tests in NetBeans?
Update
Following Peter's advice I can confirm: when running ant on a freshly checked-out copy of the code, with the same properties Hudson is configured with, the tests get executed!
Ten to one it is a classpath issue, as IDEs tend to swap paths in and out depending on whether you run normally or run unit tests.
Try running the tests from the command line on a freshly checked-out copy from your SCM. Chances are you'll get the same error, and debugging on your local machine is a lot easier than on a remote machine.
When it builds reliably on the command line (in a separate directory), it is time to move to Hudson; a sketch of that loop follows.
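A minimal sketch of that workflow, assuming a Git repository and the NetBeans-generated build.xml (the repository URL is a placeholder; NetBeans Ant projects typically provide a test target):
git clone https://example.com/your/repo.git fresh-copy
cd fresh-copy
ant test
If the same NamingException shows up here, fix the classpath and resource setup in this clean checkout first; Hudson runs the same Ant targets, so it should then pass there as well.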