In Eclipse, (if I remember correctly) I could run a JUnit test almost instantaneously with virtually no startup time. This meant I could do a codechange+test cycle in a couple of seconds.
I've recently migrated to IntelliJ IDEA, which seems to have to "make" the project before running a unit test if you've changed any source code since the last run. This typically takes 20 seconds for me, which is too long, especially for test-driven development.
I can uncheck the "Make before launch" checkbox in the Run Configuration, but then the test is executed without compiling recent changes.
The warnings printed during the "make" indicate that it spends at least some of that time doing aspect weaving. I would imagine that aspects aren't generally wanted for unit testing.
My guess is that Eclipse was constantly compiling in the background every time you changed a source file, and doing so rapidly without doing the aspect weaving.
How can I speed up my codechange+test cycles in IntelliJ?
More info: I have "Compile in background" checked in Compiler Settings. The Java Compiler is ajc in com.springsource.org.aspectj.tools-1.6.8.RELEASE.jar.
Pragmatic answer: switch the compiler from "ajc" to "Eclipse" during test-driven development. Remember to revert it when you're deploying the application!
The options I activated in IntelliJ, speeding up test execution from 20s to 2.5s:
Compiler
Make project automatically
Compile independent modules in parallel
Compiler -> Java Compiler
Use compiler: Eclipse
Generate no warnings
Hi, I have a web application running on the Aurelia CLI.
From what I've read in the documentation, the Aurelia CLI always runs "bundled" and never targets the source files directly. By running the "au run --watch" command, Aurelia "listens" for file changes and recreates app-bundle.js automatically. Sample output from the console:
Starting 'readProjectConfiguration'...
Finished 'readProjectConfiguration'
Starting 'processMarkup'...
Starting 'processCSS'...
Starting 'configureEnvironment'...
Finished 'configureEnvironment'
Starting 'buildJavaScript'...
Finished 'processCSS'
Finished 'processMarkup'
Finished 'buildJavaScript'
Starting 'writeBundles'...
Tracing views/references...
Writing app-bundle.js...
Finished 'writeBundles'
Starting 'reload'...
Finished 'reload'
This is cool, but in my case it leads to a poor developer experience. When I come to work in the morning, any change I make is quickly reflected in app-bundle.js, but after working for some time the "buildJavaScript" step (see console output) takes longer and longer to finish; after a few hours of work it can reach 30-40 seconds! For me, working as a developer and having to test many small changes, this is extremely painful.
From time to time I try stopping the "au run --watch" command and re-executing it; initially things get a bit better, but after some time the problem is back.
My question is: is there a workaround for this, some way to speed it up, or a way to have the app served directly from the source files rather than the bundled version? Could this be due to a memory leak in Aurelia itself?
UPDATE:
Every once in a while it gets so slow that it actually crashes.
This is what I got today (and a few other times) from the console:
==== Details ================================================
[1]: _tickCallback(aka _tickDomainCallback) [internal/process/next_tick.js:~108] [pc=000000C928AFCE81](this=000003B0DF48BE31 <a process with map 0000012166110B71>) {...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
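(For what it's worth, that FATAL ERROR is Node's V8 heap limit being hit, independent of Aurelia. A common stop-gap, assuming a locally installed CLI so the au binary lives under node_modules/.bin, is to raise the limit when launching the watcher; this is a sketch, not an official Aurelia workaround:)
node --max-old-space-size=4096 node_modules/.bin/au run --watch   # 4 GB heap instead of the default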
This is a late answer, but for future reference it's important to point out that this problem has been fixed in more recent Aurelia CLI releases.
The performance issue, together with some major stability issues, was thoroughly discussed in GitHub #293: Error in buildTypeScript: A project cannot be used in two compilations at the same time.
This means that if you update the Aurelia CLI to v0.30 or higher, you'll see significantly better performance and stability.
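For reference, a hedged way to check and bump the locally installed CLI in an npm-based project:
npm ls aurelia-cli                          # shows the version currently installed
npm install aurelia-cli@latest --save-dev   # upgrade to v0.30 or later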
Following only the instructions here - https://www.chromium.org/developers/how-tos/get-the-code I have been able to successfully build and get a Chromium executable which I can then run.
I have been playing around with the code (adding new buttons to the browser, etc.) for learning purposes. Each time I make a change (like adding a new button to the settings toolbar) and build with the ninja command, it takes over 3 hours to finish before I can run the executable. I guess it builds each and every file again.
I have a decently powerful machine (i7, 8GB RAM) running 64-bit Ubuntu. Are there ways to speed up the builds? (At the moment, I have literally just followed the instructions in the above mentioned link and no other optimizations to speed it up.)
Thank you very very much!
If all you're doing is modifying a few files and rebuilding, ninja will only rebuild the objects that were affected by those files. When you run ninja -C ..., the console displays the number of targets that need to be built. If you're modifying only a few files, that should be ~2000 at the high end (modifying popular header files can touch lots of objects). Modifying a single .cpp would result in rebuilding just that object.
Of course, you still have to relink, which can take a very long time. To make linking faster, try using a component build, which keeps everything in separate shared libraries rather than one big one that needs to be relinked for any change. If you're using GN, add is_component_build=true to gn args out/${build_dir}. For GYP, see this page.
You can also peruse faster linux builds and see if any of those tips apply to you. Unfortunately, Chrome is a massive project so builds will naturally be long. However, once you've done the initial build, incremental builds should be on the order of minutes rather than hours.
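As a concrete sketch of that workflow (out/Default is just an example directory name):
gn gen out/Default --args="is_component_build=true"   # one-time setup of a component build
ninja -C out/Default chrome                           # full build once...
ninja -C out/Default chrome                           # ...later runs rebuild only what your edit touched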
Follow the recently updated instructions here:
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/windows_build_instructions.md#Faster-builds
In addition to using component builds you can disable nacl, use jumbo builds, turn off symbols for webcore, etc. Jumbo builds are still experimental at this point but they already help build times and they will gradually help more.
Full builds will always take a long time even with jumbo builds, but component builds should let incremental builds be quite fast in many cases.
For building on Linux, you can see how to build faster at: https://chromium.googlesource.com/chromium/src/+/master/docs/linux_build_instructions.md#faster-builds
Most of them require adding build arguments. To edit build arguments, see the GN build configuration page at: https://www.chromium.org/developers/gn-build-configuration.
You can edit the build arguments on a build directory by:
$ gn args out/mybuild
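For illustration, a hedged set of speed-oriented arguments; flag names can vary between Chromium revisions, so treat these as assumptions to verify against the docs linked above:
# contents of out/mybuild/args.gn -- each trades features or debuggability for speed
is_component_build = true   # many small shared libraries; much faster incremental links
symbol_level = 0            # skip debug symbols entirely
enable_nacl = false         # don't build Native Client
use_jumbo_build = true      # experimental jumbo builds, as mentioned above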
I have a long running project which is compiled as modules for release, but the test suite potentially runs all the tests for every module.
The project is quite large - currently around 1250 tests cases (classes), and pulling in around 4000 classes in total. It's an asunit3 project so all the test cases are listed in one AllTests.as file.
Obviously I don't run all the tests all the time - the suite takes minutes to run, so most of the time I'm running focussed tests but a couple of times a day I run the full suite which includes integration tests and so on.
As of my last few hours of work, I'm no longer able to successfully build and run the whole suite. We have a script that allows us to filter tests using the package name or class name, so I can testpackage['modules'] or testpackage['com'] etc. I can also exclude packages - testallexcept['utils'] and so on.
I'm able to run any and all subsets of the tests, but if I try to test the whole set, I get:
Warning: Failed to parse corrupt data.
If I filter out just a few classes then I'm able to get the swf to compile and open, but it's just a white box and doesn't actually run the tests. If I filter a few more then all is fine. It doesn't appear to matter which ones I filter - as long as I take around 15 test cases out, all is fine (though I haven't found an exact number that is the line between ok / not ok.)
I'm compiling with -benchmark and get the following output:
Initial setup: 34ms
start loading swcs 7ms Running Total: 41ms
Loaded 45 SWCs: 253ms
precompile: 456ms
Files: 4013 Time: 16087ms
Linking... 91ms
SWF Encoding... 833ms
/Users/me/Documents/clients/project/bin/ProjectShellRunner.swf (4888318 bytes)
postcompile: 927ms
Total time: 17312ms
Peak memory usage: 413 MB (Heap: 343, Non-Heap: 70)
mxmlc finished compiling bin/ProjectShellRunner.swf in 18 seconds
As the peak memory usage is over the default heap in mxmlc, I increased it to
VMARGS="-Xmx1024m -Dsun.io.useCanonCaches=false "
This doesn't appear to have helped.
The way asunit3 and projectsprouts are set up pulls all the tests together into one single AllTests.as file. This is now over 2500 lines long and imports all 1250 test cases.
Is there anything I'm missing in terms of hard limits on number of classes, class length, number of imports in one class, etc? Or any settings I'm able to change other than the VM heap for java? I'm using the Flex 4.2 mxmlc compiler.
Obviously I can work around this via a script to run a series of subsets instead of one single suite (see the sketch after the extra info below), but I'd like to understand why this is happening.
Any clues?
Some extra info based on Qs from twitter:
I'm running Mac OS X 10.8.5
mxmlc is running via command line
I've tried forcing it to use the 32 bit runtime - no change
I've switched mxmlc to use headless mode, also no change
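A minimal sketch of the subset workaround mentioned above, assuming the testpackage filter is exposed as a rake task (as projectsprouts setups often are); the package names here are illustrative:
for pkg in modules com utils; do
  rake "testpackage[$pkg]" || exit 1   # run each subset, stop on the first failure
done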
Is it possible to run JUnit tests in IntelliJ in parallel? If so, how do I do this?
I set the "fork" parameter to class level and this didn't do anything - actually, it made everything a bit slower, so i'm unsure what "fork" does that is beneficial?
Is it possible to do this just using IntelliJ, or do I need some fancy test framework and all the hoo-hah that that would involve?
Finally, assuming this is at all possible, can one control the number of forks or threads or whatever they want to call it?
UPDATE: somebody has linked to a question that might answer this (Running JUnit Tests in Parallel in IntelliJ IDEA). I looked at that question prior to posting; I'm unsure what it really "answers". It simply says there is an issue tracker and that this feature has been implemented in IntelliJ. I don't see how to use it anywhere.
UPDATE: what does "didn't do anything" mean? It just makes things slower, which isn't very useful. I mean, maybe your tests run blazingly quickly and you want to slow them down to appreciate some Bach? That is cool. I just want mine to run faster; I'm fed up of Bach.
You can make use of junit-toolbox. This is an extension library for JUnit that is listed on the JUnit site itself.
This extension offers the ParallelSuite. With it, you can create an AllTests class that executes the tests in parallel with nearly no effort. The minimal AllTests could look like the code below, using the pattern feature introduced with junit-toolbox.
import org.junit.runner.RunWith;
import com.googlecode.junittoolbox.ParallelSuite;
import com.googlecode.junittoolbox.SuiteClasses;
@RunWith(ParallelSuite.class)
@SuiteClasses("**/*Test.class")
public class AllTests {}
This will create as many threads for parallel execution as your JVM reports via availableProcessors. To override this you may set the system property maxParallelTestThreads.
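For example, to cap the suite at four worker threads, you could pass the property as a JVM option (e.g. in an IntelliJ run configuration's VM options); the classpath placeholder is yours to fill in:
java -DmaxParallelTestThreads=4 -cp <your-classpath> org.junit.runner.JUnitCore AllTests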
Imagine you have a large project, with several thousands of JUnit tests.
Let's say that running all those tests takes 7 minutes.
This looks short when you build your project from an ant/maven script.
But when you are using Eclipse, you cannot run all your tests very often, because 7 minutes is too long.
So here is the question:
When you modify some classes, is there a way to have JUnit run only the tests that may have been impacted by those changes?
I mean, this sounds feasible using the classloader: after running each test, it's possible to know which classes were loaded for that test, and to store somewhere (even in memory) a signature of each class used by that test.
When JUnit is launched again, it could, for each test, check whether the classes used by that test have been modified since the very last run, and NOT launch the test if nothing changed. (If the test was OK on the last run and no class impacting it has changed, it should still be OK.)
Does someone know if this has been done/implemented already ?
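For what it's worth, a minimal sketch of that bookkeeping (a hypothetical helper, not an existing JUnit feature): hash the bytes of every class a test loaded, and skip the test while none of those hashes change.
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

public class TestSelectionSketch {
    // testName -> (className -> digest of its .class bytes at the last green run)
    private final Map<String, Map<String, String>> lastGreenRun = new HashMap<>();

    public boolean needsRerun(String testName, Set<String> loadedClasses) throws Exception {
        Map<String, String> old = lastGreenRun.get(testName);
        if (old == null) return true;                          // never ran: run it
        for (String cls : loadedClasses)
            if (!Objects.equals(old.get(cls), digest(cls)))
                return true;                                   // a dependency changed
        return false;                                          // nothing changed: skip
    }

    public void recordGreenRun(String testName, Set<String> loadedClasses) throws Exception {
        Map<String, String> sigs = new HashMap<>();
        for (String cls : loadedClasses) sigs.put(cls, digest(cls));
        lastGreenRun.put(testName, sigs);
    }

    private String digest(String className) throws Exception {
        // hash the class file's bytes as found on the current classpath
        String resource = className.replace('.', '/') + ".class";
        try (InputStream in = getClass().getClassLoader().getResourceAsStream(resource)) {
            if (in == null) return "<missing>";
            byte[] bytes = in.readAllBytes();
            return Base64.getEncoder().encodeToString(
                    MessageDigest.getInstance("SHA-256").digest(bytes));
        }
    }
}
The hard part, which this sketch leaves out, is capturing the set of loaded classes per test (e.g. via an instrumenting classloader), which is exactly what the tools below do for you.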
You could try using Infinitest from either Eclipse or IntelliJ.