Aurelia CLI app-bundle automatic update gets slow - gulp

Hi, I have a web application running on the Aurelia CLI.
From what I’ve read in the documentation, the Aurelia CLI always runs “bundled” and never targets the source files directly. By running the “au run --watch” command, Aurelia “listens” for file changes and recreates app-bundle.js automatically. Sample output from the console:
Starting 'readProjectConfiguration'...
Finished 'readProjectConfiguration'
Starting 'processMarkup'...
Starting 'processCSS'...
Starting 'configureEnvironment'...
Finished 'configureEnvironment'
Starting 'buildJavaScript'...
Finished 'processCSS'
Finished 'processMarkup'
Finished 'buildJavaScript'
Starting 'writeBundles'...
Tracing views/references...
Writing app-bundle.js...
Finished 'writeBundles'
Starting 'reload'...
Finished 'reload'
This is cool, but in my case it leads to a poor developer experience. When I come to work in the morning, any change I make is quickly reflected in app-bundle.js, but after working for some time, the “buildJavaScript” step (see console output) takes longer and longer to finish; after a few hours of work it can take up to 30-40 seconds! For me, working as a developer and having to test many small changes, this is extremely painful.
From time to time I stop the “au run --watch” command and re-execute it, and initially things get a bit better, but after a while the problem is back again.
My question is: is there a workaround for this, some way to speed it up, or a way to serve the app directly from the source files rather than the bundled version, or maybe some other solution? Could this be due to a memory leak in Aurelia itself?
UPDATE:
Every once in a while it gets so slow that it actually crashes.
This is what I got today (and other few times) from the console:
==== Details ================================================
[1]: _tickCallback(aka _tickDomainCallback) [internal/process/next_tick.js:~108] [pc=000000C928AFCE81](this=000003B0DF48BE31 <a process with map 0000012166110B71>) {...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

This is a late answer, but for future reference I think it's important to point out that this problem has been fixed in more recent Aurelia CLI releases.
The performance issue, together with some major stability issues, has been thoroughly discussed in GitHub issue #293: "Error in buildTypeScript: A project cannot be used in two compilations at the same time."
This means that if you update the Aurelia CLI to v0.30 or higher, you should see significantly better performance and stability.
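For reference, upgrading the CLI typically looks something like the following (assuming it is installed locally as a dev dependency; adjust if yours is installed globally):
$ npm ls aurelia-cli            # check the currently installed version
$ npm install aurelia-cli@latest --save-dev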

Injecting into an EXE

I really want to inject my C++ program into another (compiled) program. The way I want to do this is to change the first bytes (where the program starts) so that they jump to my own binary (pasted into a code cave, for example), and, when it has finished running, jump back to where the original program would have started.
Is this even possible? And if it is, is it a good/smart idea to do so?
Are there other methods of doing so?
For example:
I wrote a program that writes the current time to a file and then terminates, so if I inject it into Internet Explorer and launch it, it will first write the current time to a file and then start Internet Explorer.
In order to do this, you should start by reading the documentation for PE files, which you can download from Microsoft.
Doing this takes a lot of research and experimenting, which is beyond the scope of Stack Overflow. You should also be aware that the approach depends heavily on the executable you try to patch: it may work with your version, but most likely not with another one. There are also countermeasures against this kind of attack, which may be built into the executable as well as into the OS.
Is it possible?
Yes. Of course, but it's not trivial.
Is it smart?
Depends on what you do with it. Sometimes it may be the only way.
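To make the PE-documentation step a bit more concrete, here is a small, purely illustrative sketch (not part of the original answer) that locates a PE file's entry-point RVA, which is the value the approach in the question would have to redirect. Patching the file and adding a code cave is considerably more involved.

import struct

def entry_point_rva(path):
    # Return the AddressOfEntryPoint RVA of a PE file (illustrative only).
    with open(path, "rb") as f:
        data = f.read()

    # DOS header: the offset of the PE header (e_lfanew) lives at offset 0x3C.
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]

    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")

    # AddressOfEntryPoint sits at offset 16 of the optional header, which
    # starts after the 4-byte signature and the 20-byte COFF file header.
    return struct.unpack_from("<I", data, e_lfanew + 4 + 20 + 16)[0]

if __name__ == "__main__":
    print(hex(entry_point_rva(r"C:\Windows\notepad.exe")))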

How to build Chromium faster?

Following only the instructions here - https://www.chromium.org/developers/how-tos/get-the-code I have been able to successfully build and get a Chromium executable which I can then run.
I have been playing around with the code (adding new buttons to the browser, etc.) for learning purposes. Each time I make a change (like adding a new button in the settings toolbar) and use the ninja command to build, it takes over 3 hours to finish before I can run the executable. It seems to build each and every file again.
I have a decently powerful machine (i7, 8GB RAM) running 64-bit Ubuntu. Are there ways to speed up the builds? (At the moment, I have literally just followed the instructions in the above-mentioned link and applied no other optimizations.)
Thank you very very much!
If all you're doing is modifying a few files and rebuilding, ninja will only rebuild the objects that were affected by those files. When you run ninja -C ..., the console displays the number of targets that need to be built. If you're modifying only a few files, that should be ~2000 at the high end (modifying popular header files can touch lots of objects). Modifying a single .cpp would result in rebuilding just that object.
Of course, you still have to relink, which can take a very long time. To make linking faster, try using a component build, which keeps everything in separate shared libraries rather than one big binary that needs to be relinked for any change. If you're using GN, add is_component_build=true to gn args out/${build_dir}. For GYP, see this page.
You can also peruse faster Linux builds and see if any of those tips apply to you. Unfortunately, Chrome is a massive project, so builds will naturally be long. However, once you've done the initial build, incremental builds should be on the order of minutes rather than hours.
Follow the recently updated instructions here:
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/windows_build_instructions.md#Faster-builds
In addition to using component builds, you can disable NaCl, use jumbo builds, turn off symbols for WebCore, etc. Jumbo builds are still experimental at this point, but they already help build times and will gradually help more.
Full builds will always take a long time even with jumbo builds, but component builds should let incremental builds be quite fast in many cases.
For building on Linux, you can see how to build faster at: https://chromium.googlesource.com/chromium/src/+/master/docs/linux_build_instructions.md#faster-builds
Most of them require adding build arguments. To edit build arguments, see the GN build configuration documentation at: https://www.chromium.org/developers/gn-build-configuration.
You can edit the build arguments on a build directory by:
$ gn args out/mybuild
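For illustration, an args file combining the suggestions above might contain something like the following (argument names change between Chromium versions, so verify them against gn args out/mybuild --list before relying on them):
is_component_build = true
symbol_level = 0
enable_nacl = false
use_jumbo_build = true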

mxmlc - Warning: Failed to parse corrupt data - once project reaches certain size?

I have a long running project which is compiled as modules for release, but the test suite potentially runs all the tests for every module.
The project is quite large - currently around 1250 tests cases (classes), and pulling in around 4000 classes in total. It's an asunit3 project so all the test cases are listed in one AllTests.as file.
Obviously I don't run all the tests all the time - the suite takes minutes to run, so most of the time I'm running focussed tests but a couple of times a day I run the full suite which includes integration tests and so on.
As of my last few hours of work, I'm no longer able to successfully build and run the whole suite. We have a script that allows us to filter tests using the package name or class name, so I can testpackage['modules'] or testpackage['com'] etc. I can also exclude packages - testallexcept['utils'] and so on.
I'm able to run any and all subsets of the tests, but if I try to test the whole set, I get:
Warning: Failed to parse corrupt data.
If I filter out just a few classes then I'm able to get the swf to compile and open, but it's just a white box and doesn't actually run the tests. If I filter a few more then all is fine. It doesn't appear to matter which ones I filter - as long as I take around 15 test cases out, all is fine (though I haven't found an exact number that is the line between ok / not ok.)
I'm compiling with -benchmark and get the following output:
Initial setup: 34ms
start loading swcs 7ms Running Total: 41ms
Loaded 45 SWCs: 253ms
precompile: 456ms
Files: 4013 Time: 16087ms
Linking... 91ms
SWF Encoding... 833ms
/Users/me/Documents/clients/project/bin/ProjectShellRunner.swf (4888318 bytes)
postcompile: 927ms
Total time: 17312ms
Peak memory usage: 413 MB (Heap: 343, Non-Heap: 70)
mxmlc finished compiling bin/ProjectShellRunner.swf in 18 seconds
As the peak memory usage is over the default heap in mxmlc, I increased it to
VMARGS="-Xmx1024m -Dsun.io.useCanonCaches=false "
This doesn't appear to have helped.
The way asunit3 and Project Sprouts are set up pulls all the tests together into one single AllTests.as file. This is now over 2500 lines long and imports all 1250 test cases.
Is there anything I'm missing in terms of hard limits on number of classes, class length, number of imports in one class, etc? Or any settings I'm able to change other than the VM heap for java? I'm using the Flex 4.2 mxmlc compiler.
Obviously I can work around this via a script to run a series of subsets instead of one single suite, but I'd like to understand why this is happening.
Any clues?
Some extra info based on Qs from twitter:
I'm running Mac OS X 10.8.5
mxmlc is running via command line
I've tried forcing it to use the 32 bit runtime - no change
I've switched mxmlc to use headless mode, also no change
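For what it's worth, the subset workaround mentioned above could be driven by something as simple as the following sketch (Python; the runner command and the package names are placeholders for the project's own filtering script):

import subprocess
import sys

# Placeholder package filters; substitute the project's real packages.
SUBSETS = ["modules", "com", "utils"]

exit_code = 0
for subset in SUBSETS:
    # Placeholder invocation of the project's own test-filtering script,
    # i.e. the equivalent of the testpackage['<subset>'] filter described above.
    result = subprocess.call(["./run_tests.sh", subset])
    if result != 0:
        print("Subset failed: %s" % subset)
        exit_code = 1

sys.exit(exit_code)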

Memcache (northscale) socket pool question for Enyim

I'm using Northscale 1.0.0 and need a little help getting it to limp along for long enough to upgrade to the new version. I'm using C# and ASP.NET to work with it using the Enyim libraries. I currently suspect that the application does not have enough connections per the socketPool setting in my app.config. I also noted that the previous developer's code simply treats ANY exception from an attempted Get call to MemCache as if the item isn't in the cache, which (I believe) may be resulting in periodic spikes in calls to the database when the pool gets starved. We've been having oddball load spikes that don't seem to have any relation to server load. I suspect that he is not correctly managing the lifecycle on the connections to Northscale and that we are periodically experiencing starvation in the socket pool as a result, but I'm unable to prove it.
Is there a specific exception I should be looking for when I call the Get method to retrieve items from cache? I'm not really seeing much in the docs that gives me sufficient information on this. Anybody have any sample code on this? I'd even accept java or php code, as I think the .NET libraries were probably based on one of those anyway.
Any ideas?
Thanks,
Will
If you have made the connection to the Membase server (formerly NorthScale) correctly, you typically only get an exception on a 'get' when it's not a hit.

Hudson - save artifacts only when less than 90% passes

I am new at this and I was wondering how I can set things up so that artifacts are saved only if less than 90% of the tests have passed.
Any idea how I can do this?
thanks
This is not currently possible with Hudson. What is the motivation to avoid archiving artifacts on every build?
How about a rather simple workaround: you create a post-build step (or additional build step) that calls your tests from the command line. Be sure to capture all errors so Hudson doesn't count them as a failure. Then you evaluate your condition and set the error level accordingly. In addition, you need to save the reports (probably outside Hudson) before you set the error level, so they are available even (or only) when the build fails. A sketch of such a step is shown after this answer.
My assumption here is that it is OK not to run the tests when building the app fails. However, you can separate the building and testing into two jobs. See here.
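As an illustration of that workaround, the post-build script could look roughly like this (Python sketch; the results file path, its JUnit-style format, the archive directory, and the 90% threshold are assumptions to adapt to your own test runner):

import shutil
import sys
import xml.etree.ElementTree as ET

RESULTS = "build/test-results.xml"    # assumed JUnit-style results file
ARCHIVE_DIR = "/var/reports/archive"  # assumed location outside the workspace

root = ET.parse(RESULTS).getroot()
total = int(root.get("tests", 0))
failed = int(root.get("failures", 0)) + int(root.get("errors", 0))
pass_rate = (total - failed) / float(total) if total else 0.0

# Save the report before setting the exit code, so it survives a failed build.
shutil.copy(RESULTS, ARCHIVE_DIR)

print("pass rate: %.1f%%" % (pass_rate * 100))

# Signal the condition to Hudson via the exit code: fail this step only
# when fewer than 90% of the tests passed.
sys.exit(1 if pass_rate < 0.90 else 0)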