KNIME: Saving workflow stuck at Weka Predictor Internals

It's the n-th time that saving my KNIME workflow has gotten stuck at "Weka Predictor (3.7) - Internals". It is always a Weka node blocking the saving process; eventually I have to force-quit KNIME and lose my changes.
My memory usage is currently roughly 900 MB out of 3300 MB, with 6000 MB being the maximum allowance for my Java VM. KNIME gets about 5% of my CPU, and nothing else is running.
Anybody experiencing the same problem? Any advice or solution?

In File > Preferences > KNIME > KNIME GUI you may want to set the Console View Output Level to "DEBUG" and see if anything interesting or helpful shows up in the output console.

Related

How to solve a stuck Gitlab CI pipeline?

We've been using GitLab CI for some months, and for the last week we've been using a specific runner installed on a VPS. Currently, we are using "shell" as the executor.
Today our pipeline got stuck all of a sudden. When we looked at the server's free RAM, it was only 48 MB out of 996 MB. FYI, we're using CentOS 6.
We've been struggling to find answers, but we're stuck at the moment, and would like to know:
What's causing the pipeline to get stuck?
Is it really because of the low free RAM?
Should we use another executor, perhaps SSH or even Docker?
What are the best practices for dealing with this kind of problem?
We would appreciate any kind of help or directions.
In my case the pipeline was stuck because the only available runner had the option "Can run untagged jobs" set to "No" and the job was indeed untagged. You can fix this either by changing the "Can run untagged jobs" option or by adding a tag to the appropriate section of the ".gitlab-ci.yml" file in the repository; in my case it was the default:tags: section (see the sketch below).
(It seems that your case is much more complicated. However, I've run into this issue twice in a month, and by the second time I had forgotten the fix, so I came back to this page, which looks like an appropriate place to record it. Hope the answer will help someone else.)
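For illustration, here is a minimal .gitlab-ci.yml sketch of the tagging fix; the tag name my-runner-tag and the build-job are placeholders, and the tag must match one configured on the runner:

default:
  tags:
    - my-runner-tag   # placeholder; must match a tag set on the runner

build-job:
  stage: build
  script:
    - echo "building..."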
In my case, the pipeline was stuck because of two things:
The tags specified in the .gitlab-ci.yml did not match those in the runner configuration.
If you specify the simulator in the build command, make sure you specify the right simulator version.
Once I made these changes, everything worked well!
Good luck.
In my case, it happened on a container that had been off for a couple of days for maintenance reasons; I had to clear the runner caches, and it worked!
In my case the Windows gitlab-runner service was not running. Starting it solved it.
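For example, on Windows the service can be checked and started from an elevated prompt (a sketch; the exact service name may differ depending on how the runner was installed):

sc query gitlab-runner
sc start gitlab-runner

Alternatively, the runner binary manages its own service: gitlab-runner status and gitlab-runner start.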

Aurelia CLI app-bundle automatic update gets slow

Hi, I have a web application running on the Aurelia CLI.
From what I've read in the documentation, the Aurelia CLI always runs "bundled" and never targets the source files directly. By running the "au run --watch" command, Aurelia "listens" for file changes and recreates app-bundle.js automatically. Sample output from the console:
Starting 'readProjectConfiguration'...
Finished 'readProjectConfiguration'
Starting 'processMarkup'...
Starting 'processCSS'...
Starting 'configureEnvironment'...
Finished 'configureEnvironment'
Starting 'buildJavaScript'...
Finished 'processCSS'
Finished 'processMarkup'
Finished 'buildJavaScript'
Starting 'writeBundles'...
Tracing views/references...
Writing app-bundle.js...
Finished 'writeBundles'
Starting 'reload'...
Finished 'reload'
This is cool, but in my case it leads to a poor developer experience. When I come to work in the morning, any change I make is readily updated in app-bundle.js, but after working for some time the "buildJavaScript" step (see console output) takes longer and longer to finish; after a few hours of work, even up to 30-40 seconds. For me, working as a developer and having to test many small changes, this is extremely painful.
From time to time I try (and still do) stopping the "au run --watch" command and re-executing it; initially things get a bit better, but after some time the problem is back.
My question would be: is there a workaround, some way to speed this up, a way to have the app served directly from the source files instead of the bundled version, or maybe some other solution? Could this be due to a memory leak in Aurelia itself?
UPDATE:
Every once in a while it gets so slow that it actually crashes.
This is what I got today (and other few times) from the console:
==== Details ================================================
[1]: _tickCallback(aka _tickDomainCallback) [internal/process/next_tick.js:~108] [pc=000000C928AFCE81](this=000003B0DF48BE31 <a process with map 0000012166110B71>) {...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
This is a late answer, but for future reference I think it's important to point out that this problem has been fixed in more recent Aurelia CLI releases.
The performance issue, together with some major stability issues, was thoroughly discussed in GitHub issue #293: "Error in buildTypeScript: A project cannot be used in two compilations at the same time".
This means that if you update the Aurelia CLI to v0.30 or higher, you'll see significantly better performance and stability.
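Updating the CLI is typically just an npm install away (a sketch; the exact command depends on whether the CLI is installed globally or per project):

npm install -g aurelia-cli                  # update the global CLI
npm install aurelia-cli@latest --save-dev   # or the project-local copy

On older versions, a stopgap for the heap-out-of-memory crash is to raise Node's heap limit when launching the watcher; the 4096 MB figure and the entry-point path below are assumptions that may vary by version:

node --max_old_space_size=4096 node_modules/aurelia-cli/bin/aurelia-cli.js run --watch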

How to build Chromium faster?

Following only the instructions here - https://www.chromium.org/developers/how-tos/get-the-code I have been able to successfully build and get a Chromium executable which I can then run.
So I have been playing around with the code (adding new buttons to the browser, etc.) for learning purposes. Each time I make a change (like adding a new button in the settings toolbar) and build with the ninja command, it takes over 3 hours to finish before I can run the executable. I guess it builds each and every file again.
I have a decently powerful machine (i7, 8 GB RAM) running 64-bit Ubuntu. Are there ways to speed up the builds? (At the moment I have literally just followed the instructions in the above-mentioned link, with no other optimizations to speed things up.)
Thank you very very much!
If all you're doing is modifying a few files and rebuilding, ninja will only rebuild the objects that were affected by those files. When you run ninja -C ..., the console displays the number of targets that need to be built. If you're modifying only a few files, that should be ~2000 at the high end (modifying popular header files can touch lots of objects). Modifying a single .cpp would result in rebuilding just that object.
Of course, you still have to relink, which can take a very long time. To make linking faster, try using a component build, which keeps everything in separate shared libraries rather than one big one that needs to be relinked for any change. If you're using GN, add is_component_build=true to gn args out/${build_dir}. For GYP, see this page.
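For example (a sketch; out/Default is a placeholder build directory):

$ gn args out/Default
# in the editor that opens, add the line:
#   is_component_build = true
$ ninja -C out/Default chrome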
You can also peruse faster linux builds and see if any of those tips apply to you. Unfortunately, Chrome is a massive project so builds will naturally be long. However, once you've done the initial build, incremental builds should be on the order of minutes rather than hours.
Follow the recently updated instructions here:
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/windows_build_instructions.md#Faster-builds
In addition to using component builds you can disable NaCl, use jumbo builds, turn off symbols for WebCore, etc. Jumbo builds are still experimental at this point, but they already help build times and will gradually help more.
Full builds will always take a long time even with jumbo builds, but component builds should let incremental builds be quite fast in many cases.
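A hedged example of the corresponding GN args (argument names as of the time of writing; verify what your checkout actually supports with gn args out/Default --list):

is_component_build = true
enable_nacl = false                   # disable NaCl
use_jumbo_build = true                # experimental jumbo builds
remove_webcore_debug_symbols = true   # turn off symbols for WebCore/Blink
symbol_level = 1                      # reduce symbol detail overall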
For building on Linux, you can see how to build faster at: https://chromium.googlesource.com/chromium/src/+/master/docs/linux_build_instructions.md#faster-builds
Most of them require adding build arguments. To edit build arguments, see the GN build configuration page at: https://www.chromium.org/developers/gn-build-configuration.
You can edit the build arguments on a build directory by:
$ gn args out/mybuild

View plot for Node in KNIME_BATCH_APPLICATION

I have been using KNIME 2.7.4 for running an analysis algorithm. I have integrated KNIME with our existing application to run in batch mode using the command below.
java -jar <<KNIME_ROOT_PATH>>\plugins\org.eclipse.equinox.launcher_1.2.0.v20110502.jar -application org.knime.product.KNIME_BATCH_APPLICATION -reset -workflowFile=<<Workflow Archive>> -workflow.variable=<<parameter>>,<<value>>,<<DataType>>
KNIME provides different kinds of plot nodes which I want to use. However, I am running the workflow in batch mode. Is there any option in KNIME where I can pass the node ID and a "View" option as parameters to KNIME_BATCH_APPLICATION?
I would appreciate any suggestions or guidance on how to achieve this.
I posted this question in the KNIME forum and got a satisfactory answer, summarized below:
This requirement does not fit the concept of command-line execution; there is no way for the batch executor to open the view of a specific plot node.
Hence there are two possible solutions:
Solution 1
Write the output of the workflow to a file and use any charting plugin to plot the graph and do the drill-down there.
Solution 2
Use jFreeChart and write the chart to an image using the ImageWriter node; the image can then be displayed on any screen.

Catching the dreaded Blue Screen Of Death

It's a simple problem. Sometimes Windows will just halt everything and throw a BSOD. Game over, please reboot to play another game. Or whatever. Annoying, but not extremely serious...
What I want is simple: I want to catch the BSOD when it occurs. Why? Just for some additional crash logging. It's okay that the system goes blue, but when it happens I just want to log some additional information or perform one additional action.
Is this even possible? If so, how? And what would be the limitations?
Btw, I don't want to do anything when the system recovers; I want to catch it while it happens. This is to allow me one final action. (For example, flushing a file before the system goes down.)
A BSOD happens due to an error in the Windows kernel or, more commonly, in a faulty device driver (which runs in kernel mode). There is very little you can do about it. If it is a driver problem, you can hope the vendor will fix it.
You can configure Windows to create a memory dump upon a BSOD, which will help you troubleshoot the problem. You can get a pretty good idea of the faulting driver by loading the dump into WinDbg and using the !analyze command.
Knowing which driver is causing the problem will let you look for a new driver, but if that doesn't fix the problem, there is little you can do about it (unless you're very good with a hex editor).
UPDATE: If you want to debug this while it is happening, you need to debug the kernel. A good place to pick up more info is the book Windows Internals by Mark Russinovich. Also, I believe there's a bit of info in the help file for WinDbg and there must be something in the device driver kit as well (but that is beyond my knowledge).
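A minimal WinDbg session for inspecting such a dump might look like this (a sketch; the dump and symbol-cache paths are placeholders):

windbg -z C:\Windows\MEMORY.DMP
kd> .sympath srv*C:\symbols*https://msdl.microsoft.com/download/symbols
kd> !analyze -v

!analyze -v prints the bug-check code, the probable faulting module, and a stack trace, which is usually enough to identify the driver.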
The data is stored in what's called "Minidumps".
You can then use debugging tools to explore those dumps. The process is documented here http://forums.majorgeeks.com/showthread.php?t=35246
You have two ways to figure out what happened:
The first is to upload the .dmp files located under C:\Minidump to a Microsoft service, as described here: http://answers.microsoft.com/en-us/windows/wiki/windows_10-update/blue-screen-of-death-bsod/1939df35-283f-4830-a4dd-e95ee5d8669d
or to use their debugger, WinDbg, to read the .dmp file.
NB: You will find several files; you can tell them apart by the file names, which contain the event date.
The second way is to note the error code from the blue screen and search for it on Google and the Microsoft website.
The first method is more accurate and efficient.
Windows can be configured to create a crash dump on blue screens.
Here's more information:
How to read the small memory dump files that Windows creates for debugging (support.microsoft.com)
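For completeness, a hedged sketch of enabling crash dumps from an elevated command prompt via the CrashControl registry key (a value of 3 selects small memory dumps; other values select kernel or complete dumps):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 3 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v MinidumpDir /t REG_EXPAND_SZ /d "%SystemRoot%\Minidump" /f

The same settings are exposed in the GUI under System Properties > Advanced > Startup and Recovery.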