I'm trying to fit a function using leasqr in Octave. This works properly most of the time. Sometimes, however, leasqr fails to converge (I'm not sure why, because the solution it comes up with looks fine).
Until I can figure out why it's not converging, I would like to suppress the output. But whenever leasqr fails to converge I get the following warning:
CONVERGENCE NOT ACHIEVED!
I've tried implementing the answer to this question, but it's not working for me. My code looks like this:
PAGER('/dev/null');          % use /dev/null as the output pager
page_screen_output(1);       % route screen output through the pager
page_output_immediately(1);  % flush output to the pager right away
[fx.k1,fx.lambda1,fx.c1,...
fx.k2,fx.lambda2,fx.c2] = peaktrack_expfit(t,Mn,fnr,mode);
How do I suppress these convergence messages?
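One workaround that may be worth trying (a sketch, assuming an Octave version that provides evalc; I have not verified it against leasqr): evaluate the call inside evalc, which captures everything it prints into a string instead of letting it reach the terminal. Assignments made inside the evaluated string persist in the calling workspace, so fx is filled in as before.
% Hedged sketch: evalc evaluates the string and captures anything it
% prints (including "CONVERGENCE NOT ACHIEVED!") into msg instead of
% printing it. Requires an Octave version that provides evalc.
msg = evalc ("[fx.k1, fx.lambda1, fx.c1, fx.k2, fx.lambda2, fx.c2] = peaktrack_expfit (t, Mn, fnr, mode);");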
I have some code for which Octave spits out many warnings like:
warning: product: automatic broadcasting operation applied
I think this automatic broadcasting could be the problem in my code, which doesn't work yet, but the message is so uninformative that it doesn't help me locate the problem at all. I'd rather have Octave simply fail with an error at the specific location of the broadcast, so that I can go there manually, understand why it was broadcasting, and then fix my code. And even if my code isn't failing because of this but because of some other mistake, in any other programming language I'd still go there and fix it, since I don't like to rely on something being automatically interpreted differently; I want to have clean code.
How do I disable that annoying behavior (generally, all the time, everywhere) and make Octave tell me where the mistake is?
The warning for automatic broadcasting was added in Octave 3.6 and removed again in Octave 4.0. The reason for throwing a warning was that automatic broadcasting was a new feature in 3.6 that could catch users by surprise.
But broadcasting is meant to be used like a normal operator, not to happen by accident. The fact that using it threw a warning made it sound like something that needed to be fixed in the code. It isn't, so don't feel that it is.
Newer versions of Octave will not throw that warning by default. You might as well disable the warning now:
warning ("off", "Octave:broadcast");
How do I disable that annoying behaviour (generally, all the time, everywhere) and make Octave tell me where the mistake is?
You can't disable automatic broadcasting; that would make Octave stop working. It would be the equivalent of, for example, disabling addition and expecting Octave to continue working normally.
You seem to think that automatic broadcasting is the source of your mistake. That cannot be the case: automatic broadcasting does not produce a different result. If you were to disable it, you would simply get an error about nonconformant dimensions instead.
Therefore, assuming you never intended to make use of broadcasting, your program is not working because of some other mistake happening before automatic broadcasting (usually a function returned a row vector and you expected a column vector, or vice-versa).
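As a minimal illustration of that failure mode (hypothetical variable names, not from the question):
x = 1:5;       % row vector, perhaps returned by some function
w = (1:5)';    % column vector that was expected to match x
y = x .* w;    % silently broadcasts to a 5x5 matrix instead of erroring
size (y)       % ans = 5 5 (the surprising result)
With the broadcast warning turned into an error, the third line would stop execution right at the mismatch.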
However, you are obviously using an old version of Octave, and at that time broadcasting was not much used yet. You can make it throw an error instead of a warning, and maybe everything will still work fine (especially if you don't use Octave packages, because they used automatic broadcasting more than Octave core did). You can do this with:
warning ("error", "Octave:broadcast");
warning('error');
will set all warnings to be treated as errors.
For more, see the documentation on warning; there seems to be a way to set only a specific warning to be treated as an error, or to have it display the position that causes the warning.
Note: all of these commands set Octave parameters for the current session only. They can be placed in a startup file so that these options become the default.
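For example (assuming a Unix-like setup, where the per-user startup file is ~/.octaverc):
% ~/.octaverc, executed at the start of every Octave session
warning ("off", "Octave:broadcast");  % make this the default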
printStackTrace() acts as if it runs in its own thread after waiting for input. Here's my code:
import java.util.Scanner;

public class Test {
    public static void main(String[] args) {
        try {
            new Scanner(System.in).nextLine(); // wait for a line of input
            throw new Exception();
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.out.print("STUFF");
    }
}
I sometimes get the expected output (with STUFF at the end), but sometimes I get this:
blabla // the scanner input
java.lang.ExceptionSTUFF
at Test.main(Test.java:7)
and sometimes this:
blabla
java.lang.Exception
STUFF at Test.main(Test.java:7)
Replacing the scanner with System.in.read() yields the same results. Removing the line entirely yields the expected result. In the actual program where I noticed this problem, the stack trace was much longer, and the STUFF output always appeared either as expected or as in the second output above - at the beginning of the second line.
What's causing this, and how can I solve it?
printStackTrace, as per the documentation, prints to the standard error stream:
public void printStackTrace() – Prints this throwable and its backtrace to the standard error stream.
...which is System.err. You are then writing to System.out. Those two are different streams and therefore get flushed to the actual output at different times. Left as it is, the code is equivalent to the problem outlined in this question.
To fix the issue, you could either manually flush your output streams or print your exception to System.out instead of System.err. You could use the variant of printStackTrace that accepts a PrintStream with standard output as a parameter: e.printStackTrace(System.out);
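As a concrete sketch, either option drops into the catch block from the question (note that even with flushing, ordering between System.out and System.err at the console is not strictly guaranteed; flushing just narrows the window):
} catch (Exception e) {
    // Option 1: route the trace through standard output so both
    // writes share one stream and keep their relative order.
    e.printStackTrace(System.out);

    // Option 2: keep standard error, but flush it before the
    // next write to System.out.
    // e.printStackTrace();
    // System.err.flush();
}
System.out.print("STUFF");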
This is actually not strange behaviour at all; Java is designed to work this way. For the most part it's a feature we all love to have, one that makes our code run more efficiently than we actually wrote it. And what 'it' refers to is that the JVM is designed to re-arrange and optimize our code into better bytecode than we mere mortal developers could achieve by hand.
You could look at it a little bit like this: Java is a kind of framework we use throughout our code that will do what we want in the most efficient way possible (that it has been programmed with, at least). The Java API is the API to that framework.
And to tie this back to your code: you're initializing two buffered streams, one for System.out and one for printStackTrace(). When you execute your code, Java will re-arrange it and thread it to run as optimally as it can. This means that whichever stream completes first gets to print to the console.
Java places no value on what gets printed when; that's a value we humans have, since we prefer to read things in a particular order. That's why it's a challenge for us developers to write thread-safe code that doesn't care when it gets executed: given the same input, it should always return the same output.
Since your System.out stream is faster to print than the stack-trace stream, it will probably always print ahead of the stack trace: both are buffered streams, and buffering takes a varying amount of time. Why shouldn't Java give you the stream that finishes first and free up that thread and CPU?
Solution:
You should try to counter this by designing your code in a manner where it doesn't matter which gets printed when.
This is the nature of printing things to the console. Everything (standard out, standard error, etc.) is spooled up to be printed to the console, but because Java is inherently multi-threaded, there's no guarantee of the order in which these items get added to the queue for printing.
Multi-threading can do funky things!
I have a strange problem with Code::Blocks: when I run my program it works, but if I try to step into the program and run it step by step, I get a segmentation fault.
That only happens if I use STL containers. If I do exactly the same thing using arrays, there is no problem.
Did anyone have the same problem or does anyone know how should I solve this?
Edit: the segmentation fault happens right away, just after I step into the program, not at some specific point.
You've corrupted your memory, causing undefined behaviour.
I'd use a memory debugging tool like Valgrind to locate the problem.
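A typical invocation might look like this (the program name is hypothetical):
valgrind --leak-check=full --track-origins=yes ./myprogram
The --track-origins=yes option makes Valgrind report where uninitialised values came from, which often points straight at the code responsible.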
I am working on a custom SSIS component that has 4 asynchronous outputs. It works just fine, but now I have a user request for an enhancement and I am not sure how to handle it. They want to use the component in another context where only 2 of the 4 outputs will be well defined. I foolishly said that this would be trivial for me to support; I planned to just check whether the two "undefined" streams were even connected, and if not, skip that part of the processing.
My problem is that I cannot figure out if an output is connected at run time, I had hoped that the output pipeline or output buffer would be missing. It doesn't look like that is the case; even when they are not hooked up the output and buffer are present.
Does anyone know where I should be looking to see if an output has a downstream consumer or not?
Thanks!
Edit: I was never able to figure out how to do this reliably, so I ended up making this behaviour configurable by the user. It is not automatic like I would have hoped but the difference I found between the BIDS environment and the DTExec environment pushed me to the conclusion that a component probably should not be making assumptions about the component graph it is embedded in.
When I am debugging broken code, after a while the browser announces that the Flash plugin has crashed, and I can't continue debugging my code. Can I prevent the browser from killing Flash?
I am using Firefox.
Going to the debugger on a breakpoint makes the plugin "freeze". This is intentional; it's a breakpoint, after all!
However, from the browser's perspective, the plugin seems to be stuck in some kind of infinite loop. The timeout value varies; my Firefox installation is set to 45 seconds.
To change the timeout value, enter about:config in the URL field and look for the setting dom.ipc.plugins.timeoutSecs. Increase this, or set it to -1 to disable the timeout altogether.
When the plugin crashes, it is in fact not because the browser is "killing" it; rather, the plugin terminates itself when a fatal error occurs. This is necessary to prevent the browser, or even your entire machine, from crashing: there is no way to tell what is going to happen after an error like that. And besides, after the first uncaught error your program will likely not be able to execute even correct code the way you intended, so you won't do any good by continuing a broken debugging session. So it is not a flaw; it is actually a good thing this happens!
However, you can do some things in order to work more effectively (and make your programs better). The most important I can think of right now are:
Learn to use good object-oriented programming techniques and get acquainted with design patterns, if you haven't already done so.
Take extra care to prevent error conditions from happening (e.g. test if an object is null before accessing its properties, give default values to variables when possible, etc.)
Use proper error handling to gracefully catch errors at runtime.
Use unit tests to thoroughly test your code for errors one piece at a time, before debugging in the browser. Getting to know FlexUnit is a good place to start.
EDIT
I should also have said this: a debugger is a useful tool for stepping through your code to find the source of an error, such as a variable not being properly initialized or an unexpected return value. It is not helpful when trying to find out what's happening after a fatal error has occurred, and doing so will not help you fix the code either.