Octave: disable automatic broadcasting warnings

I have some code for which Octave spits out many warnings like
warning: product: automatic broadcasting operation applied
I think this automatic broadcasting could be the problem in my code, which doesn't work yet, but that message is so uninformative that it doesn't help me locate the problem at all. I'd rather have Octave simply fail with an error at the specific location of the broadcast, so that I can go there manually, understand why it was broadcasting, and then fix my code. Also, even if my code doesn't work because of this mistake but because of some other one, in any other programming language I'd still go there and fix it, since I don't like relying on something being automatically interpreted differently; I want clean code.
How do I disable that annoying behavior (generally, all the time, everywhere) and make Octave tell me where the mistake is?

Also, even if my code doesn't work because of this mistake but because of some other one, in any other programming language I'd still go there and fix it, since I don't like relying on something being automatically interpreted differently; I want clean code.
The warning for automatic broadcasting was added in Octave 3.6 and removed in Octave 4.0. The reason for throwing a warning was that automatic broadcasting was a new feature in 3.6 that could catch users by surprise.
But it is meant to be used like a normal operator, not to happen by accident. The fact that using it threw a warning made it sound like something that needed to be fixed in the code; don't treat it that way.
Newer versions of Octave will not throw that warning by default. You might as well disable the warning now:
warning ("off", "Octave:broadcast");
How do I disable that annoying behavior (generally, all the time, everywhere) and make Octave tell me where the mistake is?
You can't disable automatic broadcasting; that would make Octave stop working. It would be the equivalent of, for example, disabling addition and expecting Octave to continue working normally.
You seem to think that automatic broadcasting is the source of your mistake. It cannot be: automatic broadcasting does not cause a different result. If you were to disable automatic broadcasting, you would simply get an error about nonconformant dimensions.
Therefore, assuming you never intended to make use of broadcasting, your program is not working because of some other mistake that happens before the automatic broadcast (usually a function returned a row vector where you expected a column vector, or vice versa).
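For instance, a hypothetical illustration of that row/column mix-up (the variable names are made up for this example):
x = 1:3;       % row vector, perhaps returned by some function unexpectedly
y = (1:3)';    % column vector you actually meant to combine with it
z = x .* y     % silently broadcasts to a 3x3 matrix instead of erroring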
However, you are obviously using an old version of Octave, and at that time broadcasting was not yet much used. You can make it throw an error instead of a warning, and your code may still work fine (especially if you don't use Octave packages, because they used automatic broadcasting more than Octave core). You can do this with:
warning ("error", "Octave:broadcast");

warning('error');
This will set all warnings to be treated as errors.
For more, see the documentation on warning; there seems to be a way to set only a specific warning as an error, or to have it display the position that caused the warning.
Note: all these commands set Octave parameters for the current session only. They can also be written to a startup file, such as ~/.octaverc, so that these options become the default.
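For example, to silence the broadcast warning in every session, a minimal sketch (assuming a per-user startup file at ~/.octaverc):
% in ~/.octaverc, read automatically when Octave starts
warning ("off", "Octave:broadcast");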

Injecting into an EXE

I really want to inject my C++ program into another (compiled) program. The way I want to do this is by changing the first few bytes (where the program starts) to jump to my program's binary (pasted into a code cave, for example) and, when it has finished running, to jump back to where the original program would have started.
Is this even possible? And if it is, is it a good/smart idea to do so?
Are there other methods of doing so?
For example:
I wrote a program that writes the current time to a file and then terminates. If I inject it into Internet Explorer and launch it, it will first write the current time to a file and then start Internet Explorer.
In order to do this, you should start by reading the documentation for PE files, which you can download from Microsoft.
Doing this takes a lot of research and experimenting, which is beyond the scope of Stack Overflow. You should also be aware that the details depend heavily on the executable you try to patch: it may work with your version, but most likely not with another. There are also techniques against this kind of attack; they may be built into the executable as well as into the OS.
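To give a flavor of what that documentation covers, here is a minimal sketch (assuming a Windows toolchain; error handling mostly omitted) that reads the one field such a patch would have to redirect, the entry-point RVA:
#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    IMAGE_DOS_HEADER dos;               /* legacy "MZ" header at offset 0 */
    fread(&dos, sizeof dos, 1, f);
    fseek(f, dos.e_lfanew, SEEK_SET);   /* e_lfanew points to the PE header */
    IMAGE_NT_HEADERS nt;                /* "PE\0\0" signature + file/optional headers */
    fread(&nt, sizeof nt, 1, f);

    printf("AddressOfEntryPoint: 0x%lx\n",
           (unsigned long)nt.OptionalHeader.AddressOfEntryPoint);
    fclose(f);
    return 0;
}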
Is it possible?
Yes. Of course, but it's not trivial.
Is it smart?
Depends on what you do with it. Sometimes it may be the only way.

Leaving traces around the code

Is there a performance issue if I leave my traces all around the code when releasing?
trace("thank you");
Traces are ignored in release builds so there is no performance penalty.
Yes, there is a performance issue if you have traces active.
Some compilers have specific options along the lines of "build release client"; these may or may not produce a build in which all trace statements are removed (ignored).
Even if nothing is listening to the trace statements but they still run, each one performs an extra function call and checks some if-statements, and after that the message is stored in a log file on the computer where the program runs.
So you should find out which compiler you have and whether it automatically removes trace messages when compiling in "release mode". If not, you have to either accept the performance loss or find ways to work around it, for example as sketched below.
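One such workaround is to route all logging through a wrapper that conditional compilation strips from release builds. A sketch, assuming the Flex SDK compiler (mxmlc) and a CONFIG::debug constant you define yourself:
// compile with -define=CONFIG::debug,true (debug) or -define=CONFIG::debug,false (release)
public static function log(msg:String):void
{
    CONFIG::debug
    {
        trace(msg);   // this block is removed entirely when CONFIG::debug is false
    }
}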

How to prevent the browser from killing the Flash plugin while debugging

When I am debugging broken code, after a while the browser announces that the Flash plugin has crashed, and I can't continue debugging my code. Can I prevent the browser from killing Flash?
I am using Firefox.
Going to the debugger on a breakpoint makes the plugin "freeze". This is intentional, it's a breakpoint after all!
However, from the browser's perspective, the plugin seems to be stuck in some kind of infinite loop. The timeout value varies; my Firefox installation is set to 45 seconds.
To change the timeout value, enter about:config in the URL field and look for the setting dom.ipc.plugins.timeoutSecs. Increase it, or set it to -1 to disable the timeout altogether.
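If you want the change to persist, the same preference can be set in the user.js file of your Firefox profile directory (a sketch; the profile path varies per installation):
// in <your profile directory>/user.js
user_pref("dom.ipc.plugins.timeoutSecs", -1);   // -1 disables the hang detector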
When the plugin crashes, it is in fact not because the browser is "killing" it; rather, the plugin terminates itself when a fatal error occurs. This is necessary to prevent the browser, or even your entire machine, from crashing: there is no way to tell what is going to happen after an error like that. And besides, after the first uncaught error your program will likely not be able to execute even correct code the way you intended, so you won't do any good by continuing a broken debugging session. So it is not a flaw; it is actually a good thing this happens!
However, you can do some things in order to work more effectively (and make your programs better). The most important I can think of right now are:
Learn to use good object-oriented programming techniques and get acquainted with design patterns, if you haven't already done so.
Take extra care to prevent error conditions from happening (e.g. test if an object is null before accessing its properties, give default values to variables when possible, etc.)
Use proper error handling to gracefully catch errors at runtime.
Use unit tests to thoroughly test your code for errors one piece at a time, before debugging in the browser. Getting to know FlexUnit is a good place to start.
EDIT
I should also have said this: a debugger is a useful tool for stepping through your code to find the source of an error, such as a variable not being properly initialized or an unexpected return value. It is not helpful for finding out what happens after a fatal error has occurred, and that would not help you fix the code anyway.

expect script running out of sync?

I'm currently modifying a script used to back up Cisco ACE modules' contexts & crypto files. It works absolutely beautifully with one device. However, when I use it on another module, it seems to go completely out of sync and it messes up the script.
From what I can see, the difference is the presence of a line that the ACE module throws up: Warning: Permanently added '[x.x.x.x]' (RSA) to the list of known hosts.\r\r\n This just seems to throw the rest of the script off, even though none of my expect statements are even looking for it!
I've had nothing but nightmares with expect and the way it interprets information from ACE modules; can anyone shed any light on this issue or provide any advice on how to make these devices behave when I script for them?
If you're handling one connection at a time, you should make sure you fully terminate one before opening the next. The simplest way of doing that is to put:
close
wait
At the end of the (foreach) loop over the things to connect to.
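A minimal sketch of that serial pattern (the host list, credentials, and prompts here are hypothetical):
#!/usr/bin/expect -f
foreach host {10.0.0.1 10.0.0.2} {
    spawn ssh admin@$host
    expect "assword:"
    send "secret\r"
    expect "#"
    send "show running-config\r"
    expect "#"
    close    ;# fully terminate this connection...
    wait     ;# ...and reap the process before the loop opens the next
}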
If you were doing multiple connections at once, you'd have to take care to use the -i option with various commands (notably expect, send, and close) and make everything work right, in addition to fixing the things mentioned above. It can be done, but it's considerably more tricky and not worth it if you don't need the parallelism.

Fix common library functions, or abandon them?

Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
//implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation, but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of being passed null, but it never did. It was never an issue because developers were trusted to know what they were doing.
If i now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
if (o == null) throw new EArgumentNullException("o");
//implementation details omitted
}
then people who were happily using the function, and happened not to get an access violation, will suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly, and create a new version of the function? Or do I fix the function to include what it should have had originally?
So now the moral dilemma: do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you declare the old function abandoned and create a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow, ...);
You never, ever, ever want to make a window modal against the desktop. You'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
if (hWndParent == GetDesktopWindow)
throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
...
}
In reality, Microsoft changed the window manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
if (hWndParent == GetDesktopWindow)
hWndParent = 0;
...
}
In my made-up example there is no way to patch the function: if I wasn't given an object, I can't do what I need to do with it.
Do you risk breaking existing code by adding parameter validation? Or do you let existing code continue to be wrong, getting incorrect results?
The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective I would advocate trying to specify methods as precisely as possible (for instance using pre- and post-conditions), but once a method is out there, specification changes are a no-go (or at least you would have to check all occurrences of the method) and a new method would be better.
I'd keep the old function and simply let it emit a warning that notifies you of every (possibly) wrong use, and then I'd just kick the developer who used it wrong until he uses it properly.
You cannot catch everything. What if someone wrote MakeLocation("Ian Boyd", "is stupid")? Would you create a new function or change the function to catch that? No, you would fire the developer (or at least punish him).
Of course this requires that you document what your function requires as input.
This is where having automated tests [unit testing, integration testing, automated functional testing] is great: they give you the power to change existing code with confidence.
When making changes like this, I would suggest finding all usages and ensuring they behave how you believe they should.
I myself would make bug fixes to the existing function rather than duplicate it 99% of the time. If the fix changes behavior a lot and there are a lot of calls to the function, you need to be very sure of your change.
So go ahead, make your change, run your unit tests, then your automated functional tests. Fix any errors and you're golden!
If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impact both of fixing it and of not fixing it. Sometimes the right thing to do with a bug is not to fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you from releasing a fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed at the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour you require. In a world where functions can run to many lines, it is sometimes less likely to introduce new bugs if you preserve a 99%-correct piece of code and address the change without modifying the existing code. Of course, this is not always possible.
Two choices:
Give the error-checking version a new name and deprecate the old version (one version later, have it start issuing warnings, compile-time if possible, run-time if necessary; two versions later, remove it).
[Not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would crash or produce undefined behavior. (That way, users who were taking care in their code don't get any unpleasant surprises.)
It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS switched VC to the C++ standard and you had to update all of your #include <iostream> directives and add using namespace std declarations to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic that will have massive ripple effects, then fixing it could break a lot of code. If it is just an ancillary function, then you may as well fix it. Of course, if you are Microsoft and your other code depends on a bug in one of your own functions, then you probably should fix it, since keeping it is just plain embarrassing. If other devs rely on the bug (that you created), then you may have an obligation to your users not to break the code that you caused to be buggy.
If you are a small company or independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage, then fixing it is the best solution, especially since all it really requires is an added note in the docs for the function, e.g. do not pass NULL, or an exception is thrown if hWnd is the desktop, etc.
Another option, as a sort of compromise, would be to create a wrapper function: a small, inline function that checks the args and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving its code into the new one between the checks.
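A minimal sketch of that wrapper compromise in C++ (the names FooChecked and the exception type are hypothetical; the original pseudo-code's Foo is assumed to exist unchanged):
#include <stdexcept>

void Foo(void *o);   // the existing, unchecked function, left untouched

inline void FooChecked(void *o)
{
    if (o == nullptr)
        throw std::invalid_argument("o must not be null");
    Foo(o);          // delegate to the original, bug and all
}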
In most scenarios it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing its behavior. It is not really a good idea to facilitate (read: encourage) bad coding just because fixing it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, they can just update the code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code to, for example, initialize variables that hadn't been, check for error codes, etc.
To sum up, in your specific example (given the information provided), you should just fix it.