How to prevent the browser from killing the Flash plugin while debugging - actionscript-3

When I am debugging broken code, after a while the browser announces that the Flash plugin has crashed, and I can't continue debugging my code. Can I prevent the browser from killing Flash?
I am using Firefox.

Going to the debugger on a breakpoint makes the plugin "freeze". This is intentional, it's a breakpoint after all!
However, from the browser's perspective, the plugin seems to be stuck in some kind of infinite loop. The timeout value varies; my Firefox installation is set to 45 seconds.
To change the timeout value, enter about:config in the URL field and look for the setting dom.ipc.plugins.timeoutSecs. Increase it, or set it to -1 to disable the timeout altogether.

When the plugin crashes, it is in fact not the browser "killing" it; rather, the plugin terminates itself when a fatal error occurs. This is necessary to prevent the browser, or even your entire machine, from crashing - there is no way to tell what will happen after an error like that. Besides, after the first uncaught error your program will likely not even execute correct code the way you intended, so you won't do any good by continuing a broken debugging session. So it is not a flaw - it is actually a good thing this happens!
However, there are some things you can do to work more effectively (and make your programs better). The most important ones I can think of right now are:
Learn to use good object-oriented programming techniques and get acquainted with design patterns, if you haven't already done so.
Take extra care to prevent error conditions from happening (e.g. test if an object is null before accessing its properties, give default values to variables when possible, etc.)
Use proper error handling to gracefully catch errors at runtime.
Use unit tests to thoroughly test your code for errors one piece at a time, before debugging in the browser. Getting to know FlexUnit is a good place to start.
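For illustration, the defensive-coding advice above can be sketched in Python (the ActionScript 3 equivalents are null checks, default values, and try/catch blocks; the function and field names here are made up for the example):

```python
# Illustrative sketch in Python; "config" and "username" are hypothetical
# names, not from the original question. The AS3 equivalents are null
# checks, default values, and try/catch.

def greeting(config):
    # Guard against missing objects before accessing their properties.
    if config is None:
        return "Hello, guest"          # sensible default value
    # Catch the expected failure instead of letting it kill the program.
    try:
        return "Hello, " + config["username"]
    except KeyError:
        return "Hello, guest"

print(greeting(None))                  # → Hello, guest
print(greeting({"username": "Ann"}))   # → Hello, Ann
```

The point is that both paths return a usable value instead of raising an uncaught error that would terminate the plugin.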
EDIT
I should also have said this: a debugger is a useful tool for stepping through your code to find the source of an error, such as a variable not being properly initialized, or unexpected return values. It is not helpful for finding out what happens after a fatal error has occurred - knowing that will not help you fix the code either.

Related

Octave disable automatic broadcasting

I have some code for which Octave spits out many warnings:
warning: product: automatic broadcasting operation applied
I think this automatic broadcasting could be the problem in my code, which doesn't work yet, but the message is so uninformative that it doesn't help me locate the problem at all. I would rather have Octave simply fail with an error at the specific location of that broadcast, so that I can go there manually, understand why it was broadcasting, and then fix my code. Even if my code doesn't work because of some other mistake, in any other programming language I would still go there and fix it, since I don't like relying on something being automatically interpreted differently - I want to have clean code.
How do I disable that annoying behavior (generally, all the time, everywhere) and make Octave tell me where the mistake is?
The warning for automatic broadcasting was added in Octave 3.6 and has been removed in Octave 4.0. The reason for throwing a warning is that automatic broadcasting was a new feature in 3.6 that could catch users by surprise.
But broadcasting is meant to be used like a normal operator, not to happen by accident. The fact that using it threw a warning made it sound like something that needed to be fixed in the code - it isn't, so don't feel that it is.
Newer versions of Octave will not throw that warning by default. You might as well disable the warning now:
warning ("off", "Octave:broadcast");
How do I disable that annoying behaviour (generally, all the time, everywhere) and make Octave tell me where the mistake is?
You can't disable automatic broadcasting; that would make Octave stop working. It would be the equivalent of, for example, disabling addition and expecting Octave to continue working normally.
You seem to think that automatic broadcasting is the source of your mistake. That cannot be. Automatic broadcasting does not cause a different result. If you were to disable automatic broadcasting you would simply get an error about nonconformant dimensions.
Therefore, assuming you never intended to make use of broadcasting, your program is not working because of some other mistake happening before automatic broadcasting (usually a function returned a row vector and you expected a column vector, or vice-versa).
However, you are obviously using an old version of Octave, and at that time broadcasting was not much used yet. You can make it throw an error instead of a warning, and maybe everything will still work fine (especially if you don't use Octave packages, because they used automatic broadcasting more than Octave core). You can do this with:
warning ("error", "Octave:broadcast");
warning('error');
will set all warnings to be treated as errors.
For more, see the documentation on warning; there is a way to treat only a specific warning as an error, and possibly to have it display the position that causes the warning.
Note: all of these commands set Octave parameters for the current session only. They can be written to a startup file (such as ~/.octaverc) so that the options become the default.
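Octave's warning() switches have a close analogue in Python's warnings module, which may make the idea clearer. This is a Python sketch of "promote one specific warning to an error", not Octave code:

```python
import warnings

# Treat one specific warning category as an error, analogous to
# Octave's  warning ("error", "Octave:broadcast").
warnings.filterwarnings("error", category=DeprecationWarning)

def old_api():
    # Stands in for code that merely warns by default.
    warnings.warn("old_api is deprecated", DeprecationWarning)
    return 42

try:
    old_api()
    outcome = "no error"
except DeprecationWarning:
    # Execution now stops at the exact offending call, which is what
    # the questioner wanted from Octave.
    outcome = "raised as error"

print(outcome)  # → raised as error
```

In both languages the mechanism is the same: the warning machinery is reconfigured so the offending statement fails loudly at its source.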

Leaving traces around the code

Is there a performance issue if I leave my traces all around the code when releasing?
trace("thank you");
Traces are ignored in release builds so there is no performance penalty.
Yes, there is a performance issue if you have traces active.
Some compilers have specific compiler options stating something similar to "build release client"; these might or might not produce a build where all trace statements are removed.
Even if nothing is listening to the trace statements and they still run, each one will first make an extra function call and check some if-statements, and after that the output will be stored in a log file on the computer where it runs.
So, you should find out which compiler you have and whether it automatically removes trace messages when compiling in "release mode". If not, you have to either accept the performance loss or find ways to work around it.
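The cost described above (an extra call plus some checks even when nobody is listening) is the same trade-off Python's logging module makes, so here is a Python sketch of the usual workaround; the logger name and message are made up for illustration:

```python
import logging

logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)   # release-like config: debug output off

def expensive_report():
    # Stands in for an expensive message to build; with trace() the
    # argument is evaluated even when the output is discarded.
    return "state dump: " + ", ".join(str(n) for n in range(1000))

# Naive call: expensive_report() runs even though the message is dropped.
logger.debug(expensive_report())

# Guarded call: skip the work entirely when the level is disabled,
# analogous to compiling trace() out of a release build.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(expensive_report())

print(logger.isEnabledFor(logging.DEBUG))  # → False
```

If the compiler does not strip traces for you, wrapping them behind a cheap "is this enabled" check like this is the standard workaround.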

expect script running out of sync?

I'm currently modifying a script used to back up Cisco ACE modules' contexts & crypto files. It works absolutely beautifully with one device. However, when I use it on another module, it seems to go completely out of sync and it messes up the script.
From what I can see, the difference is the presence of a line that the ACE module throws up: Warning: Permanently added '[x.x.x.x]' (RSA) to the list of known hosts.\r\r\n - this just seems to throw the rest of the script off, even though none of my expect statements are even looking for it!
I've had nothing but nightmares with expect and the way in which it interprets information from ace modules; can anyone shed any light on this issue or provide any advice as to how to make these devices behave when I try to script for them?
If you're handling one connection at a time, you should make sure you fully terminate one before opening the next. The simplest way of doing that is to put:
close
wait
at the end of the (foreach) loop over the things to connect to.
If you were doing multiple connections at once, you'd have to take care to use the -i option to various commands (notably expect, send and close) and make everything work right in addition to fixing the things I mentioned earlier. It can be done, but it's considerably more tricky and not worth it if you don't need the parallelism.
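Concretely, the sequential teardown described above sits at the bottom of the loop. A minimal expect sketch; the host list, credentials, and prompt are placeholders, not from the original script:

```tcl
# One-connection-at-a-time structure: fully terminate each session
# before opening the next. Hosts and prompt are hypothetical.
foreach host {10.0.0.1 10.0.0.2} {
    spawn ssh admin@$host
    expect "#"
    send "show running-config\r"
    expect "#"
    send "exit\r"
    close   ;# terminate the spawned process...
    wait    ;# ...and reap it before the next iteration
}
```

Without the close/wait pair, leftover output from the previous session (such as that known-hosts warning) can still be sitting in a buffer when the next connection starts.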

Catching the dreaded Blue Screen Of Death

It's a simple problem. Sometimes Windows will just halt everything and throw a BSOD. Game over, please reboot to play another game. Or whatever. Annoying, but not extremely serious...
What I want is simple. I want to catch the BSOD when it occurs. Why? Just for some additional crash logging. It's okay that the system goes blue but when it happens, I just want to log some additional information or perform one additional action.
Is this even possible? If so, how? And what would be the limitations?
Btw, I don't want to do anything when the system recovers, I want to catch it while it happens. This to allow me one final action. (For example, flushing a file before the system goes down.)
BSOD happens due to an error in the Windows kernel or more commonly in a faulty device driver (that runs in kernel mode). There is very little you can do about it. If it is a driver problem, you can hope the vendor will fix it.
You can configure Windows to create a memory dump upon a BSOD, which will help you troubleshoot the problem. You can get a pretty good idea about the faulting driver by loading the dump into WinDbg and using the !analyze command.
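For reference, that dump setting lives under the CrashControl registry key. A sketch of a .reg fragment that selects a small (mini) memory dump - verify the values against your Windows version before applying:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl]
; 0 = none, 1 = complete memory dump, 2 = kernel dump, 3 = small (mini) dump
"CrashDumpEnabled"=dword:00000003
```

The same option is exposed in the GUI under System Properties → Advanced → Startup and Recovery.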
Knowing which driver is causing the problem will let you look for a new driver, but if that doesn't fix the problem, there is little you can do about it (unless you're very good with a hex editor).
UPDATE: If you want to debug this while it is happening, you need to debug the kernel. A good place to pick up more info is the book Windows Internals by Mark Russinovich. Also, I believe there's a bit of info in the help file for WinDbg and there must be something in the device driver kit as well (but that is beyond my knowledge).
The data is stored in what are called "minidumps".
You can then use debugging tools to explore those dumps. The process is documented here: http://forums.majorgeeks.com/showthread.php?t=35246
You have two ways to figure out what happened:
The first is to upload the .dmp file located under C:\Minidump***.dmp to Microsoft's service, as described here: http://answers.microsoft.com/en-us/windows/wiki/windows_10-update/blue-screen-of-death-bsod/1939df35-283f-4830-a4dd-e95ee5d8669d
or to use their debugger, WinDbg, to read the .dmp file.
NB: you will find several files; you can tell them apart by their names, which contain the event date.
The second way is to note the error code from the blue screen and search for it on Google and the Microsoft website.
The first method is more accurate and efficient.
Windows can be configured to create a crash dump on blue screens.
Here's more information:
How to read the small memory dump files that Windows creates for debugging (support.microsoft.com)

Forcing application to throw specific exceptions

We are replacing the exception handling system in our app in order to conform to Vista certification, but the problem is how to force certain exceptions to be thrown, so that we can check the response.
Unfortunately the whole app was written without taking into consideration proper layering, abstraction or isolation principles, and within the timeframe introducing mocking and unit testing is out of the question :(
My idea is to introduce code which will throw a particular exception, either through a compiler directive or by respecting a value in the config file. We can then just run the app as normal and manually check how the exception is handled.
Just thought I'd put it out there and see if the SO community can think of anything better!
Cheers
Introduce this code:
throw new Exception("test");
If you need the exception to always be there (i.e., not just test code), then hide it behind a command-line parameter.
C:\Users\Dude> myapp.exe /x
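A config/flag-driven version of that idea, sketched in Python rather than .NET; the environment variable name, function, and exception type are made up for illustration:

```python
import os

class PaymentError(Exception):
    """Hypothetical app-specific exception whose handling we want to test."""

def process_payment(amount):
    # Fault injection: only active when the test flag is set, so
    # release behaviour is untouched when the flag is absent.
    if os.environ.get("APP_INJECT_FAULT") == "payment":
        raise PaymentError("injected for exception-handling test")
    return f"charged {amount}"

# Normal run: no flag, no injected error.
print(process_payment(10))        # → charged 10

# Test run: set the flag, then watch the app's handler deal with it.
os.environ["APP_INJECT_FAULT"] = "payment"
try:
    process_payment(10)
except PaymentError as e:
    print("handled:", e)
```

Gating the throw on an external switch means the same binary can be run normally or in "break on purpose" mode, which matches the manual-checking workflow described in the question.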
I might not have a clue about this, but my first thought was to use aspect-oriented programming to throw exceptions when certain code is run. Spring.NET has support for this, though I don't know how well it works for your purpose. Don't take my word on it, but it's something to look into.