I'm getting a really tough error in SSIS 2012.
I am just running in SSDT.
I have a script task inside a For...Each block.
It runs fine the first time it is reached.
The second time it is reached, I get the generic "Exception has been thrown by the target of an invocation" error, attributed to the script, at the script task.
It is a small script, all inside Main(), and with a Try...Catch block.
I am not hitting the Catch, which adds custom text.
It seems like it is behaving as if it never enters the script... except that if I actually set a breakpoint in it, it runs fine, whether I step line by line or just hit F5.
I know this isn't terribly specific, but has anyone seen anything like this before?
As mentioned, I have tried debugging (obviously), but then I don't get any error.
I have tried changing my variable access from the basic Dts.Variables collection to VariableDispenser.LockOneForRead, in case it is something with the variables that happens before Main().
I think I got all the places the variables are used in the loop, but that didn't help.
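For reference, the locking attempt looked roughly like this (a minimal sketch; "User::TaskName" is a placeholder variable name, not the real one):

Variables lockedVars = null;
Dts.VariableDispenser.LockOneForRead("User::TaskName", ref lockedVars);
try
{
    // Read the value while the lock is held.
    string taskName = lockedVars["User::TaskName"].Value.ToString();
    // ... use the value ...
}
finally
{
    // Always release the lock, even if the body throws.
    if (lockedVars != null && lockedVars.Locked)
        lockedVars.Unlock();
}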
Because this was such a killer, I'm going ahead and answering it myself.
It was actually an un-"declared" variable, but in my Catch block.
Copy-paste error :/
I was referencing a variable as Dts.Variables["TaskName"] in the Catch block, but I had not selected it in the Script Task window.
I have no idea why it didn't give me the specific "not found in collection" error.
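For anyone hitting the same thing, here is a minimal sketch of a Catch block that cannot itself throw on an unselected variable ("User::TaskName" is a placeholder; the real fix is simply selecting the variable in the Script Task window):

try
{
    // ... the actual work ...
}
catch (Exception ex)
{
    // Guard the lookup so the Catch block can't throw its own exception
    // when the variable was never selected for the task.
    string taskName = Dts.Variables.Contains("User::TaskName")
        ? Dts.Variables["User::TaskName"].Value.ToString()
        : "(variable not selected)";
    Dts.Events.FireError(0, taskName, ex.Message, string.Empty, 0);
    Dts.TaskResult = (int)ScriptResults.Failure;
}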
I have certainly run into this before and seen that. :/
Just ran into that and it was a bear to figure out.
What it was: I had a static variable (actually a singleton class) defined. Evidently, SSIS does NOT re-initialize the script on the second and subsequent invocations, but holds the image and simply re-launches it at its entry point.
My singleton class (and I've now verified this for several static variables) does NOT get re-initialized; it still exists. The issue was that it was created with the Dts variable set that existed on the first invocation of the script. Since its "self" value was not null, it never re-instantiated.
When I recognized what was happening, it was of course easy to fix, but one gets used to stand-alone environments where every program instance starts with its static values null or set to a static initial value. We automatically presume that a new "run" of the program will likewise have its global space "clean"... in point of fact, I'm fairly sure that was part of the C# "contract" I read: that I'd never need to worry about historical cruft in the memory spaces for variables.
Well it turns out that that "contract" is about as good as any Microsoft will make you sign.
It's actually a mixed blessing. Knowing that it happens, I can use it to save a lot of overhead in scripts invoked in loops... but as it's not well documented (or perhaps undocumented), I'll need to be careful to have work-arounds and default-loading tests in case it turns out not to be true in some future release or version.
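To illustrate the work-around (a rough sketch with made-up names, not the real package): re-seed the static state at the top of Main() on every invocation, instead of relying on a lazy initializer that only runs once.

public sealed class RunContext
{
    public static RunContext Current { get; private set; }
    public string FileName { get; private set; }

    // Called on every invocation of Main(), so state left over from the
    // previous loop iteration is never reused.
    public static void Reset(string fileName)
    {
        Current = new RunContext { FileName = fileName };
    }
}

public void Main()
{
    // "User::FileName" is a placeholder variable name.
    RunContext.Reset(Dts.Variables["User::FileName"].Value.ToString());
    // ... the rest of the script sees freshly initialized state ...
    Dts.TaskResult = (int)ScriptResults.Success;
}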
(Be gentle in your criticism... I'm new to SSIS, not so new to programming paradigms. CICS mainframe programs would re-initialize global spaces unless you did things in the linkage to signal otherwise... if you're going to re-invent wheels, at least look at the old wheels.)
-- TWZ
I tried this on Access 2016, but I'm pretty sure it has been like this since the earliest versions.
Let's use a formula in a report field, in this case Mid("abc",2):
When I show the report, the result is correct:
Now, if for any reason I've got a syntax error in a VBA module (I cannot exclude that other categories of problems lead to the same result), unrelated to the function I've used in my formula, the formula goes into an error state, displaying the "#Name?" error message.
And this is the result:
Well, this is quite scary, because it means that a report that is already validated and in use can start showing errors and omitting information because of an error in an unrelated module.
Potentially ALL the report formulas could break because of a bug in a module; in complex reports with a lot of fields, this can go unnoticed until a customer realizes that "#Name?" is written where a crucial piece of information should be.
I would like to prevent this scenario. Is it possible to raise an exception in the case of a broken formula, instead of just showing #Name?
Are there any other possibilities to achieve this level of reliability?
No, there is not, at least not really.
There is really only one way to avoid this: make sure your code compiles. If your code can't compile, then none of the functions can be trusted, and since VBA allows overriding built-in functions, that's a good thing: you might have declared a function named Mid somewhere in the section that doesn't compile and expect the report to use that. Not compiling equals "not a clue what's going on, nothing can be trusted".
For simple projects, the fix is to mash that compile button (Debug -> Compile) after every single change and to never accept a failure to compile.
For more complex projects, you can organize code in different files and reference them (see this answer); if one file fails to compile, that will only affect code that references something from that file (make sure to rename the VB project of each file to avoid name conflicts). For some projects, it's sensible to use this to keep data, forms and reports, and modules in separate files.
I have multiple tcl files getting sourced
source fg_lib.tcl
source stc_lib.tcl
In stc_lib.tcl, there is a call to a function that is only defined in fg_lib.tcl. Can I assume that, since fg_lib.tcl is getting sourced, the function will automatically be usable in stc_lib.tcl?
One more question: if a certain function is defined in both tcl files, then, given the ordering of the source commands above, which version of the function will be executed? I think the one defined in stc_lib.tcl will be, but I would still like to confirm.
Thanks,
The source command acts, immediately, as if the content of the file were in the script at the point where the source appears (except for the difference in what info script returns). If both scripts define a procedure foobar, it will be the later script (stc_lib.tcl in your case) that provides the version that is used.
However, if the scripts just define procedures that don't have overlapping names and don't otherwise call the commands they create, the order in which the sources are placed is typically unimportant. The proc command just creates a command; the body of the procedure isn't evaluated until the procedure is called. (This sounds obvious, but it really is exactly like that. The code is exactly what it says it is, and Tcl is all about immediate operational semantics and code that is registered to be run in response to some event.)
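A minimal sketch of that redefinition behavior (the file contents are illustrative):

# fg_lib.tcl
proc greet {} { puts "greet from fg_lib" }

# stc_lib.tcl
proc greet {} { puts "greet from stc_lib" }

# main script
source fg_lib.tcl
source stc_lib.tcl
greet   ;# prints "greet from stc_lib": the later definition wins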
Bear in mind that if you are having problems with sources smashing each other, it's probably best to look into putting the code into namespaces or to otherwise find a way to stop entangling things. Writing confusing code is not recommended.
I'm trying to analyze a DLL file with my poor assembly skills, so forgive me if I can't achieve something very trivial. My problem is that, while debugging the application, I can find the code I'm looking for only during the debug session; after I stop the debugger, the address is gone. The DLL doesn't look to be obfuscated, as much of the code is readable. Take a look at the screenshot. The code I'm looking for is located at address 07D1EBBF in the debug376 section. BTW, where did I get this debug376 section?
So my question is: how can I find this function when not debugging?
Thanks
UPDATE
OK, as I said, as soon as I stop the debugger, the code vanishes. I can't even find it via a sequence of bytes (but I can in debug mode). When I start the debugger, the code is not disassembled immediately; I have to add a hardware breakpoint at that place, and only when the breakpoint is hit does IDA show the disassembled code. Take a look at this screenshot.
You can see the line of code I'm interested in, which is not visible if the program is not running in debug mode. I'm not sure, but I think it's something like the code being unpacked at runtime, which is not visible at design time.
Anyway, any help would be appreciated. I want to know why that code is hidden until the breakpoint is hit (it's shown as "db 8Bh" etc.) and how to find that address without debugging, if possible. BTW, could this be code from a different module (dll)?
Thanks
UPDATE 2
I found out that debug376 is a segment created at runtime. So, a simple question: how can I find out where this segment came from? :)
So you see the code in the debugger window once your program is running, but you can't find the very same opcodes in the raw hex dump once it's not running any more?
What might help you is taking a memory snapshot. Pause the program's execution near the instructions you're interested in to make sure they are there, then choose "Take memory snapshot" from the "Debugger" menu. IDA will then ask you whether to copy only the data found at the segments that are defined as "loader segments" (those the PE loader creates from the predefined table) or "all segments" that seem to currently belong to the debugged program (including ones that might have been created by an unpacking routine, decryptor, whatever). Go for "All segments" and you should be fine seeing memory contents, including your debug segments (segments created or recognized while debugging), in IDA when not debugging the application.
You can view the list of segments at any time by pressing Shift+F7 or by clicking "Segments" from View > Open subviews.
Keep in mind that the program you're trying to analyze might choose to create the segment some other place the next time it is loaded, to make it harder for you to understand what's going on.
UPDATE to match your second question
When a program is unpacking data from somewhere, it will have to copy stuff somewhere. Windows is a virtual-memory operating system that nowadays gets real nasty at you when you try to execute or write code at locations you're not allowed to. So any program, as long as we're under Windows, will somehow:
Register a bunch of new memory, or overwrite memory it already owns. This is usually done by calling something like malloc [your code looks as if it could have come from a very pointer-intensive language... VB perhaps, or something object-oriented]; it mostly boils down to a call to VirtualAlloc or VirtualAllocEx from Windows's kernel32.dll. See http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887(v=vs.85).aspx for more detail on its calling convention.
Perhaps set up Windows exception handling on it, and mark the memory range as executable if it wasn't already when calling VirtualAlloc. This would be done by calling VirtualProtect, again from kernel32.dll. See http://msdn.microsoft.com/en-us/library/windows/desktop/aa366898(v=vs.85).aspx and http://msdn.microsoft.com/en-us/library/windows/desktop/aa366786(v=vs.85).aspx for more info on that.
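Put together, the pattern you're chasing usually looks something like this (a hedged sketch of a generic unpacker stub in C, not the actual code in your DLL):

#include <windows.h>
#include <string.h>

typedef void (*entry_fn)(void);

void run_unpacked(const unsigned char *payload, size_t size)
{
    /* Step 1: register a bunch of new memory (read/write for now). */
    void *mem = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);
    if (mem == NULL)
        return;

    /* The decryption/copy loop described below stands in here. */
    memcpy(mem, payload, size);

    /* Step 2: mark the range executable, then jump into it; this is the
       moment a segment like your debug376 shows up in the debugger. */
    DWORD oldProtect;
    if (VirtualProtect(mem, size, PAGE_EXECUTE_READ, &oldProtect))
        ((entry_fn)mem)();
}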
So now, you should step through the program, starting at its default entry point (OEP), and look for calls to one of those functions, possibly with the memory protection set to PAGE_EXECUTE or a descendant. After that will possibly come some sort of loop decrypting the memory contents and copying them to their new location. You might want to just step over it, depending on what your interest in the program is, by placing the cursor after the loop (thick blue line in IDA usually) and clicking "Run to Cursor" from the menu that appears upon right-clicking the assembler code.
If that fails, just try placing a hardware breakpoint on kernel32.dll's VirtualAlloc and see if you get anything interesting when stepping out at the return statement, so you end up wherever the execution chain takes you after the Alloc or Protect call.
You need to find the Relative Virtual Address (RVA) of that code; this will allow you to find it again regardless of the load address (pretty handy with almost all systems using ASLR these days). The RVA is generally calculated as RVA = virtual address - base load address; however, you might also need to account for the section base as well. For example, if the DLL were loaded at base 0x07D00000, the instruction at 0x07D1EBBF would have an RVA of 0x1EBBF.
The alternative is to use IDA's rebasing tool to rebase the DLL to the same address every time.
AS3
Error: Error #1502: A script has executed for longer than the default timeout period of 15 seconds.
Is there a way to temporarily suppress this on a specific block of code?
I am creating a HUGE dynamic 3D array of objects, 1000x1000x1000, and I need the build to actually finish the initialization.
Your best bet would be to try to refactor your code. Perhaps you can make use of this tutorial, which deals with the exact problem you are having.
http://www.senocular.com/flash/tutorials/asyncoperations/
Increasing the timeout is one option; however, I would also suggest considering an approach that builds your arrays over multiple frames, that is, splitting the work up into separate jobs. As long as you give control back to the Flash Player every once in a while, you will not get this exception.
I'm not certain of the specifics of your problem; however, you will need to find a way to parallelize, or just simply segment, your calculations. If your algorithm centers around one major loop, then consider creating a function that takes all of the arguments necessary to record the context of a single iteration. Then create a simple control loop that calls this function and determines when to wait until the next frame and when not to. Leveraging AS3 closures can also help with this.
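A rough sketch of such a control loop in AS3 (initSlice, grid and SIZE are illustrative names; work runs until a per-frame time budget is spent, then yields to the player until the next frame):

import flash.events.Event;
import flash.utils.getTimer;

const SIZE:int = 1000;
const BUDGET_MS:int = 20; // leave most of each frame to the player
var x:int = 0;

function onFrame(e:Event):void {
    var start:int = getTimer();
    // Do as many iterations as fit into the budget, then yield.
    while (x < SIZE && getTimer() - start < BUDGET_MS) {
        initSlice(x); // your code: build grid[x][y][z] for one x-plane
        x++;
    }
    if (x >= SIZE) {
        removeEventListener(Event.ENTER_FRAME, onFrame);
        trace("initialization finished");
    }
}
addEventListener(Event.ENTER_FRAME, onFrame);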
Look for the script execution time limit in the "Publish Settings" (Flash). If you're using Flex, this may be useful: http://livedocs.adobe.com/flex/3/html/help.html?content=compilers_14.html (check default-script-limits, max-recursion-depth, max-execution-time). Oh! It seems there's no way to make it behave differently for a specific piece of code (it is a global setting).
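For instance, on the command line that would presumably be something like this (assuming the Flex mxmlc compiler and an application file named Main.mxml):

mxmlc -default-script-limits 1000 60 Main.mxml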
I do not approve of the increased-timeout option, because for all that time your application just hangs the whole Flash player, and the user normally thinks it is down and forces it to quit.
Check this one out: How to show the current progressBar value of process within a loop in flex-as3?
And then you can even show the progress, which would be more reassuring both for you and for the user.
Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
    //implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation; but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of passing null, but it just never did. It was never an issue because developers were trusted to know what they're doing.
If i now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
    if (o == null) throw new EArgumentNullException("o");
    //implementation details omitted
}
then people who were happily using the function, and happened not to get an access violation, will now suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly, and create a new version of the function? Or do I fix the function to include what it should have originally had?
So now the moral dilemma. Do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you call the old function abandoned and create a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow(), ...);
You never, ever, ever want to make a window modal against the desktop. You'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
    ...
}
In reality Microsoft changed the Window Manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
    if (hWndParent == GetDesktopWindow())
        hWndParent = 0;
    ...
}
In my made-up example there is no way to patch the function: if I wasn't given an object, I can't do what I need to do to it.
Do you risk breaking existing code by adding parameter validation? Do you let existing code continue to be wrong, getting incorrect results?
The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective, I would advocate that you try to specify methods as precisely as possible (for instance using pre- and post-conditions), but once the method is out there, specification changes are a no-go (or at least you would have to check all occurrences of the method) and a new method would be better.
I'd keep the old function and simply have it emit a warning that notifies you of every (possibly) wrong use, and then I'd just kick the developer who used it wrong until he uses it properly.
You cannot catch everything. What if someone wrote MakeLocation("Ian Boyd", "is stupid")? Would you create a new function, or change the function to catch that? No, you would fire the developer (or at least punish him).
Of course this requires that you document what your function requires as input.
This is where having automated tests [unit testing, integration testing, automated functional testing] is great: they give you the power to change existing code with confidence.
When making changes like this, I would suggest finding all usages and ensuring they behave how you believe they should.
I myself would make bug fixes to the existing function rather than duplicating it 99% of the time. If it changes behavior a lot and there are a lot of calls to this function, you need to be very sure of your change.
So go ahead, make your change; run your unit tests, then your automated functional tests. Fix any errors and you're golden!
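For example, a pinning test for the new contract might look like this (an NUnit-flavored sketch; Foo is the function from the question, and ArgumentNullException stands in for its EArgumentNullException):

[Test]
public void Foo_NullArgument_ThrowsInsteadOfCrashing()
{
    // Documents the new behavior, so any future change to the check
    // shows up as a test failure rather than a customer crash.
    Assert.Throws<ArgumentNullException>(() => Foo(null));
}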
If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impacts of fixing and of not fixing it. Sometimes the right thing to do with a bug is to not fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you releasing the fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed at the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour that you require. In a world where functions can run to many lines, it is sometimes less likely to introduce new bugs if you preserve a 99%-correct piece of code and address the change without modifying the existing code. Of course, this is not always possible.
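A minimal sketch of that wrapper approach, in the question's pseudo-code (FooChecked is an illustrative name; the original Foo stays untouched):

void FooChecked(LPVOID o)
{
    // Impose the new contract here...
    if (o == null) throw new EArgumentNullException("o");

    // ...and delegate to the original, unmodified function.
    Foo(o);
}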
Two choices:
Give the error-checking version a new name and deprecate the old version (a sketch follows this list): one version later, have it start issuing warnings (compile time if possible, run time if necessary); two versions later, remove it.
[not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would have crashed or produced undefined behavior. (That way, users who were taking care in their code don't get any unpleasant surprises.)
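Building on the wrapper sketch above, choice 1 might look roughly like this (same pseudo-code register; in C#, the [Obsolete] attribute turns misuse into a compile-time warning until the old version is removed):

[Obsolete("Use FooChecked instead; this overload does not validate its argument.")]
void Foo(LPVOID o)
{
    // Original body, unchanged; warn for a version or two, then remove.
}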
It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS switched VC to the C++ standard and you had to update all of your iostream #includes and add using namespace std to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic that will have massive ripple effects, then it could break a lot of code. If it is just an ancillary function, then you may as well fix it. Of course, if you are Microsoft and your other code depends on a bug in one of your functions, then you probably should fix it, since that is just plain embarrassing to keep. If other devs rely on the bug (that you created), then you may have an obligation to the users not to break the code that you caused to be buggy.
If you are a small company or an independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage, then fixing it is the best solution, especially since it is not even a big deal: all it really requires is an added note in the docs for the function, e.g. "do not pass NULL" or "an exception is thrown if hWnd is the desktop", etc.
Another option, as a sort of compromise, would be to create a wrapper function. You could create a small, inline function that checks the args and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving the code into the new one beneath the checks.
In most scenarios, it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing the behavior of the function. It is not really a good idea to facilitate (read: encourage) bad coding just because it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, then they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, then they can just update the code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code to, for example, initialize variables that hadn't been, check for error codes, etc.
To sum up, in your specific example (given the information provided), you should just fix it.