ABBYY FlexiCapture 'Internal Program Error' on repeating group

Something happened to corrupt an ABBYY FlexiCapture project of mine. In the Document Definition Editor, when I expand a couple of my repeating groups, I get:
Internal Program error:
.\DocumentTemplate\TemplateNode.cpp, 626.
Then I can click OK and some of the child elements will load and some will not. This means I cannot update rules in the ones that don't show up.
Is there a known way to fix this? Right now I'm rebuilding the whole document definition.
I'm using the distributed version of the software.

An Internal Program Error is usually some kind of unhandled error, hence the reference to the .cpp file and line number. I have no information on this specific error, unfortunately, but the fact that it worked before and only started producing the error after some time in use suggests the setup was correct but got corrupted at some point. Otherwise, if you made a change to the template and then started to see the error, undoing that change is the way to go. It is pretty rare to cause these errors through template modification in the first place, unless you are doing something unusual with the UI or in scripts.

Related

Access, #Name error on report formula if there is *any* error on *any* module

I tried this on Access 2016, but I'm pretty sure it has behaved this way since the earliest versions.
Let's use a formula in a report field, in this case Mid("abc",2).
When I show the report, the result is correct ("bc").
Now, if for any reason I have a syntax error in a VBA module (I cannot exclude that other categories of problems lead to the same result), unrelated to the function I've used in my formula, the formula goes into an error state and the field displays the "#Name?" error message.
Well, this is quite scary, because it means that a report that is already validated and in use can show an error and omit information just because an error is present in an unrelated module.
Potentially every formula in the report could break because of a bug in a module; in complex reports with a lot of fields, this can go unnoticed until a customer realizes that "#Name?" is written where a crucial piece of information should be.
I would like to prevent this scenario. Is it possible to raise an exception in the case of a broken formula, instead of just showing #Name?
Are there any other possibilities to achieve this level of reliability?
No, there is not, at least not really.
There is really only one way to avoid this: make sure your code compiles. If your code can't compile, then none of the functions can be trusted. Since VBA allows overriding built-in functions, that's actually correct behavior: you might have declared your own function named Mid somewhere in the module that doesn't compile, and the report might be expected to use that one. Not compiling equals "not a clue what's going on, nothing can be trusted".
For simple projects, the fix is to mash that compile button (Debug -> Compile) after every single change and to never accept a failure to compile.
For more complex projects, you can organize code in different files and reference them (see this answer), and if one file fails to compile, that'll only affect code that references anything from that file (make sure to rename the VB project of each file to avoid name conflicts). For some projects, it's sensible to use this to keep data, forms and reports, and modules all in separate files.

Library with identifier LabelServices is missing (perhaps it was deleted?) / Something went wrong. Please reload

I have posted this on google-apps-script-issues (https://code.google.com/p/google-apps-script-issues/issues/detail?id=4723&can=4&colspec=Stars%20Opened%20ID%20Type%20Status%20Summary%20Component%20Owner) but got no response, and I am getting desperate as all my sheets and scripts are breaking down. So I hope someone here has encountered this and has some idea of how to handle it.
Some of my existing scripts (including some in spreadsheets) suddenly started throwing the exception 'Library with identifier LabelServices is missing (perhaps it was deleted?)'.
I tried deleting the library reference in some of the scripts that were throwing this error and re-adding it, but the same error comes up. I even defined a new library with the same name and tried to use it instead. Still the same error.
Most likely a related error: when trying to open the project properties of these scripts I see the message 'Something went wrong. Please reload.' and the hourglass never stops. This happens with both standalone scripts and ones embedded in spreadsheets.
Please note that if I add the same library to a new script then there is no problem and the library works perfectly.
I would like to add that even if I remove the library, and also any calls or references to it, the problem still persists.
Link to a script where this happens is available at https://script.google.com/d/1ptevXwu25z9Vo88zX0XOyaQk5pE5MmzwDbb-lMglNY1lzz1HBBiAeJxn/edit?usp=drive_web
You can copy it and play with it. You should get the same problem.
Appreciate any help!!!
Andrew, first of all thank you, as you triggered something in my mind. I went all the way bottom-up and checked each script and the library it was calling. There were obviously dependencies, but nothing that should have been a problem. A couple of libraries, even though in development mode, seemed to cause a problem. I had to create a new version of them, but it was not enough to just update their version in the other scripts. In the calling scripts I had to remove all libraries and re-add them one by one, while testing whether the project properties would show up or not. I just wish Google had a utility to clearly highlight what was wrong, because I can't figure it out even though I went twice through their documentation on libraries and development mode. Since I have about 30 libraries, not to mention spreadsheets and apps, it took me nearly a whole day... Thanks again!
To all: it seems there is a problem where libraries sometimes get out of sync (at least that's the best way I can describe it). You may need to create new versions of the libraries (even though they are in development mode), and in each problematic script remove and re-add the libraries in at least two separate transactions: first remove and confirm, then add, and preferably one at a time while checking that your project properties open properly. This solved the problem for me.

expect script running out of sync?

I'm currently modifying a script used to back up Cisco ACE modules' contexts and crypto files. It works absolutely beautifully with one device. However, when I use it on another module, it seems to go completely out of sync and it messes up the script.
From what I can see, the difference is the presence of a line that the ACE module throws up, like so: Warning: Permanently added '[x.x.x.x]' (RSA) to the list of known hosts.\r\r\n. This just seems to throw the rest of the script off, even though none of my expect statements are even looking for it!
I've had nothing but nightmares with expect and the way it interprets output from ACE modules; can anyone shed any light on this issue, or provide any advice on how to make these devices behave when I try to script for them?
If you're handling one connection at a time, you should make sure you fully terminate one before opening the next. The simplest way of doing that is to put:
close
wait
At the end of the (foreach) loop over the things to connect to.
If you were doing multiple connections at once, you'd have to take care to use the -i option to various commands (notably expect, send and close) and make everything work right in addition to fixing the things I mentioned earlier. It can be done, but it's considerably more tricky and not worth it if you don't need the parallelism.

Fix common library functions, or abandon them?

Imagine I have a function with a bug in it:
Pseudo-code:
void Foo(LPVOID o)
{
//implementation details omitted
}
The problem is the user passed null:
Object bar = null;
...
Foo(bar);
Then the function might crash due to an access violation; but it could also happen to work fine. The bug is that the function should have been checking for the invalid case of a null being passed, but it never did. It was never an issue because developers were trusted to know what they were doing.
If i now change the function to:
Pseudo-code:
void Foo(LPVOID o)
{
if (o == null) throw new EArgumentNullException("o");
//implementation details omitted
}
then people who were happily using the function, and happened not to get an access violation, will suddenly begin seeing an EArgumentNullException.
Do I continue to let people use the function improperly and create a new version of the function? Or do I fix the function to include the check it should have had originally?
So now the moral dilemma: do you ever add new sanity checks, safety checks, or assertions to existing code? Or do you call the old function abandoned and create a new one?
Consider a bug so common that Microsoft had to fix it for developers:
MessageBox(GetDesktopWindow, ...);
You never, ever, ever want to make a window modal against the desktop; you'll lock up the system. Do you continue to let developers lock up the user's computer? Or do you change the function to:
MessageBox(HWND hWndParent, ...)
{
if (hWndParent == GetDesktopWindow)
throw new Exception("hWndParent cannot be the desktop window. Use NULL instead.");
...
}
In reality Microsoft changed the Window Manager to auto-fix the bad parameter:
MessageBox(HWND hWndParent, ...)
{
if (hWndParent == GetDesktopWindow)
hWndParent = 0;
...
}
In my made-up example there is no way to patch the function: if I wasn't given an object, I can't do what I need to do with it.
Do you risk breaking existing code by adding parameter validation? Do you let existing code continue to be wrong, getting incorrect results?
The problem is that not only are you fixing a bug, but you are changing the semantic signature of the method by introducing an error case.
From a software engineering perspective I would advocate that you try to specify methods as well as possible (for instance using pre- and post-conditions), but once the method is out there, specification changes are a no-go (or at least you would have to check all occurrences of the method) and a new method would be better.
I'd keep the old function and simply have it emit a warning that notifies you of every (possibly) wrong use, and then I'd just kick the developer who used it wrong until he uses it properly.
You cannot catch everything. What if someone wrote MakeLocation("Ian Boyd", "is stupid")? Would you create a new function or change the function to catch that? No, you would fire the developer (or at least punish him).
Of course this requires that you document what your function requires as input.
This is where having automated tests (unit tests, integration tests, automated functional tests) is great: they give you the power to change existing code with confidence.
When making changes like this I would suggest finding all usages and ensuring they behave how you believe they should.
I myself would make bug fixes to the existing function rather than duplicating it 99% of the time. If it changes behavior a lot and there are a lot of calls to this function, you need to be very sure of your change.
So go ahead, make your change, run your unit tests, then your automated functional tests. Fix any errors and you're golden!
If your code has a bug in it, you should do what you normally do when any bug is reported. One part of that is assessing the impact of fixing it and of not fixing it. Sometimes the right thing to do with a bug is not to fix it, because the behaviour it exposes has become accepted. Sometimes the cost of fixing it, or the inconvenience of releasing a fix outside the normal release cycle, stops you from releasing the fix for a while. This isn't a moral dilemma; it's an economic question of costs and benefits. If you are disturbed at the thought of having known bugs in your published code, publish a known-bugs list.
One option none of the other respondents seems to have suggested is to wrap the buggy function in another function which imposes the new behaviour that you require. When functions can run to many lines, preserving a 99%-correct piece of code and imposing the change without modifying the existing code is sometimes less likely to introduce new bugs. Of course, this is not always possible.
Two choices:
1. Give the error-checking version a new name and deprecate the old version: one version later, have it start issuing warnings (compile time if possible, run time if necessary); two versions later, remove it. (A sketch of this option follows below.)
2. [not always possible] Place the newly introduced error check in such a way that it only triggers if the unmodified version would crash or produce undefined behavior, so that users who were taking care in their code don't get any unpleasant surprises.
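To make option 1 concrete, here is a minimal C# sketch of the deprecation approach (the names Lib, Foo, and FooChecked are illustrative, not from any real library):

using System;

public static class Lib
{
    // Old, unchecked version: kept so existing callers still compile,
    // but flagged so they get a compile-time warning.
    [Obsolete("Use FooChecked instead; this overload will be removed two versions from now.")]
    public static void Foo(object o)
    {
        // ...original implementation, no validation...
    }

    // New, error-checking version under a new name.
    public static void FooChecked(object o)
    {
        if (o == null) throw new ArgumentNullException(nameof(o));
        // ...same implementation...
    }
}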
It entirely depends on you, your codebase, and your users.
If you are Microsoft and you have a bug in your API that is used by millions of devs around the world, then you will probably want to just create a new function and update the docs on the old one. If you can, you would also want to update the compiler to give warnings as well. (Though even then you may be able to change the existing system; remember when MS switched VC to the C++ standard and you had to update all of your #include <iostream> lines and add using namespace std to get simple, existing console apps working again?)
It basically depends on what the function is. If it is something basic that will have massive ripple effects, then fixing it could break a lot of code. If it is just an ancillary function, then you may as well fix it. Of course, if you are Microsoft and your other code depends on a bug in one of your functions, then you probably should fix it, since keeping it is just plain embarrassing. If other devs rely on the bug (that you created), then you may have an obligation to those users not to break code that you caused to be buggy.
If you are a small company or an independent developer, then sure, go ahead and fix the function. If you only need to update yourself or a few people on the new usage, then fixing it is the best solution, especially since it is not even a big deal: all it really requires is an added note in the docs for the function, e.g. "do not pass NULL" or "an exception is thrown if hWnd is the desktop".
Another option, as a sort of compromise, would be to create a wrapper function. You could create a small, inline function that checks the arguments and then calls the existing function. That way you don't really have to do much in the short term, and eventually, when people have moved to the new one, you can deprecate or even remove the old one, moving its body into the new function behind the checks. See the sketch below.
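A rough sketch of that compromise in C#, assuming the unvalidated original from the question lives in Lib.Foo (both names are illustrative):

using System;

public static class SafeWrappers
{
    // Thin wrapper: validate the argument, then delegate to the
    // untouched original so its behavior stays exactly the same.
    public static void FooSafe(object o)
    {
        if (o == null) throw new ArgumentNullException(nameof(o));
        Lib.Foo(o); // existing function, left as-is
    }
}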
In most scenarios it is better to fix a buggy function, particularly if you are merely adding argument checks as opposed to completely changing the behavior of the function. It is not really a good idea to facilitate (read: encourage) bad coding just because fixing it would break some existing code (especially if the code is free!). Think about it: if someone is creating a new program, then they can do it right from the start instead of relying on a bug. If they are re-compiling an old program that depends on the bug, then they can just update the code. Again, it depends on how messy and convoluted the code is, how many people are affected, and whether or not they pay you, but it is quite common to have to update old code to, for example, initialize variables that hadn't been, or check for error codes, etc.
To sum up, in your specific example (given the information provided), you should just fix it.

When is it okay to check if a file exists?

File systems are volatile. This means that you can't trust the result of one operation to still be valid for the next one, even if it's the next line of code. You can't just say if (some file exists and I have permissions for it) open the file, and you can't say if (some file does not exist) create the file. There is always the possibility that the result of your if condition will change in between the two parts of your code. The operations are distinct: not atomic.
To make matters worse, the nature of the problem means that if you're tempted to make this check, odds are you're already worried or aware that something you don't control is likely to happen to the file. The nature of development environments makes this event less likely to happen during your testing and very difficult to reproduce. So not only do you have a bug, but the bug won't show up while testing.
Therefore under normal circumstances the best course of action is to not even try to check if a file or directory exists. Instead, put your development time into handling exceptions from the file system. You have to handle these exceptions anyway, so this is a much better use of your resources. Even though exceptions are slow, checking the existence of a file requires an extra trip to disk, and disk access is much slower. I even have a well-voted answer to this effect in another question.
But I'm having some doubts. In .NET, for example, if that were really always true, the .Exists() methods wouldn't be in the API in the first place. Also consider scenarios where you expect your program to need to create the file. The first example that comes to mind is a desktop application. This application installs a default user-config file to its home directory, and the first time each user starts the application it copies this file to that user's application data folder. It expects the file not to exist on that first startup.
So when is it acceptable to check in advance for the existence (or other attributes, like size and permissions) of a file? Is expecting failure rather than success on the first attempt a good enough rule of thumb?
The File.Exists method exists primarily for testing for the existence of a file when you do not intend to open it. For example, testing for the existence of a lock file whose very existence tells you something but whose contents are immaterial.
If you are going to open the file then you will need to handle any exception regardless of the results of any prior calls to File.Exists. So, in general, there is no real value in calling it in these circumstances. Just use the appropriate FileMode enumeration value in your open method and handle any exceptions, as simple as that.
EDIT: Even though this is couched in terms of the .NET API, it is based on the underlying system API. Both Windows and Unix have system calls (e.g. CreateFile on Windows) that use the equivalent of the FileMode enumeration. In fact, in .NET (or Mono) the FileMode value is just passed through to the underlying system call.
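For illustration, a minimal sketch of that approach: let the open call and its exceptions do the work instead of a prior Exists check (the file name is a placeholder):

using System;
using System.IO;

class OpenDemo
{
    static void Main()
    {
        try
        {
            // FileMode.Open expresses the intent: the file must already exist.
            using (var fs = new FileStream("data.bin", FileMode.Open, FileAccess.Read))
            {
                // ...read the file...
            }
        }
        catch (FileNotFoundException)
        {
            // Not there (or it vanished between any earlier check and this open).
        }
        catch (IOException)
        {
            // Locked by another process, unreadable media, etc.
        }
    }
}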
As a general policy, methods like File.Exists, or properties like WeakReference.Alive or SomeConcurrentQueue.Count are not useful as a means of ensuring that a "good" state exists, but can be useful as a means of determining that a "bad" state exists without doing any unnecessary (and possibly counterproductive) work. Such situations may arise in many scenarios involving locks (and files, since they often include locks). Because all routines that need to lock on a set of resources should, whenever practical, always acquire locks on those resources in a consistent order, it may be necessary to acquire a lock on one resource which is expected to exist before acquiring a resource which may or may not exist. In such a scenario, while it's impossible to avoid the possibility that one might lock the first resource, fail to acquire the second, and then release the first lock without having done any useful work with it, checking for the existence of the second resource before acquiring the lock on the first would minimize unnecessary and useless effort.
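A rough sketch of that last point, using two files where convention says A must be locked before B (the paths and the locking convention are hypothetical):

using System;
using System.IO;

class LockOrderDemo
{
    static void Work(string pathA, string pathB)
    {
        // Cheap early-out: if B clearly isn't there, don't bother taking A's lock.
        // This is an optimization, not a guarantee; B can still vanish later.
        if (!File.Exists(pathB))
            return;

        using (var a = File.Open(pathA, FileMode.Open, FileAccess.ReadWrite, FileShare.None))
        {
            try
            {
                using (var b = File.Open(pathB, FileMode.Open, FileAccess.ReadWrite, FileShare.None))
                {
                    // ...do useful work with both resources...
                }
            }
            catch (FileNotFoundException)
            {
                // B disappeared after the check: we wasted A's lock, but did no harm.
            }
        }
    }
}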
It depends on your requirements, but one way is to try to obtain an exclusive open file handle, with some sort of retry mechanism. Once you have that handle, it's going to be hard (or impossible) for another process to delete (or move) that file.
I've used code in .NET similar to the following to obtain an exclusive file handle, where I expect some other process might be writing the file:
FileInfo fi = new FileInfo(fullFilePath);
FileStream fs = null;
int attempts = maxAttempts;
do
{
    try
    {
        // Asking to open for reading with exclusive access...
        fs = fi.Open(FileMode.Open, FileAccess.Read, FileShare.None);
    }
    catch
    {
        // Ignore any errors (file missing, still locked, etc.) and retry below.
    }
    if (fs != null)
    {
        break;
    }
    else
    {
        Thread.Sleep(100);
    }
}
while (--attempts > 0);
// If fs is still null here, every attempt failed.
One example: You may be able to check for existence of files which you are unable to open (due to, for example, permissions).
Another, possibly better example: you want to check for the existence of a Unix device file, but definitely must not open it; opening it has side effects (e.g., opening and closing /dev/st0 will rewind the tape).
In a *nix environment, a well-established method for checking whether another copy of a program is already running is to create a lock file, so a file-existence check is used to verify this.
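A rough .NET analogue of that technique (a sketch, not the classic O_CREAT|O_EXCL C code): FileMode.CreateNew makes the create atomic, so only one instance wins.

using System.IO;

class SingleInstance
{
    // Returns a held stream if we created the lock file, or null if another
    // copy of the program appears to be running. Keep the stream open for
    // the process lifetime; dispose it (and delete the file) on exit.
    static FileStream TryAcquireLock(string lockPath)
    {
        try
        {
            return new FileStream(lockPath, FileMode.CreateNew, FileAccess.Write, FileShare.None);
        }
        catch (IOException)
        {
            return null; // lock file already exists
        }
    }
}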
I'd only check it if I expect it to be missing (e.g. the application settings) and only if I have to read the file.
If I have to write to the file, it's either a logfile (so I can just append to it or create a new one) or I replace the contents of it, so I might as well recreate it anyway.
If I expect the file to exist, it is right for an exception to be thrown when it is missing. Exception handling should then inform the user or perform recovery. My opinion is that this results in cleaner code.
File protection (i.e. not overwriting possibly important files) is different; in that case I'd always check whether the file exists, if the framework doesn't do that for me (think SaveFileDialog).
I think the check makes sense when you want to be sure the file was there in the first place. As you said, settings files: if there is a file, I will try to merge the existing settings instead of blowing them away.
Other cases would be when a user tells me to do something with a file. Yes, I know the OpenFileDialog will check if a file exists (but this is optional). I vaguely remember that back in VB6 this was not the case, so verifying that the file the user just told me to use actually existed was common.
I'd rather not program by exception.
Edit
I didn't miss the point. You might try to access the file, an exception is thrown, and then when you go to create the file, it has already been placed there. That now causes your exception-handling code to go on the fritz. So I guess we could then have an exception handler in our exception handler to catch that the file changed yet again...
I'd rather try and prevent exceptions, not use them to control logic.
Edit
Additionally, another time to check attributes such as size is when you're waiting for a file operation to finish. Yes, you never know for sure, but with a good algorithm, and depending on the system writing the file, you might be able to handle a good deal of cases. (I had a system running for five years which watched for small files coming over FTP; it used the same API as the file system watcher, and then started polling, waiting for the file to stop changing, before raising an event that the file was ready to be consumed.)
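A simplified sketch of that polling idea (the interval and the stability threshold are made-up numbers):

using System.IO;
using System.Threading;

class StabilityPoll
{
    // Heuristic: treat the file as "done" once its size has stopped changing
    // for a few consecutive polls. Not bulletproof, but handles many senders.
    static void WaitUntilStable(string path)
    {
        long lastLength = -1;
        int unchanged = 0;
        while (unchanged < 3) // three unchanged polls in a row: arbitrary threshold
        {
            Thread.Sleep(500);
            long length = new FileInfo(path).Length;
            unchanged = (length == lastLength) ? unchanged + 1 : 0;
            lastLength = length;
        }
    }
}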
This may be too simplistic, but I would think the primary reason for checking for the existence of a file (hence the existence of .Exists()) would be to prevent unintended overwrites of existing files, not to avoid exceptions caused by attempting to access non-existent or inaccessible files.
EDIT 2
This was, in fact, too simplistic and I recommend you see Stephen Martin's response.
If you're that concerned about somebody else removing the file, perhaps you should implement some sort of locking system. For instance, I used to work on the code for C-News, a Usenet news server. Since a lot of the things it did could happen asynchronously, it would "lock" a file or a directory by making a temp file, and then hard linking it to a file named "LOCK". If the link failed, it would mean that some other version of the program was writing to that directory, otherwise it was yours and you could do what you like.
The nifty thing about this is that most of the program was written in shell and awk, and this was a very portable locking mechanism. Also, the lock file would contain the PID of the owner, so you could look at the existing lock file to see if the owner was still running.
We have a diagnostic tool that has to gather a set of files, installer log included. Depending on different conditions the installer log can be in one of two folders. Even worse, there can be different versions of the log in both of these folders. How does the tool find the right one?
It's quite simple if you check for existence. If only one is present, grab that file. If two exist, find which has the latest modification time and grab that one. That's just the normal way of doing things.
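In .NET, that selection might look something like this sketch (the two folder paths are placeholders, not the tool's real locations):

using System;
using System.IO;
using System.Linq;

class FindLog
{
    static void Main()
    {
        // Hypothetical candidate locations for the installer log.
        string[] candidates =
        {
            @"C:\ProgramData\MyApp\install.log",
            @"C:\Temp\MyApp\install.log",
        };

        // Keep the ones that exist, then take the most recently modified.
        string newest = candidates
            .Where(File.Exists)
            .OrderByDescending(File.GetLastWriteTimeUtc)
            .FirstOrDefault();

        Console.WriteLine(newest ?? "no log found");
    }
}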
While this is a language-agnostic post, it seems you are talking about .NET. Most systems (.NET and others) have more detailed APIs in order to figure out if the file exists when opening the file.
What you should do is make a call to access the file, as it will typically indicate through some sort of error that the file doesn't exist (if it truly doesn't). In .NET, you would have to go through the P/Invoke layer and use the CreateFile API function. If that function returns an error of ERROR_FILE_NOT_FOUND, then you know that the file does not exist. If it returns successfully, then you have a handle that you can use.
The point here is that it is a somewhat atomic operation, which ultimately is what you are looking for.
Then, with the handle, you can pass it to a FileStream constructor and perform your work on the file.
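A minimal sketch of that approach (flags and error handling reduced to the bare minimum; illustrative, not production-ready):

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class NativeOpen
{
    const uint GENERIC_READ = 0x80000000;
    const uint OPEN_EXISTING = 3;
    const int ERROR_FILE_NOT_FOUND = 2;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern SafeFileHandle CreateFile(
        string lpFileName, uint dwDesiredAccess, uint dwShareMode,
        IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    // Returns an open stream, or null only for the "file not found" case.
    public static FileStream OpenOrNull(string path)
    {
        SafeFileHandle handle = CreateFile(path, GENERIC_READ, 0,
            IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
        if (handle.IsInvalid)
        {
            int error = Marshal.GetLastWin32Error();
            if (error == ERROR_FILE_NOT_FOUND)
                return null;
            throw new IOException("CreateFile failed with Win32 error " + error);
        }
        return new FileStream(handle, FileAccess.Read);
    }
}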
There are a number of possible applications you may well be writing for which a simple File.Exists is more than adequate for the job. If it's a config file that only your application will use, then you do not need to go overboard with your exception handling.
Whilst the "flaws" you have pointed out in using this method are all valid, it doesn't mean they are not acceptable flaws for some situations.
A variety of apps include built-in web servers. It's common for them to generate self-signed SSL certificates the first time they start up. A straightforward way to implement this would be to check whether the cert exists on startup, and create it if not.
In theory, it could exist for the check, and not exist later. In that case, we'd get an error when we try to listen, but that can be handled quite easily and is not a big deal.
It's also possible that it doesn't exist for the check, and exists later. In that case, it either gets overwritten with a new cert, or writing the new cert fails, depending on your policy. The first is a little annoying, in terms of the cert change causing some alarm, but also not really critical, especially if you do a bit of logging to indicate what is going on.
And, in practice, both cases are extraordinarily unlikely to ever come up.
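A hedged sketch of that startup flow (EnsureCert and CreateSelfSignedCert are hypothetical placeholders for your app's own paths and certificate code):

using System;
using System.IO;

class CertStartup
{
    static void EnsureCert(string certPath)
    {
        if (!File.Exists(certPath))
        {
            try
            {
                CreateSelfSignedCert(certPath); // hypothetical helper
            }
            catch (IOException ex)
            {
                // Another instance may have created it first; log and continue.
                Console.WriteLine("Cert creation failed: " + ex.Message);
            }
        }
        // Binding the listener can still fail if the cert vanished; handle it there.
    }

    static void CreateSelfSignedCert(string path)
    {
        // ...generate and write the certificate (omitted)...
    }
}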
As you pointed out, it's always important to decide what the program should do if the file is missing. In all my applications, the user can always delete the config file and the application will create a new one with default values. No problem. I also ship my applications without config files.
But users tend to delete files, even files they should not delete, like serial keys and template files. I always check for these files, because without them the application is unable to run at all. I cannot create a new serial key from defaults.
What should happen when the file is missing? You can do a file-exists check or use an exception handler, but the real question is: what will happen when the file is missing? Or, how important is the file to the application? I check all the time before I try to access any support files for the app. Additionally, I do error handling in case the file is corrupt and cannot be loaded.
I think anytime that you know that the file may or may not exist and you want to perform some alternate action based on the existence of the file, you should do the check because in this case it's not an exceptional condition for the file to not exist. This won't absolve you from having to handle exceptions -- from someone else either removing or creating the file between the check and your open -- but it makes the intent of the program clear and doesn't rely on exception handling to perform flow-control logic.
EDIT: An example might be log rotation on startup.
FileStream log = null;
try
{
    if (File.Exists("app.log"))
    {
        RotateLogs();
    }
    log = File.Open("app.log", FileMode.CreateNew);
}
catch (IOException)
{
    // ...another writer, perhaps?
}
catch (UnauthorizedAccessException)
{
    // ...maybe I should have used runas?
}
To answer my own question (in part), I want to expand on the example I used: a default config file.
Rather than check if it exists at app startup and try to copy the file if the check fails, the thing to do is to always try to copy the file. You just do it in such a way that the copy will fail if the file exists, rather than replace an existing file. This way, all you need to do is catch and ignore any exception thrown if the copy fails because of an existing file.
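In .NET, that looks something like this sketch (the source and application names are placeholders):

using System;
using System.IO;

class FirstRunConfig
{
    static void CopyDefaultConfig()
    {
        string source = @"defaults\user.config"; // shipped with the app
        string target = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            "MyApp", "user.config");

        Directory.CreateDirectory(Path.GetDirectoryName(target));
        try
        {
            // overwrite: false means the copy fails if the file already exists.
            File.Copy(source, target, overwrite: false);
        }
        catch (IOException)
        {
            // The user's config is already there (or appeared concurrently): fine.
        }
    }
}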
Your problem could easily be solved with basic computer science... read up on Semaphores.
(I did not mean to sound like a jerk, I was just pointing you to a simple answer for a common problem).
I think the reason for "Exists" is to determine whether a file is missing without the need to create all the OS housekeeping data required to access the file, or to have exceptions thrown. So it's a file-handling optimisation more than anything else.
For a single file, the saving that "Exists" gives is generally insignificant. If you were checking whether a file exists many, many times (for example, searching for #include files) then the saving could be significant.
In .Net, the specification for File.Exists doesn't list any exceptions that the method might throw, unlike for example File.Open which lists nine exceptions, so there's certainly less checking going on in the former.
Even if "Exists" returns true, you still need to handle exceptions when opening the file, as the .Net reference suggests.