PSImaging error: Exception has been thrown by the target - ocr

I have the Positronic-IO PSImaging setup on one of my servers, and am attempting to set it up on a second server where I will actually be performing the OCR. I can get it to install, but when I attempt to call Export-ImageText I receive the error mentioned in the subject.
I recall having a difficult time getting it to work on the first server as well; I should have taken better notes. Does this require a restart of the server?

Related

Spotfire connection with MySQL

I have been trying to use MySQL data with Spotfire. I had a connection established and managed to make a working dxp file. When I tried to do the same with a second table, it threw this error from MySQL:
"Streaming result set com.mysql.jdbc.RowDataDynamic#XXXXXX is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries."
Afterwards I opened my already-working dxp file and the error popped up again. It seems I somehow have a connection still open and active, but Spotfire is supposed to manage the connection, as far as I know at least.
After looking this up online, some people say it's a bug, others say I need to close the connection, but I can't find a solution. Please help.
Edit: changing the minimum-connections number (which probably closes the open connections) makes it possible for the dxp to run once, but then it fails the second time (after closing and reopening).
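For what it's worth, the constraint the driver is complaining about is not Spotfire-specific: MySQL allows only one open streaming result set per connection, and it must be fully read or closed before the connection can execute another statement. Below is a minimal sketch of the same rule as seen from PHP's mysqli, purely as an illustration (hypothetical credentials and table names; Spotfire itself goes through the JDBC driver):
<?php
$db = new mysqli('localhost', 'user', 'secret', 'test');
// MYSQLI_USE_RESULT streams rows instead of buffering them client-side,
// which is roughly what the JDBC driver does for a streaming result set.
$first = $db->query('SELECT id FROM table_one', MYSQLI_USE_RESULT);
// Issuing another statement now would fail with "Commands out of sync" --
// the same condition the error above describes.
// $db->query('SELECT id FROM table_two');
// Only after the streaming result set has been fully read or freed...
$first->free();
// ...will the connection accept the next statement.
$second = $db->query('SELECT id FROM table_two');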

What exceptions can occur with MySQL statements?

I'm developing a website in PHP and CodeIgniter with three colleagues; we're using a MySQL database.
I know that an insert can throw an exception due to a constraint violation, and connecting to the server can throw an exception too if the server is busy.
Now, what other exceptions might occur? I tried looking on the web and I'm surprised I didn't find what I want. My web app is a link-sharing website with tags, votes, flags, comments, and search (by title and tags, no advanced search yet).
PS: Obviously we're not going to handle hardware errors (like bad sectors), so exceptions are what we want here.
Other common errors are:
The various PHP-generated catchable fatal errors; see http://php.net/manual/en/errorfunc.constants.php.
PHP's out-of-memory error, which you cannot catch.
PHP's maximum-execution-time error, which you also cannot catch.
All sorts of MySQL errors.
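To make the MySQL part concrete, here is a minimal sketch of catching those errors from PHP using PDO (an assumption for illustration; the same idea applies to mysqli, and the DSN, credentials, and table are hypothetical):
<?php
try {
    $pdo = new PDO('mysql:host=localhost;dbname=linkshare', 'app', 'secret',
        array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)); // make MySQL errors throw
} catch (PDOException $e) {
    // Connection failures: server busy, unreachable, bad credentials, ...
    exit('Database unavailable: ' . $e->getMessage());
}
try {
    $stmt = $pdo->prepare('INSERT INTO links (url, title) VALUES (?, ?)');
    $stmt->execute(array('https://example.com', 'Example'));
} catch (PDOException $e) {
    if (isset($e->errorInfo[0]) && $e->errorInfo[0] === '23000') {
        // SQLSTATE 23000: integrity constraint violation (duplicate key, bad foreign key, ...)
        echo 'That link already exists.';
    } else {
        // Anything else: deadlock, lost connection, syntax error, full disk, ...
        error_log($e->getMessage());
        echo 'Sorry, that did not work.';
    }
}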
Many web application software developers create a last-chance error handler. It logs the error message and any available stack trace to a log file and presents a "sorry, that didn't work" page to the user.
As you might guess, it's best not to use MySQL to log errors, because if it's MySQL failing, it won't work.
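A minimal sketch of such a last-chance handler (the log path is hypothetical; the shutdown function is the usual workaround for the out-of-memory and maximum-execution-time cases above, which cannot be caught directly):
<?php
// Uncaught exceptions: log to a file (not MySQL) and show a generic page.
set_exception_handler(function ($e) {
    error_log($e->getMessage() . "\n" . $e->getTraceAsString() . "\n", 3, '/var/log/myapp/fatal.log');
    http_response_code(500);
    echo 'Sorry, that did not work.';
});
// Catchable PHP errors can be routed through the same log.
set_error_handler(function ($severity, $message, $file, $line) {
    error_log("[$severity] $message at $file:$line\n", 3, '/var/log/myapp/fatal.log');
    return true; // suppress the default PHP handler
});
// Out-of-memory and max-execution-time fatals cannot be caught, but a
// shutdown function still runs and can at least record what happened.
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null && $err['type'] === E_ERROR) {
        error_log("Fatal: {$err['message']} at {$err['file']}:{$err['line']}\n", 3, '/var/log/myapp/fatal.log');
    }
});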

Logging fewer entries into Yii console.log

I have a Yii application with many concurrent console jobs writing to one database. Due to the high concurrency, I sometimes get MySQL deadlock errors, and sometimes there are a lot of them. The console.log file becomes too big, and that translates into higher costs.
I want to prevent logging of these specific CDbException instances, or at least suppress the log entries (I am handling the exceptions and can generate more compact log messages from there).
YII_DEBUG is already commented out.
Can anyone please help me figure out how to do this?
Thanks a lot!!
Regards.
I decided to modify the log statement in yii/framework/db/CDbCommand.php that was logging the failed SQL. I converted it into a trace statement:
Yii::trace(Yii::t('yii','CDbCommand::{method}() failed: {error}. The SQL statement executed was: {sql}.', array('{method}'=>$method, '{error}'=>$message, '{sql}'=>$this->getText().$par)),'system.db.CDbCommand');
I am catching the exception anyway and logging a more compact version of the message, so this is OK for me.
This was the easiest way I could find. We don't upgrade Yii very often, so if and when we go to the next version I'll probably repeat the change.
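For reference, the compact logging mentioned above can look roughly like this (a sketch only; the command and the category name are hypothetical):
try {
    $command->execute(); // the statement that occasionally deadlocks
} catch (CDbException $e) {
    // Log a short one-liner instead of the full SQL dump, then retry or
    // re-queue the unit of work as appropriate.
    Yii::log('Deadlock while writing job results (code ' . $e->getCode() . ')',
        CLogger::LEVEL_WARNING, 'app.console.jobs');
}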

What causes mysterious hanging threads in ColdFusion -> MySQL communication?

One of the more interesting "features" in ColdFusion is how it handles external requests. The basic gist of it is that when a query is made to an external source through <cfquery> or any other external request like that, CF passes the request on to a specific driver, and at that point CF itself is unable to suspend it. Even if a timeout is specified on the query or in cfsetting, it is flatly ignored for all external requests.
http://www.coldfusionmuse.com/index.cfm/2009/6/9/killing.threads
So with that in mind, the issue we've run into is that somehow the communication between our CF server and our MySQL server sometimes goes awry and leaves behind hung threads. They have the following characteristics:
The hung thread shows up in CF and cannot be killed from FusionReactor.
There is no hung thread visible in MySQL, and no active running query (just the usual sleeps).
The database is responding to other calls and appears to be operating correctly.
Max connections have not been reached for the DB nor the user.
It seems to me the only likely candidate is that CF is making a request and MySQL is responding to it, but with an answer that CF misses or ignores, so CF keeps the thread open waiting for a response from MySQL that will never arrive. That would explain why the database shows no signs of problems while CF keeps a thread hung waiting for the mysterious answer.
Usually these hung threads appear randomly on otherwise working scripts (such as posting a comment on a news article). Even while one thread is hung for that script, other requests for that script will go through, which would imply that the script isn't necessarily at fault, but rather the conditions faced when the script was executed.
We ran some tests to determine that it was not a MySQL-generated max_connections error: we created a user, gave it 1 max connection, tied that connection up with a SLEEP(1000) query and executed another query. Unfortunately, it correctly errored out without generating a hung thread.
So, I'm left at this point with absolutely no clue what is going wrong. Is there some other connection limit or timeout which could be causing the communication between the servers to go awry?
One of the things you should start to look at is the hardware between the two servers. It is possible that you have a router, bridge, or NIC that is dropping occasional packets. This can result in the MySQL box thinking it has completed the task while the CF server sits there and waits indefinitely for a complete response, creating a hung thread.
3Com has some details on testing for packet loss here: http://support.3com.com/infodeli/tools/netmgt/tncsunix/product/091500/c11ploss.htm#22128
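If packet loss is the culprit, the generic symptom is a blocking read that never returns because the final packet never arrives. A rough illustration of that failure mode and the socket-level defense, sketched with plain PHP stream calls only as an analogy (host, port, and timeout values are made up):
<?php
$fp = stream_socket_client('tcp://db.example.internal:3306', $errno, $errstr, 5);
if ($fp === false) {
    die("connect failed: $errstr");
}
stream_set_timeout($fp, 30);  // give up after 30 seconds instead of hanging forever
$banner = fread($fp, 128);    // without a read timeout this blocks indefinitely if the reply is lost
$meta = stream_get_meta_data($fp);
if ($meta['timed_out']) {
    error_log('Read timed out: possible packet loss between the servers');
}
fclose($fp);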
We had a similar problem with a MS SQL server. There, the root cause was a known issue in which, for some reason, the server thinks it's shutting down, and the thread hangs (even though the server is, obviously, not shutting down).
We weren't able to eliminate the problem, but were able to reduce it by turning off pooled DB connections and fiddling with the connection refresh rate. (I think I got that label right -- no access to administrator at my new employment.) Both are in the connection properties in Administrator.
Just a note: The problem isn't entirely with CF. The problem, apparently, affects all Java apps. Which does not, in any way, reduce how annoyed I get by this.
Long story short, I believe the cause was ColdFusion 8's image processing. It was just buggy, and since moving to CF9 I have never seen that problem again.

Fatal errors in live servers

I'm writing some client/server software and I'm facing the following design issue. Normally, I use a VERIFY macro very liberally: if something is wrong on a user's machine, I want the software to fail and log the error so it can be fixed. I was never a fan of ignoring any kind of error.
However, I'm now writing a server. If the server dies, many clients go down, so the server should die as little as possible. Therefore, I don't know how to treat some conditions that I'd treat as fatal exceptions otherwise.
For example, say I get a network packet from a user who isn't logged in. Even though it shouldn't happen, I have enough experience to know that "impossible" errors do happen from time to time. So I'm pretty sure that if I raise a fatal error in these cases, the server WILL crash eventually. On the other hand, I could log the error, ignore it, and continue, but I'm afraid some bugs may go undetected this way.
What would you do in a situation like this one?
If you can recover from the error, then obviously it wasn't fatal. I can't see the benefit of failing if you can log the error and continue execution; the most important thing is that you've captured the error in the log. If you can recover and continue to operate as normal, then that is the best course.
In addition, you should implement a notification system (server monitoring) that, depending on the error level, notifies you with varying degrees of urgency, so you pick up on anything time-critical as soon as possible. There are generic systems like that for servers, such as Nagios and Munin. You should have a look at what they do and see if you can take something from them and implement or integrate it into your system.
Regardless, you should try to make sure client instances are as sandboxed as possible. A client thread going down shouldn't take down the entire server - ever (at least in theory).
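To make the "log loudly, drop only the offending client, keep the process alive" idea concrete, here is a rough sketch; the class and handler names are hypothetical, and the language is PHP only to match the other examples on this page:
<?php
final class ClientConnection
{
    private $loggedIn;
    public function __construct($loggedIn) { $this->loggedIn = $loggedIn; }
    public function isLoggedIn() { return $this->loggedIn; }
    public function close() { /* tear down this client only */ }
}
function onPacket(ClientConnection $conn, $packet)
{
    if (!$conn->isLoggedIn()) {
        // Would have been a VERIFY/fatal error; instead, record it so
        // monitoring alerts on it, drop this client, and keep serving.
        error_log('BUG? packet received from unauthenticated connection');
        $conn->close();
        return;
    }
    // ... normal packet handling ...
}
onPacket(new ClientConnection(false), 'example packet');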