When using SQL pass-through queries in MS Access, there is a default timeout of 60 seconds, at which point an instruction is sent to the remote server to cancel the request. Is there any way to send this command from the keyboard, similar to Access's own "Ctrl+Break" operation?
First, consider how Ctrl+C cancels execution. The client probably traps that key sequence and does something special. I strongly suspect that Oracle's client apps (SQL*Plus et al.) are calling OCIBreak() behind the scenes, passing in the server handle they obtained when they executed the query with a previous OCI call.
I also suspect that Access isn't doing anything actively after 60 seconds; that's just the timeout it requests at query execution time. More than that, I'm beginning to wonder whether Access is even requesting that timeout; everything I've read says the ODBC driver does not support a query timeout, which makes me think it's just a client-side timeout. But I digress...
So, back to this OCIBreak() call. Here's the bad news: I don't think ODBC implements these calls. To be 100% sure you'd have to look at the source of the Oracle ODBC driver, but everything I've read indicates that the API call is not exposed.
For reference, I've been googling with these search terms in combination with "ODBC":
ORA-01013 (error when a user cancelled an operation, or when an operation times out)
OCIBreak (OCI function which cancels a pending operation)
--- EDIT #1 ---
As a side note, I really believe that Access is just giving up and not sending any kind of cancel command when the pass-through timeout is exceeded. According to this KB article, the ODBC driver doesn't even support a query timeout:
PRB: Connection Timeout and Query Timeout Not Supported with Microsoft Oracle ODBC Driver and OLE DB Provider
After the elapsed time, Access probably just stops listening for results. If you were to ask Oracle for a list of queries that are still executing, I strongly suspect you'd still see yours listed.
--- EDIT #2 ---
As far as implementing your own "cancel" -- which isn't really a cancel, more of a "keep the UI responsive regardless of the state of the query" -- the keyword here is going to be asynchronous. You'll want to rewrite your code to execute asynchronously so that it isn't blocking the message pump for your UI. I'd start googling for "async query access" and see what pops up. One SO result came up:
Running asynchronous query in MS Access
as well as a decent starting point at xtremevbtalk.com:
http://www.xtremevbtalk.com/showthread.php?t=82631
In effect, instead of firing off code that blocks execution until either a timeout occurs or a result set is returned, you'll be asking Access to kick off the code behind the scenes. You'll then set up an event that fires when something further happens, such as letting the user know that the timeout occurred (timeout failure), populating a grid with results (success), and so on.
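The VBA specifics are covered in the links above; as a language-neutral sketch of the same pattern in Python (all names here are illustrative, and error handling inside the query is omitted), the shape is: run the query on a worker thread, then fire a success or timeout callback while the caller's thread stays free:

```python
import threading

def run_query_async(query_fn, on_success, on_timeout, timeout_s):
    """Run query_fn on a worker thread and never block the calling (UI)
    thread; report back through callbacks instead of a return value."""
    result = {}

    def worker():
        result["value"] = query_fn()          # the blocking query lives here

    t = threading.Thread(target=worker, daemon=True)
    t.start()

    def watcher():
        t.join(timeout_s)                     # wait up to timeout_s, off the caller's thread
        if t.is_alive():
            on_timeout()                      # query still running: tell the user
        else:
            on_success(result["value"])       # query finished: deliver results

    threading.Thread(target=watcher, daemon=True).start()
```

The caller returns immediately after the two Thread.start() calls; only the callbacks touch the outcome, which is what keeps the message pump responsive.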
Related
My app works with a MySQL database, using FireDAC components for the connection. Recently I have had a network problem; when I test it, the connection drops (from time to time) four ping requests in a row. My app returns the error: "[FireDAC][Phys][MySQL] Lost connection to MySQL server during query". Now the question: will setting fdconnection.TFDUpdateOptions.LockWait to True (default is False) resolve my problem, or create new ones?
TFDUpdateOptions.LockWait has no effect on your connection to the database. It determines what happens when a record lock can't be obtained immediately. The documentation says it pretty clearly:
Use the LockWait property to control whether FireDAC should wait while the pessimistic lock is acquired (True), or return the error immediately (False) if the record is already locked. The default value is False.
The LockWait property is used only if LockMode = lmPessimistic.
FireDAC can't wait to get a lock if it loses the connection, as clearly there is no way to either request the lock or determine if it was obtained. Therefore, changing LockWait will not change the lost connection issue, and it may slow many other operations against the data.
The only solution to your lost ping requests is to fix your network connection so it stops dropping packets. Randomly changing options on TFDConnection isn't going to fix networking issues.
I'm looking for a little guidance. I have an Access App that connects to a server to run SQL queries. I can test the connection (ping) prior to running any queries, but I am not able to gracefully handle when connection is lost mid-stream (which seems to happen too frequently).
I have error handling in place, but I seem to get a nested collection of error messages, including:
3146 - ODBC Error
3151 - ODBC Connection Error
3704 - Object is Closed
2046 - Quit not available
>> Requires Ctrl+Break or Task Manager to break loose...
I do account for long-duration queries with db.QueryTimeout = 0, so I don't think those are my issue at this point.
To begin to address my issues, I recently converted from global variables to TempVars, so my app no longer loses its mind with these handled and un-handled errors. It seems now I have marginally more control, but my Access app still gets hung in error-message hell.
My desired response to a LOST connection:
Trap the error condition
Message to the user announcing the situation
Write to a log file to capture the current state
Graceful exit from Access
Any suggestions or pointers to begin to address this need?
Thank you!
You can freely use global variables (VBA); they NEVER lose their values if you use a compiled accDE, so converting such variables to TempVars really is not required. Since a compiled accDE never loses VBA values (even without error handling), the result is a far more robust and reliable application. A compiled accDE also prevents the VBA code from becoming de-compiled, and locks down the design of the application against tampering.
As for losing a connection, there is really not a viable solution at this point in time. At application start up you can check for a connection and gracefully exit by using this approach:
ACC2000: How to Trap ODBC Logon Error Messages
http://support.microsoft.com/kb/210319
However, during a session with bound forms, a loss of the network connection cannot really be trapped or dealt with in a graceful manner; the only real solution is to address the loss of connection and prevent it in the first place.
I have an application deployed on GAE with endpoints. Each endpoint makes a connection to the database, gets data, closes the connection, and returns the data. Normally everything works fine, but when there is a spike in requests, responses start taking more than 60 seconds and the requests get aborted. When that happens the database connection is not closed, so MySQL ends up with 1000+ connections; then every request starts aborting with a deadline exceeded error. Is there any solution for this?
You could wrap the "get data" portion with a try... finally... statement and move the "close connection" portion in the finally section. Then start an "about to exceed deadline" timer before "get data" (something like say 45 seconds) and raise an exception if the timer expires, allowing you to close the connection in the finally portion, which should take care of the orphan open connections (but would not prevent errors in those requests).
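A sketch of that try/finally shape, assuming a plain Unix Python runtime (as the edit below notes, the GAE sandbox rejects the signal call; get_data, close_connection, and DeadlineError stand in for your own query, cleanup, and exception):

```python
import signal

class DeadlineError(Exception):
    """Raised when the 'about to exceed deadline' timer fires."""

def _timer_expired(signum, frame):
    raise DeadlineError("about to exceed request deadline")

def fetch_with_deadline(get_data, close_connection, budget_s):
    """Run get_data(), giving up after budget_s seconds; the connection
    is closed in the finally block on success *and* on timeout."""
    signal.signal(signal.SIGALRM, _timer_expired)
    signal.alarm(budget_s)           # deliver SIGALRM after budget_s seconds
    try:
        return get_data()
    finally:
        signal.alarm(0)              # cancel any pending alarm
        close_connection()           # no orphaned connections either way
```

The request still fails when the timer fires, but the finally block guarantees the connection is returned rather than left open.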
If your application tolerates it you could also look into using task queues which have a 10 min deadline, which could help reducing/eliminating the errors in the requests as well.
You can also find some general advice for addressing deadline exceeded errors here: https://cloud.google.com/appengine/articles/deadlineexceedederrors, though I don't know if it is applicable to your app.
EDIT: actually the suggestion in the first paragraph above doesn't work on GAE as the Python sandbox doesn't allow installing a custom signal handler:
signal.signal(signal.SIGALRM, timer_expired)
AttributeError: 'module' object has no attribute 'signal'
After seeing your code, a somewhat equivalent solution would be to replace your cursor.fetchall() with a loop of cursor.fetchone() or cursor.fetchmany() calls to split your operation into smaller pieces:
http://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchone.html. You'd take a start timestamp (with time.time(), for example) when entering your request handler. Then, inside the loop, you'd take another timestamp to measure the time elapsed since the start, and you'd break out of the loop and close the DB connection when deadline expiration nears. Again, this won't help with actually replying successfully to the requests if it takes that much time to prepare the replies.
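A minimal sketch of that loop, assuming a DB-API-style cursor (the cursor is whatever your MySQL driver returns after execute(); the 45-second budget and the function name are illustrative):

```python
import time

def fetch_rows_with_deadline(cursor, budget_s=45.0, batch_size=100):
    """Fetch rows in batches, bailing out early when the request
    deadline nears. Returns (rows, completed)."""
    start = time.time()                      # start timestamp on handler entry
    rows = []
    while True:
        batch = cursor.fetchmany(batch_size)
        if not batch:
            return rows, True                # result set exhausted
        rows.extend(batch)
        if time.time() - start > budget_s:
            return rows, False               # near the deadline: caller closes the DB
```

The caller checks the completed flag and closes the connection either way; a False result still means the request itself couldn't be answered in time.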
You can use this solution to close connections when deadlines are exceeded:
Dealing with DeadlineExceededErrors
This way you won't have any open connections hanging there forever.
Think about the design of your application:
1. Using the deadline exception handling is a design smell. There will be situations where a DB operation takes more than 60 seconds. If it's a simple query then well and good, but reconsider the design of the application; the user experience is going to be hurt.
2. Change the design to use Endpoints:
https://cloud.google.com/appengine/docs/java/endpoints/
The way to go; future-proof.
3. Use backends or task queues, as described in this post:
Max Time for computation on Google App Engine
You can set the timeouts interactive_timeout and/or wait_timeout; depending on the connection type, MySQL uses one or the other.
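These are server-side MySQL system variables. As a sketch of setting them from application code, assuming a DB-API connection (conn, set_idle_timeouts, and the 120-second value are all illustrative):

```python
def set_idle_timeouts(conn, seconds):
    """Set both MySQL idle timeouts for this session.
    wait_timeout governs non-interactive clients (typical app connections);
    interactive_timeout governs clients that connect with CLIENT_INTERACTIVE
    (e.g. the mysql command-line tool)."""
    cur = conn.cursor()
    cur.execute("SET SESSION wait_timeout = %s", (seconds,))
    cur.execute("SET SESSION interactive_timeout = %s", (seconds,))
    cur.close()
```

With SET GLOBAL instead of SET SESSION (and sufficient privileges) the change would apply to subsequent connections rather than just the current one.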
I have an app that requires local users to Sync back to the SQL server periodically (event based, including upon Close/Exit).
My users have occasional internet/VPN issues that throw the expected "3146" error.
Problem:
When the ODBC error is thrown, my app LOSES its mind (global variables are lost, etc.) and becomes utterly unusable. Many subsequent layers of error messages are thrown at my users, occasionally requiring a Ctrl+Break (or Task Manager) to break loose.
Question:
I have an err_handler in every module that provides a structured error message. I am able to trap error number 3146 in the err_handler module, where I attempt an abrupt Application.Quit (to avoid the subsequent error messages), but I still get a couple of subsequent error messages before the application fully terminates.
Is there a better approach to more gracefully handling "3146" errors?
Looking for some good ideas.
Thanks!
If you are handling the error then there should not be a problem. The way to handle it is not to call Application.Quit; you should actually do something about the error. A failed connection is not a reason to blow up your app.
Instead, think about caching data locally so that when the connection can be made you can perform your sync again. When you discover your connection failed, stop trying to connect, abort the syncing process, and tell your users "Hey, we couldn't sync now. You might be having VPN issues. Fix those and try to sync again." And all the while your data are still stored in your accdb so that if they go into work the next day and are hardwired into the network they can then sync successfully.
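The Access/VBA details will differ, but the cache-and-flush idea itself is simple; here is a language-neutral sketch in Python (SyncBuffer and send are illustrative names, and ConnectionError stands in for your ODBC failure):

```python
from collections import deque

class SyncBuffer:
    """Queue changes locally; flush them upstream when the connection works."""

    def __init__(self, send):
        self.pending = deque()
        self.send = send                 # callable that pushes one change to the server

    def record(self, change):
        self.pending.append(change)      # always local and cheap: never fails on VPN loss

    def flush(self):
        """Try to sync everything; on failure, keep the rest for next time.
        Returns True if fully synced, False if the user should retry later."""
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return False             # "couldn't sync now; fix your VPN and retry"
            self.pending.popleft()       # only dequeue once the server accepted it
        return True
```

Because a change leaves the buffer only after the server accepts it, a failed flush loses nothing; the next successful sync picks up exactly where the last one stopped.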
One of the more interesting "features" in ColdFusion is how it handles external requests. The basic gist is that when a query is made to an external source through <cfquery> or any other external request like that, it passes the external request on to a specific driver, and at that point CF itself is unable to suspend it. Even if a timeout is specified on the query or in cfsetting, it is flatly ignored for all external requests.
http://www.coldfusionmuse.com/index.cfm/2009/6/9/killing.threads
So with that in mind the issue we've run into is that somehow the communication between our CF server and our mySQL server sometimes goes awry and leaves behind hung threads. They have the following characteristics.
The hung thread shows up in CF and cannot be killed from FusionReactor.
There is no hung thread visible in mySQL, and no active running query (just the usual sleeps).
The database is responding to other calls and appears to be operating correctly.
Max connections have not been reached for the DB nor the user.
It seems to me the only likely candidate is that somehow CF is making a request, mySQL is responding to that request but with an answer which CF ignores and continues to keep the thread open waiting for a response from mySQL. That would explain why the database seems to show no signs of problems, but CF keeps a thread open waiting for the mysterious answer.
Usually these hung threads appear randomly on otherwise working scripts (such as posting a comment on a news article). Even while one thread is hung for that script, other requests for that script will go through, which would imply that the script isn't necessarily at fault, but rather the conditions faced when the script was executed.
We ran some tests to determine that it was not a MySQL-generated max_connections error: we created a user, gave it one max connection, tied up that connection with a SLEEP(1000) query, and executed another query. Unfortunately, it correctly errored out without generating a hung thread.
So, I'm left at this point with absolutely no clue what is going wrong. Is there some other connection limit or timeout which could be causing the communication between the servers to go awry?
One of the things you should start to look at is the hardware between the two servers. It is possible that you have a router or bridge or NIC that is dropping occasional packets. This can result in the mySQL box thinking it has completed the task while the CF server sits there and waits for a complete response indefinitely, creating a hung thread.
3com has some details on testing for packet loss here: http://support.3com.com/infodeli/tools/netmgt/tncsunix/product/091500/c11ploss.htm#22128
We had a similar problem with a MS SQL server. There, the root cause was a known issue in which, for some reason, the server thinks it's shutting down, and the thread hangs (even though the server is, obviously, not shutting down).
We weren't able to eliminate the problem, but were able to reduce it by turning off pooled DB connections and fiddling with the connection refresh rate. (I think I got that label right -- no access to administrator at my new employment.) Both are in the connection properties in Administrator.
Just a note: The problem isn't entirely with CF. The problem, apparently, affects all Java apps. Which does not, in any way, reduce how annoyed I get by this.
Long story short, I believe the cause was ColdFusion CF8's image processing. It was just buggy, and in CF9 I have never seen that problem again.