How can I increase the App Engine request handler deadline, as it aborts each request at 60 seconds? - mysql

I have an application deployed on GAE that exposes endpoints. Each endpoint opens a database connection, fetches data, closes the connection, and returns the data. Normally everything works fine, but when there is a spike in requests, handling starts taking more than 60 seconds and requests get aborted. An aborted request never closes its database connection, so MySQL piles up 1000+ open connections, after which every request starts aborting with a deadline exceeded error. Is there any solution for this?

You could wrap the "get data" portion in a try ... finally statement and move the "close connection" portion into the finally section. Then start an "about to exceed deadline" timer before "get data" (set to something like, say, 45 seconds) and raise an exception if the timer expires, allowing you to close the connection in the finally portion. That should take care of the orphaned open connections (but would not prevent errors in those requests).
If your application tolerates it, you could also look into using task queues, which have a 10-minute deadline; that could help reduce or eliminate the errors in the requests as well.
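For illustration, deferring the slow DB work to a push task queue might look roughly like this on the Python runtime (handler name, worker URL, and parameter are placeholders):

import webapp2
from google.appengine.api import taskqueue

class DataHandler(webapp2.RequestHandler):  # hypothetical handler
    def get(self):
        # Enqueue the slow DB work as a task; push tasks get a
        # 10-minute deadline instead of the 60-second request deadline.
        taskqueue.add(url='/tasks/fetch-data',  # placeholder worker URL
                      params={'query_id': self.request.get('query_id')})
        self.response.set_status(202)  # accepted; work continues in the background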
You can also find some general advice for addressing deadline exceeded errors here: https://cloud.google.com/appengine/articles/deadlineexceedederrors, though I don't know whether it applies to your app.
EDIT: actually, the suggestion in the first paragraph above doesn't work on GAE, as the Python sandbox doesn't allow installing a custom signal handler:
signal.signal(signal.SIGALRM, timer_expired)
AttributeError: 'module' object has no attribute 'signal'
After seeing your code, a somewhat equivalent solution would be to replace your cursor.fetchall() with a loop of cursor.fetchone() or cursor.fetchmany() calls to split your operation into smaller pieces:
http://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-fetchone.html. You'd take a start timestamp (with time.time(), for example) when entering your request handler. Then, inside the loop, you'd take another timestamp to measure the time elapsed since the start, and break out of the loop and close the DB connection when deadline expiration nears. Again, this won't help with actually replying successfully to the requests if preparing the replies takes that long.
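A minimal sketch of that loop, assuming MySQL Connector/Python and a 45-second budget (connection details are placeholders):

import time
import mysql.connector

DEADLINE_BUDGET = 45  # seconds; leaves headroom under the 60s request deadline

def fetch_rows(query):
    start = time.time()
    conn = mysql.connector.connect(user='app', password='...', database='mydb')
    try:
        cursor = conn.cursor()
        cursor.execute(query)
        rows = []
        while True:
            row = cursor.fetchone()
            if row is None:
                break  # all rows consumed
            rows.append(row)
            if time.time() - start > DEADLINE_BUDGET:
                break  # deadline nearing: stop early instead of being aborted
        return rows
    finally:
        conn.close()  # always release the connection, even on early exit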

You can use this solution to close connections when deadlines are exceeded:
Dealing with DeadlineExceededErrors
This way you won't have any open connections hanging there forever.
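On the Python runtime that boils down to catching DeadlineExceededError and closing the connection in a finally block; a minimal sketch (the handler and the DB helpers are hypothetical):

import webapp2
from google.appengine.runtime import DeadlineExceededError

class EndpointHandler(webapp2.RequestHandler):  # hypothetical handler
    def get(self):
        conn = open_db_connection()  # hypothetical helper
        try:
            self.response.write(fetch_data(conn))  # hypothetical slow query
        except DeadlineExceededError:
            self.response.set_status(503)  # too late to build a real reply
        finally:
            conn.close()  # runs even when the deadline fires mid-query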

Think about the design of your application:
1. Relying on deadline exception handling is a design smell. There will be situations where a DB operation takes more than 60 seconds; if it's a simple query, well and good, but otherwise reconsider the design of the application, because user experience is going to suffer.
2. Change the design to use Cloud Endpoints:
https://cloud.google.com/appengine/docs/java/endpoints/
That's the way to go, and future proof.
3. Use backends or task queues, as described in this post:
Max Time for computation on Google App Engine

You can set the timeouts
interactive_timeout
and/or
wait_timeout
Depending on the connection type, MySQL applies one or the other.
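For example, reaping abandoned connections sooner by lowering the idle timeout (a minimal sketch assuming MySQL Connector/Python; the 60-second value and credentials are just illustrations):

import mysql.connector

conn = mysql.connector.connect(user='app', password='...', database='mydb')
cur = conn.cursor()
# wait_timeout governs non-interactive clients (like this one);
# interactive_timeout governs interactive ones (e.g. the mysql CLI).
cur.execute("SET SESSION wait_timeout = 60")  # this connection only
cur.execute("SET GLOBAL wait_timeout = 60")   # future connections; needs SUPER privilege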

Related

c3p0 getNumBusyConnectionsDefaultUser()... What does busy mean?

I know this is a very basic question, but I would like a deeper understanding of what "busy" means. I have double-checked that I close all my connections. I know that in c3p0, "the pool will intercept the call to close() and check the underlying Connection back into the pool." I would expect the number of busy connections to trend to zero, but this does not happen. Any ideas why? How long does a connection stay in the "busy" state? Shouldn't the connection become unbusy when I close it? Thx in advance.
OK... I finally figured this out by watching the DEBUG log statements from mchange. There is a "check for expired resources" pass every 5 seconds. If you get and use a connection just before that window opens and then call getNumBusyConnectionsDefaultUser() within that window, you may not get an accurate count, because the connection could be marked as busy and become unbusy again before the window closes.
Essentially (I think), c3p0 does not maintain its own counters based on status changes; it cycles through the collection of connections every five seconds checking their current status.

Handling doctrine 2 connections in long running background scripts

I'm running PHP command-line scripts as RabbitMQ consumers which need to connect to a MySQL database. Those scripts run as Symfony2 commands using the Doctrine2 ORM, meaning that opening and closing the database connection is handled behind the scenes.
The connection is normally closed automatically when the cli command exits - which is by definition not happening for a long time in a background consumer.
This is a problem when the consumer is idle (no incoming messages) for longer than the wait_timeout setting in the MySQL server configuration. If no message is consumed for longer than that period, the database server will close the connection, and the next message will fail with a "MySQL server has gone away" exception.
I've thought about 2 solutions for the problem:
Open the connection before each message and close the connection manually after handling the message.
Implementing a ping message which runs a dummy SQL query like SELECT 1 FROM table every n minutes, triggered by a cronjob (see the sketch at the end of this question).
The problem with the first approach is: if the traffic on that queue is high, there may be significant overhead for the consumer in opening/closing connections. The second approach just sounds like an ugly hack to deal with the issue, but at least I can use a single connection during high-load times.
Are there any better solutions for handling doctrine connections in background scripts?
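For context, the second option would amount to a tiny cron script publishing a ping message into the queue; a minimal sketch in Python with pika (my consumers are PHP, and the queue name is a placeholder):

#!/usr/bin/env python
# Hypothetical cron job (e.g. */5 * * * *): publish a "ping" message that the
# consumer answers with a dummy query, resetting its MySQL idle timer.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = conn.channel()
channel.basic_publish(exchange='', routing_key='worker_queue', body='ping')
conn.close()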
Here is another solution: try to avoid long-running Symfony 2 workers, as they will always cause problems due to their long execution time; the kernel isn't made for that.
The solution is to build a proxy in front of the real Symfony command, so that every message triggers a fresh Symfony kernel. Sounds like a good solution to me.
http://blog.vandenbrand.org/2015/01/09/symfony2-and-rabbitmq-lessons-learned/
My approach is a little bit different. My workers only process one message, then die. I have supervisor configured to create a new worker every time. So, a worker will:
Ask for a new message.
If there are no messages, sleep for 20 seconds. Otherwise, supervisor will think something is wrong and stop re-creating the worker.
If there is a message, process it.
Maybe, if processing a message is super fast, sleep for the same reason as in step 2.
After processing the message, just finish.
This has worked very well using AWS SQS; a rough sketch of the loop follows below.
Comments are welcome.
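Here is that one-message-then-die loop, sketched in Python with boto3 since I'm on SQS (the queue URL and handler are placeholders):

import sys
import boto3

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/jobs'  # placeholder

def process(body):
    pass  # hypothetical message handler

def main():
    sqs = boto3.client('sqs')
    # Long-poll for up to 20 seconds so supervisor doesn't see the process
    # exit instantly when the queue is empty (step 2 above).
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        process(msg['Body'])  # step 3
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg['ReceiptHandle'])
    sys.exit(0)  # finish after one pass; supervisor spawns a fresh worker (step 5)

if __name__ == '__main__':
    main()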
This is a big problem when running PHP scripts for too long. For me, the best solution is to restart the script periodically. You can see how to do this in this topic: How to restart PHP script every 1 hour?
You should also run multiple instances of your consumer. Add a counter to each one and terminate it after a certain number of runs. Then you need a tool to ensure a consistent number of worker processes, something like this: http://kamisama.me/2012/10/12/background-jobs-with-php-and-resque-part-4-managing-worker/

WinInet set session timeout

I am using WinInet in a C/C++ application to connect to an ASP.NET web service.
I want to increase my session timeout.
Currently the session timeout is somehow 20 minutes, and I want to increase it to 50 minutes.
Which option do I use for INTERNET_OPTION_XXXXX in
InternetSetOption(hInstance, INTERNET_OPTION_XXXXX, (LPVOID) &timeout, sizeof(timeout));
Unlike WinHTTP, which has WinHttpSetTimeouts, WinINet offers no equivalent function.
I realize this is an old question, but since there seems to be no info on SO about how to do this, I am posting it in case someone wants to know how to set timeouts with WinINet.
Normally you would use INTERNET_OPTION_CONNECT_TIMEOUT, INTERNET_OPTION_RECEIVE_TIMEOUT, or INTERNET_OPTION_SEND_TIMEOUT with InternetSetOption. See here for details on the option flags: https://learn.microsoft.com/en-us/windows/win32/wininet/option-flags
However, there is a bug which it seems MS has not fixed in about 20 years: the above timeout flags simply don't work.
So the way to work around this is to create a second worker thread to watch the connection request. The worker thread kills the main connection request if no response arrives from the server within the timeout you set. See this MS KB article for details and an example:
https://mskb.pkisolutions.com/kb/224318
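The shape of that workaround, sketched in Python rather than C for brevity (a plain socket stands in for the WinINet request; the KB article above has the real version):

import socket
import threading

def fetch_with_watchdog(host, timeout_secs):
    sock = socket.create_connection((host, 80))
    # Watchdog thread: closing the socket makes the blocking recv()
    # below fail immediately -- the "kill the main request" step.
    watchdog = threading.Timer(timeout_secs, sock.close)
    watchdog.start()
    try:
        sock.sendall(b'GET / HTTP/1.0\r\nHost: ' + host.encode() + b'\r\n\r\n')
        return sock.recv(65536)
    finally:
        watchdog.cancel()  # response (or error) arrived in time; stop the watchdog
        sock.close()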

MS Access cancel execution of pass-thru query keyboard shortcut

When using SQL pass-through queries in MS Access, there is a default timeout of 60 seconds, at which point an instruction is sent to the remote server to cancel the request. Is there any way to send this command from the keyboard, similar to Access' own "Ctrl + Break" operation?
First, consider how Control-C cancels execution in Oracle's client tools: they probably trap that key sequence and do something special. I strongly suspect that Oracle's client apps (SQL*Plus et al.) call OCIBreak() behind the scenes, passing in the server handle they obtained when they executed the query with a previous OCI call.
I also suspect that Access isn't doing anything actively after 60 seconds; that's just the timeout it requests at query execution time. Even more, I'm beginning to wonder whether Access is even requesting that timeout; everything I've read says that the ODBC driver does not support a query timeout, which makes me think it's just a client-side timeout. But I digress...
So, back to this OCIBreak() call. Here's the bad news: I don't think ODBC implements these calls. To be 100% sure you'd have to look at the sources of the ODBC driver for Oracle, but everything I've read indicates that the API call is not exposed.
For reference, I've been googling these search terms in combination with "ODBC":
ORA-01013 (the error raised when a user cancels an operation or when an operation times out)
OCIBreak (the OCI function which cancels a pending operation)
--- EDIT #1 ---
As a side note, I really believe that Access is just giving up and not sending any kind of cancel command when the pass-through timeout is exceeded. If you take a look at this KB article, the ODBC driver doesn't even support a query timeout:
PRB: Connection Timeout and Query Timeout Not Supported with Microsoft Oracle ODBC Driver and OLE DB Provider
After the elapsed time, Access probably just stops listening for results. If you were to ask Oracle for a list of queries that are still executing, I strongly suspect you'd still see yours listed.
--- EDIT #2 ---
As far as implementing your own "cancel" -- which isn't really a cancel, more of a "keep the UI responsive regardless of the state of a query" -- the keyword here is going to be asynchronous. You're going to want to rewrite your code to execute asynchronously so that it isn't blocking the message pump of your UI. I'd start googling for "async query access" and see what pops up. One SO result came up:
Running asynchronous query in MS Access
as well as a decent starting point at xtremevbtalk.com:
http://www.xtremevbtalk.com/showthread.php?t=82631
In effect, instead of firing off code that blocks execution until either a timeout occurs or a result set is returned, you'll be asking Access to kick off the code behind the scenes. You'll then set up an event that fires when something further happens, such as letting the user know that the timeout occurred (timeout failure), populating a grid with results (success), and so on.
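The shape of the pattern, sketched generically in Python (Access itself would use VBA/ADO's asynchronous execution; all names here are hypothetical):

import threading

def run_query_async(execute_query, on_done, on_timeout, timeout_secs=60):
    done = threading.Event()

    def worker():
        rows = execute_query()  # the blocking call, now off the UI thread
        done.set()
        on_done(rows)           # success: e.g. populate the grid

    threading.Thread(target=worker, daemon=True).start()

    def check_timeout():
        if not done.is_set():
            on_timeout()        # still running: notify the user, UI stays responsive

    threading.Timer(timeout_secs, check_timeout).start()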

MySQL odbc timeout from R

I'm using R to read some data from a MySQL database via the RODBC package. The data is then processed and some results are sent back to the database. The problem is that the server closes the connection after about a minute of inactivity, which is the time needed to process the data locally. It's a shared server, so the host won't bump up the timeout.
I think there are two possibilities to get around this:
Open a connection before every database transaction and close it immediately after
Send some small 'ping' command to the server every 30 seconds or so to let the server know that I'm still there.
I can implement the first fairly easily, but it seems pretty slow to constantly open and close connections. Does anyone know an efficient command for the second? Or is there a better way altogether?
The first solution is the one I prefer. It's really hard to do the latter with a single-threaded program like R: if R is busy running an analysis, there's no way for it to handle the ping. Unless you are doing hundreds of reads/writes, opening and closing the connection should not introduce an extreme amount of overhead.
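A sketch of that preferred open-use-close shape, in Python for illustration (in R the same pattern uses RODBC's odbcConnect() and odbcClose(); credentials and query are placeholders):

import mysql.connector

def with_connection(work):
    conn = mysql.connector.connect(user='app', password='...', database='mydb')
    try:
        return work(conn)  # one read or write per connection
    finally:
        conn.close()       # released before any long local processing starts

def fetch_input(conn):
    cur = conn.cursor()
    cur.execute("SELECT * FROM input_data")  # placeholder query
    return cur.fetchall()

rows = with_connection(fetch_input)
# ...long local processing happens here with no connection held open...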