I am developing a back-end with Node.js and MySQL. Sometimes there is a huge number of queries to run against the database (more than 50,000), and I'm using connection pooling. My question is: what happens to a query after it is rejected because the pool is exhausted? Will it be queued until a connection becomes available and then executed, or will it simply never be executed?
There are indeed similar questions, but the answers didn't address my point; they just recommended increasing the pool's size limit.
Maybe.
There are options you can set to modify the behavior. See https://github.com/mysqljs/mysql#pool-options
The request may wait in a queue for a free connection, or not, depending on the waitForConnections option. If you set this option to false, the request returns an error immediately instead of waiting.
If more than queueLimit requests are already waiting, a new request returns an error immediately. The default value is 0, which means there is no limit to the queue.
A request will wait for at most acquireTimeout milliseconds; if it still hasn't obtained a free connection by then, it returns an error.
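Here is a minimal sketch of how those three options fit together when creating a pool, based on the documentation linked above (the connection details are hypothetical):

```typescript
import mysql from "mysql"; // the mysqljs/mysql package

const pool = mysql.createPool({
  host: "localhost",        // hypothetical connection details
  user: "app",
  password: "secret",
  database: "mydb",
  connectionLimit: 10,      // maximum number of open connections
  waitForConnections: true, // false = error immediately when the pool is exhausted
  queueLimit: 0,            // 0 = unbounded queue; otherwise excess requests error out
  acquireTimeout: 10000     // give up waiting for a free connection after 10 s
});

pool.query("SELECT 1", (err, results) => {
  if (err) {
    // With waitForConnections: false, a full queue, or an acquire timeout,
    // the callback fires with an error instead of the query running later.
    console.error(err);
    return;
  }
  console.log(results);
});
```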
P.S.: I don't use Node.js, I just read this in the documentation.
My app works with a MySQL database, and for the connection I'm using FireDAC components. Recently I had a network problem; I tested it, and it looks like it loses ping requests from time to time (four in my test). My app returns the error: "[FireDAC][Phys][MySQL] Lost connection to MySQL server during query". Now the question: will setting fdconnection.TFDUpdateOptions.LockWait to true (default is false) resolve my problem or create new problems?
TFDUpdateOptions.LockWait has no effect on your connection to the database. It determines what happens when a record lock can't be obtained immediately. The documentation says it pretty clearly:
Use the LockWait property to control whether FireDAC should wait while the pessimistic lock is acquired (True), or return the error immediately (False) if the record is already locked. The default value is False.
The LockWait property is used only if LockMode = lmPessimistic.
FireDAC can't wait to get a lock if it loses the connection, as there is clearly no way to either request the lock or determine whether it was obtained. Therefore, changing LockWait will not fix the lost-connection issue, and it may slow many other operations against the database.
The only solution to your lost ping requests is to fix your network connection so it stops dropping packets. Simply randomly changing options on TFDConnection isn't going to fix networking issues.
I'm trying to understand whether it is possible to achieve the following:
I have multiple instances of an application server running behind a round-robin load balancer. The client expects GET-after-POST semantics; in particular, the client will make a POST request, wait for the response, and immediately make a GET request, expecting the response to reflect the change made by the POST, e.g.:
> Request: POST /some/endpoint
< Response: 201 CREATED
< Location: /some/endpoint/123
> Request: GET /some/endpoint/123
< Response must not be 404 Not Found
It is not guaranteed that both requests are handled by the same application server. Each application server has a pool of connections to the DB. Each request will commit a transaction before responding to the client.
Thus the database will, on one connection, see an INSERT statement followed by a COMMIT. On another connection, it will see a SELECT statement. Temporally, the SELECT will come strictly after the COMMIT, though the delay may be tiny, on the order of milliseconds.
The application server I have in mind uses Java, Spring, and Hibernate. The database is MySQL 5.7.11 managed by Amazon RDS in a multiple availability zone setup.
I'm trying to understand whether this behavior can be achieved and how so. There is a similar question, but the answer suggesting to lock the table does not seem right for an application that must handle concurrent requests.
Under ordinary circumstances, you will not have any issue with this sequence of requests, since your MySQL will have committed the changes to the database by the time the 201 response has been sent back. Therefore, any subsequent statements will see the created / updated record.
What could be the extraordinary circumstances under which the subsequent select will not find the updated / inserted record?
Another process commits an UPDATE or DELETE statement that changes or removes the given record. There is not much you can do about this, since it is part of normal operation. If you do not want such a thing to happen, you have to implement application-level locking of the data.
The subsequent GET request is routed not only to a different application server, but to one that uses (or is forced to use) a different database instance that does not have the most up-to-date state of the record. I would only envisage this happening if there is a severe failure at the application or database server level, or if routing of the request goes really wrong (e.g. it is routed to a data center in a different geographical location). These should not happen too frequently.
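To make the "ordinary circumstances" argument concrete, here is a minimal sketch of the sequencing, written as a Node/Express handler with the mysql2/promise client purely for brevity (the question itself is Java/Spring, and the table and column names here are hypothetical). The point is that the COMMIT completes before the 201 response is written, so a GET issued after the client receives the response runs strictly after the commit, on any pool connection:

```typescript
import express from "express";
import mysql from "mysql2/promise";

const pool = mysql.createPool({ host: "localhost", user: "app", database: "mydb" });
const app = express();
app.use(express.json());

app.post("/some/endpoint", async (req, res) => {
  const conn = await pool.getConnection();
  try {
    await conn.beginTransaction();
    const [result]: any = await conn.query(
      "INSERT INTO things (name) VALUES (?)", [req.body.name]);
    await conn.commit(); // the change is durable before we respond
    // The client only sees the 201 after the COMMIT has returned, so its
    // follow-up GET cannot observe a pre-commit state (barring the
    // extraordinary circumstances described above).
    res.status(201).location(`/some/endpoint/${result.insertId}`).end();
  } catch (e) {
    await conn.rollback();
    res.status(500).end();
  } finally {
    conn.release();
  }
});
```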
If you're using MyISAM tables, you might be seeing the effects of 'concurrent inserts' (see 8.11.3 in the mysql manual). You can avoid them by either setting the concurrent_insert system variable to 0, or by using the HIGH_PRIORITY keyword on the INSERT.
I have written a web server using Delphi and the Indy TIdHttpServer component. I am managing a pool of TAdoConnection connections to a MySql database. When a request comes in I query my pool for available database connections. If one is not available then a new TAdoConnection is created and added to the pool.
Problems occur when a connection becomes "stale" (i.e., it has not been used in quite some time). I think in this instance the query results in the "MySQL server has gone away" error.
Does anyone have a method for getting around this? Or would I have to manage it myself by one of the following:
Writing a thread that will periodically "refresh" all connections.
Keeping track of the time of the last active query and, if it is too old, skipping the connection and freeing it instead.
Three suggestions (all three are sketched in the example below):
store a "last used" timestamp with every pooled connection, and when a connection is requested, check whether it is too old - in that case, discard it and create a new one
add a validateObject() method which issues a no-op SQL query to detect whether the connection is still healthy
run a background thread which cleans up the pool at regular intervals: removing idle connections lets the pool size shrink back to a minimum after peak usage
For some suggestions, see this article about the Apache Commons Pool Framework: http://www.javaworld.com/article/2071834/build-ci-sdlc/pool-resources-using-apache-s-commons-pool-framework.html
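The question is Delphi/ADO, but the pattern is language-agnostic; here is a minimal sketch of all three ideas in TypeScript against the mysqljs/mysql driver (the names MAX_IDLE_MS, acquire, and release, and the connection details, are all hypothetical, and a production pool would also need a size limit and concurrency control, which this sketch omits):

```typescript
import mysql, { Connection } from "mysql";

const MAX_IDLE_MS = 5 * 60 * 1000; // treat connections idle > 5 min as stale

interface PooledConnection {
  conn: Connection;
  lastUsed: number; // "last used" timestamp (suggestion 1)
}

const idle: PooledConnection[] = [];

function createConnection(): Connection {
  const conn = mysql.createConnection({ host: "localhost", user: "app", database: "mydb" });
  conn.connect();
  return conn;
}

// Suggestion 2: a no-op query to detect whether the connection is healthy.
function validate(conn: Connection): Promise<boolean> {
  return new Promise(resolve => conn.query("SELECT 1", err => resolve(!err)));
}

// Borrow a connection: reject stale or unhealthy ones and replace them.
async function acquire(): Promise<Connection> {
  while (idle.length > 0) {
    const p = idle.pop()!;
    if (Date.now() - p.lastUsed <= MAX_IDLE_MS && await validate(p.conn)) {
      return p.conn;
    }
    p.conn.destroy(); // too old or broken: throw it away
  }
  return createConnection();
}

function release(conn: Connection): void {
  idle.push({ conn, lastUsed: Date.now() });
}

// Suggestion 3: periodic cleanup shrinks the pool after peak usage.
setInterval(() => {
  const now = Date.now();
  for (let i = idle.length - 1; i >= 0; i--) {
    if (now - idle[i].lastUsed > MAX_IDLE_MS) {
      idle[i].conn.destroy();
      idle.splice(i, 1);
    }
  }
}, 60 * 1000);
```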
When using SQL pass-through queries in MS Access, there is a default timeout of 60 seconds, at which point an instruction is sent to the remote server to cancel the request. Is there any way to send this command from the keyboard, similar to Access's own "Ctrl + Break" operation?
First, consider how Control-C cancels execution: the client traps that key sequence and does something special with it. I strongly suspect that Oracle's client apps (SQL*Plus et al.) are calling OCIBreak() behind the scenes, passing in the handle to the server that they obtained when they executed the query with a previous OCI call.
I also suspect that Access isn't doing anything actively after 60 seconds; that's just the timeout it requests at query-execution time. Even more, I'm beginning to wonder whether Access is even requesting that timeout; everything I've read says the ODBC driver does not support a query timeout, which makes me think it's just a client-side timeout, but I digress...
So - back to this OCIBreak() call. Here's the bad news: I don't think ODBC implements these calls. To be 100% sure, you'd have to look at the sources of the ODBC driver for Oracle, but everything I've read indicates that this API call is not exposed.
For reference, I've been googling with these search terms in combination with "ODBC":
ORA-01013 (the error raised when a user cancels an operation or an operation times out)
OCIBreak (the OCI function which cancels a pending operation)
--- EDIT #1 ---
As a side note, I really believe that Access is just giving up, not sending any type of cancel command, when the pass-through timeout is exceeded. If you take a look at this KB article, the ODBC driver doesn't even support a query timeout:
PRB: Connection Timeout and Query Timeout Not Supported with Microsoft Oracle ODBC Driver and OLE DB Provider
After the elapsed time, Access probably just stops listening for results. If you were to ask Oracle for a list of queries that are still executing, I strongly suspect you'd still see yours listed.
--- EDIT #2 ---
As far as implementing your own "cancel" -- which isn't really a cancel, more of a "keep the UI responsive regardless of the state of a query" -- the keyword here is going to be asynchronous. You'll want to rewrite your code to execute asynchronously so that it isn't blocking the message pump for your UI. I'd start googling for "async query access" and see what pops up. One SO result came up:
Running asynchronous query in MS Access
as well as a decent starting point at xtremevbtalk.com:
http://www.xtremevbtalk.com/showthread.php?t=82631
In effect, instead of firing off code that blocks execution until either a timeout occurs or a result set is returned, you'll be asking Access to kick off the code behind the scenes. You'll then set up events that fire when something further happens, such as letting the user know that the timeout occurred (failure) or populating a grid with results (success).
One of the more interesting "features" of ColdFusion is how it handles external requests. The gist of it is that when a query is made to an external source through <cfquery> or any other such external request, CF passes the request on to a specific driver, and at that point CF itself is unable to suspend it. Even if a timeout is specified on the query or in the cfsetting, it is flatly ignored for all external requests.
http://www.coldfusionmuse.com/index.cfm/2009/6/9/killing.threads
So with that in mind, the issue we've run into is that somehow the communication between our CF server and our MySQL server sometimes goes awry and leaves behind hung threads. They have the following characteristics.
The hung thread shows up in CF and cannot be killed from FusionReactor.
There is no hung thread visible in MySQL, and no active running query (just the usual sleeps).
The database is responding to other calls and appears to be operating correctly.
Max connections have not been reached for the DB nor the user.
It seems to me the only likely candidate is that somehow CF is making a request and MySQL is responding to it, but CF misses the answer and keeps the thread open, waiting for a response from MySQL that never comes. That would explain why the database shows no signs of problems while CF keeps a thread open waiting for the mysterious answer.
Usually these hung threads appear randomly on otherwise working scripts (such as posting a comment on a news article). Even while one thread is hung for such a script, other requests for the same script go through, which would imply that the script isn't necessarily at fault, but rather the conditions it faced when it was executed.
We ran some tests to determine that it was not a MySQL-generated max_connections error: we created a user, gave it 1 max connection, tied up that connection with a SLEEP(1000) query, and executed another query. Unfortunately, it correctly errored out without generating a hung thread.
So, I'm left at this point with absolutely no clue what is going wrong. Is there some other connection limit or timeout which could be causing the communication between the servers to go awry?
One of the things you should start to look at is the hardware between the two servers. It is possible that you have a router, bridge, or NIC that is dropping occasional packets. This can result in the MySQL box thinking it has completed the task while the CF server sits there and waits indefinitely for a complete response, creating a hung thread.
3com has some details on testing for packet loss here: http://support.3com.com/infodeli/tools/netmgt/tncsunix/product/091500/c11ploss.htm#22128
We had a similar problem with a MS SQL server. There, the root cause was a known issue in which, for some reason, the server thinks it's shutting down, and the thread hangs (even though the server is, obviously, not shutting down).
We weren't able to eliminate the problem, but were able to reduce it by turning off pooled DB connections and fiddling with the connection refresh rate. (I think I got that label right -- no access to administrator at my new employment.) Both are in the connection properties in Administrator.
Just a note: The problem isn't entirely with CF. The problem, apparently, affects all Java apps. Which does not, in any way, reduce how annoyed I get by this.
Long story short, I believe the cause was ColdFusion 8's image processing. It was just buggy, and now in CF9 I have never seen that problem again.