How to properly handle persistent connection state with node-mysql connection pooling

If I use node-mysql's connection pooling feature, wouldn't this pose a problem in my app because connections are re-used after connection.end()? Here's why I'm concerned: When you define a variable in SQL, or start a transaction, the variable or transaction is persistent until the connection is closed. Since the connection is never actually closed, but instead re-used, the variables and transactions can seep into another routine that doesn't expect the variable or expect the transaction to exist; it expects a fresh connection.
Variables could pose a big problem; a routine could never safely assume a variable to be undefined, because the connection carries state left over from previous use.
Transactions could pose an even bigger issue if one routine were ever to fail to roll back or commit a transaction before calling end(). If the next routine to use this connection doesn't deal with transactions, then all its queries would be appended to the open transaction and halted, never to be executed. I could just be careful when writing my app that such an error never occurs, but mistakes happen, and if one does, it'd be extremely difficult to debug (I wouldn't know which routine is mishandling connections, bad bad).
So these are some of my concerns when thinking about using pooling. Surely I'm not the only person to have thought of these issues, so please shed some light on how to properly handle pooled connections, if you'd be so kind. :)
Thank you very much.

All transactions happen in the context of a single connection. After you end or terminate the connection, you can no longer operate on that transaction. The general pattern is open, query, close; unless you have a long-running transaction, you won't have any problems with queries or variables bleeding over into other connections.
In the event of a long running transaction, you will have to manage that connection object to make sure it exists and is only used by code that operates within that transaction.
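The open-query-close pattern is language-agnostic; here is a minimal sketch using Java and a generic JDBC DataSource-backed pool (in node-mysql the same shape is pool.getConnection() followed by connection.release()). The class name, query, and table are invented for illustration.

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class UserDao {
        // Open-query-close against a pooled DataSource: the connection is
        // checked out, used for one unit of work, and handed straight back.
        static int countUsers(DataSource pool) throws SQLException {
            try (Connection conn = pool.getConnection();
                 PreparedStatement ps =
                         conn.prepareStatement("SELECT COUNT(*) FROM users");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            } // close() returns the connection to the pool, not to the server
        }
    }

Because each routine checks out, uses, and returns its own connection within one scope, no other routine ever sees its variables or an unfinished transaction.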

Related

c3p0 seems to close active connections

I set unreturnedConnectionTimeout to release stale connections. I assumed this would only close connections without any activity, but it looks like it just closes every connection after the specified time.
Is this a bug or is this 'as designed'?
The manual states:
unreturnedConnectionTimeout defines a limit (in seconds) to how long a Connection may remain checked out. If set to a nonzero value, unreturned, checked-out Connections that exceed this limit will be summarily destroyed, and then replaced in the pool. Obviously, you must take care to set this parameter to a value large enough that all intended operations on checked out Connections have time to complete. You can use this parameter to merely work around unreliable client apps that fail to close() Connections.
From this I conclude that activity does not influence the discarding of connections. To me that sounds strange. Why throw away active connections?
Thanks,
Milo
I'm the author of c3p0, and of the paragraph you quote.
unreturnedConnectionTimeout is exactly what its name and documentation state: a timeout for unreturned Connections. It was implemented reluctantly, in response to user feedback, because it would never be necessary or useful if clients reliably checked in the Connections they check out. When it was implemented, I added a second unsolicited config param, debugUnreturnedConnectionStackTraces, to encourage developers to fix client applications rather than rely lazily on unreturnedConnectionTimeout.
There is nothing strange about the definition of unreturnedConnectionTimeout. Generally, applications that use a Connection pool do not keep Connections checked out for long periods of time. Doing so defeats the purpose of a Connection pool, which is to allow applications to acquire Connections on an as-needed basis without a large performance penalty. The alternative to a Connection pool is for an application to check out Connections and retain them for long periods of time, so they are always available for use. But maintaining long-lived Connections turns out to be complicated, so most applications delegate this to a pooling library like c3p0.
I understand that you have a preexisting application that maintains long-lived Connections, which you cannot easily modify. You would like a hybrid architecture between applications that maintain long-lived Connections directly and applications that delegate to a pool. In particular, what you want is a library that helps you maintain the long-lived Connections that your application is already designed to retain.
c3p0 is not that library, unfortunately. c3p0 (like most Connection pooling libraries) considers checked-out Connections to be the client's property, and does no maintenance work on them until they are checked back in. There are two exceptions to this: unreturnedConnectionTimeout will close() Connections out from underneath clients if they have been checked out for too long, and c3p0 will invisibly test checked-out Connections when Exceptions occur, in order to determine whether Connections that have experienced Exceptions are suitable for return to the pool or must instead be destroyed on check-in.
unreturnedConnectionTimeout is not the parameter you want. You would like something that automatically closes Connections when they are inactive for a period of time, but that permits Connections to be checked out indefinitely. Such a parameter might be called inactiveConnectionTimeout, and is a feature that could conceivably be added to c3p0, but it has not been. It probably will not be, because few applications hold checked-out Connections for long periods, and c3p0 is full of features that help you observe failures once Connections are checked in, or when Connections transition between checked-out and checked-in.
In your (pretty unusual) case, this means there is a feature you would like that simply is not provided by the library. I am sorry about that!
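For anyone who does want the leak-detection behavior as intended, here is a minimal sketch wiring the two parameters together on a plain ComboPooledDataSource; the JDBC URL, credentials, and the 300-second limit are placeholder values.

    import com.mchange.v2.c3p0.ComboPooledDataSource;

    public class PoolConfig {
        public static ComboPooledDataSource build() {
            ComboPooledDataSource cpds = new ComboPooledDataSource();
            cpds.setJdbcUrl("jdbc:mysql://localhost:3306/test"); // placeholder URL
            cpds.setUser("app");                                 // placeholder credentials
            cpds.setPassword("secret");
            // Destroy Connections that stay checked out longer than 300 seconds...
            cpds.setUnreturnedConnectionTimeout(300);
            // ...and log the stack trace of the offending checkout, so the leak
            // can be found and fixed rather than papered over.
            cpds.setDebugUnreturnedConnectionStackTraces(true);
            return cpds;
        }
    }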
Unreturned connections can still be active when they are destroyed; it depends on how long it takes to execute, e.g., a query on the database. You should set the timeout to a value larger than the longest operation you can expect in your application. If you know the value should be big enough and c3p0 is still closing active connections, then it means a connection leaked somewhere (maybe it was not closed properly).

Should I close mySQL connection in-between method calls?

So this question is a matter of good idea/bad idea. I am using a MySQL connection many times in a short amount of time. I have created my own method calls to update values, insert, delete, etc.
I am reusing the same connection for each of these methods, but I am opening and closing the connection at each call. The problem is that I need to check to make sure that the connection is not already open before I try to open it again.
So, the question is: Is there danger in just leaving the MySQL connection open in between method calls? I'd like to just leave it open and possibly improve speed while I am at it.
Thanks for any advice!
Generally speaking, no, you shouldn't be closing it if, in the same class / library / code scope, you're just going to open it again.
This is dependent on the tooling / connection library you're using. If you're using connection pooling, some libraries will not actually close the connection (immediately) but return it to the pool.
The only comment I'll make about reusing a connection is that if you're using connection-specific variables, those variables will still be valid on the same connection and may cause problems later if another query uses one of them and it holds a value from a past query that is no longer relevant; however, this would also raise questions about the suitability of the variable in the first place.
Opening a connection is fairly lightweight in MySQL (compared with other databases); however, you shouldn't be creating extra work if you can avoid it.
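One way to avoid the open-or-already-open bookkeeping from the question is to reuse lazily and reopen only when the connection has actually gone away. Here is a minimal sketch with plain JDBC; the class name, URL, and credentials are made up.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Hypothetical helper: keeps one connection open across method calls and
    // reopens it only when it is missing or no longer valid.
    class Db {
        private Connection conn;

        Connection get() throws SQLException {
            // isValid() round-trips to the server; 2 is a timeout in seconds.
            if (conn == null || !conn.isValid(2)) {
                conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test", "app", "secret");
            }
            return conn;
        }

        void close() throws SQLException {
            if (conn != null) conn.close(); // call once, when the program is done
        }
    }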

should I reuse mysql connect

I have a program that constantly queries a MySQL server. Each time I access the server I make a connection, then query it. I wonder if I can actually save time by reusing the same connection and only reconnecting when the connection is closed, given that I can fit many of my queries within the duration of the connection timeout and my program has only one thread.
Yes, this is a good idea.
Remember to use the timeout so you don't leave connections open permanently.
Also, remember to close it when the program exits (even after exceptions).
Yes, by all means, re-use the connection!
If you are also doing updates/deletes/inserts through that connection, make sure you commit (or roll back) your transactions properly, so that once you are "done" with the connection it is left in a clean state.
Another option would be to use a connection pooler.
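To make the "clean state" advice above concrete, here is a minimal JDBC sketch; the table, column, and parameter names are invented for illustration.

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class Transfer {
        // Runs one write inside an explicit transaction and always leaves the
        // reused connection with no transaction pending.
        static void debit(Connection conn, long accountId, BigDecimal amount)
                throws SQLException {
            conn.setAutoCommit(false);
            try {
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
                    ps.setBigDecimal(1, amount);
                    ps.setLong(2, accountId);
                    ps.executeUpdate();
                }
                conn.commit();              // transaction finished, no locks held
            } catch (SQLException e) {
                conn.rollback();            // never leave the connection mid-transaction
                throw e;
            } finally {
                conn.setAutoCommit(true);   // restore the default before reuse
            }
        }
    }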
Yes, you should reuse the connection, within reason. Don't leave a connection open indefinitely, but you can batch your queries together so that you get everything done, and then close it immediately afterwards.
Leaving a connection open too long means that under high traffic you might hit the maximum number of possible connections to your server.
Reconnecting often is just slow, causes a lot of unnecessary chatter, and is simply a waste.
Instead, you should look into using the mysql_pconnect function, which will create a persistent connection to the database. You can read about it here:
http://php.net/manual/en/function.mysql-pconnect.php

mysql connections. Should I keep it alive or start a new connection before each transaction?

I'm doing my first foray with MySQL and I'm unsure how to handle the connection(s) my application has.
What I am doing now is opening a connection and keeping it alive until I terminate my program. I do a mysql_ping() every now and then and the connection is started with MYSQL_OPT_RECONNECT.
The other option (I can think of), would be to start a new connection before doing anything that requires my connection to the database and closing it after I'm done with it.
What are the pros and cons of these two approaches?
What are the "side effects" of a long connection?
What is the most used method of handling this?
Cheers ;)
Some extra details
At this point I am keeping the connection alive and I ping it every now and again to check its status, reconnecting if needed.
In spite of this, when there is consistent concurrency, with queries happening in quick succession, I get a "Server has gone away" message, and after a while the connection is re-established.
I'm left wondering if this is a side effect of a prolonged connection or if this is just a case of bad mysql server configuration.
Any ideas?
In general there is a fair amount of overhead incurred when opening a connection. Depending on how often you expect this to happen it might be OK, but if you are writing any kind of application that executes more than just a very few commands per program run, I would recommend a connection pool (for server-type apps), or at least a single connection (or very few) kept open by your standalone app for some time and reused for multiple transactions.
That way you have better control over how many connections get opened at the application level, even before the database server gets involved. This is a service an application server offers you, but it can also be rolled by hand rather easily if you want to keep things smaller.
Apart from performance reasons a pool is also a good idea to be prepared for peaks in demand. When a lot of requests come in and each of them tries to open a separate connection to the database - or as you suggested even more (per transaction) - you are quickly going to run out of resources. Keep in mind that every connection consumes memory inside MySQL!
Also you want to make sure to use a non-root user to connect, because if you don't (I think it is tied to the MySQL SUPER privilege), you might find yourself locked out. MySQL reserves at least one connection for an administrator for problem fixing, but if your app connects with that privilege, all connections would already be used up when you try to put out the fire manually.
Unless you are worried about having too many connections open (i.e. over 1,000), you should leave the connection open. There is overhead in connecting/reconnecting that will only slow things down. If you know you are going to need the connection to stay open for a while, run this query instead of pinging periodically:
SET SESSION wait_timeout=#
Where # is the number of seconds to leave an idle connection open.
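If you issue that statement from code, it could look like the following sketch; conn is an already-open JDBC connection and 600 seconds is an arbitrary choice.

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    class SessionSetup {
        // Extends the idle timeout for a deliberately long-lived session,
        // so the server doesn't drop it between bursts of queries.
        static void extendIdleTimeout(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.execute("SET SESSION wait_timeout = 600"); // assumed value, in seconds
            }
        }
    }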
What kind of application are you writing? If it's a web script: keep it open. If it's an executable, pool your connections (if necessary; most of the time a singleton will do).

how to solve lock_wait_timeout, subsequent rollback and data disappearance from mysql 5.1.38

I am using TopLink with Struts 2 for a high-usage app; the app always accesses a single table, with multiple reads and writes per second. This causes a lock_wait_timeout error and the transaction rolls back, causing the data just entered to disappear from the front end. (MySQL's autocommit has been set to one.) The exception has been caught and sent to an error page in the app, but still a rollback occurs (it has to be a TopLink exception, as MySQL does not have the rollback feature turned on). The raw data file, ibdata01, shows the entry in it when opened in an editor. As this happens infrequently, I have not been able to replicate it in test conditions.
Can anyone be kind enough to provide some sort of way out of this dilemma? What sort of approach suits such high-access usage (constant reads and writes from the same table all the time)? Any help would be greatly appreciated.
What is the nature of your concurrent reads/updates? Are you updating the same rows constantly from different sessions? What do you expect to happen when two sessions update the same row at the same time?
If it is just reads conflicting with updates, consider reducing your transaction isolation on your database.
If you have multiple writes conflicting, then you may consider using pessimistic locking to ensure each transaction succeeds. But either way you will have a lot of contention, so you may want to reconsider your data model or your application's usage of the data.
See:
http://en.wikibooks.org/wiki/Java_Persistence/Locking
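As a rough illustration of the pessimistic-locking option, here is a plain-JDBC sketch; the table and column names are invented, and a JPA/TopLink lock mode would achieve the same effect at the ORM level.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class Reserve {
        // SELECT ... FOR UPDATE takes a row lock, so concurrent writers queue
        // behind this transaction instead of conflicting with it.
        static void reserveOne(Connection conn, long itemId) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT qty FROM inventory WHERE id = ? FOR UPDATE");
                 PreparedStatement update = conn.prepareStatement(
                    "UPDATE inventory SET qty = qty - 1 WHERE id = ?")) {
                lock.setLong(1, itemId);
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next() || rs.getInt(1) < 1) {
                        conn.rollback();    // nothing to reserve
                        return;
                    }
                }
                update.setLong(1, itemId);
                update.executeUpdate();
                conn.commit();              // releases the row lock
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            } finally {
                conn.setAutoCommit(true);
            }
        }
    }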
lock_wait_timeouts are a fact of life for transactional databases. The normal response should usually be to trap the error and attempt to re-run the transaction. Not many developers seem to understand this, so it bears repeating: if you get a lock_wait_timeout error and you still want to commit the transaction, then run it again.
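In code, that retry might look like the following sketch; error code 1205 is MySQL's ER_LOCK_WAIT_TIMEOUT, and the TxBody callback and retry limit of 3 are inventions for the example.

    import java.sql.Connection;
    import java.sql.SQLException;

    class RetryingTx {
        // Re-runs the transaction body when MySQL reports a lock wait timeout.
        static void runWithRetry(Connection conn, TxBody body) throws SQLException {
            int attempts = 0;
            while (true) {
                try {
                    conn.setAutoCommit(false);
                    body.run(conn);          // the transaction's queries
                    conn.commit();
                    return;
                } catch (SQLException e) {
                    conn.rollback();
                    if (e.getErrorCode() == 1205 && ++attempts < 3) {
                        continue;            // lock wait timeout: safe to re-run
                    }
                    throw e;
                } finally {
                    conn.setAutoCommit(true);
                }
            }
        }

        interface TxBody {
            void run(Connection conn) throws SQLException;
        }
    }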
Other things to look out for are:
- Persistent connections and not explicitly COMMIT'ing your transactions lead to long-running transactions that result in unnecessary locks.
- Since you have auto-commit off, if you log in from the mysql CLI (or any other interactive query tool) and start running queries, you stand a significant chance of locking rows and not releasing them in a timely manner.