Constant MySQL Connection or Connect when needed

I am building a little daemon which periodically (every 30 seconds) checks for new data and enters it in a local MySQL Database.
I was just wondering whether it is better to create a connection when the application launches and use it for the life of the process, or to open a connection only when there is new data, close it after the data has been added, and repeat that when new data arrives 30 seconds later?
Thank you.

I would recommend that you do whatever you find easiest to code. Don't waste any time trying to solve what will most likely be a non-problem.
If it turns out there is any difficulty with contention, connection limits, or other such things, you can fix it later.

That depends.
In your case the performance won't matter, since you won't be performing thousands of queries/logins per second and the new connection/login overhead is in (tens of) milliseconds.
If you use a single connection, you have to make sure your daemon handles sudden disconnections from the MySQL side and is able to recover from them. Also, if you ever move your application so that it runs on a different server than MySQL, be aware that many firewalls drop prolonged connections every now and then.
If you create a new connection every time and then disconnect when finished, things like firewalls cleaning up old connections won't bite you so easily.
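For what it's worth, the connect-per-cycle approach is only a few lines. Here is a minimal Python sketch, assuming the mysql-connector-python driver; the credentials, table name, and the check_for_new_data helper are all made up for illustration:

    import time
    import mysql.connector  # assumed driver; any DB-API driver works the same way

    def store_rows(rows):
        # Open a fresh connection for this cycle and close it when done,
        # so there is never a long-lived connection for a timeout to kill.
        conn = mysql.connector.connect(host="localhost", user="daemon",
                                       password="secret", database="mydb")  # placeholders
        try:
            cur = conn.cursor()
            cur.executemany("INSERT INTO readings (value) VALUES (%s)", rows)
            conn.commit()
        finally:
            conn.close()

    while True:
        rows = check_for_new_data()  # hypothetical polling function, returns a list of 1-tuples
        if rows:
            store_rows(rows)
        time.sleep(30)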

Related

Creating a pool of connections vs 1 permanent in MySQL

So this is more of a generic question but an important one for me and perhaps future googlers.
Since one can create a single connection and keep it alive for as long as the process using it is alive, and libraries can keep it healthy (reconnecting on failure, etc.), why would one use a pool?
I cannot see where the performance enhancement comes into play.
The queries just get queued the same way they would with one connection.
There is no 'parallel' processing.
Also, assuming the process and the DB are on the same server, there is no time lost sending the request over the network. In addition, no time is lost connecting or ending the connection with either option.
I can only see drawbacks, such as having to make sure the data selected is not currently being updated by another connection (and thus receiving stale data), etc.
I wanted to boost the performance of my MySQL DB and make it more scalable when I stumbled upon the pool vs. one permanent connection argument, without being sure whether I should change or not.
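(For context, the pooled pattern being weighed here looks roughly like the following Python sketch, using mysql-connector-python's pooling module with placeholder credentials. The point of the pool is that several threads can each hold their own connection at once, so queries need not be serialized behind a single session.)

    from concurrent.futures import ThreadPoolExecutor
    from mysql.connector import pooling  # assumed driver

    pool = pooling.MySQLConnectionPool(pool_name="app", pool_size=5,
                                       host="localhost", user="app",
                                       password="secret", database="mydb")  # placeholders

    def run_query(sql):
        conn = pool.get_connection()  # borrow one of the pooled connections
        try:
            cur = conn.cursor()
            cur.execute(sql)
            return cur.fetchall()
        finally:
            conn.close()  # returns the connection to the pool, not to the server

    # Five workers can query in parallel, each on its own connection.
    with ThreadPoolExecutor(max_workers=5) as ex:
        results = list(ex.map(run_query, ["SELECT 1"] * 10))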

Safely keeping MySQL connections alive

I'm working on a node.js application that connects to a MySQL server. The following likely isn't node.js-specific, though.
Currently, my code initializes a MySQL connection at the application start-up, and then it uses that connection every time it needs to make a query.
The issue I'm facing with my approach is that the connection tends to close after a period of time. I'm not sure how long that period of time is, but it seems to be at least several hours. I'm also not sure whether it's caused by inactivity.
In any case, I'm wondering what would be a better approach for managing MySQL connections long-term. Of possible approaches, I've considered:
1. Simply checking before each query whether the connection is still valid and, if not, reconnecting before executing the query.
2. Pooling MySQL connections. Would this be overkill for a fairly small application?
3. Periodically (every hour or so) executing a query, in case the closing is due to inactivity. However, this doesn't remedy possible cases not caused by inactivity.
4. Connecting and disconnecting before/after queries. Bad idea because of the overhead involved.
I'm leaning toward using one of the first two options, as you might imagine. Which of the options would be most reliable and efficient?
The best practice for Node.js seems to be to use a connection pool. See Node.js MySQL Needing Persistent Connection.
The default timeout for idle connections is 28800 seconds (8 hours), and it is configurable with the wait_timeout variable. See http://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_wait_timeout
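For option 1 from the list above, the check-then-reconnect step can be folded into a small wrapper. Here is a minimal sketch in Python rather than node.js (the idea carries over), using mysql-connector-python's ping() and placeholder connection details:

    import mysql.connector  # assumed driver

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="mydb")  # placeholders

    def query(sql, params=()):
        # Validate the connection first; reconnect transparently if the
        # server has already closed it (e.g. because wait_timeout expired).
        conn.ping(reconnect=True, attempts=3, delay=2)
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()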

Should I reuse my MySQL connection?

I have a program that constantly queries a MySQL server. Each time I access the server I make a connection and then query it. I wonder if I can actually save time by reusing the same connection and only reconnecting when the connection is closed, given that I can fit many of my queries within the connection timeout and my program has only one thread.
Yes, this is a good idea.
Remember to use the timeout so you don't leave connections open permanently.
Also, remember to close it when the program exits (even after exceptions).
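The "close it even after exceptions" part usually means a try/finally (or your language's equivalent). A Python sketch with placeholder connection details:

    import mysql.connector  # assumed driver

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="mydb")  # placeholders
    try:
        cur = conn.cursor()
        cur.execute("SELECT COUNT(*) FROM jobs")  # illustrative query
        print(cur.fetchone()[0])
    finally:
        conn.close()  # runs on normal exit and after an exception alike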
Yes, by all means, re-use the connection!
If you are also doing updates/deletes/inserts through that connection, make sure you commit (or roll back) your transactions properly, so that once you are "done" with the connection it is left in a clean state.
Another option would be to use a connection pooler.
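Concretely, the commit-or-rollback discipline mentioned above might look like this Python sketch (the table and column names are made up):

    def transfer(conn, amount, src, dst):
        # Commit on success, roll back on failure, so the connection is
        # always handed back in a clean state for the next use.
        cur = conn.cursor()
        try:
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                        (amount, src))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                        (amount, dst))
            conn.commit()
        except Exception:
            conn.rollback()
            raise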
Yes, you should reuse the connection, within reason. Don't leave a connection open indefinitely, but you can batch your queries together so that you get everything done, and then close it immediately afterwards.
Leaving a connection open too long means that under high traffic you might hit the maximum number of possible connections to your server.
Reconnecting often is just slow, causes a lot of unnecessary chatter, and is simply a waste.
Instead, you could look into the mysql_pconnect function, which creates a persistent connection to the database (note that the legacy mysql extension it belongs to has since been removed from PHP; mysqli offers the same behaviour via a "p:" host prefix). You can read about it here:
http://php.net/manual/en/function.mysql-pconnect.php

MySQL connections: should I keep one alive or start a new connection before each transaction?

I'm making my first foray into MySQL and I have a question about how to handle the connection(s) my application has.
What I am doing now is opening a connection and keeping it alive until I terminate my program. I do a mysql_ping() every now and then and the connection is started with MYSQL_OPT_RECONNECT.
The other option I can think of would be to start a new connection before doing anything that requires database access and to close it after I'm done.
What are the pros and cons of these two approaches?
what are the "side effects" of a long connection?
What is the most used method of handling this?
Cheers ;)
Some extra details
At this point I am keeping the connection alive and I ping it every now and again to check its status, reconnecting if needed.
In spite of this, when there is sustained concurrency with queries happening in quick succession, I get a "server has gone away" message, and after a while the connection is re-established.
I'm left wondering if this is a side effect of a prolonged connection or just a case of bad MySQL server configuration.
Any ideas?
In general there is a fair amount of overhead incurred when opening a connection. Depending on how often you expect this to happen it might be OK, but if you are writing any kind of application that executes more than just a very few commands per run, I would recommend a connection pool (for server-type apps) or, for a standalone app, at least a single connection (or very few) kept open for some time and reused for multiple transactions.
That way you have better control over how many connections get opened at the application level, even before the database server gets involved. This is a service an application server offers you, but it can also be hand-rolled rather easily if you want to keep things smaller.
Apart from performance reasons a pool is also a good idea to be prepared for peaks in demand. When a lot of requests come in and each of them tries to open a separate connection to the database - or as you suggested even more (per transaction) - you are quickly going to run out of resources. Keep in mind that every connection consumes memory inside MySQL!
Also you want to make sure to use a non-root user to connect, because if you don't (I think it is tied to the MySQL SUPER privilege), you might find yourself locked out. MySQL reserves at least one connection for an administrator for problem fixing, but if your app connects with that privilege, all connections would already be used up when you try to put out the fire manually.
Unless you are worried about having too many connections open (i.e. over 1,000), you should leave the connection open. There is overhead in connecting/reconnecting that will only slow things down. If you know you are going to need the connection to stay open for a while, run this query instead of pinging periodically:
SET SESSION wait_timeout=#
Where # is the number of seconds to leave an idle connection open.
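For example, SET SESSION wait_timeout = 3600; allows an hour of idle time (the value here is purely illustrative).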
What kind of application are you writing? If it's a web script: keep it open. If it's an executable, pool your connections (if necessary; most of the time a singleton will do).

MySQL ODBC timeout from R

I'm using R to read some data from a MySQL database using the RODBC package. The data is then processed and some results are sent back to the database. The problem is that the server closes the connection after about a minute of inactivity, and a minute is roughly the time needed to process the data locally. It's a shared server, so the host won't bump up the timeout.
I think there are two possibilities to get around this:
1. Open a connection before every database transaction and close it immediately after.
2. Send some small 'ping' command to the server every 30 seconds or so to let the server know that I'm still there.
I can implement the first fairly easily, but it seems pretty slow to constantly open and close connections. Does anyone know an efficient command for the second? Or is there a better way altogether?
The first solution is the one I prefer. It's really hard to do the latter with a single-threaded program like R: if R is busy running an analysis, there's no way for it to handle the ping. Unless you are doing hundreds of reads/writes, opening and closing the connection should not introduce an extreme amount of overhead.
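If you do attempt the ping route anyway, a trivial statement such as SELECT 1 is about as cheap a keepalive query as there is.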