Is there any way I could keep the DBI session active even after the script exits?
http://mysqlresources.com/documentation/perl-dbi/connect
Basically, I need to call the Perl (DBI) script multiple times with different parameters (and decide pass/fail after it completes). Each time it's called, Perl makes a new connection to MySQL and destroys it on exit, which by itself adds a considerable amount of delay.
Just wondering if there is any way I could store the session and reuse it on future runs?
Your connection and its associated socket are process-specific, so there's no way of keeping them alive after your process terminates.
You should, however, be able to tune your server so that connecting is faster. A common culprit is the reverse DNS lookup MySQL performs on each new connection, which you can switch off by enabling the skip-name-resolve configuration parameter in my.cnf.
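If DNS lookups turn out to be the problem, the change is a single line in the server section of my.cnf (note that grant-table entries must then use IP addresses rather than host names):

[mysqld]
skip-name-resolve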
Barring that, you might either use MySQL Proxy to keep a pool of warm connections, or combine all your various operations into a single script that can run several stages without terminating.
I am creating a REST API that uses MySQL as the database. My confusion is: should I connect to the database on every request and release the connection at the end of the operation, or should I connect to the database when the server starts, make the connection globally available, and never worry about releasing it?
I would caution that neither option is quite wise.
The advantage of creating one connection for each request is that those connections can interact with your database in parallel, which is great when you have a lot of requests coming through.
The disadvantage (and the reason you might just create one connection on startup and share it) is obviously the setup cost of establishing a new connection each time.
One option to look into is connection pooling (https://en.wikipedia.org/wiki/Connection_pool).
At a high level, you establish a pool of open connections on startup. When you need to make a request, take one of those connections from the pool, use it, and return it when done.
There are a number of useful Node packages that implement this abstraction, you should be able to find one if you look.
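The question is about Node, but the acquire/use/return pattern looks the same in any language. As an illustration only, here is a minimal sketch in Ruby using the connection_pool and mysql2 gems; the host, credentials, table, and pool size are placeholder assumptions:

require 'connection_pool'
require 'mysql2'

# Created once at startup: a pool of 10 already-open connections.
POOL = ConnectionPool.new(size: 10, timeout: 5) do
  Mysql2::Client.new(host: 'localhost', username: 'api',
                     password: 'secret', database: 'app')
end

# Inside a request handler: borrow a connection, run the query, and let the
# pool take it back automatically when the block exits.
def find_user(id)
  POOL.with do |db|
    db.prepare('SELECT * FROM users WHERE id = ?').execute(id).first
  end
end

For Node specifically, the popular mysql and mysql2 npm packages ship a createPool helper that implements the same idea.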
I'm experiencing a problem with Rails and my MySQL RDS instance. I have my Rails app connected to it through our database.yml file with a pool of 10 (now 5) connections. The other day another user of the database tried running a stored procedure but it would not execute; it was stuck just hanging around waiting to run. The user looked at the processes and noticed that our Rails user had around 30 idle processes, so they killed some of those. The stored procedure then kicked off and ran without issue.
We are on an r3.xlarge instance and had ~100 total processes at the time of the problem. This doesn't seem alarmingly high to me, and I'm not sure why the procedure wouldn't execute without freeing up some of the processes. I guess my question is: is there a way to tell my Rails app to release some of these idle connections after x seconds, or a way to control these connections better? I can write a cron job which frees them up, but I'd love to do it the Rails/best way.
Thanks for any help!
It seems to me that you may have hit the maximum connections limit on the MySQL instance. You can run SELECT @@max_connections on your MySQL server to find out the limit.
I don't know of a way to force Rails to close its allocated DB connections. Each server process may use up to pool-size connections (i.e. 5 or 10 in your case) to the DB for its threads. The distinction between threads and processes is important: if, for example, you have multiple workers serving your Rails app as separate processes (e.g. Puma can be configured like that), then each of those processes may allocate up to 5 or 10 connections. If you use background processes (Sidekiq etc.), they may also use up to this number of connections.
The ConnectionPool also provides a reaper that can be used to free allocated DB connections from dead threads, but unless your app is in deeper trouble this usually will not help (your threads are more likely idle than dead).
So my general advice is to estimate the maximum number of connections that all your Rails processes might need; if that is near or above the MySQL connection limit, either lower the connection pool size or decrease the number of Rails processes (workers) that may run.
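To make that arithmetic concrete, here is a purely hypothetical Puma setup; the numbers are illustrative, not taken from your app:

# config/puma.rb -- illustrative values only
workers 3        # three separate OS processes
threads 1, 5     # up to five threads per process

# With pool: 5 in database.yml, each worker can hold up to 5 connections,
# so this app alone may open 3 * 5 = 15. Add any Sidekiq or cron processes
# on top of that when comparing the total against MySQL's max_connections.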
If you need more help, please specify which application server you use to run your Rails app and how it is configured, and do the same for any background job workers.
Try setting reaping_frequency in your database.yml file:
reaping_frequency: frequency in seconds to periodically run the Reaper,
which attempts to find and recover connections from dead threads,
which can occur if a programmer forgets to close a connection at the
end of a thread or a thread dies unexpectedly. Regardless of this
setting, the Reaper will be invoked before every blocking wait.
(Default nil, which means don't schedule the Reaper)
Above documentation from: http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
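In database.yml that looks something like this (a sketch; 10 seconds is an arbitrary illustrative value, and the rest of your production settings stay as they are):

production:
  adapter: mysql2
  pool: 5
  reaping_frequency: 10   # run the Reaper every 10 seconds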
I am using the Sequel gem to connect to a database. The DB server is remote, though, so I have to log in over SSH first.
My Ruby script is set up to, every five minutes, SSH in, ping the database, then close the SSH connection. (SSH is handled by Net::SSH::Gateway.)
But I recently got a "too many connections" error on MySQL. When checking the MySQL process list, I found a bunch of sleeping connections from the Ruby script. So I added a db.disconnect line to my script to disconnect from the database before closing the SSH connection, and that seemed to fix it.
My question is, aren't database connections closed automatically? Why were there a bunch of sleeping SQL connections?
It's hard to say exactly what is going on since you didn't provide a link to the script you are using. Based on the limited information provided, you are probably creating new Sequel::Database objects every five minutes. A Sequel::Database object is designed to provide persistent connections to the database, and is usually created during application start and stored in a constant.
In general, you should create a single Sequel::Database object and just send a simple query every 5 minutes. Alternatively, you can provide a block to the method that creates your Sequel::Database object (Sequel.connect), so that the connection is automatically closed when the block returns.
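A minimal sketch of both approaches, assuming the mysql2 adapter; the connection URL is a placeholder for however you reach the forwarded port:

require 'sequel'

# Option 1: one long-lived Sequel::Database, created at startup and reused.
DB = Sequel.connect('mysql2://user:pass@127.0.0.1:3307/mydb')

def ping_database
  DB.test_connection   # checks a connection out of Sequel's pool and returns it
end

# Option 2: if you really do want a fresh connection every five minutes,
# pass a block to Sequel.connect; Sequel disconnects it when the block returns.
def ping_database_once
  Sequel.connect('mysql2://user:pass@127.0.0.1:3307/mydb') do |db|
    db.test_connection
  end
end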
I'm seeing a few of these errors during high load times:
mysql_connect() [function.mysql-connect]: [2002] Resource temporarily unavailable (trying to connect via unix:///var/lib/mysql/mysql.sock)
From what I can tell the MySQL server isn't hitting its max_connections limit, but something else is stopping it from serving the query. What other limits would MySQL be hitting?
I'm running RHEL 6.2 64-bit with MySQL 5.5.21.
Let's assume your system is currently Unix-based (as given in your problem statement). If this is correct, here's the set of issues you may be running into:
You've run out of memory available to MySQL.
This is the most likely problem you're facing. Each connection MySQL holds open requires memory to function, and if that resource is exhausted, no further connections can be made. Of course, the memory footprints and maximum packet sizes of various operations can be tuned in your equivalent of my.cnf if you discover this to be an issue.
Here's an additional thread that can help there, but you may also consider using simpler profiling tools like top to get a good ballpark estimate of what's going on.
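As a rough illustration of the kind of tuning meant here, these are some of the global and per-connection memory knobs in my.cnf; the values are placeholders, not recommendations:

[mysqld]
max_allowed_packet      = 16M    # largest packet/row the server will accept
innodb_buffer_pool_size = 1G     # shared cache, usually the biggest consumer
sort_buffer_size        = 2M     # allocated per connection when sorting
read_buffer_size        = 256K   # allocated per connection for sequential scans
tmp_table_size          = 32M    # cap on each in-memory temporary table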
You've run out of file descriptors available to your MySQL user account.
Another common issue: if you're trying to service requests that require file I/O above the default limit of 1,024 open file descriptors, you will run into cases where the operation simply fails. This is because most systems specify a soft and a hard limit on the number of open file descriptors each user can have at one time, and walking over this threshold can cause problems.
This will usually leave a series of glaringly obvious signs in your log files. Check /var/log/messages and comparable directories (for example, /var/log/mysql) to see if you can find anything interesting.
You've run into a livelock or deadlock scenario where your thread is unsatisfiable.
Corollary to memory and file descriptor exhaustion, threads can time out if you've overstepped the computational load your system is capable of handling. It won't throw this error message, but this is something to watch out for in the future.
Your system is running out of PIDs available to fork.
Another common scenario: fork only has so many PIDs available for its use at any given time. If your system is simply overforked, it will cease to be able to service requests.
The easiest check for this is to see if any other services can connect through to the machine. For example, trying to SSH into the box and discovering that you cannot is a big clue.
An upstream proxy or connection manager has run out of resources and ceased servicing requests.
If you have any service layer between your client and MySQL, it bears inspecting to see if it has crashed, hung, or otherwise become unstable. The advice above applies.
Your port mapper has exhausted itself after 65,536 connections.
Unlikely, but again, a possible exhaustion case. Checking the trivial service connection as above is, ehm, also the best port of call here.
In short: this is a resource exhaustion scenario, up to and including the server simply being "down". You're going to have to profile your system further to see what you're blocking on. All the error message gives us in this case is the fact that the resource is unavailable to the client -- we'd need to see more information about the server to determine a more adequate remedy.
I still haven't found which limits it was hitting, but I did manage to work around the problem. There was a problem with our session table (in vBulletin), which uses the MEMORY engine. The indexes on this table were HASH, so when vBulletin purged the table once an hour it would lock it just long enough to hold up other queries and push MySQL to the limit of its resources.
Changing the indexes to BTREE (i.e. declaring them with USING BTREE instead of the default USING HASH for a MEMORY table) allowed MySQL to delete rows from the session table much more quickly and avoid whatever limits were being reached previously. The errors only started when we upgraded our master DB server to MySQL 5.5, so I'm guessing MEMORY tables are handled differently in the newer release.
See http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/ for information on the speed increase from using BTREE indexes instead of HASH on MEMORY tables.
Geez, this could be so many things. It could be that the socket buffer space is exhausted. It could be that MySQL is not accepting connections as fast as they are coming in and the backlog limit is reached (though I'd expect that to give you a "Connection refused" error; I don't know for sure that's what you'd get for a Unix domain socket). It could be any of the things @MrGomez pointed out.
Since you are running Apache and MySQL on the same server and this is a problem under high load, it could well be that Apache is starving the system of some resource and you're just not seeing (noticing?) the dropped/failed incoming connections/requests in your logs.
Are you using connection pooling? If not, I'd start there.
I'd also look for errors in the Apache logs and syslog around the same time as the mysql_connect error and see what else turns up. I'd especially recommend getting MySQL moved over to its own separate dedicated server.
In my case, I was working with JSON data types with PDO (PHP Driver).
I was using fetch to retrieve one item but forgot to add LIMIT 1 to the query. Adding it solved the problem.
I am writing a DB-logging Ruby gem which will simply take a job out of a Beanstalk queue and write it to the DB.
That is, one process on Server A puts a job (that it wants to log) into the Beanstalk queue on Server B, and my logging process on Server B takes it out and writes it to the MySQL DB on Server B.
I want to know whether this is worth it.
Is putting a job in the Beanstalk queue faster than writing to the DB? Or could the process that wants to log just write directly to the DB instead of going through the logging process?
Note that both the Beanstalk server and the DB are on another server.
Beanstalk internally makes a socket call from Server A to Server B.
I believe MySQL would need to do the same as well?
So is writing to MySQL on another server going to be slower than putting the job in the Beanstalk queue?
It'll be much faster, primarily because Beanstalkd jobs are, by default, stored in memory and are lost if, for example, you lose power on your server, whereas MySQL is a strongly ACID-compliant relational database and will therefore go to a lot of effort to flush each of your logs to disk.
I think you'll find, after you do some benchmarking with a lot of logs being made by your system, that disk I/O will be your limiting factor rather than the speed of TCP/IP sockets. Your current system's advantage is that when Server A files a log on Server B's beanstalkd instance it takes up very little of Server A's time, and Server B can periodically flush many logs at once from beanstalkd to MySQL, making the process more efficient. The disadvantage is that the more you batch up the logs, the more logs you will lose in the event of a software or power failure, unless you use beanstalkd's "-b" parameter, which makes jobs durable by writing them to disk (and hence makes the process slower).
Of course, the only way to truly settle this question is to benchmark!
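A rough benchmarking sketch in Ruby, assuming a recent beaneater gem plus sequel and mysql2; the host names, credentials, tube, table, and payload below are placeholders:

require 'benchmark'
require 'beaneater'
require 'json'
require 'sequel'

tube = Beaneater.new('server-b:11300').tubes['logs']              # beanstalkd on Server B
db   = Sequel.connect('mysql2://logger:secret@server-b/logging')  # MySQL on Server B
payload = { level: 'info', msg: 'request served', at: Time.now.to_f }.to_json

Benchmark.bm(12) do |x|
  x.report('beanstalkd:') { 1_000.times { tube.put(payload) } }
  x.report('direct SQL:') { 1_000.times { db[:logs].insert(body: payload) } }
end

Run it from Server A so the numbers include the network hop the question is actually about.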