I'm running a Rails 2.3.5 application, which lets me pool MySQL connections to my database. But I remember reading that my Mongrel servers are single-threaded. What's the point of having a connection pool for a single-threaded application? Is there a way to multi-thread my app?
Also, do connection pools account for the fact that Ruby 1.8 has "green" threads?
Cheers!
Manage Connections
The major benefit of connection pooling for a single-threaded server like Mongrel/Passenger is that the connection is established and maintained in a Rack handler, outside the main Rails request processing. This allows a connection to be established once rather than many times as it's used in different ways. The goal is to reuse the established connection and minimize the total number of connections, which avoids reconnecting within a given request-processing cycle and possibly even between requests (if I recall correctly).
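For what it's worth, the pool size is configured per environment in config/database.yml; this is a sketch with illustrative names and values:

```yaml
# config/database.yml -- "pool" caps how many connections
# this process will keep open (database name is illustrative)
production:
  adapter: mysql
  database: myapp_production
  pool: 5   # a single-threaded Mongrel rarely needs more than 1-2
```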
Multiple Concurrent Connections
Although most setups (Mongrel/Passenger) are single-threaded and can only use a single connection at a time, JRuby and some environments/app servers have full multi-threading support. Rails has been thread-safe since 2.2.
TL;DR:
The pool establishes connections automatically. Some setups do use multiple concurrent DB connections from the pool.
Related
I have a database that thousands of users need to connect to (via ODBC) for very brief periods (it's a subscription-licensing database for a Win32 desktop app). They connect, get their approval to run, and disconnect.
max_connections is set to 1000, but I'm not seeing the reuse I would expect server-side. The server currently has about 800 processes/connections sleeping (and another 200 connected to real data in other databases on the same server), yet a new attempt by a client app was rejected with 'too many connections'.
What am I missing?
I've increased max_connections to 1500 for now, but if that just means another 500 sleeping connections, it's not a long-term solution. I'm pretty sure clients are disconnecting properly, but I'm adding some diagnostics to the Win32 app just in case.
MariaDB 10.3.11
with MySQL ODBC 5.3 ANSI Driver
It's normal to see a lot of sessions "Sleeping". That means the client is connected, but not executing a query at this moment. The client is likely doing other tasks, before or after running an SQL query. Just like if you are logged into a server with ssh, most of the time you're just sitting at the shell prompt not running any program.
It's up to you to design your clients to wait to connect until they need data, then disconnect promptly after getting their data. It's pretty common in apps that they connect to the database at startup, and remain connected. It's also pretty common in some frameworks to make multiple connections at startup, and treat them as a pool that can be used by multiple threads of the client app. It's your app, so you should configure this as needed.
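The "connect only when you need data, disconnect promptly" pattern described above can be sketched like this in Ruby (StubConnection is a stand-in for a real driver object, an assumption for illustration; a real ODBC/MySQL client would open a network connection instead):

```ruby
# StubConnection stands in for a real driver object; a real client
# would open a network connection in initialize and close it in close.
class StubConnection
  attr_reader :closed
  def initialize
    @closed = false
  end
  def close
    @closed = true
  end
end

# Connect as late as possible and always disconnect promptly,
# even if the work inside the block raises.
def with_short_lived_connection
  conn = StubConnection.new   # substitute your driver's connect call here
  yield conn
ensure
  conn.close if conn
end

with_short_lived_connection do |conn|
  # ... check the subscription license here, then fall through and disconnect ...
end
```

The `ensure` clause is the important part: the connection is released whether the licensing check succeeds or raises, so no sleeping connection is left behind.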
Another thing to try is to enable the thread pool in the MariaDB server. See https://mariadb.com/kb/en/thread-pool-in-mariadb/
This is different from a client-side connection pool. The thread pool allows many thousands of clients to think they're connected, without allocating a full-blown thread in the MariaDB server for every single connection. When a client has something to query, at that time it is given one of the threads. When that client is done, it may continue to maintain a connection, but the thread in the MariaDB server is reallocated to a different client's request.
This is good for "bursty" workloads by many clients, and it sounds like your case might be a good candidate.
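Enabling the thread pool is a server-side configuration change; the relevant settings look roughly like this (key names per the MariaDB docs; values are illustrative):

```ini
# my.cnf -- switch MariaDB from one-thread-per-connection to the thread pool
[mysqld]
thread_handling = pool-of-threads
thread_pool_size = 8   # illustrative; defaults to the number of CPUs
```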
I am creating a REST API that uses MySQL as its database. My confusion is this: should I connect to the database on every request and release the connection at the end of the operation, or should I connect to the database when the server starts, make the connection globally available, and never release it?
I would caution that neither option is quite wise.
The advantage of creating one connection for each request is that those connections can interact with your database in parallel, which is great when you have a lot of requests coming through.
The disadvantage (and the reason you might just create one connection on startup and share it) is obviously the setup cost of establishing a new connection each time.
One option to look into is connection pooling https://en.wikipedia.org/wiki/Connection_pool.
At a high level, you establish a pool of open connections on startup. When you need to make a request, remove one of those connections from the pool, use it, and return it when done.
There are a number of useful Node packages that implement this abstraction, you should be able to find one if you look.
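The checkout/use/return cycle described above can be sketched in a few lines (shown in Ruby for brevity; Node packages such as generic-pool implement the same idea, and FakeConnection is a stand-in for a real driver object):

```ruby
# FakeConnection stands in for a real driver object -- substitute your client.
class FakeConnection
  @@count = 0
  attr_reader :id
  def initialize
    @@count += 1
    @id = @@count      # a serial number so we can see connections being reused
  end
  def self.count
    @@count
  end
end

class ConnectionPool
  def initialize(size)
    @pool = Queue.new  # Queue is Ruby's thread-safe FIFO
    size.times { @pool << FakeConnection.new } # pay the setup cost once, up front
  end

  # Check out a connection, yield it, and always return it to the pool.
  def with_connection
    conn = @pool.pop   # blocks if every connection is in use
    yield conn
  ensure
    @pool << conn
  end
end

pool = ConnectionPool.new(2)
100.times { pool.with_connection { |conn| conn.id } } # reuses the same 2 connections
```

A hundred requests are served by exactly two connections: the setup cost is paid twice at startup instead of a hundred times.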
I am connecting to a remote MySQL database in my node.js server code for a web app. Is there any advantage to using a connection pool when I only have a single instance of a node.js application server running?
Connection pools are per application instance. When you connect to the DB, you do it from that particular instance, so the pool is scoped to that instance. The advantage of creating a pool is that you don't create and close connections very often, which is in general a very expensive process. Instead, you maintain a set of open connections, idle and ready to be used whenever there is a need.
Update
In Node there is the async.parallel() construct, which allows you to launch a set of tasks asynchronously. Imagine that each of those tasks represents a single query. If you have only a single connection, every task must use that same one, and it quickly becomes a bottleneck. Instead, if you have a pool of available connections, each task can use a separate connection until the pool is exhausted. Check this for a more detailed reference.
I have a Rails ActiveRecord project that has been scaled out to serve approximately 60-100k requests per minute. We use AWS, and it takes about five c4.xlarge EC2 instances to serve this many requests. We have optimized the system to serve 99.99% of those requests from a Redis cache, leaving our MySQL DB barely used.
This is great and all, but we keep running into connection limits for MySQL. Amazon RDS limits the number of connections we can have, and it seems silly to up our RDS instance size just so we can hold a larger number of sleeping connections. We profiled the RDS server, and it gets maybe 10-50 queries a day, depending on how many times we update the system.
Is there any way to keep the ActiveRecord connection pool from reserving connections?
We tried simply lowering the pool size, and it did reduce the number of connections, but then we started getting:
(ActiveRecord::ConnectionTimeoutError) "could not obtain a database connection within 5 seconds
What I would like is for the Rails project to stop pre-allocating connections and only open them up when they are necessary. I'm not well-versed enough in the Rails framework and ActiveRecord to understand how the system reserves connections, or why we get ConnectionTimeoutErrors even though the application isn't making any DB calls.
I know this is very basic, but I want to clarify some concepts about MySQL connections. I have the following scenario.
The DB server and web server are in different locations.
Web server is running an article based web site.
Articles data is stored in db server.
Web server is delivering 100 articles/pages per second.
My questions are as follows:
Can a single connection between the web server and DB server handle it?
How many connections would be created by default?
If I think of connections as pipes, what is the I/O capacity of each connection?
Is there any relationship between the DB server's RAM, processor, and OS, and the number of connections?
Thanks in advance
Can a single connection between the web server and DB server handle it?
A single connection can pass several requests, but not at the same time, and only for a single client process. So the best scaling approach might be one connection pool per web server process, where threads can obtain an established connection to perform their request, and return the connection to the pool once they are done.
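As a toy illustration of that one-pool-per-process approach, here is a Ruby sketch in which more request threads than connections share one pool (the strings are stand-ins for real established connections):

```ruby
pool = Queue.new
4.times { |i| pool << "conn-#{i}" } # stand-ins for 4 established connections

# Eight "request" threads share four connections; Queue#pop blocks until one is free.
used = Queue.new
threads = 8.times.map do
  Thread.new do
    conn = pool.pop   # check out (blocks while the pool is empty)
    used << conn      # record which connection served this request
    sleep 0.01        # stand-in for running the query over the wire
    pool << conn      # return the connection for the next request
  end
end
threads.each(&:join)
```

Each request waits only for a free connection, never for a fresh TCP/MySQL handshake, and the four connections end up back in the pool for the next batch.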
How many connections would be created by default?
Depends on the client language, the MySQL connector implementation, the frameworks you use, the web server configuration, and probably a bunch of other details like this. So your best bet is to simply look at the list of network connections to the MySQL service, e.g. using lsof or netstat.
If I think of connections as pipes, what is the I/O capacity of each connection?
The limiting factor will probably be shared resources, like the network bandwidth or processing capabilities at either end. So the number of connections shouldn't have a large impact on the data transfer rate. The overhead to establish a connection is significant, though, which is why I suggested reducing the number of connections using pooling.
Is there any relationship between the DB server's RAM, processor, and OS, and the number of connections?
Might be, if some application makes choices based on these parameters, but in general I'd consider this rather unlikely.