How does Drupal 6 interact with MySQL for connections and transactions? Is connection pooling used? How are transactions handled? At what level are these things managed by Drupal, and at what level are they handed off to MySQL?
I did a good amount of searching on the web and within Stack Overflow, but I mainly found articles about tweaking Drupal performance and scaling.
From the Acquia support team:
The number of connections would vary based on activity, but you can boil it down as you mention here: one connection per user request. There is no concept of connection pooling or persistent connections in Drupal.
Sometimes it helps to get a handle on the Database Abstraction Layer (how Drupal talks to the database) and the bootstrap process (see http://api.drupal.org/api/drupal/includes--bootstrap.inc/6) for a more detailed walk of how it works.
I am using this in my node.js app:
var mysql = require('mysql');
I want my app to perform as fast as possible. I have many functions that connect to MySQL. There are two approaches I am familiar with:
1. For every request, I make a new connection, execute the query, and then close the connection.
2. Open the connection once and save it in a global variable, and never close it. Every request that comes in then uses the globally saved open connection.
Which is generally better to use? Also, for number 2, if the server shuts down unexpectedly, the SQL connection is never closed. Is that bad?
Thanks
Approach 2 is faster, but to avoid the potential problem of connections dropping unexpectedly, you'll have to implement a testing mechanism for every segment that queries the database (e.g. count the number of returned rows).
To take this approach further, you can define a connection bank or pool, where you handle connection testing and distribution. The basic idea is to keep a number of connections to the database open and hand out only healthy connections to consumers (functions or objects that query the database). As Andrew mentions in the comments, you can check this question: node.js + mysql connection pooling
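For illustration, the mysql module from the question already ships with such a pool; here is a minimal sketch, assuming placeholder credentials and an arbitrary connectionLimit:

var mysql = require('mysql');

// One pool for the whole app instead of one connection per request.
var pool = mysql.createPool({
  host: 'localhost',
  user: 'app_user',     // placeholder credentials
  password: 'secret',
  database: 'app_db',
  connectionLimit: 10   // upper bound on simultaneously open connections
});

// pool.query() borrows a connection, runs the query, and returns it to the pool.
pool.query('SELECT 1 + 1 AS solution', function (err, rows) {
  if (err) throw err;
  console.log(rows[0].solution); // 2
});

The pool also discards connections that hit fatal errors, which covers much of the per-query health checking described above.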
Since the database is an essential asset of a project, if this is not a homework or learning project, it might not be a bad idea to explore third-party libraries, where a lot of the connection and security details are handled and automated for you.
So I have developed this website with Symfony3 and Doctrine. I have one major concern about performance with MySQL, more specifically the number of simultaneously open connections.
For the moment, one to five users are online on the website. What happens if, let's say, 1,500 users connect within one minute? Does Symfony3 or Doctrine handle this kind of situation? How can I be sure the website doesn't go down with the MySQL Too many connections error?
And if I go up to 5,000? Or 10,000? The server has 4GB of RAM and a 2.40GHz single-core processor, but I wouldn't worry about the hardware, as I'm more concerned about MySQL.
These situations have already happened in the past, but I was then running the website with WordPress and the W3 Total Cache plugin. Should I consider using a cache manager such as memcached?
In short, I'm concerned about the website becoming unavailable in case of sudden high traffic (I thought of the MySQL Too many connections error first, but I might be missing something even more important).
Thanks for enlightening me on this one, as I'm not fully aware of Symfony's performance issues.
I believe it does open one connection per visitor. Regardless of whether it does or not, however, neither Symfony nor Doctrine has a magic bullet to handle every load/connection scenario.
Why don't you use a load testing tool (there are many) and see how it actually pans out? In my experience, predicting bottlenecks is useless, as they always crop up where you least expect them.
For example, the MySQL connection limit is only one part of the optimisation puzzle. It's no good just worrying about connection limits; you need to respond to web requests as quickly and efficiently as possible to free up MySQL connection resources (and the other resources your app is using). So if your server is slow, you will run out of connections (or some other resource) almost immediately under significant load, regardless of MySQL connection limits.
That said, those server specifications seem a little low for 5-10k users per minute. I wouldn't expect a machine like that to handle that kind of load without some serious optimisation/caching/etc.
The Symfony performance page is a good starting point, and there is also a good article on caching; there's a ton of material available on the subject. Good luck! :)
If you use PHP-FPM, it depends on pm.max_children in fpm/pool.d/www.conf.
pm.max_children refers to the maximum number of concurrent PHP-FPM processes allowed to exist in such a pool. If the volume of incoming requests requires the creation of more PHP-FPM processes than the number allowed by the max_children limit, those additional requests are backlogged in a queue to await service.
So when pm.max_children (www.conf) is greater than max_connections (my.cnf) and the number of concurrently active PHP processes exceeds max_connections, you will get "Too many connections".
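As a rough sketch, assuming each PHP-FPM worker holds at most one MySQL connection at a time, keeping pm.max_children below max_connections avoids the error (the values below are illustrative, not recommendations):

; fpm/pool.d/www.conf
pm.max_children = 80

# my.cnf
[mysqld]
# keep this above pm.max_children, plus headroom for other clients
max_connections = 100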
I'm using a multi-tenant architecture following the article Dynamic DataSource Routing, but creating new tenants (datasources) dynamically (on user registration).
Everything is running fine, but I'm worried about scalability. The app is read-heavy, and today we have 10 tenants, but we will open the app to the public and this number will increase a lot.
Each user datasource is created using the following code:
// One pool per tenant, created when the tenant registers.
BasicDataSource ds = new org.apache.commons.dbcp.BasicDataSource();
ds.setDriverClassName(Driver.class.getName());
ds.setUrl(dsUrl); // tenant-specific JDBC URL (omitted in the original snippet)
ds.setUsername(dsUser);
ds.setPassword(dsPassword);
ds.setPoolPreparedStatements(true); // cache prepared statements per connection
ds.setMaxActive(5);  // at most 5 connections checked out at once
ds.setMaxIdle(2);    // keep at most 2 idle connections in the pool
ds.setValidationQuery("SELECT 1+1"); // cheap query used to test connections
ds.setTestOnBorrow(true); // validate each connection before handing it out
It means each tenant pool allows a maximum of 5 active connections and keeps up to 2 of them around when idle (maxIdle is a cap on idle connections, not a minimum).
How many connections and schemas can this architecture support on a MySQL server (4 CPUs at 2.3GHz / 8GB RAM / 80GB SSD), and how can I improve it by changing the datasource parameters or the MySQL configuration?
I know the answer depends on a lot of additional information; just ask in the comments.
In most cases you will not have more than 300 connections per second, provided you add a good caching mechanism like memcached. If you are seeing more than 1,000 connections per second, you should consider persistent connections and connection pools.
I'm currently developing plugins for Bukkit, and a lot of them need a database connection. Now I'm wondering whether it would be better to have just one plugin that handles the connection for all the others.
The question behind that is whether it is good to keep a connection open even if there are no queries for some minutes (which may happen), or whether I should establish a new connection for each query.
It is a good idea to have one class/plugin for handling the database, but the connection should not stay open all the time; make sure the connection is open only for the time taken by the query.
Many applications use connection pools to have a number of connections readily available for running queries. This reduces the number of protocol negotiations the database driver has to do. It is especially useful for applications that need fast access to the underlying data yet have longer idle periods between requests; e-commerce applications like webshops are a good example.
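To make the borrow-and-return pattern concrete, here is a sketch using the node mysql driver (a Bukkit plugin would use a Java pool such as Commons DBCP or HikariCP instead, but the pattern is the same; the names and credentials are placeholders):

var mysql = require('mysql');

var pool = mysql.createPool({
  host: 'localhost',
  user: 'shop',        // placeholder credentials
  password: 'secret',
  database: 'shop',
  connectionLimit: 5
});

// Borrow a connection, run the query, and always give the connection back.
pool.getConnection(function (err, connection) {
  if (err) throw err;
  connection.query('SELECT id, name FROM products LIMIT 10', function (err, rows) {
    connection.release(); // return it to the pool rather than closing it
    if (err) throw err;
    console.log(rows);
  });
});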
I have been researching this for a while but have found no convincing answer.
According to the MySQL documentation, the default number of connections is less than two hundred, and max_connections can be set to 2000 on a Linux box as long as you have enough resources. I think this number is far from enough for real-world deployments, as there might be millions of people visiting your website at the same time.
There are a couple of articles about optimizing to reduce the time cost of each query, but none of them tells me the root cause of this issue. I think there must be some mechanism, like a queue, to prevent massive numbers of connections from being opened simultaneously; otherwise you will eventually get a "Too many connections" exception.
Does anyone have expertise in this area? Thank you.
There are several options.
Connection pooling
Queuing, as you mentioned: if too many clients connect at the same time, the application layer should handle this exception, put the request to sleep for a short period of time, and try again (see the sketch after this list). Requests lasting more than a couple of seconds should usually be banned in such a high-traffic environment.
Load balancing through replication and/or clustering
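A minimal sketch of the queuing/retry idea, using the node mysql driver (the retry count and backoff delay are arbitrary placeholders):

var mysql = require('mysql');

// Retry with a short, growing sleep when MySQL reports "Too many connections".
function connectWithRetry(attempt, callback) {
  var connection = mysql.createConnection({
    host: 'localhost',
    user: 'app',       // placeholder credentials
    password: 'secret',
    database: 'app'
  });
  connection.connect(function (err) {
    if (err && err.code === 'ER_CON_COUNT_ERROR' && attempt < 5) {
      // Server is saturated: back off, then try again.
      setTimeout(function () {
        connectWithRetry(attempt + 1, callback);
      }, 200 * attempt);
    } else {
      callback(err, connection);
    }
  });
}

connectWithRetry(1, function (err, connection) {
  if (err) throw err; // still failing after 5 attempts
  // ... run queries, then connection.end();
});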
Normally, your application is supposed to reuse connections that are already established. However, the language you chose to implement your application in introduces limitations: if you use Java or .NET you can have a pool of connections; for PHP that is not the case, and you can check this discussion.
If you exceed max_connections, you do get a Too many connections error. But if you really have 1 million users hitting your web server at the exact same time, you can't handle that with one server anyway; 1 million concurrent connections requires a very big farm to handle.
However, the client of your database is a web app, and that web app usually connects to the database through an abstraction called a connection pool, which does limit the number of connections to the database on the client side, as long as all database connections go through that same pool.
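With the node mysql driver, for instance, the pool enforces exactly this client-side cap and queues the overflow (the numbers here are illustrative):

var mysql = require('mysql');

var pool = mysql.createPool({
  host: 'localhost',
  user: 'app',              // placeholder credentials
  password: 'secret',
  database: 'app',
  connectionLimit: 20,      // never open more than 20 connections to MySQL
  waitForConnections: true, // queue callers when all 20 are busy...
  queueLimit: 100           // ...and fail fast once 100 callers are waiting
});

// The 21st concurrent query waits in the pool's internal queue instead of
// opening a new connection and risking "Too many connections" on the server.
pool.query('SELECT COUNT(*) AS n FROM users', function (err, rows) {
  if (err) throw err;
  console.log(rows[0].n);
});

As long as every part of the application goes through this one pool, the server never sees more than connectionLimit connections from that client.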