I want to set up a MySQL database for a social networking website for my college.
My app can have at most 10,000 users. What is the maximum number of concurrent MySQL connections possible?
As per the MySQL docs: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_user_connections
maximum range: 4,294,967,295 (i.e. 2**32 - 1)
You'd probably run out of memory, file handles, and network sockets on your server long before you got anywhere close to that limit.
You might have 10,000 users total, but that's not the same as 10,000 concurrent users. In this context, what matters is how many scripts are running concurrently.
For example, if a visitor opens index.php and it makes a database query to fetch some user details, that request might live for 250 ms. You can shorten the life of those MySQL connections even further by opening and closing them only while you are actually querying, instead of leaving one open for the duration of the script.
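To illustrate that pattern (the question's context is PHP, but here is a minimal sketch in Python with mysql-connector-python; host, credentials, and table names are made up):

```python
import mysql.connector  # pip install mysql-connector-python

def get_user_details(user_id):
    # Open the connection just before the query...
    conn = mysql.connector.connect(
        host="localhost", user="app", password="secret", database="social"
    )
    try:
        cur = conn.cursor()
        cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        return cur.fetchone()
    finally:
        # ...and close it as soon as the result is back, so the connection
        # lives for milliseconds instead of the whole request.
        conn.close()
```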
While it is hard to devise a formula that predicts how many connections will be open at a time, I'd venture the following:
You probably won't have more than 500 active users at any given time with a user base of 10,000 users.
Of those 500 concurrent users, there will probably at most be 10-20 concurrent requests being made at a time.
That means you are really only establishing about 10-20 concurrent connections.
As others mentioned, you have nothing to worry about in that department.
I can assure you that raw speed ultimately comes down to the careful use of indexes, especially on large tables.
So I have been writing this NodeJS Quiz App. At first we are not expecting that many users, but over time we are looking at ~50,000 potential registered users.
Considering the DB of choice is MySQL, and also considering that not all registered users are going to be logged in at the same time, what is the maximum number of concurrent MySQL connections possible?
Also, how much RAM and CPU would I need to host this app?
When you are using node.js, you can simply use connection pooling; there is no obvious downside to pooling, and you establish far fewer connections (with less overhead).
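For example (sketched here in Python with mysql-connector-python purely to illustrate the idea; in node.js you'd reach for something like mysql2's createPool; credentials and pool size below are placeholders): a fixed set of connections is created once and reused across requests.

```python
from mysql.connector import pooling  # pip install mysql-connector-python

# Ten long-lived connections shared by all requests.
pool = pooling.MySQLConnectionPool(
    pool_name="quiz_pool",
    pool_size=10,
    host="localhost", user="quiz", password="secret", database="quiz_app",
)

def run_query(sql, params=()):
    conn = pool.get_connection()   # borrow an existing connection
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()
    finally:
        conn.close()               # returns the connection to the pool rather than closing it
```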
Beyond that, it all comes down to how many queries are actually executed and how much execution time they need. Here are some made-up numbers:
Let's say 50k users all go online at the same time, each user makes 1 interaction per 5s, and each interaction takes 10ms of DB time to execute; then 50k * 10ms / 5s = 100 concurrent connections needed.
Of course you need to plug in some real numbers; the read-to-write ratio matters a lot (reads can potentially run in parallel), and a cache can help a lot if reads make up a large share of your queries.
All in all, you are unlikely to need much if all you do is a simple select and an insert per interaction.
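Writing the back-of-envelope math from above out as a quick check (all inputs are the made-up numbers from the example, not measurements):

```python
online_users = 50_000
interactions_per_user_per_sec = 1 / 5   # one interaction every 5 seconds
db_time_per_interaction_sec = 0.010     # 10 ms of DB time each

# Queries arriving per second, times how long each occupies a connection:
concurrent_connections = (
    online_users * interactions_per_user_per_sec * db_time_per_interaction_sec
)
print(concurrent_connections)  # 100.0
```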
PS: This post gave some interesting numbers: with connection pooling, 5k connections supported 100k concurrent users. Of course the author didn't mention the load type, but you can see that the ratio of concurrent users to connection threads can reach the tens or hundreds when a connection pool is in place.
You can set MySQL's global max_connections as high as 100,000.
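A quick sketch of doing that at runtime (Python, mysql-connector-python; this needs a privileged account, and the value reverts on restart unless you also put it in my.cnf):

```python
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()
cur.execute("SET GLOBAL max_connections = 500")        # pick a value your RAM can back
cur.execute("SHOW VARIABLES LIKE 'max_connections'")
print(cur.fetchone())                                  # ('max_connections', '500')
conn.close()
```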
And for hardware requirements, the link below can help you:
https://www.percona.com/blog/2019/02/25/mysql-challenge-100k-connections/
Hope this answers your questions; if not, let me know.
I made a program that receives user input and stores it in a MySQL database. I want to deploy this program on several computers so users can upload information to the same database simultaneously. The database is very simple: it has just seven columns, and the user will only enter four of them.
There would be around two to three hundred computers uploading information (not always at the same time, but it can happen). How reliable is this? Is it even possible?
It's my first script ever, so I'd appreciate it if you could point me in the right direction. Thanks in advance.
Having simultaneous connections from the same script depends on how you're processing the requests. The typical choices are by forking a new Python process (usually handled by a webserver), or by handling all the requests with a single process.
If you're forking processes (new process each request):
A single MySQL connection should be perfectly fine (since the total number of active connections will be equal to the number of requests you're handling).
You typically shouldn't worry about multiple connections, since a single MySQL connection (and the server) can handle loads much higher than that (completely dependent upon the hardware, of course). In which case, as @GeorgeDaniel said, it's more important that you focus on controlling how many active processes you have and making sure they don't strain your computer.
If you're running a single process:
Yet again, a single MySQL connection should be fast enough for all of those requests. If you want, you can look into grouping the inserts together (see the sketch below), as well as using multiple connections.
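For instance, grouped inserts might look like this (a sketch with mysql-connector-python; the table and columns are invented to stand in for the question's four user-entered fields):

```python
import mysql.connector

conn = mysql.connector.connect(
    host="db-server", user="app", password="secret", database="forms"
)
cur = conn.cursor()

# Batch several user entries into one executemany() call: one round trip
# and one transaction instead of one per row.
rows = [
    ("alice", "2024-01-01", "note A", 3),
    ("bob",   "2024-01-02", "note B", 7),
]
cur.executemany(
    "INSERT INTO entries (user, entry_date, note, amount) VALUES (%s, %s, %s, %s)",
    rows,
)
conn.commit()
conn.close()
```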
MySQL is fast and should be able to easily handle 200+ simultaneous connections that are writing/reading, regardless of how many active connections you have open. And yet again, the performance you get from MySQL is completely dependent upon your hardware.
Yes, it is possible to have that many MySQL connections, but it depends on a few variables. The maximum number of connections MySQL can support depends on the quality of the thread library on a given platform, the amount of RAM available, how much RAM is used for each connection, the workload from each connection, and the desired response time.
The number of connections permitted is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server.
The important part is to handle connections properly and close them appropriately. You do not want redundant connections lingering, as they can cause slowdowns in the long run. Make sure your code closes connections when it is done with them.
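One way to make "always close the connection" automatic (a Python sketch; contextlib.closing guarantees close() runs even if the query raises):

```python
from contextlib import closing
import mysql.connector

with closing(mysql.connector.connect(
    host="localhost", user="app", password="secret", database="app_db"
)) as conn:
    with closing(conn.cursor()) as cur:
        cur.execute("SELECT COUNT(*) FROM entries")
        print(cur.fetchone())
# Both cursor and connection are closed here, success or failure.
```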
I would like to know how many database requests per page view (that is, every page a user browses triggers multiple requests to retrieve data from the database) I can make while keeping "optimum" performance on shared or dedicated hosting servers with the most commonly provided hardware (for example, what providers like HostMonster or Bluehost offer). For both cases, I would like to know what happens when:
I use MySQL or another database system
The database size is 1, 10, 100, or 1000 megabytes
I don't or do use cache optimization
Users browse 10, 100, 1000, or 10,000 pages per second
In a few words: under what conditions (considering the above cases) will the server begin to slow down and the user experience be negatively affected? I would appreciate some statistics...
P.S.: At this time I am using Ruby on Rails 3, so it is "easy" to increase requests!
I've had Facebook apps hosted on a shared host that did about a million pages per month without too many issues. I generally did 5-8 queries per page request. The number of queries isn't usually the issue; it's how long each query takes. You can have a small data set that is poorly indexed and you'll start having issues. The hosting provider usually kills your query after a certain length of time.
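If you suspect a poorly indexed query, EXPLAIN will tell you. A hedged sketch (table and column names here are hypothetical):

```python
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="fb_app"
)
cur = conn.cursor()

# EXPLAIN shows the chosen index in the 'key' column; NULL means a full scan.
cur.execute("EXPLAIN SELECT * FROM posts WHERE author_id = %s", (42,))
for row in cur.fetchall():
    print(row)

# If no index is being used, adding one usually fixes the slow query:
cur.execute("CREATE INDEX idx_posts_author ON posts (author_id)")
conn.close()
```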
If you are causing the CPU on the server to spike, for whatever reason, then they may start killing processes on you. That is usually the issue.
One database connection corresponds to one web request (assuming, of course, that your client reads the database on each request). By using a connection pool these connections are pre-created, but they are still used one per request.
Now to some numbers: if you google for "Tomcat concurrent connections" or "Apache concurrent connections", you'll see that they support 16,000-20,000 concurrent connections without any problem.
On the other hand, the MySQL administrator best practices say that the maximum number of concurrent database connections is 4096.
On a quick search, I could not find any information about PostgreSQL.
Q1: Is there a software limit to concurrent connections in PostgreSQL, and is MySQL's limit indeed 4096?
Q2: Am I missing something, or will MySQL (or any DB imposing a maximum concurrent connections limit) appear as a bottleneck, provided the hardware and the OS allow a large number of concurrent connections?
Update: Q3: How exactly does a higher connection count hurt performance?
Q2: You can have far more users on your web site than connections to your database because each user doesn't hold a connection open. Users only require a connection every so often and then only for a short time. Your web app connection pool will generally have far fewer than the 4096 limit.
Think of a restaurant analogy. A restaurant may have 100 customers (users) but only 5 waiters (connections). It works because customers only require a waiter for a short time every so often.
The time when it goes wrong is when all 100 customers put their hand up and say 'check please', or when all 16,000 users hit the 'submit order' button at the same time.
Q1: You set a configuration parameter called max_connections. It can be set well above 4096, but you are definitely advised to keep it much lower than that for performance reasons.
Q2: You usually don't need that many connections, and things will be much faster if you limit the number of concurrent queries on your database. You can use something like pgbouncer in transaction mode to interleave many transactions over fewer connections.
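A minimal pgbouncer.ini sketch of that setup (the values are illustrative, not a recommendation):

```ini
[databases]
; route clients asking for "mydb" to the real PostgreSQL server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
; transaction mode: a server connection is held only for the span of a transaction
pool_mode = transaction
; many client connections multiplexed over few real server connections
max_client_conn = 10000
default_pool_size = 50
```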
The Wikipedia Case Study
30 000 HTTP requests/s during peak-time
3 Gbit/s of data traffic
3 data centers: Tampa, Amsterdam, Seoul
350 servers, ranging from 1x P4 to 2x Xeon Quad-Core, 0.5-16 GB of memory
...managed by ~ 6 people
This is a little off-topic from your questions, but I think you could find it useful: you don't always have to hit the DB for each request. A correct caching strategy is almost always the best performance improvement you can apply to your web app. A lot of static content can remain in cache until it explicitly changes; this is how Wikipedia does it.
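As a toy illustration of the idea (real deployments use memcached or Redis; Wikipedia famously sits behind layers of caching), here is a Python sketch of serving repeated reads from memory with a TTL:

```python
import time

_cache = {}   # key -> (expires_at, value)
TTL = 60      # seconds before a cached result is considered stale

def cached_query(key, fetch_from_db):
    """Return a cached value if fresh; otherwise hit the DB once and cache it."""
    hit = _cache.get(key)
    if hit and hit[0] > time.time():
        return hit[1]                        # cache hit: no DB round trip
    value = fetch_from_db()                  # cache miss: one DB query
    _cache[key] = (time.time() + TTL, value)
    return value

# Usage (load_article_from_db is a hypothetical DB accessor):
# article = cached_query("article:42", lambda: load_article_from_db(42))
```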
From the link you provided to "MySQL administrator best practices"
"Note: connections take memory and your OS might not be able to handle a lot of connections. MySQL binaries for Linux/x86 allow you to have up to 4096 concurrent connections, but self compiled binaries often have less of a limit."
So 4096 seems like the current maximum. Bear in mind that the limit is per server and you can have multiple slave servers that can be used to serve queries.
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-scaleout.html
I have read every possible answer to this question and searched via Google in order to find the correct answer to the following question, but I am rather a novice and don't seem to get a clear understanding.
A lot of what I've read has to do with web servers, but I don't have a web server, just an intranet database.
I have a MySQL database on a Windows server at work.
I will have many users constantly accessing this database to perform simple queries and write new records back to it.
The read/write load will not be that heavy (chances are 50-100 users will do so at exactly the same time, even if thousands could be connected).
The GUI will be either via Excel forms and/or Access.
What I need to know is the maximum number of active connections I can have at any given time to the database.
I know I can change the number in MySQL Admin, but I really need to know what will actually work.
I don't want to allow 1000 users if the system can really only handle 100 correctly (beyond that, although connected, performance would become too slow, for example).
Any ideas or first-hand experience will be appreciated.
This depends mainly on your server hardware (RAM, CPU, networking) and on the load from other processes if the server is not dedicated to the database. I don't think you'll get an absolute answer; the best approach is testing.
I think something like 1000 should work fine, as long as you use a 64-bit MySQL server. With 32-bit, too many connections may create virtual memory pressure: each connection has its own thread, and every thread needs a stack, so the stack memory reduces the possible size of the buffer pool and other buffers.
MySQL generally does not slow down if you have many idle connections; however, special commands that enumerate every connection, e.g. SHOW PROCESSLIST or KILL, will be somewhat slower.
If an idle connection stays idle for too long (idle time exceeds the wait_timeout parameter), it is dropped by the server. If that could happen in your scenario, you might want to increase wait_timeout (its default value is 8 hours).
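Checking and raising it is straightforward; a sketch (Python, mysql-connector-python; SET GLOBAL needs a privileged account and affects connections opened afterwards):

```python
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()
cur.execute("SHOW VARIABLES LIKE 'wait_timeout'")
print(cur.fetchone())                            # default: ('wait_timeout', '28800'), i.e. 8 hours
cur.execute("SET GLOBAL wait_timeout = 86400")   # bump to 24 hours
conn.close()
```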