Are sysbench threads similar to thread_pool_size? - mysql

I tested a MySQL cluster using sysbench to figure out a sweet spot to set the maximum threads to. In my endeavours I came across the threads option in sysbench.
--threads=N
I also came across thread_pool_size in the MySQL thread pool documentation.
thread_pool_size: The number of thread groups in the thread pool. This is the most important parameter controlling thread pool performance.
So the question that plagues me is: are sysbench's threads similar to MySQL's thread_pool_size?
Here is an example of a command that I used.
sysbench oltp_read_write.lua --threads=26 --time=30 --mysql-user='root' --mysql-password='password' --table-size=10000 --mysql-host=10.100.100.64 --mysql-port=6033 run
My current configuration was shown in an attached image of the .cnf files (not reproduced here).

OUCH!
thread_cache_size is the number of "threads" to hang onto; it is a simple-minded form of pooling. It is a number, not bytes! 10 is a reasonable value; anything more than max_connections is unnecessary.
max_connections refers to "concurrent" connections, not total over time. The default of 151 is fine for most systems. 1000 is "high" but warranted for some systems; 10,000 is too high.
Check these:
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_running';
The former is a high-water mark (since startup). If it is close to max_connections, then maybe max_connections should be increased.
The latter says how many of the current connections are actually doing anything. If it is over 100, the connections are stumbling over each other. We will need more details to discuss what to do next. (1 is common; a 'busy' system might say no more than 10, and change rapidly.)
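If you want to script that comparison, here is a minimal sketch, assuming the mysql-connector-python driver and the placeholder host/credentials from the question:
# Minimal sketch: compare the connection high-water mark to the limit.
# Assumes the mysql-connector-python driver; host/credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="10.100.100.64", user="root", password="password")
cur = conn.cursor()
cur.execute("SHOW GLOBAL STATUS LIKE 'Max_used_connections'")
max_used = int(cur.fetchone()[1])
cur.execute("SHOW GLOBAL VARIABLES LIKE 'max_connections'")
limit = int(cur.fetchone()[1])
print(f"Max_used_connections={max_used}, max_connections={limit}")
if max_used >= 0.9 * limit:  # rough rule of thumb, not a hard threshold
    print("High-water mark is close to the limit; consider raising max_connections.")
conn.close()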

Sysbench is a client of MySQL. It can start a number of threads, one per connection.
When not using a thread pool in MySQL Server, every client connection starts its own thread. So there's a one-to-one correspondence between sysbench threads and MySQL Server threads.
It's typical that a client connection is not running a query every second. Normally a client application runs other code in between waiting for queries. So on the MySQL Server side, some threads exist, but they aren't doing anything. This appears as "Sleep" in the processlist.
It's pretty common to have hundreds of client connections open, but only one or two dozen of these connections doing any query at any given moment. The others are all sleeping.
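You can watch this one-to-one correspondence from a second session while the benchmark runs. A minimal sketch, again assuming the mysql-connector-python driver and the placeholders from the question:
import time
import mysql.connector

# Sample the thread counters once per second while sysbench is running.
conn = mysql.connector.connect(host="10.100.100.64", user="root", password="password")
cur = conn.cursor()
for _ in range(10):
    cur.execute("SHOW GLOBAL STATUS LIKE 'Threads_connected'")
    connected = cur.fetchone()[1]
    cur.execute("SHOW GLOBAL STATUS LIKE 'Threads_running'")
    running = cur.fetchone()[1]
    # With --threads=26, expect roughly 27 connected (26 plus this session);
    # Threads_running shows how many are doing work at that instant.
    print(f"connected={connected} running={running}")
    time.sleep(1)
conn.close()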
As a metaphor, I would compare this to customers in a bank, where they approach a teller's window and do transactions. Each customer blocks others from using the same teller, even while that customer is signing a form or doing something else that does not involve talking directly to the teller.
When using a thread pool, threads are handled differently in MySQL Server. The thread pool feature exists so that a smaller number of threads in the server can be shared by a greater number of client connections. The server's threads no longer correspond one-to-one with client connections; instead, a thread is assigned to a connection when that connection needs to execute an SQL query. This is done to reduce resource usage when your clients open a large number of connections.
A metaphor for this is a restaurant where a single server can handle a whole section with customers. The customers only need attention from time to time, and the server can therefore keep track of multiple tables of customers.
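If a thread pool plugin is active (in MySQL it ships with Enterprise Edition; Percona Server and MariaDB bundle their own implementations), you can inspect how it is sized. A minimal check, with the same driver and placeholder credentials as above:
import mysql.connector

conn = mysql.connector.connect(host="10.100.100.64", user="root", password="password")
cur = conn.cursor()
# These variables only exist when a thread pool plugin is loaded;
# thread_pool_size is a count of thread groups, not a byte value.
cur.execute("SHOW VARIABLES LIKE 'thread_pool%'")
for name, value in cur.fetchall():
    print(name, "=", value)
conn.close()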
In the case of sysbench, this is probably not a typical workload. The client threads are running SQL queries more rapidly than a typical application. If you try to use a thread pool in this case, you might have more client requests than the number of threads in the thread pool, and in this case the client requests might queue up.
In the restaurant metaphor, this would be the infrequent times when more than one table wants something at the same time. Then all but one of the tables must wait, but hopefully not for long since most customer requests are brief.
Using the thread pool in MySQL Server while testing with sysbench might not be the best way to measure the maximum throughput of queries.

Related

Difference between connections in use and threads connected in mysql?

Can anyone please tell me what the difference is between the number of connections in use and threads connected to a cluster in MySQL?
I am currently using a single cluster and a pooling library. Intuitively I feel the number of connections and the number of threads should be the same, since from what I understand one thread has to be spawned for every connection.
However, in the metrics I am observing for my application, the number of threads connected is at times twice the number of connections in use. How is this possible?

simultaneous connections to a mysql database

I made a program that receives user input and stores it in a MySQL database. I want to deploy this program on several computers so users can upload information to the same database simultaneously. The database is very simple: it has just seven columns, and the user will only enter four of them.
There would be around two to three hundred computers uploading information (not always at the same time, but it can happen). How reliable is this? Is it even possible?
It's my first script ever so I appreciate if you could point me in the right direction. Thanks in advance.
Having simultaneous connections from the same script depends on how you're processing the requests. The typical choices are forking a new Python process per request (usually handled by a web server) or handling all the requests with a single process.
If you're forking processes (a new process for each request):
A single MySQL connection per process should be perfectly fine (since the total number of active connections will equal the number of requests you're handling).
You typically shouldn't worry about multiple connections, since a single MySQL connection (and the server) can handle loads much higher than that (completely dependent on the hardware, of course). In that case, as @GeorgeDaniel said, it's more important that you focus on controlling how many active processes you have and making sure they don't strain your computer.
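As a sketch of the forking case (host, credentials, and the entries table are hypothetical), each worker opens its own short-lived connection, since MySQL connections must never be shared across processes:
import multiprocessing
import mysql.connector

def handle_request(fields):
    # One connection per forked worker; never share a connection across processes.
    conn = mysql.connector.connect(host="db.example.com", user="app",
                                   password="secret", database="mydb")
    cur = conn.cursor()
    cur.execute("INSERT INTO entries (c1, c2, c3, c4) VALUES (%s, %s, %s, %s)", fields)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    requests = [("a", "b", "c", "d")] * 8  # stand-in for real user input
    with multiprocessing.Pool(processes=4) as pool:
        pool.map(handle_request, requests)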
If you're running a single process:
Yet again, a single MySQL connection should be fast enough for all of those requests. If you want, you can look into grouping the inserts together, as well as using multiple connections.
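Grouping the inserts might look like this (names are hypothetical, assuming the mysql-connector-python driver); executemany sends the queued rows as one batch rather than one round trip per row:
import mysql.connector

conn = mysql.connector.connect(host="db.example.com", user="app",
                               password="secret", database="mydb")
cur = conn.cursor()
rows = [("a", "b", "c", "d"), ("e", "f", "g", "h")]  # queued user input
# This driver rewrites executemany into a single multi-row INSERT statement.
cur.executemany("INSERT INTO entries (c1, c2, c3, c4) VALUES (%s, %s, %s, %s)", rows)
conn.commit()
conn.close()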
MySQL is fast and should be able to easily handle 200+ simultaneous connections that are writing/reading, regardless of how many active connections you have open. And yet again, the performance you get from MySQL is completely dependent upon your hardware.
Yes, it is possible to have that many MySQL connections. It depends on a few variables. The maximum number of connections MySQL can support depends on the quality of the thread library on a given platform, the amount of RAM available, how much RAM is used for each connection, the workload from each connection, and the desired response time.
The number of connections permitted is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server.
The important part is to handle the connections properly and close them appropriately. You do not want redundant connections hanging around, as they can cause slowdowns in the long run. Make sure your code closes connections properly.
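One way to guarantee that, sketched with hypothetical names and the mysql-connector-python driver, is to wrap the connection and cursor in contextlib.closing so they are released even when a statement raises:
from contextlib import closing
import mysql.connector

with closing(mysql.connector.connect(host="db.example.com", user="app",
                                     password="secret", database="mydb")) as conn:
    with closing(conn.cursor()) as cur:
        cur.execute("INSERT INTO entries (c1, c2, c3, c4) VALUES (%s, %s, %s, %s)",
                    ("a", "b", "c", "d"))
        conn.commit()
# Both cursor and connection are closed here, even if the INSERT raised.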

MySQL default max_connections

MySQL default max_connections = 151. How many connections should I use for 1000 users per application?
I would think that, if anything, you should probably decrease this number. Do you think that 15% of your users are logged in at the same time and all using the database? That seems like a very high percentage to assume.
If your application does not hold on to database connections for longer than needed, then you likely need far fewer than 150 connections. As soon as the database communication is done, your app should release the connection. If you are using a connection pool, then opening and closing a connection is very fast. Using this approach, you can have two users logged in at the same time, and it is likely that they will not need more than one connection between them, because it is rare that both are simultaneously executing some DB operation.
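For illustration, here is a minimal pooling sketch, assuming the mysql-connector-python driver (host, credentials, and pool size are placeholders); borrowing from the pool is cheap compared with a fresh connection handshake:
from mysql.connector import pooling

pool = pooling.MySQLConnectionPool(pool_name="app_pool", pool_size=10,
                                   host="db.example.com", user="app",
                                   password="secret", database="mydb")

conn = pool.get_connection()   # borrows an already-open connection
cur = conn.cursor()
cur.execute("SELECT 1")
cur.fetchall()
cur.close()
conn.close()                   # returns the connection to the pool, not a real close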

Database concurrent connections in regard to web (http) requests and scalability

One database connection is used per web request (assuming, of course, that your client reads the database on each request). With a connection pool these connections are pre-created, but they are still used one per request.
Now to some numbers: if you google for "Tomcat concurrent connections" or "Apache concurrent connections", you'll see that they support 16,000-20,000 concurrent connections without any problem.
On the other hand, the MySQL administrator best practices say that the maximum number of concurrent database connections is 4096.
On a quick search, I could not find any information about PostgreSQL.
Q1: Is there a software limit on concurrent connections in PostgreSQL, and is the MySQL limit indeed 4096?
Q2: Am I missing something, or will MySQL (or any DB imposing a maximum on concurrent connections) appear as a bottleneck, provided the hardware and the OS allow a large number of concurrent connections?
Update: Q3: How exactly does a higher connection count hurt performance?
Q2: You can have far more users on your web site than connections to your database because each user doesn't hold a connection open. Users only require a connection every so often and then only for a short time. Your web app connection pool will generally have far fewer than the 4096 limit.
Think of a restaurant analogy. A restaurant may have 100 customers (users) but only 5 waiters (connections). It works because customers only require a waiter for a short time every so often.
The time when it goes wrong is when all 100 customers put their hand up and say 'check please', or when all 16,000 users hit the 'submit order' button at the same time.
Q1: You set a configuration parameter called max_connections. It can be set well above 4096, but you are definitely advised to keep it much lower than that for performance reasons.
Q2: You usually don't need that many connections, and things will be much faster if you limit the number of concurrent queries on your database. You can use something like pgbouncer in transaction mode to interleave many transactions over fewer connections.
The Wikipedia case study:
30,000 HTTP requests/s during peak time
3 Gbit/s of data traffic
3 data centers: Tampa, Amsterdam, Seoul
350 servers, ranging between 1x P4 and 2x Xeon Quad-Core, 0.5-16 GB of memory
...managed by ~6 people
This is a little off-topic from your questions, but you might find it useful: you don't always have to hit the DB for each request. A correct caching strategy is almost always the best performance improvement you can apply to a web app. A lot of static content can remain in the cache until it explicitly changes. This is how Wikipedia does it.
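As a toy sketch of that cache-aside idea (the pages table and the in-process dict are hypothetical; real sites use memcached or similar), the database is only hit on a miss, and an entry lives until it is explicitly invalidated:
cache = {}

def get_page(conn, page_id):
    # Serve from memory when possible; fall back to the database on a miss.
    # conn is any open DB-API connection.
    if page_id not in cache:
        cur = conn.cursor()
        cur.execute("SELECT content FROM pages WHERE id = %s", (page_id,))
        row = cur.fetchone()
        cache[page_id] = row[0] if row else None
    return cache[page_id]

def invalidate(page_id):
    # Called when the page changes, so the next read goes back to the database.
    cache.pop(page_id, None)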
From the link you provided to "MySQL administrator best practices"
"Note: connections take memory and your OS might not be able to handle a lot of connections. MySQL binaries for Linux/x86 allow you to have up to 4096 concurrent connections, but self compiled binaries often have less of a limit."
So 4096 seems like the current maximum. Bear in mind that the limit is per server and you can have multiple slave servers that can be used to serve queries.
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-scaleout.html

Does MySQL packet size cause slowdown?

I have written a program which uses a MySQL database, and communication between the database server (a very powerful machine) and the clients happens over an ADSL connection (1 Mbit/s).
But the connection between each client and the server is very, very slow: only approximately 3-4 KB/s of data is sent to the server. Neither the server nor the clients use the Internet for other purposes; only my program uses it.
I can't figure out why. Could the MySQL server packet size be the reason?
Any suggestions?
Try using mytop to identify the cause of the server's low performance.
Another possibility: you may be using SELECT COUNT(*) FROM ... on large InnoDB tables, which causes a full table scan.
And can you test, with some other service, whether the data transfer rate between the machines is OK? Even if the upstream bandwidth is lower for ADSL users, 3-4 KB/s might not be the reason for the low performance.
The effective transfer rate is often heavily limited by the number of roundtrips between client and server. Without seeing your code it is sort of difficult to tell, but you should check the number of requests happening.
If you have a single request that results in many records being returned, you should see a better usage of bandwidth than with a higher number of requests which only deliver a few rows each.
In the latter case the actual result transfer is probably quite fast, but the latencies involved in the "control communications" (i. e. the statements themselves, login requests etc.) will add up, effectively lowering overall throughput.
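To make the round-trip effect concrete, here is a sketch assuming an open DB-API cursor cur and a hypothetical users table; over a high-latency link the first variant pays the latency once per row, the second only once:
# N round trips: latency is paid once per id.
ids = [1, 2, 3, 4, 5]
for i in ids:
    cur.execute("SELECT name FROM users WHERE id = %s", (i,))
    cur.fetchone()

# One round trip: latency is paid once for the whole set.
placeholders = ", ".join(["%s"] * len(ids))
cur.execute("SELECT id, name FROM users WHERE id IN (" + placeholders + ")", ids)
rows = cur.fetchall()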
As for the packet size: when it is very small, there is more overhead in the communications, increasing the aforementioned effect. The server's default max_allowed_packet size is 1 MB, if memory serves, but that should be fine with your connection.
You first have to debug both connections.
What is your upload speed if you upload a file with WinSCP or an equivalent tool to the MySQL server? It should be near 90 KB/s with 1 Mbit/s ADSL.