MySQL server database limit [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Is it possible to have 20,000 databases on a MySQL server? They will not all be accessed at the same time, and their sizes should not be larger than 10 MB each. Let's say 5,000 of them will be open at one time, serving various different sites. Could the server process that many queries across that many databases?

Manual:
MySQL has no limit on the number of databases. The underlying file system may have a limit on the number of tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
However, there are other limitations that may affect your setup:
Memory size, as MySQL will hold some information about each database in RAM
Disk space for transaction logs and caches
The number of simultaneous connections the OS can handle: each connection consumes CPU, RAM, and disk I/O
Check out E.7.6, Windows Platform Limitations; there is quite a long list of such constraints there.
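To make the memory point concrete, here is a back-of-the-envelope sketch of what 5,000 simultaneously open connections might cost in RAM. The buffer sizes below are illustrative defaults, not values read from any particular server; check `SHOW VARIABLES` on your own instance for the real numbers.

```python
# Back-of-the-envelope estimate of per-connection memory in MySQL.
# Buffer sizes are illustrative assumptions, not authoritative defaults.

def per_connection_bytes(sort_buffer=256 * 1024,
                         join_buffer=256 * 1024,
                         read_buffer=128 * 1024,
                         thread_stack=256 * 1024):
    """Rough upper bound on RAM one connection may allocate."""
    return sort_buffer + join_buffer + read_buffer + thread_stack

def total_connection_ram_mb(connections):
    return connections * per_connection_bytes() / (1024 * 1024)

# 5,000 simultaneously open connections, as in the question:
print(f"{total_connection_ram_mb(5000):.0f} MB")  # roughly 4375 MB
```

Even with modest per-connection buffers, thousands of open connections add up to gigabytes before the InnoDB buffer pool is counted at all, which is why connection pooling matters at this scale.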

Related

The performance test of MySQL and TiDB by our DBA shows that a standalone TiDB is not as good in performance as MySQL [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Our DBA deployed a standalone TiDB and a standalone MySQL, each handling about one million tables, but it seemed that TiDB could not perform as well as MySQL. Why? If it's because the data size is too small, how much data should I put in the database to make TiDB perform better than MySQL?
TiDB is designed for scenarios where sharding is used because the capacity of a MySQL standalone is limited, and where strong consistency and complete distributed transactions are required. One of the advantages of TiDB is pushing down computing to the storage nodes to execute concurrent computing.
TiDB is not suitable for tables of small size (such as below the ten-million-row level), because its strength in concurrency cannot show with small data and a limited number of Regions. A typical example is a counter table, in which a few rows are updated very frequently. In TiDB, these rows become several key-value pairs in the storage engine, which settle into a single Region located on a single node. The overhead of background replication to guarantee strong consistency, plus the round trips from TiDB to TiKV, leads to poorer performance than a standalone MySQL.
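The hotspot behavior described above can be illustrated with a toy model (this is not TiDB code; the boundaries and key names are entirely hypothetical). Keys are split into contiguous ranges ("Regions"), so a small counter table's adjacent keys all land in one Region, and every update hits the same storage node no matter how large the cluster is.

```python
# Toy illustration of range-based Region placement (not real TiDB code).

def region_for(key, region_boundaries):
    """Return the index of the Region whose key range contains `key`."""
    for i, upper in enumerate(region_boundaries):
        if key < upper:
            return i
    return len(region_boundaries)

# Hypothetical boundaries splitting the key space into 4 Regions:
boundaries = ["g", "n", "t"]

# A counter table with a handful of adjacent keys:
counter_keys = ["counter:1", "counter:2", "counter:3"]
regions_hit = {region_for(k, boundaries) for k in counter_keys}
print(regions_hit)  # {0} -- every update lands on one Region / one node
```

With a large table, keys spread across many Regions and many nodes, which is where TiDB's concurrent push-down computing pays off; with a tiny hot table, the model above shows why it cannot.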

Is reading/writing to mysql database periodically CPU intensive? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I will be writing a program in Delphi that will be reading from and writing to MySQL database tables on a regular basis, like every 5 seconds. Is this going to be CPU intensive, or get to a point where the computer freezes completely? I know that reading and writing to a hard drive nonstop can freeze everything on your computer; I am not really sure about a MySQL database.
Databases are designed to handle many transactions frequently, but it really depends on the queries you are using. A simple SELECT on a couple of rows is unlikely to cause an issue, but large-scale updates targeting many tables, or queries with multiple joins, can slow performance. It all depends on what your queries are.
This all depends on the computer and the complexity of the query.
As David has said, it really does depend on the hardware and queries you are processing.
I would suggest measuring the processing time of each query to determine whether the write operations will pile up into the next 5-second interval.
You can find information on how to measure your MySQL processes here.
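A minimal sketch of that client-side measurement, assuming a `run_query()` helper that executes one statement (the helper is hypothetical; substitute your MySQL driver's execute call):

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

def run_query(sql):
    # Stand-in for a real driver call, e.g. cursor.execute(sql).
    return f"executed: {sql}"

result, seconds = timed(run_query, "SELECT 1")
# If `seconds` regularly approaches the 5-second polling interval,
# each cycle's writes will start stacking up behind the previous one.
print(f"{seconds:.6f}s")
```

The same idea applies server-side via the slow query log or `SHOW PROFILE`, but a client-side timer is the quickest way to see whether the 5-second budget is being exceeded.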

MySQL high CPU usage with MyISAM [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
MySQL CPU usage is high (80%-150%). This is a shop (10,000 products) with at most 15 users online.
my.cnf
http://pastebin.com/xJNuWTWT
log mysqltunner:
http://pastebin.com/HxWwucE2
console:
First, one thing to check would be to run the following query while the CPU load is high:
SHOW PROCESSLIST;
I suggest turning off persistent connections.
Then check the MySQL users, just to make sure it's not possible for anyone to connect from a remote server.
At the same time, turn on the MySQL slow query log to keep an eye on any queries that are taking a long time, and use it to make sure you don't have any queries locking up key tables for too long.
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
Also have a look at this:
http://dev.mysql.com/doc/refman/5.0/en/memory-use.html
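On modern MySQL versions, enabling the slow query log is a my.cnf change plus a restart (or the equivalent `SET GLOBAL` statements at runtime). The snippet below is a sketch with illustrative thresholds; tune `long_query_time` to your workload:

```ini
# my.cnf -- illustrative values, not a recommended configuration
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2   # seconds; queries slower than this get logged
log_queries_not_using_indexes = 1
```

Note that very old 5.0-era servers (as in the linked manual page) used the older `log-slow-queries` option instead.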
MySQL is only one part of the performance problem. MySQL alone is not suited to a high-traffic or data-heavy website; you should find a caching solution to deal with your problem. 10,000 products is large enough to slow down your website, especially if there is no cache, or if the server is not dedicated but standard shared hosting, etc.
To summarize, you should rethink the architecture with the large database, user performance, and dynamic pages in mind.
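The kind of caching layer suggested above can be sketched as a TTL cache in front of the database: repeated hits for the same product page skip MySQL entirely. All names here are hypothetical; in production you would typically use memcached or Redis rather than an in-process dict.

```python
import time

class TTLCache:
    """Tiny in-process cache with per-entry expiry."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]
        return None  # missing or expired

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

cache = TTLCache(ttl_seconds=60)

def product_page(product_id, render_from_db):
    """Serve from cache; hit the database only on a miss."""
    page = cache.get(product_id)
    if page is None:
        page = render_from_db(product_id)  # the expensive MySQL path
        cache.set(product_id, page)
    return page
```

With 10,000 products and mostly-read traffic, even a 60-second TTL collapses most of the query load to one render per product per minute.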

Joomla 300 concurrent users [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have an issue on a website built with Joomla 2.5 and the JA Teline IV template that gets 300 concurrent users. It is a soccer magazine, so articles are updated often, even minute by minute during a match.
The server has 16 GB of RAM and a quad-core processor, but the website freezes when 300 users access it.
I've done all the front-end optimization; the last remaining optimization would be enabling caching.
My questions are:
- Should caching be enabled for logged-in users as well?
- Regarding cache timing: for this type of article, can I set the cache to expire after 1 minute? Is that a good option? Could it improve performance?
Can you suggest what to do? Any other possible optimizations?
16 GB should be enough to handle 300 concurrent users... this smells like a MySQL server that is not fine-tuned.
Run this script against your MySQL server:
https://github.com/rackerhacker/MySQLTuner-perl
You have to find the bottleneck: processor? memory? disk? database? network?
After you find the issue, choose the right solution: a faster processor, more memory, faster disks, database indexes, memory caching, network caching, etc.

How many MySQL database can be created in a single domain in Plesk control panel? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Do you have any experience with this question? I currently have 1,900 MySQL databases under a single domain in my Plesk control panel, and I wonder whether my MySQL server will become overloaded or go out of service due to such a high number of databases.
Do you have any suggestions? Each database belongs to a user of my service, by the way.
MySQL itself doesn't place any restrictions on the number of databases you can have, and I doubt Plesk does either; I'm sure it just displays all the databases present on the MySQL server.
However, your host may have a limit (which you'd have to ask them about), or if you start getting a huge number of databases, you may actually run into a filesystem limit. As the MySQL documentation says, each database is stored as a directory, so you could hypothetically hit the filesystem's upper limit for how many subdirectories are allowed.
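To see how quickly the file count grows, here is a rough estimate assuming MyISAM's three files per table (.frm, .MYD, .MYI); InnoDB layouts differ, and the 50-tables-per-database figure is an assumption for illustration. Older filesystems such as ext3 also cap subdirectories per directory at around 32,000, which 1,900 databases is still comfortably under.

```python
# Rough file-count estimate for a server with many MyISAM databases.
def myisam_file_count(databases, tables_per_db):
    files_per_table = 3  # .frm + .MYD + .MYI per MyISAM table
    return databases * tables_per_db * files_per_table

# 1,900 databases with, say, 50 tables each:
print(myisam_file_count(1900, 50))  # 285000 files under the data dir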
ive got well over 5000 databases running on alinux based plesk cluster (one db one web server) and its running fine, though i have had to increase open files limits due to the huge amounts of files. i cant run the mysql tuning primer any more though, well i can but it takes about 4 hours