Max tables in a MySQL database

Is it bad to have too many tables in a database? I have about 160 tables in one database. Is it better to split them into several databases rather than using a single database? A single database is more convenient for me.

There are no server limits on the number of tables in a MySQL database. You will definitely have no problems with 160 tables, and you don't need to split them into multiple databases.
You will not gain performance by splitting your tables into multiple databases. If performance remains an issue, you could consider using per-table tablespaces in order to place some sets of tables on different physical disks.
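For illustration, here is a minimal sketch of that approach, assuming MySQL 5.6+ with InnoDB (the table name and path are hypothetical):

```sql
-- Store each InnoDB table in its own .ibd file (the default in 5.6.6+):
SET GLOBAL innodb_file_per_table = ON;

-- Place this table's tablespace on a different physical disk:
CREATE TABLE archive_events (
  id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB
  DATA DIRECTORY = '/mnt/disk2/mysql-data';
```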

According to the MySQL reference manual:
MySQL has no limit on the number of tables. The underlying file system may have a limit on the number of files that represent tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.

160 tables isn't radically huge.
16,000 might be, and probably would be, unreasonable; such databases do exist in ERP or CRM systems (even into the 40-50K table range), but many of those tables are not actually used, or are only barely used.
Even so, the typical DBMS will 'handle' such large databases, but there is more strain on the system catalog than usual in such systems.

160 is still OK. It makes SQL commands faster than cramming too much content into a single table. In my case I have 8,545,214 tables in a single MySQL database. I don't want to store millions of users' posts in a single table, so I use multiple tables, one per user's posts. It makes MySQL faster than searching a single table with millions of rows.

WordPress Multisite creates dozens of tables for every new subsite in the same database.
So you will be fine with only 160 tables.
It might be an issue to manage them in phpMyAdmin or other software when browsing and scrolling through the table list, but if you work with the code it should not be a problem.

Related

What data quantity is considered too big for MySQL?

I am looking for a free SQL database able to handle my data model. The project is a production database working on a local network not connected to the internet, without any replication. The number of applications connected at the same time would be fewer than 10.
The data volume forecast for the next 5 years is:
3 tables of 100 million rows
2 tables of 500 million rows
20 tables with fewer than 10k rows
My first idea was to use MySQL, but I have found several articles around the web saying that MySQL is not designed for big databases. But what does big mean in this case?
Can someone tell me whether MySQL is able to handle my data model?
I read that Postgres would be a good alternative, but that it requires many hours of tuning to be efficient with big tables.
I don't think my project would use a NoSQL database.
I would like to know if someone has experience with MySQL in this regard to share.
UPDATE
The database will be accessed by C# software (max 10 at the same time) and a web application (2-3 at the same time).
It is important to mention that only a few updates will be done on the big tables, mostly insert queries; delete statements will be run only a few times, on the 20 small tables.
The big tables are very often used for select statements, but most often just to check whether an entry exists, not to return grouped and ordered batches of data.
I work for Percona, a company that provides consulting and other services for MySQL solutions.
For what it's worth, we have worked with many customers who are successful using MySQL with very large databases: terabytes of data, tens of thousands of tables, tables with billions of rows, transaction loads of tens of thousands of requests per second. You may get some more insight by reading some of our customer case studies.
You describe the number of tables and the number of rows, but nothing about how you will query these tables. Certainly one could query a table of only a few hundred rows in a way that would not scale well. But this can be said of any database, not just MySQL.
Likewise, one could query a table that is terabytes in size in an efficient way. It all depends on how you need to query it.
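As a hedged illustration of that point (the table, column, and index names here are hypothetical): the existence check described in the question stays cheap even on a 500-million-row table, as long as it hits an index.

```sql
-- A composite index covering the lookup columns:
CREATE INDEX idx_readings_device_time ON readings (device_id, read_at);

-- With the index, this is a handful of B-tree page reads;
-- without it, it is a full table scan.
SELECT EXISTS(
  SELECT 1 FROM readings
  WHERE device_id = 42 AND read_at = '2014-01-15 10:00:00'
) AS already_present;
```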
You also have to set specific goals for performance. If you want queries to run in milliseconds, that's challenging but doable with high-end hardware. If it's adequate for your queries to run in a couple of seconds, you can be a lot more relaxed about the scalability.
The point is that MySQL is not a constraining factor in these cases, any more than any other choice of database is a constraining factor.
Re your comments.
MySQL has referential integrity checks in its default storage engine, InnoDB. The claim that "MySQL has no integrity checks" is a myth often repeated over the years.
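A minimal illustration of that (the tables are hypothetical):

```sql
CREATE TABLE parents (id INT PRIMARY KEY) ENGINE=InnoDB;

CREATE TABLE children (
  id        INT PRIMARY KEY,
  parent_id INT NOT NULL,
  FOREIGN KEY (parent_id) REFERENCES parents (id)
) ENGINE=InnoDB;

-- Rejected with a foreign key constraint error: no parent row 999 exists.
INSERT INTO children VALUES (1, 999);
```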
I think you need to stop reading superficial or outdated articles about MySQL, and read some more complete and current documentation.
MySQLPerformanceBlog.com
High Performance MySQL, 3rd edition
MySQL 5.6 manual
MySQL has two important (and significantly different) storage engines: MyISAM and InnoDB. The limits depend on usage. MyISAM is non-transactional; imports are relatively fast, but it is quite simple (it has no memory cache of its own), and JOINs on tables larger than 100MB can be slow (due to the rather simple MySQL planner; hash joins are supported from 5.6). InnoDB is transactional and very fast for operations based on the primary key, but imports are slower.
Current versions of MySQL do not have as good a planner as Postgres does (there is progress), so complex queries usually run much better on PostgreSQL, while really simple queries run better on MySQL.
The complexity of PostgreSQL configuration is a myth. It is much simpler than MySQL's InnoDB configuration; you only have to set five parameters: max_connections, shared_buffers, work_mem, maintenance_work_mem, and effective_cache_size. Almost all of them relate to the memory available to Postgres on the server. It is usually five minutes of work. In my experience, databases up to 100GB usually run without any problems on Postgres (and probably on MySQL too). There are two important factors: what speed you expect, and how much memory and how fast an IO subsystem you have.
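As a sketch only (the values depend on the server's RAM, and ALTER SYSTEM requires PostgreSQL 9.4+; on older versions you edit postgresql.conf instead):

```sql
ALTER SYSTEM SET max_connections = 20;
ALTER SYSTEM SET shared_buffers = '8GB';           -- roughly 25% of RAM
ALTER SYSTEM SET work_mem = '64MB';                -- per sort/hash operation
ALTER SYSTEM SET maintenance_work_mem = '1GB';     -- VACUUM, CREATE INDEX
ALTER SYSTEM SET effective_cache_size = '24GB';    -- roughly 75% of RAM
-- max_connections and shared_buffers take effect only after a restart.
```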
With large databases you need experience and knowledge of whichever database technology you use. Everything is fast while you fit in memory; the higher the database-size-to-memory ratio, the more work you have to do to get good results.
First of all, MySQL's table size is only limited by the file size limit of your OS, which is in the terabytes on any modern OS, so that poses no problem. Most important are questions like these:
What kind of queries will you run?
Are the large table records updated frequently or basically archives for history data?
What is your hardware budget?
What is the kind of query speed you need?
Are you familiar with table partitioning, archive tables, config tuning?
How fast do you need to write (expected inserts per second)?
What language will you use to connect to the DB (Java, .NET, Ruby, etc.)?
What platform are you most familiar with?
Will you run queries that can cause table scans, such as LIKE '%something%', which have to go through every single row and take forever? (See the sketch after this list.)
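A quick sketch of that last point, using a hypothetical messages table:

```sql
-- Leading wildcard: no index can be used, every row must be read.
EXPLAIN SELECT * FROM messages WHERE subject LIKE '%something%';

-- Anchored prefix: an index on subject can narrow the search.
EXPLAIN SELECT * FROM messages WHERE subject LIKE 'something%';
```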
MySQL is used by Facebook, Google, Twitter, and others with large tables, and 100,000,000 rows is not much in the age of social media. MySQL has very few drawbacks (even though I prefer PostgreSQL in most cases), one being that altering large tables, for example to add a new index, might send your company on a couple of days of forced vacation if you don't have a replica in the meantime. Is there a reason why NoSQL is not an option? Sometimes hybrid approaches are a good choice, like keeping your relational business logic in MySQL and huge statistical tables in a NoSQL database like MongoDB, which can scale by adding new servers in minutes (MySQL can too, but it's more complicated). MongoDB can also have an indexed column that can be searched at blistering speed.
The bottom line: you need to answer the questions above first to make a well-informed decision. If you have huge tables and only search on indexed keys, almost any database will do; if you expect many changes to the structure down the road, you will want a different approach.
Edit:
Based on the update you just posted, I doubt you will run into problems.

Side effect of large number of MySQL tables in a database

Is it OK to keep 10000+ tables in a MySQL database?
I'm making a messaging/chat script, so I'm thinking about partitioning the data over several tables, as it will be a huge amount of data after some days.
Is that OK, or does it have some side effect?
Well, since a table can hold millions of rows, I was thinking maybe a database can hold a large number of tables too.
Or the question could be: how does Facebook store its huge amount of daily chat messages?
I'm a newbie with MySQL, please help.
MySQL has no limit on the number of tables. The underlying file system may have a limit on the number of files that represent tables. Individual storage engines may impose engine-specific constraints. InnoDB permits up to 4 billion tables.
Even so, the typical DBMS will 'handle' such large databases, but there is more strain on the system catalog than usual in such systems.
I have a huge number of tables in one database with no ill effects, other than the table list in phpMyAdmin taking a while to display.
It's possible, but I would avoid it unless you have a really good use case for it. It raises all kinds of scalability and maintainability issues. Your table size is mainly limited by available disk space.
If you really need to do it...
You'll need to increase the maximum number of file descriptors your OS allows a process to have open, since MyISAM tables use two file descriptors per table. (If you're using Linux, read the section about ulimit in the bash man page for how to do this.)
Also, there's a MySQL config variable called table_cache (renamed table_open_cache in MySQL 5.1.3) that limits the number of tables MySQL keeps open at once. You'll need to make sure it's large enough for the number of tables you need.
You won't want to use the standard "flush tables" anymore (unless you're the kind of person that likes to watch paint dry) so you'll need to flush each table individually (e.g. before shutdown).
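A sketch of the MySQL side of this, assuming MySQL 5.1.3+ where the variable is named table_open_cache (the table name is hypothetical):

```sql
-- Raise the open-table cache to cover the tables you actually touch:
SET GLOBAL table_open_cache = 16384;

-- Watch Opened_tables: if it keeps climbing, the cache is too small.
SHOW GLOBAL STATUS LIKE 'Open%tables';

-- Flush a single table instead of issuing a global FLUSH TABLES:
FLUSH TABLES messages_0001;
```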
Again, I would avoid using so many tables. You're probably better off making your schema support what you need in a handful of tables, and consider archiving, warehousing (or deleting!) old data if you're concerned about storing too much data.

Performance effect of joining tables from different databases

I have a web site using a database named lets say "site1". I am planning to put another site on the same server which will also use some of the tables from "site1".
So should I use three different databases like "site1" (for first site specific data), "site2" (for second site specific data), and "general" (for common tables). In which there will be join statements between databases general and site1 and site2. Or should I put all tables in one database?
Which is the best practice to do?
How performances differ in each situation?
I am using MySQL. So how is the situation especially for MySQL?
Thanks in advance...
From the performance point of view, there won't be ANY difference. Just keep your indexes in place and you will not notice whether you are using single DB or multiple DBs.
Apart from performance, there are two small implications that I can think of:
1. Foreign keys across DBs do work in InnoDB (qualify the referenced table as otherdb.tbl), but many tools and dump/restore workflows handle them poorly (see the sketch after this list).
2. Partitioning tables into DBs based on their usage, or based on application, can help you manage permissions in an easy way.
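A hedged sketch of both points (all schema and column names are hypothetical); databases on the same MySQL server are essentially namespaces, so both queries and, in InnoDB, foreign keys can cross them by qualifying the table name:

```sql
-- Join across two databases on the same server:
SELECT a.title, u.user_name
FROM site1.articles AS a
JOIN general.users  AS u ON u.id = a.author_id;

-- InnoDB also accepts a cross-database foreign key:
ALTER TABLE site1.articles
  ADD CONSTRAINT fk_articles_author
  FOREIGN KEY (author_id) REFERENCES general.users (id);
```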
I can speak from recent personal experience. I have some old MySQL queries in PHP code that worked fine with a relatively small database, but as it grew the queries got slower and slower.
I have FreeRADIUS running on MySQL in its own database, along with a management PHP app that I wrote. The FreeRADIUS table is > 1.5 million rows. I was attempting to join tables from my app's database to the FreeRADIUS database. I can say for sure 1.5 million rows was too many. Running some queries locked up my app altogether. I ended up having to rewrite portions of my PHP app to do things differently (i.e. not joining two tables from different databases). I also indexed the RADIUS accounting table on some key fields and optimized some queries (MySQL's EXPLAIN statement is wonderful for this, as shown below). Things are MUCH faster now.
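For example (a sketch: radacct is the stock FreeRADIUS accounting table, while the app-side names are hypothetical):

```sql
-- EXPLAIN shows whether the cross-database join can use an index:
EXPLAIN
SELECT u.username, r.acctsessiontime
FROM appdb.users AS u
JOIN radius.radacct AS r ON r.username = u.username;

-- A join "type" of ALL with millions of estimated rows means a full
-- scan of radacct; an index on the join column fixes that:
ALTER TABLE radius.radacct ADD INDEX idx_username (username);
```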
I will definitely be hesitant to join 2 tables from different databases in the future unless really really necessary.

Maximum rows in a DBMS

Is there any limit to the maximum number of rows in a table in a DBMS (especially MySQL)?
I want to create a table for saving a logfile, and its rows increase very fast. I want to know what I should do to prevent any problems.
I don't think there is an official limit; it will depend on maximum index sizes and filesystem restrictions.
From the MySQL 5.0 Features page:
Support for large databases. We use MySQL Server with databases that contain 50 million records. We also know of users who use MySQL Server with 200,000 tables and about 5,000,000,000 rows.
You should periodically move log rows out to a historical database for data mining and purge them from the transactional database. It's a common practice.
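A minimal sketch of that pattern, with hypothetical database and table names:

```sql
-- Copy old rows to the historical database, then purge them:
INSERT INTO history_db.log_archive
SELECT * FROM app_db.logs
WHERE created_at < NOW() - INTERVAL 90 DAY;

DELETE FROM app_db.logs
WHERE created_at < NOW() - INTERVAL 90 DAY;
```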
There's probably some sort of limitation, dependent on the engine used and the table structure. I've got a table with approximately 45 million entries in a database I administer, and I've heard of (much) higher numbers.

Best approach to relating databases or tables?

What I have:
A MySQL database running on Ubuntu that maintains a large table of articles (similar to WordPress).
I need to relate a given article to another set of data. This set of data will be fairly large.
There may be various sets of data that will be related.
The query:
Is it better to contain these various large sets of data within the same database as the articles, which will have a lot of traffic on it?
or
Is it better to create different databases (on the same server) that relate by a primary key to the main database with the articles?
Put them all in the same DB initially, until you find that there is a performance issue. Much easier than prematurely optimising.
Modern RDBMS are very good at optimising data access.
If you need to connect frequently and read both sets of records, you should put them in the same database; the server then won't have to run permission checks twice, once for each of your databases.
If you have serious traffic, you should consider using persistent connections for those queries.
If you don't need to read them together frequently, consider putting them on different machines, so that the high traffic on the bigger database won't cause slowdowns on the other.
Different databases on the same server give you all the problems of a distributed architecture without any of the benefits of scaling out. One database per server is the way to go.
When you say 'same database' and 'different databases related', don't you mean 'same table' vs. 'different tables'?
If that's the question, I'd say:
one table for articles
if these 'other sets of data' are all of the same structure, put them all in the same table; if not, one table per kind of data
everything in the same database
If you grow big enough for database size to become a performance issue (after many millions of records and lots of queries a second), consider table partitioning (see the sketch below), or maybe replacing the biggest table with a key/value store (CouchDB, MongoDB, Redis, Tokyo Cabinet, etc.), which can be a little faster than MySQL but a lot easier to distribute for performance.
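As a sketch of the partitioning option (hypothetical schema), range partitioning by date lets old data be scanned or dropped one partition at a time:

```sql
CREATE TABLE article_metrics (
  article_id  INT  NOT NULL,
  recorded_on DATE NOT NULL,
  views       INT  NOT NULL,
  PRIMARY KEY (article_id, recorded_on)  -- must include the partition key
)
PARTITION BY RANGE (YEAR(recorded_on)) (
  PARTITION p2012 VALUES LESS THAN (2013),
  PARTITION p2013 VALUES LESS THAN (2014),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Dropping a whole year is nearly instant compared to a DELETE:
ALTER TABLE article_metrics DROP PARTITION p2012;
```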