Is there a reason to use two databases? - mysql

Is it because of size?
By the way, what is the limit of the size?

There are many reasons to use two databases, some that come to mind:
Size (the limit of which is controlled by the operating system, filesystem, and database server; a query to check current sizes is sketched below)
Separation of types of data. Think of a database like a book -- you wouldn't write a book that spans multiple subjects, and you shouldn't (necessarily) have a database with multiple subjects. As long as all of the data is somehow related, you can keep it together (i.e. all the tables have something to do with one website or application).
Import / Export - it might be easier to import data into your application if you can drop and restore a whole database, rather than import individual rows into a database table.
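On the size point above, a quick way to see how big each database currently is -- a minimal sketch, assuming you can read information_schema (sizes are approximate):

-- Approximate on-disk size of every database (schema) on the server, in MB
SELECT table_schema AS db_name,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;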

Separate applications, or services. I can't see any reason to use separate databases for a single app/service.
(note: replication, even multimaster, isn't a separate database. Neither is Sharding.)
I believe some on here are confusing Database with a Database Instance.
Example:
A phone book is a prime example of a Database.
Replication:
Having 2 copies of the same phone book does not mean you have 2 databases. It means you have 2 copies of 1 database, and that you can hand 1 to someone else so you can both look up different things at the same time, thereby accomplishing more work at once.
Sharding:
You could tear those phone books apart at the end of the white pages and the beginning of the yellow pages and hand them to 2 more people. You could tear them further at each letter, and when you need Susan Summers, ask the person with that section of the book to look her up.

Suppose you wanted to publish or reuse some external database and keep it separate from your primary database. This would be a good reason to use 2 databases... You can drop and reimport the external database at any time without affecting your primary database, and vice versa...

You can use two databases for the same reason most banks have two ATMs: reliability. You can swap one in if the other fails, but to do it quickly requires setup, such as a CNAME and control of your own DNS server.
You can also do writes on one database, if the writes have complex triggers on them, and use some syncing between databases to keep the second one updated; the second one is then used for selects.
You can use two databases for load sharing, for example, use round-robin to split up the load so one isn't overloaded.
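For the write-primary / read-replica split mentioned above, the "syncing" is usually ordinary MySQL replication. A minimal sketch, run on the read replica (hostnames, credentials and binlog coordinates are placeholders; newer MySQL versions use CHANGE REPLICATION SOURCE TO / START REPLICA instead):

-- Point the replica at the primary and start applying its changes
CHANGE MASTER TO
    MASTER_HOST = 'primary.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;
-- The application then sends writes (and their triggers) to the primary
-- and runs its SELECTs against the replica.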

I sometimes have separate databases because they handle different concerns, e.g. a Reporting database or an Authentication database.

Replication

Making your system scalable by dividing your database system across different physical locations
Providing redundancy/replication as a backup and for seamless uptime.

As Ben mentioned, Replication is one reason. Another is load balancing.
For example, Hotmail uses many database servers and customer data is broken up across the databases.
Having all of their customers' data on one server would not only impose massive storage requirements, but the response times would be horrible.
In other cases, the data may be separated by function. You may well have two sets of data which are either not connected, or at least very loosely so, and in such cases, it may make sense to separate that data from the rest.

Also consider IO needs: writing to one, reading from the other; one with immediate transactional needs, the other where "transactions" can be queued; one instance at high priority, the other at "idle" priority, etc. Note, however, that with the correct hardware and tablespace/filesystem layouts, most of these situations can be handled in a single DB.

I think SQLite databases on the iPhone are limited to a size of 50 megabytes, but you can open several databases.


mySQL performance: one huge database vs. many small

I am developing a site that has many subdomains in it.
It has a blogging module, a management system, and many more. I have shared this question on various sites but couldn't get a proper reply.
The question is: should I use one database for all the modules? This means my database would have nearly 100 tables. Is this approach appropriate, or should I create a separate database for every module?
Well, it does not really matter.
If you use InnoDB with a single data file (the innodb_file_per_table setting is not enabled), then all data will be stored in a single file anyway.
With InnoDB's file-per-table mode or with the MyISAM table engine, the only difference between one database and multiple databases is really the directory where the database files are stored. Unless the directories (databases) are located on different storage devices with different speeds, their performance will be the same.
There can be 2 reasons to keep some tables in a different database:
Security: MySQL does not support role-based access control. Therefore, if there is a group of tables that should be accessible only by a certain group of users, the access control is more manageable if those tables are in a different database.
If some of the modules you mentioned happen to use the same table name, then you will have to move them to a separate database, or modify the code and table names to avoid errors.
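A small sketch of both points, with made-up database, table and user names: a per-database grant is one statement, and identical table names simply live side by side in different databases:

-- Security: a read-only account that can see the reporting database only
CREATE USER 'report_user'@'%' IDENTIFIED BY 'secret';
GRANT SELECT ON reporting_db.* TO 'report_user'@'%';

-- Name collisions: two modules can each keep their own `settings` table
SELECT * FROM blog_db.settings;
SELECT * FROM shop_db.settings;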
There is no right or wrong way to design a system. Just advantages and disadvantages to the various techniques. I normally work in Oracle and SQL Server so I had to look up some terms for MySQL. According to my research, in MySQL a database is synonymous with a schema which changes things. I'd consider these things when planning the physical design for any vendor:
Security - Do all subdomains need read/write to each other? How are the users secured? Choosing one or many schemas can impact how easy schema and user security is to manage and control.
Growth - Do some subdomains grow at a faster rate than others? If yes, I'd consider separating them to allow for the different growth rates.
Organization - Is it easier to identify the different subdomains in practice if they're separated? If you don't separate them, use a strong naming convention so you can easily identify objects within one subdomain.
Linking - How easy is it to access one schema/database from another?
Hope this helps.

5 separate databases or 5 tables in 1 database?

Let's say I want to build a gaming website and I have many game sections. They ALL have a lot of data that needs to be stored. Is it better to make one database with a table representing each game or have a database represent each section of the game? I'm pretty much expecting a "depends" kind of answer.
Managing 5 different databases is going to be a headache. I would suggest using one database with 5 different tables. Aside from anything else, I wouldn't be surprised to find you've got some common info between the 5 - e.g. user identity.
Note that your idea of "a lot of data" may well not be the same as the database's... databases are generally written to cope with huge globs of data.
Depends.
Just kidding. If this is one project and the data are in any way related to each other I would always opt for one database absent a specific and convincing reason for doing otherwise. Why? Because I can't ever remember thinking to myself "Boy, I sure wish it were harder to see that information."
While there is not enough information in your question to give a good answer, I would say that unless you foresee needing data from two games at the same time for the same user (or query), there is no reason to combine databases.
You should probably have a single database for anything common, and then create independent databases for anything unique. Databases, like code, tend to end up evolving in different directions for different applications. Keeping them together may lead you to break things or to be more conservative in your changes.
In addition, some databases are optimized, managed, and backed-up at a database level rather than a table level. Since they may have different performance characteristics and usage profiles, a one-size-fit-all solution may not be scalable.
If you use an ORM framework, you get access to multiple databases (almost) for free while still avoiding code replication. So unless you have joint queries, I don't think it's worth it to pay the risk of shared databases.
Of course, if you pay someone to host your databases, it may be cheaper to use a single database, but that's really a business question, not software.
If you do choose to use a single database, do yourself a favour and make sure the code for each game only knows about specific tables. It would make it easier for you to maintain things later or separate into multiple databases.
One database.
Most of the stuff you are reasonably going to want to store is going to be text, or primitive data types such as integers. You might fancy throwing your binary content into blobs, but that's a crazy plan on a media-heavy website when the web server will serve files over HTTP for free.
I pulled lead programming duties on a web-site for a major games publisher. We managed to cover a vast portion of their current and previous content, in three European languages.
At no point did we ever consider having multiple databases to store all of this, despite the fact that each title was replete with video and image resources.
I cannot imagine why a multiple database configuration would suit your needs here, either in development or outside of it. The amount of synchronisation you'll have to do, and the capacity for error, is immense. Trying to pull data that pertains to all of them from all of them will be a nightmare.
Every site-wide update you migrate will be n times as hard and error prone, where n is the number of databases you eventually plump for.
Seriously, one database - and that's about as far from your anticipated depends answer as you're going to get.
If the different games don't share any data it would make sense to use separate databases. On the other hand it would make sense to use one database if the structure of the games' data is the same--you would have to make changes in every game database separately otherwise.
Update: In case of doubt you should always use one database, because it's easier to manage in most cases. Only if you're sure that the applications are completely separate and have completely different structures should you use more databases. The only real advantage is more clarity.
Generally speaking, "one database per application" tends to be a good rule of thumb.
If you're building one site that has many sections for talking about different games (or different types of games), then that's a single application, so one database is likely the way to go. I'm not positive, but I think this is probably the situation you're asking about.
If, on the other hand, your "one site" is a battle.net-type matching service for a collection of five distinct games, then the site itself is one application and each of the five games is a separate application, so you'd probably want six databases since you have a total of six largely-independent applications. Again, though, my impression is that this is not the situation you're asking about.
If you are going to be storing the same data for each game, it would make sense to use 1 database to store all the information. There would be no sense in replicating table structures across different databases, likewise there would be no sense in creating 5 tables for 5 games if they are all storing the same information.
I'm not sure this is correct, but I think you want to do one database with 5 tables because (along with other reasons) of the alternative's impact on connection pooling (if, for example, you're using ADO.Net). In the ADO.Net connection pool, connections are keyed by the connection string, so with five different databases you might end up with 20 connections to each database instead of 100 connections to one database, which would potentially affect the flexibility of the allocation of connections.
If anybody knows better or has additional info, please add it here, as I'm not sure if what I'm saying is accurate.
What's your idea of "a lot of data"? The only reason that you'd need to split this across multiple databases is if you are trying to save some money with shared hosting (i.e. getting cheap shared hosts and splitting it across servers), or if you feel each database will be in the 500GB+ range and do not have access to appropriate storage.
Note that both of these reasons have nothing to do with architecture, and entirely based on monetary concerns during scaling.
But since you haven't created the site yet, you're putting the cart before the horse. It is very unlikely that a brand new site would use anywhere near this level of storage, so just create 1 database.
Some companies have single databases in the 1,000+ TB range ... there is basically no upper bound on database size.
The number of databases you want to create depends not on the number of your games, but on the data stored in the databases, or, better said, on how you exchange that data between the databases.
If it is export and import, then do separate databases.
If it is normal relationships (with foreign keys and cross-queries), then leave it in one database.
If the databases are not related to each other, then they are separate databases, of course.
In one of my projects, I distinguished between the internal and external data (which were stored in separate databases).
The difference was quite simple:
The external database stored only the facts you cannot change or undo: phone calls, SMS messages and incoming payments, in our case.
The internal database stored the things that are usually stored: users, passwords, etc.
The external database used only natural PRIMARY KEYs: phone numbers, bank transaction IDs, etc.
The databases were given completely different rights, and exchanging data between them was a matter of import and export, not relationships.
This made sure that nothing could happen to the actual data: it is easy to relink a payment to a user, but it's very hard to restore a payment if it's lost.
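As a sketch of what that looked like (table and column names here are illustrative, not the actual schema), the external database held only immutable facts keyed by their natural identifiers, so dropping and re-importing it was always safe:

-- External database: facts that can never be changed or undone
CREATE TABLE incoming_payments (
    bank_transaction_id VARCHAR(64) NOT NULL,   -- natural key from the bank
    phone_number        VARCHAR(20) NOT NULL,
    amount              DECIMAL(10,2) NOT NULL,
    received_at         DATETIME NOT NULL,
    PRIMARY KEY (bank_transaction_id)
);
-- Linking a payment to a user happens later, in the internal database.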
I can pass on my experience with a similar situation.
We had 4 "Common" databases and about 30 "Specific" databases, separated for the same space concerns. The downside is that the space concerns were just projecting dBase shortcomings onto SQL Server. We ended up with all these databases on SQL Server Enterprise that were well under the maximum size allowed by the Desktop edition.
From a database perspective with respect to separation of concerns, the 4 Common databases could've been 2. The 30 Specific databases could've been 3 (or even 1 with enough manipulation / generalization). It was inefficient code (both stored procs and data access layer code) and table schema that dictated the multitude of databases; in the end it had nothing at all to do with space.
I would consolidate as much as possible early and keep your design & implementation flexible enough to extract components if necessary. In short, plan for several databases but implement as one.
Remember, especially on web sites. If you have multiple databases, you often lose the performance benefits of query caching and connection pooling. Stick to one.
Definitely, one database.
One place I worked had many databases: a common one for the stuff all clients used, and client-specific ones for per-client customization. What ended up happening was that since the clients asked for the changes, they would end up in the client database instead of the common one, and thus there would be 27 ways of doing essentially the same thing, because there was no refactoring from client-specific to "hey, this is something other clients will need to do as well, so let's put it in common." So one database tends to lead to less reinventing the wheel.
Security Model
If each game will have a distinct set of permissions/roles specific to that game, split it out.
Query Performance / Complexity
I'd suggest keeping them in a single database if you need to frequently query across the data between the games.
Scalability
Another consideration is your scalability plans. If the games get extremely popular, you might want to buy separate database hardware for each game. Separating them into different databases from the start would make that easier.
Data Size
The size of the data should not be a factor in this decision.
Just to add a little: when you have millions and millions of players in one game, your game is realtime, you have tens of thousands of simultaneous players online, and you have to keep at least some essential data (say, a player's virtual money) as up to date in the DB as possible, then you will want to separate tables into independent DBs even though they are all "connected".
It really depends. And scaling will be painful whatever you may try to do to avoid it being painful. But if you really expect A LOT of players, updates and data, I would advise thinking twice, thrice and more before settling on a "one DB for several projects" solution.
Yes, it will probably be difficult to manage several DBs. But you will have to do this anyway.
Really depends :)..
Ask yourself these questions:
Could there be reusability (a users table) that I may want to think about?
Is it worth separating these entities, or are they pretty much the same?
Do any of these entities share specific events / needs?
Is it worth my time and effort to build 5 different database systems (remember, if you are writing the games, that would mean different connection strings and also more security concerns, etc.)?
Or you could create one database OnlineGames and have a table that stores the game name and a category (a sketch follows below):
PacMan Arcade
Zelda Role playing
etc etc..
It really depends on what your intentions are...
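That last idea as a minimal sketch, using the example rows above (names and types are illustrative):

CREATE DATABASE OnlineGames;

CREATE TABLE OnlineGames.games (
    game_id  INT AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(100) NOT NULL,   -- e.g. 'PacMan', 'Zelda'
    category VARCHAR(50)  NOT NULL    -- e.g. 'Arcade', 'Role playing'
);

INSERT INTO OnlineGames.games (name, category)
VALUES ('PacMan', 'Arcade'),
       ('Zelda', 'Role playing');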

Can I set up a filtered, star-pattern database replication?

We have a client that needs to set up N local databases, each one containing one site's data, and then have a master corporate database containing the union of all N databases. Changes in an individual site database need to be propagated to the master database, and changes in the master database need to be propagated to the appropriate individual site database.
We've been using MySQL replication for a client that needs two databases that are kept simultaneously up to date. That's a bidirectional replication. If we tried exactly the same approach here we would wind up with all N local databases equivalent to the master database, and that's not what we want. Not only should each individual site not be able to see data from the other sites, sending that data N times from the master instead of just once is probably a huge waste.
What are my options for accomplishing this new star pattern with MySQL? I know we can replicate only certain tables, but is there a way to filter the replication by records?
Are there any tools that would help or competing RDBMSes that would be better to look at?
SymmetricDS would work for this. It is web-enabled, database independent, data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale for a large number of databases, work across low-bandwidth connections, and withstand periods of network outage.
We have used it to synchronize 1000+ MySQL retail store databases to an Oracle corporate database.
I've done this before, and AFAIK this is the easiest way. You should look into using Microsoft SQL Server Merge Replication with Row Filtering. Your row filtering would be set up to have a column that states which individual site destination each row should go to.
For example, your tables might look like this:
ID_column | column2 | destination
The data in the column might look like this:
12345 | 'data' | 'site1'
You would then set your merge replication "subscriber" site1 to filter on column 'destination' and value 'site1'.
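Conceptually, the row filter is just a WHERE-style predicate attached to the published article (configured through the article's properties or sp_addmergearticle). A sketch of the two common forms, using the column from the example above:

-- Static filter: this subscription only receives site1's rows
destination = 'site1'

-- Parameterized filter: one publication serves every site, and each
-- subscriber only gets rows whose destination matches its own machine name
destination = HOST_NAME()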
This article will probably help:
Filtering Published Data for Merge Replication
There is also an article on msdn called "Enhancing Merge Replication Performance" which may help - and also you will need to learn the basics of setting up publishers and subscribers in SQL Server merge replication.
Good luck!
It might be worth a look at mysql-table-sync from Maatkit, which lets you sync tables with an optional --where clause.
If you need unidirectional replication, then use multiple copies of the site databases, replicated at the center of the star, and a custom "bridge" application to move the data onward into the final, combined database.
Just a random pointer: Oracle Lite supports this. I evaluated it once for a similar task; however, it needs something installed on all clients, which was not an option.
A rough architecture overview can be found here
Short answer no, you should redesign.
Long answer yes, but it's pretty crazy and will be a real pain to setup and manage.
One way would be to round-robin the main database's replication among the sites. Use a script to replicate from a site for, say, 30 seconds, record how far it got, and then go on to the next site. You may wish to look at replicate-do-db and friends to limit what is replicated.
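As a sketch of the replicate-do-db idea (the database name is a placeholder): on older MySQL versions this is a replicate-do-db line in the replica's configuration; MySQL 5.7 and later can also set it as a statement on the replica:

-- Only apply replicated events for this site's database
-- (run on the replica, with the replication SQL thread stopped)
CHANGE REPLICATION FILTER REPLICATE_DO_DB = (site1_db);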
Another option, which I'm unsure would work, is to have N MySQL instances in the main office, each replicating from one of the site offices, and then use the FEDERATED storage engine to provide a common view from the main database into the per-site slaves. The site slaves can replicate from the main database and pick up whichever changes they need.
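For the FEDERATED part of that idea, a table in the main database can act as a window onto a table held on one of the per-site slaves. A minimal sketch with placeholder host, credentials and columns (the local definition has to match the remote table):

-- No data is stored locally; queries are forwarded to the remote server
CREATE TABLE site1_orders (
    order_id INT NOT NULL,
    total    DECIMAL(10,2),
    PRIMARY KEY (order_id)
) ENGINE=FEDERATED
  CONNECTION='mysql://fed_user:secret@site1-slave.example.com:3306/sitedb/orders';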
Sounds like you need some specialist assistance - and I'm probably not it.
How 'real-time' does this replication need to be?
Some sort of ETL process (or processes) is possibly an option. We use MS SSIS and Oracle in-house; SSIS seems to be fairly good for ETL-type work (but I don't work at that specific coal face so I can't really say).
How volatile is the data? Would you say the data is mostly operational / transactional?
What sort of data volumes are you talking about?
Is the central master also used as a local DB for the office where it is located? If it is, you might want to change that - have head office work just like a remote office - that way you can treat all offices the same; you'll often run into problems / anomalies if different sites are treated differently.
It sounds like you would be better served by stepping outside of a direct database structure for this.
I don't have a detailed answer for you, but this is the high level of what I would do:
I would select from each database a list of changes during the past (reasonable time frame), construct the insert and delete statements that would unify all of the data on the 'big' database, and then separate smaller sets of insert and delete statements for each of the specific databases.
I would then run these.
There is a potential for 'merge' issues with this setup if there is any overlap with data coming in and out.
There is also the issue of data being lost or duplicated because your time frames were not constructed properly.

MySQL Databases. How Many for a Web App?

I'm building a web app. This app will use MySQL to store all the information associated with each user. However, it will also use MySQL to store sys admin type stuff like error logs, event logs, various temporary tokens, etc. This second set of information will probably be larger than the first set, and it's not as important. If I lost all my error logs, the site would go on without a hiccup.
I am torn on whether to have multiple databases for these different types of information, or stuff it all into a single database, in multiple tables.
The reason to keep it all in one, is that I only have to open up one connection. I've noticed a measurable time penalty for connection opening, particularly using remote mysql servers.
What do you guys do?
First, I must say, I think storing all your event logs and error logs in the DB is a very bad idea; instead you may want to store them on the filesystem.
You will only need error logs or event logs if something in your web app goes unexpectedly. Then you download the file and examine it, that's all. There is no need to store it in the DB; it will slow down your DB and your web app.
As an answer to your question: if you really want to do that, you should separate them, and you should find a way to keep your page running even when your event log and error log databases are loaded and responding slowly.
Going with two distinct databases (one for your application's "core" data, and another one for "technical" data) might not be a bad idea, at least if you expect your application to have a lot of users:
it'll allow you to put one DB on one server, and the other DB on a second server
and you can think about scaling a bit more, later : more servers for the "core" data, and still only one for the "technical" data -- or the opposite
if the "technical" data is not as important, you can (more easily) have two distinct backup processes / policies
having two distinct databases, and two distinct servers, also means you can have heavy calculations on the technical data, without impacting the DB server that hosts the "core" data -- and those calculations can be useful, on logs, or stuff like that.
As a side note: if you don't need that kind of "reporting" calculation, maybe storing that data in a DB is not useful, and files would do perfectly?
Maybe opening two connections means a bit more time -- but that difference is probably rather negligible, is it not?
I've worked a couple of times on applications that would use two database :
One "master" / "write" database, that would be used only for writes
and one "slave" database (a replication of the first one, to several slave servers), that would be used for reads
This way, yes, we sometimes open two connections -- but one server alone would not have been able to handle the load...
Use connection pooling anyway. So the time to get a connection is not a problem. But if you have 2 connections, transaction handling become more complicated. On the other hand, sometimes it's handy to have 2 connections: if something goes wrong on the business transaction, you can rollback transaction and still log the failure on the admin transaction. But I would still stick to one database.
I would only use one database - mostly for the reason you supply: you only need one connection to reach both the logging and the user-stored data.
Depending on your programming language, some frameworks (J2EE, as an example) provide connection pooling. With two databases you would need two pools. In PHP, on the other hand, the cost of setting up a connection (or two) on every request comes more into perspective.
I see no reason for two databases. It'd be perfectly acceptable to have tables that are devoted to "technical" and "business" data, but the logical separation should be sufficient.
Physical separation doesn't seem necessary to me, unless you mean an application and data warehouse star schema. In that case, it's either real-time updates or, more typically, a nightly batch ETL.
It makes no difference to MySQL in any way whether you use separate "databases"; they are simply catalogues.
It may make setting permissions easier; this is a legitimate reason to do it. Other than that, it is exactly the same as keeping the tables in the same db (except that you can have several tables with the same name ... but please don't).
Putting them on separate servers might be a good idea however, as you probably don't want your core critical (user info, for example) data mixed in with your high-volume, unimportant data. This is particularly true for old audit data, debug logs etc.
Also short-lived data, such as search results, sessions etc, could be placed on a different server - it presumably has no high availability[1] requirement.
Having said that, if you don't need to do this, dump it all on one server where it's easier to manage (backup, provide high availability, manage security, etc).
It is not generally possible to take a consistent snapshot of data on >1 server. This is a good reason to only have one (or one that you care about for backup purposes)
[1] Of the data, not the database.
In MySQL, InnoDB has an option of storing all tables of a certain database in one file, or having one file per table.
Having one file per table is somewhat recommended anyway, and if you do that, it makes little difference at the database storage level whether you have one database or several.
With connection pooling, one database or several is probably not going to matter either.
So, in my opinion, the question is if you'd ever consider separating the "other half" of the database to a separate server - with the separate server having perhaps a very different hardware configuration, such as no RAID. If so, consider using separate databases. If not, use a single database.

Should I split the data between multiple databases or keep them in a single one?

I'm creating a multi-user/company web application in PHP & MySQL. I'm interested to know what the best practice is with regards to structuring my database(s).
There will be hundreds of companies and thousands of users of this web app, so this needs to be robust. Each company won't be able to see other companies' data, just their own. We will be storing mainly text data, which will probably be only a few MB per company.
Currently the database contains 14 tables (for one sample company).
Is it better to put the data for all companies and their users in a single database and create a unique companyID for each one?
or:
Is it better to put each company's data in its own database and create a new database and table set for each new company that I add?
What are the pluses and minuses to each approach?
Thanks,
Stephen
If a single web app is being used by all the different companies, unless you have a very specific need or reason to use separate databases (it doesn't sound like you do), then you should definitely use a single database.
Your application will be responsible for only showing the correct information to the correct authenticated users.
Multiple databases would be a nightmare to maintain. For each new company you'd have to create and administer each one. If you make a change to one schema, you'll have to do it to your 14+.
Thousands of users and thousands of apps shouldn't pose a problem at all as long as you're using something that is a real database and not Access or something silly like that.
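A minimal sketch of that single-database, single-schema approach (table and column names are illustrative): every tenant-owned table carries the company's ID, and the application scopes every query to the authenticated company:

CREATE TABLE invoices (
    invoice_id INT AUTO_INCREMENT PRIMARY KEY,
    company_id INT NOT NULL,               -- which tenant owns this row
    amount     DECIMAL(10,2) NOT NULL,
    INDEX idx_invoices_company (company_id)
);

-- Always filter on the company of the logged-in user
SELECT invoice_id, amount
FROM invoices
WHERE company_id = 42;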
Multi-tenant
Pluses
Relatively easy to develop: only change database code in one place.
Lets you easily create queries which use data for multiple tenants.
Straightforward to add new tenants: no code needs to change.
Transforming a multi-tenant to a single-tenant setup is easy, should you need to change your design.
Minuses
Risk of data leak between tenants if coding is sloppy. Tenant view filters can in some cases be employed to reduce this risk (a sketch follows after this list). This method is based on using different database user accounts for different tenants.
If you break the code, all tenants will be affected.
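A sketch of the tenant view filter mentioned above, in MySQL flavour with illustrative names: each tenant connects with its own database account and is only granted access to views that filter rows by that account. (USER() returns the connecting client's account even inside a default, definer-security view, unlike CURRENT_USER(); the base table is assumed to carry a tenant_account column naming the owning account.)

-- Defined by a privileged account; filters on the connecting tenant account
CREATE VIEW webapp.my_invoices AS
    SELECT invoice_id, amount
    FROM webapp.invoices
    WHERE tenant_account = SUBSTRING_INDEX(USER(), '@', 1);

-- The tenant account can read the view but has no grant on the base table
CREATE USER 'acme'@'%' IDENTIFIED BY 'secret';
GRANT SELECT ON webapp.my_invoices TO 'acme'@'%';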
Single-tenant
Pluses
If you have very different requirements for different tenants, several different database models can be beneficial. This is the best case for using a single tenant setup.
If you code sloppily, there's practically no risk of data leak between tenants (tenant A will not be able to access tenant B's data). In addition, if you accidentally destroy the schema of one tenant through a botched update, other tenants will remain unaffected.
Less SQL code when you don't need to take tenant ID values into account in your queries
Minuses
Database schemas tend to diverge over time, often resulting in a nightmare. Using a database compare tool, you can alleviate this problem, but potentially many schemas need to be compared.
Including data from several databases in one query is typically complex, and often requires prepared statements.
Developing is hard, since you need to make the same changes to multiple schemas.
The same database entity can appear in many databases with different ID keys, resulting in confusion.
Transforming a single-tenant to a multi-tenant setup is very hard, should you need to change your design.
A single database is the relational way. One aspect from this perspective is that databases gather statistics about database usage and make heavy use of this. If you split things up you will be shooting yourself in the foot as the statistics will be fragmented.