Database: 70+ Columns or Multiple Tables? - mysql

Building a database system for my local Medical Association.
What we have is a list with something like 70+ fields of information for each member of the association: name, surname, home address, office address, phone numbers, specialty, plus many small details.
At the moment I've built one table with all the information related to the doctors, plus other tables for related things like subscription payments, requests, penalties, etc.
I'm quite new to database design, and while it works, I find my design ugly. It is logical, in that all the information in each row is unique and belongs to just one person, but I'm sure there is a better way to do it.
How would you go about it? Should I create multiple 1:1 tables, one for each subject (basic info, contact info, education, etc.), or keep it as it is: one table with 70+ columns?

I wouldn't worry about 70 columns in a table. This is not a problem for MySQL.
MySQL can support many more columns than that; InnoDB's hard limit on the number of columns in a table is 1017 (it was 1000 before MySQL 5.6.9).
Read this blog about Understanding the Maximum Number of Columns in a MySQL Table for details.
It's more convenient to keep all the attributes that belong to an entity in one table. Separating the columns into multiple tables will take more coding work to support, if you later feel you need to do that.
If some of the columns are not applicable, use NULL. NULL takes almost no storage in MySQL, so you won't be "wasting" any space by having a lot of columns most of which are NULL.
The only downside is that you may find yourself adding more columns as time goes on, and that could be inconvenient if the table grows large and access to the table is blocked while you are altering it. In that case, learn to use pt-online-schema-change, a free tool that allows you to alter tables while continuing to use them.
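As a minimal sketch (assuming a hypothetical members table; the column names here are illustrative, not the OP's actual 70), here is the wide-table approach and both ways of adding a column later:

    CREATE TABLE members (
      member_id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      name        VARCHAR(100) NOT NULL,
      surname     VARCHAR(100) NOT NULL,
      home_addr   VARCHAR(255),   -- NULL when not applicable: costs almost nothing
      office_addr VARCHAR(255),
      specialty   VARCHAR(100)
      -- ... the remaining ~65 columns ...
    ) ENGINE=InnoDB;

    -- Plain ALTER (may block access while the table is rebuilt):
    ALTER TABLE members ADD COLUMN fax VARCHAR(20);

    -- The pt-online-schema-change equivalent, run from the shell, keeps the
    -- table usable during the change (database and table names assumed):
    -- pt-online-schema-change --alter "ADD COLUMN fax VARCHAR(20)" D=assoc,t=members --execute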

1:1 is rarely wise. But it may be advisable if you have "too many" columns, especially if they are "too bulky".
Do use suitable datatypes for the columns -- ints, dates, etc.
Do use suitable VARCHAR sizes, not blindly VARCHAR(255) or TEXT. (This will help later in certain subtle limits and in performance.)
Study the data. If, for example, only half the persons have a "subscription", then the 5 columns relating to a subscription can (should) be moved to a separate table. Probably the subscription table would have a person_id for linking to the main table. This is potentially 1:many (one person to many subscriptions) if that is relevant.
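A sketch of that split; the subscription columns and types here are invented for illustration:

    CREATE TABLE subscriptions (
      subscription_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      person_id  INT UNSIGNED NOT NULL,  -- links back to the main person table
      start_date DATE,
      end_date   DATE,
      fee        DECIMAL(8,2),
      status     VARCHAR(20),
      INDEX (person_id)                  -- 1:many -- one person, many subscriptions
    ) ENGINE=InnoDB;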
By splitting off some columns, you avoid lots of NULLs. Nulls are not harmful; it just seems sloppy if there are lots of nulls.
If you are talking about only a few hundred rows, then you are unlikely to encounter significant performance issues regardless of how you structure the tables.
"phone numbers" used to come in "home" and "work". Now there is "fax", "cell", "emergency contact", and even multiple numbers of any category. That is very likely a 1-person-to-many-numbers.
Selectively "normalize" the data. You said "local". It may be worth "normalizing" (city, state, zip) into a separate table. Or it may not be worth the effort. I argue that you should no normalize phone numbers.
Do not have an "array" of things splayed across columns. Definitely use a separate table when such occurs.
Do think about 1:1, 1:many, and many:many when building "entity" tables and their "relationships".
If you have a million rows, these tips become more important and need to be pondered carefully.

Related

MySQL: database structure choice - big data - duplicate data or bridging

We have a 90GB MySQL database with some very big tables (more than 100M rows). We know this is not the best DB engine but this is not something we can change at this point.
Planning a serious refactoring (performance and standardization), we are considering several approaches to restructuring our tables.
The data flow / storage is currently done in this way:
We have one table called articles, one connection table called article_authors and one table authors
One single author can have 1..n firstnames, 1..n lastnames, 1..n emails
Every author has a unique parent (unique_author), except if that author is the parent
The possible data query scenarios are as follows:
Get the author firstname, lastname and email for a given article
Get the unique authors.id for an author called John Smith
Get all articles from the author called John Smith
The current DB schema links articles to authors through the article_authors connection table. The original diagram is not reproduced here, but a plausible reconstruction from the description above (exact column names and types are assumptions) is:
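    CREATE TABLE articles (
      article_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      title      VARCHAR(255) NOT NULL
    ) ENGINE=InnoDB;

    CREATE TABLE authors (
      author_id     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      firstname     VARCHAR(100),
      lastname      VARCHAR(100),
      email         VARCHAR(255),
      unique_author INT UNSIGNED   -- parent author; equals author_id for the parent itself
    ) ENGINE=InnoDB;

    CREATE TABLE article_authors (
      article_id INT UNSIGNED NOT NULL,
      author_id  INT UNSIGNED NOT NULL,
      PRIMARY KEY (article_id, author_id)
    ) ENGINE=InnoDB;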
EDIT: The main problem with this structure is that we always duplicate similar given_names and last_names.
We are now hesitating between two different structures:
Rewrite #1: a large number of tables; data is split and connected through IDs, with no duplicates in the main tables (articles and authors). We are not sure how this would impact performance, as we would need several joins in order to retrieve data.
Rewrite #2: data is split among a reasonable number of tables, with duplicate entries in the article_authors table (author firstname, lastname and email alternatives), in order to reduce the number of tables and the application code complexity. One author could have 10 alternatives, so we would have 10 entries for the same author in the article_authors table.
The current schema is probably the best. The middle table is a many-to-many mapping table, correct? That can be made more efficient by following the tips here: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table
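Those tips boil down to: no surrogate id on the mapping table, a composite primary key covering one lookup direction, and a secondary index covering the other. In sketch form, with the column names assumed from the question:

    CREATE TABLE article_authors (
      article_id INT UNSIGNED NOT NULL,
      author_id  INT UNSIGNED NOT NULL,
      PRIMARY KEY (article_id, author_id),  -- finds the authors of an article
      INDEX (author_id, article_id)         -- finds the articles of an author
    ) ENGINE=InnoDB;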
Rewrite #1 smells like "over-normalization". A big waste.
Rewrite #2 has some merit. Let's talk about phone_number instead of last_name because it is rather common for a person to have multiple phone_numbers (home, work, mobile, fax), but unlikely to have multiple names. (Well, OK, there are pseudonyms for some authors).
It is not practical to put a bunch of phone numbers in a cell; it is much better to have a separate table of phone numbers linked back to whoever they belong to. This would be 1:many. (Ignore the case of two people sharing the same phone number -- due to sharing a house, or due to working at the same company. Let the number show up twice.)
I don't see why you want to split firstname and lastname. What is the "firstname" of "J. K. Rowling"? I suggest that it is not useful to split names into first and last.
A single author would have a unique "id". MEDIUMINT UNSIGNED AUTO_INCREMENT is good for such. "J. K. Rowling" and "JK Rowling" can both link to the same id.
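A sketch of that idea, using an invented alias table to map spelling variants onto one canonical id:

    CREATE TABLE authors (
      author_id      MEDIUMINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      canonical_name VARCHAR(255) NOT NULL
    ) ENGINE=InnoDB;

    CREATE TABLE author_aliases (
      alias     VARCHAR(255) NOT NULL,
      author_id MEDIUMINT UNSIGNED NOT NULL,
      PRIMARY KEY (alias, author_id)
    ) ENGINE=InnoDB;

    -- Both spellings resolve to the same author:
    INSERT INTO authors (canonical_name) VALUES ('J. K. Rowling');  -- gets id 1
    INSERT INTO author_aliases VALUES ('J. K. Rowling', 1), ('JK Rowling', 1);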
More
I think it is very important to have a unique id for each author. The id can be then used for linking to books, etc.
You have pointed out that it is challenging to map different spellings into a single id. I think this should be essentially a separate task with separate table(s). And it is this task that you are asking about.
That is, split the database, and split the tasks in your mind, into:
one set of tables containing stuff to help deduce the correct author_id from the inconsistent information provided from the outside.
one set of tables where author_id is known to be unique.
(It does not matter whether this is one versus two DATABASEs, in the MySQL sense.)
The mental split helps you focus on the two different tasks, plus it prevents some schema constraints and confusion. None of your proposed schemas does the clean split I am proposing.
Your main question seems to be about the first set of tables -- how to turn strings of text ("JK Rawling") into a specific id. At this point, the question is first about algorithms, and only secondarily about the schema.
That is, the tables should be designed to support the algorithm, not to drive it. Furthermore, when a new provider comes along with some strange new text format, you may need to modify the schema - possibly adding a special table for that provider's data. So, don't worry about making the perfect schema this early in the game; plan on running ALTER TABLE and CREATE TABLE next month or even next year.
If a provider is consistent in spelling, then a table with (provider_id, full_author_name, author_id) is probably a good first cut. But that does not handle variations of spelling, new authors, and new providers. We are getting into gray areas where human intervention will quickly be needed. Even worse is the issue of two authors with the same name.
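A sketch of that first cut (the types are guesses):

    CREATE TABLE provider_author_map (
      provider_id      SMALLINT UNSIGNED NOT NULL,
      full_author_name VARCHAR(255) NOT NULL,
      author_id        MEDIUMINT UNSIGNED NOT NULL,
      PRIMARY KEY (provider_id, full_author_name)  -- one spelling per provider maps to one id
    ) ENGINE=InnoDB;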
So, design the algorithm with the assumption that simple data is easily and efficiently available from a database. From that, the schema design will somewhat easily flow.
Another tip here... Some degree of "brute force" is OK for the hard-to-match cases. Most of the time, you can easily map name strings to author_id very efficiently.
It may be easier to fetch a hundred rows from a table, then massage them in your algorithm in your app code. (SQL is rather clumsy for algorithms.)
If you want to reduce size, you could also think about splitting email addresses into two parts: 'jkrowling' + 'gmail.com' (local part and domain). You could have a table where you store common email domains, but seeing that over-normalization is a concern...

Is it fine to split data of one table even though it has a one-to-one relation?

We have a used-commodities website where sellers upload their commodities and buyers can see their contact details after registering their mobile number on our website. Every such lead/transaction where a buyer takes a seller's details gets stored in a Leads table that looks like:
LeadId | CustomerId
LeadId: Primary, Auto-increment key of this table.
CustomerId: Foreign key of CustomerTable.
Now we want to track which lead came from which source. For this we need several tracking parameters like sourceId, campaignID, platformId, and several others.
Lead and these tracking parameters have a one-to-one relation, but does it make sense to keep them together, or should we have a separate table for them?
Example: a LeadTracking table which would have all these tracking parameters.
Note: This is over simplified example. We actually have several Lead and LeadTracking columns.
Can we keep them in separate tables? How should we decide when to split such tables?
Logically, the tracking columns can probably all be in the Leads table (based on your description of them as 1:1).
You can implement the data structure using one or two tables; that is an implementation decision.
Why would you do this? Here are some reasons:
If many of the leads do not have tracking information, why bother having lots of empty columns? Such columns can both confuse users and take up extra space on the data pages.
If the tracking columns are particularly wide (or numerous), then the extra space can affect the performance of queries that do not use them.
If the tracking columns are frequently updated, then having them in a separate table can help prevent locking from slowing down queries.
If the tracking columns might be changing but the leads columns are stable, then you might want to isolate the columns to a table that is more likely to change.
If the tracking columns have different security requirements, it is easier to specify security on a table basis than on a column basis.
No doubt, there are other reasons as well. When I learned this process it was called "vertical partitioning", because the tables "split" the original data by columns (vertically) rather than by rows (horizontally).
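A sketch of the two-table version, using the names from the question (the column types are assumptions); the shared primary key is what enforces the 1:1 relation:

    CREATE TABLE Leads (
      LeadId     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      CustomerId INT UNSIGNED NOT NULL   -- FK to CustomerTable
    ) ENGINE=InnoDB;

    CREATE TABLE LeadTracking (
      LeadId     INT UNSIGNED PRIMARY KEY,  -- same key as Leads: at most one row per lead
      sourceId   INT UNSIGNED,
      campaignID INT UNSIGNED,
      platformId INT UNSIGNED,
      FOREIGN KEY (LeadId) REFERENCES Leads(LeadId)
    ) ENGINE=InnoDB;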

Spreading/distributing an entity into multiple tables instead of a single one

Why would anyone distribute an entity (for example user) into multiple tables by doing something like:
user(user_id, username)
user_tel(user_id, tel_no)
user_addr(user_id, addr)
user_details(user_id, details)
Is there any speed-up bonus you get from this DB design? It seems highly counter-intuitive, because performing chained joins to retrieve the data sounds immeasurably worse than using a simple select projection.
Of course, if one performs other queries by making use only of the user_id and username, that's a speed-up, but is it worth it? So, where is the real advantage and what could be a compatible working scenario that's fit for such a DB design strategy?
LATER EDIT: in the details of this post, please assume a complete, unique entity whose attributes do not vary in quantity (e.g. a car has only one color, not two; a user has only one username, social security number, matriculation number, home address, email, etc.). That is, we're not dealing with a one-to-many relation, but with a 1-to-1, completely consistent description of an entity. In the example above, this is just the case where a single table has been "split" into as many tables as non-primary-key columns it had.
By splitting the user in this way you have exactly 1 row in user per user, which links to 0-n rows each in user_tel, user_details, user_addr
This in turn means that these can be considered optional, and/or each user may have more than one telephone number linked to them. All in all it's a more adaptable solution than hardcoding it so that users always have up to 1 address, up to 1 telephone number.
The alternative method is to have e.g. user.telephone1, user.telephone2, etc.; however, this methodology goes against 3NF ( http://en.wikipedia.org/wiki/Third_normal_form ) -- essentially you are introducing a lot of columns to store the same piece of information.
edit
Based on the additional edit from the OP, assuming that each user will have precisely 0 or 1 of each tel, address, and details, and NEVER any more, then storing those pieces of information in separate tables is overkill. It would be more sensible to store them within a single user table with columns user_id, username, tel_no, addr, details.
If memory serves this is perfectly fine within 3NF though. You stated this is not about normal form, however if each piece of data is considered directly related to that specific user then it is fine to have it within the table.
If you later expanded the table to have telephone1, telephone2 (for example), then that would violate 1NF. If you have duplicate fields (e.g. multiple users sharing an address, which is entirely plausible), then that violates 2NF, which in turn violates 3NF.
This point about violating 2NF may well be why someone has done this.
The author of this design perhaps thought that storing NULLs could be achieved more efficiently in the "sparse" structure like this, than it would "in-line" in the single table. The idea was probably to store rows such as (1 , "john", NULL, NULL, NULL) just as (1 , "john") in the user table and no rows at all in other tables. For this to work, NULLs must greatly outnumber non-NULLs (and must be "mixed" in just the right way), otherwise this design quickly becomes more expensive.
Also, this could be somewhat beneficial if you'll constantly SELECT single columns. By splitting columns into separate tables, you are making them "narrower" from the storage perspective and lower the I/O in this specific case (but not in general).
The problems of this design, in my opinion, far outweigh these benefits.
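To make the trade-off concrete: reassembling the whole entity from the split design takes a chain of outer joins (a sketch using the table names from the question), whereas the single-table design is one plain read:

    SELECT u.user_id, u.username, t.tel_no, a.addr, d.details
    FROM user AS u
    LEFT JOIN user_tel     AS t ON t.user_id = u.user_id
    LEFT JOIN user_addr    AS a ON a.user_id = u.user_id
    LEFT JOIN user_details AS d ON d.user_id = u.user_id;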

Does it cause problems to have a table associated with multiple content types?

I have multiple content types, but they all share some similarities. I'm wondering when it is a problem to use the same table for a different content type? Is it ever a problem? If so, why?
Here's an example: I have five kinds of content, and they all have a title. So, can't I just use a 'title' table for all five content types?
Extending that example: a title is technically a name. People and places have names. Would it be bad to put all of my content titles, people names, and place names in a "name" table? Why separate into place_name, person_name, content_title?
I have different kinds of content. In the database, they seem very similar, but the application uses the content in different ways, producing different outputs. Do I need a new table for each content type because it has a different result with different kinds of dependencies, or should I just allow null values?
I wouldn't do that.
If there are multiple columns that are the same among multiple tables, you should indeed normalize these into one table.
An example of that would be several types of users which all require different columns, but all share some characteristics (e.g. name, address, phone number, email address).
These could be normalized into one table, which is then referenced by all the other tables through a foreign key. (See http://en.wikipedia.org/wiki/Database_normalization )
Your example only shows one common column, which is not worth normalizing. It would even reduce performance when fetching your data, because you would need to join two tables to get all the data, one of which (the one with the titles) contains a lot of data you don't need, putting more strain on the server.
While normalization is a very good practice to avoid redundancy and ensure consistency, it can sometimes be bad for performance. For example, for a person table with columns like name, address, dob, it is not very good performance-wise to keep a picture in the same table. A picture can easily be about 1MB, while the remaining columns may not take any more than 1KB. Imagine how many blocks of data need to be read, even if you only want to list the name and address of people living in a certain city, if you are keeping everything in the same table.
If there is a variation in the size of the contents, and you might have to retrieve only certain types of contents in the same query, the performance gain from storing them in separate tables will easily outweigh the normalization.
To typify data in this way, it's best to use a table (e.g. name) and a sub-table (e.g. name_type), and then use an FK constraint. Use an FK constraint because InnoDB does not support column (CHECK) constraints, and the MyISAM engine is not suited for this (it is much less robust and feature-rich, and should really only be used for performance).
This kind of normalization is fine, but it should be done with a free-format column type, like VARCHAR(40), rather than with ENUM. Use triggers to restrict the input so that it matches the types you want to support.
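A sketch of that table/sub-table structure with the FK constraint (the names come from the example above; the types are assumptions). The lookup table gives the same restriction as an ENUM while staying easy to extend:

    CREATE TABLE name_type (
      type VARCHAR(40) PRIMARY KEY
    ) ENGINE=InnoDB;

    CREATE TABLE name (
      name_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
      name    VARCHAR(100) NOT NULL,
      type    VARCHAR(40)  NOT NULL,
      FOREIGN KEY (type) REFERENCES name_type(type)
    ) ENGINE=InnoDB;

    -- Only the types inserted here are accepted in name.type:
    INSERT INTO name_type VALUES ('content_title'), ('person_name'), ('place_name');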

Is there a performance decrease if there are too many columns in a table?

Is there a performance cost to having large numbers of columns in a table, aside from the increase in the total amount of data? If so, would splitting the table into a few smaller ones help the situation?
I don't agree with all these posts saying 30 columns smells like bad code. If you've never worked on a system that had an entity with 30+ legitimate attributes, then you probably don't have much experience.
The answer provided by HLGEM is actually the best of the bunch. In particular, "is there a natural split... frequently used vs. not frequently used" is a very good question to ask yourself, and you may be able to break up the table in a natural way (if things get out of hand).
My comment would be: if your performance is currently acceptable, don't look to reinvent a solution unless you need it.
I'm going to weigh in on this even though you've already selected an answer. Yes, tables that are too wide can cause performance problems (and data problems as well) and should be separated out into tables with one-to-one relationships. This is due to how the database stores the data (at least in SQL Server; I'm not sure about MySQL, but it is worth reading the documentation on how your database stores and accesses data).
Thirty columns might be too wide and might not; it depends on how wide the columns are. If you add up the total number of bytes that your 30 columns will take up, is it wider than the maximum number of bytes that can be stored in a record?
Are some of the columns ones you will need less often than others? In other words, is there a natural split between required, frequently used info and other stuff that appears in only one place? If so, consider splitting up the table.
If some of your columns are things like phone1, phone2, phone3, then it doesn't matter how many columns you have: you need a related table with a one-to-many relationship instead.
In general, though, 30 columns is not unusually big and will probably be OK.
If you really need all those columns (that is, it's not just a sign that you have a poorly designed table) then by all means keep them.
It's not a performance problem, as long as you
use appropriate indexes on columns you need to use to select rows
don't retrieve columns you don't need in SELECT operations
If you have 30, or even 200, columns, it's no problem for the database. You're just making it work a little harder if you want to retrieve all those columns at once.
But having a lot of columns is a bad code smell; I can't think of a legitimate reason a well-designed table would have this many columns, and you may instead need a one-to-many relationship with some other, much simpler, table.
Technically speaking, 30 columns is absolutely fine. However, tables with many columns are often a sign that your database isn't properly normalized, that is, it can contain redundant and / or inconsistent data.
30 doesn't seem like too many to me. In addition to necessary indexes and proper SELECT queries, two basic tips apply well to wide tables:
Define your columns as small as possible.
Avoid dynamic columns such as VARCHAR or TEXT as much as possible when you have a large number of columns per table. Try using fixed-length columns such as CHAR instead. This trades disk storage for performance.
For instance, the columns 'name', 'gender', 'age', 'bio' in a 'person' table with 100 or more columns would, to maximize performance, best be defined as:
name - CHAR(70)
gender - TINYINT(1)
age - TINYINT(2)
bio - TEXT
The idea is to define columns as small as possible, and with a fixed length where reasonably possible. Dynamic columns should go at the end of the table structure, so that ALL of the fixed-length columns come before them.
It goes without saying that this introduces tremendous wasted disk storage with a large number of rows, but since you want performance, I guess that is the cost.
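A sketch of the layout being suggested (only the four example columns are shown; in a real table, all the fixed-length columns would come first):

    CREATE TABLE person (
      name   CHAR(70),    -- fixed length
      gender TINYINT(1),  -- fixed length
      age    TINYINT(2),  -- fixed length
      -- ... the other fixed-length columns ...
      bio    TEXT         -- dynamic columns go last
    ) ENGINE=InnoDB;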
Another tip: as you go along, you will find that some columns are much more frequently used (selected or updated) than others. Separate them into another table that forms a one-to-one relationship with the table containing the infrequently used columns, and perform your queries with fewer columns involved.
Should be fine, unless you have SELECT * FROM yourHugeTable all over the place. Always select only the columns you need.
30 columns would not normally be considered an excessive number.
Three thousand columns, on the other hand...
How would you implement a very wide "table"?
Beyond performance, database normalization is a necessity for databases with many tables and relations. Normalization gives you easy access to your models and flexible relations for executing different SQL queries.
There are as many as eight normal forms, but for many systems, applying the first, second and third normal forms is enough.
So, instead of selecting related columns and writing long SQL queries, well-normalized database tables are the better option.
Usage-wise, it's appropriate in some situations: for example, where tables serve more than one application that share some columns but not others, or where reporting requires a real-time single data pool for all, with no data transitions. If a 200-column table enables that analytic power and flexibility, then I'd say "go long." Of course, in most situations normalization offers efficiency and is best practice, but do what works for your need.