Need database design advice - query vs. additional column - MySQL

I have the following tables:
Customer(customer_id) - 1000 rows (1000 customers)
Invoice(invoice_id, customer_id) - 1000000 rows (1000 invoices per customer)
Charge(charge_id, invoice_id, charge_amount) - 20000000 rows (20 charges per invoice)
Now, I am trying to produce a customer's invoice with its total charge amount.
The resulting table would look something like this:
Customer_name | invoice_id | charge_total
test          | 1          | $1000
test          | 2          | $1200
test          | 3          | $900
...
My question is, what is the best practice for database design for this case?
I am pondering over two options below:
Just run everything through a query?
Add "charge_total" column in Invoice table to save query processing time (20 times faster)
Thanks everybody!

There are two ways to look at this question. The database purist will say that derived or computed data is redundant and violates 3rd Normal Form. This is a concern in transactional systems where data is being edited, since normalization prevents you from falling into the trap of having self-conflicting data.
On the other hand, there is a practical view which says that data which is written once and never updated is not subject to update and delete anomalies anyway, so redundancy costs disk space, but is not otherwise a risk.
As a rule, I always design databases to be normalized first and then introduce redundancy on a limited basis, after careful examination of the competing risks.
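For reference, option 1 in fully normalized form is a single aggregate query. A minimal sketch, assuming a customer_name column on Customer (the question only lists the keys):

SELECT c.customer_name,
       i.invoice_id,
       SUM(ch.charge_amount) AS charge_total
FROM   Customer c
JOIN   Invoice  i  ON i.customer_id = c.customer_id
JOIN   Charge   ch ON ch.invoice_id = i.invoice_id
GROUP BY c.customer_name, i.invoice_id;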

This is hard to answer - do you know that you have a performance problem? I'd not optimize unless I really, really had to.
And even then, I would consider an "invoice archive" table to hold the computed values. Logically, there's nothing wrong with calculating summaries and storing them in a table to reflect the amount that was actually invoiced - including tax, shipping etc. This means you can store an archive version of the invoice data without having to worry about the underlying charges changing later.
I'd not want to store it in the main "invoice" table unless invoices are immutable - you create one, and nothing ever changes from the moment it's created. That doesn't work if you have a business process in which invoices are created in advance and items are added to them over time.
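A minimal sketch of what such an archive table could look like (table and column names here are illustrative, not taken from the question):

-- Populated once, when an invoice is finalized.
CREATE TABLE invoice_archive (
    invoice_id   INT PRIMARY KEY,
    customer_id  INT NOT NULL,
    charge_total DECIMAL(12,2) NOT NULL,
    archived_at  DATETIME NOT NULL
);

INSERT INTO invoice_archive (invoice_id, customer_id, charge_total, archived_at)
SELECT i.invoice_id, i.customer_id, SUM(c.charge_amount), NOW()
FROM   Invoice i
JOIN   Charge  c ON c.invoice_id = i.invoice_id
WHERE  i.invoice_id = 12345   -- the invoice being finalized
GROUP BY i.invoice_id, i.customer_id;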

This decision comes down to the tradeoff of speed for your users vs additional complexity in your database that makes your code more susceptible to errors. It reminds me of this discussion:
https://stackoverflow.com/questions/211414/is-premature-optimization-really-the-root-of-all-evil
In your case, since you've already done the performance testing, I feel like denormalizing your database like you suggest is a good thing.

One thing to keep in mind is how often the data behind "charge_total" changes. For example, if an item is returned, does that charge get taken off the invoice at a later date? If things do change often, you'll have to account for the overhead of having those change events update the "charge_total" field.
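If you do add the column, the usual way to make those change events automatic is a pair of triggers on the Charge table. A minimal sketch, assuming charge_total exists on Invoice and defaults to 0 (an AFTER UPDATE trigger would also be needed if charge amounts can be edited in place):

DELIMITER //
CREATE TRIGGER charge_after_insert
AFTER INSERT ON Charge
FOR EACH ROW
BEGIN
    UPDATE Invoice
    SET    charge_total = charge_total + NEW.charge_amount
    WHERE  invoice_id = NEW.invoice_id;
END//

CREATE TRIGGER charge_after_delete
AFTER DELETE ON Charge
FOR EACH ROW
BEGIN
    UPDATE Invoice
    SET    charge_total = charge_total - OLD.charge_amount
    WHERE  invoice_id = OLD.invoice_id;
END//
DELIMITER ;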

First you should check whether the performance without an additional column is sufficient in your case. If it is not, then, and not before (!), you should check whether your "20 times faster" guess is really correct. Try adding a view to your database for charge_total and test how your DB system handles that view. I don't know MySQL well enough, but some modern DB systems are able to cache view data internally as long as the source data does not change.
Once you have done that, and you are sure the additional charge_total column solves a problem you really have, you should make sure the redundant data is kept consistent. You can do this on the DB side (using triggers), or on the client side - provided the one and only process that changes the Charge table is under your control.
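A sketch of the view approach (MySQL has no materialized views, so on its own this simplifies the queries rather than caching anything):

CREATE VIEW invoice_totals AS
SELECT i.invoice_id,
       i.customer_id,
       SUM(c.charge_amount) AS charge_total
FROM   Invoice i
JOIN   Charge  c ON c.invoice_id = i.invoice_id
GROUP BY i.invoice_id, i.customer_id;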

Making charge_total a calculated column in the invoice table would probably be the easiest way I can think of. It would save you from doing that calculation each time you run the query to get the values, which I'm assuming happens more frequently than adding a charge.

Nowadays disk space is cheap, so you do not have to worry about size. If the extra column improves performance, just go with it.

Related

Should I normalise these fields?

I am torn. I am dealing with data that is very awkward to model; a "job" currently has over 100 columns.
I put all of the columns into the job table because every time I get a job's info, I will 99.99% of the time need all of the data. Splitting it would probably get me better grades if I were a student, but it would simply turn into joins every time I load the data.
One example I find it hard to decide is cargoes. A ship can have one (80% of the time), 2 (99% of the time) or 3 (1% of the time) cargoes. Never 4. Storing cargoes in a 1:n relationship with the job is very easy, but it also means that:
Every time I load a job, I need an extra query to get the cargoes
CRUD is a little more painful, as I have to make another store, with permissions, etc.
However, now I have these columns in my DB:
cargoId1, cargoDescription1, contractTonnage1,
contractTonnageTolerance1, commentsOnTonnageTolerance1,
tonnageToBeLoaded1, tonnageLoaded1
cargoId2, cargoDescription2, contractTonnage2,
contractTonnageTolerance2, commentsOnTonnageTolerance2,
tonnageToBeLoaded2, tonnageLoaded2
cargoId3, cargoDescription3, contractTonnage3,
contractTonnageTolerance3, commentsOnTonnageTolerance3,
tonnageToBeLoaded3, tonnageLoaded3
What would you do? Ideas?
I'll have to warn you that you will probably get downvotes, close votes and/or delete votes for a "primarily opinion-based" question. I think your question IS primarily opinion-based, as it is essentially synonymous with "pros and cons of normalization". (ps: I hate the fact that this should get you downvotes though).
One thing you could do if you would like the best of both worlds is to keep the table normalized and create a view that returns the de-normalized form with PIVOT. This way you get the integrity benefits of normalization, and writing queries stays easy. The joins will cost a little (only slightly, with a good index), but IMO that's a small price for integrity.
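A minimal sketch of that idea, using conditional aggregation in place of PIVOT (MySQL has no PIVOT operator). Only two of the per-cargo columns are shown; the rest follow the same pattern:

CREATE TABLE job_cargo (
    cargoId          INT PRIMARY KEY,
    job_id           INT NOT NULL,
    slot             TINYINT NOT NULL,       -- 1, 2 or 3
    cargoDescription VARCHAR(255),
    contractTonnage  DECIMAL(12,2),
    -- ... the remaining cargo columns from the question ...
    UNIQUE (job_id, slot)
);

CREATE VIEW job_cargo_wide AS
SELECT job_id,
       MAX(CASE WHEN slot = 1 THEN cargoId          END) AS cargoId1,
       MAX(CASE WHEN slot = 1 THEN cargoDescription END) AS cargoDescription1,
       MAX(CASE WHEN slot = 2 THEN cargoId          END) AS cargoId2,
       MAX(CASE WHEN slot = 2 THEN cargoDescription END) AS cargoDescription2,
       MAX(CASE WHEN slot = 3 THEN cargoId          END) AS cargoId3,
       MAX(CASE WHEN slot = 3 THEN cargoDescription END) AS cargoDescription3
FROM   job_cargo
GROUP BY job_id;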

MYSQL DB Normalization & Query Indexes

We currently have a table that contains 90 columns, and as the table grows and the business needs change, we're having to alter the table a lot (adding/removing columns and indexes).
Table name: quotes

| Column             | Type     | Null | Default |
|--------------------|----------|------|---------|
| id                 | int(11)  | No   |         |
| ...                |          |      |         |
| completed_at       | datetime | Yes  | NULL    |
| reviewed_at        | datetime | Yes  | NULL    |
| marked_dud_at      | datetime | Yes  | NULL    |
| closed_at          | datetime | Yes  | NULL    |
| subscribed_at      | datetime | Yes  | NULL    |
| admin_checked_at   | datetime | Yes  | NULL    |
| priced_at          | datetime | Yes  | NULL    |
| number_verified_at | datetime | Yes  | NULL    |
| created_at         | datetime | Yes  | NULL    |
| deleted_at         | datetime | Yes  | NULL    |
For the application, our staff are constantly querying all sorts of variations on the above data, an example being quotes that have been completed (completed_at), checked (admin_checked_at), reviewed (reviewed_at) and not deleted (deleted_at).
We're thinking it may be easier to offload some of these columns into rows of their own table, which we'll call quotes_actions, and then do some joining when querying.
Table name: quotes_actions

| Column     | Type         | Null | Default |
|------------|--------------|------|---------|
| id         | int(11)      | No   |         |
| quote_id   | int(11)      | No   |         |
| action     | varchar(100) | No   |         |
| user_id    | int(11)      | No   |         |
| time       | datetime     | Yes  | NULL    |
| created_at | datetime     | Yes  | NULL    |
An example would be a row with action = 'completed', with an index covering quote_id and action.
We've split the data into this format over 150,000 rows, and it's neither faster nor slower than querying the original table with the correct indexes.
Has anyone got any experience with this, and any recommendations or pitfalls for each approach? It takes a lot of time to add covering indexes and columns to the original table as we need them, whereas the second approach has the indexes set up ready to go but introduces a lot more joins and more complicated queries.
0.09s
select * from `quotes`
where `completed_at` is not null
and `approved_at` is not null
and deleted_at is null
=>
0.0005s
select * from `quotes_new`
inner join quotes_actions as q1 on q1.action = 'completed' and q1.quote_id = quotes_new.id
inner join quotes_actions as q2 on q2.action = 'approved' and q2.quote_id = quotes_new.id
where quotes_new.deleted_at is null
In addition, if the 2nd approach is better, how do you query for negative results, where a quote hasn't been approved?
Database design will vary from application to application, and things that are great for one implementation will be terrible for another. You've identified a few things that are important to you:
speed of data access (at least no reduction in current performance)
ability to respond to application needs/changes
limiting complexity of queries
Without being able to see the entirety of your database and how you are using it, these are the principles I would follow:
Use Stored Procedures and Views for as much as possible
This is just good design. You create an adapter layer between your application and the data tables, which allows you to make whatever changes you need to in the database (and the views/stored procs) without having to change the application itself. Decoupling your systems makes maintenance significantly easier. Also this is good for security, as if the only way outsiders can access the data is through your stored procs, you've eliminated a few avenues of attack. (There's also debate about whether or not the DBMS will cache execution plans for stored procedures, making them execute faster than similar queries, but I'm not a DBA or DBDev, so I'm not touching that).
Attempt to limit width of tables
One thing I've seen time and time again is that every time a need arises in a production system, a column gets added to a table and they call it a day. Far easier than rewriting a bunch of queries or reviewing table structures. This is terrible design. If you've already limited the changes needed in the application layer by following my first piece of advice, you've limited the work needed to resolve table changes in the right way. You should always evaluate whether data belongs to the row in question, or whether it should be offloaded into its own table. You shouldn't be afraid to radically alter your database, as sometimes it is necessary.
Looking at the data you've provided, I think your second option is okay. You've identified many columns that actually represent the same thing (the "status changes" or as you put it "quote actions" that occur) and offloaded that from the main table to a secondary table. This is perfectly fine, and likely will be effective. You can further "cheat" to make this table faster by offloading status onto its own table, and using an integer to represent it instead of a string (since the string doesn't matter to the database, and integers are far faster to index and search).
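A sketch of that lookup-table "cheat" (the actions table and the action_id column are illustrative additions to the question's schema):

CREATE TABLE actions (
    id   TINYINT UNSIGNED NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL UNIQUE      -- 'completed', 'approved', ...
);

CREATE TABLE quotes_actions (
    id         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    quote_id   INT NOT NULL,
    action_id  TINYINT UNSIGNED NOT NULL,
    user_id    INT NOT NULL,
    `time`     DATETIME NULL,
    created_at DATETIME NULL,
    KEY idx_quote_action (quote_id, action_id),
    FOREIGN KEY (action_id) REFERENCES actions (id)
);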
This is not to say a wide table is a bad thing, sometimes tables just need to be wide. You just need to evaluate whether the data really belongs to the entity the data row represents.
Approach queries in new ways
You will want to play with the execution plan tools of your DBMS and understand how each query really works. Changing the order of joins can drastically alter the query return speed, and you shouldn't be afraid to use table variables and temp tables in your queries. They are all tools at your disposal.
Querying for Negative Results
Since you asked this question specifically, I'll address it. This requires thinking about your query in a slightly different way (incidentally, if you haven't, you should look into taking a course or working through a textbook on relational algebra; it makes understanding databases so much easier).
Your original query made finding something where the quote was not approved easy. It was all in the table: approved_at is null. Simple, easy peasy, no problems. Now, however, instead of being in a column on the main table, it is in its own table, that also represents all the other actions that could be taken. You need to break the problem down a little.
You want to find the set of all quotes for which there is no action signifying approval. In SQL that looks like:
select distinct quote_id from quotes_actions where quote_id not in
(select quote_id from quotes_actions where action = 'approved');
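One caveat with the query above: as written it only considers quotes that have at least one action row. A variant that starts from the quotes table (assuming it is named quotes_new, as in your example) also returns quotes with no actions at all, and tends to use the (quote_id, action) index well:

select q.id
from   quotes_new q
where  not exists (
    select 1
    from   quotes_actions a
    where  a.quote_id = q.id
    and    a.action   = 'approved'
);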
Final Thoughts
You need to sit down with your team and talk about how you want to move forward with this product. Spend a few days or a couple of weeks really thinking deeply about it. Brainstorm, run a hackathon, do something to find a solution you like that makes your product better and more maintainable. We've all been in the situation where we have an unmaintainable product that could have been fixed at some point, but is now beyond that point. Try not to get there, and fix it while you have the opportunity.

MySQL database activity log: fields vs table

So basically I am in the process of creating a personal finance tracking system. It occurred to me that keeping tabs on when each instance and transaction was last edited or updated might be relevant information some day.
Now as far as I can see there are two approaches to implement something like this:
Create "updated" fields to all the tables I want to keep track of and then let mysql update those fields for me (ON UPDATE clause)
Create a completely seperate table for holding the log data and then update that with a triggers and transactions
Now it seems that the 1st approach would have the benefit of keeping things simple and easy to maintain. However, how would it impact performance if I suddenly decide to pull every log entry in the database for review? It also kind of goes against normalization (not by much, though), with the same data stored in multiple tables.
The second approach would allow more flexibility in the logging system and might actually shorten the SQL queries needed to retrieve certain data. However, it would make the schema more complex, as two additional tables would have to be created and maintained (the actual log table and a many-to-many relation table for holding the keys). On the other hand, if I ever want to implement an activity history, this approach would probably be the only one capable of doing it.
As such I would like to know some more pros and cons of each method. Since the 2nd option allows more flexibility I am considering implementing it, but I am not sure about performance issues. In the end it comes down to two questions:
Are there any real-life examples where both approaches are implemented?
And:
Are there any studies, comparisons or other resources that might shed some light on which is considered the more performance-friendly and "best practices" approach?
It depends on what kind of reporting you need and your current architecture.
If you just want to know the last update date, then having 2 fields (creation date and last update) should be enough. That's because having a separate table won't give any performance boost, but will make your code harder to maintain.
It's another story if you want something more elaborate, like reporting differences (what was changed) and/or a full change log for each transaction (there might be a few updates to one transaction, right?). In this case you really must have a separate table, because otherwise it will bloat your main tables and reduce performance.
Based on my experience, I'd go with the separate table. It will be easier to maintain - your logging logic will be practically separated from everything else - and I think one day you'll need that additional info on your transactions and a full transaction history.
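A minimal sketch of both options, assuming a transactions table with an integer id column (all names are illustrative):

-- Option 1: let MySQL maintain a last-update timestamp on each tracked table.
ALTER TABLE transactions
    ADD COLUMN updated_at TIMESTAMP NOT NULL
        DEFAULT CURRENT_TIMESTAMP
        ON UPDATE CURRENT_TIMESTAMP;

-- Option 2: a separate log table fed by triggers.
CREATE TABLE activity_log (
    id         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64) NOT NULL,
    row_id     INT NOT NULL,
    changed_at DATETIME NOT NULL
);

DELIMITER //
CREATE TRIGGER transactions_after_update
AFTER UPDATE ON transactions
FOR EACH ROW
BEGIN
    INSERT INTO activity_log (table_name, row_id, changed_at)
    VALUES ('transactions', NEW.id, NOW());
END//
DELIMITER ;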
As far as performance goes, you won't notice any meaningful difference unless your system is under serious load. But as your system is personal, either choice would suffice; just don't forget about proper indexing.
Note that I'm making a lot of assumptions here, so if you want something more specific, please describe your actual architecture and reporting needs. I'd suggest some books on high availability/performance, but they address general concerns rather than your specific needs.

Should I recalculate large amounts of data from tables, or should I save it in my database?

My question is more general than specific, but I am using an example to convey the idea.
I have a forum, and next to each reply I show the number of messages the user has.
Assuming that on some pages there are 15 different users, each with over 20,000 messages, should I recalculate the number of messages by counting how many entries the user has in the messages table, or would it be better to create a column in the users table that contains this count and update it every time a reply is made?
I know it defies database normalization rules, but it seems like a big waste to calculate it every time.
I'm using mySQL, if it matters.
Generally no, but in some specific cases, yes.
You should avoid having redundant data in a database. However, sometimes you have to make that tradeoff to get a decent performance.
I have actually done exactly the thing in your example. It works great for the performance, but it's really hard to keep the message count correct. You will get some inconsistent values sooner or later, so you need a plan for how to go through the values periodically and recalculate them.
You are talking about denormalization. Quoting Wikipedia:
denormalization is the process of attempting to optimise the read performance of a database by adding redundant data or by grouping data.
Keeping denormalized data consistent in 'plain' code is not an easy task. Remember that:
You can keep redundant data in sync with triggers.
If your architecture includes an ORM, it is easier to keep redundant data in sync.
You could also go half way with your denormalisation: have a table with monthly data per user, filled by a monthly job, and calculate the number of messages on the fly by counting the messages since the 1st of the month + the sum of the monthly data. Or, if you don't need the monthly breakdown, you can still calculate on the fly over the current month + have a monthly process that updates the end-of-month figures. That will avoid triggers...
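A sketch of that half-way approach (all table and column names are illustrative; a messages table with user_id and created_at is assumed):

CREATE TABLE user_monthly_message_counts (
    user_id     INT  NOT NULL,
    month_start DATE NOT NULL,            -- first day of the month
    msg_count   INT  NOT NULL,
    PRIMARY KEY (user_id, month_start)
);

-- Current total for one user = closed months + messages posted this month.
SELECT
    (SELECT COALESCE(SUM(msg_count), 0)
     FROM   user_monthly_message_counts
     WHERE  user_id = 42)
  + (SELECT COUNT(*)
     FROM   messages
     WHERE  user_id = 42
     AND    created_at >= DATE_FORMAT(CURDATE(), '%Y-%m-01'))
  AS total_messages;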
I'm surprised nobody has mentioned materialized views. These objects are very helpful when it comes to maintaining aggregates of data for performance reasons without violating the normalisation of our actual data. Find out more.
Have you tried to benchmark the results of counting the number of rows?
I'd recommend you just do your calculation in a view. With the denormalization you're proposing, you're just exposing yourself to the risk of data corruption. The post count column will otherwise end up with some arbitrary value that's got nothing to do with the real number of posts.

Best database design for storing a high number of columns?

Situation: We are working on a project that reads datafeeds into the database at our company. These datafeeds can contain a high number of fields. We match those fields with certain columns.
At this moment we have about 120 types of fields. They all need a column. We need to be able to filter and sort on all columns.
The problem is that I'm unsure what database design would be best for this. I'm using MySQL for the job, but I'm open to suggestions. At this moment I'm planning to make a table with all 120 columns, since that is the most natural way to do things.
Options: My other options are a meta table that stores keys and values, or using a document-based database so I have a variable schema and can scale it when needed.
Question:
What is the best way to store all this data? The row count could go up to 100k rows, and I need storage that can select, sort and filter really fast.
Update:
Some more information about usage. XML feeds will be generated live from this table. We are talking about 100-500 requests per hour, but this will be growing. The fields will not change regularly, but it could happen once every 6 months. We will also be updating the datafeeds daily: checking whether items are updated, deleting old ones and adding new ones.
120 columns at 100k rows is not enough information; that only really gives one of the metrics: size. The other is transactions. How many transactions per second are you talking about here?
Is it a nightly update with a manager running a report once a week, or a million page-requests an hour?
I don't generally need to start looking at 'clever' solutions until hitting a 10m record table, or hundreds of queries per second.
Oh, and do not use a key-value pair table. They are not great in a relational database, so stick to properly typed fields.
I personally would recommend sticking to a conventional one-column-per-field approach and only deviate from this if testing shows it really isn't right.
With regards to retrieval, if the INSERTS/UPDATES are only happening daily, then I think some careful indexing on the server side, and good caching wherever the XML is generated, should reduce the server hit a good amount.
For example, since you say 'we will be updating the datafeeds daily', there shouldn't be any need to query the database on every request. And anyway, 1000 per hour is only 17 per minute; that probably rounds down to nothing.
I'm working on a similar project right now, downloading dumps from the net and loading them into the database, merging changes into the main table and properly adjusting the dictionary tables.
First, you know the data you'll be working with, so it is necessary to analyze it in advance and pick the best table/column layout. If all 120 of your columns contain textual data, then a single row will take several kilobytes of disk space. In that situation you will want to make all queries highly selective, so that indexes are used to minimize IO. Full scans might take significant time with such a design. You've said nothing about how big your 500/h requests will be: will each request extract a single row, a small bunch of rows or a big portion (up to the whole table)?
Second, looking at the data, you might outline a number of columns that will have a limited set of values. I prefer to do the following transformation for such columns:
setup a dictionary table, making an integer PK for it;
replace the actual value in a master table's column with PK from the dictionary.
The transformation is done by triggers written in C, so although it costs me something on upload, I do get some benefits:
decreased total size of the database and master table;
better options for the database and OS to cache frequently accessed data blocks;
better query performance.
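A sketch of that dictionary transformation for one low-cardinality column (all names here are illustrative):

CREATE TABLE category_dict (
    id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    value VARCHAR(100) NOT NULL UNIQUE
);

ALTER TABLE feed_items
    ADD COLUMN category_id INT NULL,
    ADD CONSTRAINT fk_feed_items_category
        FOREIGN KEY (category_id) REFERENCES category_dict (id);

-- One-off backfill; afterwards the original text column can be dropped.
INSERT INTO category_dict (value)
    SELECT DISTINCT category FROM feed_items WHERE category IS NOT NULL;

UPDATE feed_items f
JOIN   category_dict d ON d.value = f.category
SET    f.category_id = d.id;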
Third, try to split the data according to the extracts you'll be doing. Quite often it turns out that only 30-40% of the fields in a table are used by nearly all queries, while the remaining 60-70% are spread across the queries and each used only occasionally. In this case I would recommend splitting the main table accordingly: extract the fields that are always used into a single "master" table, and create another table for the rest of the fields. In fact, you can have several of these secondary tables, logically grouping the data.
In my practice we had a table that contained detailed customer information: name details, address details, status details, banking details, billing details, financial details and a set of custom comments. All queries on that table were expensive, as it was used in the majority of our reports (and reports typically perform full scans). By splitting this table into a set of smaller ones and building a view with rules on top of them (to keep the external application happy), we managed to gain a pleasant performance boost (sorry, I don't have the numbers any longer).
To summarize: you know the data you'll be working with and you know the queries that will be used to access your database, analyze and design accordingly.