Relational tables design problem - relational-database

How do you retain historical relational data when rows are changed? In this example, users are allowed to edit rows in the Property table at any time, and a Test can have any number of properties. If they edit the 'Name' field in the Property table, or drop a row from the Property table, Test rows might no longer reflect the conditions at the time of the test. Would you change the design of the Test table by adding a property names column and dropping the TestProperty mapping table? The property names column would have to be something like a delimited list of strings. How is this problem usually handled?
3 tables:
Test:
TestId AUTONUMBER,
Name CHAR,
TestDate DATE
Property:
PropertyId AUTONUMBER,
Name CHAR
TestProperty: (maps properties to tests)
TestId
PropertyId

I do not think the question has been answered fully.
If they edit the field 'Name' in the Property table ... Would you change the design of the Test table by adding a property names column, and dropping the TestProperty mapping table?
Definitely not. That would add massive duplication for no purpose.
If your requirement is to maintain the integrity of the data values (in Property) at the time of the Test, the correct (database) method is to implement a History table. That should be an exact copy of the source table, plus one item: a TIMESTAMP or DATETIME column is added to the PK.
Property
PropertyId AUTONUMBER,
Name CHAR
CONSTRAINT PRIMARY KEY CLUSTERED UC_PK (PropertyId)
PropertyHistory
PropertyId INT,
AuditedDtm DATETIME,
Name CHAR
CONSTRAINT PRIMARY KEY CLUSTERED UC_PK (PropertyId, AuditedDtm)
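The answer doesn't spell out how PropertyHistory gets populated. One way, sketched here in MySQL-style triggers purely as an assumption (the original answer prescribes neither a mechanism nor a dialect), is to write a version row whenever Property is inserted or updated, stamped with the moment that version became current:
DELIMITER //
CREATE TRIGGER trg_Property_hist_ins AFTER INSERT ON Property
FOR EACH ROW
BEGIN
    -- record the initial version
    INSERT INTO PropertyHistory (PropertyId, AuditedDtm, Name)
    VALUES (NEW.PropertyId, NOW(), NEW.Name);
END//

CREATE TRIGGER trg_Property_hist_upd AFTER UPDATE ON Property
FOR EACH ROW
BEGIN
    -- record the new version each time the row changes
    INSERT INTO PropertyHistory (PropertyId, AuditedDtm, Name)
    VALUES (NEW.PropertyId, NOW(), NEW.Name);
END//
DELIMITER ;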
For this to be meaningful and usable, the Test table needs a timestamp as well, to identify which version of PropertyHistory to reference:
TestProperty
TestId
PropertyId
TestDtm DATETIME
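Under that scheme, retrieving the property name that was in force when a test ran is a join to the version current at TestDtm. A sketch, assuming each PropertyHistory row is stamped with the time its version became current (as in the trigger sketch above):
SELECT tp.TestId,
       tp.PropertyId,
       ph.Name AS NameAtTestTime
  FROM TestProperty tp
  JOIN PropertyHistory ph
    ON ph.PropertyId = tp.PropertyId
   AND ph.AuditedDtm = (SELECT MAX(ph2.AuditedDtm)
                          FROM PropertyHistory ph2
                         WHERE ph2.PropertyId = tp.PropertyId
                           AND ph2.AuditedDtm <= tp.TestDtm);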
The property names column would have to be something like a delimited list of strings.
That would break basic design rules as well as Database Normalisation rules, and prevent you from performing ordinary Relational operations on it. Never store more than one data value in a single column.
... or drop a row in the Property table
Deletion is something different again. If it is a "database" then it has Integrity. Therefore you cannot delete a parent row if it has child rows in some other table (and you can delete it if it does not have children). This is usually implemented as a "soft delete": an Indicator such as IsObsolete is added. This is referenced in the various SELECTs to exclude the row from being used (to add new children), but the row remains available as the parent for existing children.
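A minimal sketch of that soft delete (the column name IsObsolete follows the answer; the TINYINT(1) type, MySQL syntax and the example id are assumptions):
ALTER TABLE Property
    ADD COLUMN IsObsolete TINYINT(1) NOT NULL DEFAULT 0;

-- "delete" a property without orphaning its existing TestProperty children
UPDATE Property SET IsObsolete = 1 WHERE PropertyId = 7;

-- pick lists for new tests exclude obsolete properties
SELECT PropertyId, Name
  FROM Property
 WHERE IsObsolete = 0;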

If you want to retain property relations even if the property no longer exists, make it so that Properties aren't necessarily deleted, but add a flag that denotes whether the property is currently active. If a property's name is changed, create a new property with the new name and set the old property to inactive.
If you do this, you'll have to create some way of garbage collecting the inactive properties.
I'd never turn a single column into a field that imitates a one-to-many relationship with a comma-delimited list. Otherwise, you defeat the purpose of a relational database.

It seems like you're using Test as both a template for a particular instance of a test and as the test itself. Maybe every time a user performs a test according to the specification in Test, create a row in, say, TestRun? This would preserve the particular Property rows, and if the entries in Property change later, then subsequent TestRuns would reflect the new changes.
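One way to read that suggestion, sketched here with assumed table names (TestRun and TestRunProperty are not part of the original schema), is to snapshot the property values into each run so later edits to Property can't rewrite past runs:
CREATE TABLE TestRun (
    TestRunId INT AUTO_INCREMENT PRIMARY KEY,
    TestId INT NOT NULL,              -- the Test "template" this run followed
    RunDate DATETIME NOT NULL
);

CREATE TABLE TestRunProperty (
    TestRunId INT NOT NULL,
    PropertyId INT NOT NULL,          -- which property applied to this run
    Name CHAR(50) NOT NULL,           -- its name copied from Property at run time
    PRIMARY KEY (TestRunId, PropertyId)
);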

Related

How to track changes to a boolean column in a MySQL database?

My application serves customers which are online stores. One of the tables in my DB is "Product" and it has a column "In_Stock". This is a boolean (bit(1)) column. My customers send data feeds of their product catalog and each customer has their own version of this table. I would like to track changes to this In Stock column, something to the effect of...
11/13/2016 true
12/26/2016 false
01/07/2017 true
Just so that when I do some auditing, I can see for a given time period what was the state of a given product.
How best can I do this?
It seems overkill to create a separate history table and have it updated by a trigger just for one boolean column. Would a history column suffice? I can save the data there in some kind of JSON string.
Sorry, any workable solution will require a second table.
One such solution is Version Normal Form (vnf), which is a special case of 2nf. Consider your table containing the boolean field (assuming it is properly normalized to at least 3nf). Now you want to track the changes made to the boolean field. One way is to turn the rows into versions by adding an EffectiveDate column; then, instead of updating the row, you write a new row with the current date in the date field (or update in place if the boolean field is unchanged).
This allows the tracking of the field, there being a new version for every time the field is changed. But there are severe disadvantages, not least of which is the fact that a row is no longer an entity, but a version of an entity. This makes it impossible to use a foreign key to this table, as foreign keys want to refer to an entity.
But look carefully at the design. Before the change, you had a good, normalized table with no tracking of changes. After adding the EffectiveDate column, there has been a subtle change. All the fields except the boolean field are, as before, dependent only on the PK. The boolean field is dependent not only on the PK but on the new date field as well. It is no longer in 2nf.
Normalizing the table requires moving the boolean field and the date field to a new table:
create table NewTable(
    EntityID int not null references OriginalTable( ID ),  -- the entity being versioned
    EffDate date not null,                                  -- when this version took effect
    TrackedCol boolean,                                     -- the tracked (versioned) value
    constraint PK_NewTable primary key( EntityID, EffDate )
);
The first version is inserted when a new row is inserted into the original table. From then on, another version is added only when an update to the original table changes the value of the boolean field.
Here is a previous answer that includes the query to get the current and any past values of the versioned data. I've discussed this design many times here.
Also, there is a way to structure the design so the application code doesn't need to be changed. That is, the redesign will be completely transparent to existing code. The answer linked above contains another link to more documentation to show how that is done.
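As a sketch of what such an "as of" query can look like with the NewTable/OriginalTable names above (the cutoff date is only illustrative), the value in effect on a given date is the latest version on or before that date:
SELECT o.ID,
       (SELECT n.TrackedCol
          FROM NewTable n
         WHERE n.EntityID = o.ID
           AND n.EffDate <= '2016-12-26'   -- the "as of" date
         ORDER BY n.EffDate DESC
         LIMIT 1) AS ValueAsOf
  FROM OriginalTable o;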
I would do the trigger thing. But don't replicate the whole table: take the row's unique id, and log a timestamp and the boolean value.
Sometimes having good logs is priceless :)
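A sketch of that trigger approach, assuming the Product table has a ProductId primary key (the log table and trigger names are placeholders):
CREATE TABLE ProductStockLog (
    ProductId INT NOT NULL,
    ChangedAt DATETIME NOT NULL,
    In_Stock BIT(1) NOT NULL,
    PRIMARY KEY (ProductId, ChangedAt)
);

DELIMITER //
CREATE TRIGGER trg_product_stock_audit
AFTER UPDATE ON Product
FOR EACH ROW
BEGIN
    -- only log when the tracked column actually changes
    IF NEW.In_Stock <> OLD.In_Stock THEN
        INSERT INTO ProductStockLog (ProductId, ChangedAt, In_Stock)
        VALUES (NEW.ProductId, NOW(), NEW.In_Stock);
    END IF;
END//
DELIMITER ;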
I've written an audit trail module for this purpose; it basically duplicates the table, adds some information to each row, and keeps the original data table untouched except for triggers.

MySQL - Storing Default Values for System

I have a few tables storing their corresponding records for my system. For example, there could be a table called templates and another called logos. But for each table, one of the rows will be the default in the system. I would normally have added an is_default column to each table, but then all of the rows except one would have been 0.
Another colleague of mine sees another route, in which there is a system_defaults table. And that table has a column for each table. For example, this table would have a template_id column and a logo_id column. Then that column stores the corresponding default.
Is one way more correct than the other, generally? With the first way, there are many rows with the same value in that column, except for one. With the second, I suppose I just have to do a join to get the details, and the table grows sideways whenever I add a new table that has a default.
The solutions mainly differ in the ways to make sure that no more than one default value is assigned for each table.
is_default solution: Here it may happen that more than one record of a table has the value 1. It depends on the SQL dialect of your database whether this can be excluded by a constraint. As far as I understand MySQL, this kind of constraint can't be expressed there.
Separate table solution: Here you can easily make sure by your table design that at most one default is present per table. By assigning not null constraints, you can also force defaults for specific tables, or not. When you introduce a new table, you are extending your database (and the software working on it) anyway, so the additional attribute on the default table won't hurt.
A middle course might be the following: Have a table
Defaults
id
table_name
row_id
with one record per table, identified by the table name. Technically, the problem of more than one default per table may also occur here. But if you only insert records into this table when a new table gets introduced, then your operative software will only need to perform updates on this table, never inserts. You can easily check this via code inspection.
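A sketch of that middle course (the column types and the join below are assumptions, not a prescription):
CREATE TABLE Defaults (
    id INT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64) NOT NULL UNIQUE,  -- one row per table
    row_id INT NOT NULL                      -- PK of that table's default row
);

-- operative code only ever updates, never inserts
UPDATE Defaults SET row_id = 42 WHERE table_name = 'templates';

-- fetching the default template
SELECT t.*
  FROM templates t
  JOIN Defaults d
    ON d.table_name = 'templates'
   AND d.row_id = t.id;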

MySQL DB: Nullable bool in database column, enum, or two boolean columns, which one is more efficient?

I am doing EF 6 code first with MVC 5. One of my classes has a property that can mean three things:
Confirmed by user
Has not answered yet
Declined by user
My question is, what should I use?
A nullable bool, obviously mapped to the choices above
An enum (the column would store an integer as a foreign key to another table listing the states)
Or two bool columns (HasAnswered, IsConfirmed) where IsConfirmed only gets accessed if the user has answered
I am very thankful for every opinion you might have.
Disclaimer.. I thought you were suggesting an enum as a column datatype
None of the above.
An enum.. what if you want to add more data to each status or rename one?
Nullable bool .. as above, plus what if you need to add another status?
Two bool columns.. same as above, plus you could introduce normalisation issues.
I'd go with a TINYINT column called status (or something similar).
Mainly for flexibility: if you need the status titles in the DB or need to add any other data to each option, you can place it in another table and foreign key it in. If you don't, you just need to translate the numbers somewhere.
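A sketch of that arrangement (all names here are illustrative, assuming the three states from the question):
CREATE TABLE answer_status (
    id TINYINT UNSIGNED PRIMARY KEY,
    title VARCHAR(32) NOT NULL      -- extra columns (description, cost, ...) can be added later
);

INSERT INTO answer_status (id, title) VALUES
    (0, 'Not answered'),
    (1, 'Confirmed'),
    (2, 'Declined');

CREATE TABLE invitation (
    id INT AUTO_INCREMENT PRIMARY KEY,
    status TINYINT UNSIGNED NOT NULL DEFAULT 0,
    FOREIGN KEY (status) REFERENCES answer_status (id)
);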
Another option is to separate out the actions (the answer) from the state.
Consider an answer table with a column indicating confirmation or denial and a reference to the original table. By joining the tables it is possible to separate out the original rows that have no answers and those that have been confirmed/denied.
UPDATE
In response to Null's comment, an int is so much more powerful..
A status database table:
id, title, description, priority, cost... (attach extra data to the status)
Or in PHP pseudo code (without a table):
$query = new Query('SELECT * FROM table WHERE status = :status');
$query->bind('status', TableClass::STATUS_ANSWERED);
I wasn't suggesting you translate the numbers everywhere.. just somewhere.
The final great thing about an int is it separates meaning from the workings.. what if I want to rename one of the enums? I change one row with a foreign key, or potentially millions with an enum.

Polymorphic database design : does this approach have a name?

I have a base entity (items) that will host a vast range of item types (>200) with totally different properties. I want a clean, portable and fast solution, and I have come up with an idea that maybe has a name I'm unaware of.
Here it goes:
The items entity holds the base class fields + additional fields for subclass fields, but with dummy names: ItemID, ItemNo, ItemTypeID, int1, int2, dec1, dec2, dec3, str1, str2
The referenced itemtype record holds the name of the type and a child entity (1:n):
itemtypefields [itemtypeid, name, type, realfield]
example: [53, MaxPressure, dec, dec3]
Its limitations:
hard to estimate field requirements in the base class
harder to add domains/check constraints based on the child type
need an application layer to translate tagged SQL into the real query
only possible to query one type at a time, since shared attributes may be mapped to different "real" fields
3rd bullet explained:
select ItemNo,_MaxPressure_ from items where ItemTypeID=10 and _MaxPressure_>42
should translate to:
select ItemNo,dec3 as MaxPressure from items where ItemType=10 and dec3>42
(can't do that with SPs or UDFs, right? or would it be possible?)
But it has the benefits of:
Performance
Ease of CRUD-operations
Easier to sort/filter at application level.
Now - does it have a name?
This antipattern is called One True Lookup Table.
In a relational database, each column needs to be defined as one logical type. I don't mean one SQL data type like INT or VARCHAR, I mean everything in that column from start to finish must be from the same set of values, and you should be able to tell one value apart from another value.
You can't put shoe size and average temperature and threads per inch into the same column of a given table, and still call it a relation.
Basically, your database would not be a database at all -- it would be a spreadsheet.
Read almost any book by C. J. Date, such as SQL and Relational Theory for a proper explanation of relations and types.
Re your comment:
Read the Q again before lecturing about elementary books and mocking about semi-structured data.
Okay, I have re-read your post.
The classic use of One True Lookup Table isn't exactly what you're doing, but what you're doing shares the same problems with OTLT.
Suppose you have "MaxPressure" stored in column dec3 for ItemType 10. Suppose there are a fixed set of valid choices for the value of MaxPressure, and you want to put those in another lookup table, so that no one can enter an invalid MaxPressure value.
Now: declare a foreign key constraint on dec3 referencing your MaxPressures lookup table. You can't -- the problem is that the foreign key constraint applies to the dec3 column in all rows, not just those rows where ItemType is 10.
The reason is that you're storing more than one set of values in a single column. The same problem arises for any other kind of constraint -- unique constraints, check constraints, even NOT NULL. And you can't declare a DEFAULT value for the column either, because you probably have a different correct default for each ItemType (and some ItemTypes have no default for that attribute).
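To make that concrete, here is the constraint you would want to write (MaxPressures and its Value column are assumed names); the problem is that SQL gives you no way to scope it to one ItemType:
-- This applies to dec3 in EVERY row, not only rows where ItemTypeID = 10,
-- so rows that use dec3 for threads-per-inch would also be forced to
-- match the MaxPressures list. There is no per-ItemType scoping.
ALTER TABLE items
    ADD CONSTRAINT fk_items_maxpressure
    FOREIGN KEY (dec3) REFERENCES MaxPressures (Value);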
The reason that I referred to the C. J. Date book is that he gives a crisp definition for a type: it's a named finite set, over which the equality operation is defined. That is, you can tell if the value "42" on one row is the same as the value "42" on another row. In a relational column, that must be true because they must come from the same original set of values. In your table, dec3 could have the value "42" when it's MaxPressure, but "42" for another ItemType when it's threads per inch. Therefore they aren't the same value "42". If you had a unique constraint, these two 42's would not be considered duplicates. If you had a foreign key, each of the different 42's would reference a different lookup table, etc.
What you're doing is not a valid relational database design.
Don't bristle at my referring you to a resource on relational database design unless you understand that.

database storing multiple types of data, but need unique ids globally

A while ago, I asked about how to implement a REST API. I have since made headway with that, but am trying to fit my brain around another idea.
In my API, I will have multiple types of data, such as people, events, news, etc.
Now, with REST, everything should have a unique id. This id, I take it, should be unique to the whole system, and not just to each type of data.
For instance, there should not be a person with id #1 and a news item with an id of #1. Ultimately, these two things would be given different ids altogether: person #1 with a unique id of #1 and news item #1 with a unique id of #2, since #1 was taken by the person.
In a database, I know that you can create primary keys that automatically increment. The problem is, you usually have a table for each data "type", and if you set the auto increment for each table individually, you will get "duplicate" ids (yes, the ids are still unique in their own table, but not across the whole DB).
Is there an easy way to do this? For instance, can all of these tables be set to work off of one incrementer (the only way I could think of to put it), or would it require creating a table that holds these global ids and ties them to a table and the unique id in that table?
You could use a GUID; they will be unique everywhere (for all intents and purposes, anyway).
http://en.wikipedia.org/wiki/Globally_unique_identifier
+1 for UUIDs (note that GUID is a particular Microsoft implementation of a UUID standard)
There is a built-in function uuid() for generating a UUID as text. You can prefix it with the table name so that you can easily recognize it later.
Each call to uuid() will generate you a fresh new value (as text). So with the above method of prefixing, the INSERT query may look like this:
INSERT INTO my_table VALUES (CONCAT('my_table-', UUID()), ...)
And don't forget to make this column a varchar of large enough size, and of course create an index for it.
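A minimal sketch of that column definition (the table, the second column and the sizes are placeholders):
CREATE TABLE my_table (
    id   VARCHAR(64) NOT NULL,   -- room for a 'my_table-' prefix plus the 36-char UUID
    name VARCHAR(100),
    PRIMARY KEY (id)             -- the primary key doubles as the index
);

INSERT INTO my_table (id, name)
VALUES (CONCAT('my_table-', UUID()), 'example row');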
Now, with REST, everything should have a unique id. This id, I take it, should be unique to the whole system, and not just to each type of data.
That's simply not true. Every resource needs to have a unique identifier, yes, but in an HTTP system, for example, that means a unique URI. /people/1 and /news/1 are unique URIs. There is no benefit (and in fact quite a lot of pain, as you are discovering) from constraining the system such that /news/1 has to instead be /news/0983240-2309843-234802/ in order to avoid a conflict.