Keys and terminology

I have a predicament related to terminology.
Our system processes events. Events are dispatched to a node based on the value of some field (or set of fields). We call this set of fields the key. We call the value of that set of fields the key value.
What adds confusion is that each event is essentially a bag of key-value pairs (i.e., a hash map). So the word key is used for two different purposes: 1) to describe the set of fields on which the event is dispatched, and 2) as a field name.
So if you had a collection of key-value pairs, and a set of those key-value pairs made up a database-style key, what terminology would you use to distinguish those two?
(One further complication is that the key on which the event is dispatched is not always unique. For instance, if we dispatch on userid, and that user performs multiple actions, we will process multiple events with the same userid value. So maybe key is the wrong word to describe the set of fields on which we dispatch an event).

What about always including an adjective: "dispatch key", "database table key"?
Or renaming based on usage: "dispatch criteria"?

You can try to find a good metaphor and stick to it; if possible, pick one not related to software programming, to avoid confusion.
Say you are in a police story: your keys become an informer (or a fink, a squealer, a dealer, ...).
If you prefer a driving metaphor, your dispatch keys become some road sign, etc.
Just find one metaphor that fits for you.

Related

Proper database model for a user feedback system (an interesting case)

I am developing an application using PHP and Yii Framework. I've been thinking about the most suitable database structure for the given functionality and here's what I've come up with. Yet I'm not 100% positive that's how it should be done so I've decided to ask the community.
App Description:
Registered users may participate in an event. Every event can have an
unlimited number of users, called "participants of the event".
Once the event is over, every participant can leave a feedback about every other participant of the same event.
Database structure:
Since every event can have an unlimited number of users and users can participate in an unlimited number of events, I've created a table "Participant", which resolves the many-to-many relation.
Other tables are self-explanatory.
And here's the most important thing:
Every participant of an event can receive at most as many feedbacks as there are other participants of the same event (for example, if there are 5 participants in the event, a given participant can receive 4 feedbacks from participants of the same event).
Let me emphasize, only participants of the same event can leave a feedback (and only one) about the given participant.
Below are the steps I took to ensure the integrity of the database:
I've created the "id" column in the "Participant" table to give a unique ID to every user who participates in a certain event. This ID is composite (event_id and user_id concatenated together). So, the participant id of the user 23 who participated in event 14 would be 14-23.
You may ask why I decided to create a separate column with this ID instead of simply defining the primary key like this:
PRIMARY KEY (user_id, event_id)
Read on.
When the event is over, every participant can leave feedback about the others. Now, this participant ID can be referenced by the foreign keys "sender_id" and "recipient_id" in the feedback table.
Further on, the primary key of the feedback table will also be formed by combining the "sender_id" and the "recipient_id", so if user 23 wants to leave feedback about user 45 (both participated in event 71), the primary key for the feedback would be: 71-45-71-23.
This approach allows us to make sure on the database level that no participant leaves a feedback about the same participant twice and that a user can't participate in the same event twice.
Questions:
Does this approach have the right to exist?
What are the pros and cons? Is there another, better way to approach this functionality?
Can I generate the primary keys based on the values of the other columns automatically on record insertion?
This is a bad design. Just make a 2-column primary key, and 2-column foreign keys referencing it. Yours is a fundamental anti-pattern called "encoding information in keys"; keys built this way are called "smart", "intelligent" or "concatenated" keys. A good key is a "dumb" key.
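A minimal sketch of that plain two-column design, in generic SQL (table and column names follow the question; the content column and exact types are assumptions):

CREATE TABLE participant (
    user_id  INT NOT NULL,
    event_id INT NOT NULL,
    PRIMARY KEY (user_id, event_id)          -- a user can participate in an event only once
);

CREATE TABLE feedback (
    event_id     INT NOT NULL,
    sender_id    INT NOT NULL,               -- user_id of the participant leaving the feedback
    recipient_id INT NOT NULL,               -- user_id of the participant being reviewed
    content      TEXT,                       -- assumed payload column
    PRIMARY KEY (event_id, sender_id, recipient_id),                        -- one feedback per pair per event
    FOREIGN KEY (sender_id, event_id)    REFERENCES participant (user_id, event_id),
    FOREIGN KEY (recipient_id, event_id) REFERENCES participant (user_id, event_id),
    CHECK (sender_id <> recipient_id)        -- no feedback about yourself (note: some engines ignore CHECK)
);

Because both foreign keys carry event_id, the database itself guarantees that sender and recipient took part in the same event, which is exactly what the concatenated IDs were trying to encode.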
As for why Smart Keys tend to cause trouble in general, one discussion puts it this way:
Despite it now being easy to implement a Smart Key, it is hard to recommend that you create one of your own that isn't a natural key, because they tend to eventually run into trouble, whatever their advantages, because it makes the databases harder to refactor, imposes an order which is difficult to change and may not be optimal for your queries, requires a string comparison if the Smart Key includes non-numeric characters, and is less effective than a composite key in helping range-based aggregations. It also violates the basic relational guideline that every column should store atomic values.
Smart Keys also tend to outgrow their original coding constraints.
Besides, there is no need to do this.
Many DBMSes allow "computed columns" whose values are automatically calculated from other columns. To make one a primary key or foreign key you would usually need it "persisted", i.e. have it stored like a normal column rather than just calculated when needed, like a view. MySQL historically did not have these, but as of 5.7.5 it has "generated columns", which can be "stored". But don't do this for PKs or FKs!
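For completeness, a rough sketch of what a stored generated column looks like in MySQL 5.7+ (the column name is a hypothetical example, and, as said above, don't use it as a PK or FK):

ALTER TABLE participant
    ADD participant_code VARCHAR(30)
        GENERATED ALWAYS AS (CONCAT(event_id, '-', user_id)) STORED;  -- computed from other columns and stored like a normal column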
The actual design issue is handling database/SQL subtypes/hierarchies/inheritance/polymorphism.

Do I need a primary key for every table in MS Access?

I am new to MSAccess so I'm not sure about this; do I have to have a primary key for every single table in my database? I have one table which looks something like this:
(http://i108.photobucket.com/albums/n32/lurker3345/ACCESSHELP.png?t=1382688844)
In this case, every field/column has a repeating term. I have tried assigning the primary key to every field but it returns with an error saying that there is a repeated field.
How do I go about this?
Strictly speaking, yes, every table in a relational database should have a primary key, so that every row has a unique identifier. If you're doing quick-and-dirty work, you may be able to get away without one.
Internal Tracking ID
Some databases generate a primary key under the covers if you do not assign one explicitly. Every database needs some way to track each row internally.
Natural Key
A natural key is an existing field with meaningful data that happens to identify each row uniquely. For example, if you were tracking people assigned to teams, you might have an "employee_id" column on the "person" table.
Surrogate Key
A surrogate key is an extra column you add to a table, just to assign an arbitrary value as the unique identifier. You might assign a serial number (1, 2, 3, …), or a UUID if your database (such as Postgres) supports that data type. Assigning a serial number or UUID is so common that nearly every database engine provides a built-in facility to help you automatically create such a value and assign it to new rows.
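For instance, a minimal sketch of a surrogate key in MySQL-style SQL (the table and columns are hypothetical; in Access DDL the equivalent auto-numbering type is AUTOINCREMENT/COUNTER):

CREATE TABLE person (
    id           INT AUTO_INCREMENT PRIMARY KEY,  -- surrogate key: arbitrary, meaningless, never changes
    employee_id  VARCHAR(20),                     -- natural key from the business domain, kept as ordinary data
    first_name   VARCHAR(50),
    last_name    VARCHAR(50),
    phone_number VARCHAR(20)
);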
My Advice
In my experience, any serious long-term project should always use a surrogate key, because every natural key I've ever been tempted to use eventually changes. People change their names (get married, etc.). Employee IDs change when one company gets acquired by another.
If, on the other hand, you are doing a quick-and-dirty job, such as analyzing a single batch of data to produce a chart once and never again, and your data happens to have a natural key then use it. Beware: One-time jobs often have a way of becoming recurring jobs.
Further advice… When importing data from a source outside your control, assign your own identifier even if the import contains a candidate key.
Composite Key
Some database engines offer a composite key feature, also called a compound key, where two or more columns in the table are combined to create a single value which, once combined, should prove unique. For example, in a "person" table, the "first_name", "last_name", and "phone_number" fields might be unique when considered together. Unless two married people share the same home phone number and also happen to both be named "Alex" with the same last name! Because of such collisions, as well as the tendency for meaningful data to change and the overhead of calculating such combined values, it is advisable to stick with simple (single-column) keys unless you have a special situation.
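If you do want the database to enforce that combination anyway, a common compromise (sketched against the hypothetical person table above) is to keep the simple surrogate primary key and declare the combination as a UNIQUE constraint instead:

ALTER TABLE person
    ADD CONSTRAINT uq_person_identity UNIQUE (first_name, last_name, phone_number);  -- composite uniqueness enforced, but not used as the PK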
If the data doesn't naturally have a unique field to use as the primary key, add an auto-generated integer column called "Id" or similar.
Read the "how to organize my data" section of this page:
http://www.htmlgoodies.com/primers/database/article.php/3478051
This page shows you how to create one (under "add an autonumber primary key"):
http://office.microsoft.com/en-gb/access-help/create-or-remove-a-primary-key-HA010014099.aspx
If you use a DataAdapter and a CurrencyManager, your tables must have a primary key in order to push updates, additions and deletions back to the database. Otherwise, they will not register and you will receive an error.
I lost a week figuring that one out, until I added MsgBox(er.ToString) to the Try-Catch-End Try block and the message mentioned "key". From there, I figured it out.
(NB: having a primary key was not a requirement in VB6.)
Not having a primary key usually means your data is poorly structured. However, it looks like you're dealing with summary/aggregate data there, so it probably doesn't matter.

Polymorphic database design : does this approach have a name?

I have a base entity (items) that will host a vast range of item types (>200) with totally different properties. I want a clean, portable, and fast solution and have come up with an idea that maybe has a name I'm unaware of.
Here it goes:
The items entity holds base-class fields + additional fields for subclass fields, but with dummy names: ItemID, ItemNo, ItemTypeID, int1, int2, dec1, dec2, dec3, str1, str2
The referenced itemtype record holds the name of the type and a child entity (1:n):
itemtypefields [itemtypeid,name,type,realfield]
example: [53,MaxPressure,dec,dec3]
Its limitations:
hard to estimate field requirements in the base class
harder to add domains/check constraints based on child type
need an application layer to translate tagged SQL into the real query
only possible to query one type at a time, since shared attributes may be mapped to different "real" fields
3rd bullet explained:
select ItemNo,_MaxPressure_ from items where ItemTypeID=10 and _MaxPressure_>42
should translate to:
select ItemNo,dec3 as MaxPressure from items where ItemTypeID=10 and dec3>42
(can't do that with SPs or UDFs, right - or would it be possible?)
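For what it's worth, a plain view can do this particular translation for one item type (sketched here with an assumed view name), though you would still need one view per type:

CREATE VIEW pump_items AS
    SELECT ItemID, ItemNo, dec3 AS MaxPressure
    FROM items
    WHERE ItemTypeID = 10;

-- the tagged query then becomes an ordinary one:
SELECT ItemNo, MaxPressure FROM pump_items WHERE MaxPressure > 42;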
But benefits of:
Performance
Ease of CRUD-operations
Easier to sort/filter at application level.
Now - does it have a name?
This antipattern is called One True Lookup Table.
In a relational database, each column needs to be defined as one logical type. I don't mean one SQL data type like INT or VARCHAR, I mean everything in that column from start to finish must be from the same set of values, and you should be able to tell one value apart from another value.
You can't put shoe size and average temperature and threads per inch into the same column of a given table, and still call it a relation.
Basically, your database would not be a database at all -- it would be a spreadsheet.
Read almost any book by C. J. Date, such as SQL and Relational Theory for a proper explanation of relations and types.
Re your comment:
Read the Q again before lecturing about elementary books and mocking semi-structured data.
Okay, I have re-read your post.
The classic use of One True Lookup Table isn't exactly what you're doing, but what you're doing shares the same problems with OTLT.
Suppose you have "MaxPressure" stored in column dec3 for ItemType 10. Suppose there are a fixed set of valid choices for the value of MaxPressure, and you want to put those in another lookup table, so that no one can enter an invalid MaxPressure value.
Now: declare a foreign key constraint on dec3 referencing your MaxPressures lookup table. You can't -- the problem is that the foreign key constraint applies to the dec3 column in all rows, not just those rows where ItemType is 10.
The reason is that you're storing more than one set of values in a single column. The same problem arises for any other kind of constraint -- unique constraints, check constraints, even NOT NULL. And you can't declare a DEFAULT value for the column either, because you probably have a different correct default for each ItemType (and some ItemTypes have no default for that attribute).
The reason that I referred to the C. J. Date book is that he gives a crisp definition for a type: it's a named finite set, over which the equality operation is defined. That is, you can tell if the value "42" on one row is the same as the value "42" on another row. In a relational column, that must be true because they must come from the same original set of values. In your table, dec3 could have the value "42" when it's MaxPressure, but "42" for another ItemType when it's threads per inch. Therefore they aren't the same value "42". If you had a unique constraint, these two 42's would not be considered duplicates. If you had a foreign key, each of the different 42's would reference a different lookup table, etc.
What you're doing is not a valid relational database design.
Don't bristle at my referring you to a resource on relational database design unless you understand that.
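To make the alternative concrete, here is a rough subtype-table sketch for the example above (the item_pump table name is an assumption), in which MaxPressure becomes a real, typed column that can carry its own constraints, defaults and foreign keys:

CREATE TABLE item (
    ItemID     INT PRIMARY KEY,
    ItemNo     VARCHAR(30) NOT NULL,
    ItemTypeID INT NOT NULL                   -- discriminator: tells you which subtype table holds the rest
);

-- one subtype table per item type that needs extra attributes (ItemTypeID = 10 in the example)
CREATE TABLE item_pump (
    ItemID      INT PRIMARY KEY REFERENCES item (ItemID),     -- 1:1 with the base table
    MaxPressure DECIMAL(8,2) NOT NULL CHECK (MaxPressure > 0)
);

-- the query from the question needs no tag translation:
SELECT i.ItemNo, p.MaxPressure
FROM item i
JOIN item_pump p ON p.ItemID = i.ItemID
WHERE p.MaxPressure > 42;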

Trying to avoid "Polymorphic Associations" and uphold foreign key referential integrity

I'm building a site similar to Yelp (Recommendation Engine, on a smaller scale though), so there will be three main entities in the system: User, Place (includes businesses), and Event.
Now what I'm wondering about is how to store information such as photos, comments, and 'compliments' (similar to Facebook's "Like") for each of these entity types, and also for each object they can be applied to (e.g. a comment on a recommendation, a photo, etc.). Right now the way I was doing it was a single table for each, i.e.
Photo (id, type, owner_id, is_main, etc...)
where type represents: 1=user, 2=place, 3=event
Comment (id, object_type, object_id, user_id, content, etc, etc...)
where object_type can be a few different objects like photos, recommendations, etc
Compliment (object_id, object_type, compliment_type, user_id)
where object_type can be a few different objects like photos, recommendations, etc
Activity (id, source, source_type, source_id, etc..) //for "activity feed"
where source_type is a user, place, or event
Notification (id, recipient, sender, activity_type, object_type, object_id, etc...)
where object_type & object_id will be used to provide a direct link to the object of the notification e.g. a user's photo that was complimented
But after reading a few posts on SO, I realized I can't maintain referential integrity with a foreign key, since a foreign key can only reference one table and my source_id/object_id fields can relate to an ID in more than one table. So I decided to go with the method of keeping the main entity, but then breaking it into subsets, i.e.
User_Photo (photo_id, user_id) | Place_Photo(photo_id, place_id) | etc...
Photo_Comment (comment_id, photo_id) | Recommendation_Comment(comment_id, rec_id) | etc...
Compliment (id, ...) //would need to add a surrogate key to Compliment table now
Photo_Compliment(compliment_id, photo_id) | Comment_Compliment(compliment_id, comment_id) | etc...
User_Activity(activity_id, user_id) | Place_Activity(activity_id, place_id) | etc...
I was thinking I could just create views joining each sub-table to the main table to get the results I want. Plus I'm thinking it would fit into my object models in Code Igniter as well.
The only table I think I could leave is the notifications table, since there are many object types (forum post, photo, recommendation, etc, etc), and this table will only hold notifications for a week anyway so any ref integrity issues shouldn't be much of a problem (I think).
So am I going about this in a sensible way? Any performance, reliability, or other issues that I may have overlooked?
The only "problem" I can see is that I would end up with a lot of tables (as it is right now I have about 72, so I guess i would end up with a little under 90 tables after I add the extras), and that's not an issue as far as I can tell.
Really grateful for any kind of feedback. Thanks in advance.
EDIT: Just to be clear, I'm not concerned if i end up with another 10 or so tables. From what I know, the number of tables isn't too much of an issue (once they're being used)... unless you had say 200 or so :/
Some propositions for this UoD (universe of discourse)
User named Bob logged in.
User named Bob uploaded photo number 56.
There is a place named London.
Photo number 56 is of place named London.
User named Joe created comment "very nice" on photo number 56.
To introduce object IDs
User (UserID) logged in.
User (UserID) uploaded Photo (PhotoID).
There is Place (PlaceID).
Photo (PhotoID) is of Place (PlaceID).
User (UserID) created Comment (CommentID) on Photo (PhotoID).
Just Fact Types
User logged in.
User uploaded Photo.
Place exists.
Photo is of Place.
User created Comment on Photo.
Now to extract predicates
Predicate               Predicate Arity
---------------------------------------------
... logged in           1 (Unary predicate)
... uploaded ...        2 (Binary)
... exists              1 (Unary)
... is of ...           2 (Binary)
... created ... on ...  3 (Ternary)
It looks like each proposition in this UoD may be stated with at most a ternary predicate,
so I would suggest something like
Predicate role (Role_1_ID, Role_2_ID, Role_3_ID) is a part that an object plays in a predicate. Substitute the ... in a predicate from left to right with each Role_ID.
Note that only Role_1_ID is mandatory (at least unary predicate), the other two may be NULL.
In this simple model, it is possible to propose anything.
Hence, you would need to implement constraints on the application layer.
For example, you have to make sure that it is possible to create Comment on Place, but not create Place on Place.
Not all predicates represent an action; for example, "... logged in" is an action while "... is of ..." is not.
So, your activity feed would list all Propositions with Predicate.IsAction = True.
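A rough sketch of the tables this suggests, in generic SQL (all table and column names other than Role_1_ID/Role_2_ID/Role_3_ID and IsAction are assumptions):

CREATE TABLE Object (
    ObjectID   INT PRIMARY KEY,
    ObjectType VARCHAR(20) NOT NULL           -- 'user', 'photo', 'place', 'comment', ...
);

CREATE TABLE Predicate (
    PredicateID INT PRIMARY KEY,
    Reading     VARCHAR(100) NOT NULL,        -- e.g. '... created ... on ...'
    Arity       INT NOT NULL,                 -- 1, 2 or 3
    IsAction    BOOLEAN NOT NULL              -- drives the activity feed
);

CREATE TABLE Proposition (
    PropositionID INT PRIMARY KEY,
    PredicateID   INT NOT NULL REFERENCES Predicate (PredicateID),
    Role_1_ID     INT NOT NULL REFERENCES Object (ObjectID),   -- always present (at least a unary predicate)
    Role_2_ID     INT NULL REFERENCES Object (ObjectID),       -- binary and ternary predicates
    Role_3_ID     INT NULL REFERENCES Object (ObjectID)        -- ternary predicates only
);

As noted above, nothing in this schema stops someone from recording "Comment created Place on Place", so which object types may fill which roles still has to be enforced in the application layer.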
If you rearrange things slightly, you can simplify your comments and compliments. Essentially you want to have a single store of comments and another one of compliments. Your problem is that this won't let you use declarative referential integrity (foreign key constraints).
The way to solve this is to make sure that the objects that can attract comments and compliments are all logical sub-types of one supertype. From a logical perspective, it means you have an "THING_OF_INTEREST" entity (I'm not making a naming convention recommendation here!) and each of the various specific things which attract comments and compliments will be a sub-type of THING_OF_INTEREST. Therefore your comments table will have a "thing_of_interest_id" FK column and similarly for your compliments table. You will still have the sub-type tables, but they will have a 1:1 FK with THING_OF_INTEREST. In other words, THING_OF_INTEREST does the job of giving you a single primary key domain, whereas all of the sub-type tables contain the type-specific attributes. In this way, you can still use declarative referential integrity to enforce your comment and compliment relationships without having to have separate tables for different types of comments and compliments.
From a physical implementation perspective, the most important thing is that your various things of interest all share a common primary key domain. That's what lets your comment table have a single FK value that can be easily joined with whatever that thing of interest happens to be.
Depending on how you go after your comments and recommendations, you probably will (but may not) need to physically implement THING_OF_INTEREST - which will have at least two attributes, the primary key (usually an int) plus a partitioning attribute that tells you which sub-type of thing it is.
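A minimal sketch of that supertype/subtype shape in generic SQL (table names follow the answer and the question; column details are assumptions):

CREATE TABLE thing_of_interest (
    thing_of_interest_id INT PRIMARY KEY,
    thing_type           VARCHAR(20) NOT NULL      -- partitioning attribute: 'photo', 'recommendation', ...
);

CREATE TABLE photo (
    thing_of_interest_id INT PRIMARY KEY REFERENCES thing_of_interest (thing_of_interest_id),  -- 1:1 with the supertype
    owner_id             INT NOT NULL,
    is_main              BOOLEAN
);

CREATE TABLE comment (
    comment_id           INT PRIMARY KEY,
    thing_of_interest_id INT NOT NULL REFERENCES thing_of_interest (thing_of_interest_id),     -- one FK covers every commentable type
    user_id              INT NOT NULL,
    content              TEXT
);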
If you need referential integrity (RI) there is no better way to do it than to use many-to-many junction tables. True, you end up having a lot of tables in the system, but that's the cost you need to pay. Going this route also has some other benefits; for instance, you get some sort of partitioning for free: the data is partitioned by relation type, each in its own table. This offers RI, but it is not 100% safe either; for instance, there's nothing to guarantee that a comment belongs to a photo and to that photo alone, so you'd need to enforce that kind of constraint manually should you need it.
On the other hand, going with a generic solution like you already did gets you off the ground faster and is way easier to extend in the future, but there'll be no RI unless you code it manually (which is very complex and a lot harder to deal with than the alternative of an M:M table for every relation type).
Just to mention another alternative, similar to your existing implementation: you could use a single custom M:M junction table to handle all your relations regardless of their type: object1_type, object1_id, object2_type, object2_id. Simple, but with no benefit other than being very easy to implement and extend. I'd only recommend it if you don't need RI and you've got yourself a lot of tables, all interlinked.
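For illustration, one of the per-relation junction tables from the question with declarative RI, assuming plain photo (photo_id) and comment (comment_id) tables:

CREATE TABLE photo_comment (
    comment_id INT NOT NULL REFERENCES comment (comment_id),
    photo_id   INT NOT NULL REFERENCES photo (photo_id),
    PRIMARY KEY (comment_id)    -- at most one photo per comment through this table...
);
-- ...but, as noted above, nothing stops the same comment_id from also appearing in recommendation_comment.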

Database storing multiple types of data, but need unique IDs globally

A while ago, I asked about how to implement a REST API. I have since made headway with that, but am trying to fit my brain around another idea.
In my API, I will have multiple types of data, such as people, events, news, etc.
Now, with REST, everything should have a unique ID. This ID, I take it, should be unique to the whole system, and not just to each type of data.
For instance, there should not be a person with ID #1 and a news item with ID #1. Ultimately, these two things would be given different IDs altogether: person #1 with unique ID #1 and news item #1 with unique ID #2, since #1 was taken by a person.
In a database, I know that you can create primary keys that automatically increment. The problem is, usually you have a table for each data "type", and if you set the auto-increment for each table individually, you will get "duplicate" IDs (yes, the IDs are still unique in their own table, but not in the whole DB).
Is there an easy way to do this? For instance, can all of these tables be set to work off of one incrementer (the only way I could think of to put it), or would it require creating a table that holds these global IDs and ties them to a table and the unique ID in that table?
You could use a GUID; they will be unique everywhere (for all intents and purposes, anyway).
http://en.wikipedia.org/wiki/Globally_unique_identifier
+1 for UUIDs (note that GUID is a particular Microsoft implementation of a UUID standard)
There is a built-in function uuid() for generating a UUID as text. You can prefix it with the table name so that you can easily recognize it later.
Each call to uuid() will generate a fresh new value (as text). So with the above method of prefixing, the INSERT query may look like this:
INSERT INTO my_table VALUES (CONCAT('my_table-', UUID()), ...)
And don't forget to make this column a varchar of large enough size and, of course, create an index for it.
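Putting that together, a sketch of the table side (my_table comes from the INSERT above; the name column and the size 64, which leaves room for the prefix plus the 36-character UUID text, are assumptions):

CREATE TABLE my_table (
    id   VARCHAR(64) NOT NULL PRIMARY KEY,   -- prefixed UUID, e.g. 'my_table-<uuid text>'; the primary key gives it an index
    name VARCHAR(100)
    -- ... other columns ...
);

INSERT INTO my_table (id, name)
VALUES (CONCAT('my_table-', UUID()), 'example row');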
Now, with REST, everything should have a unique ID. This ID, I take it, should be unique to the whole system, and not just to each type of data.
That's simply not true. Every resource needs to have a unique identifier, yes, but in an HTTP system, for example, that means a unique URI. /people/1 and /news/1 are unique URIs. There is no benefit (and in fact quite a lot of pain, as you are discovering) from constraining the system such that /news/1 has to instead be /news/0983240-2309843-234802/ in order to avoid a conflict.