MySQL tables for node relationships

I'm trying to figure out what would be the optimal database and table structure to store relationships between nodes of the type (var)char. I've last used MySQL many years ago as a backend for some simple PHP webpages and never got beyond that. I hope some seasoned users can give me their opinion.
Let's say I have a bunch of names:
Thomas
Jane
Felix
Marc
Anne
I now want to store their relationships. My idea is to have two tables that might look like this:
names (id, name):
0 Thomas
1 Jane
2 Felix
3 Marc
4 Anne
...

relationships (id_1, id_2):
0 1
0 3
1 2
3 4
...
The scope of the data is as follows:
Table 'names' will contain approx. 5 million rows.
Table 'relationships' will contain 150-200 million rows.
The database will only be accessed by me, locally (server and client are the same machine)
I don't need responsiveness as with a web server, only a high throughput during the few occasions when I access it (to reduce waiting time)
My questions are:
I recall proper use of PRIMARY KEY being important. I vaguely remember there being the possibility to assign the key to two columns (i.e. id_1, id_2 in my case); this helps querying I imagine?
Is there a way from within MySQL to prevent the creation of duplicate relationships (e.g. 0:4 & 4:0) during insertion?
MySQL defaults to InnoDB for me. Is this the database you would recommend for my scenario?
Any pointers welcome. Thank you.

Firstly, you need to consider whether your relationships have a "direction" associated with them. For example, the relationship "is a child of" has the opposite direction to the otherwise identical relationship "is a parent of"; on the other hand, the relationship "is a sibling of" is undirected (or bidirectional, depending on one's point of view).
The structure you describe is perfect for directed relationships.
Bidirectional relationships, on the other hand, are often best represented by deliberately performing the duplication described in your second bulletpoint; whilst this consumes more storage, it greatly simplifies queries such as "find all siblings of X"—which might otherwise have to take the union of two separate queries:
SELECT id_2 FROM my_table WHERE id_1=X
UNION
SELECT id_1 FROM my_table WHERE id_2=X
Because there is no index on the resulting column, these sorts of queries can be quite slow if one wants to do something more with the result (such as sort by id, or join with the names table—albeit in that particular case one could perform the joins before the union, but that just increases redundancy and complexity in one's data manipulation code).
One can use triggers to ensure that whenever a relationship is written (inserted, updated or deleted) to a table that represents bidirectional relationships, the same operation is automatically performed on the reverse relationship.
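In MySQL specifically, a trigger is not allowed to modify the table it fires on, so there the mirror write is more easily done in application code or in a small stored procedure that inserts both directions. A minimal sketch, assuming the relationships table from the question (the procedure name add_relationship is purely illustrative):
DELIMITER ;;
CREATE PROCEDURE add_relationship (IN a INT UNSIGNED, IN b INT UNSIGNED)
BEGIN
    -- Write both directions so "find all siblings of X" needs only one indexed lookup.
    INSERT INTO relationships (id_1, id_2) VALUES (a, b);
    INSERT INTO relationships (id_1, id_2) VALUES (b, a);
END;;
DELIMITER ;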
Secondly, the representation you describe is known as an "adjacency list", which is very simple and easy to understand. But it's not great at dealing with deep searches through the data hierarchy, especially on MySQL (which, unlike some other RDBMS, doesn't support recursive functions). Thus finding "all descendants of X" or "all ancestors of Y" is actually quite difficult. Other data models, such as "nested sets" or "transitive closure" are much better for these tasks.
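That said, recursive common table expressions were added in MySQL 8.0, so on a sufficiently recent server an "all descendants of X" query over this adjacency list might be sketched like this (treating id_1 as the parent and id_2 as the child, with the literal 0 standing in for X):
WITH RECURSIVE descendants (id) AS (
    SELECT id_2 FROM relationships WHERE id_1 = 0        -- direct children of X (here X = 0)
    UNION
    SELECT r.id_2 FROM relationships r
    JOIN descendants d ON r.id_1 = d.id                  -- children of rows already found
)
SELECT * FROM descendants;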
With that preamble said, on to your questions:
I recall proper use of PRIMARY KEY being important. I vaguely remember there being the possibility to assign the key to two columns (i.e. id_1, id_2 in my case); this helps querying I imagine?
There are four possible primary keys for your relationship table:
(id_1)
(id_2)
(id_1, id_2)
(id_2, id_1)
By definition, a primary key must be unique within your table. Indeed it is the primary means of identifying a record. But if desired one can also define further UNIQUE keys, which have the same constraining effect as a primary key (the differences are relatively minor and beyond the scope of this answer): thus one can actually enforce any combination of the above constraints.
The above constraints would respectively: limit each name to being on one side of the relationship no more than once; limit each name to being on the other side of the relationship no more than once; and the final two limit each combination of names to being within the same relationship no more than once (the difference is merely the order in which the index is stored). If the table represents undirected relationships, then obviously the second and fourth constraints are semantically equivalent to the first and third constraints respectively.
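To make that concrete, a minimal sketch of the two tables with (id_1, id_2) as the primary key and a secondary index in the opposite column order, so that lookups starting from either column are indexed (column types are assumptions, not taken from the question):
CREATE TABLE names (
    id   INT UNSIGNED NOT NULL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE relationships (
    id_1 INT UNSIGNED NOT NULL,
    id_2 INT UNSIGNED NOT NULL,
    PRIMARY KEY (id_1, id_2),          -- each pair stored at most once in this order
    KEY idx_reverse (id_2, id_1),      -- supports lookups starting from id_2
    FOREIGN KEY (id_1) REFERENCES names (id),
    FOREIGN KEY (id_2) REFERENCES names (id)
) ENGINE=InnoDB;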
Some examples:
if your table represents "id_1 is the genetic father of id_2" then id_1 might have many children. So (id_1) cannot be the primary key, as it won't uniquely identify records of fathers who have more than one child. On the other hand id_2 can only have a single genetic father (embryological advances aside), so (id_2) will uniquely identify a record and can be the primary key (that said, many-to-one relationships of this sort might as well be modelled via a father_id column in the names table). The other two (composite) keys would permit children to have many fathers and must therefore be incorrect.
if your table represents "id_1 is a parent of id_2" then both a parent can have many children and children can have more than one parent (this is known as a many-to-many relationship). Therefore the first two constraints are incorrect and one must choose between the latter two (as mentioned previously, the difference is merely the order in which the index is stored—so MySQL must locate the first column before it can lookup the second). Incidentally, in this case one might consider adding an additional column to the relationship table that indicates which parent the relationship represents; if a child can only have one parent of each type, then one could define the primary key as (child_id, parent_type).
if your table represents "id_1 and id_2 are married" then both (id_1) and (id_2) are "candidate keys", because no one can be married to more than one other person (at least in the UK, polygamy aside). Thus one might define (id_1) as the primary key and define a second UNIQUE key over (id_2). As mentioned before, one may well wish to place the records inside the table both ways around—and these constraints will not prevent that.
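A sketch of that last arrangement (table and column names purely illustrative):
CREATE TABLE marriages (
    id_1 INT UNSIGNED NOT NULL PRIMARY KEY,   -- each person appears at most once on this side
    id_2 INT UNSIGNED NOT NULL,
    UNIQUE KEY uq_id_2 (id_2)                 -- ...and at most once on the other side
) ENGINE=InnoDB;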
Is there a way from within MySQL to prevent the creation of duplicate relationships (e.g. 0:4 & 4:0) during insertion?
Yes, one can do so with triggers: but note what was said above regarding bidirectional relationships (where such "duplicates" are often desired). An example of a trigger that would enforce this type of constraint might be:
DELIMITER ;;
CREATE TRIGGER rel_ins BEFORE INSERT ON relationships FOR EACH ROW
IF EXISTS (
    SELECT * FROM relationships WHERE id_1 = NEW.id_2 AND id_2 = NEW.id_1
) THEN
    SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'Reverse relationship already exists';
END IF;;
DELIMITER ;
One may also want a similar trigger "before update".
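For instance (sketch only; the same remarks about statement delimiters apply):
CREATE TRIGGER rel_upd BEFORE UPDATE ON relationships FOR EACH ROW
IF EXISTS (
    SELECT * FROM relationships WHERE id_1 = NEW.id_2 AND id_2 = NEW.id_1
) THEN
    SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'Reverse relationship already exists';
END IF;;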
A situation where a constraint of this sort might be desirable would be where the table represents "is a parent of", since a parent cannot be their child's child (however, in this case it may be worth noting that in such a relationship table, one may actually wish to go further and prevent all circularities—e.g. prevent a child from being the parent of their grandparent). Again, "adjacency list" is not the best model for enforcing this sort of constraint—"nested sets", on the other hand, entirely prevent all circularities purely by virtue of their structure.
MySQL defaults to InnoDB for me. Is this the database you would recommend for my scenario?
The biggest advantage of InnoDB is that it is fully ACID compliant, and thus offers transactional support. This is especially useful if you might write to the database from multiple places at one time. If you're simply going to perform a one-time-load of a bunch of static data into the database for subsequent querying, it may well be a little slower than MyISAM.
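If you do stay with InnoDB and load the data in one sitting, committing in large batches rather than row by row keeps the load fast. A rough sketch, assuming a tab-separated dump file (the file path is illustrative):
SET autocommit = 0;
LOAD DATA LOCAL INFILE '/tmp/relationships.tsv'
    INTO TABLE relationships
    FIELDS TERMINATED BY '\t'
    (id_1, id_2);
COMMIT;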


How to design table with primary key, index, unique in SQL [duplicate]

Here we go again, the old argument still arises...
Would we be better off having a business key as the primary key, or would we rather have a surrogate id (i.e. an SQL Server identity) with a unique constraint on the business key field?
Please, provide examples or proof to support your theory.
Just a few reasons for using surrogate keys:
Stability: Changing a key because of a business or natural need will negatively affect related tables. Surrogate keys rarely, if ever, need to be changed because there is no meaning tied to the value.
Convention: Allows you to have a standardized Primary Key column naming convention rather than having to think about how to join tables with various names for their PKs.
Speed: Depending on the PK value and type, a surrogate key of an integer may be smaller, faster to index and search.
Both. Have your cake and eat it.
Remember there is nothing special about a primary key, except that it is labelled as such. It is nothing more than a NOT NULL UNIQUE constraint, and a table can have more than one.
If you use a surrogate key, you still want a business key to ensure uniqueness according to the business rules.
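In DDL terms, "both" looks something like this (MySQL syntax with an illustrative table; SQL Server would use IDENTITY rather than AUTO_INCREMENT):
CREATE TABLE product (
    product_id   INT NOT NULL AUTO_INCREMENT,   -- surrogate key, used for joins
    product_code VARCHAR(20)  NOT NULL,         -- business key, still enforced
    name         VARCHAR(100) NOT NULL,
    PRIMARY KEY (product_id),
    UNIQUE KEY uq_product_code (product_code)
);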
It appears that no one has yet said anything in support of non-surrogate (I hesitate to say "natural") keys. So here goes...
A disadvantage of surrogate keys is that they are meaningless (cited as an advantage by some, but...). This sometimes forces you to join a lot more tables into your query than should really be necessary. Compare:
select sum(t.hours)
from timesheets t
where t.dept_code = 'HR'
and t.status = 'VALID'
and t.project_code = 'MYPROJECT'
and t.task = 'BUILD';
against:
select sum(t.hours)
from timesheets t
join departments d on d.dept_id = t.dept_id
join timesheet_statuses s on s.status_id = t.status_id
join projects p on p.project_id = t.project_id
join tasks k on k.task_id = t.task_id
where d.dept_code = 'HR'
and s.status = 'VALID'
and p.project_code = 'MYPROJECT'
and k.task_code = 'BUILD';
Unless anyone seriously thinks the following is a good idea?:
select sum(t.hours)
from timesheets t
where t.dept_id = 34394
and t.status_id = 89
and t.project_id = 1253
and t.task_id = 77;
"But" someone will say, "what happens when the code for MYPROJECT or VALID or HR changes?" To which my answer would be: "why would you need to change it?" These aren't "natural" keys in the sense that some outside body is going to legislate that henceforth 'VALID' should be re-coded as 'GOOD'. Only a small percentage of "natural" keys really fall into that category - SSN and Zip code being the usual examples. I would definitely use a meaningless numeric key for tables like Person, Address - but not for everything, which for some reason most people here seem to advocate.
See also: my answer to another question
A surrogate key will NEVER have a reason to change. I cannot say the same about natural keys. Last names, emails, ISBN numbers - they can all change one day.
Surrogate keys (typically integers) have the added-value of making your table relations faster, and more economic in storage and update speed (even better, foreign keys do not need to be updated when using surrogate keys, in contrast with business key fields, that do change now and then).
A table's primary key should be used for identifying uniquely the row, mainly for join purposes. Think a Persons table: names can change, and they're not guaranteed unique.
Think Companies: you're a happy Merkin company doing business with other companies in Merkia. You are clever enough not to use the company name as the primary key, so you use Merkia's government's unique company ID in its entirety of 10 alphanumeric characters.
Then Merkia changes the company IDs because they thought it would be a good idea. It's ok, you use your db engine's cascaded updates feature, for a change that shouldn't involve you in the first place. Later on, your business expands, and now you work with a company in Freedonia. Freedonian company ids are up to 16 characters. You need to enlarge the company id primary key (also the foreign key fields in Orders, Issues, MoneyTransfers etc), adding a Country field in the primary key (also in the foreign keys). Ouch! Civil war in Freedonia, it's split in three countries. The country name of your associate should be changed to the new one; cascaded updates to the rescue. BTW, what's your primary key? (Country, CompanyID) or (CompanyID, Country)? The latter helps joins, the former avoids another index (or perhaps many, should you want your Orders grouped by country too).
All these are not proof, but an indication that a surrogate key to uniquely identify a row for all uses, including join operations, is preferable to a business key.
I hate surrogate keys in general. They should only be used when there is no quality natural key available. It is rather absurd when you think about it, to think that adding meaningless data to your table could make things better.
Here are my reasons:
When using natural keys, tables are clustered in the way that they are most often searched thus making queries faster.
When using surrogate keys you must add unique indexes on logical key columns. You still need to prevent logical duplicate data. For example, you can’t allow two Organizations with the same name in your Organization table even though the pk is a surrogate id column.
When surrogate keys are used as the primary key it is much less clear what the natural primary keys are. When developing you want to know what set of columns make the table unique.
In one-to-many relationship chains, the logical keys chain. So, for example, Organizations have many Accounts and Accounts have many Invoices. The logical key of Organization is OrgName. The logical key of Accounts is OrgName, AccountID. The logical key of Invoice is OrgName, AccountID, InvoiceNumber.
When surrogate keys are used, the key chains are truncated by only having a foreign key to the immediate parent. For example, the Invoice table does not have an OrgName column. It only has a column for the AccountID. If you want to search for invoices for a given organization, then you will need to join the Organization, Account, and Invoice tables. If you use logical keys, then you could query the Invoice table directly (see the sketch at the end of this answer).
Storing surrogate key values of lookup tables causes tables to be filled with meaningless integers. To view the data, complex views must be created that join to all of the lookup tables. A lookup table is meant to hold a set of acceptable values for a column. It should not be codified by storing an integer surrogate key instead. There is nothing in the normalization rules that suggest that you should store a surrogate integer instead of the value itself.
I have three different database books. Not one of them shows using surrogate keys.
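To illustrate the Organization, Account, Invoice chain from the reasons above, here is a rough sketch of the two alternative Invoice designs (column types and the Account table are assumptions, not prescriptions):
-- Logical-key version: keys chain down, so Invoice can be queried by
-- organization directly, without joining through Account.
CREATE TABLE Account (
    OrgName   VARCHAR(100) NOT NULL,
    AccountID VARCHAR(20)  NOT NULL,
    PRIMARY KEY (OrgName, AccountID)
);
CREATE TABLE Invoice (
    OrgName       VARCHAR(100) NOT NULL,
    AccountID     VARCHAR(20)  NOT NULL,
    InvoiceNumber INT          NOT NULL,
    PRIMARY KEY (OrgName, AccountID, InvoiceNumber),
    FOREIGN KEY (OrgName, AccountID) REFERENCES Account (OrgName, AccountID)
);

-- Surrogate-key alternative: the chain is truncated to the immediate parent,
-- so organization-level queries need Invoice -> Account -> Organization joins.
CREATE TABLE Invoice (
    InvoiceID     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    AccountID     INT NOT NULL,              -- foreign key to Account(AccountID) in that design
    InvoiceNumber INT NOT NULL
);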
I want to share my experience with you on this endless war :D on natural vs surrogate key dilemma. I think that both surrogate keys (artificial auto-generated ones) and natural keys (composed of column(s) with domain meaning) have pros and cons. So depending on your situation, it might be more relevant to choose one method or the other.
As it seems that many people present surrogate keys as the almost perfect solution and natural keys as the plague, I will focus on the other point of view's arguments:
Disadvantages of surrogate keys
Surrogate keys are:
Source of performance problems:
They are usually implemented using auto-incremented columns, which means:
A round-trip to the database each time you want to get a new Id (I know that this can be improved using caching or [seq]hilo-like algorithms, but those methods still have their own drawbacks).
If one day you need to move your data from one schema to another (it happens quite regularly in my company, at least) then you might encounter Id collision problems. And yes, I know that you can use UUIDs, but those require 32 hexadecimal digits! (If you care about database size then it can be an issue.)
If you are using one sequence for all your surrogate keys then - for sure - you will end up with contention on your database.
Error prone. A sequence has a max_value limit, so - as a developer - you have to pay attention to the following points:
You must cycle your sequence (when the max-value is reached it goes back to 1, 2, ...).
If you are using the sequence as an ordering (over time) of your data then you must handle the case of cycling (a row with Id 1 might be newer than a row with Id max-value - 1).
Make sure that your code (and even your client interfaces, which should not happen as it is supposed to be an internal Id) supports the 32-bit/64-bit integers that you use to store your sequence values.
They don't guarantee non duplicated data. You can always have 2 rows with all the same column values but with a different generated value. For me this is THE problem of surrogate keys from a database design point of view.
More in Wikipedia...
Myths on natural keys
Composite keys are less efficient than surrogate keys. No! It depends on the database engine used:
Oracle
MySQL
Natural keys don't exist in real life. Sorry but they do exist! In the aviation industry, for example, the following tuple will always be unique for a given scheduled flight: (airline, departureDate, flightNumber, operationalSuffix). More generally, when a set of business data is guaranteed to be unique by a given standard then this set of data is a [good] natural key candidate.
Natural keys "pollute the schema" of child tables. For me this is more a feeling than a real problem. Having a 4-column primary key of 2 bytes each might be more efficient than a single 11-byte column. Besides, the 4 columns can be used to query the child table directly (by using the 4 columns in a WHERE clause) without joining to the parent table.
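The aviation tuple above, sketched as DDL (column types are guesses):
CREATE TABLE scheduled_flight (
    airline            CHAR(2)  NOT NULL,               -- e.g. IATA airline code
    departure_date     DATE     NOT NULL,
    flight_number      SMALLINT NOT NULL,
    operational_suffix CHAR(1)  NOT NULL DEFAULT '',
    PRIMARY KEY (airline, departure_date, flight_number, operational_suffix)
);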
Conclusion
Use natural keys when it is relevant to do so and use surrogate keys when it is better to use them.
Hope that this helped someone!
Always use a key that has no business meaning. It's just good practice.
EDIT: I was trying to find a link to it online, but I couldn't. However, 'Patterns of Enterprise Application Architecture' [Fowler] has a good explanation of why you shouldn't use anything other than a key with no meaning other than being a key. It boils down to the fact that it should have one job and one job only.
Surrogate keys are quite handy if you plan to use an ORM tool to handle/generate your data classes. While you can use composite keys with some of the more advanced mappers (read: hibernate), it adds some complexity to your code.
(Of course, database purists will argue that even the notion of a surrogate key is an abomination.)
I'm a fan of using uids for surrogate keys when suitable. The major win with them is that you know the key in advance e.g. you can create an instance of a class with the ID already set and guaranteed to be unique whereas with, say, an integer key you'll need to default to 0 or -1 and update to an appropriate value when you save/update.
UIDs have penalties in terms of lookup and join speed though so it depends on the application in question as to whether they're desirable.
Using a surrogate key is better in my opinion as there is zero chance of it changing. Almost anything I can think of which you might use as a natural key could change (disclaimer: not always true, but commonly).
An example might be a DB of cars - on first glance, you might think that the licence plate could be used as the key. But these could be changed, so that'd be a bad idea. You wouldn't really want to find that out after releasing the app, when someone comes to you wanting to know why they can't change their number plate to their shiny new personalised one.
Always use a single column, surrogate key if at all possible. This makes joins as well as inserts/updates/deletes much cleaner because you're only responsible for tracking a single piece of information to maintain the record.
Then, as needed, stack your business keys as unique constraints or indexes. This will keep your data integrity intact.
Business logic/natural keys can change, but the physical key of a table should NEVER change.
Case 1: Your table is a lookup table with less than 50 records (50 types)
In this case, use manually named keys, according to the meaning of each record.
For Example:
Table: JOB with 50 records
CODE (primary key)   NAME         DESCRIPTION
PRG                  PROGRAMMER   A programmer is writing code
MNG                  MANAGER      A manager is doing whatever
CLN                  CLEANER      A cleaner cleans
...
joined with
Table: PEOPLE with 100000 inserts
foreign key JOBCODE in table PEOPLE
looks at
primary key CODE in table JOB
Case 2: Your table is a table with thousands of records
Use surrogate/autoincrement keys.
For Example:
Table: ASSIGNMENT with 1000000 records
joined with
Table: PEOPLE with 100000 records
foreign key PEOPLEID in table ASSIGNMENT
looks at
primary key ID in table PEOPLE (autoincrement)
In the first case:
You can select all programmers in table PEOPLE without use of join with table JOB, but just with: SELECT * FROM PEOPLE WHERE JOBCODE = 'PRG'
In the second case:
Your database queries are faster because your primary key is an integer
You don't need to bother yourself with finding the next unique key because the database itself gives you the next autoincrement.
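Rough DDL for both cases (column types assumed):
-- Case 1: small lookup table with a meaningful natural key
CREATE TABLE JOB (
    CODE        CHAR(3)      NOT NULL PRIMARY KEY,   -- 'PRG', 'MNG', 'CLN', ...
    NAME        VARCHAR(50)  NOT NULL,
    DESCRIPTION VARCHAR(200) NOT NULL
);

-- Case 2: large table with an autoincrement surrogate key
CREATE TABLE PEOPLE (
    ID      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    NAME    VARCHAR(100) NOT NULL,
    JOBCODE CHAR(3) NOT NULL,
    FOREIGN KEY (JOBCODE) REFERENCES JOB (CODE)
);

-- No join needed to filter by job:
SELECT * FROM PEOPLE WHERE JOBCODE = 'PRG';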
Surrogate keys can be useful when business information can change or be identical. Business names don't have to be unique across the country, after all. Suppose you deal with two businesses named Smith Electronics, one in Kansas and one in Michigan. You can distinguish them by address, but that'll change. Even the state can change; what if Smith Electronics of Kansas City, Kansas moves across the river to Kansas City, Missouri? There's no obvious way of keeping these businesses distinct with natural key information, so a surrogate key is very useful.
Think of the surrogate key like an ISBN number. Usually, you identify a book by title and author. However, I've got two books titled "Pearl Harbor" by H. P. Willmott, and they're definitely different books, not just different editions. In a case like that, I could refer to the looks of the books, or the earlier versus the later, but it's just as well I have the ISBN to fall back on.
In a data warehouse scenario I believe it is better to follow the surrogate key path. Two reasons:
You are independent of the source system, and changes there --such as a data type change-- won't affect you.
Your DW will need less physical space since you will use only integer data types for your surrogate keys. Also your indexes will work better.
As a reminder, it is not good practice to place clustered indices on random surrogate keys, i.e. GUIDs that read XY8D7-DFD8S, as SQL Server has no ability to physically sort these data. You should instead place unique indices on these data, though it may also be beneficial to simply run SQL Profiler for the main table operations and then feed those data into the Database Engine Tuning Advisor.
See thread # http://social.msdn.microsoft.com/Forums/en-us/sqlgetstarted/thread/27bd9c77-ec31-44f1-ab7f-bd2cb13129be
This is one of those cases where a surrogate key pretty much always makes sense. There are cases where you either choose what's best for the database or what's best for your object model, but in both cases, using a meaningless key or GUID is a better idea. It makes indexing easier and faster, and it is an identity for your object that doesn't change.
In the case of a point-in-time database it is best to have a combination of surrogate and natural keys. e.g. you need to track member information for a club. Some attributes of a member never change, e.g. Date of Birth, but name can change.
So create a Member table with a member_id surrogate key and have a column for DOB.
Create another table called person_name and have columns for member_id, member_fname, member_lname, date_updated. In this table the natural key would be member_id + date_updated.
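A rough sketch of that structure (column types assumed):
CREATE TABLE member (
    member_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,   -- surrogate key
    dob       DATE NOT NULL                              -- attributes that never change
);

CREATE TABLE person_name (
    member_id    INT  NOT NULL,
    date_updated DATE NOT NULL,
    member_fname VARCHAR(50) NOT NULL,
    member_lname VARCHAR(50) NOT NULL,
    PRIMARY KEY (member_id, date_updated),   -- natural key: member plus point in time
    FOREIGN KEY (member_id) REFERENCES member (member_id)
);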
Horses for courses. To state my bias: I'm a developer first, so I'm mainly concerned with giving the users a working application.
I've worked on systems with natural keys, and had to spend a lot of time making sure that value changes would ripple through.
I've worked on systems with only surrogate keys, and the only drawback has been a lack of denormalised data for partitioning.
Most traditional PL/SQL developers I have worked with didn't like surrogate keys because of the number of tables per join, but our test and production databases never raised a sweat; the extra joins didn't affect the application performance. With database dialects that don't support clauses like "X inner join Y on X.a = Y.b", or developers who don't use that syntax, the extra joins for surrogate keys do make the queries harder to read, and longer to type and check: see Tony Andrews' post. But if you use an ORM or any other SQL-generation framework you won't notice it. Touch-typing also mitigates.
Maybe not completely relevant to this topic, but a headache I have dealing with surrogate keys. Oracle pre-delivered analytics creates auto-generated SKs on all of its dimension tables in the warehouse, and it also stores those on the facts. So, anytime they (dimensions) need to be reloaded as new columns are added or need to be populated for all items in the dimension, the SKs assigned during the update makes the SKs out of sync with the original values stored to the fact, forcing a complete reload of all fact tables that join to it. I would prefer that even if the SK was a meaningless number, there would be some way that it could not change for original/old records. As many know, out-of-the box rarely serves an organization's needs, and we have to customize constantly. We now have 3yrs worth of data in our warehouse, and complete reloads from the Oracle Financial systems are very large. So in my case, they are not generated from data entry, but added in a warehouse to help reporting performance. I get it, but ours do change, and it's a nightmare.

Implementing efficient foreign keys in a relational database

All popular SQL databases, that I am aware of, implement foreign keys efficiently by indexing them.
Assuming a N:1 relationship Student -> School, the school id is stored in the student table with a (sometimes optional) index. For a given student you can find their school just looking up the school id in the row, and for a given school you can find its students by looking up the school id in the index over the foreign key in Students. Relational databases 101.
But is that the only sensible implementation? Imagine you are the database implementer, and instead of using a btree index on the foreign key column, you add an (invisible to the user) set on the row at the other (many) end of the relation. So instead of indexing the school id column in students, you had an invisible column that was a set of student ids on the school row itself. Then fetching the students for a given school is as simple as iterating the set. Is there a reason this implementation is uncommon? Are there some queries that can't be supported efficiently this way? The two approaches seem more or less equivalent, modulo particular implementation details. It seems to me you could emulate either solution with the other.
In my opinion it's conceptually the same as splitting of the btree, which contains sorted runs of (school_id, student_row_id), and storing each run on the school row itself. Looking up a school id in the school primary key gives you the run of student ids, the same as looking up a school id in the foreign key index would have.
edited for clarity
You seem to be suggesting storing "comma separated list of values" as a string in a character column of a table. And you say that it's "as simple as iterating the set".
But in a relational database, it turns out that "iterating the set" when it's stored as a list of values in a column is not at all simple. Nor is it efficient. Nor does it conform to the relational model.
Consider the operations required when a member needs to be added to a set, or removed from the set, or even just determining whether a member is in a set. Consider the operations that would be required to enforce integrity, to verify that every member in that "comma separated list" is valid. The relational database engine is not going to help us out with that, we'll have to code all of that ourselves.
At first blush, this idea may seem like a good approach. And it's entirely possible to do, and to get some code working. But once we move beyond the trivial demonstration, into the realm of real problems and real world data volumes, it turns out to be a really, really bad idea.
Storing comma-separated lists is an all-too-familiar SQL anti-pattern.
I strongly recommend Chapter 2 of Bill Karwin's excellent book: SQL Antipatterns: Avoiding the Pitfalls of Database Programming ISBN-13: 978-1934356555
(The discussion here relates to "relational database" and how it is designed to operate, following the relational model, the theory developed by Ted Codd and Chris Date.)
"All nonkey columns are dependent on the key, the whole key, and nothing but the key. So help me Codd."
Q: Is there a reason this implementation is uncommon?
Yes, it's uncommon because it flies in the face of relational theory. And it makes what would be a straightforward problem (for the relational model) into a confusing jumble that the relational database can't help us with. If what we're storing is just a string of characters, and the database never needs to do anything with that, other than store the string and retrieve the string, we'd be good. But we can't ask the database to decipher that as representing relationships between entities.
Q: Are there some queries that can't be supported efficiently this way?
Any query that would need to turn that "list of values" into a set of rows to be returned would be inefficient. Any query that would need to identify a "list of values" containing a particular value would be inefficient. And operations to insert or remove a value from the "list of values" would be inefficient.
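A concrete contrast, assuming illustrative school/student tables: with a comma-separated student_ids column on school, finding which list contains a given student means parsing the string in every school row, whereas the conventional foreign-key design answers the same questions with indexed lookups.
-- List-in-a-column: no index can help; every row's string must be parsed.
SELECT * FROM school WHERE FIND_IN_SET('12345', student_ids) > 0;

-- The same question in the conventional design: one primary-key lookup.
SELECT school_id FROM student WHERE id = 12345;

-- And "all students of a given school" comes straight off the index on student.school_id:
SELECT * FROM student WHERE school_id = 42;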
This might buy you some small benefit in a narrow set of cases. But the drawbacks are numerous.
Such indices are useful for more than just direct joins from the parent record. A query might GROUP BY the FK column, or join it to a temp table / subquery / CTE; all of these cases might benefit from the presence of an index, but none of the queries involve the parent table.
Even direct joins from the parent often involve additional constraints on the child table. Consequently, indices defined on child tables commonly include other fields in addition to the key itself.
Even if there appear to be fewer steps involved in this algorithm, that does not necessarily equate to better performance. Databases don't read from disk a column at a time; they typically load data in fixed-size blocks. As a result, storing this information in a contiguous structure may allow it to be accessed far more efficiently than scattering it across multiple tuples.
No database that I'm aware of can inline an arbitrarily large column; either you'd have a hard limit of a few thousand children, or you'd have to push this list to some out-of-line storage (and with this extra level of indirection, you've probably lost any benefit over an index lookup).
Databases are not designed for partial reads or in-place edits of a column value. You would need to fetch the entire list whenever it's accessed, and more importantly, replace the entire list whenever it's modified.
In fact, you'd need to duplicate the entire row whenever the child list changes; the MVCC model handles concurrent modifications by maintaining multiple versions of a record. And not only are you spawning more versions of the record, but each version holds its own copy of the child list.
Probably most damning is the fact that an insert on the child table now triggers an update of the parent. This involves locking the parent record, meaning that concurrent child inserts or deletes are no longer allowed.
I could go on. There might be mitigating factors or obvious solutions in many of these cases (not to mention outright misconceptions on my part), though there are probably just as many issues that I've overlooked. In any case, I'm satisfied that they've thought this through fairly well...

SQL Server 2008: can 2 tables have the same composite primary key?

In this case, tables Reserve_details and Payment_details; can the 2 tables have the same composite primary key (clientId, roomId)?
Or should I merge the 2 tables so they become one:
clientId[PK], roomId[PK], reserveId[FK], paymentId[FK]
In this case, tables Reserve_details and Payment_details; can the 2 tables have the same composite primary key (clientId, roomId) ?
Yes, you can, it happens fairly often in Relational Databases.
(You have not set that tag, but since (a) you are using SQL Server, and (b) you have compound Keys, which indicates a movement in the direction of a Relational Database, I am making that assumption.)
Whether you should or not, in any particular instance, is a separate matter. And that gets into design; modelling; Normalisation.
Or should I merge the 2 tables so they become one:
clientId[PK], roomId[PK], reserveId[FK], paymentId[FK] ?
Ok, so you realise that your design is not exactly robust.
That is a Normalisation question. It cannot be answered on just that pair of tables, because:
Normalisation is an overall issue, all the tables need to be taken into account, together, in the one exercise.
That exercise determines Keys. As the PKs change, the FKs in the child tables will change.
The structure you have detailed is a Record Filing System, not a set of Relational tables. It is full of duplication, and confusion (Facts [1] are not clearly defined).
You appear to be making the classic mistake of stamping an ID field on every file. That (a) cripples the modelling exercise (hence the difficulties you are experiencing) and (b) guarantees a RFS instead of a RDb.
Solution
First, let me say that the level of detail in an answer is constrained to the level of detail given in the question. In this case, since you have provided great detail, I am able to make reasonable decisions about your data.
If I may, it is easier to correct the entire lot of them than to discuss and correct one or another pair of files.
Various files need to be Normalised ("merged" or separated)
Various duplicated fields need to be Normalised (located with the relevant Facts, such that duplication is eliminated)
Various Facts [1] need to be clarified and established properly.
Please consider this:
Reservation TRD
That is an IDEF1X model, rendered at the Table-Relation level. IDEF1X is the Standard for modelling Relational Databases. Please be advised that every little tick; notch; and mark; the crows feet; the solid vs dashed lines; the square vs round corners; means something very specific and important. Refer to the IDEF1X Notation. If you do not understand the Notation, you will not be able to understand or work the model.
The Predicates are very important, I have given them for you.
If you would like more information on the important Relational concept of Predicates, and how it is used to both understand and verify the model, as well as to describe it in business terms, visit this Answer, scroll down (way down) until you find the Predicate section, and read that carefully.
Assumption
I have made the following assumptions:
Given that it is 2015, when reserving a Room, the hotel requires Credit Card details. It forms the basis for a Reservation.
Rooms exist independently. RoomId is silly, given that all Rooms are already uniquely Identified by a RoomNo. The PK is ( RoomNo ).
Clients exist independently.
The real Identifier has to be (NameLast, NameFirst, Initial ... ), plus possibly StateCode. Otherwise you will have duplicate rows which are not permitted in a Relational Database.
However, that Key is too wide to be migrated into the child tables [2], so we add [3] a surrogate ( ClientId ), make that the PK, and demote the real Identifier to an AK.
CreditCards belong to Clients, and you want them Identified just once (not on each transaction). The PK is ( ClientId, CreditCardNo ).
Reservations are for Rooms, they do not exist in isolation, independently. Therefore Reservation is a child of Room, and the PK is ( RoomNo, Date ). You can use DateTime if the rooms are not for full days, if they are for short meetings, liaisons, etc.
A Reservation may, or may not, progress to be filled. The PK is identical to the parent. This allows just one filled reservation per Reservation.
Payments do not exist in isolation either. The Payments are only for Reservations.
The Payment may be for a ReservationFee (for "no shows"), or for a filled Reservation, plus extras. I will leave it to you to work out duration changes; etc. Multiple Payments (against a Reservation) are supported.
The PK is the Identifier of the parent, Reservation, plus a sequence number: ( RoomNo, Date, SequenceNo ).
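Expressed as rough DDL, the Keys described above would look something like this (column types and non-key columns are placeholders, and the Room/Client parents are omitted; the authoritative definition is the linked model):
CREATE TABLE Reservation (
    RoomNo          INT  NOT NULL,
    ReservationDate DATE NOT NULL,    -- "Date" in the model; renamed here only for clarity
    ClientId        INT  NOT NULL,    -- references Client
    CreditCardNo    CHAR(16) NOT NULL,
    PRIMARY KEY (RoomNo, ReservationDate)
);

CREATE TABLE Payment (
    RoomNo          INT  NOT NULL,
    ReservationDate DATE NOT NULL,
    SequenceNo      INT  NOT NULL,
    Amount          DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (RoomNo, ReservationDate, SequenceNo),
    FOREIGN KEY (RoomNo, ReservationDate) REFERENCES Reservation (RoomNo, ReservationDate)
);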
Relational Database
You now have a Relational Database, with levels of (a) Integrity (b) Power and (c) Speed, each of which is way, way, beyond the capabilities of a Record Filing System. Notice, there is just one ID column.
Note
1. A Database is a collection of Facts about the real world, limited to the scope that the app engages.
2. Which is the single reason that justifies the use of a surrogate.
3. A surrogate is always an addition, not a substitution. The real Keys that make the row unique cannot be abandoned.
Please feel free to ask questions or comment.

Naming Conventions for Multivariable Dependency Tables MySQL

Conventions for normalized databases rule that the best practice for dealing with multivariable dependencies is spinning them off into their own table with two columns. One column is the primary key of the original table (for example, customer name, of which there is one), while the other is the value which has multiple values (for example, email or phone; the customer could have multiple of these). Together these two columns constitute the primary key for the spun-off table.
However, when building normalized databases, I often find naming these spun-off tables troublesome. It's hard to come up with meaningful names for these tables. Is there a standard way of identifying these tables as multivariable dependency tables that are meaningless without the presence of the other table? Some examples I can think of (referencing the example above) are 'customer_phones' or 'customer_has_phones'. I don't think just 'phones' would be good, because that doesn't identify this table as related to and heavily dependent on the customers table.
In real life you end up running into a lot of combinations that vary a lot from each other.
Try to be as clear as possible in case someone else ends up inheriting your design. I personally like to keep short names in the parent tables so they don't end up being super long whenever the relationship grows or spins off new children.
For instance, if I have "Customer", "Subscriptions", "Product" tables I would end up naming their links like "Customer_Subscriptions" or "Subscriptions_Products" and such.
Most of the time it just gets down to what works better for you in terms of maintainability.
The convention we use is the name of the entity table, followed by the name of the attribute.
In your example, if the entity table is customer, the name of the table for the repeating (multi-valued) attribute would be customer_phone or customer_phone_number. (We almost always name tables in the singular, based on the idea that we are naming what ONE tuple (row) represents; e.g. a row in that table represents one occurrence of a phone number for a customer.)
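Following that convention, the spun-off table might look like this (types and names are illustrative, and the parent customer table is assumed to exist):
CREATE TABLE customer_phone (
    customer_id  INT         NOT NULL,
    phone_number VARCHAR(20) NOT NULL,
    PRIMARY KEY (customer_id, phone_number),   -- both columns together identify a row
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);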

Performance gain or less using an association table even when there is just a one-to-many relationship

I am going to build a PHP web application and have already decided to use Codeigniter + DataMapper (OverZealous edition).
I have discovered that DataMapper (OverZealous edition) requires the use of an extra association table even when there is actually just a one-to-many relationship.
For example, a country can have many players but a player can only belong to one country. Typical database design would be like this:
[countries] country_id(pk), country_name
[players] player_id(pk), player_name, country_id(fk)
However, in DataMapper, it requires the design to be like this:
[countries] country_id(pk), country_name
[players] player_id(pk), player_name
[asso_countries_players] countries_players_id(pk), country_id(fk), player_id(fk)
It's good for maintenance because if later we change our mind that a player can belong to more than one country, it can be done with very little effort.
But what I would like to know is, for such database design, in general, is there any performance gain or loss when compared to the typical design?
"The fastest way to do anything is not to do it at all." -- Cary Millsap, Optimizing Oracle Performance.
"A designer knows he has achieved true elegance not when there is nothing left to add, but when there is nothing left to take away." -- Antoine de Saint-Exupéry
The simpler implementation has two tables and three indexes, two unique.
The more complicated implementation has three tables and five indexes, four unique. The unique index on asso_countries_players.player_id (a surrogate ID rather than the player's name -- what happens if a player's name changes, like if they get married or legally change it, as Chad Ochocinco (nee Johnson) did?) is what enforces the 0..1 nature of the relationship between players and countries.
If the associative entity isn't required by the data model, then eliminate it. It's generally pretty trivial to transform a 0..1 relationship or 1..n relationship to an n..n relationship:
Add associative entity (and I'd question the need for a surrogate key there unless the relationship itself had attributes, like a start or end date)
Populate associative entity with existing data
Reimplement the foreign key constraints
Remove superseded foreign key column in child table.
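A rough sketch of those steps for the countries/players example from the question (constraint names and exact statements will vary by schema):
-- 1. Add the associative entity
CREATE TABLE asso_countries_players (
    country_id INT NOT NULL,
    player_id  INT NOT NULL,
    PRIMARY KEY (country_id, player_id),
    UNIQUE KEY uq_player (player_id),          -- keeps the relationship 0..1 from player to country
    FOREIGN KEY (country_id) REFERENCES countries (country_id),
    FOREIGN KEY (player_id)  REFERENCES players (player_id)
);

-- 2. Populate it from the existing foreign key column
INSERT INTO asso_countries_players (country_id, player_id)
SELECT country_id, player_id FROM players WHERE country_id IS NOT NULL;

-- 3. Foreign key constraints are declared on the new table above.
-- 4. Remove the superseded column from the child table
ALTER TABLE players DROP FOREIGN KEY players_ibfk_1;   -- actual constraint name: check SHOW CREATE TABLE players
ALTER TABLE players DROP COLUMN country_id;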
Selecting data and searching will mean more joins : you'll have to work on 3 tables instead of 2.
Inserting data will mean more insert queries : you'll have to insert to 3 tables instead of 2.
So I'm guessing this could mean a bit more work -- which, in turn, might hurt performance a bit.
Because this is one-to-many I'd personally not use an association table, it's totally unnecessary.
The performance hit from this decision won't be too great. But think about the context of your data too, don't just do it because some program tells you - understand your data.