Which one is an ER diagram? (MySQL)

I started to study ER diagrams. When I browsed through ER diagram tutorials, I found something like Figure 1 and learned from that.
Figure 1
Then I tried to create a sample ER diagram in MySQL Workbench, and I got components like those in the diagram below.
Figure 2
Then I searched Google Images for "ER Diagram" and got both types of images. I don't know the similarities and differences between the two diagrams.
Can you please help me understand this in detail so I can move further?
Thanks in advance.

When developing databases, a DBA (or someone else) can use a data modeling technique known as the Entity-Relationship Diagram.
This technique (as mentioned in other answers) was developed by Peter Chen, and it is widely used today for developing database structures such as tables and the relations between them.
The first image shown represents a Conceptual Model of a given problem/situation. The second image is a Physical Model of a problem/situation. Both models are part of the whole concept of data modeling introduced by Peter Chen, the Entity-Relationship Diagram.
They (the models) represent stages in working through a problem/situation. As you get a description of the problem/situation, you begin to develop its Conceptual Model. Once it is ready, the model is decomposed to become a new model called the Logical Model.
The Logical Model is subsequently decomposed in turn, resulting in the Physical Model: the final representation of the structures of the tables of a database, containing the field names, their data types, the relationships between tables, primary keys, foreign keys, and so on.
The decomposition process follows strict rules proposed by Peter Chen; you do not just draw whatever you like. You make a model and then follow the rules to decompose it so that you can pass to the next stage.
You can see the Entity-Relationship Diagram as a tool or technique that helps you to develop a strong and concise database structure. With this technique, you create a Model (three, actually) expressing the business rules needed in a system/web application. However, remember the following things:
Even before the Conceptual Model exists, we have to describe the problem (the needs of the business rules) on paper (or in a document). That is where you will generate/start a Conceptual Model. This document may even constitute a phase of its own, prior to the Conceptual Model, called the Descriptive Model (this is not an official term from Peter Chen). That's where you establish the context of everything.
The context should cover only what is to be persisted (in a database). There is no need to describe things that are not persisted; your Descriptive Model should not contain unnecessary things.
During the development of the Conceptual Model, it is crucial that you completely forget about tables and foreign keys. Those things will only slow you down during this phase of development; they should be considered later, during the next stages.
I advise you to find out more about the Entity-Relationship Model (a.k.a. Entity-Relationship Diagram) and to study it. There are good books on the subject and a lot of material on the Internet. Once you have grasped it, believe me, the development of databases will become much easier and more enjoyable.
If you have major questions, please leave a comment and I will answer. Come join the community. Follow the entity-relationship tag; there are many interesting questions there that can help you in your studies. Also, keep asking, keep participating. We are here to exchange knowledge!
Oh, one more thing. Different professionals use slightly different notations. For example, some people represent cardinalities as N...1, others as N-1, and others as (N,1). These differences do not change the end result.
EDIT
I thank whoever showed me this.

Your first diagram is a proper ER diagram, using the concepts and notation developed by Peter Chen in his paper The Entity-Relationship Model - Toward a Unified View of Data. This notation depicts both entities (rectangles) and relationships (diamonds). Ternary and higher relationships are easily represented and visible in this notation.
Your second diagram is commonly called an ER diagram, but it doesn't distinguish entities from relationships; rather, the applications that produce these diagrams tend to confuse tables with entities and relationships with foreign key constraints. These diagrams have more in common with the network data model than with the entity-relationship model, since they depict only binary relationships between tables rather than n-ary relationships between entities.

Figure 1 is an Entity-Relationship Diagram: it shows abstract relationships between entities, along with their attributes.
Figure 2 is a Relational Schema Diagram, which goes a step further and specifies foreign keys, the data types of attributes, and one-to-many/many-to-many relationships.
Both are conceptual designs of databases, and honestly each one adds some aspects and leaves out others.

The first one is an Entity-Relationship diagram, although it's awfully specific, and a lot of that cruft can be omitted. There are simple conventions you can use to declare relationships between tables, like arrows that mean "one to one", "one to many", or "many to many", but I've found that most of the time simply knowing the relationship exists is good enough.
Here's an example of a very high-level ERD that simply establishes the connections between the different parts of your system:
There's usually no need to get into specifics at this point. Anyone not familiar with the project will immediately get a sense of how your data is structured and if they want to know more about implementation they can dig into the database level.
The second artifact there is a database diagram and is generally very specific in terms of details.
It's often easier to design your application by starting with a very simple ERD, iterating on it until you're happy, and then implementing it in terms of database tables, fields, and relationships later. That is the point at which you'd use a database design tool, if you prefer one.

Related

Creating a knowledge base with common relational databases

It might be a silly question, and I know that such systems usually use ontology authoring tools or rule-based languages like CLIPS, but is it possible to create a knowledge base with relational databases?
For example, let's say I want to create a recommendation system for diabetes patients. The application gets their blood glucose level and returns a recommendation based on their health situation. (I guess this one could easily be implemented with a mapping table.)
But what if other factors like age, male/female, calorie intake, etc. are added to the problem?
Is it still possible to design a knowledge base that relates these factors to a recommendation table?
You might be able to do that, but it's not recommended: it would put very high overhead on the system, and it would probably not be responsive.
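For the simple single-factor case, the "mapping table" idea can at least be sketched in code. The following is a hedged illustration only: the class names, thresholds, and recommendation strings are all invented placeholders, not a real rule set or medical advice.

using System;
using System.Collections.Generic;
using System.Linq;

// Each Rule pairs a condition over the input factors with a canned
// recommendation -- an in-memory stand-in for a database mapping table.
class Rule
{
    public Func<double, int, bool> Matches;  // (glucose, age) -> applies?
    public string Recommendation;
}

static class RecommendationEngine
{
    static readonly List<Rule> Rules = new List<Rule>
    {
        new Rule { Matches = (g, a) => g > 180,           Recommendation = "placeholder advice A" },
        new Rule { Matches = (g, a) => g > 140 && a > 60, Recommendation = "placeholder advice B" },
        new Rule { Matches = (g, a) => true,              Recommendation = "placeholder default advice" },
    };

    // The first matching row wins, so order the rules from most to
    // least specific.
    public static string Recommend(double glucose, int age) =>
        Rules.First(r => r.Matches(glucose, age)).Recommendation;
}

Notice how every extra factor (calorie intake, and so on) widens the predicate's signature and multiplies the rows, which is one way to see why the answer above warns about overhead and responsiveness.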

Class Diagrams - questionably useful?

How is a class diagram actually any different from just looking at the class definition with all the functions collapsed? I've been asked to write some, and I realized that this all amounts to... read the source... it has comments. What's the point of a class diagram, how is it different from even minimally commented definitions, and what makes one class diagram better than another?
Edit: Yes, the source already exists, and did so long before the class diagrams.
Another edit: People have been talking about visual vs. textual tastes. That's not the definition of class diagram I was given; it's still purely textual. The sample class diagram is a bunch of text that resembles the source code with the function definitions cut. That's the reason I asked. If it were a genuine diagram, I could understand.
If you have one or two classes, it makes no difference.
If you have a complex object model, things change.
And, at least for me, it's easier to look first at a diagram to find what I want, instead of digging through a bunch of source files.
Also, seeing the classes and their relations in a picture helps to understand the ideas of the project.
I'd rather have source. Given that, I can always reverse engineer it.
You have to ask what UML is for: it's just a communication device, a way to get your ideas across to other developers. If UML is helping, great. If it becomes another burden to maintain, prefer working code with good unit tests.
A good class diagram clearly shows each class's responsibilities and associations, at an appropriate level of abstraction.
Class diagrams are useful because they allow you to design at a higher level of granularity. Operations drawn on a whiteboard are easier to change than source code. A diagram also shows associations clearly, through lines, rather than making you leaf through code.
They're helpful in that they are a segue from conceptual ideas to source code.
They let you say more with less.
If the source already exists, I guess it's the old adage, "A picture tells a thousand words".
For someone not familiar with the source, a diagram may help them grok the overall design quicker than reading the source, no matter how well documented. Some people are more visual than others. Personally, I'd rather have the source.
Like many things, it's probably a matter of taste.
Edit:
I thought the definition of a diagram was that it is visual. However, if it's just a bunch of text, then the only point I can see is that it provides an overview of intent without the unnecessary implementation details.
The difference between looking at a diagram and reading the source is that you don't need to process as much data when looking at the diagram (a picture) as when reading the source (the proverbial thousand words).
In my experience, class diagrams are very useful when I'm not familiar with the architecture of the software. But class diagrams don't replace the need for source code and proper documentation; they're just a communication and productivity tool that complements the methods I mentioned before. Their intent is to help you understand the software architecture, not to replace other documentation. How useful a class diagram is depends on its quality and on the complexity of both the diagram and the source code.
Don't put too much detail into the diagrams. It makes them confusing. You'll want them to communicate relationships, not API and a list of methods.
They also help to see when and where to refactor code. Use class diagrams along with proper documentation and you'll be all set.
I'm not sure quite what definition you've been given for a Class Diagram - it sounds almost as though the example you've been shown has just one class on it. If so, I can understand why you think it's a bit ridiculous.
Class Diagrams are a way to show the relationships between classes - a good one can provide a lot of information about how your system works in one diagram that rewards careful study. It allows a developer unfamiliar with a subsystem to come up to speed quickly without getting mired in the implementation details.
Here's one simple one I found with a quick Google:
http://netbeans.org/images_www/articles/uml-class-diagram/Completed-Class-Diagram.gif
Some tools (Microsoft's Visual Studio is one) allow you to draw a class diagram once and have it automatically kept up to date ("in sync") with the code. Very useful.

Is there any reason to use one DataContext instance, instead of several?

For example, I have two methods that use one DataContext (LINQ to SQL).
using (DataContext data = new DataContext(connectionString)) // DataContext needs a connection
{
    // doing something
    another_datamethod(data);
}

void another_datamethod(DataContext data)
{
    // doing something else
}
Should I use this style? Or could I, with the same result, create a separate using block with its own DataContext in each method? What benefits would I get from using one DataContext? Maybe some caching possibilities?
Recently, I've read numerous articles and blogs that "highly recommend" that you use multiple DataContexts for your applications, due to multiple issues, including the creation of records associated with lookup tables. When I was learning LINQ to SQL, one of its most attractive qualities for me was the ability to import my complete database schema into one "big" DataContext. So that's what I did... but a few months later, in came the contradictory information saying that what I did was a bad thing. What to do, what to do...
Nine months later, here's where I stand. My single large DataContext is still my single large DataContext. I have over thirty data repository classes accessing the sixty-plus tables contained within, and I still haven't seen a valid reason to break up my existing data domain, or to handle the next project with anything other than a single DataContext. The problems that the article and blog writers experienced were valid problems. However, like most things technical, there's never just one way to do things. The best investment of my time and energy was to learn and truly understand how LINQ to SQL does what it does. The best book I found to help me do exactly that is Pro LINQ: Language Integrated Query in C# 2008 by Joseph C. Rattz, Jr. The LINQ to SQL coverage is detailed and clear, and there are plenty of examples to clarify the mystery.
So, in your case, create one big DataContext or create many smaller ones... the choice is up to you. Smaller ones clearly give a better opportunity for reuse, while one big one lets you spend more of your time on business logic and presentation code.
DataContexts track changes and do caching, so yes, caching is a possibility, depending on what work you are performing.
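To make the caching point concrete, here is a minimal sketch of the "one context per unit of work" style (the entity, table, and connection-string names are hypothetical). Both methods share the same change tracker and identity cache, so a row queried again inside the unit of work comes back from cache, and a single SubmitChanges covers all pending changes.

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Customers")]
class Customer
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string Name;
}

class CustomerService
{
    public void RenameCustomer(int id, string newName)
    {
        // One DataContext per logical unit of work.
        using (var db = new DataContext("your-connection-string"))
        {
            var customer = FindCustomer(db, id); // same context, same cache
            customer.Name = newName;
            db.SubmitChanges();                  // one atomic submit
        }
    }

    Customer FindCustomer(DataContext db, int id)
    {
        // Reusing the caller's context means the identity map can hand
        // back an already-tracked instance instead of a fresh copy.
        return db.GetTable<Customer>().Single(c => c.Id == id);
    }
}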

How often do you need to create a real class hierarchy in your day to day programming?

I create business applications with heavy database use. Most of the programming work is just connecting components to the database and modifying components to adapt to the general interface behaviour. I mostly use Delphi with its rich VCL library, and I generally buy the components I need. I keep most of the business logic in the database. I rarely get the chance to build a nice class hierarchy from the bottom up, as there really is no need. Does anyone else have this experience?
For me, occasionally a problem is clearer or easier with subclassing, but not often.
This also changes quite a bit in a given design as it's refactored.
My biggest problem is that programming courses and texts give so much weight to inheritance, hierarchies, and polymorphism through base classes (vs. interfaces or dynamic typing). This helps create legions of programmers who subclass everything and their mother.
The answer to this question is not totally language-agnostic.
Some languages, like Java, have a fairly limited set of language features available, which means subclassing is used fairly often simply because it's a convenient mechanism for re-use: inheritance for technical reasons.
The closures and lambdas of C# make inheritance for technical reasons much less relevant, so inheritance is normally used for semantic reasons (like Cat extends Animal).
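As a concrete (and invented) illustration of that C# point, a delegate parameter can do the job that a "template method" subclass would otherwise do:

using System;

class Report
{
    // The varying step is injected as a delegate instead of being an
    // abstract method that would force one subclass per variation.
    public static void Run(Func<string, string> formatLine)
    {
        foreach (var row in new[] { "alpha", "beta" })
            Console.WriteLine(formatLine(row));
    }
}

class Program
{
    static void Main()
    {
        Report.Run(row => row.ToUpper());   // variation #1
        Report.Run(row => "[" + row + "]"); // variation #2, no subclass needed
    }
}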
On the last C# project I worked on, we more or less created all of the class hierarchies within a few weeks. After that, it was more or less over.
On my current Java project, we create new class hierarchies all the time.
Other languages have other features that similarly affect this trade-off (mixins come to mind).
I put on my architecting/class design hat probably once or twice a month. It's probably the best hat I have and is the most fun to wear.
Depends what stage of the lifecycle your project is in though.
When you're tackling problem domains you are already familiar with and have a common code base to work from, you often have no need to create a new class hierarchy. It's when you stumble upon problems you have no ready solutions for that you start building your own.
It's also very dependent on the type of applications you develop. If your domain already has well-accepted conventions and libraries to work from, there probably isn't any need to reinvent the wheel (other than for personal/academic interest). Some areas have inherently fewer resources available to work with, and in those you'll find yourself building everything from scratch most of the time.
The majority of applications, especially business applications, contain at least some kind of business logic. I would contend that business logic should not be in the database; it should be in the application. You can put referential integrity in the database, which I think is a good choice, but business logic should live only in the application.
If by class hierarchy you mean whether you always end up with some inheritance in your object model, then the answer is no. But chances are you can often find some common code, factor it out, and create a base class to contain it.
If you agree with me that business logic should be in the application rather than the database, then I recommend you look into the MVC design pattern to guide your design. Your design will contain classes or objects: your VCLs will represent the View, and you can have your Model classes map directly to the database tables, i.e. each member of a Model class corresponds to a field in a database table (this is the norm, but there will be exceptions where the simple mapping fails to apply). Then you'll need a layer to handle the CRUD (Create, Read, Update, Delete) operations between the Model classes and the database tables. You will end up with a "layered" application that is easier to maintain and enhance.
It depends on what you mean by hierarchy - inheritance or layering?
When object-oriented languages first came out, inheritance was overused. Complicated hierarchies were common. Now, interfaces (as in Java and C#) provide a simpler way to get the benefit of polymorphism without the complications of inheritance. I rarely use inheritance anymore.
Layering, however, is vital when creating a large application. Layering prevents general low-level classes (like lists) from directly referencing specific high-level classes (like web browser windows). As far as I know, there isn't a formal way to describe layering, but there are general guidelines (model-view-controller (MVC), separate GUI logic from business logic, separate data from presentation, etc.).
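A minimal sketch of that interface-over-inheritance point (the types are invented for illustration): the caller depends only on the contract, and unrelated classes can satisfy it without sharing a base class.

using System;

interface IShape
{
    double Area();
}

class Circle : IShape
{
    public double Radius;
    public double Area() { return Math.PI * Radius * Radius; }
}

class Square : IShape
{
    public double Side;
    public double Area() { return Side * Side; }
}

class Program
{
    static void Main()
    {
        // Polymorphism through the interface, with no inheritance chain.
        IShape[] shapes = { new Circle { Radius = 1.0 }, new Square { Side = 2.0 } };
        foreach (var s in shapes)
            Console.WriteLine(s.Area());
    }
}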
It really depends on the types and phases of the projects you're working on. I happen to do it every day because I'm working on database internals for a new database, creating related libraries and frameworks. I'd imagine I'd do it a lot less if I were working within a mature framework using other people's libraries.
I'm doing infrastructure for our company's product, so I'm writing a lot of code that will be used later by people on other teams. I end up writing lots of abstract classes, interfaces, hierarchies, and so on. Mostly it follows a pattern of "default behaviour in an abstract/virtual class, which other programmers may override".
Very challenging, I must say.
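The pattern described above might look roughly like this (the class and member names are made up for the sketch):

using System;

// Infrastructure-style base: a sane default that consuming teams keep,
// plus one abstract hook they must fill in.
abstract class MessageHandler
{
    // Virtual: the default behaviour, overridable when needed.
    public virtual void OnError(Exception ex)
    {
        Console.Error.WriteLine(ex.Message);
    }

    // Abstract: the one thing every consumer must supply.
    public abstract void Handle(string message);
}

class LoggingHandler : MessageHandler
{
    public override void Handle(string message)
    {
        Console.WriteLine("handled: " + message);
    }
    // OnError is inherited unchanged -- the default behaviour.
}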
The time that I find class hierarchies most beneficial is when the relationship between objects actually does match a true "is-a" relationship in the domain.
However, if I can avoid large hierarchies, I will, because they are often a little trickier to map to relational databases and can really complicate your database designs. Since you say most of your applications make heavy use of databases, this is something to take into consideration.

Object Normalization

Along the same lines as database normalization: is there an approach to object normalization? Not a design pattern, but the same mathematics-like approach to normalizing object creation. For example, first normal form: no repeating fields...
here's some links to DB Normalization:
http://en.wikipedia.org/wiki/Database_normalization
http://databases.about.com/od/specificproducts/a/normalization.htm
Would this make object creation and self-documentation better?
Here's a link to an essay about class normalization (I guess we're really talking about classes):
http://www.agiledata.org/essays/classNormalization.html
Normalization has a mathematical foundation in predicate logic, and a clear and specific goal: that the same piece of information never be represented twice in a single model. The purpose of this goal is to eliminate the possibility of inconsistent information in a data model. It can be shown via mathematical proof that if a data model has certain specific properties (it passes the tests for 1st Normal Form (1NF), 2NF, 3NF, etc.), then it is free from redundant data representation, i.e. it is normalized.
Object orientation has no such underlying mathematical basis, and indeed, no clear and specific goal. It is simply a design idea for introducing more abstraction. The DRY principle, Command-Query Separation, Liskov Substitution Principle, Open-Closed Principle, Tell-Don't-Ask, Dependency Inversion Principle, and other heuristics for improving quality of code (many of which apply to code in general, not just object oriented programs) are not absolute in nature; they are guidelines that programmers have found useful in improving understandability, maintainability, and testability of their code.
With a relational data model, you can say with absolute certainty whether it is "normalized" or not, because it must pass ALL the tests for normal form, and they are quite specific. With an object model, on the other hand, because the goal of "understandable, maintainable, testable, etc" is rather vague, you cannot say with any certainty whether you have met that goal. With many of the design heuristics, you cannot even say for sure whether you have followed them. Have you followed the DRY principle if you're applying patterns to your design? Surely repeated use of a pattern isn't DRY? Furthermore, some of these heuristics or principles aren't always even necessarily good advice all the time. I do try to follow Command-Query Separation, but such useful things as a Stack or a Queue violate that concept in order to give us a rather elegant and useful result.
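The Stack example is easy to see in code: Pop is simultaneously a command (it mutates the stack) and a query (it returns a value), a deliberate CQS violation that is nonetheless elegant.

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var stack = new Stack<int>();
        stack.Push(42);

        int top = stack.Pop();  // mutates AND returns: not strict CQS,
        Console.WriteLine(top); // but more convenient than pairing
                                // Peek() with a void Pop().
    }
}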
I guess the Single Responsibility Principle is at least related to this. Or at least, violation of the SRP is similar to a lack of normalization in some ways.
(It's possible I'm talking rubbish. I'm pretty tired.)
Interesting.
You may also be interested in looking at the Law of Demeter.
Another thing you may be interested in is c2's FearOfAddingClasses, since, arguably, the same reasoning that leads programmers to denormalise databases also leads to god classes and other code smells. For both OO and DB normalisation, we want to decompose everything: for databases this means more tables; for OO, more classes.
Now, it is worth bearing in mind the object relational impedance mismatch, that is, probably not everything will translate cleanly.
Object-relational mappers, or 'persistence layers', usually have 1-to-1 mappings between object attributes and database fields. So, can we normalise? Say we have a department object with employee1, employee2, ... attributes. Obviously that should be replaced with a list of employees, so we can say 1NF works.
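In code, that '1NF for objects' step looks something like this sketch (field names invented):

using System.Collections.Generic;

// Before: a repeating group, the object-level analogue of repeated columns.
class DepartmentUnnormalized
{
    public string Employee1;
    public string Employee2;
    public string Employee3; // ...and so on, with an arbitrary cap
}

// After: the repeating fields collapse into one collection-valued attribute.
class Department
{
    public List<string> Employees = new List<string>();
}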
With that in mind, let's go straight for the kill and look at 6NF database design. A good example is Anchor Modeling (ignore the naming convention). Anchor Modeling/6NF provides highly decomposed and flexible database schemas; how does this translate to OO 'normalisation'?
Anchor Modeling has these kinds of relationships:
Anchors - unique object IDs.
Attributes, which translate to object attributes: (Anchor, value, metadata).
Ties - relationships between two or more objects (themselves anchors): (Anchor, Anchor, ..., metadata).
Knots - attributed Ties.
Attribute metadata can be anything - who changed an attribute, when, why, etc.
The OO translation of this looks extremely flexible:
Anchors suggest attribute-less placeholders, like a proxy which knows how to deal with the attribute composition.
Attributes suggest classes representing attributes and what they belong to. This suggests applying reuse to how attributes are looked up and dealt with, e.g. automatic constraint checking. From this we have a basis to generically implement the GoF-style structural patterns.
Ties and Knots suggest classes representing relationships between objects. A basis for generic implementation of the Behavioural design patterns?
Interesting and desirable properties of Anchor Modeling that also translate across are:
All this requires replacing inheritance with composition (good) in the exposed objects.
Attributes have owners rather than owners having attributes. Although this makes attribute lookup more complex, it neatly solves certain aliasing problems, as there can only ever be one owner.
No need for NULL. This translates to clearer NULL handling. Empty-case attribute classes could provide methods for handling the lack of a particular attribute, instead of performing NULL-checking everywhere.
Attribute metadata. Attribute-level full historisation and blaming: 'play' objects back in time, see what changed, when and why, etc. (if required - metadata is entirely optional)
There would probably be a lot of very simple classes (which is good), and a very declarative programming style (also good).
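As a purely speculative sketch of how those concepts might land in classes (none of this is an official Anchor Modeling API, and all names are invented):

using System;
using System.Collections.Generic;

// Anchor: an attribute-less identity placeholder.
class Anchor
{
    public Guid Id = Guid.NewGuid();
}

// An attribute "owns" its anchor rather than the other way around:
// (Anchor, value, metadata), so there can only ever be one owner.
class Attribute<T>
{
    public Anchor Owner;
    public T Value;
    public DateTime ChangedAt; // metadata: when
    public string ChangedBy;   // metadata: who
}

// Tie: a relationship between two or more anchors, again with metadata.
class Tie
{
    public List<Anchor> Members = new List<Anchor>();
    public DateTime ChangedAt;
}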
Thanks for such a thought-provoking question; I hope this is useful to you.
Perhaps you're approaching this from a relational point of view, but I would posit that the principles of interfaces and inheritance correspond to normalization in the world of OOP.
For example, a Person abstract class containing FirstName, LastName, Gender, and BirthDate can be used as a valid base class by classes such as Employee, User, Member, etc., without any need to repeat the definitions of those attributes in the subclasses.
The DRY principle (a core principle of Andy Hunt and Dave Thomas's book The Pragmatic Programmer) and object-oriented programming's constant emphasis on re-use also correspond to the efficiencies offered by normalization in relational databases.
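Sketched out, that Person example declares the shared attributes once, which is the OO analogue of factoring a repeated column group into its own table (the subclass members are invented for the sketch):

using System;

abstract class Person
{
    public string FirstName;
    public string LastName;
    public char Gender;
    public DateTime BirthDate;
}

// Each subclass adds only what is specific to it; the Person
// attributes are never repeated.
class Employee : Person { public decimal Salary; }
class User : Person { public string Login; }
class Member : Person { public DateTime JoinedOn; }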
At first glance, I'd say that the objectives of Code Refactoring are similar in an abstract way to the objectives of normalization. But that's pretty abstract.
Update: I almost wrote earlier that "we need to get Jon Skeet in on this one." I posted my answer and who beat me? You guessed it...
Object Role Modeling (not to be confused with Object Relational Mapping) is the closest thing I know of to normalization for objects. It doesn't have as mathematical a foundation as normalization, but it's a start.
In a fairly ad hoc and untutored fashion that will probably cause purists to scoff, and perhaps rightly, I think of a database table as a set of objects of a particular type, and vice versa; then I take my thoughts from there. Viewed this way, it doesn't seem to me that there's anything particularly special you have to do to use normal forms in your everyday programming. Each object's identity will do for starters as its primary key, and references (pointers, etc.) will do by way of foreign keys. Then just follow the same rules.
(My objects usually end up in 3NF, or some approximation thereof. I treat this all more as guidelines, and, like I said, "untutored".)
If the rules are followed properly, each bit of information then ends up in one place, the interrelationships are clear, and everything is structured such that introducing inconsistencies takes some work. One could say that this approach produces good results on this basis, and I would agree.
One downside is that the end result can feel a bit like a tangle of spaghetti, particularly after some time away, and it's hard to shake the constant lingering sensation, even though it's usually false, that surely a few of all these links could be removed...
Object oriented design is rational but it does not have the same mathematically well-defined basis as the Relational Model. There is nothing exactly equivalent to the well-defined normal forms of database design.
Whether this is a strength or a weakness of Object oriented design is a matter of interpretation.
I second the SRP. The Open-Closed Principle also applies to "normalization", although I might be stretching the meaning of the word: it should be possible to extend the system by adding new implementations, without modifying the existing code. See the Object Mentor material about the OCP.
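A minimal sketch of that Open-Closed point (the names are invented): the system grows by adding a new implementation, never by editing the existing ones.

interface IDiscountRule
{
    decimal Apply(decimal price);
}

class SeasonalDiscount : IDiscountRule
{
    public decimal Apply(decimal price) { return price * 0.9m; }
}

// Added later: extends the system without touching the code above.
class LoyaltyDiscount : IDiscountRule
{
    public decimal Apply(decimal price) { return price - 5m; }
}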
Good question; sorry I can't answer in more depth.
I've been working on object normalization off and on for over 20 years. It's deep and complicated and beautiful, and it is the subject of my second planned book, Object Mechanics II. ONF = Object Normal Form; you heard it here first! ;-)
Since potentially patentable technology lurks within, I am not at liberty to say more, except that normalizing the data is the really easy part. ;-)
ADDENDUM: change of plans - see https://softwareengineering.stackexchange.com/questions/84598/object-oriented-normalization