Business Objects - Containers or functional? [closed] - language-agnostic

Where I work, we've gone back and forth on this subject a number of times and are looking for a sanity check. Here's the question: should Business Objects be data containers (more like DTOs), or should they also contain logic that can perform some functionality on that object?
Example: take a Customer object. It probably contains some common properties (Name, Id, etc.). Should that Customer object also include functions (Save, Calc, etc.)?
One line of reasoning says to separate the object from the functionality (single responsibility principle) and put the functionality in a Business Logic layer or object.
The other line of reasoning says, no, if I have a customer object I just want to call Customer.Save and be done with it. Why do I need to know about how to save a customer if I'm consuming the object?
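To make the two positions concrete, here is a minimal sketch of both shapes; the Customer, CustomerService and RichCustomer names are purely illustrative, not taken from any of our projects.

    // Option 1: the business object is a data container; logic lives elsewhere
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerService
    {
        public void Save(Customer customer) { /* persistence and rules live here */ }
    }

    // Option 2: the business object carries its own behaviour
    public class RichCustomer
    {
        public int Id { get; set; }
        public string Name { get; set; }

        public void Save() { /* the object knows how to persist itself */ }
    }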
Our last two projects have had the objects separated from the functionality, but the debate has been raised again on a new project. Which makes more sense?
EDIT
These results are very similar to our debates. One vote to one side or another completely changes the direction. Does anyone else want to add their 2 cents?
EDIT
Even though the answer sampling is small, it appears that the majority believe that functionality in a business object is acceptable as long as it is simple, but that persistence is best placed in a separate class/layer. We'll give this a try. Thanks for everyone's input...

Objects are state and behavior together. If an object has sensible behavior (e.g., calculating age for a Person from their birth date, or the total tax for an Invoice), by all means add it. Business objects that are nothing more than DTOs are termed an "anemic domain model." I don't think an anemic model should be a design requirement.
Persistence is a special kind of behavior. What I'm calling "sensible" is business behavior. A business object need not know that it's persistent. I'd say that a DAO can keep persistence separate from business behavior. I don't put "save" in the "sensible" category.
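A minimal sketch of that split, with hypothetical names: business behaviour stays on the object, while a separate DAO owns persistence.

    using System;

    public class Person
    {
        public string Name { get; set; }
        public DateTime BirthDate { get; set; }

        // "Sensible" business behaviour belongs on the object itself
        public int AgeInYears(DateTime asOf) =>
            asOf.Year - BirthDate.Year - (asOf.DayOfYear < BirthDate.DayOfYear ? 1 : 0);
    }

    // Persistence lives in a DAO; the Person never learns it is persistent
    public class PersonDao
    {
        public void Save(Person person) { /* SQL / ORM details go here */ }
    }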

Business objects CAN have business functionality.
Persistence is not business functionality; it is a technical implementation detail.
Long story short:
1. Save/Update/Delete/Find etc. - keep these away from the business objects, in a persistence layer.
2. CalculateSalary, ApplyDiscount etc. are business-related methods and can be:
   2.1. methods of the business objects (so the BO is a self-contained representation of the entity), or;
   2.2. separate services implementing particular functionality (so the BOs act more like DTOs).
As for point 2:
I should mention that approach 2.1 tends to make the BOs too bloated and violate the SRP, while 2.2 introduces more maintenance complexity.
I usually balance between 2.1 and 2.2, so that I put trivial things related to the data into the business objects and create services for slightly more complex scenarios (if there are more than 4 lines of code - make it a service).
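A rough sketch of that balance, with hypothetical names: a trivial calculation stays on the business object (2.1), while the more involved discount rules get their own service (2.2).

    public class Employee
    {
        public decimal BaseSalary { get; set; }
        public decimal Bonus { get; set; }

        // 2.1: trivial, data-related logic stays on the BO
        public decimal CalculateSalary() => BaseSalary + Bonus;
    }

    // 2.2: anything more involved becomes a service and the BO stays lean
    public class DiscountService
    {
        public decimal ApplyDiscount(decimal price, int loyaltyYears)
        {
            // rules that run past a handful of lines belong here, not on the BO
            decimal rate = loyaltyYears >= 5 ? 0.10m : 0.0m;
            return price * (1 - rate);
        }
    }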
This shifts the paradigm of Business Objects to be more Data Transfer Objects instead.
But all of this makes the project easier to develop, test and maintain.

The answer is the same regardless of platform or language. The key to this question is whether an object should be able to be autonomous or whether it is better for any given behavior to be spread out among objects with more focused responsibility.
For each class the answer might be different. We end up with a spectrum along which we can place classes based upon the Density of Responsibility.
(Level of responsibility for behavior)

                Autonomy  - - - - - - - - - - - - - - - -  Dependence

    Class size
       high        <<God object>>            <<Spaghetti code>>

       low         <<Template>>              <<Framework>>
Let's say you favor letting the class perform all the behaviours itself, or as many as you can. Starting on the left side of this graph, when you make your class more autonomous, the size of the class will grow unless you continuously refactor it to make it more generic. This leads to a template. If no refactoring is done, the tendency is for the class to become more "god-like", because if there is some behavior it needs, it has a method for that. The number of fields and methods grows and soon becomes both unmanageable and unavoidable. Since the class already does so much, coders would rather add to the monstrosity than try to piece it apart and cut the Gordian knot.
The right side of the graph has classes that depend on other classes to a large degree. If the dependency level is high but the individual class is small, that is a sign of a framework; each class doesn't do much and requires lots of dependent classes to accomplish some function. On the other hand, a highly-dependent class that also has a large amount of code is a sign that the class is full of Spaghetti.
The key to this question is to determine where you feel more comfortable on the graph. In any event, individual classes will end up spread out on the graph unless some organizational principle is applied, which is how you can achieve the results of Template or Framework.
Having just written that, I would say that there is a correlation between class size and degree of organization. Robert C. Martin (or "Uncle Bob") covers similar ground with package dependencies in his very thorough paper on Design Principles and Design Patterns. JDepend is an implementation of the ideas behind the graph on page 26 and complements static analysis tools such as Checkstyle and PMD.

I think it makes more sense for business objects to know how to "handle" themselves than to have to put that burden elsewhere in the system. In your example, the most logical place to deal with how to "save" customer data would be, to me, in the Customer object.
This may be because I consider the database to be the "data container", so I'm in favor of "business objects" being the higher level that protects the data container from direct access AND enforces standard "business rules" about how that data is accessed/manipulated.

We've used Rocky Lhotka's CSLA framework for years and love the way it is designed. In that framework all of the functionality is contained in the objects. While I can see the value of separating the logic out, I don't think we'll switch away from this philosophy anytime soon.

Business objects should be about encapsulating data and associated behaviors of the business entity modeled by that object. Think of it like this: one of the major tenets of object-oriented programming is encapsulating data and associated behaviors on that data.
Persistence is not a behavior of the modeled object. I find development progresses more smoothly if business objects are persistence ignorant. Developing and unit testing new code happen more quickly and more smoothly if the business objects are not specifically tied to the underlying plumbing. This is because I can mock those aspects and forget about having to jump through hoops to get to the database, etc. My unit tests will execute more quickly (a huge plus if you have thousands of automated tests that run with each build) and I will have less stress because I won't have tests failing because of database connection issues (great if you often work offline or remotely and can't always access your database - and, by the way, those aspects (database connectivity etc.) should be tested elsewhere!).
The other line of reasoning says, no, if I have a customer object I just want to call Customer.Save and be done with it. Why do I need to know about how to save a customer if I'm consuming the object?
Knowing that Customer has a Save method is already knowing how to save a customer object. You haven't avoided the problem by embedding that logic in your business object. Instead, you've made your code base more tightly coupled and therefore harder to maintain and test. Push off the responsibility of persisting the object to someone else.
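To make the testing point concrete, here is a sketch with hypothetical names: a persistence-ignorant Customer exercised against a fake repository, so no database is needed in the unit tests.

    using System.Collections.Generic;

    public interface ICustomerRepository
    {
        void Save(Customer customer);
    }

    public class Customer
    {
        public string Name { get; set; }

        // Pure business rule - nothing here knows about tables or connections
        public bool IsValid() => !string.IsNullOrWhiteSpace(Name);
    }

    // In a unit test, a trivial in-memory fake stands in for the real data access code
    public class FakeCustomerRepository : ICustomerRepository
    {
        public readonly List<Customer> Saved = new List<Customer>();
        public void Save(Customer customer) => Saved.Add(customer);
    }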

The business objects, as they are named, should obviously contain their own business logic, with the dynamics of the business logic across the domain living in the service layer.
On the other hand, could the BO be a composition of a data container (a DTO?) and methods, meaning the BOs are purely functional? That could avoid all the conversions between BOs and DTOs.

In an MVC architecture, can we say that the Model contains the business objects?

Related

Single Responsibility Principle Core Understanding

In brushing up on the SRP I read this document which I located via Uncle Bob's page on principles of OOD. I find the following passage puzzling and somewhat at odds with the rest of the document:
"If, on the other hand, the application is not changing in ways that cause the the two responsibilities to change at different times, then there is no need to separate them. Indeed, separating them would smell of Needless Complexity. There is a corollary here. An axis of change is only an axis of change if the changes actually occur. It is not wise to apply the SRP, or any other principle for that matter, if there is no symptom."
While I understand the answer to many software development questions is "it depends", principles like the SRP appear to be almost universally beneficial and to be implemented as a matter of course. The SRP itself affords code a high adaptability to future changes in requirements. Isn't the point to separate out responsibilities from the get-go to avoid struggling with highly coupled code and cascading changes later on?
I would really appreciate some clarification on this to make sure my understanding of this core principle is correct. Thanks in advance!
From my humble understanding, in the Modem example that is presented here, it might be possible that responsibilities of the modem (Connection and Data Exchange) will change as one.
You have two possibilities here :
When the protocol changes, it is possible that only the connection part changes, or only the data exchange part changes. In this case you should have two interfaces, because a change of protocol in the data exchange does not imply a change of protocol in the connection.
When the protocol changes, it will always change both the connection part and the data exchange part. In that case, you don't need two interfaces, because every time you have to rewrite the connection part, you are sure that the data exchange will change as well. In that case, you have two responsibilities on the same axis of change (which is the protocol handled by the modem), so you can leave them inside a single interface.
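A sketch of the two options in code; the interface names are mine, not from the paper.

    // Possibility 1: connection and data exchange can change independently,
    // so each responsibility gets its own interface
    public interface IConnection
    {
        void Dial(string number);
        void Hangup();
    }

    public interface IDataChannel
    {
        void Send(string data);
        string Receive();
    }

    // Possibility 2: both responsibilities always change together (one axis
    // of change), so a single interface is acceptable
    public interface IModem
    {
        void Dial(string number);
        void Hangup();
        void Send(string data);
        string Receive();
    }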
The key to this statement is "not changing in ways that cause the two responsibilities to change at different times". Let's say for the sake of argument you have a PaymentLogger and a Payment class. Every time you create a new PaymentType (CreditCard, Cash, Paypal, etc.) you need to update the PaymentLogger to log actions specific to those Payments. Instead of splitting out a PaymentLogger class, you could have the Payment class expose a method called Log which does whatever is specific to itself.
In this case it could be that the act of recording actions should be built into the class itself, since creating a new Payment also requires creating a new PaymentLogger. It's a responsibility that should have been part of Payment all along.
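In code, that argument looks roughly like this (the type names are hypothetical):

    using System;

    public abstract class Payment
    {
        public decimal Amount { get; set; }

        public abstract void Process();

        // Logging is treated as part of the Payment's own responsibility,
        // since every new PaymentType changes the logging anyway
        public virtual void Log() =>
            Console.WriteLine($"{GetType().Name} processed for {Amount}");
    }

    public class CreditCardPayment : Payment
    {
        public override void Process() { /* charge the card */ }
        public override void Log() => Console.WriteLine($"Credit card charged {Amount}");
    }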

What design pattern is best for an RTS game in AS3?

I'm looking to get some good books on design patterns and I'm wondering what particular pattern you'd recommend for a Realtime Strategy Game (like Starcraft), MVC?.
I'd like to make a basic RTS in Flash at some point and I want to start studying the best pattern for this.
Cheers!
The problem with this kind of question is that the answer completely depends on your design. RTS games are complicated, even simple ones. They have many systems that have to work together, and each of those systems has to be designed differently with a common goal.
But to talk about it a little here goes.
The AI system in an RTS usually has a few different levels to it. There is the unit-level AI, which can be as simple as a switch-based state machine all the way up to a full-scale behavior tree (composite/decorators).
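At the simple end of that range, a switch-based unit state machine might look something like the following sketch (written in C# rather than AS3; the states and sensor methods are made up):

    public enum UnitState { Idle, MoveToTarget, Attack, Flee }

    public class Unit
    {
        public UnitState State = UnitState.Idle;

        // Called once per game tick; each state decides what to do and
        // when to transition to another state
        public void Update()
        {
            switch (State)
            {
                case UnitState.Idle:
                    if (EnemyInSight()) State = UnitState.Attack;
                    break;
                case UnitState.MoveToTarget:
                    // step toward the target, switch to Attack when in range
                    break;
                case UnitState.Attack:
                    if (HealthLow()) State = UnitState.Flee;
                    break;
                case UnitState.Flee:
                    // head for the nearest friendly base
                    break;
            }
        }

        private bool EnemyInSight() => false;   // placeholder sensor
        private bool HealthLow() => false;      // placeholder sensor
    }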
You also generally need some type of high level planning system for the strategic level AI. (the commander level and the AI player)
There are usually a few levels in between those also and some side things like resource managers etc.
You can also go with event based systems which tie in nicely with flash's event based model as well.
For the main game engine itself a basic state machine (anything from switch based to function based to class based) can easily be implemented to tie everything together and tie in the menu system with that.
For the individual players a Model-View-Controller is a very natural pattern to aim for because you want your AI players to be exposed to everything the human player has access to. Thus the only change would be the controller (the brain) without the need for a view obviously.
As I said, this isn't something that can just be answered like a normal Stack Overflow question; it is completely dependent on the design and how you decide to implement it (like most things). There are tons of resources out there about RTS game design, and taking it all in is the only advice I can really give. Even simple RTSs are complex systems.
Good luck to you and I hope this post gives you an idea of how to think about it. (remember balance is everything in an RTS)
To support lots of units, you might employ Flyweight and Object Pool. A flyweight object can be stored entirely in a few bytes of a large ByteArray. To get a usable object, read the corresponding bytes, take an empty object and fill it with data from those bytes. An object pool can hold those usable objects to prevent constant allocation/garbage collection. This way, you can hold thousands of units in a ByteArray and manage them via a dozen pooled object shells.
It should be noted that this is more suitable for storing tile properties than units, however - tiles are more-or-less static, while units are created and destroyed on a regular basis.
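A rough C# sketch of that Flyweight + Object Pool idea (the field layout and names are invented for illustration; an AS3 version would pack a ByteArray the same way):

    using System.Collections.Generic;

    // Each tile is 3 bytes in one big array: terrain type, elevation, occupancy flag
    public class TileStore
    {
        private const int BytesPerTile = 3;
        private readonly byte[] _data;
        private readonly Stack<TileShell> _pool = new Stack<TileShell>();

        public TileStore(int tileCount) => _data = new byte[tileCount * BytesPerTile];

        // Borrow a pooled shell object filled from the packed bytes
        public TileShell CheckOut(int index)
        {
            TileShell shell = _pool.Count > 0 ? _pool.Pop() : new TileShell();
            int offset = index * BytesPerTile;
            shell.Terrain = _data[offset];
            shell.Elevation = _data[offset + 1];
            shell.Occupied = _data[offset + 2] != 0;
            return shell;
        }

        // Return the shell so lookups generate no garbage
        public void CheckIn(TileShell shell) => _pool.Push(shell);
    }

    public class TileShell
    {
        public byte Terrain;
        public byte Elevation;
        public bool Occupied;
    }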
It would be good to be familiar with a number of design patterns and apply them to the architecture of your game where appropriate.
For example, you may employ an MVC architecture for your overall application, the factory pattern for creating enemies, the decorator pattern for shop items, etc. Point being, design patterns are simply methodologies for structuring your objects. How you define those objects and how you decide they fit together is not prescriptive.

A brilliant example of effective encapsulation through information hiding?

"Abstraction and encapsulation are complementary concepts: abstraction focuses on the observable behavior of an object... encapsulation focuses upon the implementation that gives rise to this behavior... encapsulation is most often achieved through information hiding, which is the process of hiding all of the secrets of object that do not contribute to its essential characteristics." - Grady Booch in Object Oriented Analysis and Design
Can you show me some powerfully convincing examples of the benefits of encapsulation through information hiding?
The example given in my first OO class:
Imagine a media player. It abstracts the concepts of playing, pausing, fast-forwarding, etc. As a user, you can use this to operate the device.
Your VCR implemented this interface and hid or encapsulated the details of the mechanical drives and tapes.
When a new implementation of a media player arrives (say a DVD player, which uses discs rather than tapes) it can replace the implementation encapsulated in the media player and users can continue to use it just as they did with their VCR (same operations such as play, pause, etc...).
This is the concept of information hiding through abstraction. It allows for implementation details to change without the users having to know and promotes low coupling of code.
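A small sketch of that idea; the interface and class names are illustrative.

    public interface IMediaPlayer
    {
        void Play();
        void Pause();
        void FastForward();
    }

    // The VCR hides tapes and mechanical drives behind the interface
    public class Vcr : IMediaPlayer
    {
        public void Play() { /* spin the tape heads */ }
        public void Pause() { /* ... */ }
        public void FastForward() { /* ... */ }
    }

    // A DVD player swaps in a completely different implementation;
    // callers keep using Play/Pause exactly as they did before
    public class DvdPlayer : IMediaPlayer
    {
        public void Play() { /* read the disc */ }
        public void Pause() { /* ... */ }
        public void FastForward() { /* ... */ }
    }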
The *nix abstraction of character streams (disk files, pipes, sockets, ttys, etc.) into a single entity (the "everything is a file") model allows a wide range of tools to be applied to a wide range of data sources / sinks in a way that simply would not be possible without the encapsulation.
Likewise, the concept of streams in various languages, abstracting over lists, arrays, files, etc.
Also, concepts like numbers (abstracting over integers, half a dozen kinds of floats, rationals, etc.) - imagine what a nightmare this would be if higher-level code were given the mantissa format and so forth and left to fend for itself.
I know there's already an accepted answer, but I wanted to throw one more out there: OpenGL/DirectX
Neither of these APIs is a full implementation (although DirectX is certainly a bit more top-heavy in that regard); instead, they are generic ways of communicating render commands to a graphics card.
The card vendors are the ones that provide the implementation (driver) for a specific card, which in many cases is very hardware specific, but you as the user need never care that one user is running a GeForce ABC and the other a Radeon XYZ because the exact implementation is hidden away behind the high-level API. Were it not, you would need to have a code path in your games for every card on the market that you wanted to support, which would be completely unmanageable from day 1. Another big plus to this approach is that Nvidia/ATI can release a newer, more efficient version of their drivers and you automatically benefit with no effort on your part.
The same principle is in effect for sound, network, mouse, keyboard... basically any component of your computer. Whether the encapsulation happens at the hardware level or in a software driver, at some point all of the device specifics are hidden away to allow you to treat any keyboard, for instance, as just a keyboard and not a Microsoft Ergonomic Media Explorer Deluxe Revision 2.
When you look at it that way, it quickly becomes apparent that without some form of encapsulation/abstraction computers as we know them today simply wouldn't work at all. Is that brilliant enough for you?
Nearly every Java, C#, and C++ code base in the world has information hiding: It's as simple as the private: sections of the classes.
The outside world can't see the private members, so a developer can change them without needing to worry about the rest of the code not compiling.
What? You're not convinced yet?
It's easier to show the opposite. We used to write code that had no control over who could access details of its implementation. That made it almost impossible at times to determine what code modified a variable.
Also, you can't really abstract something if every piece of code in the world might possibly have depended on the implementation of specific concrete classes.

How often do you need to create a real class hierarchy in your day to day programming?

I create business applications with heavy database use. Most of the programming work is just to connect components to the database and modifying components to adapt to general interface behaviour. I mostly use Delphi with its rich VCL library, and generally buy components needed. I keep most of the business logic in the database. I rarely get the chance to build a nice class hierarchy from the bottom up as there really is no need. Anyone else have this experience?
For me, occasionally a problem is clearer or easier with subclassing, but not often.
This also changes quite a bit in a given design as it's refactored.
My biggest problem is that programming courses and texts give so much weight to inheritance, hierarchies, and polymorphism through base classes (vs. interfaces or dynamic typing). This helps create legions of programmers that subclass everything and their mother.
The answer to this question is not totally language-agnostic;
Some languages like Java have a fairly limited set of language features available, meaning that subclassing is fairly often used because it's a convenient method for re-use (technical inheritance).
The closures and lambdas of C# make inheritance for technical reasons much less relevant, so inheritance is normally used for semantic reasons (like Cat extends Animal).
The last C# project I worked on, we more or less made all of the class hierarchies within a few weeks. After that it was more or less over.
On my current java project we create new class hierarchies all of the time.
Other languages will have other features that similarly affect this composition (mixins come to mind)
I put on my architecting/class design hat probably once or twice a month. It's probably the best hat I have and is the most fun to wear.
Depends what stage of the lifecycle your project is in though.
When you're tackling problem domains you are well familiar with and already have a common code base to work from, you often have no need to create a new class hierarchy. It's when you stumble upon problems you have no ready solutions for that you start building your own.
It's also very dependent on the type of applications you develop. If your domain already has well-accepted conventions and libraries to work from, there probably isn't any need to reinvent the wheel (other than personal/academic interest). Some areas have inherently fewer available resources to work with, and in those you'll find yourself building everything from scratch most of the time.
A majority of applications, especially business applications, contain at least some kind of business logic. I would contend that business logic should not be in the database, but rather in the application. You can put referential integrity in the database, as I think that is a good choice, but business logic should live only in the application.
By class hierarchy, I suppose you mean whether you always end up with some inheritance in your object model; the answer is no. But chances are you can often find some common code, factor it out and create a base class to contain it.
If you agree with me on the point that business logic should not be in the database but in the application, then I recommend you look into the MVC design pattern to guide your design. Your design will contain classes or objects: your VCLs will represent the View, and you can have your Model classes map directly to database tables, i.e. each member in a model class corresponds to a field in a database table (this is the norm, but there will be exceptions where this simplicity fails to apply). Then you'll need a layer to handle the CRUD (Create, Read, Update, Delete) of the Model classes against the database tables. You will end up with a "layered" application that is easier to maintain and enhance.
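As a rough sketch of that layering (the table and property names are invented): a model class mirrors the table, and a separate class owns the CRUD.

    // Model: one property per column of the CUSTOMER table
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string City { get; set; }
    }

    // Data-access layer: owns the CRUD against the table, keeping SQL out of the model
    public class CustomerDataMapper
    {
        public Customer Read(int id) { /* SELECT ... WHERE ID = @id */ return null; }
        public void Create(Customer c) { /* INSERT ... */ }
        public void Update(Customer c) { /* UPDATE ... */ }
        public void Delete(int id)     { /* DELETE ... */ }
    }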
It depends on what you mean by hierarchy - inheritance or layering?
When object oriented languages first came out, inheritance was overused. Complicated hierarchies were common. Now, interfaces (as in Java and C#) provide a simpler way to get the benefit of polymorphism without the complications of inheritance. I rarely use inheritance anymore.
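For example, a sketch of getting polymorphism from an interface rather than a base class (the names are illustrative):

    using System;

    public interface IShape
    {
        double Area();
    }

    // Unrelated classes can satisfy the same contract without sharing a parent
    public class Circle : IShape
    {
        public double Radius { get; set; }
        public double Area() => Math.PI * Radius * Radius;
    }

    public class Rectangle : IShape
    {
        public double Width { get; set; }
        public double Height { get; set; }
        public double Area() => Width * Height;
    }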
Layering, however, is vital when creating a large application. Layering prevents general low-level classes (like lists) from directly referencing specific high-level classes (like web browser windows). As far as I know, there isn't a formal way to describe layering, but there are general guidelines (model-view-controller (MVC), separate GUI logic from business logic, separate data from presentation, etc.).
It really depends on the types/phases of the projects you're working on. I happen to do that everyday because I'm working on database internals for a new database, creating related libraries/frameworks. I'd imagine doing that a lot less if I'm working within a mature framework using other people's libraries.
I'm doing infrastructure for our company's product, so I'm writing a lot of code that will be used later by guys in other teams. So I end up writing lots of abstract classes, interfaces, hierarchies and so on. Mostly it's just a pattern of "default behaviour in an abstract/virtual class, which other programmers may override".
Very challenging, I must say.
The time that I find class hierarchies most beneficial is when the relationship between objects actually does match a true "is-a" relationship in the domain.
However if I can avoid large hierarchies I will due to the fact that they are often a little more tricky to map to relational databases and can really complicate your database designs. Since you say most of your applications make heavy use of databases this would be something to take into consideration.

How to avoid Anemic Domain Models and maintain Separation of Concerns?

How do you make your objects fully cognizant of their roles within the system, while still avoiding too many dependencies within the domain model on the database and service layers?
For example: say that I've got an entity with a revision history, and several "lookup tables" that the data references. The entity object should have methods to get the details from some of the lookup tables, whether by providing access to the lookup table rows or by delegating methods down to them, but in order to do so it depends on the database layer to read the data from those rows. Also, when the entity is saved, it needs to know not only how to save itself, but also how to save entries into the revision history. Is it necessary to pass references to dozens of different data layer objects and service objects to the model object? This seems like it makes the logic far more complex to understand than just passing thin models back and forth to service layer objects, but I've heard many "wise men" recommending this sort of structure.
Really really good question. I have spent quite a bit of time thinking about such topics.
You demonstrate great insight by noting the tension between an expressive domain model and separation of concerns. This is much like the tension in the question I asked about Tell Don't Ask and Single Responsibility Principle.
Here is my view on the topic.
A domain model is anemic because it contains no domain logic; other objects get and set data using an anemic domain object. What you describe doesn't sound like domain logic to me. It might be, but generally, "lookup tables" and other such technical language are most likely terms that mean something to us but not necessarily anything to the customers. If this is incorrect, please clarify.
Anyway, the construction and persistence of domain objects shouldn't be contained in the domain objects themselves because that isn't domain logic.
So to answer the question, no, you shouldn't inject a whole bunch of non-domain objects/concepts like lookup tables and other infrastructure details. This is a leak of one concern into another. The Factory and Repository patterns from Domain-Driven Design are best suited to keep these concerns apart from the domain model itself.
But note that if you don't have any domain logic, then you will end up with anemic domain objects, i.e. bags of brainless getters and setters, which is how some shops claim to do SOA / service layers.
So how do you get the best of both worlds? How do you focus your domain objects on domain logic only, while keeping UI, construction, persistence, etc. out of the way? I recommend you use a technique like Double Dispatch, or some form of restricted method access.
Here's an example of Double Dispatch. Say you have this line of code:
entity.saveIn(repository);
In your question, saveIn() would have all sorts of knowledge about the data layer. Using Double Dispatch, saveIn() does this:
repository.saveEntity(this.foo, this.bar, this.baz);
And the saveEntity() method of the repository has all of the knowledge of how to save in the data layer, as it should.
In addition to this setup, you could have:
repository.save(entity);
which just calls
entity.saveIn(this);
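Putting those pieces together, a minimal sketch of this Double Dispatch arrangement might look like the following (the foo/bar/baz fields follow the snippets above and are purely illustrative):

    public interface IRepository
    {
        void SaveEntity(string foo, int bar, decimal baz);
        void Save(Entity entity);
    }

    public class Entity
    {
        private readonly string foo;
        private readonly int bar;
        private readonly decimal baz;

        public Entity(string foo, int bar, decimal baz)
        {
            this.foo = foo;
            this.bar = bar;
            this.baz = baz;
        }

        // The entity knows nothing about tables or SQL; it only hands its
        // state to whatever repository it is given
        public void SaveIn(IRepository repository) => repository.SaveEntity(foo, bar, baz);
    }

    public class SqlRepository : IRepository
    {
        // All data-layer knowledge stays here
        public void SaveEntity(string foo, int bar, decimal baz) { /* INSERT ... */ }

        // Convenience overload that simply dispatches back to the entity
        public void Save(Entity entity) => entity.SaveIn(this);
    }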
I re-read this and I notice that the entity is still thin because it is simply dispatching its persistence to the repository. But in this case, the entity is supposed to be thin because you didn't describe any other domain logic. In this situation, you could say "screw Double Dispatch, give me accessors."
And yeah, you could, but IMO it exposes too much of how your entity is implemented, and those accessors are distractions from domain logic. I think the only class that should have gets and sets is a class whose name ends in "Accessor".
I'll wrap this up soon. Personally, I don't write my entities with saveIn() methods, because I think even just having a saveIn() method tends to litter the domain object with distractions. I use either the friend class pattern, package-private access, or possibly the Builder pattern.
OK, I'm done. As I said, I've obsessed on this topic quite a bit.
"thin models to service layer objects" is what you do when you really want to write the service layer.
ORM is what you do when you don't want to write the service layer.
When you work with an ORM, you are still aware of the fact that navigation may involve a query, but you don't dwell on it.
Lookup tables can be a relational crutch that gets used when there isn't a very complete object model. Instead of things referencing things, you have codes, which must be looked up. In many cases, the codes devolve to little more than a static pool of strings with database keys. And the relevant methods wind up in odd places in the software.
However, if there is a more complete object model, we have first-class things instead of these degenerate lookup values.
For example, I've got some business transactions which have one of n different "rate plans" -- a kind of pricing model. Right now, the legacy relational database has the rate plan as a lookup table with a code, some pricing numbers, and (sometimes) a description.
[Everyone knows the codes -- the codes are sacred. No one is sure what the proper descriptions should be. But they know the codes.]
But really, a "rate plan" is an object that is associated with a contract; the rate plan has the method that computes the final price. When an app asks the contract for a price, the contract delegates some of the pricing work to the associated rate plan object.
There may have been some database query going on to lookup the rate plan when producing a contract price, but that's incidental to the delegation of responsibility between the two classes.
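In code, that delegation might look roughly like this (the class and method names are my own, not from the legacy system):

    public class RatePlan
    {
        public string Code { get; set; }
        public decimal BaseRate { get; set; }

        // The pricing model is first-class behaviour, not a row to be looked up
        public decimal PriceFor(decimal quantity) => BaseRate * quantity;
    }

    public class Contract
    {
        public RatePlan RatePlan { get; set; }

        // The contract delegates part of the pricing work to its rate plan
        public decimal PriceFor(decimal quantity) => RatePlan.PriceFor(quantity);
    }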
I agree with DeadBeef - therein lies the tension. I don't really see, though, how a domain model is "anemic" simply because it doesn't save itself.
There has to be much more to it, i.e. it's anemic because the service is doing all the business rules and not the domain entity.
    Service(IRepository) injected

    Save()
    {
        DomainEntity.DoSomething();
        Repository.Save(DomainEntity);
    }

"DoSomething" is the business logic of the domain entity.
This would be anemic:

    Service(IRepository) injected

    Save()
    {
        if (DomainEntity.IsSomething)
            DomainEntity.SetItProperty();
        Repository.Save(DomainEntity);
    }
See the inherent difference? I do :)
Try the "repository pattern" and "Domain driven design". DDD suggests to define certain entities as Aggregate-roots of other objects. Each Aggregate is encapsulated. The entities are "persistence ignorant". All the persistence-related code is put in a repository object which manages Data-access for the entity. This way you don't have to mix persistence-related code with your business logic. If you are interested in DDD, check out eric evans book.