A brilliant example of effective encapsulation through information hiding? - language-agnostic

"Abstraction and encapsulation are complementary concepts: abstraction focuses on the observable behavior of an object... encapsulation focuses upon the implementation that gives rise to this behavior... encapsulation is most often achieved through information hiding, which is the process of hiding all of the secrets of object that do not contribute to its essential characteristics." - Grady Booch in Object Oriented Analysis and Design
Can you show me some powerfully convincing examples of the benefits of encapsulation through information hiding?

The example given in my first OO class:
Imagine a media player. It abstracts the concepts of playing, pausing, fast-forwarding, etc. As a user, you can use this to operate the device.
Your VCR implemented this interface and hid or encapsulated the details of the mechanical drives and tapes.
When a new implementation of a media player arrives (say a DVD player, which uses discs rather than tapes) it can replace the implementation encapsulated in the media player and users can continue to use it just as they did with their VCR (same operations such as play, pause, etc...).
This is the concept of information hiding through abstraction. It allows implementation details to change without users having to know, and it promotes low coupling between pieces of code.
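A minimal sketch of that idea in Java (the class and method names here are just for illustration):

    interface MediaPlayer {
        void play();
        void pause();
        void fastForward();
    }

    // The VCR hides the tape mechanics behind the interface.
    class Vcr implements MediaPlayer {
        public void play()        { /* spin the tape reels */ }
        public void pause()       { /* hold the tape against the head */ }
        public void fastForward() { /* wind the tape quickly */ }
    }

    // A DVD player can replace it without changing any calling code.
    class DvdPlayer implements MediaPlayer {
        public void play()        { /* spin the disc, read with the laser */ }
        public void pause()       { /* stop reading, remember the position */ }
        public void fastForward() { /* skip ahead on the disc */ }
    }

    class LivingRoom {
        public static void main(String[] args) {
            MediaPlayer player = new DvdPlayer(); // was: new Vcr()
            player.play();                        // the caller is unchanged
            player.pause();
        }
    }

The only line that changes when the technology changes is the one that constructs the player.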

The *nix abstraction of character streams (disk files, pipes, sockets, ttys, etc.) into a single entity (the "everything is a file" model) allows a wide range of tools to be applied to a wide range of data sources and sinks in a way that simply would not be possible without the encapsulation.
Likewise, the concept of streams in various languages, abstracting over lists, arrays, files, etc.
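For instance, in Java a single routine written against InputStream works unchanged for in-memory buffers, files, or sockets. A small sketch (the file name is just an example):

    import java.io.ByteArrayInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    class StreamDemo {
        // Works for any source that presents itself as an InputStream.
        static long countBytes(InputStream in) throws IOException {
            long count = 0;
            while (in.read() != -1) {
                count++;
            }
            return count;
        }

        public static void main(String[] args) throws IOException {
            try (InputStream memory = new ByteArrayInputStream("hello".getBytes())) {
                System.out.println(countBytes(memory)); // prints 5
            }
            try (InputStream file = new FileInputStream("notes.txt")) { // any readable file
                System.out.println(countBytes(file));
            }
        }
    }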
Also, consider concepts like numbers (abstracting over integers, half a dozen kinds of floats, rationals, etc.): imagine what a nightmare this would be if higher-level code were handed the mantissa format and so forth and left to fend for itself.

I know there's already an accepted answer, but I wanted to throw one more out there: OpenGL/DirectX
Neither of these APIs is a full implementation (although DirectX is certainly a bit more top-heavy in that regard); they are instead generic ways of communicating render commands to a graphics card.
The card vendors are the ones that provide the implementation (the driver) for a specific card, which in many cases is very hardware-specific, but you as the developer need never care whether one user is running a GeForce ABC and another a Radeon XYZ, because the exact implementation is hidden away behind the high-level API. Were it not, you would need a code path in your game for every card on the market that you wanted to support, which would be completely unmanageable from day one. Another big plus of this approach is that NVIDIA/ATI can release a newer, more efficient version of their drivers and you automatically benefit with no effort on your part.
The same principle is in effect for sound, network, mouse, keyboard... basically any component of your computer. Whether the encapsulation happens at the hardware level or in a software driver, at some point all of the device specifics are hidden away to allow you to treat any keyboard, for instance, as just a keyboard and not a Microsoft Ergonomic Media Explorer Deluxe Revision 2.
When you look at it that way, it quickly becomes apparent that without some form of encapsulation/abstraction computers as we know them today simply wouldn't work at all. Is that brilliant enough for you?

Nearly every Java, C#, and C++ code base in the world has information hiding: it's as simple as the private members of its classes.
The outside world can't see the private members, so a developer can change them without needing to worry about the rest of the code not compiling.
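A tiny example of that (a hypothetical class, not from any particular code base): the representation behind the private member can change while every caller keeps compiling.

    // Version 1: the balance is stored as a double.
    class Account {
        private double balance;                      // hidden detail

        public void deposit(double amount) { balance += amount; }
        public double getBalance()         { return balance; }
    }

    // Version 2: the representation switches to whole cents in a long to avoid
    // rounding trouble; callers never saw 'balance', so nothing else changes.
    class AccountV2 {
        private long cents;                          // new hidden detail

        public void deposit(double amount) { cents += Math.round(amount * 100); }
        public double getBalance()         { return cents / 100.0; }
    }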

What? You're not convinced yet?
It's easier to show the opposite. We used to write code that had no control over who could access details of its implementation. That made it almost impossible at times to determine what code modified a variable.
Also, you can't really abstract something if every piece of code in the world might possibly have depended on the implementation of specific concrete classes.

Related

Game Inventory - Design Pattern

I'm still studying OOP design, so what's the best way to build an inventory for a simple Flash game? It seems that more than one design pattern could deliver some kind of inventory, but I'm afraid I would lose flexibility if I tried to adapt one without a good understanding of the subject.
For the money used to buy what's available in the inventory, I thought of a Singleton: if enough cash is earned while playing the game, the player can buy new skills.
Maybe the Decorator pattern could list the thumbnails as buttons, and clicking one would apply new features and skills to the character.
I'd like to read standard advice on solving this problem, because I feel I'm on the wrong track. Thanks.
Stay away from singleton if possible
Singleton has its uses; however, I believe it's overused in a lot of cases.
The biggest problem with a singleton is that you're using global state, which is generally regarded as a bad thing: as the complexity of your software grows, it can cause unintended side effects.
Object composition might be a better way
For games you might want to take a look at using Object Composition rather than traditional OOD Modelling.
A software component is a software element that conforms to a component model and can be independently deployed and composed without modification according to a composition standard.
A component model defines specific interaction and composition standards. A component model implementation is the dedicated set of executable software elements required to support the execution of components that conform to the model.
A software component infrastructure is a set of interacting software components designed to ensure that a software system or subsystem constructed using those components and interfaces will satisfy clearly defined performance specifications.
Component based game engine design
http://www.as3dp.com/2009/02/21/design-pattern-principles-for-actionscript-30-favor-object-composition-over-class-inheritance/
Reading over the material in the first link should give you some excellent ideas on how to model your inventory system and have it extendable in a nice way.
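To make the composition idea concrete, here is a rough sketch (written in Java rather than AS3, and with made-up names) of a character assembled from small components instead of a deep inheritance chain:

    import java.util.ArrayList;
    import java.util.List;

    interface Component {
        void update();                                // called once per frame
    }

    class Inventory implements Component {
        private final List<String> items = new ArrayList<>();
        public void add(String item) { items.add(item); }
        public void update()         { /* e.g. apply item effects */ }
    }

    class Wallet implements Component {
        private int gold;
        public void earn(int amount)     { gold += amount; }
        public boolean spend(int amount) {
            if (gold < amount) return false;
            gold -= amount;
            return true;
        }
        public void update()             { }
    }

    class GameCharacter {
        private final List<Component> components = new ArrayList<>();
        public void add(Component c) { components.add(c); }
        public void update()         { components.forEach(Component::update); }
    }

    class Demo {
        public static void main(String[] args) {
            GameCharacter hero = new GameCharacter();
            Inventory bag = new Inventory();
            Wallet wallet = new Wallet();
            hero.add(bag);
            hero.add(wallet);

            wallet.earn(100);
            if (wallet.spend(40)) {
                bag.add("Fireball skill");            // bought from the shop
            }
        }
    }

New abilities become new components rather than new subclasses, which is what keeps the design extendable.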

What design pattern is best for an RTS game in AS3?

I'm looking to get some good books on design patterns, and I'm wondering which particular pattern you'd recommend for a real-time strategy game (like StarCraft). MVC?
I'd like to make a basic RTS in Flash at some point and I want to start studying the best pattern for this.
Cheers!
The problem with this kind of question is that the answer completely depends on your design. RTS games are complicated, even simple ones. They have many systems that have to work together, and each of those systems has to be designed differently while serving a common goal.
But to talk about it a little here goes.
The AI system in an RTS usually has a few different levels to it. There is the unit-level AI, which can range from a simple switch-based state machine all the way up to a full-scale behavior tree (composites/decorators).
You also generally need some type of high-level planning system for the strategic-level AI (the commander level and the AI player).
There are usually a few levels in between those as well, plus side systems like resource managers.
You can also go with event-based systems, which tie in nicely with Flash's event model.
For the main game engine itself, a basic state machine (anything from switch-based to function-based to class-based) can easily be implemented to tie everything together, including the menu system.
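For the switch-based end of that spectrum, a sketch (in Java for brevity, with made-up states) can be as small as this:

    enum GameState { MENU, PLAYING, PAUSED }

    class Game {
        private GameState state = GameState.MENU;

        void update() {
            switch (state) {
                case MENU:
                    // draw the menu, wait for "start"
                    break;
                case PLAYING:
                    // update units, AI, economy
                    break;
                case PAUSED:
                    // freeze the simulation, show the pause overlay
                    break;
            }
        }

        void startGame() { state = GameState.PLAYING; }
        void pause()     { state = GameState.PAUSED; }
        void resume()    { state = GameState.PLAYING; }
    }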
For the individual players, Model-View-Controller is a very natural pattern to aim for, because you want your AI players to have access to everything the human player has access to. Thus the only change is the controller (the brain), with no need for a view, obviously.
As I said, this isn't something that can just be answered like a normal Stack Overflow question; it is completely dependent on the design and how you decide to implement it (like most things). There are tons of resources out there about RTS game design, and taking it all in is the only advice I can really give. Even simple RTSs are complex systems.
Good luck to you and I hope this post gives you an idea of how to think about it. (remember balance is everything in an RTS)
To support lots of units, you might employ Flyweight and Object Pool. A flyweight object can be stored entirely in a few bytes of a large ByteArray. To get a usable object, read the corresponding bytes, take an empty object, and fill it with data from those bytes. An object pool can hold those usable objects to prevent constant allocation and garbage collection. This way, you can hold thousands of units in a ByteArray and manage them through a dozen pooled object shells.
It should be noted that this is more suitable for storing tile properties than units, however: tiles are more or less static, while units are created and destroyed on a regular basis.
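A rough sketch of the idea (Java-flavoured, with a made-up three-byte-per-unit layout; an AS3 ByteArray version would follow the same shape):

    class UnitShell {
        int x, y, health;

        void readFrom(byte[] data, int index) {
            int offset = index * 3;                  // 3 bytes per unit in this toy layout
            x      = data[offset]     & 0xFF;
            y      = data[offset + 1] & 0xFF;
            health = data[offset + 2] & 0xFF;
        }

        void writeTo(byte[] data, int index) {
            int offset = index * 3;
            data[offset]     = (byte) x;
            data[offset + 1] = (byte) y;
            data[offset + 2] = (byte) health;
        }
    }

    class UnitStore {
        private final byte[] data = new byte[3 * 10000];    // room for 10,000 units
        private final UnitShell shell = new UnitShell();     // a one-object "pool"

        UnitShell checkOut(int index) {        // reuse the shell instead of allocating
            shell.readFrom(data, index);
            return shell;
        }

        void checkIn(UnitShell s, int index) { // write any changes back
            s.writeTo(data, index);
        }
    }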
It would be good to be familiar with a number of design patterns and apply them to the architecture of your game where appropriate.
For example, you may employ an MVC architecture for your overall application, the Factory pattern for creating enemies, the Decorator pattern for shop items, etc. Point being, design patterns are simply methodologies for structuring your objects. How you define those objects and decide how they fit together is not prescriptive.

handling amorphous subsystems in formal software design

People like Alexander Stepanov and Sean Parent argue for a formal and abstract approach to software design.
The idea is to break complex systems down into a directed acyclic graph and hide cyclic behaviour in nodes representing that behaviour.
Parent gave presentations at BoostCon and Google (the slides from BoostCon introduce the approach on p. 24; there is also a video of the Google talk).
While I like the approach and think it's a necessary development, I have trouble imagining how to handle subsystems with amorphous behaviour.
Imagine, for example, a common pattern for state machines: using an interface that all states support, with different behaviour in the concrete implementations of the states.
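For concreteness, the pattern I have in mind looks roughly like this (a Java sketch; the names are only illustrative):

    interface State {
        State handle(String event);   // each concrete state returns the next state
    }

    class Idle implements State {
        public State handle(String event) {
            return "start".equals(event) ? new Running() : this;
        }
    }

    class Running implements State {
        public State handle(String event) {
            return "stop".equals(event) ? new Idle() : this;
        }
    }

    class Machine {
        private State current = new Idle();
        void fire(String event) { current = current.handle(event); }
    }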
How would one solve that?
Note that I am just looking for an abstract approach.
I can think of hiding that behaviour behind a node and defining different sub-DAGs for the states, but that complicates the design considerably if you want to influence the behaviour of the main DAG from a sub-DAG.
Your question is not clear. Define amorphous subsystems.
You are "just looking for an abstract approach" but then you seem to want details about an implementation in a conventional programming language ("common pattern for state-machines"). So, what are you asking for? How to implement nested finite state-machines?
Some more detail will help the conversation.
For a real abstract approach, look at something like Stream X-Machines:
... The X-machine model is structurally the same as the finite state machine, except that the symbols used to label the machine's transitions denote relations of type X→X. ...
The Stream X-Machine differs from Eilenberg's model, in that the fundamental data type X = Out* × Mem × In*, where In* is an input sequence, Out* is an output sequence, and Mem is the (rest of the) memory.
The advantage of this model is that it allows a system to be driven, one step at a time, through its states and transitions, while observing the outputs at each step. These are witness values, that guarantee that particular functions were executed on each step. As a result, complex software systems may be decomposed into a hierarchy of Stream X-Machines, designed in a top-down way and tested in a bottom-up way. This divide-and-conquer approach to design and testing is backed by Florentin Ipate's proof of correct integration, which proves how testing the layered machines independently is equivalent to testing the composed system. ...
But I don't see how the presentation is related to this. He seems to speak about a quite mainstream approach to programming, nothing similar to X-Machines. Anyway, the presentation is quite confusing and I have no time to see the video right now.
First impression of the talk, reading the slides only
The author touches haphazardly on numerous fields/problems/solutions, apparently without recognizing it: from Peopleware (for example Psychology of programming), to Software Engineering (for example software product lines), to various programming techniques.
How the various parts are linked and what exactly he is advocating is not clear at all (I'm accustomed to just reading slides, and they usually follow a coherent sequence):
Dataflow programming?
Constraint solving for user interfaces? For practical implementations, see Garnet for Common Lisp and Amulet/OpenAmulet for C++.
What advantages does this "new" concept-based generic programming give us over well-known approaches (for example, tools based on Hoare-logic pre/post conditions and invariants, or better, Hoare's Communicating Sequential Processes (CSP), or Hehner's Practical Theory of Programming, or some programming language with a sophisticated type system like ATS, Qi or Epigram, and so on)? It seems to me that introducing "concepts", which as-is are specific to C++, is no simpler than using the alternatives. Is it just about jargon and "politics"? (Finally formal methods... but disguised.)
Why organize program modules as a DAG and not as a tree, as David Parnas advocated decades ago in Designing Software for Ease of Extension and Contraction (a directly accessible PDF and slides from a lecture are available)? The work on X-Machines is probably an answer to this question (going even beyond DAGs), but, again, the author seems to describe a quite conventional program-development regime in which Parnas' approach is the only sensible one.
If and when I watch the video, I will update this answer.

At what point should architecture become layered?

Obviously, "Hello World" doesn't require a separated, modular front-end and back-end. But any sort of Enterprise-grade project does.
Assuming some sort of spectrum between these points, at which stage should an application become (conceptually, or at a design level) multi-layered? When a database or some external resource is introduced? When you find that you're anticipating spaghetti code in your methods/functions?
when a database, or some external resource is introduced.
but also:
always (except for the most trivial of apps) separate AT LEAST the presentation tier and the application tier
see:
http://en.wikipedia.org/wiki/Multitier_architecture
Layers are a means to keep a design loosely coupled and highly cohesive.
When you start to have a few classes (either implemented or just sketched with UML), they can be grouped logically into layers, or more generally packages or modules. This is the art of separating concerns.
The sooner the better: if you do not start layering early enough, you risk never doing it, because the effort required becomes too great.
Here are some criteria for when to layer:
Any time you anticipate the need to replace one part of the system with a different part.
Any time you find yourself needing to divide work amongst parallel teams.
There is no real answer to this question. It depends largely on your application's needs, and numerous other factors. I'd suggest reading some books on design patterns and enterprise application architecture. These two are invaluable:
Design Patterns: Elements of Reusable Object-Oriented Software
Patterns of Enterprise Application Architecture
Some other books that I highly recommend are:
The Pragmatic Programmer: From Journeyman to Master
Refactoring: Improving the Design of Existing Code
No matter your skill level, reading these will really open your eyes to a world of possibilities.
I'd say that, in most cases, having multiple distinct levels of abstraction in the concepts your code deals with is a strong signal to mirror them with levels of abstraction in your implementation.
This does not override the scenarios that others have highlighted already though.
I think once you ask yourself "hmm should I layer this" the answer is yes.
I've worked on too many projects that probably started off as a proof of concept or prototype and ended up being full projects used in production, which are horribly written and just reek of "get it done quick, we'll fix it later." Trust me, you won't fix it later.
The Pragmatic Programmer lists this as the Broken Window Theory.
Try and always do it right from the start. Separate your concerns. Build it with modularity in mind.
And of course try and think of the poor maintenance programmer who might take over when you're done!
Thinking of it in terms of layers is a little limiting. It's what you see in whitepapers about a product, but it's not how products really work. They have "boxes" that depend on each other in various ways, and you can make it look like they fit into layers but you can do this in several different configurations, depending on what information you're leaving out of the diagram.
And in a really well-designed application, the boxes get very small. They are down to the level of individual interfaces and classes.
This is important because whenever you change a line of code, you need to have some understanding of the impact your change will have, which means you have to understand exactly what the code currently does and what its responsibilities are. That means it has to be a small chunk with a single responsibility, implementing an interface that doesn't force clients to depend on things they don't need (the S and the I of SOLID).
You may find that your application can look like it has two or three simple layers, if you narrow your eyes, but it may not. That isn't really a problem. Of course, a disastrously badly designed application can look like it has layers if you squint as hard as you can. So those "high level" diagrams of an "architecture" can hide a multitude of sins.
My general rule of thumb is to at least separate the problem into a model and a view layer, and to throw in a controller if there is a possibility of more than one way of handling the model or piping data to the view.
(Or as the first answer, at least the presentation tier and the application tier).
Loose coupling is all about minimising dependencies, so I would say "layer" when a dependency is introduced, i.e. a database, a third-party application, etc.
Although 'layer' is probably the wrong term these days. Most of the time I use Dependency Injection (DI) through an Inversion of Control container such as Castle Windsor. This means that I can code on one part of my system without worrying about the rest. It has the side effect of ensuring loose coupling.
I would recommend DI as a general programming principle all of the time so that you have the choice on how to 'layer' your application later.
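A minimal constructor-injection sketch (hand-rolled Java with made-up names; a container such as Castle Windsor does this wiring for you in .NET, but the principle is the same):

    interface OrderRepository {
        void save(String order);
    }

    class DatabaseOrderRepository implements OrderRepository {
        public void save(String order) { /* talk to the database */ }
    }

    class OrderService {
        private final OrderRepository repository;

        // The dependency is handed in; OrderService never constructs a concrete class.
        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        void placeOrder(String order) {
            repository.save(order);
        }
    }

    class Bootstrap {
        public static void main(String[] args) {
            // In production the container does this; in a test you pass a fake.
            OrderService service = new OrderService(new DatabaseOrderRepository());
            service.placeOrder("example");
        }
    }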
Give it a look.
R

Ratio of real code to supporting code

I'm finding that only about 30% of my code actually solves problems; the rest is taken up by logging, tests, parameter checking, exceptions, error handling, and so on. Do you find that in your code, and is there an IDE/editor that allows you to hide code that's not interesting?
OTOH are there languages which make the support code more manageable and smaller in size?
Edit - I think we're all aware of the difference between business logic and other code. I'm not saying that the logging etc. is not important. The thing is, when I'm coding I'm either implementing business logic, or I'm making sure things don't break. For me those are two different ways of thinking; do others develop like that, and is there an IDE that supports that way of developing?
Supporting code is just as important as the "real code". The quality of your product is determined as much by supporting code as anything else.
Consider an automobile. In terms of just getting from point A to point B, that requires nothing more than a go-cart: a frame, a seat, an engine, a few tires. But modern cars have a lot more than just the basics. Highly efficient engines using electronic engine timing. Automatic transmissions. Bucket seats. Heating and A/C. Rack and pinion steering. Power brakes. Anti-lock brakes. Quiet, comfortable cabins protected from the weather. Air bags. Crumple zones and other advanced safety features. Etc. Etc.
Details and execution are important, even in software. If you find that your "supporting code" tends to look more like kludges and hacks, then it's time to rethink your fundamental approach. But ultimately the fit and finish determines quality of the end product as much as anything else.
Edit: The questions you should ask yourself:
Is your "supporting code":
An umbrella duct taped to a pole or a metal and glass cabin frame?
A piece of pipe tied to the front of the car or an energy absorbing bumper integrated into a crumple zone?
A grappling hook on a rope tied to the frame or 4-wheel anti-lock power brakes?
A pair of goggles and a thick coat or a windshield and a heating system?
Answers to these questions will probably affect how much you care about your "supporting code".
Edit: Response to Dave Turvey's comment:
I'd encourage rereading the original question, one of the examples of "support code" listed is "error handling". Consider this for a moment. Imagine it in the context of, say, an automobile, a microwave oven, or even an operating system. Should error handling be relegated to second class citizenship because it serves a "support" function in some abstract sense? In an automobile the safety features are part of the fundamental design of the vehicle and comprise a substantial part of the value of the car. The safety features and "error handling" of a microwave oven (indeed, of the microwave oven's embedded software as well) are an important part of its value as well. A microwave oven that was improperly shielded could cook food just fine, under the right circumstances, but it would pose a hazard to the operator.
The implicit featureset of every tool (software or otherwise) includes this list:
Robustness
Usability
Performance
Everything anyone has ever built or used has had these features. Failure to understand this will translate to failure to execute well on these features which will make for a poor quality product of low value and low commercial interest. There is no such thing as "support code", there is only a misunderstanding of the nature of what it means for a feature to be complete. A "feature" that works in the abstract only under laboratory conditions is an experiment, not a part of a product.
The idea of pure, pristine features floating on a bog of dirty, ugly support code is the wrong image of software development. Instead, think of elegant, superbly-integrated machinery that is well-built, intuitive to use, and powerful.
The supporting code is important, but you don't want to be distracted by it when you're focused on the real problem. There are two technologies that can help.
A language with first-class functions will help you modularize your code so that logging, timing, and so on can be implemented once and then combined with many other modules. It will also help you write unit tests. Some good ways to learn the techniques are to read the paper Why Functional Programming Matters and to learn about the QuickCheck tool. (No, I am not a shill for John Hughes, but he does do wonderful work.)
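As a small illustration (a Java sketch with made-up names, not taken from the paper): with functions as values, the timing and logging live in one wrapper instead of being repeated inside every method.

    import java.util.function.Supplier;

    class Support {
        static <T> T timedAndLogged(String label, Supplier<T> work) {
            long start = System.nanoTime();
            try {
                T result = work.get();
                System.out.println(label + " ok in " + (System.nanoTime() - start) + " ns");
                return result;
            } catch (RuntimeException e) {
                System.out.println(label + " failed: " + e.getMessage());
                throw e;
            }
        }

        public static void main(String[] args) {
            // The "real code" stays a one-liner; the support code lives in the wrapper.
            int answer = timedAndLogged("compute", () -> 6 * 7);
            System.out.println(answer);
        }
    }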
If you cannot use a programming language with powerful capabilities for modularization and reuse, or if you don't want to, Don Knuth's Literate Programming technique will help you organize your code so that you can split up parts the way you want and pay attention only to what you want, when you want. The Noweb literate-programming tool supports any language that can be written in ASCII, and also combinations of those languages.
If my IDE could hide "not interesting code" I would definitely turn the feature off. You wouldn't want that happening, I bet :)
There are certainly languages that minimise the amount of supporting code, but I don't think you could switch from Java to, let's say, JavaScript simply because in JavaScript you wouldn't have to declare every exception... I think it's quite necessary to have your supporting code where it is.
Oh, and you could have your program formally specified and mathematically proven, then you wouldn't need to support your code too much ;D
The real code you are referring to is usually called "Business Logic".
In a good unit testing system, your unit tests should be in their own classes (and probably their own assemblies) so that shouldn't be an issue.
The rest is language-based for the most part. The more advanced a language, the better its ability to avoid support code to some degree. Also, a well-targeted development system can help you avoid writing a lot of code (Visual Basic's screen builder, Ruby on Rails, ...), but these abstractions can break down and cause you to write just as much code as anything else if you use them to develop targets outside their intended types of apps. (VB & Ruby don't help all that much if you're calculating prime numbers.)
Beyond the language/platform, you have refactoring--the art of eliminating all the supporting code that you can (as well as redundancies in your business logic) to keep your code-base clean and small.
When practicing advanced refactoring, you'll probably end up writing tools for yourself.
Sometimes abstracting data out of your code and into a structured file of some sort can eliminate huge piles of support code and move the rest into "Business logic" because now parsing that data and setting it up is part of the "business" your program does.
This is a good trade-off because this type of business logic tends to be more readable and easier to factor. The other advantage of this kind of abstraction is that all your "configuration" is now done in data, which tends to make it somebody else's problem.
As an example of this type of tooling: Rails itself! It takes a lot of the boilerplate of web development and factors it out of the code and into libraries driven by data and simplistic code (Ruby blurs the line between code and data; its data files actually loop back to being specified in Ruby code!).
It's like you want to take a trip to the top of Pike's Peak. You can take the Winnebago, you can take your SUV, or a motorcycle, or ride up on your bike.
Some ways are more or less expensive, faster, etc. Sometimes you end up taking along a lot of stuff that isn't there strictly for accomplishing vertical progress; it's nice to have a beer in the cooler. But it pays to remember that you're responsible for everything that goes with you to the top.
Aspect Oriented Programming partly addresses this. It allows you to inject code into existing source/bytecode. This way you can make a task such as logging appear in its own module instead of woven into the business logic.
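For example, an AspectJ-style logging aspect might look roughly like this (a sketch; the package name is made up, and it needs the AspectJ or Spring AOP weaver to actually run):

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    @Aspect
    public class LoggingAspect {

        // Runs before every public method in the (hypothetical) business package;
        // the business classes themselves contain no logging code at all.
        @Before("execution(public * com.example.business..*.*(..))")
        public void logCall(JoinPoint joinPoint) {
            System.out.println("calling " + joinPoint.getSignature());
        }
    }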
Work expands to fill its container. This really sounds like an economics question: optimizing your outputs (features for users and features for the developer) given expensive inputs (time spent writing features, time spent writing plumbing code).
You have to include user-visible features or you don't have a viable product or job. Once that is partly done, your remaining budget of time will be split between activities with a visible return on effort and an invisible (but positive!) return on effort, like exception logging, memory management, etc.
Whatever language makes it cheaper to implement features will probably increase your feature-to-plumbing-code ratio. Likewise, whatever language makes it cheaper to implement plumbing code will probably also increase that ratio, because you'll have freed up more time to write features.
Like all optimization problems, you'd have two effects: the increase in the size of the support code (because, say, you're using cheap code generation) and the increase in the size of feature-related code (because you have more time left over to write features), so the final ratio might be hard to predict.
I do not begrudge the 90% of my code that is data-access plumbing, because it is all testable, code-generated, and very cheap compared to the 10% of handwritten, domain-specific code.
I don't try to make all routines foolproof, only those exposed to the outside world.
http://en.wikipedia.org/wiki/Folding_editor
Higher and more dynamic languages are usually less verbose. Weak typing also saves a lot of code. Of course there are trade-offs.
I use the #region directive in Visual Studio to collapse blocks of code that are not the primary focus, e.g. unit tests. With log4net logging statements are only ever one line. I haven't found any approaches to reduce the parameter checking code although it sounds like C# 4 has some kind of contract framework that will help there.
I have some coworkers who once, while being chewed out by a client for an overdue and bug-ridden project, bragged to the customer that they had written 5 times as much test code as operational code.
The client was not happy, and by "not happy" I mean their skin turned green, they grew to 5 times their normal size, and their clothes popped off.
You could just make a static class in a utilities assembly that checks your parameters and things. For instance, the Spring Framework (which is where I got the idea) has an Assert class that makes it really fast to check that string params aren't empty or that object params aren't null. It cleans up validation code and reduces duplication, which is a win-win.
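A sketch of the kind of utility class meant here (inspired by, but not copied from, Spring's Assert; the method names are made up):

    final class Check {
        private Check() { }

        static void notNull(Object value, String name) {
            if (value == null) {
                throw new IllegalArgumentException(name + " must not be null");
            }
        }

        static void notEmpty(String value, String name) {
            if (value == null || value.isEmpty()) {
                throw new IllegalArgumentException(name + " must not be empty");
            }
        }
    }

    class CustomerService {
        void rename(String customerId, String newName) {
            Check.notEmpty(customerId, "customerId");   // one line per parameter
            Check.notEmpty(newName, "newName");
            // ... business logic ...
        }
    }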