I was reading about DSLs (Martin Fowler's book), and in the first chapter he talks about Semantic and Adaptive models. I don't really understand what these terms mean in the context of DSLs. I tried searching and reading more about them, but I still don't quite get it, since the explanations are also kind of complex. I would really appreciate it if someone could explain these to me in simple terms. Thanks.
These are both patterns explained in a bit more detail later in the same book and have links in Fowler's on-line DSL Pattern Catalog (though these provide little information beyond pointers to the locations in the book). Semantic Model is detailed in Chapter 11 and Adaptive Model is in Chapter 47.
Basically, a Semantic Model is a model tightly tied to the language, describing the same domain of knowledge that the language does, and it is created pretty directly by the parser. Using one is generally recommended for separating the parsing logic from the semantic logic.
An Adaptive Model is a technique for defining an alternative computational model (i.e., a computational model not natively supported by the host language), and it is sometimes actually a Semantic Model modeling a computational DSL.
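To make the separation concrete, here is a minimal sketch, assuming a made-up task-dependency DSL (the TaskGraph and TaskParser names are invented for illustration): the parser's only job is to populate the Semantic Model, and all of the real logic lives in the model.

    // Hypothetical example: a tiny task-dependency DSL such as "build -> compile".
    // The parser only populates the Semantic Model; the real logic
    // (here, computing an execution order) lives in the model, not the parser.
    import java.util.*;

    class TaskGraph {                        // the Semantic Model
        private final Map<String, List<String>> deps = new HashMap<>();

        void addDependency(String task, String dependsOn) {
            deps.computeIfAbsent(task, k -> new ArrayList<>()).add(dependsOn);
            deps.computeIfAbsent(dependsOn, k -> new ArrayList<>());
        }

        List<String> executionOrder() {      // semantic logic, independent of parsing
            List<String> order = new ArrayList<>();
            Set<String> visited = new HashSet<>();
            for (String task : deps.keySet()) visit(task, visited, order);
            return order;
        }

        private void visit(String task, Set<String> visited, List<String> order) {
            if (!visited.add(task)) return;  // already handled (no cycle check here)
            for (String dep : deps.get(task)) visit(dep, visited, order);
            order.add(task);                 // dependencies first, then the task
        }
    }

    class TaskParser {                       // parsing logic, kept separate
        static TaskGraph parse(String script) {
            TaskGraph model = new TaskGraph();
            for (String line : script.split("\n")) {
                String[] parts = line.split("->");   // "a -> b": a depends on b
                if (parts.length == 2)
                    model.addDependency(parts[0].trim(), parts[1].trim());
            }
            return model;
        }
    }

The point is that TaskGraph could be populated from a completely different syntax, or built directly in code for tests, without touching executionOrder().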
Related
I am writing a small game, and I now have 9 C# scripts that make it work. I have lost track of what exactly is happening and how. I want to know how things work from the moment the game starts. What's happening and how, etc.
I am a beginner, and I have heard that writing down your program flow is called documenting it. How can I document it? Do I have to write comments everywhere in my code to explain the flow of the program?
Putting extensive comments into your code is not a good approach. Basically you should try to make your code as self-explanatory as possible. You do this by carefully planning what belongs into a class or function and by using meaningful names for your classes, functions and variables. Comments are nothing but a last resort if additional explanation is really required.
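As a tiny, made-up illustration of "self-explanatory" (the classes are invented):

    // Hypothetical example: the same check written twice, to show how naming
    // can replace comments. Player and Item are made-up classes.
    class Player { int gold; int level; }
    class Item   { int cost; int minimumLevel; }

    class Shop {
        // Cryptic version -- only the comment explains what it does:
        // check whether the player can buy the item
        boolean chk(Player p, Item i) {
            return p.gold >= i.cost && p.level >= i.minimumLevel;
        }

        // Self-explanatory version -- the names carry the meaning:
        boolean canPurchase(Player player, Item item) {
            boolean canAfford  = player.gold >= item.cost;
            boolean meetsLevel = player.level >= item.minimumLevel;
            return canAfford && meetsLevel;
        }
    }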
In most cases you should also have some documents in addition to the code that explain certain aspects of your software:
Requirements document - what is the purpose of the software, how is it used
Architecture and design specification - what are the modules and classes of the software and how do they interact. Often this document mainly consists of one or more diagrams (UML or something else).
Build manual - how to compile and link the software
Installation instructions
User manual
This list is neither complete nor is it mandatory. If, for example, the user interface of your software is simple and self-explanatory, you probably won't need a user manual.
Sometimes diagrams make better documentation than text. There is a standard way of diagramming a control flow (whether it's of a program or a business process). They're called ... wait for it ... control-flow diagrams. But I don't think that's exactly what you're after.
There are also flow charts (often spelled as one word), which may be more suited to software than general control-flow diagrams. Flow charts can be useful for understanding an algorithm, but they generally don't give a good big-picture view.
With a complicated program, what might be more important to keep in mind is the data flow. For those we have ... can you guess? ... data-flow diagrams (DFDs).
DFDs can be drawn at varying levels of detail. You can have a high-level one that shows the major components of the system and how they fit together and low-level ones that show the nitty-gritty details for the portions of the system that require more detail.
DFDs can be used for a variety of analyses, including things like threat modeling. But I find them great for getting an overview of what's-what when I'm looking at a new project (or one I've forgotten about). You should be able to find some tutorials about DFDs online, and I think some drawing software (like Visio) has templates specifically for DFDs (and probably the other types of diagrams I've mentioned).
Some might consider DFDs a bit old-school and prefer more rigorous systems like UML (Unified Modeling Language), which is capable of expressing many more concepts and of having a very direct mapping between your "model" and your code. I've never learned enough UML to get much use out of it. The diagrams in many books on software patterns are expressed in UML.
I have been reading a lot about game engine architectures and would like to know the following: what is currently considered the best architecture (or architectures) for a game engine?
I would be very grateful if we could find some academic resources (such as papers or similar) rather than anecdotal experience from isolated developers.
Thanks
"Best" is quite subjective. Not being a professional game developer, I can't answer your question entirely rigorously, but I've heard about component-based game engines and (functional-)reactive engines. There's also the option of using a hierarchy of entities, as described in, for instance, "Evolve Your Hierarchy", but from what I've seen, that's being dropped in favor of component-based engines.
Hierarchical: Use inheritance to provide functionality ("Evolve Your Hierarchy" shows an example of a Car subclassing Vehicle, which subclasses Moveable, which subclasses Entity). As described there, the problem is deciding where functionality should be put: if put too high in the inheritance tree, it burdens subclasses with functionality they may not use; put it too low, and you end up duplicating/refactoring code to get the functionality to where it's needed. (Rough sketches of these styles follow the list.)
Component-based: Each game object/entity is a composition of "components" that provide reusable, specific functionality, such as animating/moving, reacting to physics, or being activated by the player. "Component based game engine design", another SO question, describes it in detail and provides links to papers, etc., which should help.
(Functional) reactive: Often studied in the context of functional languages such as Haskell. As I understand it, the central concept is first-class time-varying values, with behavior described in terms of these values and other building blocks, such as events representing discrete occurrences. "What is functional reactive programming" provides a better description than I can. I believe .NET's WPF data binding is an example of the concepts. As far as I know, this is still being experimented with/researched, but the Wikipedia article for reactive programming (not FRP, mind you) links to libraries written for Lisp, Python, Java, and other languages.
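Here are very rough sketches of the first two styles, with invented names (Java here for brevity, though engines are more often C++):

    // Hierarchical: behaviour comes from the inheritance chain; deciding where
    // steer() should live is exactly the placement problem described above.
    class Entity { }
    class Moveable extends Entity {
        float x, y;
        void move(float dx, float dy) { x += dx; y += dy; }
    }
    class Vehicle extends Moveable {
        void steer(float angle) { /* burdens every Vehicle, steerable or not */ }
    }
    class Car extends Vehicle { }

    // Component-based: a game object is just a bag of components, and
    // behaviour comes from composition instead of from the class tree.
    interface Component { void update(float dt); }

    class PhysicsComponent implements Component {
        float x, y, vx, vy;
        public void update(float dt) { x += vx * dt; y += vy * dt; }
    }

    class AnimationComponent implements Component {
        int frame;
        public void update(float dt) { frame++; }
    }

    class GameObject {
        private final java.util.List<Component> components = new java.util.ArrayList<>();
        GameObject add(Component c) { components.add(c); return this; }
        void update(float dt) { for (Component c : components) c.update(dt); }
    }

And a very loose rendering of the FRP idea in the same language (real FRP systems are much richer; Behavior here is just an invented single-method interface):

    // A first-class time-varying value: querying it at a time gives its value,
    // and derived values stay time-varying automatically.
    import java.util.function.Function;

    interface Behavior<T> {
        T at(double time);

        default <R> Behavior<R> map(Function<T, R> f) {
            return t -> f.apply(this.at(t));
        }
    }

    class FrpDemo {
        public static void main(String[] args) {
            Behavior<Double> position = t -> 5.0 * t;              // 5 units/second
            Behavior<String> label = position.map(p -> "x=" + p);  // reacts to position
            System.out.println(label.at(2.0));                     // prints x=10.0
        }
    }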
Many programming languages share generic and even fairly universal features. For example, if you compared Java, VB6, .NET, PHP, and Python, you would find common features such as control structures, numeric and string manipulation, etc.
What has been done to define these features at a meta-language (or language-agnostic) level?
UML offers a descriptive reference of software in every aspect, but the real-world focus seems to be data processes. Is UML relevant?
I'm not asking "Why we don't have a single language that replaces the current plethora." We need many different tools (at least in this eon).
I'm not asking that all languages fit a template -- assembly vs. compiled languages are different enough to make that unfeasible (and some folks call HTML a language, though I wouldn't). Any attempt would start with a properly narrow scope. In line with this, I wouldn't expect the model to cover even a small selection with full validity.
I would expect, however, that such a model could be used to transpose from one language to another (with limited goals -- think gist translation).
There have been many attempts at this, but none have been very successful. The earliest I'm aware of is UNCOL more than 50 years ago.
You've given a list of languages that have a lot in common because they're pretty similar -- they're all procedural languages with common roots and some OO extensions thrown in, so that's not too surprising. If you start looking at different languages like Lisp, Haskell, Erlang, Prolog, or even SQL, you start seeing very different things.
What you're describing sounds like the formal semantics of programming languages. There are a variety of approaches and each will give a way to formally specify the meaning of a program in some programming language. In some cases, this specification is essentially a translation into another language such as lambda calculus, or compilation for a formally specified abstract machine such as SECD.
There is so much work here it's hard to pick a specific reference. But I hope I've given you some useful keywords to continue your search.
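For a toy flavour of the idea (my own illustrative example, not any particular formal system; Java 16+ for records): each syntactic form of a tiny expression language is mapped to the value it denotes, so the meaning of a program is defined compositionally.

    // [[ n ]] = n,  [[ e1 + e2 ]] = [[ e1 ]] + [[ e2 ]]
    interface Expr { int denote(); }             // denote() plays the role of [[ . ]]

    record Num(int n) implements Expr {
        public int denote() { return n; }        // a numeral denotes its number
    }

    record Add(Expr left, Expr right) implements Expr {
        public int denote() { return left.denote() + right.denote(); }
    }

    class SemanticsDemo {
        public static void main(String[] args) {
            Expr program = new Add(new Num(2), new Num(3));   // the program "2 + 3"
            System.out.println(program.denote());             // its meaning: 5
        }
    }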
UML is typically used to define algorithms/code in simpler terms before moving on to real code.
To answer what I am guessing to be your question: there is already a common set of constructs found in most languages (while, for, if, else, ...). Will this ever be set as a standard, or made into a base library used by all languages? No, because the different developers of languages like to do it themselves.
I think the closest you can get to this without loss of generality is a Turing machine, which is not very useful for practical purposes. But if you allow Turing machine languages to be "labeled" and reused, you could build up the concepts you need, working from low- to high-level.
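To illustrate (everything here is invented for the example; Java 16+ for records): a Turing machine is little more than a transition table plus an unbounded tape, which is precisely why it is so general and so impractical as a programming model.

    import java.util.*;

    class TuringMachine {
        record Key(String state, char read) {}
        record Action(String next, char write, int move) {}  // move: -1 left, +1 right

        private final Map<Key, Action> rules = new HashMap<>();

        void rule(String state, char read, String next, char write, int move) {
            rules.put(new Key(state, read), new Action(next, write, move));
        }

        // Run until no rule applies; the tape is a map, so it is unbounded.
        Map<Integer, Character> run(String startState, String input) {
            Map<Integer, Character> tape = new HashMap<>();
            for (int i = 0; i < input.length(); i++) tape.put(i, input.charAt(i));
            String state = startState;
            int head = 0;
            Action a;
            while ((a = rules.get(new Key(state, tape.getOrDefault(head, '_')))) != null) {
                tape.put(head, a.write());
                head += a.move();
                state = a.next();
            }
            return tape;
        }
    }

For instance, the two rules ("flip", '0') -> ("flip", '1', +1) and ("flip", '1') -> ("flip", '0', +1) make it invert a binary string; anything higher-level has to be built up from tables like this, which is the "labeled and reused" layering the answer describes.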
I think that MOF is the universal language.
You can, for example, create UML diagrams from MOF via a UML metamodel. If you save this metamodel information into XMI, then you can store whatever information you need, and even more than in any single language. XMI's semantics are rich enough that there is practically no limit to its use. If you map UML to XMI on top of a metamodel that is live-synchronized with MOF, then that, for me, is the universal language.
The author of Pattern Calculus seems to propose such a universal model. I expect that it will turn out to be just as useful as previous attempts to define a universal model, that is to say, good in parts but not the last word.
People like Alexander Stepanov and Sean Parent vote for a formal and abstract approach to software design.
The idea is to break complex systems down into a directed acyclic graph and hide cyclic behaviour in nodes representing that behaviour.
Parent gave presentations at BoostCon and at Google (in the BoostCon slides, p. 24 introduces the approach; there is also a video of the Google talk).
While I like the approach and think it's a necessary development, I have a problem imagining how to handle subsystems with amorphous behaviour.
Imagine, for example, a common pattern for state machines: using an interface which all states support and having different behaviour in concrete implementations for the states, roughly like the sketch below.
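A minimal sketch of what I mean (names invented):

    interface State {
        State handle(String event);   // each concrete state decides the successor
    }

    class Idle implements State {
        public State handle(String event) {
            return event.equals("start") ? new Running() : this;
        }
    }

    class Running implements State {
        public State handle(String event) {
            return event.equals("stop") ? new Idle() : this;
        }
    }

    class Machine {
        private State current = new Idle();
        void fire(String event) { current = current.handle(event); }
    }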
How would one solve that?
Note that I am just looking for an abstract approach.
I can think of hiding that behaviour behind a node and defining different sub-DAGs for the states, but that complicates the design considerably if you want to influence the behaviour of the main DAG from a sub-DAG.
Your question is not clear. Define amorphous subsystems.
You are "just looking for an abstract approach" but then you seem to want details about an implementation in a conventional programming language ("common pattern for state-machines"). So, what are you asking for? How to implement nested finite state-machines?
Some more detail will help the conversation.
For a real abstract approach, look at something like Stream X-Machines:
... The X-machine model is structurally the same as the finite state machine, except that the symbols used to label the machine's transitions denote relations of type X→X. ...
The Stream X-Machine differs from Eilenberg's model in that the fundamental data type X = Out* × Mem × In*, where In* is an input sequence, Out* is an output sequence, and Mem is the (rest of the) memory.
The advantage of this model is that it allows a system to be driven, one step at a time, through its states and transitions, while observing the outputs at each step. These are witness values that guarantee that particular functions were executed on each step. As a result, complex software systems may be decomposed into a hierarchy of Stream X-Machines, designed in a top-down way and tested in a bottom-up way. This divide-and-conquer approach to design and testing is backed by Florentin Ipate's proof of correct integration, which proves how testing the layered machines independently is equivalent to testing the composed system. ...
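In code terms, the quoted definition comes out roughly like this. This is my own simplified rendering (one processing function per state, invented names, Java 16+ for records), not a faithful implementation of the formal model:

    import java.util.*;
    import java.util.function.BiFunction;

    class StreamXMachine<I, M, O> {
        record Step<M, O>(M memory, O output) {}

        // per-state processing function: Mem x In -> Out x Mem
        private final Map<String, BiFunction<M, I, Step<M, O>>> phi = new HashMap<>();
        private final Map<String, String> next = new HashMap<>();   // state graph

        void transition(String from, String to, BiFunction<M, I, Step<M, O>> f) {
            phi.put(from, f);
            next.put(from, to);
        }

        // Drive the machine one input at a time, observing the output ("witness
        // value") at every step; assumes every visited state has a transition.
        List<O> run(String state, M memory, List<I> inputs) {
            List<O> outputs = new ArrayList<>();
            for (I in : inputs) {
                Step<M, O> step = phi.get(state).apply(memory, in);
                outputs.add(step.output());
                memory = step.memory();
                state = next.get(state);
            }
            return outputs;
        }
    }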
But I don't see how the presentation is related to this. He seems to speak about a quite mainstream approach to programming, nothing similar to X-Machines. Anyway, the presentation is quite confusing, and I have no time to watch the video right now.
First impression of the talk, reading the slides only
The author touches haphazardly on numerous fields/problems/solutions, apparently without recognizing it: from Peopleware (for example Psychology of programming), to Software Engineering (for example software product lines), to various programming techniques.
How the various parts are linked and what exactly he is advocating is not clear at all (I'm accustomed to just reading slides, and they usually follow a coherent thread):
Dataflow programming?
Constraints solving for User Interfaces? For practical implementations, see Garnet for Common Lisp, Amulet/OpenAmulet for C++.
What advantages does this "new" concept-based generic programming give us with respect to well-known approaches (for example, tools based on Hoare logic pre/post conditions and invariants or, better, Hoare's Communicating Sequential Processes (CSP), Hehner's Practical Theory of Programming, or a programming language with a sophisticated type system like ATS, Qi or Epigram, and so on)? It seems to me that introducing "concepts" - which, as-is, are specific to C++ - is no simpler than using the alternatives. Is it just about jargon and "politics"? (Finally formal methods... but disguised.)
Why organize program modules as a DAG and not as a tree, as David Parnas advocated decades ago in "Designing Software for Ease of Extension and Contraction"? (A directly accessible .pdf and slides from a lecture are available online.) The work on X-Machines is probably an answer to this question (going even beyond DAGs), but, again, the author seems to speak about a quite conventional program-development regime in which Parnas' approach is the only sensible one.
If/when I watch the video, I will update this answer.
I've been hearing and reading about cases where people have come across overused design patterns. OK, misused design patterns are an understandable phenomenon, but what does it actually mean for a design pattern to be overused?
Do you have any examples and why do you think there are too many patterns?
The Singleton is probably the most overused design pattern. I often see it used in cases where it's inappropriate and where directly instantiating objects would be much better.
After that, I believe the Factory pattern is way overused as a shortcut for instantiating objects, many times without a real need.
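The kind of needless factory I mean (a made-up example):

    // All indirection, no decision: nothing here ever varies, so the factory
    // only hides a plain constructor call.
    class Widget { }

    class WidgetFactory {
        static Widget createWidget() {
            return new Widget();   // a direct "new Widget()" would do just as well
        }
    }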
Object Orientation, which is no longer a design pattern but a way of life. I have seen a lot of procedural code munged up in objects and a lot of objects for the sake of objects because the zeitgeist says "presumably you are object oriented", when a few lines of C and a struct would do just as well.
I cite it as the most over-used design pattern because it is (probably) the most widely used design pattern, and its merits are rarely questioned.
I vote for ActiveRecord.
Many popular data access frameworks use ActiveRecord as the only data access pattern, a sort of one-size-fits-all solution, even though Martin Fowler's book "Patterns of Enterprise Application Architecture" describes several other data access patterns and details the strengths of each and how to decide when each is appropriate.
The (sometimes) so-called JavaBeans-Pattern: getters and setters for every field. Highly questionable and extremely widespread.
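A made-up example of the pattern in question:

    // A getter/setter pair for every field: the class exposes its representation
    // almost as completely as public fields would, just with more ceremony.
    class Account {
        private double balance;
        public double getBalance() { return balance; }
        public void setBalance(double balance) { this.balance = balance; }
    }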
I guess Singleton gets easily overused (though it certainly has its legitimate uses).
Addiction to the Singleton pattern is called Singletonitis. :) Symptoms include, at least, unnecessarily high coupling, and testing becoming more difficult.
Edit: As a prescribed cure for Singletonitis, you could try Inline Singleton, described in Refactoring to Patterns by Joshua Kerievsky.
Edit 2: For a good discussion on Singletons, see this older question: What is so bad about Singletons
PREAMBLE: Generally, Singleton is considered the most abused pattern, if for nothing more than the fact that many use it to write what is effectively in-line procedural code, while others use it as a substitute for global variables.
BODY: There is a book out there called "A Pattern Language" which predates the illustrious GoF by many years. It calls for a similar language among different aspects of a project; it was apparently a major influence on "Design Patterns", and those who know both texts consider it superior.
My personal experience is that the GoF is only useful in certain circumstances, and a far cry from encompassing all of OOP. I actually find it quite amusing that several of the patterns are made obsolete in other languages, and that others merely redundantly describe the same scenario (is there really that much difference between something which adapts and something which translates?).
Patterns, in general, are a good thing. It is good that Singletons generally use a static getInstance method. It is good that many MVC structures use similar naming conventions. On the other hand, Patterns are not everything and that needs to be remembered.
Recommended reading:
http://perl.plover.com/yak/design/
The singleton pattern, which is only suitable in a very few cases and makes testing harder. It's not only over-used, but it's often badly implemented in Java and C# - people often rush into double-checked locking when it's not only inappropriate but also relatively hard to get right.
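For contrast, a lazy Java singleton that avoids double-checked locking entirely is the initialization-on-demand holder idiom (sketched here with an invented Config class): the JVM guarantees that the nested class is initialized exactly once, on first use, thread-safely.

    class Config {
        private Config() { }                      // prevent outside instantiation

        private static class Holder {
            static final Config INSTANCE = new Config();
        }

        static Config getInstance() {
            return Holder.INSTANCE;               // first call triggers Holder init
        }
    }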
EDIT: I really should have realised that everyone would post the same thing.
Next example, the factory pattern and in particular its use in the Java DOM API. Blech.
I would say the Singleton is well overused. There are often much better solutions than using what's essentially a global variable.
I'm going to weigh in on the much-overused Singleton. Quite often, developers learn only this one pattern and use it when a static class would be just as effective.
I think a worse problem than overused design patterns is patterns misapplied by enthusiastic developers who've recently learned a new pattern tool and decide they need to try it out. Recently I've been reading some of Misko Hevery's blog (http://misko.hevery.com/2008/08/17/singletons-are-pathological-liars/) entries on dependency injection. One of his major assertions is that the singleton pattern implemented as a global instance severely limits testability and should be avoided.
A few days ago I read an interesting opinion on patterns from Christian Gruber's blog. He suggests they are useful as a tool for discussing architectures, but shouldn't be used during design conception lest software architecture deteriorate into what he calls "paint by numbers." See the paragraph on Design Patterns: http://www.geekinasuit.com/2008/12/testability-re-discovering-what-we.html.
So possibly the issue with design patterns is misapplication, and tunnel vision induced by the perception that all well-designed software must fit into one of the patterns described in the Gang of Four book.
This was actually discussed by our one and only Jeff Atwood on Coding Horror:
http://www.codinghorror.com/blog/archives/000380.html
I keep seeing the Provider pattern used where there is only one provider. This seems like an awful lot of extra work with no benefit.
I too vote for Singleton: a global in abstracted clothing.
And Factory, since that makes it easier not to think about how objects are connected together in a given program.