Model driven development vs model driven architecture vs model driven engineering - terminology

Could someone explain what's the key difference among those concepts (MDD vs MDA vs MDE)?

Model Driven Architecture (MDA) is the standard as defined by the Object Management Group (OMG). It requires the production of Computation Independent Models, which are then transformed into Platform Independent Models and finally Platform Specific Models. It has close ties to other OMG standards such as UML, XMI, MOF, etc.
Model Driven Engineering (MDE) is the attempt to take Model Driven Architecture and turn it into a realisable engineering framework for use in the system/software design process.
Model Driven Development (MDD) is probably the same as Model Driven Engineering, although my guess is that it is aimed solely at the production of software systems.
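To make the CIM/PIM/PSM pipeline a little more concrete, here is a purely illustrative Java sketch (the class names and the use of JPA annotations are my own assumptions, not anything mandated by the MDA standard): the platform-independent model captures the domain concept only, while the platform-specific model is the kind of artifact a transformation might produce once a persistence platform has been chosen.

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    // Platform Independent Model (PIM): describes the domain concept only,
    // with no commitment to any implementation platform.
    class CustomerPim {
        String name;
        String email;
    }

    // Platform Specific Model (PSM): what an MDA-style transformation might
    // generate when targeting a JPA-based persistence platform (illustrative).
    @Entity
    class CustomerPsm {
        @Id @GeneratedValue Long id;
        @Column(nullable = false) String name;
        String email;
    }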

This website explains it quite well with a nice diagram to boot:
http://modeling-languages.com/blog/content/relationship-between-mdamdd-and-mde
The difference between MDD and MDE is admittedly fairly vague, and I have heard other people use the terms interchangeably.

Model-driven engineering (MDE) is an approach to software development where models rather than programs are the principal outputs of the development process.
The programs that execute on a hardware/software platform are then generated automatically from the models.
Model-driven architecture (MDA) was the precursor of the more general model-driven engineering.
MDA is a model-focused approach to software design and implementation that uses a subset of UML models to describe a system.
Sommerville 9th Edition chapter 5

Related

Junit - program verification vs whitebox fuzzing?

I understand that program verification is a branch of computer engineering, but that its practical application to real-world code bases is limited by combinatorial explosion.
I also understand that as part of designing your software change, for a modification to an existing Java framework, it's helpful to think about whitebox, boundary and blackbox tests for your algorithm, in advance. (Some people call this hammock driven development - thinking before you code.)
Assuming you take this thinking and embed it in JUnit-style tests, I'm assuming that the computer-science name for what those tests do is strictly 'whitebox testing/fuzzing', and that it is not sufficient to constitute 'program verification'.
So my question is: are JUnit tests whitebox fuzzing or program verification?
Program verification is done by proving mathematical properties of a mathematical model that is related to your application (it can be derived from the formal semantics of the programming language or built by hand, for example by writing behavioural types that model your web service).
Take a look at pi-calculus to understand what I mean.
Of course, JUnit has nothing to do with formal program verification.
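To illustrate the distinction, here is a minimal JUnit 4 sketch (the Discount class and its rule are invented for the example): the test exercises a few chosen and boundary inputs selected by looking at the implementation, whereas verification would mean proving the property for every possible input.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical class under test: 10% discount for orders of 100 or more.
    class Discount {
        static double apply(double amount) {
            return amount >= 100 ? amount * 0.9 : amount;
        }
    }

    public class DiscountTest {
        // Whitebox/boundary testing: we check a handful of inputs chosen
        // by inspecting the implementation (the branch at 100).
        @Test
        public void boundaryCases() {
            assertEquals(99.0, Discount.apply(99.0), 1e-9);   // just below the branch
            assertEquals(90.0, Discount.apply(100.0), 1e-9);  // exactly on the boundary
            assertEquals(90.9, Discount.apply(101.0), 1e-9);  // just above
        }
        // Program verification, by contrast, would prove a property such as
        // "apply(x) <= x for all x >= 0" over the whole input domain,
        // not just these three samples.
    }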

Domain Driven Design vs Model Driven Architecture

I am curious, what are the differences between Domain Driven Design and Model Driven Architecture? I have the impression they have certain similarities.
Could you enlighten me?
Thanks
I don't disagree with most of the above, although it's perhaps worth expanding a little.
The single most important concept in DDD is to focus on the problem domain. To put technology obsession to the side and concentrate primarily on modelling the problem you're trying to solve. So put ajax, ORMs, databases, frameworks etc. into the background and instead make sure you have a complete, accurate model of the problem first and foremost. (Of course you still need the architectural components - but they're explicitly subservient to the model). DDD calls this "Ubiquitous Language" - a model expressed in terms domain experts and developers alike use and understand. A model where the names of classes, methods etc. are taken from the problem domain.
DDD doesn't mandate /how/ you capture that model, although the book implies using an OO language to do so.
MDA shares that same notion of modelling the problem domain first and foremost (the PIM, Platform-Independent Model). As opposed to DDD, it recommends creating that model with UML. But the intent is the same: understand the problem domain without tainting it with (software) architectural concerns.
MDA's PSM (Platform-Specific Model) is somewhat analogous to applying the architectural patterns in DDD (e.g. aggregate, repository, etc.). Again - while different in specifics - both aim to solve the problem of converting a 'pure' problem domain model into a full software system.
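As a small illustration of what this looks like in code, here is a hedged Java sketch (the shipping vocabulary is loosely borrowed from common DDD examples; the exact names are my own): the entity and repository are named in the domain's ubiquitous language, while the persistence technology stays behind the interface as a platform decision.

    // Ubiquitous language: class and method names come from the problem domain,
    // not from the technology stack.
    class Cargo {
        private final String trackingId;
        private String destination;

        Cargo(String trackingId, String destination) {
            this.trackingId = trackingId;
            this.destination = destination;
        }

        // Domain behaviour expressed in domain terms.
        void rerouteTo(String newDestination) {
            this.destination = newDestination;
        }

        String trackingId() { return trackingId; }
    }

    // Repository (a DDD pattern, roughly analogous to part of what MDA's PSM
    // addresses): the domain talks to this interface; whether it is backed by
    // JPA, JDBC or an in-memory map is a platform concern kept out of the model.
    interface CargoRepository {
        Cargo find(String trackingId);
        void save(Cargo cargo);
    }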
So summing up, I'd say they are similar in two ways:
The centrality of the Model (as #Rui says) - specifically the /Domain/ model.
Applying architectural patterns to the model in order to realise the target system.
hth.
The root of both Domain-Driven Design (DDD) and Model Driven Architecture (MDA) is Model-Driven Engineering (MDE), also known as Model-Driven Software Development (MDSD) if limited to the software development domain. See Wikipedia: http://en.wikipedia.org/wiki/Model-driven_development
All approaches falling under the MDE umbrella have one thing in common: a model. How this model is materialized depends on the specific MDE flavor.
MDA is regarded as overly complex. DDD is considered by some to be too abstract. My personal favorite MDE implementations are DSM and ABSE (not listed in the Wikipedia article).
DDD is about approaching a software solution from a business perspective, with the intent of keeping the design as close to the real world as possible. This is more of an art than engineering.
MDA solves a different set of problems. More details here: http://xml.coverpages.org/OMG-MDAFAQfinal1.pdf
Each X-driven approach helps deliver value for specific aspects and representations of problem-solving activities. From my point of view, the main difference is that DDD is a design technique, while MDA is an infrastructure, which became necessary when the engineering community wanted to use model-driven ideas in real-world industry.
The term "domain" in DDD has an is-a relationship to "problem domain" and often means the same thing. DDD values domain expertise: decisions depend on how well we understand the problems and how we choose the right path from the initial state to the winning state. Before the final design spec can be written, a great deal of effort goes into studying the problem. Looking at the three main principles of DDD, I map them onto things I am familiar with nowadays: (a) focus on the core domain (DDD and MVP seem identical in how they set focus); (b) explore models in a creative collaboration (this is model-driven/model-based engineering), with two contributors: the domain expert/designer and the professional software developer; (c) speak a ubiquitous language within an explicitly bounded context (communicate using a domain-specific language and develop artifacts relevant to the problem domain).
Looking at how MDA and its related standards were developed collaboratively, MDA is an infrastructure for applying Model-Driven Engineering. It reflects the software industry's evolution towards describing a software system using models, and it demonstrates how to organize CIM/PIM/PSM models and artifacts. Many powerful modelling operations and tools, such as model transformation, domain-specific modelling languages, and automated software engineering techniques, emerged officially with MDA.
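To give a feel for what "model transformation" can mean in practice, here is a deliberately tiny Java sketch of model-to-text generation (real MDA tooling uses MOF/QVT or template engines; the EntityModel type and the generated output are my own simplification): a platform-independent description of an entity is turned into platform-specific source text automatically.

    import java.util.List;

    // A toy platform-independent model of an entity: just a name and field names.
    record EntityModel(String name, List<String> fields) {}

    public class TinyGenerator {
        // A minimal model-to-text transformation: emit a Java class for the model.
        // Real MDA transformations are far richer (QVT, templates, traceability).
        static String generate(EntityModel model) {
            StringBuilder out = new StringBuilder("public class " + model.name() + " {\n");
            for (String field : model.fields()) {
                out.append("    private String ").append(field).append(";\n");
            }
            out.append("}\n");
            return out.toString();
        }

        public static void main(String[] args) {
            EntityModel customer = new EntityModel("Customer", List.of("name", "email"));
            System.out.println(generate(customer));
        }
    }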

What is the difference between 4GL and DSL?

What is the difference between 4GL and DSL? Both seem to target a specific domain, but is it safe to say that 4GL is business oriented, while DSLs target any possible domain?
From http://en.wikipedia.org/wiki/Fourth-generation_programming_language:
A fourth-generation programming language (1970s-1990) (abbreviated 4GL) is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software. In the history of computer science, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power. The 4GL was followed by efforts to define and use a 5GL.
Fourth-generation languages have often been compared to domain-specific programming languages (DSLs). Some researchers state that 4GLs are a subset of DSLs. Given the persistence of assembly language even now in advanced development environments (MS Studio), one expects that a system ought to be a mixture of all the generations, with only very limited use of the first.
Also see: http://en.wikipedia.org/wiki/Domain-specific_language
4GLs are a subset of DSLs. DSLs can also include languages for a specific audience (like LOGO), not only for specific uses. 4GLs are geared towards specific usage (math, business logic, etc.).
see http://homepages.cwi.nl/~arie/papers/dslbib/ and http://en.wikipedia.org/wiki/4GL
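As a rough illustration of the distinction, here is a small Java sketch of an internal DSL for one narrow domain (the order-pricing vocabulary is invented for the example); a 4GL, by contrast, is typically a whole language or environment aimed at a broad business-application niche rather than an embedded vocabulary like this.

    // A tiny internal DSL: a fluent vocabulary for one specific domain
    // (order pricing). The "language" is just method names, but it reads
    // close to how a domain expert would phrase the rule.
    public class PricingDsl {
        private double amount;
        private double rate;

        static PricingDsl order(double amount) {
            PricingDsl dsl = new PricingDsl();
            dsl.amount = amount;
            return dsl;
        }

        PricingDsl withDiscount(double percent) {
            this.rate = percent / 100.0;
            return this;
        }

        double total() {
            return amount * (1.0 - rate);
        }

        public static void main(String[] args) {
            // Reads as domain language: "order of 200 with discount of 10 percent".
            double total = order(200.0).withDiscount(10.0).total();
            System.out.println(total); // 180.0
        }
    }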

We treat interfaces and implementations like we treat content and styling, so why not handle it similarly?

I've used Spring, and I've looked into Guice, and I think that these are both rather obtrusive extensions to languages. I firmly believe that programming languages themselves need to adapt to patterns more cohesive with dependency injection, testing, etc., so why not gravitate towards a stylesheet-based approach? By allowing multiple "stylings," you could define configurations of objects for different purposes. Perhaps classes and other language features could let you specify transactional scopes in ways more powerful than simple class/method name matching.
Does this seem like a good idea to anyone? Also, do you think that DI and AOP will be integrated into future languages as a core feature, rather than an afterthought? I was just thinking, and it seems like interface -> implementation corresponds almost exactly to data -> style.
Thoughts?
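As a thought experiment of what a "styling" for interfaces might look like, here is a minimal hand-rolled Java sketch (nothing here is Spring or Guice API; all names are invented): each "stylesheet" is just a declarative mapping from interface to implementation that can be swapped per purpose (production, testing, ...), much like swapping a CSS file for the same HTML.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Supplier;

    interface MessageSender { void send(String msg); }
    class SmtpSender implements MessageSender { public void send(String msg) { /* real email */ } }
    class FakeSender implements MessageSender { public void send(String msg) { System.out.println("FAKE: " + msg); } }

    // A "stylesheet": a declarative interface -> implementation mapping.
    class Style {
        private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

        <T> Style bind(Class<T> iface, Supplier<? extends T> impl) {
            bindings.put(iface, impl);
            return this;
        }

        @SuppressWarnings("unchecked")
        <T> T get(Class<T> iface) {
            return (T) bindings.get(iface).get();
        }
    }

    public class StyleDemo {
        public static void main(String[] args) {
            Style production = new Style().bind(MessageSender.class, SmtpSender::new);
            Style testing = new Style().bind(MessageSender.class, FakeSender::new);

            // Same "content" (code written against MessageSender), different "styling".
            MessageSender sender = testing.get(MessageSender.class);
            sender.send("hello"); // prints "FAKE: hello"; swap in 'production' to send real mail
        }
    }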
This is a very old idea, first implemented in the early 1980s. Then it was known by the terms "configuration programming", "software integrated circuits" or "architecture description languages". "Dependency Injection" is a neologism coined when enterprise developers recently rediscovered the ideas.
For examples, look at the Conic [1] and Regis/Darwin [2] systems. These systems were used to write industrial control software and directly influenced how software is** written for Philips' TV sets. An interesting feature of Darwin is that the language has both a textual and a graphical representation [3] and a formal semantics.
Conic and Regis/Darwin did a lot more than existing DI frameworks because they were used to construct distributed systems: the configuration language compiled into a program that deployed the system in parallel across a network of machines (the formal semantics define how this "elaboration" process operates). In comparison, Spring, Guice etc. only configure objects within a single address space and leave the much greater difficulties of connecting distributed components up to the programmer.
Another rediscovery of the idea is the TinyOS operating system for sensor net applications, although that does not have as clean a conceptual model of components and configuration.
[1] Kramer, J., Magee, J., Sloman, M.S., and Lister, A., CONIC: An Integrated Approach to Distributed Computer Control Systems, IEE Proceedings, 130, Pt. E (1983), 1-10.
[2] Magee, J., Dulay, N., and Kramer, J., Regis: A Constructive Development Environment for Distributed Programs, Distributed Systems Engineering Journal, Vol. 1, No. 5, Sept 1994, 304-312.
[3] Kramer, J., Magee, J., and Ng, K., Graphical Configuration Programming, IEEE Computer, 22(10) (1989), 53-65.
** maybe "was" by now.

Is a process design really declarative programming?

I've heard from someone that they're using a business process automation tool (like WebLogic Integration) as a programming language (which sounds kind of stupid) to make things declarative. They then put all the logic inside a process, every single if and while.
But isn't a process a step-by-step, how-to entity for reaching a target?
To me that makes a process completely imperative. What do you think?
Orchestration languages are in fact imperative scripting languages with conditionals, looping and other traditionally imperative constructs, typically expressed through a flowchart-based user interface. They certainly do not (in my experience) implement tail-recursive functional programming, backward chaining or any other paradigm that might reasonably be described as declarative in the generally accepted sense.
MS Workflow Foundation is advertised as having a rules engine, but this is fairly simplistic and doesn't really do forward chaining, except in a somewhat roundabout way. ILOG actually makes an adaptor for their rules engine specifically to drop it into MS workflow foundation.
Other workflow tools have better rule engines and a proper forward chaining system that could be viewed as declarative. However, once you get into the workflows themselves with looping and conditional branches you are most definitely in the territory of imperative programming.
However, some systems also implement a petri-net or state change based markup system for workflow, which might reasonably be described as declarative, but they still have an imperative mode of interaction with the underlying system. They still update variables and have side-effects.
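To make the contrast concrete, here is a small Java sketch (the order-approval domain and all names are invented): the first method encodes the process imperatively with conditionals, while the second declares a state-transition table and leaves the "how" to a generic interpreter. Even in the declarative version, executing a transition still updates state, which is the point made above about side effects.

    import java.util.Map;

    public class WorkflowContrast {
        // Imperative style: the control flow *is* the process definition.
        static String runImperative(double amount) {
            String state = "SUBMITTED";
            if (amount > 1000) {
                state = "NEEDS_APPROVAL";
            } else {
                state = "APPROVED";
            }
            if (state.equals("NEEDS_APPROVAL")) {
                state = "APPROVED"; // pretend a manager clicked "approve"
            }
            return state;
        }

        // Declarative style: the process is data (a transition table);
        // a generic interpreter walks it. Note that it still mutates 'state'.
        static final Map<String, String> TRANSITIONS = Map.of(
                "SUBMITTED", "NEEDS_APPROVAL",
                "NEEDS_APPROVAL", "APPROVED");

        static String runDeclarative() {
            String state = "SUBMITTED";
            while (TRANSITIONS.containsKey(state)) {
                state = TRANSITIONS.get(state);
            }
            return state;
        }

        public static void main(String[] args) {
            System.out.println(runImperative(1500)); // APPROVED
            System.out.println(runDeclarative());    // APPROVED
        }
    }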
I have seen one or two applications (for example TOAD for data analysis) actually using MS Workflow Foundation as a scripting language. As such, it allows you to add a scripting facility to the application that (at least for marketing purposes) doesn't require programming skill to use.
In practice, a tool designed for writing, editing and running SQL queries being fitted with a scripting framework for 'non-programmers' makes one wonder what audience it's really aimed at. As a scripting language, workflow modelling tools are fairly clumsy and offer very limited opportunities for abstraction; in practice a .Net based scripting language such as IronPython or Boo, particularly in conjunction with a decent templating mechanism, would be a very powerful addition to such a tool.
One point about graphical languages of this sort is that they do not scale well with complexity. A similar issue applies with ETL tools as well. I have seen a provisioning application (see below) that was done (ironically) with Crossworlds (now known as Websphere Integrator). Within a month of starting on the application it became obvious that the graphical workflow language was not going to scale with the complexity of the application and it was re-built, based on a custom rules engine written in Java and a fairly large body of bespoke java code.
This type of issue is not uncommon with EAI and Orchestration systems and is one of the reasons that SOA is hard to implement in practice. What you are doing is actually pushing business logic into a very clumsy programming environment that is not being officially acknowledged as such. This will work in a simple case but is hard to make work on a complex system - this is sort of a guilty secret in SOA circles.
Coda:
A provisioning application is a system that takes plans for telecommunication services contracts (in this case for a mobile phone network) and pushes configuration information, based on rules, out to various switches, billing applications and other applications. They tend to be fairly complex. When you buy a mobile phone plan with so many minutes and so many texts per month, a provisioning application is pushing out configuration information to the rest of the system about your access and billing rules.
It is definitely not what people usually mean when they talk about declarative programming, even if in some sense it can be called declarative.