What is the difference between 4GL and DSL? - terminology

What is the difference between 4GL and DSL? Both seem to target a specific domain, but is it safe to say that 4GL is business oriented, while DSLs target any possible domain?

From http://en.wikipedia.org/wiki/Fourth-generation_programming_language:
A fourth-generation programming language (1970s-1990) (abbreviated 4GL) is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software. In the history of computer science, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power. The 4GL was followed by efforts to define and use a 5GL.
Fourth-generation languages have often been compared to domain-specific programming languages (DSLs). Some researchers state that 4GLs are a subset of DSLs. Given the persistence of assembly language even now in advanced development environments (MS Studio), one expects that a system ought to be a mixture of all the generations, with only very limited use of the first.
Also see: http://en.wikipedia.org/wiki/Domain-specific_language

4GLs are a subset of DSLs. DSLs can also include languages aimed at a specific audience (like LOGO), not only at specific uses. 4GLs are geared towards specific usage (math, business logic, etc.).
See http://homepages.cwi.nl/~arie/papers/dslbib/ and http://en.wikipedia.org/wiki/4GL

Domain Driven Design vs Model Driven Architecture

I am curious, what are the differences between Domain Driven Design and Model Driven Architecture? I have the impression they have certain similarities.
Could you enlighten me?
Thanks
I don't disagree with most of the above, although it's perhaps worth expanding a little.
The single most important concept in DDD is to focus on the problem domain. Put technology obsession to one side and concentrate primarily on modelling the problem you're trying to solve. So put Ajax, ORMs, databases, frameworks, etc. into the background and instead make sure you have a complete, accurate model of the problem first and foremost. (Of course you still need the architectural components - but they're explicitly subservient to the model.) DDD calls this the "Ubiquitous Language": a model expressed in terms domain experts and developers alike use and understand, a model where the names of classes, methods, etc. are taken from the problem domain.
DDD doesn't mandate /how/ you capture that model, although the book implies using an OO language to do so.
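A minimal sketch of what such an OO model could look like (a hypothetical shipping domain, loosely echoing the book's running example; the class and method names are exactly the words a domain expert would use):
    // Hypothetical shipping domain: the vocabulary of the code is the vocabulary of the business.
    public class Cargo {
        private final String trackingId;
        private final RouteSpecification routeSpec;   // where the cargo must end up
        private Itinerary itinerary;                  // how it will actually travel

        public Cargo(String trackingId, RouteSpecification routeSpec) {
            this.trackingId = trackingId;
            this.routeSpec = routeSpec;
        }

        // A question a domain expert would ask verbatim: "is this cargo on track?"
        public boolean isOnTrack() {
            return itinerary != null && routeSpec.isSatisfiedBy(itinerary);
        }

        public void assignToRoute(Itinerary newItinerary) {
            this.itinerary = newItinerary;
        }
    }

    // Supporting types kept deliberately minimal for the sketch.
    record Itinerary(String finalDestination) {}
    record RouteSpecification(String destination) {
        boolean isSatisfiedBy(Itinerary itinerary) {
            return destination.equals(itinerary.finalDestination());
        }
    }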
MDA shares that same notion of modelling the problem domain first and foremost (the PIM, Platform-Independent Model). As opposed to DDD, it recommends creating that model with UML. But the intent is the same: understand the problem domain without tainting it with (software) architectural concerns.
MDA's PSM (Platform-Specific Model) is somewhat analogous to applying the architectural patterns in DDD (e.g. aggregate, repository, etc.). Again - while different in specifics - both aim to solve the problem of converting a 'pure' problem domain model into a full software system.
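For instance, a DDD repository keeps the model-facing contract in domain terms while all platform-specific mechanics stay behind it - the PIM/PSM split in miniature. A hypothetical sketch continuing the Cargo example above:
    import java.util.Optional;

    // The interface belongs to the domain model and speaks only in domain terms.
    public interface CargoRepository {
        Optional<Cargo> findByTrackingId(String trackingId);
        void store(Cargo cargo);
    }
    // A platform-specific implementation (JDBC, JPA, a REST client, ...) lives behind
    // this interface; the rest of the model never needs to know which one is in use.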
So summing up, I'd say they are similar in two ways:
1. The centrality of the Model (as @Rui says) - specifically the /Domain/ model.
2. Applying architectural patterns to the model in order to realise the target system.
hth.
The root of both Domain-Driven Design (DDD) and Model Driven Architecture (MDA) is Model-Driven Engineering (MDE), also known as Model-Driven Software Development (MDSD) if limited to the software development domain. See Wikipedia: http://en.wikipedia.org/wiki/Model-driven_development
All approaches falling under the MDE umbrella have one thing in common: a model. How this model is materialized depends on the specific MDE flavor.
MDA is regarded as overly complex. DDD is considered by some as too abstract. My personal favorite MDE implementations are DSM and ABSE (not listed on the Wikipedia article).
DDD is about approaching a software solution from a business perspective with the intent of keeping the design as close to the real world as possible. This is more of an art than engineering.
MDA solves a different set of problems. More details here: http://xml.coverpages.org/OMG-MDAFAQfinal1.pdf
Each X-driven approach helps deliver value for specific aspects and representations of problem-solving activities. From my point of view, the main difference is that DDD is a design technique, while MDA is an infrastructure - something that became necessary once the engineering community wanted to use these ideas in real-world industry.
The term "domain" in DDD has an is-a relationship to "problem domain" and often means the same thing. DDD values domain expertise, where decisions depend on how well we understand the problems and how we choose the right path from the initial state to the winning state. Before the final design spec can be written, a great deal of effort goes into studying the problem. Looking at the three main principles of DDD, I map them onto things I am familiar with nowadays: (a) focus on the core domain (DDD and MVP seem identical in their focus), (b) explore models in a creative collaboration (this is model-driven/based engineering, with two contributors: the domain expert/designer and the professional software developer), and (c) speak a ubiquitous language within an explicitly bounded context (communicate using a domain-specific language and develop artifacts relevant to the problem domain).
Looking at the development collaboration around MDA and its related standards, it is an infrastructure for the application of Model-Driven Engineering. It is the software industry's evolution in supporting the way we describe a software system using models, and it demonstrates how we organize CIM/PIM/PSM models and artifacts. Many powerful modeling operations and tools, such as model transformation, domain-specific modeling languages, and automated software engineering techniques, officially emerged with MDA.

Performance advantages of using methods inside classes versus data structures with libraries of functions?

Basically, is the only advantage of object-oriented languages the improved understanding of a program's purpose?
Do the compilers of object-oriented languages break the objects apart into structures and function libraries?
Basically, yes. The only advantage is improved understanding of code.
For some languages the OO version is the same as the non-OO version after compilation. Perl for example. For the majority of cases the OO version is much slower than the non-OO version. With very rare exceptions, non-OO languages are always faster than OO languages.
But in general, most experienced programmers will tell you not to worry about the performance differences between OO and non-OO languages (or Lispers will tell you not to worry about the performance difference between procedural and functional languages). This is because you should never, ever, ever underestimate the importance of understanding code.
These days we rarely talk about it anymore because we've gotten used to using very high level languages - be it OO or functional or multi-paradigm or metaprogramming. But back in the 80s and 90s there was what was then known as the software crisis. What was the software crisis? It's basically the fact that most software projects were never completed!
The software crisis affected all sectors of the industry: from military radar systems to games to commercial operating systems. Consumers called them vaporware. They were projects that were too ambitious.
But these days there are lots of very ambitious and impressive projects that manage to reach at least beta versions (and for web2.0 beta is good enough for public consumption). Part of the reason is that we now understand requirements engineering better and we also understand the process of software development better. But part of it is also because we have better tools to actually understand what we're doing. And OO is part of that toolset.
Yes, method code is central to the class definition and each instance method accepts an implicit this pointer to the data as its first argument. If you disassemble an instance method call you will see this.
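A rough Java sketch of that equivalence (illustrative only): the instance method receives the object reference implicitly, which is exactly what the explicit first parameter does in the "struct plus function library" form.
    // Object-oriented form: the receiver is implicit.
    class Counter {
        private int value;

        int increment() {          // in the bytecode, 'this' arrives as the hidden slot-0 argument
            return ++value;
        }
    }

    // "Struct plus function library" form: the receiver is an explicit first argument.
    class CounterData {
        int value;
    }

    class CounterFunctions {
        static int increment(CounterData self) {
            return ++self.value;
        }
    }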
Here are a couple of links comparing speed. The first compares C and C++; please read the entire article:
http://unthought.net/c++/c_vs_c++.html
To compare Python, Java, C++, PHP and other languages:
http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-python-ruby-jython-jruby-groovy/
But, to answer your question, the main advantage of OO is that for many problems it is the best way to model the solution, as the model naturally fits into objects. But if you try to force it to work where it is not a good fit, you will end up with code that is harder to understand.
There are various language paradigms because there are many different types of problems, and you should pick the language type that best models the solution. For example, I would not want to write an OS in C++ as it doesn't seem to fit well with OO methodologies, but I would also not want to write a car racing game in C, as it would make more sense to have objects.
Depending on the language and compiler, the application may be translated to another language, compiled to native code, or interpreted.
For example, early C++ compilers translated C++ to C, though modern ones generate native code directly; Java and the .NET languages compile to bytecode for a virtual machine instead. PHP is generally interpreted, though it is possible to compile it (I have never tried it). One compiler is:
http://www.phpcompiler.org/

We treat interfaces and implementations like we treat content and styling, so why not handle it similarly?

I've used Spring, and I've looked into Guice, and I think that these are both rather obtrusive extensions to languages. I firmly believe that programming languages themselves need to adapt to patterns more cohesive with dependency injection, testing, etc., so why not gravitate towards a stylesheet-based approach? By allowing multiple "stylings", you could define configurations of objects for different purposes. Perhaps classes and other language goodness could let you specify ranges of transactions more powerful than simple class/method name matching.
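For what it's worth, today's DI configuration already has a faintly stylesheet-like shape. A minimal Guice-style sketch (the PaymentService interface and its implementations are made up for illustration): each module is one "styling" of the object graph, and switching stylings just means choosing a different module.
    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Injector;

    interface PaymentService { void charge(int cents); }

    class LivePaymentService implements PaymentService {
        public void charge(int cents) { /* call the real gateway */ }
    }

    class FakePaymentService implements PaymentService {
        public void charge(int cents) { /* record the call for tests */ }
    }

    // One "styling" of the object graph: production bindings.
    class ProductionModule extends AbstractModule {
        @Override protected void configure() {
            bind(PaymentService.class).to(LivePaymentService.class);
        }
    }

    // Another "styling" for tests: same interfaces, different implementations.
    class TestModule extends AbstractModule {
        @Override protected void configure() {
            bind(PaymentService.class).to(FakePaymentService.class);
        }
    }

    class Demo {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new TestModule());  // pick the "styling" here
            injector.getInstance(PaymentService.class).charge(500);
        }
    }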
Does this seem like a good idea to anyone? Also, do you think that DI and AOP will be integrated into future languages as a core feature, rather than an afterthought? I was just thinking, and seems like interface -> implementation corresponds almost exactly to data -> style.
Thoughts?
This is a very old idea, first implemented in the early 1980s. Then it was known by the terms "configuration programming", "software integrated circuits" or "architecture description languages". "Dependency Injection" is a neologism coined when enterprise developers recently rediscovered the ideas.
For examples, look at the Conic [1] and Regis/Darwin [2] systems. These systems were used to write industrial control software and directly influenced how software is** written for Philips' TV sets. An interesting feature of Darwin is that the language has both a textual and a graphical representation [3] and a formal semantics.
Conic and Regis/Darwin did a lot more than existing DI frameworks because they were used to construct distributed systems: the configuration language compiled into a program that deployed the system in parallel across a network of machines (the formal semantics define how this "elaboration" process operates). In comparison, Spring, Guice etc. only configure objects within a single address space and leave the much greater difficulties of connecting distributed components up to the programmer.
Another rediscovery of the idea is the TinyOS operating system for sensor net applications, although that does not have as clean a conceptual model of components and configuration.
[1] Kramer, J., Magee, J., Sloman, M.S., and Lister, A., CONIC: An Integrated Approach to Distributed Computer Control Systems, IEE Proceedings, 130, Pt. E (1983), 1-10.
[2] Magee, J., Dulay, N., and Kramer, J., Regis: A constructive development environment for distributed programs, Distributed Systems Engineering Journal, Vol. 1, No. 5 (Sept 1994), 304-312.
[3] Kramer, J., Magee, J., and Ng, K., Graphical Configuration Programming, IEEE Computer, 22(10) (1989), 53-65.
** maybe "was" by now.

Do formal methods of program verification have a place in industry?

I got a glimpse of Hoare Logic in college. What we did was really simple. Most of what I did was proving the correctness of simple programs consisting of while loops, if statements, and sequences of instructions, but nothing more. These methods seem very useful!
Are formal methods used in industry widely?
Are these methods used to prove mission-critical software?
Well, Sir Tony Hoare joined Microsoft Research about 10 years ago, and one of the things he started was a formal verification of the Windows NT kernel. Indeed, this was one of the reasons for the long delay of Windows Vista: starting with Vista, large parts of the kernel are actually formally verified with respect to certain properties, like the absence of deadlocks and the absence of information leaks.
This is certainly not typical, but it is probably the single most important application of formal program verification, in terms of its impact (after all, almost every human being is in some way, shape or form affected by a computer running Windows).
This is a question close to my heart (I'm a researcher in Software Verification using formal logics), so you'll probably not be surprised when I say I think these techniques have a useful place, and are not yet used enough in the industry.
There are many levels of "formal methods", so I'll assume you mean those resting on a rigorous mathematical basis (as opposed to, say, following some 6-Sigma-style process). Some types of formal methods have had great success - type systems being one example. Static analysis tools based on data-flow analysis are also popular, model checking is almost ubiquitous in hardware design, and computational models like the Pi-Calculus and CCS seem to be inspiring some real change in practical language design for concurrency. Termination analysis is one that's had a lot of press recently - the SDV project at Microsoft and work by Byron Cook are recent examples of research/practice crossover in formal methods.
Hoare Reasoning has not, so far, made great inroads in the industry - this is for more reasons than I can list, but I suspect is mostly around the complexity of writing then proving specifications for real programs (they tend to get big, and fail to express properties of many real world environments). Various sub-fields in this type of reasoning are now making big inroads into these problems - Separation Logic being one.
This is partially the nature of ongoing (hard) research. But I must confess that we, as theorists, have entirely failed to educate the industry on why our techniques are useful, to keep them relevant to industry needs, and to make them approachable to software developers. At some level, that's not our problem - we're researchers, often mathematicians, and practical usage is not foremost in our minds. Also, the techniques being developed are often too embryonic for use in large scale systems - we work on small programs, on simplified systems, get the math working, and move on. I don't much buy these excuses though - we should be more active in pushing our ideas, and getting a feedback loop between the industry and our work (one of the main reasons I went back to research).
It's probably a good idea for me to resurrect my weblog, and make some more posts on this stuff...
I cannot comment much on mission-critical software, although I know that the avionics industry uses a wide variety of techniques to validate software, including Hoare-style methods.
Formal methods have suffered because early advocates like Edsger Dijkstra insisted that they ought to be used everywhere. Neither the formalisms nor the software support were up to the job. More sensible advocates believe that these methods should be used on problems that are hard. They are not widely used in industry, but adoption is increasing. Probably the greatest inroads have been in the use of formal methods to check safety properties of software. Some of my favorite examples are the SPIN model checker and George Necula's proof-carrying code.
Moving away from practice and into research, Microsoft's Singularity operating-system project is about using formal methods to provide safety guarantees that ordinarily require hardware support. This in turn leads to faster performance and stronger guarantees. For example, in Singularity they have proved that if a third-party device driver is allowed into the system (which means basic verification conditions have been proved), then it cannot possibly bring down the whole OS; the worst it can do is hose its own device.
Formal methods are not yet widely used in industry, but they are more widely used than they were 20 years ago, and 20 years from now they will be more widely used still. So you are future-proofed :-)
Yes, they are used, but not widely in all areas. There are more methods than just Hoare logic; some are used more, some less, depending on their suitability for a given task. The common problem is that software is biiiiiiig and verifying that all of it is correct is still too hard a problem.
For example the theorem-prover (a software that aids humans in proving program correctness) ACL2 has been used to prove that a certain floating-point processing unit does not have a certain type of bug. It was a big task, so this technique is not too common.
Model checking, another kind of formal verification, is used rather widely nowadays, for example Microsoft provides a type of model checker in the driver development kit and it can be used to verify the driver for a set of common bugs. Model checkers are also often used in verifying hardware circuits.
Rigorous testing can also be thought of as formal verification - there are formal specifications of which paths of a program should be tested, and so on.
"Are formal methods used in industry?"
Yes.
The assert statement in many programming languages is related to formal methods for verifying a program.
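For example (a minimal, tool-agnostic Java sketch): the comment is a Hoare-style specification, and the assert statements check the same precondition and postcondition at run time instead of proving them.
    // Hoare-style spec:  {n >= 0}   r = isqrt(n)   {r*r <= n < (r+1)*(r+1)}
    static int isqrt(int n) {
        assert n >= 0 : "precondition violated: n must be non-negative";
        int r = 0;
        // loop invariant: r*r <= n
        while ((long) (r + 1) * (r + 1) <= n) {
            r++;
        }
        assert (long) r * r <= n && (long) (r + 1) * (r + 1) > n : "postcondition violated";
        return r;   // run with java -ea to enable the checks
    }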
"Are formal methods used in industry widely ?"
No.
"Are these methods used to prove mission-critical software ?"
Sometimes. More often, they're used to prove that the software is secure. More formally, they're used to prove certain security-related assertions about the software.
There are two different approaches to formal methods in the industry.
One approach is to change the development process completely. The Z notation and the B method that were mentioned are in this first category. B was applied to the development of the driverless subway line 14 in Paris (if you get a chance, climb in the front wagon. It's not often that you get a chance to see the rails in front of you).
Another, more incremental, approach is to preserve the existing development and verification processes and to replace only one of the verification tasks at a time with a new method. This is very attractive, but it means developing static analysis tools for existing, widely used languages that are often not easy to analyse (because they were not designed to be).
If you go to (for instance)
http://dblp.uni-trier.de/db/indices/a-tree/d/Delmas:David.html
(sorry, only one hyperlink allowed for new users :( )
you will find instances of practical applications of formal methods to the verification of C programs (with static analyzers Astrée, Caveat, Fluctuat, Frama-C) and binary code (with tools from AbsInt GmbH).
By the way, since you mentioned Hoare Logic, in the above list of tools, only Caveat is based on Hoare logic (and Frama-C has a Hoare logic plug-in). The others rely on abstract interpretation, a different technique with a more automatic approach.
My area of expertise is the use of formal methods for static code analysis to show that software is free of run-time errors. This is implemented using a formal-methods technique known as "abstract interpretation". The technique essentially enables you to prove certain attributes of a program, e.g. that a+b will not overflow or that x/(x-y) will not result in a division by zero. An example static analysis tool that uses this technique is Polyspace.
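As a small illustration (hypothetical code, not specific to Polyspace): these are exactly the kinds of properties such a tool tries to prove for every operation, reasoning over ranges of possible values rather than over individual test runs.
    // An analyzer based on abstract interpretation would try to establish,
    // for every possible caller, that:
    //   - x + b cannot overflow (needs bounds on the inputs)
    //   - x - y cannot be zero on the path where the division happens
    static int scale(int x, int y, int b) {
        int sum = x + b;        // flagged if the input ranges allow overflow
        if (x == y) {
            return sum;         // the division below is unreachable when x == y ...
        }
        return sum / (x - y);   // ... so on this path the tool can prove x - y != 0
    }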
With respect to your question: "Are formal methods used in industry widely?" and "Are these methods used to prove mission-critical software?"
The answer is yes. This opinion is based on my experience supporting the Polyspace tool for industries that rely on embedded software to control safety-critical systems, such as the electronic throttle in an automobile, the braking system of a train, a jet engine controller, a drug-delivery infusion pump, etc. These industries do indeed use these types of formal-methods tools.
I don't believe 100% of these industry segments are using these tools, but the use is increasing. My opinion is that the Aerospace and Automotive industries lead, with the Medical Device industry quickly ramping up use.
Polyspace is a (hideously expensive, but very good) commercial product based on program verification. It's fairly pragmatic, in that it scales up from 'enhanced unit testing that will probably find some bugs' to 'the next three years of your life will be spent showing these 10 files have zero defects'.
It is based more on negative verification ('this program won't corrupt your stack') than on positive verification ('this program will do precisely what these 50 pages of equations say it will').
To add to Jorg's answer, here's an interview with Tony Hoare. The tools Jorg's referring to, I think, are PREfast and PREfix. See here for more information.
Besides other, more procedural approaches, Hoare logic was at the basis of Design by Contract, introduced as an object-oriented technique by Bertrand Meyer in Eiffel (see Meyer's article of 1992, page 4). While Design by Contract is not the same as formal verification methods (for one thing, DbC doesn't prove anything until the software is executed), in my opinion it provides a more practical use.
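A rough Java approximation of the idea (Java has no require/ensure/invariant keywords, so the contract is emulated with assertions on a made-up account class): the precondition is the caller's obligation, the postcondition and invariant are the supplier's.
    // Eiffel states require/ensure/invariant directly; in Java we can only emulate them.
    class BankAccount {
        private long balanceCents;

        // Class invariant: the balance never goes negative.
        private boolean invariant() {
            return balanceCents >= 0;
        }

        void withdraw(long amountCents) {
            // require (caller's obligation)
            assert amountCents > 0 && amountCents <= balanceCents : "precondition violated";
            long oldBalance = balanceCents;

            balanceCents -= amountCents;

            // ensure (supplier's obligation), plus the invariant
            assert balanceCents == oldBalance - amountCents && invariant() : "postcondition violated";
        }
    }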

Is a process design really declarative programming?

I've heard from someone that they're using a business process automation tool (like WebLogic Integration) as a programming language (which sounds kind of stupid) to make things declarative. Then they put all the logic inside a process, every single if and while.
But isn't a process a step-by-step description of how to reach a target?
To me that makes a process completely imperative. What do you think?
Orchestration languages are in fact imperative scripting languages with conditionals, looping and other traditionally imperative constructs, typically expressed through a flowchart-based user interface. They certainly do not (in my experience) implement tail-recursive functional programming, backward chaining or any other paradigm that might reasonably be described as declarative in the generally accepted sense.
MS Workflow Foundation is advertised as having a rules engine, but this is fairly simplistic and doesn't really do forward chaining, except in a somewhat roundabout way. ILOG actually makes an adaptor for their rules engine specifically to drop it into MS Workflow Foundation.
Other workflow tools have better rule engines and a proper forward chaining system that could be viewed as declarative. However, once you get into the workflows themselves with looping and conditional branches you are most definitely in the territory of imperative programming.
However, some systems also implement a petri-net or state change based markup system for workflow, which might reasonably be described as declarative, but they still have an imperative mode of interaction with the underlying system. They still update variables and have side-effects.
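A toy Java sketch of that distinction (hypothetical order-handling logic, not any real workflow or rules engine): the first method spells out the control flow step by step, while the second only declares condition/action rules and leaves the chaining to a tiny generic loop.
    import java.util.List;
    import java.util.function.Consumer;
    import java.util.function.Predicate;

    class Order {
        double total;
        boolean priority;
        boolean approved;
    }

    class WorkflowStyles {
        // Imperative "orchestration" style: the ordering of the ifs IS the logic.
        static void processImperatively(Order o) {
            if (o.total > 10_000) {
                o.priority = true;
            }
            if (o.priority) {
                o.approved = true;
            }
        }

        // Declarative-ish style: rules state WHEN something holds; a generic
        // engine decides evaluation order and re-fires until nothing changes.
        record Rule(Predicate<Order> when, Consumer<Order> then) {}

        static final List<Rule> RULES = List.of(
                new Rule(o -> o.total > 10_000 && !o.priority, o -> o.priority = true),
                new Rule(o -> o.priority && !o.approved,       o -> o.approved = true));

        static void processDeclaratively(Order o) {
            boolean fired;
            do {                                    // naive forward chaining
                fired = false;
                for (Rule r : RULES) {
                    if (r.when().test(o)) {
                        r.then().accept(o);
                        fired = true;
                    }
                }
            } while (fired);
        }
    }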
I have seen one or two applications (for example TOAD for data analysis) actually using MS Workflow Foundation as a scripting language. As such it allows you to add a scripting facility to the application that (at least for marketing purposes) doesn't require programming skill to use.
In practice, a tool designed for writing, editing and running SQL queries being fitted with a scripting framework for 'non-programmers' makes one wonder what audience it's really aimed at. As a scripting language, workflow modelling tools are fairly clumsy and offer very limited opportunities for abstraction; in practice a .NET-based scripting language such as IronPython or Boo, particularly in conjunction with a decent templating mechanism, would be a very powerful addition to such a tool.
One point about graphical languages of this sort is that they do not scale well with complexity. A similar issue applies to ETL tools as well. I have seen a provisioning application (see below) that was done (ironically) with CrossWorlds (now known as WebSphere Integrator). Within a month of starting on the application it became obvious that the graphical workflow language was not going to scale with the complexity of the application, and it was re-built based on a custom rules engine written in Java and a fairly large body of bespoke Java code.
This type of issue is not uncommon with EAI and Orchestration systems and is one of the reasons that SOA is hard to implement in practice. What you are doing is actually pushing business logic into a very clumsy programming environment that is not being officially acknowledged as such. This will work in a simple case but is hard to make work on a complex system - this is sort of a guilty secret in SOA circles.
Coda:
A provisioning application is a system that takes plans for telecommunication services contracts (in this case for a mobile phone network) and pushes configuration information based on rules out to various switches, billing applications and other applications. They tend to be fairly complex. When you buy a mobile phone plan with so many minutes and so many texts per month, a provisioning application is pushing out configuration information to the rest of the system about your access and billing rules.
It is definitely not what people usually mean when they talk about declarative programming, even if in some sense it can be called declarative.