Our business currently revolves around the development of a library that can be used in a wide spectrum of industries (desktop, mobile, web, and embedded). At the moment we only have customers in the desktop and web worlds, and we already see that we are basically having to duplicate our code across multiple languages (C#, Java, and C++). How is this typically managed in other companies? Do we really have to settle for having our middleware written in multiple languages to serve very different industries?
We have tried C# interop along with Java JNI, but the results were not as good as we had expected, and it didn't seem like a good, professional solution.
Does anyone have ideas on how we could keep the core in a single language but develop interfaces in different languages, allowing for the different industries?
I don't have a lot of ideas when it comes to specific industries, but I guess the problem is similar in other, less strict scenarios. It is also unclear what you mean by "middleware". Anyway, I would suggest that you not duplicate code and instead have, if needed, multiple interfaces to the other languages.
In an ideal scenario, it would be very convenient to have one isolated process containing your application logic, written in whatever language you feel most comfortable with, and have this process communicate through inter-process-communication mechanisms (e.g., pipes, sockets, shared memory, files) with thin client libraries that can be used by clients in the other languages. That way, you are forced to create a very clean communication/usage protocol and to keep some separation of concerns (because all communication has to be serialized); you gain robustness (a fatal error in your application process or in a client does not necessarily mean a crash in the other one), and you don't have to use FFI mechanisms, but rather things like sockets and pipes. Even better, if you decide to publish your communication protocol, third-party developers can create wrappers in their exotic language of choice without needing to link directly against your components.
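To make the idea concrete, here is a minimal sketch in Python (the port, the line-oriented protocol, and the ADD command are all invented for illustration): the core runs as one process, and a thin client in any language only needs a socket.

import socketserver

class CoreHandler(socketserver.StreamRequestHandler):
    # Core process: speaks a tiny line-oriented protocol.
    # Request:  "ADD <a> <b>\n"    Response: "<sum>\n"
    def handle(self):
        for line in self.rfile:
            cmd, a, b = line.split()
            if cmd == b"ADD":
                self.wfile.write(b"%d\n" % (int(a) + int(b)))

if __name__ == "__main__":
    # A thin client in C#, Java, or C++ just opens a socket,
    # writes "ADD 2 3\n" and reads back "5\n" -- no FFI involved.
    with socketserver.TCPServer(("127.0.0.1", 9000), CoreHandler) as server:
        server.serve_forever()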
This kind of paradigm has been used with success over the years in the Unix world and in software like Mathematica. In a similar move, the developers of Java made the Java plugin work out-of-browser, in a separate process, to reduce interop complexity.
Ah, I forgot: you might also use ICE (http://www.zeroc.com/overview.html). It is probably the lightest interop layer you can use to communicate cleanly between Java, C++, C#, Python, and a few others. They even have an embedded solution...
Pick common-denominator standards: REST or SOAP/XML over HTTP can be consumed by a lot of languages, including both Java and C#.
You'll be saddled with heavyweight WS-* standards and plenty of network latency, but if interoperability is your goal, this is one way to achieve it.
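For instance, a hypothetical sketch using only the Python standard library (the /add endpoint and the JSON payload are invented for illustration): the core is fronted by a tiny HTTP service that any language's HTTP client can call.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CoreAPI(BaseHTTPRequestHandler):
    # Exposes the core library over plain HTTP/JSON so that
    # Java, C#, C++, or embedded clients can all consume it.
    def do_POST(self):
        if self.path == "/add":
            length = int(self.headers["Content-Length"])
            body = json.loads(self.rfile.read(length))
            result = json.dumps({"sum": body["a"] + body["b"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(result)))
            self.end_headers()
            self.wfile.write(result)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CoreAPI).serve_forever()

A Java client would use HttpURLConnection and a C# client HttpClient; neither needs to know what language the core is written in.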
You might take a look at the Haxe language. It "compiles" to a dozen other languages.
https://haxe.org/
Hello,
This is my story, but I guess it holds true for most programmers.
We begin programming with some simple Hello World program. We practice and add functions/classes to the program, but they still maintain the Hello World style: functions calling other functions and the standard library.
But when it comes to real-world projects (I'm only familiar with open source), a lot of other things come into the picture. Then begin the hardships of this newbie programmer.
Project Flow:
Program is not running as expected. Make use of a debugger.
Making use of third-party libraries. Today, we have a library in every popular language for almost everything we need.
Multiple people working on the same project. Use version control systems.
Project is growing big. Build automation.
Lots of people start using your application, and you need to port it to different platforms (operating systems/architectures). Need for cross-compilation.
I don't know why, but we need a unit testing framework and/or unit tests.
What else???
The problem here is the newbie programmer's lack of knowledge about the existence of these things.
What I mean is, when I started looking into some real-world projects (open source), I didn't know what this was or why we need to do it:
$./configure
$make
$make install
Recently I became aware of the keyword "build automation". I was in need of a library which was available for Linux, but I needed it on Windows. I didn't know that this is called "cross-compilation" and that tools like MinGW/MSYS exist for this purpose. I had to learn these things the hard way. I wish someone had told me about the existence of such things; that would have saved me a lot of time.
Today I ran into a performance problem and felt the need for something; I guess the thing I'm looking for is a profiler. Thanks to my involvement in open-source projects, I'm aware of the term unit testing, even though I never realized or felt the need for it myself.
Though this (hard) way of learning things has some big advantages (now I'm able to figure out a solution to some unknown thing very quickly, and unlike my friends I don't get stuck at any point), I hate the waste of time involved. You would not believe how much time I wasted figuring out Makefiles and the GNU Build System.
So, what am I looking for in this post?
Please complete the Project Flow; I want to see everything that is involved.
For each of the tasks in the Project Flow list, I want to see the following information:
The most popular solutions/tools available.
A link to the Wikipedia list of all the alternatives.
[optional] Suggestions for good books/tutorials/guides for learning about it, or links to relevant SO posts/tags.
I know some things are language- and OS-specific. I would say we have only a handful of major platforms (Linux/Unix, Windows, Java, .NET) and a handful of major languages (C, C++, Java, .NET, Python). Addressing these is more than sufficient.
Example:
Making use of libraries:
Libraries are distributed in any of the following forms:
Source distribution
Static libraries (*.lib for Windows / *.a for Linux)
Dynamic libraries (*.dll for Windows / *.so for Linux) - see the sketch below
.NET assemblies
I don't know about Java
Resources (now, once I know the above info, I can search for resources on my own):
http://en.wikipedia.org/wiki/Library_(computing)
How To Write Shared Libraries
http://www.tenouk.com/ModuleBB.html
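As a small illustration of consuming a dynamic library from the list above (a sketch, assuming a typical Linux system where the C math library lives in libm.so.6), Python's ctypes can load a *.so or *.dll directly:

import ctypes

# Load the C math library (libm.so.6 on typical Linux;
# on Windows you would load a *.dll such as "msvcrt" instead).
libm = ctypes.CDLL("libm.so.6")

# Declare the signature of cos() so doubles are marshalled correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # prints 1.0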
Note:
Please note that I'm not asking you to suggest info on how to learn each of these things. I'm asking what other such things are involved, and what the alternatives are for each of them.
Your question seems to be too broad, but in comments you asked for examples of different methods. This would at least enable you to find some other pointers and make your own decisions.
Rational Unified Process (RUP)/Unified Process (UP)
eXtreme Programming (XP)
Scrum
Test Driven Development (TDD)/Behaviour Driven Development (BDD)
Lean Software development
Kanban
Feature Driven Development (FDD)
Those and many more are software development methods, which comprise different principles, processes, and roles. Most of them also come with sets of practices for different purposes, e.g. the ones you mention in your question. These are quite widely used in the real world, although many companies use different combinations along with their own processes and methods. One thing that is not so widely used, but might give you some interesting views on development, is formal methods, which are precise but most of the time too impractical.
Though I come from a purely PHP background on the web development side of programming, I have also spent much time with C# and C++ on the desktop.
I don't really want to spark any flame wars, but:
When should you use scripting languages over compiled languages for website development?
(and vice versa)
Just to clarify, for the sake of this question, I define a "scripting language" to mean an interpreted language like PHP, Python, or Ruby, and a "compiled language" to mean a strongly typed, compiled language like C#, C++, Java, or VB.
It depends :-)
On...
...where and how you want to deploy the application
...the skillsets of the engineers in your organization
...what third-party components you want to integrate with or incorporate
Deployment
If you need to be able to deploy the solution on any of dozens of different possible platforms, you may find that you're better off with PHP than Java (for example). There are hundreds of thousands of Java hosting providers out there, but there are probably millions of PHP hosting providers. (And I say this as a Java-head who finds PHP "so so" at best.)
This goes to OS as well. Mono aside, .Net stuff is going to limit you to Windows-based deployment (or lagging behind the cutting edge and having to very, very rigorously test each and every 3rd party component you bring in, to ensure that it doesn't have Mono...issues).
Skillsets
Coming up to speed in an environment or language is non-trivial. For most of us, picking up the basics is pretty quick, but you may not be making the best architectural/design decisions because you're (comparatively) weak on the environment/language. Skillsets count.
Related to this: Skillset hiring counts. Is it easier (and/or cheaper) to hire PHP devs with 3-4 years of experience, or Java devs with 3-4 years of experience, or C# devs, or...?
Buying/finding/integrating vs. building
In your target area of development, which server-side components or packages will you want to integrate with? PHP has a vast array of things available for it, as does Java, as does C# or ASP.Net. But they're different things (by and large), so you'll want to look at what you actually want to use.
Conclusion
So I think it's less a matter of compiled vs. scripted (in today's world), and more a matter of what's the best fit by other criteria for what you're trying to do.
Addendum: Both/And
And of course, there's always "both/and". For instance, I do work in two main, unrelated environments right now, both using a combination of scripted and compiled resources. (One of them is Java + JavaScript via Rhino on Tomcat, the other is compiled COM objects + JScript [again, server-side] on IIS.)
A programmer can write good/bad, fast/slow, scalable/unscalable code in any language, although some languages and technologies make it harder to do. In my experience, with scripting languages you can produce a small to medium-scale application faster than you can with compiled languages like Java. However, as applications grow in size, compiled languages become more suited to the task. I think this comes from strongly typed objects, deeper layers of architecture to manage tasks, and more QA frameworks to verify that things are running as they should as changes occur.
I find it to be mostly a matter of opinion. At first I hated the pre-compiled web applications ASP.NET provides, but I've gotten used to it, so I don't hate it anymore. It has advantages and disadvantages:
Pro
pre-compiled web applications are easy to deploy, often you'll only have to update the bin-directory
pre-compiled web applications perform well
you don't have to upload source code, which is nice imho.
Con
updating a pre-compiled web app generally means the web application is reset, so unless you've moved session state out of process, it'll end all sessions and log everyone out
rebuilding a large web application can take some time, which is added to the time it took you to write the changes in the first place. I am sometimes impatient.
I've always liked how easy it is to just update one file in a PHP project without having to rebuild the project or anything like that; on the other hand, .NET has a nice IDE that allows you to debug everything, from the back end (C#, VB.NET) to the front end (JavaScript), in one package.
But again; both have advantages and disadvantages.
I wouldn't draw such a sharp distinction between compiled and interpreted languages - this is really just an implementation detail, and tends to change with time (faster than the languages themselves change.) Case in point - thanks to Facebook, PHP is now a "compiled language" too. Another case in point - I enjoy web development with Scheme - and my preferred Scheme implementation now runs a VM and in that sense is at least as compiled as Java is.
So I think the issues to focus on are the expressiveness of the language, its performance, and its ease of deployment - compiled vs. interpreted is only important insofar as it relates to these things.
I'm a big fan of compiled languages everywhere, if for nothing more than the static typing. On the other hand, scripting languages are very convenient -- no binaries to deal with, only text files, which is a big win for web servers.
In the end, it doesn't really matter -- use whatever language you know and feel most comfortable with for the job.
I think that speed is a key concern in a web application, in particular
how fast is it to write my code
how fast is it to fix my code
how fast is it to refactor my code
how fast is it to test my code
That is, I am concerned about the speed of the slowest link: myself. Anything else is fast enough for Twitter-like loads.
Today, the number one on my evaluation list for a new project would be Tornado and Python.
If I had a choice of platforms, of course.
Ah, and Python is among the fastest of the scripting languages.
With scripting languages, anyone who has a copy of your software can potentially modify your source code, because the source ships with the application.
With compiled languages, anyone who has a copy of the software cannot simply modify your source code, because it is compiled.
So I guess, it depends upon your preferences.
I've taken courses, studied, and even developed a little by myself, but so far I've only worked with Microsoft technologies, and until now I have had no problems with that.
I recently got a job at a Microsoft gold partner company doing development in C#, VB.NET, and ASP.NET.
I'd like tips on how to diversify, learning technologies other than those from Microsoft. Not necessarily for finding another job; I think my current job fits my interests. But by teaching myself other languages, frameworks, and databases, I may become a better programmer as a whole and (maybe), at the end of it all, have more job opportunities and the freedom to choose what I'll be working with.
What should I start with? how should I do it?
If you're comfortable with C# and VB, learn a language that uses different paradigms. The usual suspects would be Ruby, Erlang, Haskell, Lisp. All of these are available for Windows and other platforms. You might have to get used to different tools to interact with them but that's not necessarily a bad thing.
At the risk of sounding trite, why not install some variant of Linux on a cheap desktop? The mere act of setting up a Linux box is educational.
Once you find your way around it, do some shell scripting and install things like a web server. That should keep you busy for a while. Once you're past that, play with some dynamic languages like Perl, Ruby, Python, or PHP.
If you're interested in other languages, just pick one and away you go. You sound like you have enough experience to be apt in another language.
If you're looking into a new desktop-development-language then I'd recommend Java or Python, both of which you'd ease into with your C# and VB.NET experience.
If you're looking into web programming, go for PHP?
Browse some source examples and see what catches your eye as the most interesting.
Pick up a book on that language.
Ideally, one should know at least one example from each of the major "paradigms":
Assembly (nowadays a dying art, and not that useful)
plain C
one of the OO-variants of C (C++, objective C)
Java or C# (they are very similar, probably no need to learn both)
a scripting language like Ruby or Perl
JavaScript (preferably via Crockford's book)
a non-pure functional language, e.g. Scheme (PLT Scheme is a nice learning environment)
a pure functional language like Haskell or OCaml
Erlang (somewhat of a class of its own)
a mathematical/statistical language like R, or J (an APL-successor)
Microsoft technologies aren't bad to start with. My advice would be:
Make sure you acquire sound knowledge about the foundations of programming and the technologies you use. The more basics you know, the more independent you'll be from the latest fads:
Read "Windows Internals" to understand the operating system you're working with. In the process, you will understand other operating systems a lot better.
Toy around with other languages. Learn the differences between statically typed and duck-typed languages, functional programming languages, imperative programming languages, whatever.
Learn the language you use as well as you can. Become Jon Skeet!
In other words, don't move sideways first. Dig deeper and become better at understanding what you do.
It would be a nice idea to get associated with one of the open-source programs on http://sf.net. That way you can do your learning on a new platform while producing some legitimate code, and you also get to look at some good coding practices. Last but not least, it's a way of giving something back to the software community.
Maybe think of a project that would be of use to you in your daily life and see if you could develop that in a suitable language. That way you have a goal and at the end of the project you have something useful.
Alternatively, why not try learning something not directly programming-related? Project management might be of use for future roles, or you could do some reading about the history of technology.
These won't add any new languages to your CV, but they might add some different aspects to your thinking that could make you a more well-rounded potential employee.
I see two main directions to go:
Specific technologies. Select these depending upon how you want to extend yourself, new language (perhaps scripting if you haven't done that, perhaps functional programming), or new techniques (for example, UI programming, or low-level network programming depending upon what you haven't already done), or new OS (Linux if you're a Windows person).
Or, look at higher-level problems, for example design methods and team organisation. Read books such as Brooks' Mythical Man-Month and Beck's Extreme Programming. Consider how to deal with problems bigger than can be solved by one person. Read up on the (Rational) Unified Process and UML. Explore revision control systems and testing techniques, not just unit tests but other flavours. Think about how you would organise a team if you were the leader. How would the tasks be subdivided? How would communication be managed?
I've heard from someone that they're using a business process automation tool (like WebLogic Integration) as a programming language (which sounds kind of stupid to me) in order to make things declarative. They put all the logic inside a process: every single if and while.
But isn't a process a step-by-step description of how to reach a target?
To me that makes a process completely imperative. What do you think?
Orchestration languages are in fact imperative scripting languages with conditionals, looping and other traditionally imperative constructs, typically expressed through a flowchart-based user interface. They certainly do not (in my experience) implement tail-recursive functional programming, backward chaining or any other paradigm that might reasonably described as declarative in the generally accepted sense.
MS Workflow Foundation is advertised as having a rules engine, but this is fairly simplistic and doesn't really do forward chaining, except in a somewhat roundabout way. ILOG actually makes an adaptor for their rules engine specifically to drop it into MS workflow foundation.
Other workflow tools have better rule engines and a proper forward chaining system that could be viewed as declarative. However, once you get into the workflows themselves with looping and conditional branches you are most definitely in the territory of imperative programming.
However, some systems also implement a petri-net or state change based markup system for workflow, which might reasonably be described as declarative, but they still have an imperative mode of interaction with the underlying system. They still update variables and have side-effects.
I have seen one or two applications (for example TOAD for data analysis) actually using MS Workflow Foundation as a scripting language. As such, it allows you to add a scripting facility to the application that (at least for marketing purposes) doesn't require programming skill to use.
In practice, a tool designed for writing, editing and running SQL queries being fitted with a scripting framework for 'non-programmers' makes one wonder what audience it's really aimed at. As a scripting language, workflow modelling tools are fairly clumsy and offer very limited opportunities for abstraction; in practice a .Net based scripting language such as IronPython or Boo, particularly in conjunction with a decent templating mechanism, would be a very powerful addition to such a tool.
One point about graphical languages of this sort is that they do not scale well with complexity. A similar issue applies with ETL tools as well. I have seen a provisioning application (see below) that was done (ironically) with Crossworlds (now known as Websphere Integrator). Within a month of starting on the application it became obvious that the graphical workflow language was not going to scale with the complexity of the application and it was re-built, based on a custom rules engine written in Java and a fairly large body of bespoke java code.
This type of issue is not uncommon with EAI and Orchestration systems and is one of the reasons that SOA is hard to implement in practice. What you are doing is actually pushing business logic into a very clumsy programming environment that is not being officially acknowledged as such. This will work in a simple case but is hard to make work on a complex system - this is sort of a guilty secret in SOA circles.
Coda:
A provisioning application is a system that takes plans for telecommunication services contracts (in this case for a mobile phone network) and pushes configuration information, based on rules, out to various switches, billing applications, and other applications. They tend to be fairly complex. When you buy a mobile phone plan with so many minutes and so many texts per month, a provisioning application pushes out configuration information to the rest of the system about your access and billing rules.
It is definitely not what people usually mean when they talk about declarative programming, even if in some sense it can be called declarative.
I have been doing a little reading on Flow-Based Programming over the last few days. There is a wiki which provides further detail, and Wikipedia has a good overview of it too. My first thought was, "Great, another proponent of lego-land pretend programming", a concept harking back to the late 80s. But as I read more, I must admit I have become intrigued.
Have you used FBP for a real project?
What is your opinion of FBP?
Does FBP have a future?
In some senses, it seems like the holy grail of reuse that our industry has pursued since the advent of procedural languages.
1. Have you used FBP for a real project?
We've designed and implemented a DF server for our automation project (dispatcher, component interface, a bunch of components, a DF language, a DF compiler, a UI). It is written in bare C++ and runs on several Unix-like systems (Linux x86, MIPS, AVR32, etc., and Mac OS X). It lacks several features, e.g. sophisticated flow control and complex thread control (there is only a not-too-advanced component for the latter), so it is just a prototype, even though it works. We're now working on a full-featured server. We learnt a lot while implementing and using the prototype.
Also, we'll make a visual editor some day.
2. What is your opinion of FBP?
2.1. First of all, dataflow programming is ultimate fun
When I met dataflow programming, I felt like I did 20 years ago, when I first met programming. Although DF programming differs from procedural/OOP programming, it's just another kind of programming. There are lots of things to discover, even sooo simple ones! It's very funny when, as an experienced programmer, you meet a DF problem that is a very, very basic thing but was completely unknown to you before. So, if you jump into DF programming, you will feel like a rookie programmer who has just met the "cycle" or the "condition".
2.2. It can be used only for specific architectures
It's just a hammer, which is made for hammering nails. DF is not suitable for UIs, web servers, and so on.
2.3. Dataflow architecture is optimal for some problems
A dataflow framework can do magic things. It can parallelize procedures that were not originally designed for parallelization. Components are single-threaded, but when they're organized into a DF graph, they become multi-threaded.
Example: did you know that make is a DF system? Try make -j (see the man page for what -j does). If you have a multi-core machine, compile your project with and without -j and compare the times.
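As a minimal sketch of this effect in Python (the components and the graph here are invented for illustration): each component is a single-threaded loop reading from an input queue and writing to an output queue, and wiring two of them together yields a pipeline whose stages run in parallel.

import threading, queue

def component(func, inq, outq):
    # A single-threaded DF component: read, transform, write.
    def run():
        while True:
            item = inq.get()
            if item is None:      # sentinel: propagate shutdown
                outq.put(None)
                return
            outq.put(func(item))
    threading.Thread(target=run).start()

# Wire a tiny graph: source -> double -> add_one -> sink
q1, q2, q3 = (queue.Queue(maxsize=4) for _ in range(3))  # bounded buffers
component(lambda x: x * 2, q1, q2)
component(lambda x: x + 1, q2, q3)

for i in range(5):
    q1.put(i)
q1.put(None)

while (item := q3.get()) is not None:
    print(item)  # 1, 3, 5, 7, 9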
2.4. Optimal split of the problem
If you're writing a program, you often split the problem into smaller sub-problems. There are usual split points for well-known sub-problems which you don't need to implement; you just use existing solutions, like SQL for the DB or OpenGL for graphics/animation, etc.
The DF architecture splits your problem in a very interesting way:
the dataflow framework, which provides the architecture (just use an existing one),
the components: the programmer creates components; the components are simple, well-separated units - it's easy to make components;
the configuration: a.k.a. dataflow programming: the configurator puts the dataflow graph (program) together using components provided by the programmer.
If your component set is well designed, the configurator can build systems the programmer has never even dreamed of. The configurator can implement new features without disturbing the programmer. Customers are happy because they get a personalised solution. The software manufacturer is also happy, because they don't need to maintain several customer-specific branches of the software, just customer-specific configurations.
2.5. Speed
If the system is built from native components, the DF program is fast. The only time lost is in message dispatching between components, and compared to a simple OOP program that is minimal.
3. Does FBP have a future?
Yes, sure.
The main reason is that it can solve massive multiprocessing issues without introducing brand new, strange software architectures or weird languages. Dataflow programming is easy, and I mean both component programming and dataflow configuration building. (Even writing a dataflow framework is not rocket science.)
Also, it's very economical. If you have a good set of components, you only need to put the lego bricks together. A DF program is easy to maintain. Building a DF configuration requires no experienced programmer, just a system integrator.
I would be happy if native systems spread, with the door open for custom component creation. There should also be a standard DF language, so that it could be used with platform-independent visual editors and several DF servers.
Interesting discussion! It occurred to me yesterday that part of the confusion may be due to the fact that many different notations use directed arcs, but use them to mean different things. In FBP, the lines represent bounded buffers, across which travel streams of data packets. Since the components are typically long-running processes, streams may comprise huge numbers of packets, and FBP applications can run for very long periods, perhaps even "perpetually" (see a 2007 paper on a project called Eon, mostly by folks at UMass Amherst). Since a send to a bounded buffer suspends when the buffer is (temporarily) full, and a receive suspends when it is (temporarily) empty, indefinite amounts of data can be processed using finite resources.
By comparison, the E in Grafcet comes from Etapes, meaning "steps", which is a rather different concept. In this kind of model (and there are a number of these out there), the data flowing between steps is either limited to what can be held in high-speed memory at one time, or has to be held on disk. FBP also supports loops in the network, which is hard to do in step-based systems - see for example http://www.jpaulmorrison.com/cgi-bin/wiki.pl?BrokerageApplication - notice that this application used both MQSeries and CORBA in a natural way. Furthermore, FBP is natively parallel, so it lends itself to programming of grid networks, multicore machines, and a number of the directions of modern computing. One last comment: in the literature I have found many related projects, but few of them have all the characteristics of FBP. A list that I have amassed over the years (a number of them closer than Grafcet) can be found in http://www.jpaulmorrison.com/cgi-bin/wiki.pl?FlowLikeProjects .
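To illustrate that back-pressure behaviour with a toy sketch (the buffer size and the sleep times are arbitrary): a bounded queue makes a fast producer block until a slow consumer catches up, so an arbitrarily long stream is processed within a fixed memory bound.

import threading, queue, time

buf = queue.Queue(maxsize=2)  # the bounded buffer between two components

def producer():
    for n in range(6):
        buf.put(n)            # blocks while the buffer is full
        print("sent", n)
    buf.put(None)             # sentinel: end of stream

threading.Thread(target=producer).start()

time.sleep(1)                 # the consumer is slow to start...
while (n := buf.get()) is not None:
    print("received", n)
    time.sleep(0.2)           # ...and slow to consume; the producer keeps pace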
I do have to disagree with the comment about FBP being just a means of implementing FSMs: I think FSMs are neat, and I believe they have a definite role in building applications, but the core concept of FBP is of multiple component processes running asynchronously, communicating by means of streams of data chunks which run across what are now called bounded buffers. Yes, definitely FSMs are one way of building component processes, and in fact there is a whole chapter in my book on FBP devoted to this idea, and the related one of PDAs (1) - http://www.jpaulmorrison.com/fbp/compil.htm - but in my opinion an FSM implementing a non-trivial FBP network would be impossibly complex. As an example the diagram shown in
is about 1/3 of a single batch job running on a mainframe. Every one of those blocks runs asynchronously with all the others. By the way, I would be very interested in hearing more answers to the questions in the first post!
1: Push-down automata - http://en.wikipedia.org/wiki/Pushdown_automaton
Whenever I hear the term flow-based programming I think of LabVIEW, conceptually: i.e., component processes whose scheduling is driven primarily by changes to their input data. This really IS lego programming, in the sense that the LabVIEW platform was used for the latest crop of Mindstorms products. However, I disagree that this makes it a less useful programming model.
For industrial systems, which typically involve data collection, control, and automation, it fits very well. What is any control system if not data in transformed to data out? I.e., what component in your control scheme would you not prefer to represent as a black box in a bigger picture, if you could do so? To achieve that level of architectural clarity using other methodologies, you might have to draw a data-domain class diagram, then a problem-domain run-time class relationship, then a use-case diagram on top of that, and flip back and forth between them. With flow-driven systems you have the luxury of being able to collapse a lot of this information together accurately enough that you can realistically design a system visually once the components are built and defined.
One question I never had to ask when looking at an application written in LabVIEW is "What piece of code set this value?", as it was inherent and easy to trace backwards from the data; also, mistakes like multiple unintended writers were impossible to create by accident.
If only that was true of code written in a more typically procedural fashion!
1) I built a small FBP framework for an anomaly detection project, and it turned out to be a great idea.
You can also have a look at some of the KNIME videos, that give a good idea of what a flow based framework feels like when the framework is put together by a great team. Admittedly, it is batch based and not created for continuous operation.
By far the best example of flow-based programming, however, is Unix pipes, one of the oldest and most overlooked FBP frameworks. I don't think I have to elaborate on the power of *nix pipes...
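As a quick sketch of that analogy (assuming a Unix-like system with ps, grep, and wc on the PATH), every process in a shell pipeline is an FBP component and the OS pipes are the connections; the same pipeline can also be wired up programmatically:

import subprocess

# Equivalent of the shell pipeline:  ps aux | grep python | wc -l
ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "python"], stdin=ps.stdout, stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=grep.stdout, stdout=subprocess.PIPE)
ps.stdout.close()    # allow SIGPIPE to propagate upstream
grep.stdout.close()

print(wc.communicate()[0].decode().strip())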
2) FBP is a very powerful tool for a large set of problems. The intrinsic parallelism is a great advantage, and any FBP framework can be made completely network transparent by using adapter modules. Smart frameworks are also absurdly fault tolerant, and able to dynamically reload crashed modules when necessary. The conceptual simplicity also allows cleaner communication with everybody involved in a project, and much cleaner code.
3) Absolutely! Pipes are here to stay, and they are one of the most powerful features of Unix. The advantages inherent in an FBP framework compared to a static program are many, and they trivialise change, to the point where some frameworks can be reconfigured while running, with no special measures.
FBP FTW! ;-)
In automotive development, there is a language-agnostic messaging protocol that is part of the MOST specification (Media Oriented Systems Transport); it was designed for communication between components over a network or within the same device. Systems usually have both a real and a visualized message bus, so you effectively have a form of flow-based programming.
That was what made the light bulb go on for me several years ago and brought me here. It really is a fantastic way to work, and so much more fun than conventional programming. The message catalog forms the central specification and point of reference. It works well for both developers and management, i.e. management are able to browse the message catalog instead of looking at source.
With integrated logging also referencing the catalog to produce intelligible analysis, things can get really productive. I have real-world experience of developing commercial products this way. I am interested in taking things further, particularly with regard to tools and IDEs. Unfortunately, I think many people within the automotive sector have missed the point about how great this is and have failed to build on it. They are now distracted by other fads, having failed to realize that there was far more to MOST development than the physical bus.
I've used Spring Web Flow extensively in Java web applications to model (typically) application processes, which tend to be complex, wizard-like affairs with lots of conditional logic as to which pages to display. It's incredibly powerful. A new product was added, and I managed to recut the existing pieces into a completely new application process in an hour or two (adding a couple of new views/states).
I also looked into using OS Workflow to model business processes but that project got canned for various reasons.
In the Microsoft world you have Windows Workflow Foundation ("WWF"), which is becoming more popular, particularly in conjunction with Sharepoint.
FBP is just a means of implementing a finite state machine. It's nothing new.
I realize that it is not exactly the same thing, but this model has been used for years in PLC programming. The IEC standards call it Sequential Function Chart, but many people call it Grafcet after a popular implementation. It offers parallel processing and defines transitions between states.
It's being used in the Business Intelligence world these days to mash up and process data. Data-processing steps like ETL, querying, joining, and producing reports can be done by the end user. I'm a developer on an open system, ComposableAnalytics.com. In CA, the flow-based apps can be shared and executed via the browser.
This is what MQ Series, MSMQ and JMS are for.
This is the cornerstone of Web Services and Enterprise Service Bus implementations.
Products like TIBCO and Sun's JCAPS are basically flow-based without using this particular buzz-word.
Most of the work of the application is done with small modules that pass messages through a processing network.