Why are Micro-Services Architectures not based on Enterprise Service Buses?

What reasons are there against (or for) using the features of an Enterprise Service Bus when building an overall service adhering to a micro-service architecture (http://martinfowler.com/articles/microservices.html)? Why should we use dumb pipes and smart endpoints rather than smarter pipes that would let us develop simpler services?

This is a huge question and probably can't be answered effectively in SO's Q&A format.
It depends what you are doing with it.
If you are building a single product which consists of lots of small pieces of functionality that can be thought of as independent, then microservices may be the way to go.
If you are a large enterprise organisation where IT is not seen by the board of directors as a competitive advantage, and you work in a heavily regulated industry where new standards have to be applied across globally distributed projects with their own IT departments (some from new acquisitions), and you can't centrally control all the endpoints and applications within your organisation, then maybe you need an ESB.
I won't try to list ALL the advantages of both approaches here, as any such list would be incomplete and would quickly go out of date.
Having said that, in an effort to be useful to the OP:
If you look up how Spotify and Netflix do microservices you can find many things they like about the approach, including but not limited to: ease of blue/green deployment of individual services, decoupled team structures, and isolation of failures.
ESBs allow you to centrally administer and enforce policies, like legal requirements, audit everything in one place rather than hoping each team got the memo about logging everything, provide global statistics about load and uptime, as well as many other things. ESBs grew out of large enterprises where the driver was not customer response time on a website and speed of innovation (amongst other things) but Service Level Agreements, cost effectiveness and regulations (amongst other things).
There is a lot of value in both approaches. Microservices are being written about a lot at the moment, just as ESBs were 10-15 years ago. Maybe that's a progression, maybe it's just a change, maybe it's just that consumer product companies need to market themselves and large enterprises like to keep details private. We may find out in another 10 years. For now, it depends heavily on what you are doing. As with most things in programming, I'd start out simple and only move to the more complex solution if you need to.

The term ESB has gotten overloaded, primarily in the Java world, to mean a big and complex piece of infrastructure that ends up hosting a bunch of poorly implemented logic in a central place.
Lighter-weight technologies like Apache Camel or NServiceBus don't encourage this kind of approach and indeed follow the "dumb pipes / smart endpoints" approach that has served as the backbone of the internet from the beginning.
NServiceBus specifically focuses on providing a higher level framework than most messaging libraries to make it easier to build smart endpoints that are more reliable through its deeper support for once-and-only-once message processing.
Full disclosure - I'm the founder of NServiceBus.

Because services are isolated and pipes are reused.
The core idea of microservices is isolation: any part of the system can be replaced without affecting the others. Smart pipes have configuration, state, and complex (which often means hard-to-predict) behavior. Thus, smart pipes are less likely to retain their exact behavior over time.
And a pipe change affects every service attached to it, while a service change affects only the services that use it.

The problem with how an ESB is typically used is that it creates coupling between the ESB and the services by putting business logic into the ESB itself. This makes it more difficult to deploy a single service independently, and it steadily makes the ESB more complex and harder to maintain.

Related

Can someone explain an Enterprise Service Bus to me in non-buzzspeak?

Some of our partners are telling us that our software needs to interact with an Enterprise Service Bus. After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth. I'm just trying to get a feel for what our partners are telling us. Am I correct in dismissing our partners' request as just trying to get our software to be more buzzword-compliant, or are they telling us something we should listen to (even if encoded in buzzspeak)?
Although ESB is based on messaging, it is not "just" messaging and not just a buzzword.
So if you start with plain old async messaging, the early networks tended to be very point-to-point. You had to wire up (i.e. configure through some admin interface) each connection and each pair of destinations and if you dared to move anything around invariably something broke. Because the connection points were wired by hand these networks never achieved high connection density. The incremental cost was too high and did not scale. There was also a lot of access control and policy embedded in the topology. The lack of connection density actually favors this approach to security, even though it inhibits flexibility.
The ESB attempts to address these issues with...
Run-time resolution of destinations/services/resources
Location transparency
Any-to-any connectivity and maximum connection density
Architected for redundancy, horizontal scalability, failover
Policy, access control, rules externalized from topology
Logical messaging network layer implemented atop the physical messaging network layer
Common namespace
So when your customer asks for ESB compatibility, they want things like the above. From an application standpoint, this also implies...
Avoiding message affinities such as requirements to process in strict sequence or to address requests only to specific nodes instead of to a generic network destination
Ability to resolve destinations dynamically at run time (i.e. add another instance of a queue and it automatically starts getting traffic, delete one and traffic routes to the remaining nodes)
Requestor and provider apps decoupled from knowing where each other "lives". Requestor makes one connection, regardless of how many services it might need to call
Authorize by policy rather than by topology
Service provider apps able to recognize and handle dupes (as per the JMS spec, see "functional duplicate" due to session handling; one common way to handle this is sketched after this list)
Ability to run multiple active instances of a service provider application
Instrument the service provider applications so you can inquire on the status of the network or perform a test without sending an actual transaction
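To make a couple of these concrete, here is a minimal JMS-flavoured sketch of the requestor and provider sides: the requestor resolves a logical destination at run time instead of a physical machine, and the provider screens out functional duplicates. The JNDI names are hypothetical, and the in-memory ID set stands in for what would be a durable store in a real system.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class EsbStyleRequestor {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // Logical names (hypothetical); the provider decides at run time
            // which physical broker and queue they resolve to.
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue orders = (Queue) ctx.lookup("jms/Orders");

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // The requestor never learns where the provider "lives".
                session.createProducer(orders)
                       .send(session.createTextMessage("<order id=\"42\"/>"));
            } finally {
                conn.close();
            }
        }
    }

    class DedupingProvider implements MessageListener {
        // In production this would be a durable store keyed by JMSMessageID,
        // not an in-memory set that is lost on restart.
        private final Set<String> processed = ConcurrentHashMap.newKeySet();

        @Override
        public void onMessage(Message message) {
            try {
                if (!processed.add(message.getJMSMessageID())) {
                    return; // functional duplicate (e.g. a redelivery): drop it
                }
                handle(message);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }

        private void handle(Message message) { /* business logic here */ }
    }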
On the other hand, if your client cannot articulate these things then they may just want to be able to check a box that says "works with the ESB."
I'll try & keep it buzzword free (but a buzz acronym may creep in).
When services/applications/mainframes/etc... want to integrate (i.e. send messages to each other) you can end up with quite a mess. An ESB hides that mess inside of itself (or itselves) so that an organisation can pretend that there isn't a mess and that it has something manageable. It then wraps a whole load of features around this to make the box even more enticing to the senior people in an organisation who'll make the decision to buy such an expensive product. These people typically want to introduce a large initiative which costs a lot of money, to prove that they are 'doing something' and know how to spend large amounts of money. If this is an SOA initiative then various vendors will have told them that an ESB is required to make the vendors' vision of what SOA is work (typically once the number of services they might want passes a trivial number).
So an ESB is:
A vehicle for vendors to make lots of money;
A vehicle for consultants to make lots of money;
A way for senior executives (IT Directors & the like) to show they can spend lots of money;
A box to hide a mess in;
A total PITA for a technical team to work with.
After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth
You are correct, partially because ESB is a term that fits nicely alongside another buzzword, legitimate or not: governance (i.e. it helps you manage who is accessing your endpoints and report metrics - metrics, btw, are what all the suits like to see, so that may be a contributor).
Another reason they might want a platform-neutral device is so that any services they consume are always exposed as endpoints from a central location, instead of from a specific machine resource. The ESB makes the actual physical endpoints of your services irrelevant to them (which they shouldn't care much about anyway), and it enables you to move services around while they keep consuming the same ESB endpoint.
Apart from a centralized repository for Discovery, an ESB also makes side by side versioning of services easier. If I had a choice and my company had the budget, we would have purchased IBM's x150 appliance :(
Thirdly, many of the more advanced buses, like SoftwareAG's product if I recall correctly, can natively expose legacy data, such as data sitting on mainframes, as services via adapters, without the need for coding.
I don't know if their intent is to leverage all the benefits an ESB provides, or as you said, make it buzzword compliant.
After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth
That's about right. Sometimes an ESB will go a little bit further and include additional features like message delivery guarantees, confirmation/acknowledgement messages, and so on. The presence of an ESB also usually explicitly or implicitly creates a new protocol where none previously existed, which is another important consideration. (That is, some sort of standard or interface has to be set regarding the format of the messages.)
Am I correct in dismissing our partners' request as just trying to get our software to be more buzzword-compliant, or are they telling us something we should listen to (even if encoded in buzzspeak)?
You should always listen to your customers, even if it initially sounds silly. It's usually worth at least spending the effort to decide what's going on. Reading between the lines, what your partners probably mean is that they want a way for your service to integrate more easily with their own services and products.
An enterprise service bus handles the messaging between systems in a standard way. You communicate with the bus in the same way from all your platforms, and the bus handles translating to the individual communication mechanism needed by each specific endpoint. In other words, you write all your code to talk to the bus using a common messaging scheme, and the bus takes your common scheme and translates it so the endpoint understands it.
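As a toy illustration of that last point (all names here are hypothetical, not any particular ESB's API): the application always builds one canonical message, and per-endpoint translators owned by the bus do the format conversion.

    // Illustrative sketch only: one canonical scheme, per-endpoint translators.
    record CanonicalMessage(String type, String payload) {}

    interface EndpointTranslator {
        String translate(CanonicalMessage msg); // to the endpoint's native format
    }

    class MainframeCsvTranslator implements EndpointTranslator {
        public String translate(CanonicalMessage msg) {
            return msg.type() + "," + msg.payload(); // the legacy system wants CSV
        }
    }

    class RestJsonTranslator implements EndpointTranslator {
        public String translate(CanonicalMessage msg) {
            return "{\"type\":\"" + msg.type()
                 + "\",\"payload\":\"" + msg.payload() + "\"}";
        }
    }

Your code only ever builds a CanonicalMessage; which translator runs is the bus's concern, not yours.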
The simplest explanation is to explain what it provides:
For many years companies acquired different platforms and technologies to achieve specific functions in their business, from finance to HR. These systems needed to talk to each other to share data, so middleware became the glue that allowed them to connect. Before the business knew it, they were paying for support and maintenance on each of these systems and the middleware. As needs in the business changed, departments decided to create their own custom solutions to address special needs rather than try to make the aging solutions flexible enough to meet them. Before they knew it, they were paying to support and maintain the legacy systems, the middleware, and the custom solutions.

With new laws like Sarbanes-Oxley, companies need to have better information available for reporting purposes. A single view requires that they capture data from all of their systems. In addition, CIOs are now being pressured to lower costs and improve customer service. One obvious solution is to eliminate redundant systems, expensive support and maintenance contracts, and high-cost legacy solutions which require specialists to support. Moving to a new platform allows for this, but there needs to be a transition, and there are no turnkey solutions that can replicate what the business does.

To address the need to move information around, they go with SOA, because it allows information access through a generic entity. If I ask for AllEmployees from the service bus, it gets them whether they come from 15 HR systems or 1. When the 15 HR systems become 1 system, the call and the result do not change, only how it was done behind the scenes. The service bus concept standardizes the flow of information and allows IT managers to conduct transitions behind the bus with no long-term effect on upstream users.
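The AllEmployees example might look something like this behind the bus (hypothetical names): the caller's contract never changes while the systems behind it are consolidated from 15 down to 1.

    import java.util.ArrayList;
    import java.util.List;

    interface HrSystem {                  // one adapter per HR system behind the bus
        List<String> fetchEmployees();
    }

    class EmployeeService {               // the single "AllEmployees" endpoint callers see
        private final List<HrSystem> systems;

        EmployeeService(List<HrSystem> systems) {
            this.systems = systems;
        }

        // Same call, same result shape, whether 15 HR systems or 1 sit behind it.
        List<String> allEmployees() {
            List<String> all = new ArrayList<>();
            for (HrSystem s : systems) {
                all.addAll(s.fetchEmployees());
            }
            return all;
        }
    }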

How do I explain APIs to a non-technical audience?

A little background: I have the opportunity to present the idea of a public API to the management of a large car sharing company in my country. Currently, the only options to book a car are a very slow web interface and a hard to reach call center. So I'm excited about the possibility of writing my own search interface, integrating this functionality into other products and applications etc.
The problem: Due to the special nature of this company, I'll first have to get my proposal through a commission, which is entirely made up of non-technical and rather conservative people. How do I explain the concept of an API to such an audience?
Don't explain technical details like an API. State the business problem and your solution to the business problem - and how it would impact their bottom line.
For years, sales people have based pitches on two things: features and benefits. Each feature should have an associated benefit (to somebody, and preferably everybody). In this case, you're apparently planning to break what's basically a monolithic application into (at least) two pieces: a front end and a back end. The obvious benefits are that 1) each piece works independently, so development of each is easier; 2) different people can develop the different pieces; and 3) it's easier to increase capacity by simply buying more hardware.
Though you haven't said it explicitly, I'd guess one intent is to publicly document the API. This allows outside developers to take over (at least some) development of the front-end code (often for free, no less) while you retain control over the parts that are crucial to your business process. You can more easily [allow others to] add new front-end code to address new market segments while retaining security/certainty that the underlying business process won't be disturbed in the process.
HardCode's answer is correct in that you should really should concentrate on the business issues and benefits.
However, if you really feel you need to explain something, you could use the medical receptionist analogy.
A medical practice has its own patient database and appointment scheduling system used by its admin and medical staff. This might be pretty complex internally.
However, when you want to book an appointment as a patient, you talk to the receptionist with a simple set of commands - 'I want an appointment', 'I want to see doctor X', 'I feel sick' - and they interface with their systems, based on your medical history, the symptoms presented and resource availability, to give you an appointment - '4:30pm tomorrow' - in simple language.
So, roughly speaking using the receptionist is analogous to an exterior program using an API. It allows you to interact with a complex system to get the information you need without having to deal with the internal complexities.
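If anyone on the commission ever asks what the receptionist's "vocabulary" actually looks like, an API is little more than a published list of requests like this (a hypothetical sketch, not the booking system's real interface):

    import java.time.LocalDateTime;

    // The public vocabulary of the booking system: what outsiders may ask for,
    // saying nothing about how the scheduling works internally.
    interface AppointmentApi {
        LocalDateTime nextAvailableSlot(String doctorId);
        String bookAppointment(String patientId, String doctorId, LocalDateTime slot);
        void cancelAppointment(String bookingReference);
    }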
They'll be able to understand the benefit of having a mobile phone app that can interact with the booking system, and an API is a necessary component of that. The second benefit of the API being public is that you won't necessarily have to write that app, someone else will be able to (whether or not they actually do is another question, of course).
You should explain which use cases will be improved by your project proposal, and what benefits they can expect, like customer satisfaction.

How is web programming different from back-end programming?

I have worked on single threaded business logic/back-end programming for most of my career. I now wish to learn web programming but would like to know how web programming is different from non-GUI programming (e.g. writing an API or a file processing application). I am not talking about the GUI design aspects (someone has already asked that question here) but more about programming complexity.
On the few occasions when I have worked on a web application, I felt that web applications are relatively more non-deterministic and unpredictable (for example, due to the event-driven, multi-threaded model of web applications, there are several permutations and combinations of events and actions one needs to take care of).
What would you say are some of the basic features of web programming that makes it different from non-GUI applications? What are the pitfalls/mistakes a back-end developer might commit while working on web applications?
EDIT
My definition of back-end programming means non-GUI applications like an API or a file processing batch application that parses a large data file, reads the records, does a lot of number crunching calculations on the data and spews out the results into another file or database. Another example could be a library of date and time utilities.
The biggest challenge with web programming is dealing with state. HTTP is a stateless protocol. This can make maintaining state more challenging than in a desktop application. Web applications tend to have a different life cycle due to this. Each web development platform deals with this somewhat differently, but they all need to deal with it in some way.
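For example, in the Java servlet world the platform papers over HTTP's statelessness with a session object that the container re-associates with each request via a cookie. A minimal sketch:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class CounterServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Each request is a fresh, stateless HTTP exchange; the container
            // restores "our" state by matching the session cookie.
            HttpSession session = req.getSession(true);
            Integer visits = (Integer) session.getAttribute("visits");
            visits = (visits == null) ? 1 : visits + 1;
            session.setAttribute("visits", visits);
            resp.getWriter().println("You have loaded this page " + visits + " times");
        }
    }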
Web applications generally feel like single threaded applications, as you - the application developer - rarely create threads of your own. If anything, it's actually a lot easier, because the stateless nature of the web transactions means that you have to load the data for the page each time from the database. Therefore, you don't have to worry about concurrency, since 'whatever is there' is usually good enough.
The biggest problem with Web development is all of the background knowledge that you have to accumulate over time. How do you lay out web pages? How do you style things with CSS? How do you get parameters from the query string? How do you validate a field value in JavaScript? All of those things are actually really easy to learn, but there's just so many of them that it can be a real pain.
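For instance, getting a parameter from the query string is a couple of lines in a Java servlet (a sketch; the parameter name is made up):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class SearchServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // /search?q=esb  ->  q is "esb"; always assume it may be absent
            String q = req.getParameter("q");
            if (q == null || q.isEmpty()) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing q");
                return;
            }
            resp.getWriter().println("Searching for: " + q);
        }
    }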
The biggest pitfall I've witnessed application developers make when moving to the web is not considering the costs of their code. Either they abuse MySQL to the point of bogging the RDBMS down, they write code that uses too much memory, or they make front-end pages that are too big to fit down a dialup/cellphone or low-end broadband/DSL pipe.
Sometimes writing a heavy-duty page can't be avoided, but you can attempt to cache as much as possible, and when writing a page that will be hit a lot you should profile and optimize queries before they go out the door.
It's not that these people are stupid; they just lack the experience and awareness that they need to play nice and write code that's somewhat lean.
Back-end programming is infinitely easier than web programming. (You have been warned!) Web programming is the easiest to show off to everybody.
Most web sites have a back end component as well. Typical structure will be something like:
UI - html/css/javascript
Controller - if using MVC
Business Logic/Services - this is backend
Database - this is also backend
So building web sites will still mean a lot of back end work. In regards to the UI, the main difference is that you will need to have a good eye for design and layout to do it well. The html/css technology is pretty simple in itself.
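A minimal sketch of those layers in Java (hypothetical names), just to show where the back-end work lives even in a web site:

    import java.util.List;

    // The UI (HTML/CSS/JavaScript) talks to this over HTTP.
    class ReportController {
        private final ReportService service = new ReportService();

        String handleList() {
            return String.join(", ", service.reportNames()); // render-ready text
        }
    }

    // Business logic/services - this is back end.
    class ReportService {
        private final ReportDao dao = new ReportDao();

        List<String> reportNames() {
            return dao.findAllNames();
        }
    }

    // Database access - this is also back end.
    class ReportDao {
        List<String> findAllNames() {
            return List.of("Sales", "Inventory"); // stub standing in for real SQL
        }
    }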
HTML was actually developed to deliver physics papers. You can still see it in some of the old meta tags. At any rate, the difference is that web programming is stateless and thick-client development is not.
As you have adeptly indicated, it's all driven by events. True, JavaScript has mucked up web development a bit by creating the illusion of a stateful environment, but in the end everything comes down to simple HTML.
It's never too late to start learning. I would say start by making some static HTML pages and move your way up to an MVC framework; I suggest the Microsoft MVC Framework. It's pretty fantastic. There are others you could use, like ASP.NET WebForms, but you won't learn anything by dragging and dropping things onto a designer ;).
Web & GUI applications interface with humans... back-end applications interface with services and databases. As such, your specifications need to include significant consideration of your users' mental model - making things behave as people expect them to. And doing that - understanding how users think - is not always easy or logical. You may have elegant algorithmic solutions that simply fail to engage, because people don't always think logically. Many times, quite elegant UIs are extremely twisted coding-wise, which is very contrary to system-to-system programming.
Depending on problem-space, much of this can be more art than science.
One consideration (amongst many) with web programming is that users won't just be stupid (not that they all are, but you always have to factor that in), they will sometimes (assume always) be downright malicious and nasty, and will do everything in their power to destroy your application, your database, your weekends, your sanity...
Be as paranoid as a very small nun at a penguin shoot. Do not trust your users.
Another consideration is that Back End programming as per your definition is easier to test.
Once you begin web programming you're at the mercy of the various browsers' different interpretations of the same code. Plus the user, with inputs of mouse and keyboard, has a variety of ways to break what you produce.
Web programming isn't back-end programming. It shows stuff on the front end, the web.
Are you defining it otherwise?
EDIT
Web programming pulls you into presenting data consistently, visually, to everyone. Back end coding means constructing that data, in the same way for presentation, but not presenting it.
Based on your definition of "back end programming," your question applies not only to web applications, but to any GUI application.
It kind of depends what kind of GUI application we're talking about. For example:
Internal business applications tend to involve lots of business process workflow logic, record keeping, and interoperability between separate systems. No fancy algorithms or number crunching are needed. Your audience is limited, so performance is not a big deal, but cross-platform compatibility is important, so these tend to be web applications. Your main concerns are making it easy to tie business systems together, and keeping the API layered to ensure that the GUI code does not have to deal with any of the business logic code.
Public web sites (such as this one) tend to involve less of a formal architecture, and more of a mentality of "just get this cool feature to work so we can get more visitors." Again, no number crunching or algorithms unless performance is an issue. Performance is more of an issue for massively popular sites like Slashdot or Google, so if you anticipate rapid growth it pays to design for scalability in advance.
Public e-commerce web sites are kind of like both of the above: features and performance are important, but equally important is the structured architecture underneath it that ties all of the commerce business systems together (purchasing, supplier, shopping cart, payment gateways, etc.)
For the actual GUI portion, the complexity of the application kind of determines how complex the GUI code will be. For highly complex, nested GUIs where your requirements change often, it's easy to fall into the trap of putting too much GUI stuff into one page. Soon the page exceeds most people's complexity threshold, making the page very difficult to maintain. It pays to think in advance how you can separate different portions of the GUI into separate components, and then tie them together. If you're new to GUI programming, read some articles on the Model-View-Controller (MVC) pattern.
For simple web sites, where most pages are fairly static, this issue doesn't come up so much because each individual page is easy to maintain.
Most web programming is done in the style popular in the early seventies, before Dijkstra's 'goto considered harmful' was well-known.

Is a process design really declarative programming?

I've heard from someone that they're using a business process automation tool (like WebLogic Integration) as a programming language (which sounds kind of stupid) to make things declarative. Then they put all the logic inside a process, every single if and while.
But isn't a process a step-by-step, how-to entity for reaching a target?
To me that makes a process completely imperative. What do you think?
Orchestration languages are in fact imperative scripting languages with conditionals, looping and other traditionally imperative constructs, typically expressed through a flowchart-based user interface. They certainly do not (in my experience) implement tail-recursive functional programming, backward chaining or any other paradigm that might reasonably be described as declarative in the generally accepted sense.
MS Workflow Foundation is advertised as having a rules engine, but this is fairly simplistic and doesn't really do forward chaining, except in a somewhat roundabout way. ILOG actually makes an adaptor for their rules engine specifically to drop it into MS workflow foundation.
Other workflow tools have better rule engines and a proper forward chaining system that could be viewed as declarative. However, once you get into the workflows themselves with looping and conditional branches you are most definitely in the territory of imperative programming.
However, some systems also implement a petri-net or state change based markup system for workflow, which might reasonably be described as declarative, but they still have an imperative mode of interaction with the underlying system. They still update variables and have side-effects.
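To make the distinction concrete, here is the same discount policy written both ways in Java (an illustrative toy, not any particular rules engine's API): the imperative version spells out the control flow, while the rule-like version only states conditions as data and leaves evaluation order to a (here trivial) engine.

    import java.util.List;
    import java.util.function.Predicate;

    public class RulesVsFlow {
        record Order(double total, boolean firstTime) {}
        record Rule(String name, Predicate<Order> when, double discount) {}

        // Imperative/workflow style: explicit control flow decides everything.
        static double imperativeDiscount(Order o) {
            if (o.firstTime()) return 0.10;
            if (o.total() > 1000) return 0.05;
            return 0.0;
        }

        // Declarative-ish style: rules are data; the tiny "engine" below decides
        // how and when to evaluate them (a real engine would do forward chaining).
        static final List<Rule> RULES = List.of(
            new Rule("first-time buyer", Order::firstTime, 0.10),
            new Rule("large order", o -> o.total() > 1000, 0.05)
        );

        static double ruleDiscount(Order o) {
            return RULES.stream()
                        .filter(r -> r.when().test(o))
                        .mapToDouble(Rule::discount)
                        .max()
                        .orElse(0.0);
        }
    }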
I have seen one or two applications (for example TOAD for data analysis) actually using MS Workflow Foundation as a scripting language. As such it allows you to add a scripting facility to the application that (at least for marketing purposes) doesn't require programming skill to use.
In practice, a tool designed for writing, editing and running SQL queries being fitted with a scripting framework for 'non-programmers' makes one wonder what audience it's really aimed at. As a scripting language, workflow modelling tools are fairly clumsy and offer very limited opportunities for abstraction; in practice a .Net based scripting language such as IronPython or Boo, particularly in conjunction with a decent templating mechanism, would be a very powerful addition to such a tool.
One point about graphical languages of this sort is that they do not scale well with complexity. A similar issue applies with ETL tools as well. I have seen a provisioning application (see below) that was done (ironically) with Crossworlds (now known as Websphere Integrator). Within a month of starting on the application it became obvious that the graphical workflow language was not going to scale with the complexity of the application and it was re-built, based on a custom rules engine written in Java and a fairly large body of bespoke java code.
This type of issue is not uncommon with EAI and Orchestration systems and is one of the reasons that SOA is hard to implement in practice. What you are doing is actually pushing business logic into a very clumsy programming environment that is not being officially acknowledged as such. This will work in a simple case but is hard to make work on a complex system - this is sort of a guilty secret in SOA circles.
Coda:
A provisioning application is a system that takes plans for telecommunication services contracts (in this case for a mobile phone network) and pushes configuration information, based on rules, out to various switches, billing applications and other applications. They tend to be fairly complex. When you buy a mobile phone plan with so many minutes and so many texts per month, a provisioning application is pushing out configuration information to the rest of the system about your access and billing rules.
It is definitely not what people usually mean when they talk about declarative programming, even if it some sense can be called declarative.

Flow Based Programming

I have been doing a little reading on Flow Based Programming over the last few days. There is a wiki which provides further detail, and Wikipedia has a good overview of it too. My first thought was, "Great, another proponent of lego-land pretend programming" - a concept harking back to the late 80's. But as I read more, I must admit I have become intrigued.
Have you used FBP for a real project?
What is your opinion of FBP?
Does FBP have a future?
In some senses, it seems like the holy grail of reuse that our industry has pursued since the advent of procedural languages.
1. Have you used FBP for a real project?
We've designed and implemented a DF server for our automation project (dispatcher, component interface, a bunch of components, DF language, DF compiler, UI). It is written in bare C++ and runs on several Unix-like systems (Linux x86, MIPS, AVR32 etc., Mac OS X). It lacks several features, e.g. sophisticated flow control and complex thread control (there is only a not-too-advanced component for it), so it is just a prototype, even though it works. We're now working on a full-featured server. We learnt a lot implementing and using the prototype.
Also, we'll make a visual editor some day.
2. What is your opinion of FBP?
2.1. First of all, dataflow programming is ultimate fun
When I met dataflow programming, I felt like I did 20 years ago, when I first met programming. Although DF programming differs from procedural/OOP programming, it's still just a kind of programming. There are lots of things to discover, even sooo simple ones! It's very funny when, as an experienced programmer, you meet a DF problem which is a very, very basic thing but was completely unknown to you before. So if you jump into DF programming, you will feel like a rookie programmer who has just met the "cycle" or the "condition".
2.2. It can be used only for specific architectures
It's just a hammer, and hammers are for hammering nails. DF is not suitable for UIs, web servers and so on.
2.3. Dataflow architecture is optimal for some problems
A dataflow framework can do magic things. It can parallelize procedures which were not originally designed for parallelization. Components are single-threaded, but when they're organized into a DF graph, they become multi-threaded.
Example: did you know that make is a DF system? Try make -j (see the man page for what -j does). If you have a multi-core machine, compile your project with and without -j and compare the times.
2.4. Optimal split of the problem
If you're writing a program, you often split up the problem into smaller sub-problems. There are usual split points for well-known sub-problems, which you don't need to implement; you just use the existing solutions, like SQL for DB, or OpenGL for graphics/animation, etc.
DF architecture splits your problem in a very interesting way:
the dataflow framework, which provides the architecture (just use an existing one),
the components: the programmer creates components; the components are simple, well-separated units - it's easy to make components;
the configuration: a.k.a. dataflow programming: the configurator puts the dataflow graph (program) together using components provided by the programmer.
If your component set is well designed, the configurator can build systems the programmer has never even dreamed about. The configurator can implement new features without disturbing the programmer. Customers are happy because they get a personalised solution. The software manufacturer is also happy, because they don't need to maintain several customer-specific branches of the software, just customer-specific configurations.
2.5. Speed
If the system is built on native components, the DF program is fast. The only overhead compared to a simple OOP program is the message dispatching between components, and even that is minimal.
3. Does FBP have a future?
Yes, sure.
The main reason is that it can solve massive multiprocessing issues without introducing brand-new, strange software architectures or weird languages. Dataflow programming is easy, and I mean both component programming and dataflow configuration building. (Even writing a dataflow framework is not rocket science.)
Also, it's very economical. If you have a good set of components, you only need to put the lego bricks together. A DF program is easy to maintain. Building the DF config requires no experienced programmer, just a system integrator.
I would be happy if native systems spread, with the door open for creating custom components. There should also be a standard DF language, so that it could be used with platform-independent visual editors and several DF servers.
Interesting discussion! It occurred to me yesterday that part of the confusion may be due to the fact that many different notations use directed arcs, but use them to mean different things. In FBP, the lines represent bounded buffers, across which travel streams of data packets. Since the components are typically long-running processes, streams may comprise huge numbers of packets, and FBP applications can run for very long periods - perhaps even "perpetually" (see a 2007 paper on a project called Eon, mostly by folks at UMass Amherst). Since a send to a bounded buffer suspends when the buffer is (temporarily) full, and a receive suspends when it is (temporarily) empty, indefinite amounts of data can be processed using finite resources.
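For anyone who wants to see the bounded-buffer mechanism in miniature, here is a two-process sketch using Java's ArrayBlockingQueue. It shows only the mechanism - real FBP adds named ports, information packets and a separate network definition on top of it.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BoundedBufferDemo {
        public static void main(String[] args) throws InterruptedException {
            // The "pipe": put() suspends when it is full, take() when it is empty,
            // so indefinite amounts of data flow through finite memory.
            BlockingQueue<String> pipe = new ArrayBlockingQueue<>(10);
            final String EOS = "<end-of-stream>"; // sentinel closing the stream

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 1000; i++) {
                        pipe.put("packet-" + i); // suspends if the consumer lags
                    }
                    pipe.put(EOS);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    String p;
                    while (!(p = pipe.take()).equals(EOS)) {
                        System.out.println("processing " + p);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }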
By comparison, the E in Grafcet comes from Etapes, meaning "steps", which is a rather different concept. In this kind of model (and there are a number of these out there), the data flowing between steps is either limited to what can be held in high-speed memory at one time, or has to be held on disk. FBP also supports loops in the network, which is hard to do in step-based systems - see for example http://www.jpaulmorrison.com/cgi-bin/wiki.pl?BrokerageApplication - notice that this application used both MQSeries and CORBA in a natural way. Furthermore, FBP is natively parallel, so it lends itself to programming of grid networks, multicore machines, and a number of the directions of modern computing. One last comment: in the literature I have found many related projects, but few of them have all the characteristics of FBP. A list that I have amassed over the years (a number of them closer than Grafcet) can be found in http://www.jpaulmorrison.com/cgi-bin/wiki.pl?FlowLikeProjects .
I do have to disagree with the comment about FBP being just a means of implementing FSMs: I think FSMs are neat, and I believe they have a definite role in building applications, but the core concept of FBP is of multiple component processes running asynchronously, communicating by means of streams of data chunks which run across what are now called bounded buffers. Yes, definitely FSMs are one way of building component processes, and in fact there is a whole chapter in my book on FBP devoted to this idea, and the related one of PDAs (1) - http://www.jpaulmorrison.com/fbp/compil.htm - but in my opinion an FSM implementing a non-trivial FBP network would be impossibly complex. As an example, the diagram shown in the original post [image omitted here] is about 1/3 of a single batch job running on a mainframe. Every one of those blocks runs asynchronously with all the others. By the way, I would be very interested in hearing more answers to the questions in the first post!
(1) Push-down automata: http://en.wikipedia.org/wiki/Pushdown_automaton
Whenever I hear the term flow based programming I think of LabVIEW, conceptually, i.e. component processes whose scheduling is driven primarily by changes to their input data. This really IS lego programming, in the sense that the LabVIEW platform was used for the latest crop of Mindstorms products. However, I disagree that this makes it a less useful programming model.
For industrial systems, which typically involve data collection, control, and automation, it fits very well. What is any control system if not data in transformed to data out? I.e., what component in your control scheme would you not prefer to represent as a black box in a bigger picture, if you could do so? To achieve that level of architectural clarity using other methodologies you might have to draw a data-domain class diagram, then a problem-domain run-time class relationship, then on top of that a use-case diagram, and flip back and forth between them. With flow-driven systems you have the luxury of being able to collapse a lot of this information together accurately enough that you can realistically design a system visually once the components are built and defined.
One question I never had to ask when looking at an application written in LabVIEW is "What piece of code set this value?", as it was inherent and easy to trace backwards from the data. Mistakes like multiple unintended writers were also impossible to create by accident.
If only that was true of code written in a more typically procedural fashion!
1) I built a small FBP framework for an anomaly detection project, and it turned out to have been a great idea.
You can also have a look at some of the KNIME videos, which give a good idea of what a flow-based framework feels like when it is put together by a great team. Admittedly, it is batch-based and not created for continuous operation.
By far the best example of flow based programming, however, is UNIX pipes, one of the oldest and most overlooked FBP frameworks. I don't think I have to elaborate on the power of *nix pipes...
2) FBP is a very powerful tool for a large set of problems. The intrinsic parallelism is a great advantage, and any FBP framework can be made completely network transparent by using adapter modules. Smart frameworks are also absurdly fault tolerant, and able to dynamically reload crashed modules when necessary. The conceptual simplicity also allows cleaner communication with everybody involved in a project, and much cleaner code.
3) Absolutely! Pipes are here to stay, and are one of the most powerful features of Unix. The advantages inherent in an FBP framework compared to a static program are many, and they trivialise change, to the point where some frameworks can be reconfigured while running with no special measures.
FBP FTW! ;-)
In automotive development there is a language-agnostic messaging protocol which is part of the MOST specification (Media Oriented Systems Transport); it was designed for communication between components over a network or within the same device. Systems usually have both a real and a visualized message bus - therefore you effectively have a form of flow based programming.
That was what made the light bulb go on for me several years ago and brought me here. It really is a fantastic way to work and so much more fun than conventional programming. The message catalog forms the central specification and point of reference. It works well for both developers and management, i.e. management are able to browse the message catalog instead of looking at source.
With integrated logging also referencing the catalog to produce intelligible analysis, things can get really productive. I have real-world experience of developing commercial products in this way. I am interested in taking things further, particularly with regards to tools and IDEs. Unfortunately I think many people within the automotive sector have missed the point about how great this is and have failed to build on it. They are now distracted by other fads and have failed to realize that there was far more to MOST development than the physical bus.
I've used Spring Web Flow extensively in Java Web applications to model (typically) application processes, which tend to be complex wizard-like affairs with lots of conditional logic as to which pages to display. It's incredibly powerful. A new product was added and I managed to recut the existing pieces into a completely new application process in an hour or two (adding a couple of new views/states).
I also looked into using OS Workflow to model business processes but that project got canned for various reasons.
In the Microsoft world you have Windows Workflow Foundation ("WWF"), which is becoming more popular, particularly in conjunction with Sharepoint.
FBP is just a means of implementing a finite state machine. It's nothing new.
I realize that it is not exactly the same thing, but this model has been used for years in PLC programming. ISO calls it Sequential Flow Chart, but many people call it Grafcet after a popular implementation. It offers parallel processing and defines transitions between states.
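A toy flavour of the step/transition idea in Java (hypothetical, and much simpler than real Grafcet/Sequential Function Charts, which also allow parallel branches):

    import java.util.Map;

    public class StepMachine {
        enum Step { FILL, HEAT, DRAIN, DONE }

        // Each step declares its successor; a transition fires when its
        // condition (stubbed below) becomes true.
        static final Map<Step, Step> NEXT = Map.of(
            Step.FILL, Step.HEAT,
            Step.HEAT, Step.DRAIN,
            Step.DRAIN, Step.DONE
        );

        public static void main(String[] args) {
            Step current = Step.FILL;
            while (current != Step.DONE) {
                System.out.println("active step: " + current);
                if (transitionCondition(current)) {
                    current = NEXT.get(current);
                }
            }
        }

        static boolean transitionCondition(Step s) {
            return true; // stub: a real controller would poll sensors here
        }
    }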
It's being used in the Business Intelligence world these days to mash up and process data. Data processing steps like ETL, querying, joining, and producing reports can be done by the end user. I'm a developer on an open system, ComposableAnalytics.com. In CA, flow-based apps can be shared and executed via the browser.
This is what MQ Series, MSMQ and JMS are for.
This is the cornerstone of Web Services and Enterprise Service Bus implementations.
Products like TIBCO and Sun's JCAPS are basically flow-based without using this particular buzz-word.
Most of the work of the application is done with small modules that pass messages through a processing network.