Is "coupling" related only to code, or can the term be applied to software components and architecture? - separation-of-concerns

For example, when discussing a build or deploy process and making sure it is independent of the IDE: is this "coupling", or is that considered separation of concerns, or something completely different? The general concept is to introduce the fewest possible variables into a process or architecture, so that when a failure occurs, the difficulty of identifying the possible points of failure is reduced significantly. Is there another definition for that?

Something completely different.
Coupling is code.
Independent tools are just independent tools.
Microsoft has led me to believe that independent tools are a bad idea. They tell me that one vendor's integrated tool suite is a good thing.

I would characterise coupling as being something which relates to runtime system stability or system change.
When considering stability, coupling comes into the picture if the failure of one component causes the failure of other components. For example if two software components are communicating directly over a TCP connection, then the failure of one component means the other cannot do its work. The whole system is down. If I decouple by allowing the two components to communicate via a message queue (for example) then each component might continue to work independently in the absence of the other (assuming this makes sense in the application).
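To make the runtime example concrete, here is a minimal sketch in Java, using an in-process BlockingQueue to stand in for a real message queue (the producer/consumer names are illustrative, not from any particular system). Neither side holds a reference to the other, so one can fail or restart while messages simply wait:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class QueueDecoupling {
        public static void main(String[] args) throws InterruptedException {
            // Both sides know only about the queue, never about each other.
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();

            Thread producer = new Thread(() -> {
                try {
                    queue.put("work-item-1");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    // If the producer has died, items already queued
                    // can still be processed.
                    System.out.println("processing " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }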
When considering system change, coupling comes into the picture when a code change in one module means I have to go and change code in a large number of other modules.
The example you give with build/deploy tools is a form of coupling, but not really what people have in mind when they consider architectural issues such as runtime and code coupling.

Understanding the motivational poster for the Interface Segregation Principle

Why does the "motivational poster" for the Interface Segregation Principle on this page say "You want me to plug this in, where?"
The Interface Segregation Principle says
Clients should not be forced to depend upon interfaces that they don't use.
I am not sure how the image and the tagline connect to this principle. It seems more like a motto, albeit a vague one, for the Dependency Inversion Principle, where high-level objects should not depend on low-level implementations (the plug point does not need to know the details of the device being plugged in).
I am aware that these sayings are tongue in cheek, but given how cutely the other sayings illustrate their principles, I don't want to misunderstand this particular poster for the ISP.
If important process A has a reference to not so important process B, you have now made process A dependent upon process B even though process A doesn't really need process B for anything.
"Clients should not be forced to depend upon interfaces they don't use" should read - Clients, classes, modules, systems, people should not know about, have references to, have dependencies on other things that are not needed to meet their functional requirement.
So let's say that you have a high level important thing like a switch/button to launch some nukes. If you plug a coffeemaker into it, then you run the chance of blowing everything up because you made your important missile launcher depend upon whether your coffeemaker is working properly.
Here's a real life example:
A Web client communicates with a web server.
The web server communicates with the database.
If the web client were able to access the database directly, then what would happen if the client goes haywire and sends 1000 requests per second?
The web server in this case acts as the Interface by mediating the communication between the client and the database. Both the client and the database depend upon, know about the web server but not each other. Because the client doesn't know about(is not plugged into) the database, the database can't be screwed over by the client. Why should the database be dependent upon the client? It works just fine on its own.
In the poster, that button looks pretty important. So as a SOLID programmer, you need to say NO when someone asks you to plug a coffeemaker or anything that is not required into a missile launcher.
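To make that concrete, here is a minimal Java sketch (the names are invented to match the poster's joke): the launcher depends only on the one narrow interface it actually uses, so nothing coffee-related can be plugged into it.

    // The one method the launcher actually needs.
    interface Trigger {
        void trigger();
    }

    // A separate concern with its own narrow interface.
    interface CoffeeMaker {
        void brewCoffee();
    }

    class MissileLauncher {
        private final Trigger button;

        MissileLauncher(Trigger button) {
            this.button = button;
        }

        // Depends only on Trigger; a CoffeeMaker simply doesn't fit here.
        void launch() {
            button.trigger();
        }
    }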
The poster seems to imply that fat interfaces are more difficult to comprehend and use, which is true, but isn't the point of Interface Segregation.
The ISP is concerned with one aspect of managing coupling and cohesion in an application. Small, cohesive APIs avoid coupling different clients to each other by only serving one client each. In that respect, Interface Segregation overlaps with Single Responsibility, so I find the giant Swiss Army Knife poster for Single Responsibility failure makes equal sense in portraying Interface Segregation failure.
A user who is coupled to some tools they don't need pays the cost for those tools anyway, particularly in the effect those unnecessary tools have on overall design. The useful tools must accommodate the useless ones, and this accommodation affects the user experience.
In software terms, this can be as simple as the useless part of an API pulling in third-party libraries the user doesn't need.

At what point should architecture become layered?

Obviously, "Hello World" doesn't require a separated, modular front-end and back-end. But any sort of Enterprise-grade project does.
Assuming some sort of spectrum between these points, at which stage should an application become (conceptually, or at a design level) multi-layered? When a database or some external resource is introduced? When you find you're anticipating spaghetti code in your methods/functions?
When a database or some external resource is introduced.
but also:
always (except for the most trivial of apps) separate AT LEAST the presentation tier and the application tier; see the sketch below
see:
http://en.wikipedia.org/wiki/Multitier_architecture
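As a minimal illustration of that presentation/application split (all names invented), the presentation tier below talks to the application tier only through a small interface, so either side can change independently:

    // Application-tier boundary: the only thing the presenter knows.
    interface GreetingService {
        String greetingFor(String name);
    }

    class DefaultGreetingService implements GreetingService {
        public String greetingFor(String name) {
            return "Hello, " + name + "!";   // business logic lives here
        }
    }

    // Presentation tier: renders whatever the application tier returns.
    class ConsolePresenter {
        private final GreetingService service;

        ConsolePresenter(GreetingService service) {
            this.service = service;
        }

        void show(String name) {
            System.out.println(service.greetingFor(name));
        }
    }

    public class TwoTierDemo {
        public static void main(String[] args) {
            new ConsolePresenter(new DefaultGreetingService()).show("world");
        }
    }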
Layers are a means to keep a design loosely coupled and highly cohesive.
When you start to have a few classes (either implemented or just sketched with UML), they can be grouped logically into layers - or, more generally, packages or modules. This is the art of separating concerns.
The sooner the better: if you do not start layering early enough, you risk never doing it, as the effort required may become too great.
Here are some criteria for when to layer:
Any time you anticipate the need to replace one part of the system with a different part.
Any time you find yourself needing to divide work amongst parallel teams.
There is no real answer to this question. It depends largely on your application's needs, and numerous other factors. I'd suggest reading some books on design patterns and enterprise application architecture. These two are invaluable:
Design Patterns: Elements of Reusable Object-Oriented Software
Patterns of Enterprise Application Architecture
Some other books that I highly recommend are:
The Pragmatic Programmer: From Journeyman to Master
Refactoring: Improving the Design of Existing Code
No matter your skill level, reading these will really open your eyes to a world of possibilities.
I'd say that in most cases, dealing with multiple distinct levels of abstraction in the concepts your code handles is a strong signal to mirror them with levels of abstraction in your implementation.
This does not override the scenarios that others have highlighted already though.
I think once you ask yourself "hmm should I layer this" the answer is yes.
I've worked on too many projects that probably started off as proofs of concept/prototypes and ended up being full projects used in production, which are horribly written and just reek of "get it done quick, we'll fix it later." Trust me, you won't fix it later.
The Pragmatic Programmer lists this as the Broken Window Theory.
Try and always do it right from the start. Separate your concerns. Build it with modularity in mind.
And of course try and think of the poor maintenance programmer who might take over when you're done!
Thinking of it in terms of layers is a little limiting. It's what you see in whitepapers about a product, but it's not how products really work. They have "boxes" that depend on each other in various ways, and you can make it look like they fit into layers but you can do this in several different configurations, depending on what information you're leaving out of the diagram.
And in a really well-designed application, the boxes get very small. They are down to the level of individual interfaces and classes.
This is important because whenever you change a line of code, you need to have some understanding of the impact your change will have, which means you have to understand exactly what the code currently does, what its responsibilities are, which means it has to be a small chunk that has a single responsibility, implementing an interface that doesn't cause clients to be dependent on things they don't need (the S and the I of SOLID).
You may find that your application looks like it has two or three simple layers if you narrow your eyes, but it may not. That isn't really a problem. Of course, a disastrously badly designed application can look like it has tiers if you squint as hard as you can. So those "high level" diagrams of an "architecture" can hide a multitude of sins.
My generic rule of thumb is to at least separate the problem into a model and a view layer, and to throw in a controller if there is a possibility of more than one way of handling the model or piping data to the view.
(Or as the first answer, at least the presentation tier and the application tier).
Loose coupling is all about minimising dependencies, so I would say 'layer' when a dependency is introduced, i.e. a database, a third-party application, etc.
Although 'layer' is probably the wrong term these days. Most of the time I use Dependency Injection (DI) through an Inversion of Control container such as Castle Windsor. This means that I can code on one part of my system without worrying about the rest. It has the side effect of ensuring loose coupling.
I would recommend DI as a general programming principle all of the time so that you have the choice on how to 'layer' your application later.
Give it a look.
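A minimal constructor-injection sketch, with no container involved (the class names are illustrative); an IoC container like Castle Windsor essentially automates this wiring:

    interface OrderStore {
        void save(String order);
    }

    class InMemoryOrderStore implements OrderStore {
        public void save(String order) {
            System.out.println("saved " + order);
        }
    }

    class OrderService {
        private final OrderStore store;

        // The dependency is injected, so OrderService never names a
        // concrete store and the storage "layer" can be swapped freely.
        OrderService(OrderStore store) {
            this.store = store;
        }

        void placeOrder(String order) {
            store.save(order);
        }
    }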

Everything is a flow?

Some of the recent web projects I worked on use a flow engine as the central abstraction in the presentation and/or (more or less the) business layer. Reflecting on my experiences, I can honestly say that I am not a fan of the flow-centric approach. On the contrary, even. I see the same symptoms pop up in projects that use flows as the central abstraction.
Everything is a flow. You don't just start an application, no, you "enter the main flow" even if it is just to show a menu with a huge dispatcher behind it. I am not against flows as such. Some use cases keep popping up everywhere and need to be included at various points in other use cases (LookupCustomer, ...), but for flow-centric people everything is a flow, even things that are... not flows.
Fragmentation. Flow-based applications tend to have many pieces of logic (actions, commands, fragments of code to prepare the view...) dispersed throughout the code. Mapping in and out of these actions adds overhead, is tedious and bloats the code. Although it is easy to follow the abstract flow, actually figuring out what is happening inside these little (or big) chunks of code is another thing. While every style of application allows people to write bad and inconsistent code, flow-centric applications make it particularly easy to do so.
Config files. Most applications use some XML format to declare flows and the actions that accompany state changes. The language in which the application is written (say Java, C#, Ruby, ...) is probably far richer and more expressive than the XML format will ever be. Why bother?
Flows break encapsulation. If you give me a component that has certain embedded flow logic, then the flow should be part of the component and should not be an external abstraction. In other words: the flow is part of the component and the component is self-contained. It is a detail. Sure, it can be parameterized and such, but a component should "just work". People writing a Swing, GWT, or whatever fat or rich interface application don't bother with explicit flow abstractions. Why should my web application have one? Give me the flow diagram of GMail.
(Edit) Flows are procedural. If you look at "rich" patterns like MVC with events and everything, flows really pale in comparison. You are using a modern and expressive language to implement your application, right? So you can do better than the rigid "do this, then that, and that, and ..." way from the time when punch cards and assembler were in fashion.
Examples of frameworks that promote flow-centric development are Struts, BTT, Spring Webflow, and JSF. I've also come across homegrown flow engines built on top of ordinary servlets.
This is also an interesting article: http://chillenious.wordpress.com/2006/07/16/on-page-navigation/
Do you (still) think a flow-centric approach for (the front-end of) a web-application is a good idea?
In general, flows seem to be an unnecessarily enterprisey approach to what should be a relatively simple problem: we would like to ensure that users take one of several particular paths through our application. What's more instructive and insightful is to examine why we need this path to occur. Is it because...
... we don't want them to interact with our application except in rigidly predefined ways? Then we've limited the utility of our application, and we make our application much harder to change and use.
... we're worried about the ability of our application to handle unexpected input or deal with states we haven't anticipated if people stray off the beaten path? Then that says a lot about our technical choices for a validation framework.
... we can't envision a scenario other than the predefined ones under which someone would use the site? Then we are implicitly assuming that only we know how best to use it; we limit the ability of the user to control their interaction.
Notice how each of these underscores an issue intrinsic to the application's development and team members, and one that's not the fault of a user. So I support your general premise that flow-based approaches tend to have a number of issues.
The primary problem is that flows unnecessarily increase brittleness that is already better abstracted by other mechanisms. For example, to achieve a rule like "you need to fill out your order form before you confirm checkout", don't make a workflow; have a better CustomerOrder model that knows when it doesn't have all the information necessary to allow an OrderConfirmation. If you try to skip ahead, your model and controller should take care of failing validation on the next POST.
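A sketch of that idea in Java, reusing the answer's hypothetical CustomerOrder name: the model itself knows whether confirmation may proceed, so no flow engine has to police the order of steps.

    class CustomerOrder {
        private String shippingAddress;
        private String paymentMethod;

        void setShippingAddress(String address) { this.shippingAddress = address; }
        void setPaymentMethod(String method)   { this.paymentMethod = method; }

        // The rule lives in the model: trying to skip ahead simply
        // fails validation on the next request.
        boolean isReadyForConfirmation() {
            return shippingAddress != null && paymentMethod != null;
        }
    }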
Essentially, flows extract out disparate fragments of each participating controller and collect them into a new "flow controller" that's specific to each flow. That's not necessarily a bad idea, but it suggests that the original controllers may have been taking on too much responsibility to begin with if that sort of path was so easy to define separately. For example, if you previously had OrderConfirmation, CustomerOrder, and OrderCheckout controllers, and you're thinking about an Order flow to link all three together, what you should probably be thinking about is an Order controller instead.
I think defining flows is useful in a web application. In answer to your main points:
Everything is a Flow.
There is nothing intrinsically wrong with that; it's just a name for something. A flow can be short or long - I agree it's a bit weird that there is a "main" flow that starts everything, but it doesn't really cause any problems in practice.
Fragmentation
You have some valid points here, although I get the feeling that the greatest contributor to this is the design of the DSL. For example, Spring WebFlow v2 is a vast improvement over SWF v1 in terms of readability and understandability.
Config files
I strongly disagree with this point. I feel that XML is the best way to express this kind of code. If you think about it, managing controllers, views, state changes and actions is really just "configuration" rather than "code", and XML (in my opinion) is the best way to express configuration. Just think about the word "controller". All a controller does is direct and configure things - call services, return views and models, etc. There is no need for the richness or expressiveness of Java to define what is basically just the configuration of your web application.
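For reference, a flow declaration in this style looks roughly like the following Spring Web Flow 2 sketch (the state and view names are invented); whether this reads better than the equivalent Java is exactly the point under debate:

    <flow xmlns="http://www.springframework.org/schema/webflow">
        <view-state id="enterDetails" view="order/details">
            <transition on="next" to="confirm"/>
        </view-state>
        <view-state id="confirm" view="order/confirm">
            <transition on="submit" to="finished"/>
        </view-state>
        <end-state id="finished"/>
    </flow>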
Flows break encapsulation
GMail could be expressed as a series of flows. Think about the number of steps it takes to compose and send an email. Flows really just define the wiring of how the application works - sure, you could have a number of components that interact with each other, but the way you configure them to work together is essentially the flow you have defined in your application. Making this flow explicit in a separate DSL seems like a good idea to me, as fundamentally it is separate.
The first question that should be answered is whether a flow framework is really the best tool for your specific web application. I'm a fan of Spring Web Flow, myself, but I'll only use it if my web app can easily be broken down into flows, and if navigation should be tightly controlled. If the navigation is very loose, where you can get to almost any page from any other page, then SWF isn't the right tool for the job.
As you mention, there are other drawbacks to flow frameworks. They usually aren't RESTful, and thus not bookmarkable. If that conflicts with what you want for your application, then SWF probably isn't for you.
That said, SWF, and some of the other flow frameworks, offer some features that few other web frameworks deliver. This includes complete solutions for double-submit issues and browser back button and history handling. SWF's implementation of these features lends some additional security. Since the flow execution IDs for each page change as the application is used, you get immunity to forced browsing and some protection against cross-site request forgery.
The concept of flows is quite nice, in my opinion, since flows tend to mirror use cases. Scoping data to a flow or a conversation removes the responsibility for its cleanup from the developer, which I think is a very good thing. It's like the difference between manual memory management and garbage collection. Not only does it make less work for me, but it eliminates the possibility of introducing bugs should I forget to clean up attributes. One thing I hated about Struts was that I needed to duplicate my cleanup code in several actions to ensure correctness. It's much easier just to scope the data to the use case.
Flows also present a context for related actions and views. If I look at a struts-config or faces-config file, I can see all kinds of navigation rules or action mappings, but there is no immediate context for me to mentally group related items together. I have to manually trace through the configuration, and even then sometimes I get stuck. With Struts, I need to look at the specific web pages in order to figure out which actions can be invoked from a view.
With SWF, I can clearly see all the actions, views, and models related to a flow. With Eclipse plugins, I can see this as a state diagram. Even if you're not using Eclipse, it's very easy to translate a flow definition into a state diagram. These diagrams are useful to me, my project manager, and pretty much anyone who wants to understand the high level of how a use case is performed. In short, chunking related things together allows for easier understanding and a shallower learning curve. That's one reason why OOP is so popular. With web apps, the idea of chunking these elements together to form a use case just feels natural.
Everything is a flow
Everything really is a flow. Computer programs have always been, and will always be, a flow consisting of these steps:
input -> process -> output
The MVC design pattern in fact corresponds with this:
controller -> model -> view
Fragmentation
You're right. But I think this "issue" might be reduced by good support in IDEs.
Config files
There's no doubt XML is the best way to express configuration.
Flows break encapsulation
I would disagree with this. You can make black boxes using flows and then use these black boxes in other flows.
IMHO, web apps are best developed as independent modules rather than modules that are "bound by flow".
Since most web apps today are ajaxy apps, having independent modules on the page helps a lot.
Configurations can be handled by XML or JSON files.
Web 2.0 presents a serious challenge to the notion that "everything is a flow". And when the presentation tier is fully transposed to the client layer, we'll be back on the solid and familiar (from GUIs of yore) ground of event-based processing.
Flows arise because of the inherent mismatch between traditional application interaction and the way web applications actually work. Flows are merely a convenient way to describe what would be more traditionally modeled as a series of GUI dialogs (think wizard) in a way that is compatible with how web pages are delivered and interacted with. Imagine, if you will, that you were writing a traditional program, but every time the user ran the program you could only display a single dialog box, and when the user clicked "Ok" (or "Cancel", or "Next", or "Previous") your program would terminate. In that situation, how would you go about modeling the expected behavior of the program (to further complicate matters, assume many users are running the program at different times)? I think you would find you would arrive rather quickly at something similar to flows.
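A tiny sketch of that situation (all names invented): because each request is a fresh "run of the program", the wizard's position has to be stored explicitly and advanced one step per request, which is essentially what a flow definition encodes.

    enum Step { ENTER_DETAILS, CONFIRM, DONE }

    class CheckoutWizard {
        // In a real web app this field would live in the session,
        // surviving between one-request "runs" of the program.
        private Step step = Step.ENTER_DETAILS;

        // Each call plays the role of one request/response cycle.
        Step next() {
            switch (step) {
                case ENTER_DETAILS: step = Step.CONFIRM; break;
                case CONFIRM:       step = Step.DONE;    break;
                default:            break;
            }
            return step;
        }
    }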
I think perhaps what you're really asking is, "Why are most flow frameworks so easy to abuse?", which naturally leads to the followup question "What can be done to fix that?".

Flow Based Programming

I have been doing a little reading on Flow-Based Programming over the last few days. There is a wiki which provides further detail, and Wikipedia has a good overview of it too. My first thought was, "Great, another proponent of lego-land pretend programming" - a concept harking back to the late 80's. But as I read more, I must admit I have become intrigued.
Have you used FBP for a real project?
What is your opinion of FBP?
Does FBP have a future?
In some senses, it seems like the holy grail of reuse that our industry has pursued since the advent of procedural languages.
1. Have you used FBP for a real project?
We've designed and implemented a DF server for our automation project (dispatcher, component interface, a bunch of components, DF language, DF compiler, UI). It is written in bare C++ and runs on several Unix-like systems (Linux x86, MIPS, AVR32, etc., and Mac OS X). It lacks several features, e.g. sophisticated flow control and complex thread control (there is only a not-too-advanced component for that), so it is just a prototype, even though it works. We're now working on a full-featured server. We learnt a lot implementing and using the prototype.
Also, we'll make a visual editor some day.
2. What is your opinion of FBP?
2.1. First of all, dataflow programming is tremendous fun
When I met dataflow programming, I felt like I did 20 years ago, when I first met programming. Although DF programming differs from procedural/OOP programming, it's still just a kind of programming. There are lots of things to discover, even very simple ones! It's great fun when, as an experienced programmer, you meet a DF problem which is a very basic thing but was completely unknown to you before. So if you jump into DF programming, you will feel like a rookie programmer who first meets a "loop" or a "condition".
2.2. It can be used only for specific architectures
It's just a hammer, which is for hammering nails. DF is not suitable for UIs, web servers and so on.
2.3. Dataflow architecture is optimal for some problems
A dataflow framework can do magical things. It can parallelize procedures that were not originally designed for parallelization. Components are single-threaded, but when they're organized into a DF graph, they become multi-threaded.
Example: did you know that make is a DF system? Try make -j (see the man page for what -j does). If you have a multi-core machine, compile your project with and without -j, and compare the times.
2.4. Optimal split of the problem
If you're writing a program, you often split the problem up into smaller sub-problems. There are usual split points for well-known sub-problems, which you don't need to implement; just use the existing solutions, like SQL for databases or OpenGL for graphics/animation, etc.
DF architecture splits your problem in a very interesting way:
the dataflow framework, which provides the architecture (just use an existing one),
the components: the programmer creates components; the components are simple, well-separated units - it's easy to make components;
the configuration: a.k.a. dataflow programming: the configurator puts the dataflow graph (program) together using components provided by the programmer.
If your component set is well-designed, the configurator can build systems the programmer has never even dreamed about. The configurator can implement new features without disturbing the programmer. Customers are happy because they have a personalised solution. The software manufacturer is also happy, because they don't need to maintain several customer-specific branches of the software, just customer-specific configurations.
2.5. Speed
If the system is built on native components, the DF program is fast. The only overhead compared to a simple OOP program is the message dispatching between components, and even that is minimal.
3. Does FBP have a future?
Yes, sure.
The main reason is that it can solve massive multiprocessing issues without introducing brand-new, strange software architectures or weird languages. Dataflow programming is easy, and I mean both component programming and dataflow configuration building. (Even dataflow framework writing is not rocket science.)
Also, it's very economical. If you have a good set of components, you only need to put the Lego bricks together. A DF program is easy to maintain. DF config building requires no experienced programmer, just a system integrator.
I would be happy if native systems spread, with doors open for custom component creation. There should also be a standard DF language, so that it could be used with platform-independent visual editors and several DF servers.
Interesting discussion! It occurred to me yesterday that part of the confusion may be due to the fact that many different notations use directed arcs, but use them to mean different things. In FBP, the lines represent bounded buffers, across which travel streams of data packets. Since the components are typically long-running processes, streams may comprise huge numbers of packets, and FBP applications can run for very long periods - perhaps even "perpetually" (see a 2007 paper on a project called Eon, mostly by folks at UMass Amherst). Since a send to a bounded buffer suspends when the buffer is (temporarily) full, and a receive suspends when it is (temporarily) empty, indefinite amounts of data can be processed using finite resources.
By comparison, the E in Grafcet comes from Etapes, meaning "steps", which is a rather different concept. In this kind of model (and there are a number of these out there), the data flowing between steps is either limited to what can be held in high-speed memory at one time, or has to be held on disk. FBP also supports loops in the network, which is hard to do in step-based systems - see for example http://www.jpaulmorrison.com/cgi-bin/wiki.pl?BrokerageApplication - notice that this application used both MQSeries and CORBA in a natural way. Furthermore, FBP is natively parallel, so it lends itself to programming of grid networks, multicore machines, and a number of the directions of modern computing. One last comment: in the literature I have found many related projects, but few of them have all the characteristics of FBP. A list that I have amassed over the years (a number of them closer than Grafcet) can be found in http://www.jpaulmorrison.com/cgi-bin/wiki.pl?FlowLikeProjects .
I do have to disagree with the comment about FBP being just a means of implementing FSMs: I think FSMs are neat, and I believe they have a definite role in building applications, but the core concept of FBP is of multiple component processes running asynchronously, communicating by means of streams of data chunks which run across what are now called bounded buffers. Yes, definitely FSMs are one way of building component processes, and in fact there is a whole chapter in my book on FBP devoted to this idea, and the related one of PDAs (1) - http://www.jpaulmorrison.com/fbp/compil.htm - but in my opinion an FSM implementing a non-trivial FBP network would be impossibly complex. As an example, one diagram I have published (not reproduced here) is about 1/3 of a single batch job running on a mainframe. Every one of those blocks runs asynchronously with all the others. By the way, I would be very interested in hearing more answers to the questions in the first post!
(1) Push-down automata: http://en.wikipedia.org/wiki/Pushdown_automaton
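For a feel of the model described above, here is a minimal FBP-flavoured sketch in Java (all names invented): three asynchronous "component processes" joined by bounded buffers, so a full buffer suspends the sender and arbitrarily long streams are handled in finite memory.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class MiniFlowNetwork {
        static final String EOS = "<end>"; // sentinel packet closing the stream

        public static void main(String[] args) throws InterruptedException {
            // Bounded buffers: capacity 4, so the sender blocks when full.
            BlockingQueue<String> readToTransform = new ArrayBlockingQueue<>(4);
            BlockingQueue<String> transformToWrite = new ArrayBlockingQueue<>(4);

            Thread reader = new Thread(() -> {
                try {
                    for (int i = 1; i <= 10; i++) readToTransform.put("packet-" + i);
                    readToTransform.put(EOS);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            Thread transformer = new Thread(() -> {
                try {
                    String p;
                    while (!(p = readToTransform.take()).equals(EOS)) {
                        transformToWrite.put(p.toUpperCase());
                    }
                    transformToWrite.put(EOS);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            Thread writer = new Thread(() -> {
                try {
                    String p;
                    while (!(p = transformToWrite.take()).equals(EOS)) {
                        System.out.println(p);
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            reader.start();
            transformer.start();
            writer.start();
            reader.join();
            transformer.join();
            writer.join();
        }
    }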
Whenever I hear the term flow-based programming I think of LabVIEW, conceptually; i.e. component processes whose scheduling is driven primarily by changes to their input data. This really IS lego programming, in the sense that the LabVIEW platform was used for the latest crop of Mindstorms products. However, I disagree that this makes it a less useful programming model.
For industrial systems, which typically involve data collection, control, and automation, it fits very well. What is any control system if not data in transformed to data out? I.e., what component in your control scheme would you not prefer to represent as a black box in a bigger picture, if you could do so? To achieve that level of architectural clarity using other methodologies you might have to draw a data-domain class diagram, then a problem-domain runtime class relationship, then a use case diagram on top of that, and flip back and forth between them. With flow-driven systems you have the luxury of being able to collapse a lot of this information together accurately enough that you can realistically design a system visually once the components are built and defined.
One question I never had to ask when looking at an application written in LabVIEW is "What piece of code set this value?", as it was inherently easy to trace backwards from the data, and errors like multiple unintended writers were impossible to introduce by accident.
If only that was true of code written in a more typically procedural fashion!
1) I built a small FBP framework for an anomaly detection project, and it turned out to be a great idea.
You can also have a look at some of the KNIME videos, which give a good idea of what a flow-based framework feels like when it is put together by a great team. Admittedly, it is batch-based and not created for continuous operation.
By far the best example of flow-based programming, however, is UNIX pipes, one of the oldest and most overlooked FBP frameworks. I don't think I have to elaborate on the power of *nix pipes...
2) FBP is a very powerful tool for a large set of problems. The intrinsic parallelism is a great advantage, and any FBP framework can be made completely network transparent by using adapter modules. Smart frameworks are also absurdly fault tolerant, and able to dynamically reload crashed modules when necessary. The conceptual simplicity also allows cleaner communication with everybody involved in a project, and much cleaner code.
3) Absolutely! Pipes are here to stay, and are one of the most powerful features of Unix. The advantages inherent in an FBP framework compared to a static program are many, and they trivialise change, to the point where some frameworks can be reconfigured while running with no special measures.
FBP FTW! ;-)
In automotive development, there is a language-agnostic messaging protocol which is part of the MOST specification (Media Oriented Systems Transport); it was designed for communication between components over a network or within the same device. Systems usually have both a real and a visualized message bus - therefore you effectively have a form of flow-based programming.
That was what made the light bulb go on for me several years ago and brought me here. It really is a fantastic way to work and so much more fun than conventional programming. The message catalog forms the central specification and point of reference. It works well for both developers and management, i.e. management are able to browse the message catalog instead of looking at source.
With integrated logging also referencing the catalog to produce intelligible analysis, things can get really productive. I have real-world experience of developing commercial products in this way. I am interested in taking things further, particularly with regards to tools and IDEs. Unfortunately I think many people within the automotive sector have missed the point about how great this is and have failed to build on it. They are now distracted by other fads and have failed to realize that there was far more to MOST development than the physical bus.
I've used Spring Web Flow extensively in Java web applications to model (typically) application processes, which tend to be complex wizard-like affairs with lots of conditional logic about which pages to display. It's incredibly powerful. A new product was added, and I managed to recut the existing pieces into a completely new application process in an hour or two (adding a couple of new views/states).
I also looked into using OS Workflow to model business processes but that project got canned for various reasons.
In the Microsoft world you have Windows Workflow Foundation ("WF"), which is becoming more popular, particularly in conjunction with SharePoint.
FBP is just a means of implementing a finite state machine. It's nothing new.
I realize that it is not exactly the same thing, but this model has been used for years in PLC programming. IEC calls it Sequential Function Chart, but many people call it Grafcet after a popular implementation. It offers parallel processing and defines transitions between states.
It's being used in the Business Intelligence world these days to mash up and process data. Data processing steps like ETL, querying, joining, and producing reports can be done by the end user. I'm a developer on an open system - ComposableAnalytics.com. In CA, the flow-based apps can be shared and executed via the browser.
This is what MQ Series, MSMQ and JMS are for.
This is cornerstone of Web Services and Enterprise Service Bus implementations.
Products like TIBCO and Sun's JCAPS are basically flow-based without using this particular buzz-word.
Most of the work of the application is done with small modules that pass messages through a processing network.

Can we achieve 100% decoupling?

Can we achieve 100% decoupling between components of a system or between different systems that communicate with each other? I don't think it's possible. If two systems communicate with each other, then there must be some degree of coupling between them. Am I right?
Right. Even if you write to an interface or a protocol, you are committing to something. You can peacefully forget about 100% decoupling and rest assured that whatever you do, you cannot just snap out one component and slap another in its place without at least minor modifications anyway, unless you are committing to very basic protocols such as HTTP (and even then.)
We human beings, after all, just LOOVE standards. That's why we have... well, nevermind.
If components are 100% decoupled, it means that they don't communicate with each other.
Actually there are different types of coupling. But the general idea is that objects are not coupled if they don't depend on each other.
You can achieve that. Think of two components that communicate with each other through a network. One component can run on Windows while the other runs on Unix. Isn't that 100% decoupling?
At a minimum, the firewall on each side needs to allow traffic from the other machine. That alone can be considered a form of 'coupling', and therefore coupling is inherent to machines that communicate, at least at a certain level.
This is achievable by introducing a communication interface or protocol which both components understand and not passing data directly between the components.
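A sketch of what that looks like in practice (the message format and names are invented): the only thing the two sides share is the agreed wire format, never each other's types, so the protocol itself is the residual coupling.

    import java.nio.charset.StandardCharsets;

    public class ProtocolDemo {
        // The sender knows only how to encode the agreed message format.
        static byte[] encode(String command, String payload) {
            return (command + "|" + payload).getBytes(StandardCharsets.UTF_8);
        }

        // The receiver knows only how to decode it; it holds no
        // reference to any of the sender's classes.
        static String[] decode(byte[] wire) {
            return new String(wire, StandardCharsets.UTF_8).split("\\|", 2);
        }

        public static void main(String[] args) {
            byte[] wire = encode("SAVE", "order-42");
            String[] message = decode(wire);
            System.out.println(message[0] + " -> " + message[1]);
        }
    }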
Well, two web services that don't reference each other might be a good example of 100% decoupling.
The coupling would then arrive in the form of an app util that "couples" them together by using them both.
Coupling isn't inherently bad, but you do have to make solid judgement calls about when to do it (is it only at implementation, or in your framework itself?) and whether the coupling is reasonable.
If the components are designed to be 100% orthogonal, it should be possible. A clear separation of concerns can achieve this. All a component needs to know is the interface of its input.
The coupling should be one-directional: components know the semantics of their parameters, but should be agnostic of each other.
As soon as you have 1% coupling between components, that 1% starts growing (at least in a system that lives long enough).
However, knowledge is often injected into peer components to achieve higher performance.
Even if two components do not communicate directly, a third component which uses the other two is part of the system, and it is coupled to them.
@Vadmyst: If your components communicate over a network, they must use some kind of protocol, which is the same thing as the interface between two local components.
That's a painfully abstract question to answer. If said system is the components of a single application, then there are various techniques such as those involving MVC (Model View Controller) and interfaces for IoC / Dependency Injection that facilitate decoupling of components.
From the perspective of physically isolated software architectures, CORBA and COM support local or networked interop and use a "common tongue" of things like ATL. These have been superseded by XML services such as SOAP, which uses WSDL for performing coupling. There's nothing that stops a SOAP client from using a WSDL for run-time late coupling, although I rarely see it. Then there are things like JSON, which is like XML but optimized, and Google Protocol Buffers, which optimizes the interop but is typically precompiled and not late-coupled.
When it comes to IPC (interprocess communications), two systems need only to speak a common "protocol". This could be XML, it could be a shared class library, or it could be something proprietary. Even at the proprietary level, you're still "coupled" by memory streams, TCP/IP networking, shared file (memory or hard disk), or some other mechanism, and you're still using bytes, and ultimately 1's and 0's.
So ultimately the question really cannot be answered fairly; strictly speaking, 100% is only attained by systems that have zilch to do with each other. Refine your question to a context.
It's important to distinguish between direct and indirect connections. Strive to remove direct connections (one class referencing another) and to use indirect connections instead. Bind two 'ignorant' classes with a third which manages their interactions.
This would be something like a set of user controls sitting on a form, or a pool of database connections and a connection pooling class. The more fundamental components (controls and connections) are managed by the higher piece (form and connection pool), but no fundamental component knows about another. The fundamental components expose events and methods, and the other piece 'pulls the strings'.
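A minimal sketch of that arrangement (all names invented): the button and the logger are mutually ignorant, and only the form that binds them knows about both.

    class Button {
        private Runnable onClick = () -> {};
        void setOnClick(Runnable handler) { this.onClick = handler; }
        void click() { onClick.run(); }
    }

    class Logger {
        void log(String message) { System.out.println(message); }
    }

    // The mediator: the only place that knows about both components.
    class Form {
        Form(Button button, Logger logger) {
            button.setOnClick(() -> logger.log("button clicked"));
        }
    }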
No, we can't. Read Joel's excellent article The Law of Leaky Abstractions; it is an eye-opener for many people. However, this isn't necessarily a bad thing, it just is. Leaky abstractions offer great opportunity because they make the underlying platform exploitable.
Think of the API very hard for a very long time, then make sure it's as small as it can possibly be, until it's at the place where it has almost disappeared...
The Lego Software Process proposes this ... :) - and actually achieves it quite well...
How "closely coupled" are two cells of an organism...?
The cells in an organism can still communicate, but instead of doing it by any means that requires any knowledge about the receiving (or sending) part, they do it by releasing chemicals into the body ... ;)