Translating "plumbing" from English (terminology explanation)

What does the following mean:
Support for several bindings (e.g., raw HTTP, TCP, MSMQ, and named pipes)
allows you to choose the most appropriate PLUMBING to transport message data.

'Plumbing' is a pipe system (like the one for water in your house).
It's often used in IT to mean a support infrastructure. It's a particularly suitable term in this case, since the support infrastructure is actually a transport infrastructure, kinda like pipes indeed.

In this context, plumbing refers to the communication layer. If you think of your data/information as "water", then "plumbing" refers to the way the data/information moves between the various parts of your system.

In this case it means the underlying transport mechanism.
The idea is that it equates to low-level infrastructure, like indoor plumbing does.
That is, you don't normally think about the pipes underground that transport water to and from your house (and the other houses in the neighborhood); they may be constructed using different materials and techniques. The same can be said of the different bindings (do you care how they work?).
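To make the analogy concrete, here is a minimal Python sketch (all class and parameter names are hypothetical, invented for illustration): application code talks to an abstract transport and never sees the pipes beneath it, so the plumbing can be swapped without touching the caller.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """The 'plumbing': how bytes actually move is hidden behind this interface."""
    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class HttpTransport(Transport):
    def __init__(self, url: str):
        self.url = url
    def send(self, payload: bytes) -> None:
        # A real binding would issue an HTTP request here.
        print(f"POST {len(payload)} bytes to {self.url}")

class NamedPipeTransport(Transport):
    def __init__(self, pipe_name: str):
        self.pipe_name = pipe_name
    def send(self, payload: bytes) -> None:
        # A real binding would write to the named pipe here.
        print(f"write {len(payload)} bytes to pipe {self.pipe_name}")

def deliver(message: str, transport: Transport) -> None:
    # Application code only knows the interface, not the pipes beneath it.
    transport.send(message.encode("utf-8"))

deliver("hello", HttpTransport("http://example.com/inbox"))
deliver("hello", NamedPipeTransport(r"\\.\pipe\inbox"))
```

The point of the sketch is only that `deliver` is identical for both bindings; choosing the plumbing is a configuration decision, not an application-code decision.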

Related

Why should I prefer HTTP REST over HTTP RPC JSON-Messaging style in conjunction with CQRS?

Every time I read about how web services should communicate, the first thing that comes up is:
Use REST, because it decouples client and server!
I would like to build a web service where each Query and Command is an Http-Endpoint. With REST I would have fewer endpoints, because of its nature of thinking of resources instead of operations (You typically have more operations than resources).
Why do I have a stronger coupling by using RPC over REST?
What is the benefit of using REST over RPC Json-Messaging style?
Additional information: With messaging I mean synchronous messaging (request/response)
Update: I think it would also be possible, or even better, to have one single HTTP endpoint that can handle a Query or Command depending on the given HTTP verb.
Before I get to the CQRS part, I'd like to spend a little time talking about the advantages and disadvantages of REST. I don't think it's possible to answer the question before we've established a common understanding of when and why to use REST.
REST
As with most other technology options, REST, too, isn't a silver bullet. It comes with advantages and disadvantages.
I like to use Richardson's Maturity Model, with Martin Fowler's additional level 0, as a thinking tool.
Level 0
Martin Fowler also calls level 0 the swamp of POX, but I think that what really distinguishes this level is simply the use of RPC over HTTP. It doesn't have to be XML; it could be JSON instead.
The primary advantage at this level is interoperability. Most systems can communicate via HTTP, and most programming platforms can handle XML or JSON.
The disadvantage is that systems are difficult to evolve independently of clients (see level 3).
One of the distinguishing traits of this style is that all communication goes through a single endpoint.
Level 1
At level 1, you start to treat various parts of your API as separate resources. Each resource is identified by a URL.
One advantage is that you can now start to use off-the-shelf software, such as firewalls and proxy servers, to control access to various distinct parts of the system. You can also use HTTP redirects to point clients to different endpoints, although there are some pitfalls in that regard.
I can't think of any disadvantages, apart from those of level 0.
Level 2
At this level, not only do you have resources, but you also use HTTP verbs, such as GET, POST, DELETE, etc.
One advantage is that you can now begin to take more advantage of HTTP infrastructure. For instance, you can instruct clients to cache responses to GET requests, whereas other requests typically aren't cacheable. Again, you can use standard HTTP firewalls and proxies to implement caching. You can get 'web-scale' caching for free.
The reason that level 2 builds on level 1 is that you need each resource to be separate, because you want to be able to cache resources independently from each other. You can't do this if you can't distinguish various resources from each other, or if you can't distinguish reads from writes.
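A minimal sketch of that read/write distinction, assuming nothing beyond standard HTTP caching headers (real caching policies are much richer than this):

```python
def cache_headers(method: str, max_age: int = 3600) -> dict:
    """GET responses are generally safe to cache; writes are not.
    This is the level-2 idea in miniature: once reads and writes are
    distinguished by verb, off-the-shelf HTTP infrastructure can cache
    the reads for you."""
    if method == "GET":
        return {"Cache-Control": f"public, max-age={max_age}"}
    return {"Cache-Control": "no-store"}

assert cache_headers("GET", 60) == {"Cache-Control": "public, max-age=60"}
assert cache_headers("POST") == {"Cache-Control": "no-store"}
```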
The disadvantage is that it may involve more programming work to implement this. Also, all the previous disadvantages still apply. Clients are tightly coupled to your published API. If you change your URL structure, clients break. If you change data formats, clients break.
Still, many so-called REST APIs are designed and published at this level, so in practice it seems that many organisations find this a good trade-off between advantages and disadvantages.
Level 3
This is the level of REST design that I consider true REST. It's nothing like the previous levels; it's a completely different way to design APIs. In my mind, there's a hard divide between levels 0-2, and level 3.
One distinguishing feature of level 3 is that you must think content negotiation into the API design. Once you have that, though, the reasons to choose this API design style become clearer.
To me, the dominant advantage of level 3 APIs is that you can evolve them independently of clients. If you're careful, you can change the structure, even the navigation graph, of your API without breaking existing clients. If you need to introduce breaking changes, you can use content negotiation to ensure that clients can opt-in to the breaking change, whereas legacy clients will keep working.
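A toy illustration of why this works: if clients key on link relations rather than on URLs, the server can move hrefs freely. The response shape below is a hypothetical HAL-style document, used only to show the idea:

```python
# A level-3 response carries its own links.
response = {
    "name": "order-123",
    "_links": {
        "self":   {"href": "/orders/123"},
        "cancel": {"href": "/orders/123/cancel"},
    },
}

def follow(resp: dict, rel: str) -> str:
    """Client code keys on the link *relation*; the server may move the
    href at any time without breaking this client."""
    return resp["_links"][rel]["href"]

assert follow(response, "cancel") == "/orders/123/cancel"
```

A client written this way never hard-codes `/orders/123/cancel`; if the server later serves `"cancel": {"href": "/v2/cancellations?order=123"}`, the same client code keeps working.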
Basically, when I'm asked to write an API where I have no control over clients, my default choice is level 3.
Designing a level 3 REST API requires you to design in a way that's unusual and alien to many, so that's a disadvantage. Another disadvantage is that client developers often find this style of API design unfamiliar, so they often try to second-guess, or retro-engineer, your URL structure. If they do, you'll have to expend some effort to prevent them from doing that as well, since this will prevent you from being able to evolve the API.
In other words, level 3 APIs require considerable development effort, particularly on the server-side, but clients also become more complex.
I will, though, reiterate the advantage: you can evolve a level 3 REST API independently of clients. If you don't control clients, backwards compatibility is critical. Level 3 enables you to evolve APIs while still retaining compatibility. I'm not aware of a way you can achieve this with any of the other styles.
CQRS
Now that we've identified some advantages and disadvantages of REST, we can start to discuss whether it's applicable to CQRS.
The most fundamental agreement between Greg Young and Udi Dahan concerning CQRS is that it's not a top-level architecture.
In a nutshell, the reason for this is that the messages (commands and events) and queries that make up a CQRS system are sensitive to interpretation. In order to do something, a client must know which command to issue, and the server must know how to interpret it. The command, thus, is part of the system.
The system may be distributed across clients and servers, but the messages and data structures are coupled to each other. If you change how your server interprets a given message, that change will impact your clients. You can't evolve clients and servers independently in a CQRS architecture, which is the reason why it's not a top-level architecture.
So, given that it's not a top-level architecture, the transport architecture becomes fairly irrelevant. In a sense, the only thing you need in order to send messages is a single 'service bus' endpoint, which could easily be a level 0 endpoint, if all you need is interoperability. After all, the only thing you do is to put a message on a queue.
The final answer, then, is, as always: it depends.
Is speed of delivery the most important criterion? And can you control all clients and servers at the same time? Then, perhaps level 0 is all you need. Perhaps level 2 is fine.
On the other hand, if you have clients out of your control (mobile apps, hardware (IoT), business partners using your public API, etc.), you must consider how to deal with backwards and forwards compatibility, in which case level 3 is (IMO) required. In that case, though, I'd suggest keeping CQRS an implementation detail.
The best answer would probably be "it depends", but one of the big things about real REST is that it is stateless. And it being stateless means all sorts of things.
Basically, if you look at the HATEOAS constraint you have the reason for the decoupling [1]
I think that RPC-JSON is not stateful per se, but it definitely is not defined as stateless. So there you have the strongest argument for why the decoupling is very high with REST.
[1]: https://en.wikipedia.org/wiki/HATEOAS , http://restfulapi.net/hateoas/

What factors do I need to consider to determine whether I should "trust the defaults" with respect to encryption

Background
With respect to cryptography in general, the following advice is so common that it may even be platform and language-agnostic.
Cryptography is an incredibly complex subject which developers should leave to security experts
I understand and agree with the reasoning behind this statement, and therefore follow the advice when using cryptography in an application.
That being said, because cryptography is treated so lightly in all but crypto-specific reference material, I do not know enough about how cryptography works to be able to determine whether the default provided to me is adequate for my situation. There are thousands of crypto frameworks out there in a myriad of different languages, and I refuse to believe that every one of those implementations is secure, because I don't believe every crypto implementation was created by a crypto expert; principally because, if popular opinion is to be believed, there just aren't that many of them.
Question:
What information do I need to know about a given encryption algorithm to be able to determine for myself whether an algorithm is a reasonable choice?
You need to know the current estimates of time-to-break for each algorithm variant.
You need to know the certifications for particular libraries.
You need to know the required effective security level for the data you are encrypting. Health information in the USA has particular requirements, for example. So do electric utilities.
The more technical you want to get with crypto algorithm evaluation, the more you are wanting the services of an expert. :-/
Consider http://www.cryptopp.com as an example of information provided. For instance, it is certified by NIST.
What information do I need to know about a given encryption algorithm to be able to determine for myself whether an algorithm is a reasonable choice?
Once you identify what you do need, there are very few peer-reviewed solutions you can trust. For example:
Symmetric Encryption: AES (Rijndael), Triple DES
Asymmetric Encryption: Diffie-Hellman, RSA
Hashing: The SHA family of functions
These are proven, battle-tested solutions. Until someone proves otherwise, they can be used safely. It's been a while since cryptography departed from security through obscurity and "roll your own" implementations.
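These primitives also ship in vetted library implementations; for example, Python's standard-library hashlib exposes the SHA family, so there is rarely a reason to write your own. A trivial sketch:

```python
import hashlib

# Use a vetted implementation of a vetted algorithm; never roll your own.
digest = hashlib.sha256(b"battle-tested beats home-grown").hexdigest()

# SHA-256 always yields a 256-bit digest, i.e. 64 hex characters.
assert len(digest) == 64
```

The same applies across platforms: .NET, Java, OpenSSL, and so on all provide audited implementations of the algorithms listed above.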
There's a lot of cryptographic quackery out there, just be careful when choosing your solution. Make sure it's built on proven technologies, and if it sounds too good or has words like "unbreakable," "revolutionary" or the like, you can be 99% sure that it's bogus.
The effective methods are well documented and extensively used. I tend to think of three situations relative to cryptography:
If a government sized entity wants your stuff, they'll get it.
For confidential personal or business stuff, social engineering and non-cryptographic means are almost always more effective than code-breaking for almost any imaginable situation.
For hiding stuff from friends, relations, and mere interlopers, anything off the shelf is sufficient. In these scenarios, the fact that you have hidden stuff is typically more damning than the stuff itself might be.
There was a time when railroad boxcars switched from heavy-duty padlocks to easily defeated but hard to forge loops of wire. Make the lock stronger and they just go in through the walls. Turn the lock into an intrusion detector and you've gained something.
Signing and authentication are turning out to be better uses of cryptography than mere encryption.

Can someone explain an Enterprise Service Bus to me in non-buzzspeak?

Some of our partners are telling us that our software needs to interact with an Enterprise Service Bus. After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth. I'm just trying to get a feel for what our partners are telling us. Am I correct in dismissing our partners' request as just trying to get our software to be more buzzword-compliant, or are they telling us something we should listen to (even if encoded in buzzspeak)?
Although ESB is based on messaging, it is not "just" messaging and not just a buzzword.
So if you start with plain old async messaging, the early networks tended to be very point-to-point. You had to wire up (i.e. configure through some admin interface) each connection and each pair of destinations and if you dared to move anything around invariably something broke. Because the connection points were wired by hand these networks never achieved high connection density. The incremental cost was too high and did not scale. There was also a lot of access control and policy embedded in the topology. The lack of connection density actually favors this approach to security, even though it inhibits flexibility.
The ESB attempts to address these issues with...
Run-time resolution of destinations/services/resources
Location transparency
Any-to-any connectivity and maximum connection density
Architected for redundancy, horizontal scalability, failover
Policy, access control, rules externalized from topology
Logical messaging network layer implemented atop the physical messaging network layer
Common namespace
So when your customer asks for ESB compatibility, they want things like the above. From an application standpoint, this also implies...
Avoiding message affinities such as requirements to process in strict sequence or to address requests only to specific nodes instead of to a generic network destination
Ability to resolve destinations dynamically at run time (i.e. add another instance of a queue and it automatically starts getting traffic, delete one and traffic routes to the remaining nodes)
Requestor and provider apps decoupled from knowing where each other "lives". Requestor makes one connection, regardless of how many services it might need to call
Authorize by policy rather than by topology
Service provider apps able to recognize and handle dupes (as per JMS spec, see "functional duplicate" due to session handling)
Ability to run multiple active instances of a service provider application
Instrument the service provider applications so you can inquire on the status of the network or perform a test without sending an actual transaction
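Two of the traits above, run-time destination resolution and requestors addressing a logical name rather than a specific node, can be sketched with a toy in-memory bus. This is purely illustrative (all names invented), not a real ESB:

```python
import itertools

class Bus:
    """Toy sketch: requestors send to a logical destination name; the bus
    resolves it at run time and shares load across registered instances."""
    def __init__(self):
        self._services = {}   # logical name -> list of handler instances
        self._cursors = {}

    def register(self, name, handler):
        self._services.setdefault(name, []).append(handler)
        # Rebuild the round-robin cursor so a newly added instance
        # automatically starts getting traffic.
        self._cursors[name] = itertools.cycle(self._services[name])

    def send(self, name, message):
        # Requestors address a logical name, never a specific node.
        return next(self._cursors[name])(message)

bus = Bus()
bus.register("AllEmployees", lambda m: f"node-A handled {m}")
bus.register("AllEmployees", lambda m: f"node-B handled {m}")
assert bus.send("AllEmployees", "req1") == "node-A handled req1"
assert bus.send("AllEmployees", "req2") == "node-B handled req2"
```

Registering a third instance requires no change to any requestor, which is exactly the "add another instance of a queue and it automatically starts getting traffic" property described above.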
On the other hand, if your client cannot articulate these things then they may just want to be able to check a box that says "works with the ESB."
I'll try & keep it buzzword free (but a buzz acronym may creep in).
When services/applications/mainframes/etc... want to integrate (i.e., send messages to each other) you can end up with quite a mess. An ESB hides that mess inside of itself (or itselves) so that an organisation can pretend that there isn't a mess and that it has something manageable. It then wraps a whole load of features around this to make this box even more enticing to the senior people in an organisation who'll make the decision to buy such an expensive product. These people typically want to introduce a large initiative which costs a lot of money, to prove that they are 'doing something' and know how to spend large amounts of money. If this is an SOA initiative, then various vendors will have told them that an ESB is required to make the vendor's vision of what SOA is work (typically once the number of services they might want passes a trivial number).
So an ESB is:
A vehicle for vendors to make lots of money;
A vehicle for consultants to make lots of money;
A way for senior executives (IT Directors & the like) to show they can spend lots of money;
A box to hide a mess in;
A total PITA for a technical team to work with.
After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth
You are correct, partially because the term ESB is a nice word that fits well with another buzzword, legitimate or not: governance (i.e., it helps you manage who is accessing your endpoints and report metrics; metrics, by the way, are what all the suits like to see, so that may be a contributor).
Another reason they might want a platform-neutral device is so that any services they consume are always exposed as endpoints from a central location, instead of as a specific machine resource. The ESB makes the actual physical endpoints of your services irrelevant to them, which they shouldn't care much about anyway, but it enables you to move services around freely, since they only ever consume the ESB endpoint.
Apart from providing a centralized repository for discovery, an ESB also makes side-by-side versioning of services easier. If I had a choice and my company had the budget, we would have purchased IBM's x150 appliance :(
Thirdly, a lot of the more advanced buses, like SoftwareAG's product if I recall correctly, are natively able to expose legacy data (such as data sitting on mainframes) as services, without the need for coding, via adapters.
I don't know if their intent is to leverage all the benefits an ESB provides, or as you said, make it buzzword compliant.
After researching this a bit, my instinct is to say that this is just buzz speak for saying that we need to have a platform-independent way to pass messages back and forth
That's about right. Sometimes an ESB will go a little bit further and include additional features like message delivery guarantees, confirmation/acknowledgement messages, and so on. The presence of an ESB also usually explicitly or implicitly creates a new protocol where none previously existed, which is another important consideration. (That is, some sort of standard or interface has to be set regarding the format of the messages.)
Am I correct in dismissing our partners' request as just trying to get our software to be more buzzword-compliant, or are they telling us something we should listen to (even if encoded in buzzspeak)?
You should always listen to your customers, even if it initially sounds silly. It's usually worth at least spending the effort to decide what's going on. Reading between the lines, what your partners probably mean is that they want a way for your service to integrate more easily with their own services and products.
An enterprise service bus handles the messaging between systems in a standard way. This allows you to communicate with the bus in the same exact way across all your platforms and the bus handles the actual translating to individual communication mechanism needed for the specific endpoint. This means you write all your code to talk to the bus using a common messaging scheme and the bus handles taking your common scheme and translating it so the endpoint understands it.
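A minimal sketch of that idea, with hypothetical endpoint names and formats: senders always emit the common scheme, and per-endpoint adapters inside the bus do the translating.

```python
import json

# Per-endpoint adapters: each translates the common scheme into
# whatever the specific endpoint understands. All names are invented.
def to_legacy_csv(msg: dict) -> str:
    return ",".join(str(msg[k]) for k in ("id", "action"))

def to_partner_json(msg: dict) -> str:
    return json.dumps({"messageId": msg["id"], "op": msg["action"]})

ADAPTERS = {"legacy-hr": to_legacy_csv, "partner-api": to_partner_json}

def bus_send(endpoint: str, msg: dict) -> str:
    # The sender always speaks the common scheme; the bus translates.
    return ADAPTERS[endpoint](msg)

assert bus_send("legacy-hr", {"id": 7, "action": "hire"}) == "7,hire"
```

The sending application never learns CSV or the partner's JSON dialect; adding a new endpoint means adding one adapter to the bus, not changing every sender.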
The simplest explanation is to explain what it provides:
For many years companies acquired different platforms and technologies to achieve specific functions in their business, from Finance to HR. These systems needed to talk to each other to share data, so middleware became the glue that allowed them to connect. Before the business knew it, they were paying for support and maintenance on each of these systems and the middleware.
As needs in the business changed, departments decided to create their own custom solutions to address special needs rather than try to make the aging solutions flexible enough to meet them. Before they knew it, they were paying to support and maintain the legacy systems, the middleware, and the custom solutions. With new laws like Sarbanes-Oxley, companies need to have better information available for reporting purposes. A single view requires that they capture data from all of the systems. In addition, CIOs are now being pressured to lower costs and increase customer service.
One obvious solution is to eliminate redundant systems, expensive support and maintenance contracts, and high-cost legacy solutions which require specialists to support. Moving to a new platform allows for this, but there needs to be a transition. There are no turnkey solutions that can replicate what the business does. To address the need to move information around, they go with SOA because it allows for information access through a generic entity. If I ask the service bus for AllEmployees, it gets them whether they come from 15 HR systems or 1. When the 15 HR systems become 1 system, the call and result do not change, just how it was done behind the scenes. The Service Bus concept standardizes the flow of information and allows IT managers to conduct transitions behind the bus with no long-term effect on upstream users.

Do formal methods of program verification have a place in industry?

I got a glimpse of Hoare Logic in college. What we did was really simple. Most of what I did was proving the correctness of simple programs consisting of while loops, if statements, and sequences of instructions, but nothing more. These methods seem very useful!
Are formal methods used in industry widely?
Are these methods used to prove mission-critical software?
Well, Sir Tony Hoare joined Microsoft Research about 10 years ago, and one of the things he started was a formal verification of the Windows NT kernel. Indeed, this was one of the reasons for the long delay of Windows Vista: starting with Vista, large parts of the kernel are actually formally verified with respect to certain properties, like absence of deadlocks, absence of information leaks, etc.
This is certainly not typical, but it is probably the single most important application of formal program verification, in terms of its impact (after all, almost every human being is in some way, shape or form affected by a computer running Windows).
This is a question close to my heart (I'm a researcher in Software Verification using formal logics), so you'll probably not be surprised when I say I think these techniques have a useful place, and are not yet used enough in the industry.
There are many levels of "formal methods", so I'll assume you mean those resting on a rigorous mathematical basis (as opposed to, say, following some 6-Sigma style process). Some types of formal methods have had great success - type systems being one example. Static analysis tools based on data flow analysis are also popular, model checking is almost ubiquitous in hardware design, and computational models like Pi-Calculus and CCS seem to be inspiring some real change in practical language design for concurrency. Termination analysis is one that's had a lot of press recently - the SDV project at Microsoft and work by Byron Cook are recent examples of research/practice crossover in formal methods.
Hoare Reasoning has not, so far, made great inroads in the industry - this is for more reasons than I can list, but I suspect is mostly around the complexity of writing then proving specifications for real programs (they tend to get big, and fail to express properties of many real world environments). Various sub-fields in this type of reasoning are now making big inroads into these problems - Separation Logic being one.
This is partially the nature of ongoing (hard) research. But I must confess that we, as theorists, have entirely failed to educate the industry on why our techniques are useful, to keep them relevant to industry needs, and to make them approachable to software developers. At some level, that's not our problem - we're researchers, often mathematicians, and practical usage is not foremost in our minds. Also, the techniques being developed are often too embryonic for use in large scale systems - we work on small programs, on simplified systems, get the math working, and move on. I don't much buy these excuses though - we should be more active in pushing our ideas, and getting a feedback loop between the industry and our work (one of the main reasons I went back to research).
It's probably a good idea for me to resurrect my weblog, and make some more posts on this stuff...
I cannot comment much on mission-critical software, although I know that the avionics industry uses a wide variety of techniques to validate software, including Hoare-style methods.
Formal methods have suffered because early advocates like Edsger Dijkstra insisted that they ought to be used everywhere. Neither the formalisms nor the software support were up to the job. More sensible advocates believe that these methods should be used on problems that are hard. They are not widely used in industry, but adoption is increasing. Probably the greatest inroads have been in the use of formal methods to check safety properties of software. Some of my favorite examples are the SPIN model checker and George Necula's proof-carrying code.
Moving away from practice and into research, Microsoft's Singularity operating-system project is about using formal methods to provide safety guarantees that ordinarily require hardware support. This in turn leads to faster performance and stronger guarantees. For example, in Singularity they have proved that if a third-party device driver is allowed into the system (which means basic verification conditions have been proved), then it cannot possibly bring down the whole OS; the worst it can do is hose its own device.
Formal methods are not yet widely used in industry, but they are more widely used than they were 20 years ago, and 20 years from now they will be more widely used still. So you are future-proofed :-)
Yes, they are used, but not widely in all areas. There are more methods than just Hoare logic; some are used more, some less, depending on their suitability for a given task. The common problem is that software is biiiiiiig, and verifying that all of it is correct is still too hard a problem.
For example, the theorem prover ACL2 (software that aids humans in proving program correctness) has been used to prove that a certain floating-point processing unit does not have a certain type of bug. It was a big task, so this technique is not too common.
Model checking, another kind of formal verification, is used rather widely nowadays, for example Microsoft provides a type of model checker in the driver development kit and it can be used to verify the driver for a set of common bugs. Model checkers are also often used in verifying hardware circuits.
Rigorous testing can be also thought of as formal verification - there are some formal specifications of which paths of program should be tested and so on.
"Are formal methods used in industry?"
Yes.
The assert statement in many programming languages is related to formal methods for verifying a program.
"Are formal methods used in industry widely ?"
No.
"Are these methods used to prove mission-critical software ?"
Sometimes. More often, they're used to prove that the software is secure. More formally, they're used to prove certain security-related assertions about the software.
There are two different approaches to formal methods in the industry.
One approach is to change the development process completely. The Z notation and the B method that were mentioned are in this first category. B was applied to the development of the driverless subway line 14 in Paris (if you get a chance, climb in the front wagon. It's not often that you get a chance to see the rails in front of you).
Another, more incremental, approach is to preserve the existing development and verification processes and to replace only one of the verification tasks at a time by a new method. This is very attractive, but it means developing static analysis tools for existing, widely used languages that are often not easy to analyse (because they were not designed to be).
If you go to (for instance) http://dblp.uni-trier.de/db/indices/a-tree/d/Delmas:David.html you will find instances of practical applications of formal methods to the verification of C programs (with the static analyzers Astrée, Caveat, Fluctuat, and Frama-C) and of binary code (with tools from AbsInt GmbH).
By the way, since you mentioned Hoare Logic, in the above list of tools, only Caveat is based on Hoare logic (and Frama-C has a Hoare logic plug-in). The others rely on abstract interpretation, a different technique with a more automatic approach.
My area of expertise is the use of formal methods for static code analysis to show that software is free of run-time errors. This is implemented using a formal methods technique known as "abstract interpretation". The technique essentially enables you to prove certain attributes of a software program, e.g., that a+b will not overflow, or that x/(x-y) will not result in a divide by zero. An example static analysis tool that uses this technique is Polyspace.
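A toy interval analysis gives the flavour of abstract interpretation (a drastic simplification of what commercial tools do): given input ranges, we can prove the division in x/(x-y) is safe without ever running the program.

```python
def sub(a, b):
    """Interval subtraction: [a0, a1] - [b0, b1] = [a0 - b1, a1 - b0]."""
    return (a[0] - b[1], a[1] - b[0])

def may_div_by_zero(denominator):
    """A division is unsafe if the denominator's interval contains 0."""
    lo, hi = denominator
    return lo <= 0 <= hi

x = (10, 20)   # x is known to lie in [10, 20]
y = (1, 5)     # y is known to lie in [1, 5]
d = sub(x, y)  # therefore x - y lies in [5, 19]
assert d == (5, 19)
assert not may_div_by_zero(d)   # the division x/(x-y) is provably safe
```

If instead y had been in, say, [1, 15], the analysis would report a *possible* divide by zero; over-approximation like this is why such tools can give false alarms but never miss a real run-time error of the class they cover.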
With respect to your question: "Are formal methods used in industry widely?" and "Are these methods used to prove mission-critical software?"
The answer is yes. This opinion is based on my experience and supporting the Polyspace tool for industries that rely on the use of embedded software to control safety critical systems such as electronic throttle in an automobile, braking system for a train, jet engine controller, drug delivery infusion pump, etc. These industries do indeed use these types of formal methods tools.
I don't believe all 100% of these industry segments are using these tools, but the use is increasing. My opinion is that the Aerospace and Automotive industries lead with the Medical Device industry quickly ramping up use.
Polyspace is a (hideously expensive, but very good) commercial product based on program verification. It's fairly pragmatic, in that it scales up from 'enhanced unit testing that will probably find some bugs' to 'the next three years of your life will be spent showing these 10 files have zero defects'.
It is based more on negative verification ('this program won't corrupt your stack') rather than positive verification ('this program will do precisely what these 50 pages of equations say it will').
To add to Jorg's answer, here's an interview with Tony Hoare. The tools Jorg's referring to, I think, are PREfast and PREfix. See here for more information.
Besides other, more procedural approaches, Hoare logic was at the basis of Design by Contract, introduced as an object-oriented technique by Bertrand Meyer in Eiffel (see Meyer's article of 1992, page 4). While Design by Contract is not the same as formal verification methods (for one thing, DbC doesn't prove anything until the software is executed), in my opinion it provides a more practical use.
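Meyer's idea can be sketched in a few lines. The decorator below is a hypothetical Python rendering of Hoare-style pre/postconditions; as the text notes, these are checked at run time, not proved in advance:

```python
def contract(pre, post):
    """Attach a precondition and a postcondition to a function,
    checked on every call (Design by Contract, run-time flavour)."""
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def integer_sqrt(x):
    # Largest r with r*r <= x, found by simple counting.
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

assert integer_sqrt(10) == 3
```

Calling `integer_sqrt(-1)` trips the precondition immediately, which is exactly the practical benefit: contract violations surface at the boundary where they occur, instead of as mysterious failures downstream.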

Can we achieve 100% decoupling?

Can we achieve 100% decoupling between components of a system, or between different systems that communicate with each other? I don't think it's possible. If two systems communicate with each other, then there must be some degree of coupling between them. Am I right?
Right. Even if you write to an interface or a protocol, you are committing to something. You can peacefully forget about 100% decoupling and rest assured that whatever you do, you cannot just snap out one component and slap another in its place without at least minor modifications anyway, unless you are committing to very basic protocols such as HTTP (and even then.)
We human beings, after all, just LOOVE standards. That's why we have... well, nevermind.
If components are 100% decoupled, it means that they don't communicate with each other.
Actually there are different types of coupling. But the general idea is that objects are not coupled if they don't depend on each other.
You can achieve that. Think of two components that communicate with each other through a network. One component can run on Windows while the other runs on Unix. Isn't that 100% decoupling?
At minimum, firewall protection, from a certain interface at least, needs to allow the traffic from each machine to go to the other. That alone can be considered a form of 'coupling' and therefore, coupling is inherent to machines that communicate, at least to a certain level.
This is achievable by introducing a communication interface or protocol which both components understand and not passing data directly between the components.
Well two webservices that don't reference each other might be a good example of 100% decoupled.
The coupling would then arrive in the form of an app util that "couples" them together by using them both.
Coupling isn't inherently bad but you do have to make solid judgement calls about when to do it (is it only at Implementation, or in your framework itself?) and if the coupling is reasonable.
If the components are designed to be 100% orthogonal, it should be possible. A clear separation of concerns can achieve this. All a component needs to know is the interface of its input.
The coupling should be one-directional: components know the semantics of their parameters, but should be agnostic of each other.
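That one-directional coupling can be sketched as follows (Python; the names `MessageSource`, `FileSource`, and `shout` are hypothetical, chosen only for illustration). The consumer knows the interface of its input and nothing about the concrete component behind it:

```python
from typing import Protocol

class MessageSource(Protocol):
    """All the consumer knows: something that yields text."""
    def read(self) -> str: ...

class FileSource:
    """Knows nothing about who will consume its output."""
    def __init__(self, text: str) -> None:
        self._text = text
    def read(self) -> str:
        return self._text

def shout(source: MessageSource) -> str:
    # Coupled only to the interface, agnostic of the concrete class.
    return source.read().upper()

print(shout(FileSource("hello")))  # -> HELLO
```

Any object exposing `read()` can be slapped in without touching `shout` — the residual coupling is the interface itself, which is the point made above: it never reaches zero.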
As soon as you have 1% coupling between components, that 1% starts growing (in any system that lives long enough).
However, often knowledge is injected in peer components to achieve higher performance.
Even if two components do not communicate directly, a third component that uses the other two is part of the system and is coupled to them.
#Vadmyst: If your components communicate over a network they must use some kind of protocol, which plays the same role as the interface between two local components.
That's a painfully abstract question to answer. If said system is the components of a single application, then there are various techniques such as those involving MVC (Model View Controller) and interfaces for IoC / Dependency Injection that facilitate decoupling of components.
From the perspective of physically isolated software architectures, CORBA and COM support local or networked interop and use a "common tongue" of things like ATL. These have since been superseded by XML services such as SOAP, which uses WSDL to define the coupling. There's nothing that stops a SOAP client from using a WSDL for run-time late coupling, although I rarely see it. Then there are things like JSON, which is like XML but lighter-weight, and Google Protocol Buffers, which optimize the interop but are typically precompiled and not late-coupled.
When it comes to IPC (interprocess communications), two systems need only to speak a common "protocol". This could be XML, it could be a shared class library, or it could be something proprietary. Even at the proprietary level, you're still "coupled" by memory streams, TCP/IP networking, shared file (memory or hard disk), or some other mechanism, and you're still using bytes, and ultimately 1's and 0's.
So ultimately the question really cannot be answered fairly; strictly speaking, 100% is only attained by systems that have zilch to do with each other. Refine your question to a context.
It's important to distinguish between direct, and indirect components. Strive to remove direct connections (one class referencing another) and to use indirect connections instead. Bind two 'ignorant' classes with a third which manages their interactions.
This would be something like a set of user controls sitting on a form, or a pool of database connections and a connection pooling class. The more fundamental components (controls and connections) are managed by the higher piece (form and connection pool), but no fundamental component knows about another. The fundamental components expose events and methods, and the other piece 'pulls the strings'.
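This "pulls the strings" arrangement is essentially the mediator pattern. A minimal sketch (Python; `Button`, `Label`, and `Form` are hypothetical stand-ins for the controls-on-a-form example): the two controls never reference each other, and only the form wires them together.

```python
class Button:
    """Fundamental component: exposes an event, knows no other control."""
    def __init__(self):
        self.on_click = []
    def click(self):
        for handler in self.on_click:
            handler()

class Label:
    """Another ignorant component: exposes a method, nothing more."""
    def __init__(self):
        self.text = ""
    def set_text(self, text):
        self.text = text

class Form:
    """The higher piece: binds the two without them knowing each other."""
    def __init__(self):
        self.button = Button()
        self.label = Label()
        self.button.on_click.append(lambda: self.label.set_text("clicked"))

form = Form()
form.button.click()
print(form.label.text)  # -> clicked
```

`Button` and `Label` stay reusable in any other context; the (indirect) coupling has been concentrated in `Form`.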
No, we can't. Read Joel's excellent article The Laws of Leaky Abstraction, it is an eye-opener for many people. However, this isn't necessarily a bad thing, it just is. Leaky abstractions offer great opportunity because they make the underlying platform exploitable.
Think of the API very hard for a very long time, then make sure it's as small as it can possibly be, until it's at the place where it has almost disappeared...
The Lego Software Process proposes this ... :) - and actually achieves it quite well...
How "closely coupled" are two cells of an organism...?
The cells in an organism can still communicate, but instead of doing it by any means that requires any knowledge about the receiving (or sending) part, they do it by releasing chemicals into the body ... ;)
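In software terms that chemical signaling is publish/subscribe: the sender releases a message into a shared medium, and neither side knows the other exists. A minimal event-bus sketch of the analogy (Python; `Body`, `subscribe`, and `release` are hypothetical names):

```python
from collections import defaultdict

class Body:
    """Shared medium: components register interest in a 'chemical',
    never in each other."""
    def __init__(self):
        self._receptors = defaultdict(list)
    def subscribe(self, chemical, receptor):
        self._receptors[chemical].append(receptor)
    def release(self, chemical, payload):
        # The emitter has no idea who, if anyone, is listening.
        for receptor in self._receptors[chemical]:
            receptor(payload)

body = Body()
received = []
body.subscribe("adrenaline", received.append)  # one "cell" listens
body.release("adrenaline", "run!")             # another "cell" emits
print(received)  # -> ['run!']
```

The residual coupling is to the bus and the message format — consistent with the earlier point that it never quite reaches 100%.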