HP NonStop systems (previously known as "Tandem") are known for their high availability and reliability, and for their higher price.
How do Linux or Unix based clusters compare with them, in these respects and others?
On a fault-tolerant machine the fault tolerance is handled directly in hardware and is transparent to the application. Programming a cluster requires you to handle fault tolerance explicitly in the application.
In practice, a clustered application architecture is much more complex and error-prone to build than an application built for a fault-tolerant platform such as NonStop. This means there is far greater scope for unreliability driven by application bugs, as the London Stock Exchange found out the hard way. They had an incumbent Tandem-based system, which was quite a common architecture for stock-exchange trading applications. Their new CEO had the bright idea that Microsoft was the way forward and had a big-5 consultancy build a .Net system based on a cluster of 120 servers.
The problem with clustered applications is that the failures can be correlated. If an application or configuration bug exists in the system it will typically be replicated on all of the nodes. This means that you can get a single situation or event that can take out the whole cluster. The additional complexity of clustered applications makes them more error-prone to develop and deploy, which raises the odds of this happening. A clustered system built on (for example) Linux and J2EE is vulnerable to the same types of failure modes.
IMHO this is a major advantage of older-style mainframe architectures. Several vendors (IBM, HP, DEC and probably several others I can't think of) made fault-tolerant systems. The underlying programming model for this type of system is somewhat simpler than a clustered n-tier application server. This means that there is comparatively little to go wrong, and for a given amount of effort you can achieve a more reliable system. A surprising number of older architectures are still alive and well and living quite comfortably in their market niches. IBM still sells plenty of Z series and i series machines; Unisys still makes the A Series and 2200 series; VMS and NonStop are still alive within HP. The sales of these systems are not all to existing clients - for example, a commercial underwriting system (GENIUS) runs on the iSeries and is still a market leader in this niche, with new rollouts going on as I write this. The application has survived two attempts to rewrite it (one in Java and one in .Net) that I am aware of, and the 'Old School' platform doesn't really seem to be cramping its style.
I wouldn't go shorting any screen-scraper vendors just yet ...
Gray & Reuter's Transaction Processing: Concepts and Techniques is somewhat dry and academic, but has a good treatment of fault-tolerant systems architecture. One of the authors was a key player in the design of Tandem's systems.
I am trying to understand what the use case of a queue in a distributed system is.
Also, how does it scale, and how does it make sure it's not a single point of failure in the system?
Any direct answer or a reference to a document is appreciated.
Use case:
I understand that a queue is a messaging system and that it decouples the systems that communicate with each other. But is that the only point of using a queue?
Scalability:
How does the queue scale for high volumes of data, for both reads and writes?
Reliability:
How does the queue avoid becoming a single point of failure in the system? Does the queue do replication, similar to data storage?
My question is not specific to any particular queue server like Kafka or JMS; it is about queues in general.
A queue is a conceptual model; the implementation decides the answers to 1, 2 and 3 (use case, scalability, reliability).
A1: No, it is not the only role -- messaging seems to be the main one, but distributed-system signalling is another, by no means any less important. Hoare's seminal CSP paper is the flagship work in this field. Recent decades have added many more options and "smart behaviours" to work with when designing a distributed system's signalling/messaging infrastructure.
A2: Scaling envelopes depend a lot on the implementation. It seems obvious that broker-less queues can work much faster than a centralised, broker-based infrastructure. Transport classes and transport links add latency and performance degradation as data-flow volumes grow. BLOB handling is another performance cliff, as the inefficiencies accumulate down the distributed processing chain. Zero-copy, (almost) zero-latency smart-queue implementations are still victims of operating-system and similar resource limitations.
A3: Oh sure it is the SPOF, if left on its own. However, theoretical cybernetics makes us safe, as we can create reliable systems while still using error-prone components. (M + N)-failure-resilient schemes are thus achievable; however, budget + creativity + design discipline form the ceiling any such project has to survive within.
my take:
I would be careful with the term "decouple" - if service A calls an API on service B, there is coupling, since there is a contract between the services; this is true even if the communication happens over a queue, a file or a fax. The key with queues is that the communication between services is asynchronous, which means their runtimes are decoupled - from a practical point of view, either system may go down without affecting the other.
Queues can scale for large volumes of data by partitioning. From the client's point of view there is one queue, but in reality there are many queues/shards, and the number of shards helps to support more data. Of course, sharding a queue is not "free" - you will lose global ordering of events, which may need to be addressed in your application.
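As a rough illustration (not tied to any particular broker; the shard count and key names are made up), partitioning usually means picking a shard by hashing a partition key, which preserves per-key ordering but not global ordering:

```python
from collections import deque
from hashlib import sha1

NUM_SHARDS = 4
shards = [deque() for _ in range(NUM_SHARDS)]  # stand-ins for real queue partitions

def shard_for(key: str) -> int:
    # Stable hash, so the same key always lands on the same shard.
    return int(sha1(key.encode()).hexdigest(), 16) % NUM_SHARDS

def enqueue(key: str, payload: str) -> None:
    shards[shard_for(key)].append((key, payload))

# Events for "order-42" stay ordered relative to each other,
# but their interleaving with "order-7" events is undefined.
enqueue("order-42", "created")
enqueue("order-7", "created")
enqueue("order-42", "paid")

for i, shard in enumerate(shards):
    print(f"shard {i}: {list(shard)}")
```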
A good queue-based solution is reliable thanks to replication/consensus/etc. - it depends on the set of desired properties. Queues are not very different from databases in this regard.
To give you more direction to dig into:
there is an interesting family of queue delivery guarantees: deliver-exactly-once, deliver-at-most-once, etc.
may I recommend Enterprise Integration Patterns - https://www.enterpriseintegrationpatterns.com/patterns/messaging/Messaging.html - this is a good "system design" level of information
queues may participate in distributed transactions, e.g. you could build something like "delete a record from the database and write it into a queue", and that will either be committed or rolled back as a unit - another interesting topic to explore (a minimal sketch of one common way to do this follows below)
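As a minimal sketch of that last point: one common way to make "change the database and write to the queue" atomic without a full distributed transaction is the transactional outbox pattern. The table names and the print-based "publish" below are made up for illustration; a real setup would have a relay pushing outbox rows to the actual queue.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, event TEXT)")
db.execute("INSERT INTO records (id, data) VALUES (1, 'hello')")
db.commit()

def delete_and_publish(record_id: int) -> None:
    # Both statements run in one local transaction: either the delete and the
    # outbox row commit together, or neither does.
    with db:
        db.execute("DELETE FROM records WHERE id = ?", (record_id,))
        db.execute("INSERT INTO outbox (event) VALUES (?)", (f"deleted:{record_id}",))

def relay_outbox(publish) -> None:
    # A separate relay reads the outbox and pushes events to the real queue,
    # removing rows only after a successful publish (at-least-once delivery).
    rows = db.execute("SELECT id, event FROM outbox").fetchall()
    for row_id, event in rows:
        publish(event)
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
    db.commit()

delete_and_publish(1)
relay_outbox(lambda event: print("published:", event))
```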
I saw a job description with the term Real-Time Software Development:
Software Engineers at Boeing develop solutions that provide world class performance and capability to customers around the world.
Boeing Defense, Space and Security in St. Louis is looking for software engineers to join the growing and talented teams developing modeling and simulation software for a variety of applications, including flight control and aerodynamic performance, weapon and sensor systems, simulation tools and more. The software is integrated with live assets to enable a next-generation virtual battle environment to explore new system concepts and optimal engineering solutions.
Our software engineers are responsible for full life-cycle software development which means you will have a hand in defining the requirements; designing, implementing and testing the software. You will work with a team in a casual but professional environment where there is long-term potential for career growth into management or technical leadership positions.
**Languages & Databases**
Real-time SW Development Tool
Real-time Target Environment
Job: Software Engineer
I can't figure out what that means in this context. What does real-time software development mean?
The links in the comments give some useful information. The real problem with real time is that there are far fewer uses for it than for ordinary scientific or data-processing applications, and so fewer specialists around.
I used a real-time development environment many years ago, and a friend of mine used another one more recently. The generic characteristics were:
the development system is an IDE more or less like any other IDE
you have the ability to get the precise time any routine will take, because if you use an RT system it is because you need deterministic processing times
you have an emulator that allows you to run the program - or, more exactly, simulate it running on the real system - with different inputs (including hardware inputs) and to check both the outputs and the timings
you generally mix high-level programming (C or other languages) for non-critical parts with low-level assembly routines for time-critical parts.
The rest really depended on the simulated system.
Real time in this context means software that always runs within the same, predictable amount of time. Normal server and desktop OSes such as Mac, Linux, and Windows use multitasking without exact scheduling, making it impossible to say exactly how long a piece of code will take to run. In a real-time OS, the time a given piece of code takes is deterministic and bounded.
This is used in spacecraft, aircraft and similar areas.
Not to be confused with real-time processing speed, e.g. encoding video in real time means encoding it as fast as the frames are coming in.
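You can see the lack of exact scheduling for yourself on any desktop OS. This little sketch (plain Python, purely illustrative) asks for a 10 ms period and measures how late each wake-up actually is; on a general-purpose OS the worst case wanders, whereas a real-time OS is designed to bound it:

```python
import time

PERIOD = 0.010  # target: wake up every 10 ms
lateness_ms = []
next_tick = time.perf_counter() + PERIOD
for _ in range(200):
    time.sleep(max(0.0, next_tick - time.perf_counter()))
    now = time.perf_counter()
    lateness_ms.append((now - next_tick) * 1e3)  # how late this wake-up was, in ms
    next_tick += PERIOD

print(f"average lateness: {sum(lateness_ms) / len(lateness_ms):.3f} ms")
print(f"worst-case lateness: {max(lateness_ms):.3f} ms")
```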
I am having a hard time grasping what etcd (in CoreOS) really does, because all these "distributed key-value store" things seem intangible to me. Reading further into etcd, it delves into the Raft consensus algorithm, and then it becomes really confusing to understand.
Let's put it this way: what happens if a cluster system doesn't have etcd?
Thanks for your time and effort!
As someone with no CoreOS experience who is building a distributed system using etcd, I think I can shed some light on this.
The idea with etcd is to give some very basic primitives that are applicable to building a wide variety of distributed systems. The reason for this is that distributed systems are fundamentally hard. Most programmers don't really grok the difficulties, simply because there are orders of magnitude more opportunities to learn about single-system programs; this has really only started to shift in the last five years, since cloud computing made distributed systems cheap to build and experiment with. Even so, there's a lot to learn.
One of the biggest problems in distributed systems is consensus. In other words, guaranteeing that all nodes in a system agree on a particular value. Now, if hardware and networks were 100% reliable then it would be easy, but of course that is impossible. Designing an algorithm that provides meaningful guarantees around consensus is a very difficult problem, and one that a lot of smart people have put a lot of time into. Paxos was the previous state-of-the-art algorithm, but it was very difficult to understand. Raft is an attempt to provide similar guarantees while being much more approachable to the average programmer. Even so, as you have discovered, it is non-trivial to understand its operational details and applications.
In terms of what etcd is specifically used for in CoreOS I can't tell you. But what I can say with certainty is that any data which needs to be shared and agreed upon by all machines in a cluster should be stored in etcd. Conversely, anything that a node (or subset of nodes) can handle on its own should emphatically not be stored in etcd (because it incurs the overhead of communicating and storing it on all nodes).
With etcd it's possible to have a large number of identical machines automatically coordinate, elect a leader, and guarantee an identical history of data in its key-value store such that:
No etcd node will ever return data which is not agreed upon by the majority of nodes.
For a cluster of size x, any subset of more than x/2 machines (e.g. 3 out of 5) can continue operating and accepting writes even if the others die or lose connectivity.
Any machines that lose connectivity (e.g. due to a netsplit) are guaranteed to continue returning correct historical data, even though they will fail to write.
The key-value store itself is quite simple and nothing particularly interesting, but these properties allow one to construct distributed systems that resist individual component failure and can provide reasonable guarantees of correctness.
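To make that concrete, here is a minimal sketch against the HTTP/JSON API that CoreOS-era etcd (v2) exposes. It assumes an etcd node reachable at localhost:2379, and the key name is made up; any node can write the value, and any node reading it back gets what the majority of the cluster has agreed on:

```python
import requests

BASE = "http://127.0.0.1:2379/v2/keys"

# One node publishes a piece of cluster-wide, agreed-upon configuration...
requests.put(f"{BASE}/config/db_endpoint", data={"value": "10.0.0.5:5432"})

# ...and any other node can read it back from any etcd member.
resp = requests.get(f"{BASE}/config/db_endpoint")
print(resp.json()["node"]["value"])  # -> 10.0.0.5:5432
```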
etcd is a reliable system for cluster-wide coordination and state management. It is built on top of Raft.
Raft gives etcd a total ordering of events across a system of distributed etcd nodes. This has many advantages and disadvantages:
Advantages include:
any node may be treated like a master
minimal downtime (a client can try another node if one isn't responding)
avoids split-brain situations
a reliable way to build distributed locks for cluster-wide coordination
users of etcd can build distributed systems without ad-hoc, buggy, homegrown solutions
For example: You would use etcd to coordinate an automated election of a new Postgres master so that there remains only one master in the cluster.
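A hedged sketch of how such an election can work against the same etcd v2 HTTP API (localhost:2379 assumed; the key name and TTL are illustrative only): `prevExist=false` turns the PUT into an atomic create-if-absent, so exactly one contender wins, and the TTL makes the leadership lease expire if the winner dies without renewing it.

```python
import requests

ELECTION_KEY = "http://127.0.0.1:2379/v2/keys/service/postgres/master"

def try_become_master(my_name: str, ttl_seconds: int = 30) -> bool:
    resp = requests.put(
        ELECTION_KEY,
        params={"prevExist": "false"},           # atomic create, only if absent
        data={"value": my_name, "ttl": ttl_seconds},
    )
    return resp.ok  # exactly one contender succeeds; the rest fail the compare

if try_become_master("node-a"):
    print("node-a is master; keep renewing the TTL while healthy")
else:
    print("someone else is master; watch the key and stand by")
```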
Disadvantages include:
for safety reasons, it requires a majority of the cluster to commit writes - usually to disk - before replying to a client
requires more network chatter than a single master system
I'm talking about a library that would allow me to log events from different machines and would align these events on a "global" time axis with sufficiently high precision.
Actually, I'm asking because I wrote such a thing myself in the course of a cluster-computing project; I found it terrifically useful, and I was surprised that I couldn't find any analogues.
Therefore, the point is whether something like this exists (in which case I'd better contribute to it) or nothing exists (in which case I'd better write an open-source analogue of my solution).
Here are the features that I'd expect from such a library:
Independence from the clock offset between different machines
Timing precision on the order of at least milliseconds, preferably microseconds
Scalability to thousands of concurrent logging processes, with at least several megabytes of aggregated logs per second
Soft real-time operation (i.e. I don't want to collect 200 big logs from 200 machines and then compute clock offsets and merge them - I want to see what happens "live", perhaps with a small lag like 10 s); a sketch of the offset-corrected merge I have in mind is shown below
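For concreteness, here is the merge step I mean (plain Python; the host names and offsets are made up, and in practice the offsets would come from an NTP-style exchange with each machine):

```python
import heapq

# Measured offset of each host's clock relative to the reference clock (seconds).
offsets = {"host-a": +0.0123, "host-b": -0.0045}

# Per-host event streams: (local timestamp, message), already sorted per host.
streams = {
    "host-a": [(1.000, "job started"), (1.050, "job finished")],
    "host-b": [(1.020, "request received"), (1.030, "reply sent")],
}

corrected = (
    ((ts + offsets[host], host, msg) for ts, msg in events)
    for host, events in streams.items()
)

# Merge onto a single "global" time axis.
for global_ts, host, msg in heapq.merge(*corrected):
    print(f"{global_ts:.4f}  {host}: {msg}")
```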
Facebook's contribution in this area is called 'Scribe'.
Excerpt:
Scribe is a server for aggregating streaming log data. It is designed to scale to a very large number of nodes and be robust to network and node failures. There is a scribe server running on every node in the system, configured to aggregate messages and send them to a central scribe server (or servers) in larger groups.
...
Scribe is implemented as a thrift service using the non-blocking C++ server. The installation at facebook runs on thousands of machines and reliably delivers tens of billions of messages a day.
The API is Thrift-based, so you have good platform coverage, but in case you're looking for simple integration for Java you may want to have a look at Digg's log4j appender for Scribe.
You could use log4j/log4net targeting a central syslog daemon. log4j has a built-in SyslogAppender, and in log4net you can do it as shown here. log4cpp docs here.
There are Windows implementations of Syslog around if you don't have a Unix system to hand for this.
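The answer above is about log4j/log4net/log4cpp, but the same idea - every process pointing its logger at one central syslog daemon - looks like this in Python's standard library, just for illustration (the host name "loghost" is a placeholder for your central syslog server):

```python
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("loghost", 514))  # UDP by default
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

log = logging.getLogger("myapp")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("event from this node, aggregated centrally")
```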
Use Chukwa; it's an open-source, large-scale log monitoring system.
I have recently begun working on a project to establish how best to leverage the processing power available in modern graphics cards for general programming. It seems that the field of general-purpose GPU programming (GPGPU) has a large bias towards scientific applications with a lot of heavy math, as this fits well with the GPU computational model. This is all good and well, but most people don't spend all their time running simulation software and the like, so we figured it might be possible to create a common foundation for easily building GPU-enabled software for the masses.
This leads to the question I would like to pose: what are the most common types of work performed by programs? It is not a requirement that the work translate extremely well to GPU programming, as we are willing to accept modest performance improvements (better a little than nothing, right?).
There are a couple of subjects we have in mind already:
Data management - manipulation of large amounts of data, from databases and otherwise.
Spreadsheet-type programs (somewhat related to the above).
GUI programming (though it might be impossible to get access to the relevant code).
Common algorithms like sorting and searching.
Common collections (and integrating them with data-manipulation algorithms).
Which other coding tasks are very common? I suspect a lot of the code being written falls into the category of inventory management and other tracking of real 'objects'.
As I have no industry experience, I figured there might be a number of basic types of code which are written more often than I realize but which just don't materialize as external products.
Both high-level programming tasks and specific low-level operations would be appreciated.
General programming translates terribly to GPUs. GPUs are dedicated to performing fairly simple tasks on streams of data at a massive rate, with massive parallelism. They do not deal well with the rich data and control structures of general programming, and there's no point trying to shoehorn that into them.
This isn't too far from my impression of the situation, but at this point we are not concerning ourselves too much with that. We are starting out by getting a broad picture of which options we have to focus on. Once that is done we will analyse them a bit more deeply and find out which, if any, are plausible options. If we end up determining that it is impossible to do anything useful within the field, and that we would only be increasing everybody's electricity bill, then that is a valid result as well.
Things that modern computers do a lot of, where a little benefit could go a long way? Let's see...
Data management: relational database management could benefit from faster relational joins (especially joins involving a large number of relations). Involves massive homogeneous data sets.
Tokenising, lexing, parsing text.
Compilation, code generation.
Optimisation (of queries, graphs, etc).
Encryption, decryption, key generation.
Page layout, typesetting.
Full text indexing.
Garbage collection.
I do a lot of simplifying of configuration. That is, I wrap the generation/management of configuration values inside a UI. The primary benefit is that I can control workflow and presentation to make it simpler for non-techie users to configure apps/sites/services.
The other thing to consider when using a GPU is bus speed. Most graphics cards are designed to have higher bandwidth when transferring data from the CPU out to the GPU, as that's what they do most of the time. The bandwidth from the GPU back to the CPU, which is needed to return results etc., isn't as fast. So they work best in a pipelined mode.
You might want to take a look at the March/April issue of ACM's Queue magazine, which has several articles on GPUs and how best to use them (besides doing graphics, of course).