Electing a new leader in distributed systems

I have the following problem:
I have a distributed system where I need to reach a consensus in one way or another when choosing a leader.
I have a group of players that communicate with each other via messages. In order for these players to progress from one stage to another, someone has to keep track of their progress. Currently, there are two types of players:
leader: when he receives N-1 DONE messages (one from each of the N-1 other players), he is responsible for broadcasting the state change to all other users
follower: he is responsible for receiving the leader's messages and updating his internal state machine.
Each player receives messages from two pipelines:
- Status pipeline: he receives an array of the form
[user1, user2, user3, ..., userN] where each element is a user that is currently online.
- Message pipeline: push-based notifications. Followers post messages here when they are ready for the next step. The leader keeps track of a DONE counter and, when the threshold is reached, broadcasts ADVANCE to the next step.
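A minimal sketch of the two roles described above (the broadcast and on_message hooks are placeholders, not an actual messaging API):

```python
# Minimal sketch of the two roles. The message-passing helpers
# (broadcast, send_to_leader) are placeholders, not a real API.

class Leader:
    def __init__(self, n_players, broadcast):
        self.n_players = n_players        # N, including the leader
        self.done = set()                 # followers that sent DONE
        self.broadcast = broadcast        # callable(msg) sends to everyone

    def on_message(self, sender, msg):
        if msg == "DONE":
            self.done.add(sender)
            if len(self.done) >= self.n_players - 1:   # threshold reached
                self.broadcast("ADVANCE")
                self.done.clear()         # reset for the next stage

class Follower:
    def __init__(self, send_to_leader):
        self.stage = 0                    # internal state machine
        self.send_to_leader = send_to_leader

    def finish_stage(self):
        self.send_to_leader("DONE")       # report readiness to the leader

    def on_message(self, sender, msg):
        if msg == "ADVANCE":
            self.stage += 1               # leader says: next stage
```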
I do not know how to deal with leader re-election. If the leader disconnects (which can be detected with a timeout), how can the other nodes decide who the next leader is? And if they pick randomly, should the current leader be stored in a database? I mean, they only exchange messages; there's nothing on the server like a global variable or something.

What you basically need is to implement both two-phase commit and a leader election recipe. You can either implement them on your own (two-phase commit is well documented, and yes, you would need shared storage), or, if you have the flexibility to use a distributed open-source coordination service, ZooKeeper would be your best bet. Have a look at the article below on Apache ZooKeeper's page, where both of the recipes you need are discussed. Hope this helps.
https://zookeeper.apache.org/doc/current/recipes.html#sc_recipes_twoPhasedCommit
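For illustration, a minimal leader-election sketch using the kazoo Python client for ZooKeeper; the znode path and identifier below are arbitrary choices:

```python
# Sketch of ZooKeeper leader election with kazoo. If the current
# leader's session dies, its ephemeral znode disappears and one of
# the remaining contenders is promoted automatically.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")   # assumed ZooKeeper address
zk.start()

def lead():
    # Runs only while this process holds leadership; the DONE-counting /
    # ADVANCE-broadcasting loop would go here.
    print("this node is now the leader")

election = zk.Election("/players/election", identifier="player-1")
election.run(lead)   # blocks until elected, then invokes lead()
```

The recipe works because contenders create ephemeral sequential znodes and the holder of the lowest one leads, which avoids both random picks and a shared database.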


Assigning a task to a team

I'd like to assign a task to a whole team of users instead of a single user. Then anybody on that team should be able to execute the task. I run the self-hosted version of ActiveCollab.
For example:
My team has 10 members, and therefore a capacity of 10* 8h/day = 80h
I'm assigning 100h of work/tasks to that team --> 80h stay for the day, 20h get pushed to the next day
Any member of that team can grab a task, track time and finally finish it.
Is that something which can be done right now via the API?
If not, is something like that on the roadmap?
ActiveCollab does not support assigning a task to a team, only to an individual user, and the API can't be used to work around the one-task-one-assignee constraint. What you can do is implement a routine that creates a copy of the task for each team member and assigns it to that member, but that can easily clutter your projects.
Thanks for the quick response.
Then I'm continuing with my workaround:
1. I get the workload from all users and store it in a separate table.
2. When I distribute the tasks, I look this table up, see who is available for that activity, and finally assign the task to that user (sketched below).
3. Under /workload the users can still reassign and reschedule; I'll re-run the sync from step 1 from time to time.
Everything else can also be done via the frontend.
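A rough sketch of that sync-and-assign routine; note that every endpoint path, header and field name below is a placeholder rather than the real ActiveCollab API, so check the API docs before using anything like this:

```python
# Hypothetical sketch of the workaround. Endpoint paths, the auth header
# and field names are placeholders; consult the ActiveCollab API docs.
import requests

BASE = "https://example.com/api/v1"          # placeholder instance URL
AUTH = {"X-Auth-Token": "secret"}            # placeholder auth header

def sync_workload(user_ids):
    """Step 1: pull each user's workload into our own table."""
    workload = {}
    for uid in user_ids:
        r = requests.get(f"{BASE}/users/{uid}/workload", headers=AUTH)
        workload[uid] = r.json().get("assigned_hours", 0)
    return workload   # persist this in the separate table

def assign(task_id, workload, hours, capacity=8):
    """Step 2: give the task to the least-loaded user with spare capacity."""
    uid = min(workload, key=workload.get)
    if workload[uid] + hours > capacity:
        return None   # team is full for today; task rolls to the next day
    requests.put(f"{BASE}/tasks/{task_id}",
                 headers=AUTH, json={"assignee_id": uid})
    workload[uid] += hours
    return uid
```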

Dealing with exceptions in an event driven world

I'm trying to understand how exceptions are handled in an event-driven world using microservices (with Apache Kafka). For example, take the following order scenario, in which the following actions need to happen before the order can be completed.
1) Authorise the payment with the payment service provider
2) Reserve the item from stock
3.1) Capture the payment with the payment service provider
3.2) Order the item
4) Send an email notification accepting the order, with a receipt
At any stage in this scenario, there could be a failure such as:
The item is no longer in stock
The payment information was incorrect
The account the payer is using doesn't have the funds available
External calls such as those to the payment service provider fail, such as downtime
How do you track that each stage has been called for and/or completed?
How do you deal with issues that arise? How would you notify the frontend of the failure?
Some of the things you describe are not errors or exceptions, but alternative flows that you should consider in your distributed architecture.
For example, that an item is out of stock is a perfectly valid alternative flow in your business process. One that possibly requires human intervention. You could move the message to a separate queue and provide some UI where a human operator can deal with the problem, solve it and cause the flow of events to continue.
A similar thing could be said of the payment problems you describe. If an order cannot successfully be settled, a human operator will need to investigate the case and solve it. For that matter, your design must contemplate that alternative flow as part of it, and make it so a human can intervene somehow when the messages end up in a queue that requires a person to review them.
Those cases should be differentiated from errors or exceptions being thrown by the program. Those cases, depending on the circumstance, might in fact require to move the message to a dead letter queue (DLQ) for an engineer to take a look at them.
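As a sketch of that separation, using the kafka-python client (the topic names and the OutOfStock condition are illustrative, chosen to match the order example):

```python
# Sketch: route business "alternative flows" and technical failures to
# different topics. Topic names and exception types are illustrative.
import json
from kafka import KafkaConsumer, KafkaProducer

class OutOfStock(Exception):
    """A valid business condition, not a bug."""

def process(order):
    ...  # business logic; may raise OutOfStock as an alternative flow

consumer = KafkaConsumer("orders", bootstrap_servers="localhost:9092",
                         value_deserializer=lambda b: json.loads(b))
producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda v: json.dumps(v).encode())

for record in consumer:
    order = record.value
    try:
        process(order)
    except OutOfStock:
        # Alternative flow: park it where a human operator's UI can see it.
        producer.send("orders.manual-review", order)
    except Exception:
        # Genuine technical error: dead-letter it for an engineer.
        producer.send("orders.dlq", order)
```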
This is a very broad topic, and entire books could be written about it.
I believe you could probably benefit from gaining more understanding of concepts like:
Compensating Transactions Pattern
Try/Cancel/Confirm Pattern
Long Running Transactions
Sagas
The idea behind compensating transactions is that every yin has its yang: if you have one transaction that can place an order, then you can undo it with a transaction that cancels that order. This latter transaction is a compensating transaction. So, if you carry out a number of successful transactions and then one of them fails, you can trace back your steps, compensate every successful transaction you did and, as a result, revert their side effects.
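A bare-bones sketch of that trace-back idea, with stub step functions standing in for real service calls:

```python
# Sketch: each step pairs an action with its compensating action.
# The step functions are stubs; real ones would call your services.

def run_with_compensation(steps):
    done = []                                # compensations to run on failure
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):      # trace back, newest first
                undo()
            raise

def authorize_payment(): print("payment authorized")
def void_authorization(): print("authorization voided")
def reserve_stock():      raise RuntimeError("out of stock")
def release_stock():      print("stock released")

run_with_compensation([
    (authorize_payment, void_authorization),
    (reserve_stock,     release_stock),      # fails -> authorization voided
])
```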
I particularly liked a chapter in the book REST: From Research to Practice. Chapter 23 (Towards Distributed Atomic Transactions over RESTful Services) goes deep into explaining the Try/Cancel/Confirm pattern.
In general terms, it implies that when you do a group of transactions, their side effects are not effective until a transaction coordinator gets a confirmation that they all were successful. For example, if you make a reservation on Expedia and your flight has two legs with different airlines, then one transaction would reserve a flight with American Airlines and another one would reserve a flight with United Airlines. If your second reservation fails, then you want to compensate the first one. But not only that: you want to prevent the first reservation from becoming effective until you have been able to confirm both. So, the initial transaction makes the reservation but keeps its side effects pending confirmation, and the second reservation does the same. Once the transaction coordinator knows everything is reserved, it can send a confirmation message to all parties so that they confirm their reservations. If reservations are not confirmed within a sensible time window, they are automatically reversed by the affected system.
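A toy version of that pending-then-confirm life cycle; the state names and time window are illustrative:

```python
# Sketch of Try/Cancel/Confirm: the Try leaves the reservation pending,
# and it only becomes effective when the coordinator confirms every leg.
import time

class Reservation:
    def __init__(self, ttl=300):
        self.state = "PENDING"               # Try: booked, not yet effective
        self.deadline = time.time() + ttl    # auto-reverse window

    def confirm(self):                       # Confirm
        if self.state == "PENDING" and time.time() < self.deadline:
            self.state = "CONFIRMED"

    def cancel(self):                        # Cancel
        if self.state == "PENDING":
            self.state = "CANCELLED"

def coordinate(legs):
    """Confirm all legs only if every Try succeeded; otherwise cancel all."""
    if all(leg.state == "PENDING" for leg in legs):
        for leg in legs:
            leg.confirm()
    else:
        for leg in legs:
            leg.cancel()

legs = [Reservation(), Reservation()]        # e.g. the AA and UA flights
coordinate(legs)                             # both become CONFIRMED
```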
The book Enterprise Integration Patterns has some basic ideas on how to implement this kind of event coordination (e.g. see the process manager pattern and compare it with the routing slip pattern, which are similar ideas to orchestration vs. choreography in the microservices world).
As you can see, being able to compensate transactions might be complicated depending on how complex your distributed workflow is. The process manager may need to keep track of the state of every step and know when the whole thing needs to be undone. This is pretty much the idea of Sagas in the microservices world.
The book Microservices Patterns has an entire chapter called Managing Transactions with Sagas that explains in detail how to implement this type of solution.
A few other aspects I also typically consider are the following:
Idempotency
I believe that a key to a successful implementation of your service transactions in a distributed system is making them idempotent. Once you can guarantee that a given service is idempotent, you can safely retry it without worrying about causing additional side effects. However, just retrying a failed transaction won't solve your problems.
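A common way to get idempotency is to deduplicate on a message ID. A toy sketch (in a real system the set of processed IDs would live in durable storage and be updated atomically with the side effect):

```python
# Sketch: an idempotent handler ignores a message it has already
# processed, so retries cause no additional side effects.
processed = set()   # durable storage in real life, not process memory

def handle_once(message_id, apply_side_effect):
    if message_id in processed:
        return "duplicate: ignored"
    apply_side_effect()
    processed.add(message_id)   # ideally atomic with the side effect
    return "processed"

handle_once("order-42", lambda: print("charging card"))  # processed
handle_once("order-42", lambda: print("charging card"))  # ignored
```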
Transient vs Persistent Errors
When it comes to retrying a service transaction, you shouldn't retry just because it failed. You must first know why it failed, and depending on the error it may or may not make sense to retry. Some types of errors are transient: for example, if one transaction fails due to a query timeout, it's probably fine to retry, and it will most likely succeed the second time. But if you get a database constraint violation (e.g. because a DBA added a check constraint to a field), then there is no point in retrying that transaction: no matter how many times you try, it will fail.
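A sketch of that distinction; the exception classes below stand in for whatever your driver actually raises:

```python
# Sketch: retry only errors known to be transient; let persistent
# errors (e.g. constraint violations) fail fast.
import time

TRANSIENT = (TimeoutError, ConnectionError)   # map to your driver's types

def run_with_retry(tx, attempts=3, base_delay=0.5):
    for attempt in range(1, attempts + 1):
        try:
            return tx()
        except TRANSIENT:
            if attempt == attempts:
                raise                                     # retries exhausted
            time.sleep(base_delay * 2 ** (attempt - 1))   # backoff, then retry
        # Any other exception propagates immediately: retrying a
        # constraint violation will fail every single time.
```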
Embrace Error as an Alternative Flow
As mentioned at the beginning of my answer, not everything is an error. Some things are just alternative flows.
In those cases of interservice communication (computer-to-computer interactions), when a given step of your workflow fails, you don't necessarily need to undo everything you did in the previous steps. You can just embrace error as part of your workflow: catalog the possible causes of error and make them an alternative flow of events that simply requires human intervention. It is just another step in the full orchestration that requires a person to intervene to make a decision, resolve an inconsistency with the data or just approve which way to go.
For example, maybe when you're processing an order, the payment service fails because you don't have enough funds. There is no point in undoing everything else; all you need is to put the order in a state where a problem solver can address it in the system and, once it is fixed, continue with the rest of the workflow.
Transaction and Data Model State are Key
I have discovered that this type of transactional workflow requires a good design of the different states your model has to go through. As in the Try/Cancel/Confirm pattern, this implies initially applying the side effects without necessarily making the data model available to the users.
For example, when you place an order, maybe you add it to the database in a "Pending" status that does not appear in the UI of the warehouse systems. Once the payment has been confirmed, the order appears in the UI so that a user can finally process its shipment.
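One way to picture that status design (the state and transition names beyond "Pending" are made up for the example):

```python
# Sketch of the order's state machine: "Pending" rows exist in the
# database but are filtered out of the warehouse UI until confirmed.
TRANSITIONS = {
    "Pending":   {"Confirmed", "Cancelled"},  # side effect applied, hidden
    "Confirmed": {"Shipped"},                 # payment settled, visible
}
VISIBLE = {"Confirmed", "Shipped"}            # what the warehouse UI shows

def advance(order, new_status):
    allowed = TRANSITIONS.get(order["status"], set())
    if new_status not in allowed:
        raise ValueError(f"illegal move {order['status']} -> {new_status}")
    order["status"] = new_status

order = {"id": 1, "status": "Pending"}        # invisible to the warehouse
advance(order, "Confirmed")                   # now it appears in the UI
```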
The difficulty here is discovering how to design transaction granularity in such a way that, even if one step of your transaction workflow fails, the system remains in a valid state from which you can resume once the cause of the failure is corrected.
Designing for Distributed Transactional Workflows
So, as you can see, designing a distributed system that works this way is a bit more complicated than individually invoking distributed transactional services. Now every service invocation may fail for a number of reasons and leave your distributed workflow in an inconsistent state; retrying the transaction may not always solve the problem; and your data needs to be modeled like a state machine, such that side effects are applied but not confirmed until the entire orchestration is successful.
That's why the whole thing may need to be designed in a different way than you would typically design a monolithic client-server application. Your users may now be part of the designed solution when it comes to solving conflicts, and you have to contemplate that transactional orchestrations could potentially take hours or even days to complete, depending on how their conflicts are resolved.
As I was originally saying, the topic is way too broad and it would require a more specific question to discuss, perhaps, just one or two of these aspects in detail.
At any rate, I hope this somehow helped you with your investigation.

CQRS / communication between contexts / eventstore / push or pull?

Communication between bounded contexts in a CQRS/ES architecture is achieved through events: context A generates events in response to commands, and these events are then forwarded to context B through an event bus (message queue).
Or... you can store the events in an event store (that belongs to context A).
Or... both (store and forward).
My question is: from context B, should I pull the events from context A's event store, or simply consume the events pushed through the event bus?
I'm leaning toward the pull approach, because then context B can do some catching up. In contrast, with the push approach, context B might be unaware of events that were delivered while B was experiencing downtime.
So... does that mean that when we have an event store, we can simply forget about the message queue (it seems redundant)?
Or am I missing something here?
You'll want to review Consume event stream without Pub/Sub
At the DDD Europe conference, I realized that the speakers I talked with where (sic) avoiding Pub/Sub whenever possible.
The discussion that follows may have value. TL;DR: not many fans of pub/sub there.
Konrad Garus on Push or Pull?, describing the Pull design:
In the latter (and simpler) design, they only spread the information that a new event has been saved, along with its sequential ID (so that all projections can estimate how much behind they are). When awakened, the executor can continue along its normal path, starting with querying the event store.
Why? Because handling events coming from a single source is easier, but more importantly because a DB-backed event store trivially guarantees ordering and has no issues with lost or duplicate messages. Querying the database is very fast, given that we’re reading a single table sequentially by primary key, and most of the time the data is in RAM cache anyway. The bottleneck is in the projection thread updating its read model database.
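A minimal sketch of that pull loop, assuming an events table keyed by a sequential ID (the schema and function names are assumptions):

```python
# Sketch of the pull design: the projection is poked that "something new
# exists", then reads forward from its own checkpoint, in order, no gaps.
import sqlite3

db = sqlite3.connect("eventstore.db")
checkpoint = 0   # persisted alongside the read model in real life

def apply_to_read_model(payload):
    ...  # update the projection's own database

def catch_up():
    global checkpoint
    rows = db.execute(
        "SELECT seq, payload FROM events WHERE seq > ? ORDER BY seq",
        (checkpoint,))
    for seq, payload in rows:
        apply_to_read_model(payload)
        checkpoint = seq   # we also know exactly how far behind we are
```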
In the large, it comes down to this: when people are thinking about event sourcing, they are really thinking about histories, rather than events in isolation. If what you really want is an ordered sequence of events with no gaps, querying the authority for that sequence is much better than trying to reconstruct it from a bunch of disjoint event messages.
But - once you decide to do that, then suddenly the history, and all of the events that appear within it, becomes part of the api of context A. What happens when team A decides that a different event store implementation is more suitable? Can they just roll out a new version of their own services, or do we need a grand outage because every consumer also has to get updated?
Similarly, what happens if we decide to refactor context A into context C and context D? Again, do we have to screw around in context B to get the data we need?
Maybe the real problem is that context B is coupled to the histories in context A, and those histories should really be private? Should context B be accessing context A's data, or should it instead be delegating that work to context A's capabilities?
Udi Dahan's essays on SOA may jump-start your thinking in that direction.

Message queuing solution for millions of topics

I'm thinking about a system that will notify multiple consumers about events happening to a population of objects. Every subscriber should be able to subscribe to events happening to zero or more of the objects, and multiple subscribers should be able to receive information about events happening to a single object.
I think some message queuing system would be appropriate in this case, but I'm not sure how to handle the fact that I'll have millions of objects; using a separate topic for every object does not sound good [or is it just fine?].
Can you please suggest an approach I should take, and maybe even some open source message queuing system that would be reasonable?
A few more details:
there will be thousands of subscribers [i.e. not that many],
subscribers will subscribe to tens or hundreds of objects each,
there will be ~5-20 million objects,
the events themselves don't have to carry any message body; the information that the object was changed is enough,
the vast majority of objects will never be subscribed to,
events occur at a maximum rate of a few hundred per second,
ideally the server should run under Linux and be able to integrate with the rest of the ecosystem via HTTP long-poll [using Node.js? continuations under Jetty?].
Thanks in advance for your feedback, and sorry for the somewhat vague question!
I can highly recommend RabbitMQ. I have used it in a couple of projects before and, from my experience, it is very reliable and offers a wide range of configurations. Basically, RabbitMQ is an open-source (Mozilla Public License) message broker that implements the Advanced Message Queuing Protocol (AMQP) standard.
As documented on the RabbitMQ web-site:
RabbitMQ can potentially run on any platform that Erlang supports, from embedded systems to multi-core clusters and cloud-based servers.
... meaning that an operating system like Linux is supported.
There is a library for node.js here: https://github.com/squaremo/rabbit.js
It comes with an HTTP-based API for management and monitoring of the RabbitMQ server, including a command-line tool and a browser-based user interface as well; see: http://www.rabbitmq.com/management.html.
In the projects I have worked on, I have communicated with RabbitMQ using C# and two different wrappers, EasyNetQ and Burrow.NET. Both are excellent wrappers for RabbitMQ, but I ended up liking Burrow.NET the most, as it is easier and more obvious to work with (it doesn't do a lot of magic under the hood) and provides good flexibility to inject loggers, serializers, etc.
I have never worked with the number of objects you are going to work with; I have worked with thousands (not millions). However, no matter how many objects I have been playing around with, RabbitMQ has always been really stable and has never been the source of errors in the system.
So to sum up: RabbitMQ is simple to use and set up, supports AMQP, can be managed via HTTP and, what I like the most, it's rock solid.
Break the topics up to carry specific events, e.g. "object updated", "object deleted"... That way clients only have to subscribe to a finite number of event-based topics they are interested in.
Inject headers into your messages when you publish them, and put intelligence into the clients to use those headers as message selectors. For example, the client knows the list of objects it is interested in; say you identify each object by an "id", then the id can be a header, and the client uses the id header to determine whether it is interested in the message.
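With RabbitMQ, for instance, this idea maps onto a headers exchange; a sketch with the pika client, where the exchange name and the id header are arbitrary:

```python
# Sketch of header-based selection using a RabbitMQ headers exchange.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="object-updated", exchange_type="headers")

# Subscriber: one binding per object ID it cares about.
q = ch.queue_declare(queue="", exclusive=True).method.queue
for object_id in ["42", "1337"]:
    ch.queue_bind(queue=q, exchange="object-updated",
                  arguments={"x-match": "all", "id": object_id})

# Publisher: stamp the object's ID into the message headers.
ch.basic_publish(exchange="object-updated", routing_key="",
                 properties=pika.BasicProperties(headers={"id": "42"}),
                 body=b"object 42 changed")
```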
You may also want to consider guaranteed delivery, to make sure that the client will receive the message even if it goes offline and comes back later.
The options I would recommend off the top of my head are ActiveMQ, RabbitMQ and Redis Pub/Sub (I haven't really worked with Redis pub/sub, so please do your due diligence).
Finally, here are some performance benchmarks for RabbitMQ and Redis.
Just saw that you only have a few hundred messages getting pushed out per second; this is not a big deal for ActiveMQ. I have been using AMQ on a system that processes 240 messages per second, and it just works fine. I do use a thread pool of workers to process the messages asynchronously, though. Look at a framework like Akka if you are in Java land; if not, stick with Node.js and the cool ecosystem around it.
If it has to be open source, I'd go for ActiveMQ, plus an application server to provide the JMS functionality for topics; it also has Ajax support so you can access the topics from your client.
So you would use the JMS infrastructure to publish the topics for the objects, and you can create topics as you need them.
Besides, by using a Java application server you may be able to take advantage of clustering, load balancing and other high-availability features (obviously depending on the selected product).
Hope that helps!!!
Since your messages are very small, you might want to consider MQTT, which is designed for small devices, although it works fine on powerful devices as well. The key consideration is the low overhead: basically a 2-byte header for a small message. Given your volume, you probably can't use just any simple or open source MQTT server; you probably need a heavy-duty dedicated appliance like MessageSight to handle it.
Some more details on your application would certainly help. Also, you don't mention security at all; I assume you must have some needs in this area.
I'm not sure about your work environment, but here are my two cents. Can you identify each object with a unique ID in your system? If so, you can have a topic per event type, e.g. an object-deletion topic, an object-update topic and so on. Whenever the corresponding event happens to an object, its ID is published to that topic. This limits the number of topics you need.
The second part of your problem is that different subscribers want to subscribe to different objects, so not all subscribers are interested in the events of all objects. This part maps to the message selector (filtering) mechanism provided by the messaging framework. Basically, you need to work out on what basis a subscriber is interested in a particular object and use that as the filtering criterion; it could be anything: object type, object state, etc. Ultimately your system would consist of one topic per event type, with publishers sending {object-type: object-id} information and subscribers subscribing to any topic with a filtering criterion.
If the above approach works for you, you can use any messaging solution: ActiveMQ, WMQ, RabbitMQ.

Which message queue can handle private queues that survive subscriber disconnects?

I have some requirements for a system in need of a message queue:
The subscribers shall get individual queues.
The individual queues shall NOT be deleted when the subscriber disconnects
The subscriber shall be able to reconnect to its own queue if it loses the connection
Only the subscriber shall be able to use the queue assigned to it
Nice to have: the queues survive a server restart
Can RabbitMQ be used to implement this, and in that case how?
I have only recently started using Rabbit but I believe your requirements can be addressed fairly easily.
1) I have implemented specific queues for individual subscribers by having the subscriber declare the queue (and related routing key) using its machine name as part of the queue name. The exchange takes care of routing messages appropriately by way of the binding/routing keys. In my case, all subscribers get a copy of the same message posted by the publisher and an arbitrary number of subscribers can declare their own queues and start receiving messages.
2) That's pretty much standard. If you declare a queue, it will remain on the broker, and if it is set as durable it will survive broker restarts. In any case, your subscriber should call queue.Declare() at startup to ensure that the queue exists; when the subscriber disconnects, the queue will remain.
3) If the queue is there and a subscriber is listening to that queue by name then there's no reason why it shouldn't be able to reconnect.
4) I haven't really delved in to the security aspects of Rabbit yet. There may be a means of securing individual queues though I'll let someone else comment on this as I'm no authority.
5) See (2). Messages will also survive a restart if marked as persistent, as they are then written to disk. This incurs a performance penalty, as there's disk I/O, but that's kind of what you'd expect. (See the sketch below.)
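To make points 1, 2 and 5 concrete, here is a minimal sketch with the Python pika client (the answer above is client-agnostic; the queue and exchange names here are arbitrary):

```python
# Sketch: a named, durable, per-subscriber queue that outlives both
# subscriber disconnects and broker restarts.
import socket
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="broadcast", exchange_type="fanout",
                    durable=True)

# (1) Queue named after the machine; (2) durable and not auto-deleted,
# so it stays on the broker when the subscriber goes away.
queue = f"subscriber.{socket.gethostname()}"
ch.queue_declare(queue=queue, durable=True, auto_delete=False)
ch.queue_bind(queue=queue, exchange="broadcast")

# (5) Persistent messages (delivery_mode=2) are written to disk and
# survive a broker restart, at the cost of disk I/O.
ch.basic_publish(exchange="broadcast", routing_key="", body=b"hello",
                 properties=pika.BasicProperties(delivery_mode=2))

# (3) Reconnecting is simply re-declaring and consuming the same queue
# by name; any pending messages are still there.
```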
So basically, yes, Rabbit can do what you ask. In terms of 'how', there are varying degrees of 'how'. I'll happily try to provide you with code-level answers should you have trouble implementing any of the above. In the meantime, and if you haven't already done so, I suggest reading through the docs:
http://www.rabbitmq.com/documentation.html
HTH. Steve