Does the 'publish-subscribe' messaging pattern require as many queues as there are consumers?

Say I need to publish the price for a given stock ticker every time the price changes, and on the other end there are subscribers (consumers) that need to consume that price.
This is a typical pub-sub pattern (instead of point-to-point). Is it typical to set up N queues if there are N consumers, and have the publisher publish the same message to each of the N queues?
I don't see how this can be accomplished with only one queue, as each message will be gone as soon as a single consumer picks it up (which is a point-to-point model).

Yes, a message-passing system generally needs one queue per consumer.
If every consumer has to observe every change event, you need a queue per consumer.
You publish the event to a message broker. Each consumer creates its own queue and binds it to your event exchange. Only this way does every consumer receive and process every event.
Or
You create a single consumer that consumes the event and sends a notification request to every customer. This approach is not practical, because you have to know every customer and the endpoint address where it accepts notifications. Moreover, you have to handle errors during that integration, and so on.
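For the first approach, here is a minimal sketch using RabbitMQ and the pika Python client (the exchange name and payload are illustrative assumptions, not from the question): the publisher sends each price update to a fanout exchange, and every consumer binds its own private queue to that exchange, so each consumer receives every message.

```python
import json
import pika

EXCHANGE = "stock.prices"  # hypothetical exchange name

# --- Publisher: send every price change to a fanout exchange ---
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange=EXCHANGE, exchange_type="fanout")
ch.basic_publish(exchange=EXCHANGE, routing_key="",
                 body=json.dumps({"ticker": "ACME", "price": 101.5}))
conn.close()

# --- Consumer: each subscriber declares its own queue and binds it ---
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange=EXCHANGE, exchange_type="fanout")
# exclusive=True gives this consumer a private, auto-named queue
result = ch.queue_declare(queue="", exclusive=True)
ch.queue_bind(exchange=EXCHANGE, queue=result.method.queue)

def on_price(ch_, method, properties, body):
    print("price update:", json.loads(body))

ch.basic_consume(queue=result.method.queue,
                 on_message_callback=on_price, auto_ack=True)
ch.start_consuming()
```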

Related

How do we maintain a consistent-read promise to clients when using a fallback queue?

In my company, we are using the Event Sourcing pattern to implement a store for all changes to the price of a booking. Across the company, different services might try to append events to a booking identified by a booking code.
We use DynamoDB to store the events, and it does support consistent reads. The issue arises when a booking is first made and the very first event is created for a booking code: if we fail to save it to DynamoDB for whatever reason, we put the event into a fallback queue and simply return success to the client to acknowledge that we received the event. The client can then move on with its business logic and, in turn, show a success message to end users. The goal is to never block booking creation.
The problem is that, for a short period while the event is still in the fallback queue, if clients try to fetch the event using the booking code they will get back an error, even though we told them earlier that the write of the first event succeeded. In effect, we're breaking the consistent-read promise.
I'm trying to find a way to improve the design and keep this promise while staying out of the way of the main booking flow (i.e. not blocking the booking on failure).
I'd be very grateful if someone could throw me an idea to look into.
One solution might be to make the fallback queue durable (i.e. reading from it doesn't remove elements) with some retention period (broadly: the maximum allowable time between the initial booking and persisting the creation event to DynamoDB), and, instead of treating it as a fallback queue, make it the actual source of truth for which bookings have been created.
Services can then consume this queue. One of them is responsible for writing the initial creation event to DynamoDB (which is longer-lived than the queue); if that service falls behind and approaches the retention limit, that's an operational emergency, but you've bought yourself time. Another of these services maintains an in-memory view, built from the queue, of created bookings which haven't yet made it to DynamoDB.
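A hedged sketch of how the read path might honour the promise under that design (the table name, key schema, and field names are assumptions for illustration): the reader checks DynamoDB with a consistent read first, then falls back to the in-memory view of bookings whose creation events are still sitting in the durable queue.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("booking-events")  # assumed table name

# In-memory view of creation events that are still in the durable queue,
# kept up to date by a consumer of that queue: {booking_code: event}
pending_creations = {}

def get_events(booking_code):
    """Return all events for a booking, honouring the consistent-read promise."""
    resp = table.query(
        KeyConditionExpression=Key("booking_code").eq(booking_code),
        ConsistentRead=True,
    )
    items = resp["Items"]
    if items:
        return items
    # Not in DynamoDB yet: the creation event may still be in the queue.
    pending = pending_creations.get(booking_code)
    if pending is not None:
        return [pending]
    raise KeyError(f"unknown booking code: {booking_code}")
```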

Message bus vs. Service bus vs. Event hub vs Event grid

I'm learning about messaging systems and got confused by the terminology.
All the messaging systems below provide loose coupling between services, with different sets of features.
queue - FIFO, pull-based, one consumer per queue but any number of producers?
message bus - pub/sub model, any number of consumers and any number of producers processing messages? Is Azure Service Bus an implementation of a message bus?
event bus - pub/sub model, any number of consumers and any number of producers processing events?
Do people use 'message bus' and 'event bus' interchangeably as far as terminology goes?
What is the difference between events and messages? Are they just synonyms in this context?
event hub - pub/sub model, partitions, replay, consumers can store events in external storage or do near-real-time data analysis. What exactly is an event hub?
event grid - it can be used as a downstream service of an event hub. What exactly does it do that an event hub doesn't?
Can someone provide some historical context for how each technology evolved into the next, each tied to some practical use cases?
I've found message bus vs. message queue helpful
Even though all these services deal with the transfer of data from a source to a target, and might seem similar under the umbrella of messaging services, they do differ in their intent.
High-level definition:
Azure Event Grid – event-driven publish-subscribe model (think reactive programming)
Azure Event Hubs – multiple-source big-data streaming pipeline (think telemetry data)
Azure Service Bus – traditional enterprise broker messaging system (similar to Azure Queue but with many advanced features depending on the use case; see the full comparison)
Difference between Event Grid & Event Hubs
Event Grid doesn't guarantee the order of events, but Event Hubs uses partitions, which are ordered sequences, so it can maintain the order of events within the same partition.
Event Hubs only accepts data at its ingestion endpoints and doesn't provide a mechanism for sending data back to publishers. Event Grid, on the other hand, sends HTTP requests to notify subscribers of events that happen in publishers.
Event Grid can trigger an Azure Function. In the case of Event Hubs, the Azure Function needs to pull and process events.
Event Grid is a distribution system, not a queueing mechanism. If an event is pushed in, it gets pushed out immediately, and if it doesn't get handled, it's gone forever, unless the undelivered events are sent to a storage account. This process is known as dead-lettering.
In Event Hubs, the data can be kept for up to seven days and then replayed. This gives us the ability to resume from a certain point, or to restart from an older point in time and reprocess events when we need to.
Difference between Event Hubs & Service Bus
To an external publisher or receiver, Service Bus and Event Hubs can look very similar, and this is what makes it difficult to understand the differences between the two and when to use what.
Event Hubs focuses on event streaming, whereas Service Bus is more of a traditional messaging broker.
Service Bus is used as the backbone that connects applications running in the cloud to other applications or services and transfers data between them, whereas Event Hubs is more concerned with receiving massive volumes of data with high throughput and low latency.
Event Hubs decouples multiple event producers from event receivers, whereas Service Bus aims to decouple applications.
Service Bus messaging supports a 'time to live' message property, whereas Event Hubs has a default retention period of seven days.
Service Bus has the concept of a message session: it allows relating messages based on their session-id property, whereas Event Hubs does not.
With Service Bus, messages are pulled out by the receiver and cannot be processed again, whereas with Event Hubs a message can be ingested by multiple receivers.
Service Bus uses the terminology of queues and topics, whereas Event Hubs uses the terminology of partitions.
Use this loose general rule of thumb.
SOMETHING HAS HAPPENED – Event Hubs
DO SOMETHING or GIVE ME SOMETHING – Service Bus
As Louie Almeda stated, you may find this link to the official Azure documentation useful.
I found this comparison from Azure docs extremely helpful. Here's the key distinction between events and messages.
Event vs. message services
There's an important distinction to note between services that deliver an event and services that deliver a message.
Event
An event is a lightweight notification of a condition or a state change. The publisher of the event has no expectation about how the event is handled. The consumer of the event decides what to do with the notification. Events can be discrete units or part of a series.
Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn't have the data that triggered the event. For example, an event notifies consumers that a file was created. It may have general information about the file, but it doesn't have the file itself. Discrete events are ideal for serverless solutions that need to scale.
Series events report a condition and are analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
Message
A message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline. The publisher of the message has an expectation about how the consumer handles the message. A contract exists between the two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from that data and send a response when the work is done.
A comparison of those different services is also discussed there, so be sure to check it out.
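To make the distinction concrete, here is a small illustrative sketch (the field names and values are made up for the example, not taken from the Azure docs): the event only announces that a file was created and where it lives, while the message carries the raw data the consumer is expected to act on.

```python
# An event: a lightweight notification that something happened.
# It points at the file but does not carry the file contents.
file_created_event = {
    "eventType": "FileCreated",          # what happened
    "subject": "/uploads/report.csv",    # where it happened
    "eventTime": "2023-04-01T12:00:00Z",
}

# A message: raw data plus an expectation about how it is handled.
# The consumer is expected to create the file from this payload.
create_file_message = {
    "command": "CreateFile",
    "fileName": "report.csv",
    "contents": "col_a,col_b\n1,2\n",     # the data that triggered the pipeline
    "replyTo": "file-service-responses",  # the contract: respond when done
}
```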
I agree with your remarks about overloaded terms, especially with cloud-service marketing jargon....
Historically, events and messages had more distinct meanings:
- 'event' was the term used to refer to communication within the same process, whereas
- 'message' referred to communication across different processes.
Regarding the "bus", I can give you some historical background, since I trained as a sound engineer. In a music mixer you also have a "bus" and "routing" for mixing signals; in the case of a mixer, we are talking about electrical signals that are either in the mix or not.
Regarding messaging systems, think of "bus", "hub" and "grid" as synonyms! They are all fancy words for the same thing: they try to express some kind of transportation system that includes some kind of routing, because you always have producers and consumers, and this can be an N:M relation, depending on the use case.
A queue is typically a bit different, but its effect can be the same. A queue is something where things wait in line, like a queue of people waiting to buy something (theatre tickets...).
Nowadays everything is digital, which in essence means it is countable. That's how "messages" came into existence. A music mixer traditionally mixes analog signals, which are not countable but continuous, so the information would be, for example, spoken voices or any kind of sound. Today a "message" means some kind of information package, which is unique and countable, so it is a "thing" you can add to and remove from a queue, or send to a hub for consumers to consume.
Don't worry, you'll get used to those terms! I hope I was able to give you an idea.

Electing a new leader in distributed systems

I have the following problem:
I have a distributed system where I need to reach a consensus in one way or another when choosing a leader.
I have a group of players that communicate with each other via messages. In order for these players to progress from one stage to another, someone has to keep track of their progress. Currently, there are two types of players:
leader --- when it receives N-1 DONE messages (for N-1 players), it is responsible for broadcasting the state change to all other users
follower --- it is responsible for receiving the leader's messages and updating its internal state machine.
Each player receives messages from two pipelines:
- Status pipeline - it receives an array of the form [user1, user2, user3, ..., userN], where each element is a user that is online.
- Message pipeline - push-based notifications. Follower users post messages here saying that they are ready for the next step. The leader keeps track of the DONE counter, and when the threshold is reached it broadcasts ADVANCE to the next step.
For a better idea, I included a picture:
I do not know how to deal with leader re-election. If the leader disconnects (this can be detected with a timeout), how can the other nodes decide who the next leader is? And if they pick randomly, should the current leader be stored in the database? They only exchange messages; there's nothing on the server like a global variable or similar.
What you basically need is to implement both a two-phase commit and a leader election recipe. You can either implement them on your own (two-phase commit is well documented, and yes, you would need shared storage), or, if you have the flexibility to use a distributed open-source coordination service, ZooKeeper would be your best bet. Have a look at the article below on the Apache ZooKeeper page, where both of the recipes you need are discussed. Hope this helps.
https://zookeeper.apache.org/doc/current/recipes.html#sc_recipes_twoPhasedCommit
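If ZooKeeper is an option, a minimal leader-election sketch with the kazoo Python client might look like the following (the election path and identifier are assumptions for the example). Kazoo's Election recipe blocks until the caller wins the election and then runs the supplied function, so any player that outlives the current leader can take over automatically.

```python
import socket
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

def act_as_leader():
    # Only the elected leader runs this: count DONE messages and
    # broadcast ADVANCE when all N-1 followers have reported in.
    print("I am the leader now, coordinating stage changes...")

# Each player contends under the same election path; the identifier
# is only for diagnostics (e.g. seeing who the current leader is).
election = zk.Election("/game/leader-election", socket.gethostname())

# Blocks until this node wins; if the leader dies, ZooKeeper removes its
# ephemeral node and one of the remaining players is elected automatically.
election.run(act_as_leader)
```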

Locks and batch fetch messages with RabbitMq

I'm trying to use RabbitMQ in a somewhat unconventional way (though at this point I can pick any other message-queue implementation if needed). Instead of letting Rabbit push messages to my consumers, a consumer connects to a queue and fetches a batch of N messages (during which it consumes some and possibly rejects some), after which it jumps to another queue, and so on. This is done for redundancy: if some consumers crash, all messages are guaranteed to be consumed by some other consumer.
The problem is that I have multiple consumers and I don't want them to compete over the same queue. Is there a way to guarantee a lock on a queue? If not, can I at least make sure that if two consumers are connected to the same queue they don't read the same message? Transactions might help me to some degree, but I've heard talk that they'll be removed from RabbitMQ.
Other architectural suggestions are welcomed too.
Thanks!
EDIT:
As pointed out in the comments, there's a particularity in how I need to process the messages. They only make sense taken in groups, and there's a high probability that related messages are clumped together in a queue. If, for example, I pull a batch of 100 messages, there's a high probability that I'll be able to do something with messages 1-3, 4-5, 6-10, etc. If I fail to find a group for some messages, I'll resubmit them to the queue. A work queue wouldn't work because it would spread messages from the same group across multiple workers that wouldn't know what to do with them.
Have you had a look at this free online book on Enterprise Integration Patterns?
It sounds like you really need a workflow where you have a batcher component before the messages get to your workers. With RabbitMQ there are two ways to do that. Either use an exchange type (and message format) that can do the batching for you, or have one queue, and a worker that sorts out batches and places each batch on its own queue. The batcher should probably also send a "batch ready" message to a control queue so that a worker can discover the existence of the new batch queue. Once the batch is processed the worker could delete the batch queue.
If you have control over the message format, you might be able to get RabbitMQ to do the batching implicitly in a couple of ways. With a topic exchange, you could make sure that the routing key on each message is of the format work.batchid.something and then a worker that learns of the existence of batch xxyzz would use a binding key like #.xxyzz.# to only consume those messages. No republishing needed.
The other way is to include a batch id in a header and use the newer headers exchange type. Of course you can also implement your own custom exchange types if you are willing to write a small amount of Erlang code.
I do recommend checking the book though, because it gives a better overview of messaging architecture than the typical worker queue concept that most people start with.
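A rough sketch of the topic-exchange approach described above, using the pika Python client (the exchange name, batch ID, and routing-key layout are assumptions for illustration): the publisher encodes the batch ID in the routing key, and a worker that has learned about batch xxyzz binds its queue with a matching wildcard key so it only sees that batch.

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="work", exchange_type="topic")

# Worker side: a queue bound for batch xxyzz only. The binding must exist
# before messages are published, otherwise they are not routed anywhere.
result = ch.queue_declare(queue="", exclusive=True)
batch_queue = result.method.queue
ch.queue_bind(exchange="work", queue=batch_queue, routing_key="#.xxyzz.#")

# Publisher side: routing key of the form work.<batchid>.<something>
ch.basic_publish(exchange="work",
                 routing_key="work.xxyzz.item42",
                 body=b"payload for batch xxyzz")

# Pull one message from the batch queue (sketch of the fetch-a-batch style).
method, properties, body = ch.basic_get(queue=batch_queue, auto_ack=False)
if method is not None:
    print("got batch message:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)
```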
Have your consumers pull from just one queue. They are guaranteed not to share messages (Rabbit will round-robin the messages among the currently connected consumers), and RabbitMQ is heavily optimized for that exact usage pattern.
It's ready-to-use, out of the box. In the RabbitMQ docs it's called the Work Queue model. One queue, multiple consumers, with none of them sharing anything. It sounds like what you need.
You can set a channel/consumer-level prefetch count to consume messages in batches. To resubmit messages, use the basic.reject AMQP method; rejected messages can either be requeued or forwarded to a dead-letter queue. Multiple consumers pulling messages from the same queue is not an issue, as the AMQP basic.get method is synchronized to handle concurrent consumers.
https://groups.google.com/forum/#!topic/rabbitmq-users/hJ8f5du-GCA
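A minimal sketch of that approach with the pika Python client (the queue name, batch size, and grouping check are assumptions): basic_qos limits how many unacknowledged messages the broker will hand this consumer at once, and basic_reject with requeue=True puts a message back on the queue for another consumer or a later pass.

```python
import pika

BATCH_SIZE = 100  # assumed batch size

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="work")

# The broker will hand this consumer at most BATCH_SIZE unacked messages.
ch.basic_qos(prefetch_count=BATCH_SIZE)

def can_process(body):
    # Placeholder for the real "does this message fit a group?" check.
    return True

def handle(ch_, method, properties, body):
    if can_process(body):
        ch_.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # Put the message back so another consumer (or a later pass) gets it.
        ch_.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

ch.basic_consume(queue="work", on_message_callback=handle)
ch.start_consuming()
```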

Why is queue to queue transfer not supported in MQFTE Monitors

Why is queue-to-queue transfer not supported in MQFTE monitors? I have set up a monitor for a queue, and when any message is dropped onto the queue, a transfer from queue to queue must be triggered. But MQFTE doesn't have this option. Is there any other alternative?
I can't really answer the question as written, i.e. "why" it works the way it does. I can only speculate that, because FTE is written to move files, there are file-name metadata and semantics in the queue-to-file and file-to-queue transfers that don't make sense for queue-to-queue.
What you can do though is write up your use case in detail and submit a formal requirement. Then at least you have a chance to see that functionality in a future release.
In the meantime, what you are trying to do sounds like a job for triggering. WMQ has the ability to fire an external process on the arrival of a message. Given your requirements, I'd trigger an ANT job to initiate the transfer when the message arrives on the queue. If the queue-to-queue transfer needs to be recorded in the FTE logs, the processing flow would be something like this:
Message arrives on the queue
Trigger monitor starts job
Job browses message on the queue
Job passes message ID to an ANT task
ANT task moves files.
A pre- or post- transfer task uses SupportPac MA01 to move the message in the queue based on MsgID.
Triggered program loops over any messages in the queue and initiates a separate ANT task for each until the queue is empty.
If the queue-to-queue transfer doesn't need to be recorded in the FTE logs, the flow would be similar except that the triggered job would consume the message and move it straight away instead of passing it to the ANT task.