We use SQS for queueing use cases at our company. All developers connect to the same queue for local development. If I produce some messages for testing in local development, they can end up being consumed by another person's locally running consumer if that person has the app running at the same time.
How do you make sure that messages produced by one person don't get lost by being consumed on another person's locally running consumer? Is using a different queue for each person the only solution? I'm wondering what the industry standard is for avoiding this.
This is very open-ended IMO. Would recommend adding some context as to how you're using SQS.
But from what I could understand:
Yes, I would recommend creating queues per "developer"
OR
Although not elegant, you could add an SQS message attribute (this is metadata separate from the message body) containing the developer's username.
Each developer should then only process a message if it's meant for them. Arguably, you could also add a flag in the message body itself, but I'm not sure about the constraints on your message format. Message attributes are meant for exactly these situations, where you want to know whether you really need to process a message before even parsing the message body.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html#sqs-message-attributes
You'll also have to increase maxReceives to a high number (so that the message does not move to the dead-letter queue, if you have one configured). This is not bulletproof; it only decreases the chances of your messages being deleted by someone else. If, say, 10 people receive a message and don't delete it because the username isn't theirs, and maxReceives is 8, the message will still move to the DLQ and cause unnecessary confusion.
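To make the idea concrete, here is a rough sketch using Python and boto3. The queue URL, the attribute name "developer", and the usernames are placeholders, not anything from your setup:

```python
# Rough sketch of the attribute-filtering idea using boto3 (Python).
# Queue URL, attribute name, and usernames are illustrative placeholders.
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/shared-dev-queue"
ME = os.environ.get("DEV_USERNAME", "alice")

def send_for_me(body: str) -> None:
    """Publish a message tagged with the producing developer's username."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        MessageAttributes={
            "developer": {"DataType": "String", "StringValue": ME},
        },
    )

def consume_only_mine() -> None:
    """Receive messages, but only process (and delete) the ones tagged for me."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MessageAttributeNames=["developer"],
        MaxNumberOfMessages=10,
        WaitTimeSeconds=10,
    )
    for msg in resp.get("Messages", []):
        owner = msg.get("MessageAttributes", {}).get("developer", {}).get("StringValue")
        if owner != ME:
            # Not ours: leave it in the queue so its owner can receive it
            # again after the visibility timeout expires.
            continue
        print("processing:", msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

Messages a consumer skips stay in the queue and become visible again after the visibility timeout, which is exactly why the maxReceives setting above matters.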
I have been fighting the same very simple problem with NServiceBus all day today. The problem is that there is lots of documentation on how to change the configuration, but almost nothing that helps me to know what configuration I need.
There are sample applications, and they work, but there is nothing explaining how they work, what limitations they have, or how to do something just a little bit different than the sample. The sample applications also present a "Hello world" type simplicity, and in any real application you need something different from the sample application, but again there is no help on how to make these changes, or the implications of configuration choices.
Of all the things that are difficult to guess from the documentation, it is the relationship between the endpoint name, the UnicastBusConfig mappings, and pub/sub persistence that is causing the most frustration right now.
Is the endpoint name the name of the MSMQ queue? Does that mean that every application has only one input queue for all message types? Does adding a mapping in UnicastBusConfig cause a subscription message to be sent to the publisher, or does it add a subscription record in subscription DB? Why can't you add the same message type more than once to UnicastBusConfig? Why can't I just subscribe to messages of a certain type without having to know which server they come from?
For someone that understands NServiceBus this probably seems so simple that it wasn't worth documenting, but for someone coming to this for the first time, it's the very simple stuff that's the most difficult to infer from the morass of low level detail.
Is the endpoint name the name of the MSMQ queue?
Yes.
Does that mean that every application has only one input queue for all message types?
Yes. Each endpoint has a single queue associated with it, so all messages for that endpoint go through the same queue.
Does adding a mapping in UnicastBusConfig cause a subscription message to be sent to the publisher, or does it add a subscription record in subscription DB?
Neither really. The UnicastBusConfig section is for setting up the relationship between types (or assemblies) and endpoints. So it doesn't actually cause a subscription to be set up (per se), but it tells the framework where the messages will be coming from (and therefore how to subscribe to them).
The actual subscription gets created when the system starts up and NSB finds a handler for a particular type of message that matches a section in the UnicastBusConfig (assuming auto-subscribing is turned on).
This also works for sending Commands: the config section lets the framework know to which endpoint to Send() a Command.
Why can't you add the same message type more than once to UnicastBusConfig?
Because a Command can have only one (logical) endpoint that handles it, and an Event can have only one (logical) endpoint that publishes it.
Why can't I just subscribe to messages of a certain type without having to know which server they come from?
This question is a bit more difficult to answer definitively, as it gets into the philosophy of having a central broker (hub and spoke) vs. bus-style architecture.
But in a nutshell, something, somewhere needs to know how to find the publisher in order to subscribe to it. Because NServiceBus does not have a central broker or routing table, it is left to the client to be configured with knowledge of the endpoints it consumes.
You might want to check out the NServiceBus documentation at http://docs.particular.net/nservicebus/; it's quite comprehensive and should provide answers to most of your questions.
When using a distributed and scalable architecture, eventual consistency is often a requirement.
How do you deal with this eventual consistency in the user interface?
Users are used to clicking Save and seeing the result instantaneously... with eventual consistency that's not possible.
How do you design the GUI for such scenarios?
Please note the question applies both for desktop applications and web applications.
PS: I'm working with the Microsoft platform, but I imagine the question applies to any technology...
A task-based UI fits this model well. You create and execute tasks from the UI, and you can also have something like a task status monitor to show the user when a task has executed.
Another option is to use some kind of polling from the client: you send the command and poll from the client until the command has completed and the new data is available (see the sketch after this answer). In some cases there will be a delay between when the user presses Save and when they see the new record, but in most cases it should feel almost synchronous.
Another (good?) option is to assume/design commands that don't fail. This is not trivial, but you can have a cache on the client, add the data from the command to that cache, and display it to the user even before the command has been executed. If the command fails in some unexpected situation, then just design a good "we are sorry" message to apologize for having misled the user for a few seconds.
You can also combine the methods above.
Usually eventual consistency is more of a business/domain problem, and you should have your domain experts handle it.
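As a rough illustration of the polling approach mentioned above, here is a minimal client-side sketch in Python. The /commands endpoints and the returned fields are hypothetical; the point is only the "acknowledge, then poll until the read model catches up" shape:

```python
# Minimal sketch of "send command, then poll until the read model catches up".
# The endpoints (/commands, /commands/<id>) are hypothetical placeholders.
import time
import requests

BASE_URL = "https://api.example.com"

def save_and_wait(payload: dict, timeout_s: float = 5.0, interval_s: float = 0.25) -> dict:
    # Issue the command; the server acknowledges it and returns an id.
    resp = requests.post(f"{BASE_URL}/commands", json=payload)
    resp.raise_for_status()
    command_id = resp.json()["id"]

    # Poll until the command is reported as completed (or we give up).
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE_URL}/commands/{command_id}").json()
        if status["state"] == "completed":
            return status["result"]          # fresh data from the read model
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "command failed"))
        time.sleep(interval_s)
    raise TimeoutError("read model did not catch up; show a 'still processing' UI state")
```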
I think the other answers mix together CQRS in general and eventual consistency in particular. A task-based UI is very suitable for CQRS, but it does not resolve the issue of an eventually consistent read model.
First, I would like to challenge your statement:
Users are used to clicking Save and seeing the result instantaneously... with eventual consistency that's not possible.
What do you mean by this? Why is it not possible to see the result immediately? I think the issue here is your definition of result.
The result of any action is that the action has been performed. There are numerous ways to show this! It depends on what kind of action you want to complete. Examples:
Send an email: if the user has entered a correct email address, it is almost guaranteed that the action will complete successfully. To prevent unexpected failures you might use durable queues, since this kind of action does not need to be done synchronously. So you just say "email sent". Typically you see this kind of response when you ask to reset your password.
Update some information in a user profile: after you have validated the new data on the client, the command will most probably succeed too, since the only thing that could go wrong is a database error (if you use a database). Again, even this can be mitigated by using durable queues. In this case you just show the updated field in the same form. A good practice for SPAs is to have a comprehensive data store on the client side, like Redux does. In that case you can safely update the server by sending a command while also updating the client-side store, which results in the UI showing the latest data (a sketch of this appears after this answer). Disclaimer: some answers refer to this technique as "tricking the user", but I disagree with that characterisation.
If you have commands that are prone to error, you can use techniques already described in other answers, like WebSockets or Server-Sent Events, to communicate errors back. This requires quite a lot of additional work. You can also send a command and wait for a reply, or execute commands synchronously. Some would say "this is not CQRS", but that would be just another dogma to be challenged. Ensuring the command has completed execution, combined with the previous point (a client-side data store), is a good solution.
I am not sure there is any 100% bulletproof technique that allows you to always show non-stale data from the read model; I think that goes against the principles of CQRS. Even with real-time events you will only get events indicating that your write model has been updated. Your projections could still have failed, and reacting to that is a whole other story.
However, I would not concentrate that much on this issue. The fact is that well-tested projections and almost-guaranteed commands will work very well. For error handling, in 90% of situations it is enough to have some manual or half-manual process to recover from those errors. For the remaining 10% you can push generic error messages from the server saying "sorry, your action XXX has failed to execute", and the top-priority actions could have some more creative process behind them, but in reality those situations will be very, very rare.
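To illustrate the client-side store idea from the profile example above, here is a deliberately simplified Python sketch. All names are invented, and failure handling is synchronous only to keep it short:

```python
# Rough illustration of updating a client-side store optimistically while the
# command is sent to the server. All names here are made up for the example.
import copy

class ProfileStore:
    """A tiny Redux-like store: the UI renders whatever is in self.state."""

    def __init__(self, initial: dict):
        self.state = dict(initial)

    def dispatch_update(self, changes: dict, send_command) -> None:
        snapshot = copy.deepcopy(self.state)
        self.state.update(changes)          # UI shows the new values immediately
        try:
            send_command(changes)           # fire the command to the write side
        except Exception as err:
            self.state = snapshot           # roll back and apologize
            print(f"Sorry, your change could not be saved: {err}")


store = ProfileStore({"display_name": "Alice", "city": "Oslo"})
store.dispatch_update({"city": "Bergen"}, send_command=lambda c: None)  # stub transport
print(store.state)  # {'display_name': 'Alice', 'city': 'Bergen'}
```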
There are two ways:
1. Trick the user (show that things have happened when they really haven't happened yet).
2. Show that the system is processing the request, and use polling in the background (not great) or just a timer set to the value of your SLA.
I prefer the first option.
As someone has already mentioned, task-based UIs fit well here, and what I would do is employ a technique that 'buys you time' for the command to propagate.
For example, imagine we are on a list screen where the user can perform various actions, one of which is adding a new item to the list. After choosing to add an item you could display a "What would you like to do next?" screen, which could offer 'Add another item', 'Do this task', 'Do some other task', 'Go back to list'.
By the time they have clicked on an option, the data will hopefully have been refreshed.
Also, if you're using a task-based UI, you can analyse the patterns of task execution and use these "What would you like to do next?" screens to streamline the UI, similar to Amazon's "other people also bought these items".
As previously stated, it is fine to tell the user that the request (command) has been acknowledged (successfully issued). In case of some failure, the system should communicate this to the requester, by means of:
email;
SMS;
custom inbox (e.g. like the SO inbox);
whatever.
E.g., mail client / service:
I am sending a mail to a wrong address;
the mail service says: "email sent successfully :)";
after a few minutes, I receive a mail from the service: "email could not be delivered".
I believe a great way to inform the user about a recent failure is to present an error panel while they're navigating through the application. A user gesture might be required in order to dismiss that alert, etc.
I wouldn't go with tricking the user or blocking them from performing other actions. I would rather stream data toward the UI after it has been acknowledged by the read side. Let's consider these two cases:
The user saves data and expects a result. A connection is established to the server. After the data is acknowledged by the read side, it is streamed toward the UI and the UI is updated.
The user saves data and refreshes the web page. Upon reload, data is fetched from the data store and a streaming connection is established. If the read side hasn't updated the data store in the meantime, there is still an open stream, and the UI will be updated once the data reaches the read side.
Why stream from the read side and not directly from the write side? Simply because that confirms the read side has actually been reached.
From a technical standpoint, Server-Sent Events could be used (a minimal sketch follows below).
Disadvantage:
Results will still not be reflected immediately by the read side. But at least, in most cases, the user will be able to continue working without being blocked by the UI.
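For the Server-Sent Events option, a minimal sketch with Flask might look like the following. Flask is just one convenient choice here, and the read-side notification source is stubbed with an in-process queue:

```python
# Minimal Server-Sent Events sketch with Flask (pip install flask).
# The read-side notification source is stubbed with a queue for illustration.
import json
import queue

from flask import Flask, Response

app = Flask(__name__)
read_side_events: "queue.Queue[dict]" = queue.Queue()   # filled when projections update

@app.route("/stream")
def stream() -> Response:
    def generate():
        while True:
            event = read_side_events.get()               # blocks until the read side confirms
            yield f"data: {json.dumps(event)}\n\n"       # SSE wire format
    return Response(generate(), mimetype="text/event-stream")

# Browser side (for reference):
#   const source = new EventSource("/stream");
#   source.onmessage = (e) => updateUi(JSON.parse(e.data));

if __name__ == "__main__":
    app.run(threaded=True)
```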
There are several ways to handle eventual consistency. All of them really just occupy the time between the user's action and the backend refresh.
User reads: a given user only reads from the same database node that they write to; other users read from the replicated nodes. PROS: the UI is quick enough, and the application stays in sync. CONS: your service architecture has to track and route users to specific database nodes.
Disable the UI until the action has completed, then refresh it: JavaServer Faces has a classic example of this. One could create a modal with a loading spinner to cover the UI until the refresh has completed. PROS: the UI stays in sync with application state. CONS: almost every action creates a blocked UI. Users get very frustrated by the restricted UI and will complain of application slowness.
Confirmation: immediately thank the user for their submission, then let them know later (email, SMS, in-app notification) whether or not the action completed. PROS: it's fast up front. CONS: the UI lags behind the system until the refresh. Even with a notice, the user may get confused that they don't see the updates, and it requires integrating various communication channels. Users won't see their changes right away, and if the action fails, they may not know until it's too late.
Fake it: optimistically assume that the action will complete. Show the user the resulting UI (upvote, comment, credit card confirmation, etc.) and allow them to continue as if it succeeded. If there are failures, immediately show them as contextual errors: alerts next to the undone upvotes, an in-app alert on the post with the failed comment, an email for the declined credit card. PROS: the UI feels much faster. CONS: the UI is temporarily out of sync with application state, and you must resolve that. In one case, you might fake creation of content with temp IDs, but after the content is created the temp IDs will be wrong until the refresh. In another case, you might need to store all state changes on the UI after the action until the refresh, and then you need some resolver to apply all the local state changes since the action was issued. This resolution is non-trivial (see the sketch after this list).
Web sockets: subscribe the UI to an event stream so that when the action is completed on the backend, it is pushed to the front end. (Do you need one-way or two-way streaming?) PROS: the UI feels fast, and it's in sync with the application state. CONS: inconsistent browser support, the need for a backend source of streaming events, and socket server scalability.
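Here is a small sketch of the "fake it" resolution step mentioned above: the UI creates content under a temporary id and swaps in the real id once the backend confirms. All names are invented for illustration:

```python
# Sketch of the "fake it" option: create content under a temporary id and swap in
# the real id once the backend confirms. Names are invented for illustration.
import uuid

class OptimisticList:
    def __init__(self):
        self.items: dict[str, dict] = {}      # id -> item, rendered by the UI as-is

    def add_locally(self, text: str) -> str:
        temp_id = f"temp-{uuid.uuid4()}"
        self.items[temp_id] = {"id": temp_id, "text": text, "pending": True}
        return temp_id

    def confirm(self, temp_id: str, real_id: str) -> None:
        """Called when the backend reports success with the real id."""
        item = self.items.pop(temp_id)
        item.update(id=real_id, pending=False)
        self.items[real_id] = item

    def reject(self, temp_id: str, reason: str) -> None:
        """Called when the backend reports failure: remove the item, surface an error."""
        self.items.pop(temp_id, None)
        print(f"Could not save your item: {reason}")


ui = OptimisticList()
tid = ui.add_locally("Hello, eventual consistency")
ui.confirm(tid, real_id="42")                  # later, when the confirming event arrives
```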
I'm thinking about system that will notify multiple consumers about events happening to a population of objects. Every subscriber should be able to subscribe to events happening to zero or more of the objects, multiple subscribers should be able to receive information about events happening to a single object.
I think some message queuing system will be appropriate in this case, but I'm not sure how to handle the fact that I'll have millions of objects; using a separate topic for every object does not sound good [or is it just fine?].
Can you please suggest an approach I should take, and maybe even an open source message queuing system that would be reasonable?
A few more details:
there will be thousands of subscribers [meaning not that many],
subscribers will subscribe to tens or hundreds of objects each,
there will be ~5-20 million objects,
the events themselves don't have to carry any payload; just the information that the object was changed is enough,
the vast majority of objects will never be subscribed to,
events occur at a maximum rate of a few hundred per second,
ideally the server should run under Linux and be able to integrate with the rest of the ecosystem via HTTP long-poll [using node.js? continuations under Jetty?].
Thanks in advance for your feedback and sorry for somewhat vague question!
I can highly recommend RabbitMQ. I have used it in a couple of projects before, and in my experience it is very reliable and offers a wide range of configurations. Basically, RabbitMQ is an open-source (Mozilla Public License) message broker that implements the Advanced Message Queuing Protocol (AMQP) standard.
As documented on the RabbitMQ web-site:
RabbitMQ can potentially run on any platform that Erlang supports, from embedded systems to multi-core clusters and cloud-based servers.
... meaning that an operating system like Linux is supported.
There is a library for node.js here: https://github.com/squaremo/rabbit.js
It comes with an HTTP-based API for management and monitoring of the RabbitMQ server, including a command-line tool and a browser-based user interface; see http://www.rabbitmq.com/management.html.
In the projects I have been working on, I have communicated with RabbitMQ using C# and two different wrappers, EasyNetQ and Burrow.NET. Both are excellent wrappers for RabbitMQ, but I ended up liking Burrow.NET the most, as it is easier and more obvious to work with (it doesn't do a lot of magic under the hood) and provides good flexibility to inject loggers, serializers, etc.
I have never worked with the number of objects that you are going to work with; I have worked with thousands (not millions). However, no matter how many objects I have been playing around with, RabbitMQ has always been really stable and has never been the source of errors in the system.
So to sum up: RabbitMQ is simple to use and set up, supports AMQP, can be managed via HTTP, and, what I like most, it's rock solid.
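As a concrete sketch of how this could look for your case, here is a minimal pika (Python) example using a single topic exchange with per-object routing keys, so you never create millions of topics; only the bindings on each subscriber's private queue grow with the number of objects it follows. The exchange name, routing-key scheme, and ids are illustrative:

```python
# Sketch using pika (pip install pika): one topic exchange, routing keys of the
# form "object.<id>", and one queue per subscriber bound only to the ids it
# cares about. Exchange/queue names and ids are illustrative.
import pika

params = pika.ConnectionParameters("localhost")

def publish_change(object_id: str) -> None:
    conn = pika.BlockingConnection(params)
    ch = conn.channel()
    ch.exchange_declare(exchange="object-events", exchange_type="topic", durable=True)
    # The body can be empty: the routing key alone says "this object changed".
    ch.basic_publish(exchange="object-events", routing_key=f"object.{object_id}", body=b"")
    conn.close()

def subscribe(object_ids: list[str]) -> None:
    conn = pika.BlockingConnection(params)
    ch = conn.channel()
    ch.exchange_declare(exchange="object-events", exchange_type="topic", durable=True)
    # One private queue per subscriber, bound only to the objects it follows.
    queue_name = ch.queue_declare(queue="", exclusive=True).method.queue
    for oid in object_ids:
        ch.queue_bind(exchange="object-events", queue=queue_name, routing_key=f"object.{oid}")

    def on_message(channel, method, properties, body):
        print("changed:", method.routing_key)

    ch.basic_consume(queue=queue_name, on_message_callback=on_message, auto_ack=True)
    ch.start_consuming()

# subscribe(["17", "42", "100000123"])   # a subscriber following a handful of objects
# publish_change("42")                   # notifies everyone bound to object.42
```

Each subscriber ends up with one queue and tens or hundreds of bindings, which fits your numbers far better than a topic per object.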
Break up the topics so they carry specific events, e.g. an "object updated" topic, an "object deleted" topic, and so on. That way clients only have to subscribe to the finite number of event-based topics they are interested in.
Inject headers into your messages when you publish them, and put intelligence into the clients to use those headers as message selectors. For example, the client knows the list of objects it is interested in; say you identify an object by an "id", then the id can be a header, and the client uses that id header to determine whether it is interested in the message (sketched at the end of this answer).
Depending on your needs, you may also want to consider guaranteed delivery to make sure that a client receives the message even if it goes offline and comes back later.
The options I would recommend off the top of my head are ActiveMQ, RabbitMQ, and Redis Pub/Sub (I haven't really worked with Redis pub/sub, so please do your own due diligence).
Finally here are some performance benchmarks for RabbitMQ and Redis
I just saw that you only have a few hundred messages per second getting pushed out; this is not a big deal for ActiveMQ. I have been using AMQ on a system that processes 240 messages per second, and it works just fine. I do use a thread pool of workers to process the messages asynchronously, though. Look at a framework like Akka if you are in Java land; if not, stick with Node.js and the cool ecosystem around it.
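A rough sketch of the header-based selection idea, using stomp.py against a STOMP-capable broker such as ActiveMQ, is shown below. Here the filtering is done client-side on an "object-id" header, as described above; the destination and header names are made up for the example (ActiveMQ can also do this kind of filtering server-side with JMS selectors):

```python
# Sketch of header-based, client-side filtering over STOMP (pip install stomp.py).
# Destination and header names are illustrative.
import stomp

class ObjectEventListener(stomp.ConnectionListener):
    """Only react to messages whose object-id header is in our interest list."""
    def __init__(self, interesting_ids):
        self.interesting_ids = set(interesting_ids)

    def on_message(self, frame):            # stomp.py >= 7 passes a Frame object
        if frame.headers.get("object-id") in self.interesting_ids:
            print("object changed:", frame.headers["object-id"])

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("", ObjectEventListener(["17", "42"]))
conn.connect(wait=True)                     # add credentials here if your broker needs them
conn.subscribe(destination="/topic/object.updated", id="1", ack="auto")

# Publisher side: an empty body plus an id header is enough.
conn.send(destination="/topic/object.updated", body="", headers={"object-id": "42"})
```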
If it has to be open source, I'd go for ActiveMQ plus an application server to provide the JMS functionality for topics; it also has Ajax support so you can access the topics from your client.
So you would use the JMS infrastructure to publish the topics for the objects, and you can create topics as you need them.
Besides, by using a Java application server you may be able to take advantage of clustering, load balancing, and other high-availability features (depending on the selected product).
Hope that helps!!!
Since your messages are very small, you might want to consider MQTT, which is designed for small devices, although it works fine on powerful devices as well. The key consideration is the low overhead: basically a 2-byte header for a small message. You probably can't use just any simple or open source MQTT server, due to your volume; you probably need a heavy-duty dedicated appliance like MessageSight to handle it.
Some more details on your application would certainly help. Also, you don't mention security at all; I assume you must have some needs in this area.
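For what it's worth, here is a small paho-mqtt (Python) sketch of a per-object-topic layout; MQTT topics are typically not pre-provisioned resources, so a topic per object is mostly just a string in the publish packet. The broker address and topic layout are illustrative:

```python
# Sketch with paho-mqtt (pip install paho-mqtt). Topics like "objects/<id>" carry
# no payload; the topic alone says which object changed. Names are illustrative.
import paho.mqtt.client as mqtt

BROKER = "localhost"

def notify_changed(object_id: str) -> None:
    client = mqtt.Client()                 # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
    client.connect(BROKER, 1883)
    client.loop_start()                    # background network loop so QoS 1 can complete
    info = client.publish(f"objects/{object_id}", payload=b"", qos=1)
    info.wait_for_publish()
    client.loop_stop()
    client.disconnect()

def watch(object_ids: list[str]) -> None:
    client = mqtt.Client()
    client.on_message = lambda c, userdata, msg: print("changed:", msg.topic)
    client.connect(BROKER, 1883)
    for oid in object_ids:
        client.subscribe(f"objects/{oid}", qos=1)
    client.loop_forever()

# watch(["17", "42"])     # subscriber side
# notify_changed("42")    # publisher side
```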
I'm not sure about your environment, but here are my two cents. Can you identify each object with a unique ID in your system? If so, you can have a topic per event type: for example, an object-deletion event, an object-update event, and so on. Messages published to these topics would carry the IDs of the objects whenever the corresponding event happens to them. This limits the number of topics you need.
The second part of your problem is that different subscribers want to subscribe to different objects, so not all subscribers are interested in events for all objects. This maps to the message selector (filtering) mechanism provided by the messaging framework. Basically, you need to work out on what basis a subscriber is interested in a particular object and use that as the message filtering criterion. It could be anything: object type, object state, etc. Ultimately your system would consist of one topic per event type, with publishers sending event messages carrying {object-type: object-id} information, and subscribers subscribing to any topic with a filtering criterion.
If the above approach satisfies your requirements, you can use any messaging solution: ActiveMQ, WMQ, RabbitMQ.
We've been using SysV message queues for our distributed data processing system for over 15 years. For some reason, we want to replace them with a newer message queue mechanism. Are there any suggestions?
Requirements:
Fast response, minimizing message queue system overhead
Multiple client language library support, mainly C, C# and Java
Can do some HA configuration to prevent SPOF
Have logging ability to check who sends and who receives each message
I've found Apache ActiveMQ and RabbitMQ, but it seems RabbitMQ lacks stable C client library support?
While I have not used it personally, the toolkit from 0MQ is quite impressive.
It seems to meet all of your criteria, although you would have to implement #4 (logging) yourself; that seems straightforward, though.
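For a feel of the API, here is a minimal pyzmq PUB/SUB sketch. 0MQ also has C, C# and Java bindings, which matches requirement #2; Python is used here only for brevity, and the addresses and topic prefix are illustrative:

```python
# Sketch of a 0MQ (pyzmq) PUB/SUB pair. Requirement #4 (logging who sends and
# receives) would be layered on top, e.g. inside these wrapper functions.
import zmq

def run_publisher(endpoint: str = "tcp://*:5556") -> None:
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(endpoint)
    # Classic 0MQ "slow joiner" caveat: give subscribers time to connect
    # (or use a handshake) before relying on delivery.
    pub.send_string("jobs.finished 12345")              # topic prefix + payload

def run_subscriber(endpoint: str = "tcp://localhost:5556") -> None:
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(endpoint)
    sub.setsockopt_string(zmq.SUBSCRIBE, "jobs.")       # prefix filter
    while True:
        print("received:", sub.recv_string())           # a logging hook could live here
```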
My question back would be: why are you moving away from SysV message queues? The "for some reason" is a disconcerting statement.
That said, there are many excellent messaging products out there; having a useful set of selection criteria is key.
I would suggest extending your requirements list a bit, then doing an initial paper evaluation of candidates (from their websites and documentation) against that list. Take the top two or three only, and do some real-world project spikes (or a bake-off, if you prefer the term) to get some actual feedback on which to base your final decision.
Good Luck
I come from a web background where I only have to deal with HTTP so please excuse my ignorance.
I have an app where clients listen for changes on a message queue that uses STOMP. Previously the clients only needed to listen to the relevant channels for messages telling them about changes on the server and update themselves accordingly. Simple stuff.
There is now a requirement for the client to be able to edit data and push those changes back to the server. The data on the server is already exposed via RESTful resources, so my first thought was just to make REST PUT requests to change the data on the server, but then I started to wonder whether I could find a solution using messaging. I could just open up another channel which the clients could publish changes to, and the server could subscribe to that channel and update itself accordingly. Implementing this would obviously be simple, but I would love to have some of the potential pitfalls pointed out to me ahead of time.
I am familiar with REST so I want to ask some questions in the context of REST:
Would I map a group of queues to REST/CRUD verbs for each resource i.e. itemPostQueue, itemPutQueue, itemDeleteQueue?
What about GETs? How can I request data to read using a queue?
What do I use to replace my status-code mechanism for catching problems? Do I just fire and forget (gulp), or somehow use error/receipt headers in STOMP?
Any answers and advice will be much appreciated.
Regards,
Chris
While I am not clear on why you must use messaging here, a few thoughts:
You could map to REST on the wire with queues like itemPostQueue, but this would likely feel unnatural to a message-oriented person. If you are using some kind of queue with guaranteed, deliver-once semantics built in, then go ahead and use that mechanism. For a shopping-cart example, you could put an AddItem message on the wire and trust the infrastructure to deliver it once to the server.
There is no direct GET-like concept in message queuing. You can simulate it with a pair of messages: I send you a request and you send me back a response. This is much like RPC, but even further decoupled (sketched below). So I send you a PublishCart request, and later on the server sends a CartContents message on a channel that the client is listening to.
Status codes are more complex, and generally fall into two camps. First are the actual queue-library messages; deal with them just as you would any normal system message. Second, you may have your own messages that you want to put on the wire to signal failure at some place in the chain.
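As a rough sketch of the request/reply simulation of a GET, using stomp.py (the destination names and the reply-to/correlation-id headers are conventions picked for this example, not anything mandated by STOMP):

```python
# Sketch of simulating a GET with a request/reply pair over STOMP, using
# stomp.py (pip install stomp.py). Destinations and header names are illustrative.
import uuid
import stomp

class ReplyListener(stomp.ConnectionListener):
    def on_message(self, frame):           # stomp.py >= 7; older versions pass (headers, body)
        print("cart contents:", frame.body)

conn = stomp.Connection([("localhost", 61613)])
conn.set_listener("", ReplyListener())
conn.connect(wait=True)

# Each client listens on its own reply destination.
reply_to = f"/queue/replies.{uuid.uuid4()}"
conn.subscribe(destination=reply_to, id="1", ack="auto")

# "GET the cart" becomes: send a request message that names the reply destination.
conn.send(
    destination="/queue/cart.requests",
    body='{"cartId": "abc-123"}',
    headers={"reply-to": reply_to, "correlation-id": str(uuid.uuid4())},
)
# A server-side handler would read the request, look up the cart, and send a
# CartContents message to the reply-to destination with the same correlation-id.
```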
One thing that messaging does do is significantly decouple your app. Unlike HTTP, where you know that something happened, with a queue you send a letter to somebody. It may get there. The postman might drop it in the snow. The dog might eat it. If you don't get a response in some period of time, you try other means to contact your relatives, or, to pull back from the analogy, to contact the server. Monitoring the health of the queue infrastructure, queue depths, and the like takes on added importance, as they are the plumbing that you are now depending upon.
Good Luck