When sending messages with SSB, we initiate conversations by specifying the to and from services.
But when reading, all we do is RECEIVE, without specifying services. So how do I make sure that I read only the messages intended for service X?
Or have I missed something fundamental?
To receive messages for service A, RECEIVE from service A's queue. To receive messages for service B, RECEIVE from service B's queue.
You should only place two services on the same queue if the processing is identical and you really do not care which service a message belongs to. You can even project the service name in the RECEIVE result set, so you know whether a message belongs to A or B, if that is important during processing. As a general rule, there is no way to declare 'RECEIVE messages that meet criteria X and ignore the rest'. The idea is that messages are events that require handling, so you cannot choose which event you look at next.
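As a minimal T-SQL sketch (the queue name dbo.ServiceAQueue is an assumption; service_name is a standard column of the RECEIVE result set):

    -- Read only messages addressed to service A by reading from service A's queue.
    WAITFOR (
        RECEIVE TOP (10)
            conversation_handle,
            service_name,        -- which service the message was sent to
            message_type_name,
            message_body
        FROM dbo.ServiceAQueue
    ), TIMEOUT 5000;             -- give up after 5 seconds if the queue stays empty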
I would like to create web service(s) that I can publish to an external-facing network to allow our customers' teams to send us CRUD operations on customer orders.
What would be the best practice in this case, using Microsoft or open-source technologies, to serve the customer requests?
Option 1:
The web service accepts data as XML/JSON
Stores the data locally in a file
A task picks up the file and attempts to load the data in the background
Send an email for records that failed
The drawback here is that the response from the web service will not be real-time and validation will be limited.
Option 2:
The web service accepts data as XML/JSON
Attempt data load
Respond immediately with the success or failure of the load
The drawback here is whether the infrastructure can handle it if the volume of orders increases severalfold in the near future.
I am open to using REST with WCF or Web API, and any other helpful technologies that can scale as demand grows.
Have you tried message queueing?
Basically, in this architecture there is a client application (called the producer) that submits a message to the message broker (message queue), and another application (called the consumer) that connects to the broker and subscribes to receive messages to process.
A message can be simple information or a task that will be processed by another application.
An application can act as both producer and consumer.
There is plenty of message queue software; one example is RabbitMQ.
Here is a complete introduction: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Since the communication goes through a middleman (the message queue), it will not provide an immediate response. But you don't need to send the processing result (i.e. order processing in your case) by email, since the application can subscribe to a message carrying the result.
It is well suited to handling a huge processing load. As always, you can start small (even free) and scale up in the future.
Take a look at the pricing details at https://www.cloudamqp.com/, which provides RabbitMQ as a service.
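As a minimal sketch of the producer/consumer pattern described above, assuming Python with the pika client, a local broker, and a queue named "orders" (all of these are assumptions, not part of the question):

    import json
    import pika

    # Producer side: the web service publishes each incoming order.
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="orders", durable=True)    # survives broker restarts
    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=json.dumps({"orderId": 42, "action": "create"}),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

    # Consumer side (normally a separate worker process): load orders as they arrive.
    def handle_order(ch, method, properties, body):
        order = json.loads(body)
        # ... validate and load the order here ...
        ch.basic_ack(delivery_tag=method.delivery_tag)     # confirm only after success

    channel.basic_consume(queue="orders", on_message_callback=handle_order)
    channel.start_consuming()

Because the consumer acknowledges only after a successful load, a crash mid-processing simply puts the order back on the queue.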
I do this using ActiveMQ as a central message broker (you can also use Azure Service Bus if you have an Azure subscription) and pre-baked domain objects. For your scenario it might look like:
The web service accepts data as XML/JSON
Yes, you have a REST service that accepts multipart requests, let's say JSON as it's easier to work with on the client side. Before you can send messages it's usually best to convert the incoming client message to a domain message, so all message consumers know the exact format to expect and can therefore validate the message. I usually create these using xsd.exe on Windows, using an XSD file that describes the format of the object; xsd.exe turns that XSD into a C# class. It's then just a process of taking the JSON fields and populating an Order class. That Order then gets sent as a message to the broker. At that point you're in guaranteed-messaging land, as JMS takes care of delivery and ActiveMQ takes care of message persistence.
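The answer above is C#/JMS; as a rough illustration of the same JSON-to-domain-object step, here is a Python sketch over ActiveMQ's STOMP listener (stomp.py, port 61613, the Order fields, and the topic name are all assumptions):

    import json
    from dataclasses import dataclass

    import stomp  # stomp.py; ActiveMQ ships a STOMP listener on port 61613

    @dataclass
    class Order:                 # stands in for the xsd.exe-generated class
        order_id: str
        customer: str
        quantity: int

    def to_order(raw_json: str) -> Order:
        # Converting to a domain object up front means every consumer
        # downstream sees one validated, well-known format.
        fields = json.loads(raw_json)
        return Order(fields["orderId"], fields["customer"], int(fields["quantity"]))

    order = to_order('{"orderId": "A-1", "customer": "acme", "quantity": 3}')
    conn = stomp.Connection([("localhost", 61613)])
    conn.connect(wait=True)      # assumes the broker allows anonymous connections
    conn.send(destination="/topic/client", body=json.dumps(order.__dict__))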
Stores the data locally in a file
Rather than a file, you convert the incoming JSON to a domain class, e.g. an Order instance. You'll never see JSON or XML beyond this point, as it's all domain classes from here.
A task picks up the file and attempts to load the data in the background
Yes, the broker has routes defined in a Camel config that tell it, for example, to send messages coming in on the /client topic to the /orders topic. The task is set up as a durable topic subscriber, so it automatically receives that Order domain object.
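A sketch of what such a route can look like in the broker's Camel (Spring XML) configuration, using the topic names from this answer:

    <camelContext xmlns="http://camel.apache.org/schema/spring">
      <route>
        <!-- forward every message arriving on the client topic to orders -->
        <from uri="activemq:topic:client"/>
        <to uri="activemq:topic:orders"/>
      </route>
    </camelContext>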
Send an email for records that failed
If the Order object contains information about the client (email etc.), then the task can send the email on failure, but a better pattern is to route the failed Order to the /error topic, where a different task, again a durable topic subscriber, picks it up and logs/sends email/audits etc.
if the volume of orders increases severalfold in the near future
You can cluster the brokers and run multiple Order consumers. If you separate the failure handling into another route, all the order task has to do is process the order and route the message to either the /error or /success topic, depending on the outcome (see the sketch below). So each route provides a small piece of the puzzle, and you can scale up the pieces if the puzzle gets too big.
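Continuing the Python/STOMP sketch from above (the answer itself is C#/JMS), an order consumer whose only job is to process and route; durable-subscriber setup is omitted for brevity, and process() is a hypothetical stand-in for the actual load step:

    import json
    import stomp

    def process(order):
        ...  # validate and persist the order; raise on failure

    class OrderListener(stomp.ConnectionListener):  # stomp.py 8.x callback style
        def __init__(self, conn):
            self.conn = conn

        def on_message(self, frame):
            order = json.loads(frame.body)
            try:
                process(order)
                outcome = "/topic/success"
            except Exception:
                outcome = "/topic/error"   # a separate error task subscribes here
            self.conn.send(destination=outcome, body=frame.body)

    conn = stomp.Connection([("localhost", 61613)])
    conn.set_listener("orders", OrderListener(conn))
    conn.connect(wait=True)
    conn.subscribe(destination="/topic/orders", id="1", ack="auto")

Scaling up is then a matter of running more copies of this consumer against a clustered broker.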
We have a production issue where an order is submitted twice. Currently we have an API for orders, and we expose it to clients using API Management; there we have policies for URL mapping from the customer-facing URL to the actual one.
Now, our actual API got 2 requests, so we assumed the customer submitted twice, but they have confirmed that they did not, so perhaps it is an issue with API Management firing the request twice.
How can I identify the requests received by API Management?
Is there any chance that API Management will fire a request twice?
Appreciate any pointers.
The only way to fire a request twice in APIM would be by means of a Retry policy, or manually using SendRequest. Otherwise it has to be the client calling your API two times. Each request in APIM gets its own unique id, accessible in policies as context.RequestId; this is the main way to track and identify them. But these ids are produced inside APIM itself, so they are useful only if you're tracing a call from APIM into the backend.
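For reference, a Retry policy that would produce duplicate backend calls looks roughly like this (a sketch only; the status-code condition, counts, and placement in the backend section of the policy are illustrative):

    <retry condition="@(context.Response.StatusCode >= 500)" count="2" interval="1">
        <forward-request buffer-request-body="true" />
    </retry>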
Your best option now is to try to identify requests by client IP, method, URI, and time frame. APIM allows you to grab logs for certain periods of time (better if kept short) in JSON or CSV with the data I mentioned above. To do that, look into the byRequest report (https://learn.microsoft.com/en-us/rest/api/apimanagement/reports#ReportByRequest), grab the JSON/CSV, and try to identify the calls of interest.
For the future, you could look into onboarding your service to Azure Monitor (https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-use-azure-monitor) or Log Analytics; those provide an easier way to traverse logs.
We are running a Node.JS app on Bluemix that uses the mqlight service.
Are the messages really queued if there are no receivers? Is it possible for a receiver to fetch messages that were sent before it connected?
Are messages really queued if they are sent faster than they are handled?
Using the Time-To-Live property, messages can be persisted on the destination topic until a subscriber connects. This will allow you to buffer messages while there are no subscribers.
Full details for this property are available here.
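A minimal sketch in Node.js with the mqlight client (the service URL and topic name are assumptions; here the TTL is set on the destination at subscribe time, so the destination keeps buffering messages for up to an hour while no subscriber is connected):

    var mqlight = require('mqlight');
    var client = mqlight.createClient({ service: 'amqp://localhost' });

    client.on('started', function () {
      // qos 1 = at-least-once delivery; ttl keeps the destination (and the
      // messages buffered on it) alive for 1 hour after the subscriber
      // disconnects, so they can be fetched on reconnect.
      client.subscribe('orders/new', { qos: 1, ttl: 60 * 60 * 1000 });
    });

    client.on('message', function (data, delivery) {
      console.log('received:', data);
    });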
Is there some way to subscribe to Orion (e.g. to the Car entity) without receiving the old entities?
Example:
Orion has -> Car A, Car B.
I make an onchange subscription and Orion sends me, at the same time:
Car A notification and Car B notification.
We need the following:
Orion has -> Car A, Car B.
I make an onchange subscription but receive nothing at that moment. If in the future Orion receives a new Car, or some attribute of Car A or Car B changes, then it sends the notification.
Is that possible?
The behaviour is explained in the user manual:
You may wonder why accumulator-server.py is getting this message if you don't actually do any update. This is because the Orion Context Broker considers the transition from "non existing subscription" to "subscribed" as a change.
We understand that for some use cases this is not convenient. However, behaving in the opposite way ruins other use cases, which need to know the "initial state" before starting to get notifications corresponding to actual changes. The best solution to make everybody happy is to make this configurable, so each client can choose what it prefers. This feature is currently on our roadmap (see this issue in github.com).
While this gets implemented in Orion, a possible workaround in your case is to just ignore the first notification received for a subscription (you can identify the subscription to which a notification belongs by the subscriptionId field in the notification payload). All the following notifications belonging to that subscription will correspond to actual changes.
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).
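A minimal sketch of creating such a subscription via NGSIv2 (Python with requests; the broker URL and notification endpoint are assumptions, and skipInitialNotification is the Orion 2.2.0 option mentioned above):

    import requests

    ORION = "http://localhost:1026"          # assumed local broker

    subscription = {
        "description": "Notify on Car changes only, with no initial notification",
        "subject": {
            "entities": [{"idPattern": ".*", "type": "Car"}]
        },
        "notification": {
            "http": {"url": "http://my-app.example.com/notify"}   # your callback
        }
    }

    resp = requests.post(
        f"{ORION}/v2/subscriptions",
        params={"options": "skipInitialNotification"},  # suppress the initial push
        json=subscription,
    )
    resp.raise_for_status()
    print("Created:", resp.headers["Location"])   # /v2/subscriptions/<id>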
Let's say a user accesses a resource and it maps to a handler foo().
In foo() I want to check if the user's session is valid.
For maximum decoupling (and for the sake of the example) I put the provided session ID into a message and push that into the queue VERIFY_SESSION where it is picked up by a subscribed worker.
The worker takes the session ID out of the message, checks the database, etc and then adds some data indicating the session is valid to the message before pushing it to VERIFIED_SESSIONS.
Question:
How do I get the information that the session is valid back into the worker that handles the user's connection?
If I subscribe all frontend workers to the queue VERIFIED_SESSIONS, there is no way of telling which worker will receive it.
All I can think of is basically implementing RPC on top of the message queue, but that would seem to defeat the purpose of having the queue to begin with.
What is the common pattern here?
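For reference, the usual shape of that pattern avoids a shared VERIFIED_SESSIONS queue: each frontend worker declares its own exclusive reply queue and tags requests with a correlation id, so the reply always comes back to the worker that asked. A rough sketch with Python and pika (queue names from the question; everything else is illustrative):

    import uuid
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()

    # Each frontend worker declares its own exclusive reply queue, so a reply
    # can only ever arrive at the worker that sent the request.
    reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue

    def verify_session(session_id: str) -> bool:
        corr_id = str(uuid.uuid4())
        channel.basic_publish(
            exchange="",
            routing_key="VERIFY_SESSION",
            properties=pika.BasicProperties(reply_to=reply_queue,
                                            correlation_id=corr_id),
            body=session_id,
        )
        # Block until the matching reply arrives (a real handler would await this).
        for method, props, body in channel.consume(reply_queue):
            if props.correlation_id == corr_id:
                channel.basic_ack(method.delivery_tag)
                channel.cancel()
                return body == b"valid"

The verification worker then consumes VERIFY_SESSION, does its database check, and publishes the result to the queue named in the message's reply_to property, echoing the correlation_id.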