IBM WebSphere MQ has functionality called triggering that allows for an iSeries program to be called when a new message arrives on the queue. Is there a way to allow for the same functionality with a native iSeries data queue?
Yes: set up a CL program that waits on the QRCVDTAQ() API; when a message comes in, it can call a program, submit a job, etc. The sender would send messages that contain the library/program, or perhaps even the entire CALL command. You can use QCMDEXC() in the CL to run/submit the program.
I have a situation where I have to call an Azure Function periodically.
When I call the function, I need to check the state of the Azure Function.
If the Azure Function is running, then I need to postpone the call until it is completed.
I am watching an email queue (as the emails come in), and I need to send each email using Amazon SES.
I am using an HTTP trigger, and the email part is working fine.
I don't want the function to be called when it is already running.
In a serverless architecture, a new instance may be created each time you invoke a service endpoint, and scaling is managed by the scale controller.
There is no way to check whether the function is running or not.
Without understanding more about your use-case, I think this is possible with Durable Functions. Look up Eternal Orchestrations that call themselves on an interval indefinitely. You can then query the status if required and have a workflow in the eternal orchestration that changes depending on certain criteria.
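For illustration, here is a minimal sketch of such an eternal orchestration in C# (Durable Functions 2.x style). The names "EternalEmailLoop" and "SendEmails" and the one-hour interval are assumptions for the example, not anything from your setup:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;

    public static class EmailOrchestration
    {
        [FunctionName("EternalEmailLoop")]
        public static async Task Run(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            // Do the real work (e.g. drain the email queue) in an activity function.
            await context.CallActivityAsync("SendEmails", null);

            // Sleep until the next interval using a durable timer.
            DateTime next = context.CurrentUtcDateTime.AddHours(1);
            await context.CreateTimer(next, System.Threading.CancellationToken.None);

            // Restart with a fresh history; this is what makes the orchestration
            // "eternal" without its history growing without bound.
            context.ContinueAsNew(null);
        }
    }

Because only one orchestration instance can run per instance ID, this also gives you the "don't run it while it's already running" guarantee, and you can query the instance's status through the orchestration client if required.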
I would like to create web service(s) that I can publish to an external-facing network to allow our customer's team to send us CRUD operations on customer orders.
What would be the best practice in this case, using Microsoft or open-source technologies, to serve the customer requests?
Option 1:
The web service accepts XML/JSON data
Stores the data locally in a file
A task picks up the file and attempts to load the data in the background
Send an email for records that failed
The drawback here is that the response from the web service will not be real-time, and validation will be limited.
Option 2:
The web service accepts XML/JSON data
Attempt the data load
Respond immediately with whether the load succeeded or failed
The drawback here is that if the volume of orders increases severalfold in the near future, the infrastructure may not be able to handle it.
I am open to using REST with WCF or Web API and any other helpful technologies that can be scaled when demand grows.
Have you tried message queueing?
Basically, in this architecture there is a client application (called the producer) that submits a message to a message broker (the message queue), and there is another application (called the consumer) that connects to the broker and subscribes to the messages to be processed.
The message can be simple information or a task that will be processed by another application.
An application can act as both producer and consumer.
There is a lot of message queue software; one example is RabbitMQ.
Here is a complete introduction to it: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Since the communication goes through a middleman (i.e. the message queue), it will not provide an immediate response. But you don't need to send the processing result (order processing, in your case) by email, since the application can subscribe to a message carrying the result.
It is well suited to handling a huge processing load. As always, you can start small (even free) and scale up in the future.
Take a look at the pricing details at https://www.cloudamqp.com/, which provides RabbitMQ as a service.
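As a minimal sketch of the producer/consumer idea in C# (using the RabbitMQ.Client 6.x package; the "orders" queue name, the localhost broker, and the message payload are assumptions for illustration):

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class QueueDemo
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            // Durable queue so pending messages survive a broker restart.
            channel.QueueDeclare("orders", durable: true, exclusive: false, autoDelete: false);

            // Producer side: publish an order message.
            var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");
            channel.BasicPublish(exchange: "", routingKey: "orders", body: body);

            // Consumer side: subscribe and process messages as they arrive.
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var message = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine($"Processing order: {message}");
                channel.BasicAck(ea.DeliveryTag, multiple: false); // confirm only after processing
            };
            channel.BasicConsume("orders", autoAck: false, consumer);

            Console.ReadLine(); // keep the consumer alive
        }
    }

In a real deployment the producer (your web service) and the consumer (the order loader) would be separate processes, which is exactly what lets you scale each side independently.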
I do this using ActiveMQ as a central message broker (you can also use Azure Service Bus if you have an Azure subscription) and pre-baked domain objects. For your scenario it might look like:
The web service accepts XML/JSON data
yes, you have a REST service that accepts multipart requests; let's say JSON, as it's easier to work with on the client side. Before you can send messages it's usually best to convert the incoming client message to a domain message, so all message consumers know the exact format to expect and can therefore validate the message. I usually create these using xsd.exe on Windows, using an XSD file that describes the format of the object. xsd.exe turns that XSD into a C# class. It's then just a process of taking the JSON fields and populating an Order class. That Order then gets sent as a message to the broker. At that point you're in guaranteed-messaging land, as JMS will take care of delivery and ActiveMQ will take care of message persistence.
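A hedged sketch of that first step in C# with the Apache.NMS client (the Order properties, the /client topic name, and the broker URI are all assumptions, and the XML serialization here is just one possible transport format; consumers rehydrate it back into the same domain class):

    using System.IO;
    using System.Text.Json;
    using System.Xml.Serialization;
    using Apache.NMS;

    public class Order   // stand-in for the xsd.exe-generated class
    {
        public string OrderId { get; set; }
        public string CustomerEmail { get; set; }
    }

    public static class OrderGateway
    {
        public static void Publish(string clientJson)
        {
            // Map the loosely-typed client JSON onto the strict domain class,
            // which is also where validation naturally happens.
            Order order = JsonSerializer.Deserialize<Order>(clientJson);

            // Serialize the domain object for transport.
            var xml = new StringWriter();
            new XmlSerializer(typeof(Order)).Serialize(xml, order);

            // Hand the message to the broker; persistence is the broker's job now.
            var factory = new NMSConnectionFactory("activemq:tcp://localhost:61616");
            using IConnection connection = factory.CreateConnection();
            using ISession session = connection.CreateSession();
            IDestination topic = session.GetTopic("client");
            using IMessageProducer producer = session.CreateProducer(topic);
            producer.DeliveryMode = MsgDeliveryMode.Persistent;
            producer.Send(session.CreateTextMessage(xml.ToString()));
        }
    }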
Stores the data locally in a file
rather than a file, you convert the incoming JSON to a domain class, e.g. an Order instance. You'll never see JSON or XML beyond this point as it's all domain classes from here.
A task picks up the file and attempts to load the data in the background
yes, the broker has routes defined in a Camel config that tells it to, for example, send messages coming in on the /client topic to the /orders topic. The task is set up as a durable topic subscriber so automatically gets that Order domain object.
Send an email for records that failed
if the Order object contains information about the client (email, etc.) then the task can send the email on failure, but a better pattern is to route the failed Order to the /error topic, where a different task, again a durable topic subscriber, picks it up and logs/sends email/audits, etc.
if the volume of orders increases severalfold in the near future
you can cluster the brokers and run multiple Order consumers. If you separate the failure handling into another route, all the order task has to do is process the order and route the message to either the /error or /success topic depending on the outcome. So each route provides a small piece of the puzzle and you can scale up the pieces if the puzzle gets too big.
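To make the consumer side concrete, here is a hedged C# sketch of the order task as a durable topic subscriber that routes each Order to /success or /error (topic names, client/subscription IDs, the broker URI, and the TryLoad step are assumptions; Order is the same stand-in as in the earlier sketch):

    using System;
    using System.IO;
    using System.Xml.Serialization;
    using Apache.NMS;

    public class Order { public string OrderId { get; set; } }  // same stand-in as above

    public static class OrderTask
    {
        public static void Main()
        {
            var factory = new NMSConnectionFactory("activemq:tcp://localhost:61616");
            using IConnection connection = factory.CreateConnection();
            connection.ClientId = "order-task-1";   // required for a durable subscription
            using ISession session = connection.CreateSession();

            ITopic orders = session.GetTopic("orders");
            // Durable subscriber: Orders published while this task is down are kept.
            using IMessageConsumer consumer =
                session.CreateDurableConsumer(orders, "order-task-sub", null, false);
            using IMessageProducer producer = session.CreateProducer(null);
            connection.Start();

            consumer.Listener += message =>
            {
                string text = ((ITextMessage)message).Text;
                var order = (Order)new XmlSerializer(typeof(Order))
                    .Deserialize(new StringReader(text));

                // Process the order, then route the outcome; failure handling
                // (email, audit) lives with the /error topic's own subscriber.
                string outcome = TryLoad(order) ? "success" : "error";
                producer.Send(session.GetTopic(outcome), session.CreateTextMessage(text));
            };

            Console.ReadLine(); // keep the subscriber alive
        }

        static bool TryLoad(Order order) => true; // placeholder for the real data load
    }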
I want to create a hook which is called whenever an offline message is created. Example: user-a (online) sends a message to user-b (offline). As per ejabberd's default behaviour, the message is stored and delivered when user-b comes online. But in this situation I want to call a POST URL on a local server with the data. How do I create such a hook?
This hook is needed so that I can send a push notification from the local server. Thus, whenever a user comes online he/she will get the offline message as a push notification.
There is no step-by-step implementation for this. If anyone knows how, it would be of great help.
There are two hooks called when a message is sent to a local account that is offline: ejabberd_sm calls offline_message_hook, and mod_offline calls store_offline_message. Search for those in the ejabberd source code and you will find example code that uses them.
Building on Badlop's answer, I have created a module that does exactly what you need to achieve with the offline_message_hook. The only difference is that you'll have to connect a component to ejabberd instead of getting the messages on a REST API.
Let's say a user accesses a resource and it maps to a handler foo().
In foo() I want to check if the user's session is valid.
For maximum decoupling (and for the sake of the example) I put the provided session ID into a message and push that into the queue VERIFY_SESSION where it is picked up by a subscribed worker.
The worker takes the session ID out of the message, checks the database, etc., and then adds some data indicating that the session is valid to the message before pushing it to VERIFIED_SESSIONS.
Question:
How do I get the information that the session is valid back into the worker that handles the user's connection?
If I subscribe all frontend workers to the queue VERIFIED_SESSIONS, there is no way of telling which worker will receive it.
All I can think of would be to basically implement RPC on top of the message queue, but that would defeat the purpose of having the queue to begin with.
What is the common pattern here?
When sending messages with SSB we initialize conversations by specifying the to and from services.
But when reading, all we do is RECEIVE, without specifying services. So how do I make sure that I read only messages which are for service X?
Or have I missed something fundamental?
To RECEIVE for service A, RECEIVE from service A's queue. To RECEIVE for service B, RECEIVE from service B's queue.
You should only place two services on the same queue if the processing is identical and you really do not care which service a message belongs to. You can even project the service name in the RECEIVE result set, so you can know whether a message belongs to A or B, if that is important for processing. As a general rule there is no way to declare 'RECEIVE messages that meet criteria X and ignore the rest'. The idea is that messages are events that require handling, so you cannot choose which event you look at next.
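As a minimal sketch, assuming service A's queue is named dbo.ServiceAQueue: a reader for service A simply targets that queue (shown here from C# with System.Data.SqlClient; the connection string is a placeholder):

    using System;
    using System.Data.SqlClient;

    class ServiceAReader
    {
        static void Main()
        {
            // RECEIVE from service A's queue only; a reader for service B
            // would name service B's queue here instead.
            const string sql = @"
                WAITFOR (
                    RECEIVE TOP (1)
                        conversation_handle,
                        message_type_name,
                        service_name,  -- projects which service the message is for
                        CAST(message_body AS NVARCHAR(MAX)) AS body
                    FROM dbo.ServiceAQueue
                ), TIMEOUT 5000;";

            using var connection = new SqlConnection("Server=.;Database=Broker;Integrated Security=true");
            connection.Open();
            using var command = new SqlCommand(sql, connection);
            using var reader = command.ExecuteReader();
            while (reader.Read())
            {
                Console.WriteLine($"{reader["service_name"]}: {reader["body"]}");
            }
        }
    }

In production the RECEIVE would sit inside a transaction along with END CONVERSATION handling, but the point stands: the queue you name in FROM is what scopes the read to one service.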