Let's say a user accesses a resource and it maps to a handler foo().
In foo() I want to check if the user's session is valid.
For maximum decoupling (and for the sake of the example) I put the provided session ID into a message and push that into the queue VERIFY_SESSION where it is picked up by a subscribed worker.
The worker takes the session ID out of the message, checks the database, etc and then adds some data indicating the session is valid to the message before pushing it to VERIFIED_SESSIONS.
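A minimal sketch of that flow, assuming a RabbitMQ-style broker accessed with the pika library; the queue names follow the description above, and the host, message shape, and database check are purely illustrative.

import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="VERIFY_SESSION")
ch.queue_declare(queue="VERIFIED_SESSIONS")

# Frontend handler foo(): push the session ID into VERIFY_SESSION and carry on.
def foo(session_id):
    ch.basic_publish(exchange="", routing_key="VERIFY_SESSION",
                     body=json.dumps({"session_id": session_id}))

# Verification worker: take the session ID out, check it, enrich the message,
# and push the result to VERIFIED_SESSIONS.
def on_verify(channel, method, properties, body):
    msg = json.loads(body)
    msg["valid"] = True  # stand-in for the real database check
    channel.basic_publish(exchange="", routing_key="VERIFIED_SESSIONS",
                          body=json.dumps(msg))
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="VERIFY_SESSION", on_message_callback=on_verify)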
Question:
How do I get the information that the session is valid back into the worker that handles the user's connection?
If I subscribe all frontend workers to the queue VERIFIED_SESSIONS, there is no way of telling which worker will receive it.
All I can think of would be to basically implement RPC on top of the message queue, but that would defeat the purpose of having the queue to begin with.
What is the common pattern here?
I am searching for an efficient way to track contract transactions.
Specifically, I want to receive an immediate notification when a specific function with a specific parameter is executed.
Any ideas or suggestions?
The most efficient approach is to listen for new blocks and fetch every transaction in the block via GraphQL (I assume you use geth), which gives you the block header, transactions, and transaction receipts in a single HTTP call.
From there you can ABI decode any transaction input which matches your function signature to obtain the function parameters and join that with the tx status from the receipt.
I am personally writing a similar component (https://github.com/grassrootseconomics/cic-chain-events/) to track ERC20 transfers and notify users (SMS, Telegram, etc.). You can borrow and extend concepts from it.
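For illustration, here is a minimal sketch of the block-fetch-over-GraphQL idea in Python, assuming geth is running with --graphql enabled on localhost and using the requests library; the function selector is a placeholder (the ERC20 transfer selector) and error handling is omitted.

import requests

# With no arguments, block returns the latest block in geth's GraphQL schema.
QUERY = """
{
  block {
    number
    hash
    transactions {
      hash
      to { address }
      inputData
      status
      gasUsed
    }
  }
}
"""

resp = requests.post("http://localhost:8545/graphql", json={"query": QUERY})
block = resp.json()["data"]["block"]

TARGET_SELECTOR = "0xa9059cbb"  # placeholder: selector of transfer(address,uint256)

for tx in block["transactions"]:
    if tx["inputData"].startswith(TARGET_SELECTOR):
        print(tx["hash"], "status:", tx["status"])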
Run your own node
Subscribe over WebSocket to receive a notification for every new transaction
Check if the transaction matches your parameters
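A minimal sketch of that loop, assuming a local node and the web3.py library; it uses a block filter rather than a true WebSocket push subscription, and the contract address and function selector are placeholders.

import time
from hexbytes import HexBytes
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))   # your own node (placeholder URL)

CONTRACT = "0x0000000000000000000000000000000000000000"        # placeholder contract address
SELECTOR = Web3.keccak(text="transfer(address,uint256)")[:4]   # placeholder function

new_blocks = w3.eth.filter("latest")
while True:
    for block_hash in new_blocks.get_new_entries():
        block = w3.eth.get_block(block_hash, full_transactions=True)
        for tx in block.transactions:
            to = (tx["to"] or "").lower()
            if to == CONTRACT.lower() and HexBytes(tx["input"])[:4] == SELECTOR:
                receipt = w3.eth.get_transaction_receipt(tx["hash"])
                print("matched", tx["hash"].hex(), "status:", receipt["status"])
    time.sleep(1)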
If the inputs that your user provides are emitted in events within your smart contract function, then it is possible to listen to those events (and thus your function call) and the corresponding transaction in real time with the Moralis Streams API.
Essentially, to work with this you will need a webhook endpoint where Moralis will be able to stream those events and transaction data constantly. To test it out really quickly, you can use https://webhook.site as a test webhook.
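For illustration, a minimal webhook endpoint of your own could look like this sketch (Flask assumed); no Moralis-specific payload fields are assumed, it simply accepts and prints whatever the stream posts.

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    payload = request.get_json(silent=True) or {}
    # Inspect the streamed event/transaction data here.
    print(payload)
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)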
To get started with Streams API, you can follow this tutorial right here https://docs.moralis.io/streams-api/getting-started
Hope this helps!
I would like to create web service(s) that I can publish to an external-facing network to allow our customers' teams to send us CRUD operations on customer orders.
What would be the best practice in this case, using Microsoft or open-source technologies, to serve the customer requests?
Option 1:
The web service accepts data XML/JSON
Stores the data locally in a file
A task picks up the file and attempts to load the data in the background
Send an email for records that failed
The drawback here is that the response from the web service will not be real-time and validation will be limited.
Option 2:
The web service accepts data XML/JSON
Attempts to load the data
Responds immediately with success or failure
The drawback here is whether the infrastructure can handle it if the volume of orders increases severalfold in the near future.
I am open to using REST with WCF or Web API and any other helpful technologies that can be scaled when demand grows.
Have you tried message queueing?
Basically, in this architecture there is a client application (called the producer) that submits a message to the message broker (message queue), and there is another application (called the consumer) that connects to the broker and subscribes to receive and process the messages.
The message can be just simple information or a task that will be processed by another application.
The application can act both as producer and consumer.
There are many message queue products; one of them is RabbitMQ.
Here is the complete intro about this: https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
Since the communication goes through a middleman (the message queue), it will not provide an immediate response. But you don't need to send the processing result (i.e. order processing in your case) by email, since the application can subscribe to a message carrying the result.
It is well suited to handling a huge processing load. As always, you can start small (even free) and scale up in the future.
Take a look at the pricing details at https://www.cloudamqp.com/, which provides RabbitMQ software as a service.
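For illustration, a minimal producer/consumer sketch with the pika client might look like this; the queue name "orders", the localhost broker, and the order payload are assumptions.

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)

# Producer side: the web service publishes each incoming order as a persistent message.
order = {"order_id": 42, "customer": "acme", "items": [{"sku": "A1", "qty": 3}]}
channel.basic_publish(exchange="", routing_key="orders",
                      body=json.dumps(order),
                      properties=pika.BasicProperties(delivery_mode=2))

# Consumer side: a background worker processes orders at its own pace.
def handle_order(ch, method, properties, body):
    print("processing", json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=handle_order)
channel.start_consuming()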
I do this using ActiveMQ as a central message broker (you can also use Azure Service Bus if you have an Azure subscription) and pre-baked domain objects. For your scenario it might look like:
The web service accepts data XML/JSON
yes, you have a REST service that accepts multipart requests, let's say JSON as it's easier to work with on the client side. Before you can send messages it's usually best to convert the incoming client message to a domain message, so all message consumers know the exact format to expect and can therefore validate the message. I usually create these using xsd.exe on Windows with an XSD file that describes the format of the object; xsd.exe turns that XSD into a C# class. It's then just a process of taking the JSON fields and populating an Order class. That Order then gets sent as a message to the broker. At that point you're in guaranteed messaging land, as JMS will take care of that and ActiveMQ will take care of message persistence.
Stores the data locally in a file
rather than a file, you convert the incoming JSON to a domain class, e.g. an Order instance. You'll never see JSON or XML beyond this point as it's all domain classes from here.
A task picks up the file and attempts to load the data in the background
yes, the broker has routes defined in a Camel config that tell it to, for example, send messages coming in on the /client topic to the /orders topic. The task is set up as a durable topic subscriber, so it automatically gets that Order domain object.
Send an email for records that failed
if the Order object contains information about the client (email, etc.) then the task can send the email on failure, but a better pattern is to route the failed Order to the /error topic, where a different task, again a durable topic subscriber, picks it up and logs it/sends email/audits, etc.
if the volume of orders increases severalfold in the near future
you can cluster the brokers and run multiple Order consumers. If you separate the failure handling into another route, all the order task has to do is process the order and route the message to either the /error or /success topic depending on the outcome. So each route provides a small piece of the puzzle and you can scale up the pieces if the puzzle gets too big.
I'm working on a project that consumes the ServiceNow REST API. To do so, our client has registered us as a user so we can log in and make all the service calls we need. This project has an interface where users can log in once they have an account on ServiceNow as well; the username they type to log in has nothing to do with ServiceNow, by the way, but later they associate their ServiceNow users with it. They can do some operations through this interface, all of which are done using the integration user/pass rather than their own ServiceNow users, since they do not need to share their passwords with us. But I need to track the correct user to register on ServiceNow, and I'm in trouble specifically with commenting on an incident. The endpoint to comment is the following:
http://hostname/api/now/table/incident/{sys_id}
where the request body is a JSON object as simple as:
{
"comments": "My comment is foo bar"
}
But when this comment is registered on ServiceNow it is under the integration user instead of the user who commented. Is there any way I could attribute it to a specific user, considering I already have that user's ServiceNow ID ready to send on the request however it should be?
I tried reading the ServiceNow documentation but had no clue how to solve it, although I've found something about impersonation.
This is happening because you're being proxied through the "Integration User" instead of your own account. As long as this is the case, your comments are going to be attributed to the Integration User.
I can think of two ways to fix this issue.
Ask the client to log you into their system directly as a user.
Implement a special API (Scripted REST API, available in Geneva or later) that allows you to identify the Incident and enter the comment, and then the script forges the comment on your behalf, attributing authorship correctly.
The first solution can be expensive due to possible additional licensing costs.
The second solution will require a willing client to devote 2-3 hours of development time, depending on the programmer.
Firstly, you need an integration user with sufficient rights. Our integration user has sufficient rights out of the box, but your situation could be different. A quick check is to try impersonating another user via the menu.
Log in to the ServiceNow instance as the integration user.
Go to https://{instance}.service-now.com/nav_to.do
Click on username at top right corner. This is a drop down.
There should be at least three menu items: "Profile", "Impersonate User", and "Logout". If you do not have "Impersonate User" in this menu, your integration user is missing some permissions; contact a system administrator to configure the appropriate permissions.
Then you need to find the sys_id of the user you want to impersonate. For example:
https://{instance}.service-now.com/api/now/table/sys_user?sysparm_query=user_name={username}&sysparm_fields=sys_id
If you have sufficient privileges, you can invoke the following endpoint with the sys_id of the user you want to impersonate:
HTTP POST to https://{instance}.service-now.com/api/now/ui/impersonate/{user_sys_id} with body "{}" and content type "application/json". You need to provide HTTP basic authentication to this query as your integration user.
The response code on success is 200, and the response body can be ignored. The interesting result of this response is a set of cookies for the impersonated user in the response headers. These cookies can be used for subsequent REST API calls until they expire. Use whatever mechanism your HTTP/REST client provides to capture them and pass them to the next calls.
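For illustration, a minimal sketch of that cookie-based flow using Python's requests library; the instance name, credentials, target username, and incident sys_id are placeholders.

import requests

instance = "your-instance"                      # placeholder
auth = ("integration.user", "secret")           # integration user credentials (placeholder)
base = f"https://{instance}.service-now.com"

session = requests.Session()                    # the Session keeps the impersonation cookies

# 1. Find the sys_id of the user to impersonate (username "jdoe" is a placeholder).
r = session.get(f"{base}/api/now/table/sys_user",
                params={"sysparm_query": "user_name=jdoe", "sysparm_fields": "sys_id"},
                auth=auth)
user_sys_id = r.json()["result"][0]["sys_id"]

# 2. Impersonate; the useful output is the cookie set, which the Session captures.
session.post(f"{base}/api/now/ui/impersonate/{user_sys_id}", json={}, auth=auth)

# 3. Subsequent calls ride on those cookies until they expire (re-impersonate on 401).
incident_sys_id = "INCIDENT_SYS_ID"             # placeholder
session.patch(f"{base}/api/now/table/incident/{incident_sys_id}",
              json={"comments": "My comment is foo bar"})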
For Apache HTTP Client (Java), I'm creating http client context using:
HttpClientContext context = HttpClientContext.create();
context.setCookieStore(new BasicCookieStore());
Pass this context to the impersonation request and to subsequent API calls until you get a 401 reply; after that, reacquire the cookies. Setting a new cookie store is important, as otherwise some default cookie store is used.
Two things to note:
This API looks like an internal one, so it could change at any time. If that happens, look at what the "Impersonate User" menu item does and replicate it yourself.
ServiceNow permissions are quite fine-grained, so the target user could lack the permissions to perform the operation. In some cases, if there is no permission to update a field, a PATCH on the object returns a 200 response but the field is not updated. This introduces a surprising failure mode when you use impersonation.
I'm reverse engineering some code that sends a message to an Amazon SQS queue. I know the name of the queue, and can find it in my AWS console. However, I don't know what is subscribed to the queue. I'd like to see how the message is being processed. Is there an easy way to find that? I can't see anything in the console, or in the CLI... I was hoping for something comparable to rabbitmqctl, which can show you a list of subscribers.
You don't subscribe to an SQS queue. SQS queues have listeners that poll (usually long-poll) the queue for messages.
Anybody anywhere with valid, authorized credentials possessing the permission to receive messages from the queue can poll it (or not poll it) at any time.
Queues don't have subscribers; topics (as in SNS) have subscribers, and messages are broadcast to all subscribers each time a message is published.
There are several CloudWatch metrics for SQS queues that you can use to determine whether the queue is being polled, but the interaction between listeners and queues is a different model than some other message queue platforms, where listeners maintain a persistent connection to the queue (and can therefore potentially be enumerated). An SQS listener connects, receives any available messages up to the maximum allowed or requested, disconnects¹, processes the message(s), then reconnects to delete the messages (otherwise the messages eventually become visible again for another listener, or the same one, to receive). SQS has no concept of "who" is listening, because everything works over HTTP, which is, of course, stateless.
¹Of course, with HTTP keep-alive, the listener may not technically disconnect the TCP connection to the SQS API endpoint, but there is no state preserved when this happens and SQS has no sense that the listener is "still connected."
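For illustration, a minimal SQS "listener" sketch with boto3; the queue URL is a placeholder, and it follows the receive/process/delete cycle described above.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)   # long poll
    for msg in resp.get("Messages", []):
        print("processing", msg["Body"])
        # Delete only after successful processing; otherwise the message becomes
        # visible again after the visibility timeout and any poller can take it.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])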
When sending messages with SSB, we initialize conversations by specifying the to and from services.
But when reading, all we do is RECEIVE without specifying services. So how do I make sure that I read messages that are only for service X?
Or have I missed something fundamental?
To RECEIVE for service A, RECEIVE from service A's queue. To RECEIVE from service B, RECEIVE from service B's queue.
You should only place two services on the same queue if the processing is identical and you really do not care which service the message belongs to. You can even project the service name in the RECEIVE result set, so you can know whether a message belongs to A or B, if that is important for processing. As a general rule there is no way to declare 'RECEIVE messages that meet criteria X and ignore the rest'. The idea is that messages are events that require handling, so you cannot choose which event you look at next.