Message Queue Where Messages Can Be Filtered By Tags - message-queue

I'm in need of a message queue where I can associate messages with tags and receive only the messages that are associated with a certain tag.
For example, let's say that {id: 1, tags: ["tag1", "tag2"]} is a message with id 1 that is associated with the tags "tag1" and "tag2". I would like to receive message 1 when I ask the queue for "tag1" or "tag2", but not when I ask for "tag3".
I also need this feature to support one-time delivery, meaning that once I have received the message above, it won't be served again when "tag1" or "tag2" is asked for (at least within the visibility timeout).
An MQ that enables filtering messages by a user-defined property would also work, but it should guarantee one-time delivery of the message. So routing in AMQP (such as in RabbitMQ) would not work for me, since I believe it creates a copy of the message in each queue.
I've investigated several MQ implementations (RabbitMQ, ActiveMQ, SQS, MSMQ, etc.) but failed to find this feature. Is there an MQ that supports this type of message filtering?

Since you were looking at RabbitMQ, ActiveMQ, SQS, and MSMQ, you may also be interested in checking out ZeroMQ, nanomsg, or YAMI4.
They have a PUB/SUB mechanism with filtering capabilities on the client side.
The client can receive only the messages with a particular tag.
Listening to many tags can be arranged by using dedicated threads or multiple connections.
PUB/SUB in ZeroMQ
PUB/SUB in nanomsg
PUB/SUB in YAMI4
I use nanomsg in production in a C/C++ application and in a Java app with the Java bindings.
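If ZeroMQ fits, one way to model this is to publish each message with its tag as the topic prefix and let the subscriber filter on that prefix. Below is a minimal sketch using the JeroMQ Java bindings (the endpoint address, tag name and payload are placeholder assumptions, not from the original question):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class TagSubscriber {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
            sub.connect("tcp://localhost:5556");         // publisher endpoint (placeholder)
            sub.subscribe("tag1".getBytes(ZMQ.CHARSET)); // prefix filter: only frames starting with "tag1" are delivered
            while (!Thread.currentThread().isInterrupted()) {
                String frame = sub.recvStr();            // e.g. "tag1 {id:1,...}"
                System.out.println("received: " + frame);
            }
        }
    }
}

Keep in mind that plain PUB/SUB delivers a copy to every matching subscriber, so for the one-time-delivery requirement you would still want a single consumer (or a push/pull pipeline) per tag.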

Related

Reading smart contract from malicious node

With smart contracts, I know transactions are verified by multiple nodes, but reading only requires one node. What if that one node is malicious and gives out corrupted data? Is this possible?
Yes, it is technically possible for a node to be malicious and to return modified results (to either all queries or just selected ones).
Apart from non-technical ways to minimize the risk of retrieving data from a malicious node (e.g. requesting data only from reputable providers, ...), you can set up your own node that you control. Here are two widely used open-source Ethereum clients that you can run on your machine:
https://geth.ethereum.org/docs/getting-started
https://openethereum.github.io/index
Both are capable of communicating with external apps using the standardized JSON RPC API (there are wrappers over this API, for example web3 and ethers.js libraries).
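For illustration, once your own node (e.g. geth) is running with its JSON-RPC endpoint enabled, reads go against a machine you control. Here is a minimal sketch using web3j, a Java counterpart to the web3/ethers.js wrappers mentioned above (the local endpoint URL and the zero address are placeholders):

import java.math.BigInteger;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.http.HttpService;

public class OwnNodeQuery {
    public static void main(String[] args) throws Exception {
        // Talk to your own node instead of a third-party provider.
        Web3j web3 = Web3j.build(new HttpService("http://127.0.0.1:8545"));

        BigInteger block = web3.ethBlockNumber().send().getBlockNumber();
        System.out.println("latest block: " + block);

        // Example read-only query: balance of an address at the latest block.
        BigInteger balance = web3
                .ethGetBalance("0x0000000000000000000000000000000000000000", DefaultBlockParameterName.LATEST)
                .send()
                .getBalance();
        System.out.println("balance (wei): " + balance);

        web3.shutdown();
    }
}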

Implement Spring JMSTemplate without acknowledgement

We have a requirement to build a Spring Boot command-line application that sends messages to a queue.
Only a request queue has been set up.
As there is no response queue set up, we don't get any acknowledgement from the client side as to whether they received a message or not.
Right now I am using Spring's JmsTemplate send() method to send messages to the request queue, and SingleConnectionFactory to create one shared connection, as this is a command-line application.
As there is no acknowledgement/response to the messages we send to the request queue, end-to-end testing is difficult.
If a connection to the destination/request queue is obtained and the message is sent without any exception, I consider it a successful test.
Is it right to use only Spring JmsTemplate's send() method and not follow the JmsTemplate send/receive pattern?
Note: it is not possible to set up a response queue and get any acknowledgement from the client side.
In JMS (and in most other messaging systems) producers and consumers are logically separated (i.e. de-coupled). This is part of the fundamental design of the system to reduce complexity and increase scalability. With these constraints your producers shouldn't care whether or not the message is consumed. The producers simply send messages. Likewise, the consumers shouldn't care who sends the messages or how often, etc. Their job is simply to consume the messages.
Assuming your application is actually doing something with the message (i.e. there is some kind of functional output of message processing) then that is what your end-to-end test should measure. If you get the ultimate result you're looking for then you may deduce that the steps in between (e.g. sending a message, receiving a message, etc.) were completed successfully.
To be clear, it's perfectly fine to send a message with Spring's JMSTemplate without using a request/response pattern. Generally speaking, if you get no exceptions then that means the message was sent successfully. However, there are other caveats when using JMSTemplate. For example, Spring's JavaDoc says this:
The ConnectionFactory used with this template should return pooled Connections (or a single shared Connection) as well as pooled Sessions and MessageProducers. Otherwise, performance of ad-hoc JMS operations is going to suffer.
That said, it's important to understand the behavior of your specific JMS client implementation. Many implementations will send non-persistent JMS messages asynchronously (i.e. fire and forget) which means they may not make it to the broker and no exception will be thrown on the client. Sending persistent messages is generally sufficient to guarantee that the client will throw an exception in the event of any problem, but consult your client implementation documentation to confirm.
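To illustrate both points (a connection factory that caches the connection, session and producer, and persistent delivery), here is a minimal sketch; the ActiveMQ 5 broker URL and queue name are assumptions, not part of the original question, and it presumes the javax-based spring-jms and activemq-client dependencies:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class RequestQueueSender {
    public static void main(String[] args) {
        // Cache the connection, session and producer instead of opening them for every send().
        CachingConnectionFactory connectionFactory =
                new CachingConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));

        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        jmsTemplate.setExplicitQosEnabled(true);
        jmsTemplate.setDeliveryPersistent(true); // persistent sends are more likely to surface broker problems as exceptions

        try {
            jmsTemplate.convertAndSend("request.queue", "hello");
            // No exception here means the message was handed to the broker successfully.
        } finally {
            connectionFactory.destroy(); // release the cached JMS resources
        }
    }
}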

How to serialize a MassTransit message to JSON (envelope and all) and store it, so that it can be sent at a later time without any changes

I am trying to find a good way to serialize MassTransit messages (envelope and all) and store the messages outside of MassTransit. I then want to be able to send the messages at some point in time with MassTransit without any extra serialization.
Is there a way to do this with MassTransit and RabbitMQ?
You can use Quartz.NET; MassTransit has built-in support for scheduling messages with it.
The usage is documented: http://docs.masstransit-project.com/en/latest/scheduling/scheduling_api.html
There is also a self-hosted Quartz service (using Topshelf):
https://github.com/MassTransit/MassTransit/tree/develop/src/MassTransit.Host.Quartz

Reliability of Spring Integration ESB

How is the reliability of message transmission protected in Spring Integration?
For example, what happens if the server crashes while messages are being transformed in a router, or if message processing fails in a splitter or transformer?
How does the mechanism handle those situations? Are there any references or documents?
Any help will be appreciated!
Also, if your entry point is a channel adapter or gateway that supports transactions (e.g. JMS, AMQP, JDBC, JPA,..) and you use default channels, the entire flow will take place within the scope of that transaction, as the transaction context is bound to the thread. If you add any buffering channels or a downstream aggregator, then you would want to consider what Gary mentioned so that you are actually completing the initial transaction by handing responsibility to another reliable resource (as opposed to leaving a Message in an in-memory Map and then committing, for example).
Hope that makes sense.
Shameless plug: there's a good overview of transactions within the Spring Integration in Action book, available now through MEAP: http://manning.com/fisher/
Regards,
Mark
By default, messages are held in memory but you can declare channels to be persistent, as needed. Persistent channels use JMS, AMQP (rabbit), or a message store. A number of message stores are provided, including JDBC, MongoDB, Redis, or you can construct one that uses a technology of your choice.
http://static.springsource.org/spring-integration/docs/2.1.1.RELEASE/reference/html/
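As a concrete illustration of a persistent channel, here is a minimal Java-config sketch that backs a QueueChannel with the JDBC channel message store available in later Spring Integration versions (the DataSource, the H2 query provider and the bean names are assumptions, and the store's DDL must already be applied; the 2.1-era reference linked above shows the equivalent XML configuration):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.jdbc.store.JdbcChannelMessageStore;
import org.springframework.integration.jdbc.store.channel.H2ChannelMessageStoreQueryProvider;
import org.springframework.integration.store.MessageGroupQueue;

@Configuration
public class PersistentChannelConfig {

    @Bean
    public JdbcChannelMessageStore messageStore(DataSource dataSource) {
        // Messages queued on the channel are written to the database instead of held in memory.
        JdbcChannelMessageStore store = new JdbcChannelMessageStore(dataSource);
        store.setChannelMessageStoreQueryProvider(new H2ChannelMessageStoreQueryProvider());
        return store;
    }

    @Bean
    public QueueChannel persistentChannel(JdbcChannelMessageStore messageStore) {
        // The group id ("persistentChannel") keys this channel's messages in the store,
        // so they survive a crash and are replayed when the application restarts.
        return new QueueChannel(new MessageGroupQueue(messageStore, "persistentChannel"));
    }
}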

What are the best practices to log an error?

Many times I have seen error logging like this:
System.out.println("Method aMethod with parameters a:"+a+" b: "+b);
print("Error in line 88");
So... what are the best practices to log an error?
EDIT:
This is Java, but it could be C/C++, Basic, etc.
Logging directly to the console is horrendous and, frankly, the mark of an inexperienced developer. The only reasons to do this sort of thing are that 1) the developer is unaware of other approaches, and/or 2) the developer has not thought one bit about what will happen when the code is deployed to a production site, and how the application will be maintained at that point. Dealing with an application that produces 1 GB/day or more of completely unneeded debug logging is maddening.
The generally accepted best practice is to use a Logging framework that has concepts of:
Different log objects - Different classes/modules/etc can log to different loggers, so you can choose to apply different log configurations to different portions of the application.
Different log levels - so you can tweak the logging configuration to only log errors in production, to log all sorts of debug and trace info in a development environment, etc.
Different log outputs - the framework should allow you to configure where the log output is sent to without requiring any changes in the codebase. Some examples of different places you might want to send log output to are files, files that roll over based on date/size, databases, email, remoting sinks, etc.
The log framework should never never never throw any exceptions or errors from the logging code. Your application should not fail to load or fail to start because the log framework cannot create its log file or obtain a lock on the file (unless this is a critical requirement, perhaps for legal reasons, for your app).
The eventual log framework you will use will of course depend on your platform. Some common options:
Java:
Apache Commons Logging
log4j
logback
Built-in java.util.logging
.NET:
log4net
C++:
log4cxx
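To make those three points concrete, here is a minimal sketch using the SLF4J facade over Logback (the package, class and messages are illustrative): each class gets its own logger, and the level and output destination are decided purely by configuration, not by code.

package com.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // One logger per class, so "com.example.OrderService" can be configured independently.
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        log.debug("Placing order {}", orderId); // only emitted if DEBUG is enabled for this logger
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            log.error("Failed to place order {}", orderId, e); // stack trace goes to whatever appender is configured
        }
    }
}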
Apache Commons Logging is not intended for an application's general logging. It's intended to be used by libraries or APIs that don't want to force a logging implementation on the API's users.
There are also classloading issues with Commons Logging.
Pick one of the [many] logging APIs, the most widely used probably being log4j or the Java Logging API.
If you want implementation independence, you might want to consider SLF4J, by the original author of log4j.
Having picked an implementation, then use the logging levels/severity within that implementation consistently, so that searching/filtering logs is easier.
The easiest way to log errors in a consistent format is to use a logging framework such as Log4j (assuming you're using Java). It is useful to include a logging section in your code standards to make sure all developers know what needs to be logged. The nice thing about most logging frameworks is they have different logging levels so you can control how verbose the logging is between development, test, and production.
A best practice is to use the java.util.logging framework
Then you can log messages in either of these formats
log.warning("..");
log.fine("..");
log.finer("..");
log.finest("..");
Or
log.log(Level.WARNING, "blah blah blah", e);
Then you can use a logging.properties (example below) to switch between levels of logging, and do all sorts of clever stuff like logging to files, with rotation etc.
handlers = java.util.logging.ConsoleHandler
.level = WARNING
java.util.logging.ConsoleHandler.level = ALL
com.example.blah.level = FINE
com.example.testcomponents.level = FINEST
Frameworks like log4j and others should be avoided, in my opinion; Java already has everything you need.
EDIT
This can apply as a general practice for any programming language. Being able to control all levels of logging from a single property file is often very important in enterprise applications.
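For completeness, the logger itself is usually obtained once per class, and the properties file is picked up via a system property. A minimal sketch (the package, class and file names are illustrative):

package com.example.blah;

import java.util.logging.Level;
import java.util.logging.Logger;

public class PaymentProcessor {
    // Logger named after the class, so it can be tuned from logging.properties (com.example.blah.level = FINE).
    private static final Logger log = Logger.getLogger(PaymentProcessor.class.getName());

    public void process(String id) {
        log.fine("processing " + id);
        try {
            // ... work ...
        } catch (RuntimeException e) {
            log.log(Level.WARNING, "processing failed for " + id, e);
        }
    }
}

Run the application with -Djava.util.logging.config.file=logging.properties so the JVM loads the configuration shown above.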
Some suggested best-practices
Use a logging framework. This will allow you to:
Easily change the destination of your log messages
Filter log messages based on severity
Support internationalised log messages
If you are using Java, then SLF4J is now preferred to Jakarta Commons Logging as the logging facade.
As stated, SLF4J is a facade, and you then have to pick an underlying implementation: either log4j, java.util.logging, or 'simple'.
Follow your framework's advice for ensuring that expensive logging operations are not needlessly carried out (see the sketch below).
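For example, with SLF4J the usual advice is to prefer parameterized messages, and to guard genuinely expensive work with a level check. A small sketch (expensiveDump() is a made-up placeholder for a costly operation):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CartLogger {
    private static final Logger log = LoggerFactory.getLogger(CartLogger.class);

    void logCart(String userId, Object cart) {
        // Parameterized message: the formatted string is only built if DEBUG is enabled.
        log.debug("Cart state for user {}: {}", userId, cart);

        // Explicit guard for genuinely expensive work (serialization, large dumps, ...).
        if (log.isDebugEnabled()) {
            log.debug("Full cart dump: {}", expensiveDump(cart));
        }
    }

    private String expensiveDump(Object cart) { // placeholder for an expensive operation
        return String.valueOf(cart);
    }
}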
The Apache Commons Logging API mentioned above is a great resource. Referring back to Java, there is also a standard error output stream (System.err).
Directly from the Java API:
This stream is already open and ready to accept output data. Typically this stream corresponds to display output or another output destination specified by the host environment or user. By convention, this output stream is used to display error messages or other information that should come to the immediate attention of a user even if the principal output stream, the value of the variable out, has been redirected to a file or other destination that is typically not continuously monitored.
Aside from the technical considerations in other answers, it is advisable to log a meaningful message and perhaps some steps to avoid the error in the future. Depending on the error, of course.
You get more out of an I/O error when the message states something like "Could not read from file X, you don't have the appropriate permission."
See more examples on SO or search the web.
There really is no single best practice for logging an error. It basically just needs to follow a consistent pattern (within the software/company/etc.) that provides enough information to track the problem down. For example, you might want to keep track of the time, the method, the parameters, the calling method, etc.
So long as you don't just print "Error in "