Implement Spring JmsTemplate without acknowledgement

Our requirement is to build a Spring Boot command-line application that sends messages to a queue.
Only a request queue has been set up.
As there is no response queue, we get no acknowledgement from the client side indicating whether they received a message or not.
Right now I am using Spring's JmsTemplate send() method to send messages to the request queue, and a SingleConnectionFactory to create one shared connection, as this is a command-line application.
Because there is no acknowledgement/response to the messages we send to the request queue, end-to-end testing is difficult.
If a connection to the destination/request queue is obtained and the message is sent without any exception, I consider it a successful test.
Is it right to implement only Spring's JmsTemplate send() method, and not follow the JMS template send/receive pattern?
Note: it is not possible to set up a response queue or get any acknowledgement from the client side.

In JMS (and in most other messaging systems) producers and consumers are logically separated (i.e. de-coupled). This is part of the fundamental design of the system to reduce complexity and increase scalability. With these constraints your producers shouldn't care whether or not the message is consumed. The producers simply send messages. Likewise, the consumers shouldn't care who sends the messages or how often, etc. Their job is simply to consume the messages.
Assuming your application is actually doing something with the message (i.e. there is some kind of functional output of message processing) then that is what your end-to-end test should measure. If you get the ultimate result you're looking for then you may deduce that the steps in between (e.g. sending a message, receiving a message, etc.) were completed successfully.
To be clear, it's perfectly fine to send a message with Spring's JmsTemplate without using a request/response pattern. Generally speaking, if you get no exceptions, the message was sent successfully. However, there are other caveats when using JmsTemplate. For example, Spring's JavaDoc says this:
The ConnectionFactory used with this template should return pooled Connections (or a single shared Connection) as well as pooled Sessions and MessageProducers. Otherwise, performance of ad-hoc JMS operations is going to suffer.
That said, it's important to understand the behavior of your specific JMS client implementation. Many implementations will send non-persistent JMS messages asynchronously (i.e. fire and forget) which means they may not make it to the broker and no exception will be thrown on the client. Sending persistent messages is generally sufficient to guarantee that the client will throw an exception in the event of any problem, but consult your client implementation documentation to confirm.
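The "no exception means sent" rule above can be sketched in plain Java. The `MessageSender` interface below is a hypothetical stand-in for whatever wraps the `JmsTemplate.send()` call, not a real Spring API:

```java
// Sketch: a send attempt is considered successful iff no exception is thrown,
// which is the strongest guarantee available without a response queue.
// MessageSender is a hypothetical stand-in for code calling JmsTemplate.send().
interface MessageSender {
    void send(String body) throws Exception;
}

public class SendCheck {
    static boolean trySend(MessageSender sender, String body) {
        try {
            sender.send(body);
            return true;   // no exception: treat as delivered to the broker
        } catch (Exception e) {
            // With persistent delivery the client should raise an exception
            // for any transport problem, so this branch means "not sent".
            return false;
        }
    }

    public static void main(String[] args) {
        MessageSender ok = body -> { /* pretend the broker accepted it */ };
        MessageSender broken = body -> { throw new Exception("connection lost"); };
        System.out.println(trySend(ok, "hello"));      // true
        System.out.println(trySend(broken, "hello"));  // false
    }
}
```

Remember from the caveat above that this check is only as strong as your client's send semantics: with asynchronous, non-persistent sends, the absence of an exception proves less.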

Why should I build my own error/exception handling into a Webflux application?

When there is some internal exception in a Webflux application, why do I want/need to write code to handle these exceptions? I understand handling issues and returning appropriate ServerResponse bodies when the service client incorrectly invokes a service, or when a non-error-condition (i.e., query returns empty cursor, etc.) occurs.
But, other than generating debug information into a logfile, is there anything to be gained by rolling-your-own exception handling components? This approach makes "more sense" to me in a monolithic application, where one is trying to avoid a scenario where the app "just dies".
But, for a service implementation, especially if there's some incentive not to expose too much about the internal implementation to a client, why wouldn't Spring's default error/exception handling (and its "500 Internal Server Error" response/message) be sufficient?
So, after some time and thought (and a little, but still helpful and appreciated, feedback), I guess it boils down to:
(a) - It provides a localized context to "do things", like logging information about the exception/error condition, or categorizing the severity of the exception within-the-context of a server-client interaction.
(b) - It provides a localized context to hide/expose information from a client, based on the nature of the exception/error condition and whether the server is deployed in a production or test environment.
(c) - Being localized, it makes maintenance/modification a bit easier, as the handling of exceptions/errors is not scattered throughout the code.
(a) and (c) are enough to make me believe it's worth the effort.
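Points (a) through (c) can be illustrated with a small, localized mapping function that logs internally but controls what the client sees. Everything here (`ErrorView`, the status codes, the `production` flag) is illustrative, not a Spring API:

```java
// Sketch of localized exception handling: one place to log (a), decide what
// to expose (b), and maintain the mapping (c). Not a real Spring/Webflux API.
public class ErrorMapper {
    record ErrorView(int status, String clientMessage) {}

    static ErrorView toClientView(Exception e, boolean production) {
        // (a) single place to log/categorize the failure
        System.err.println("handled: " + e);
        if (e instanceof IllegalArgumentException) {
            return new ErrorView(400, "Invalid request");
        }
        // (b) hide internals in production, expose detail in test
        return new ErrorView(500, production ? "Internal error" : e.toString());
    }

    public static void main(String[] args) {
        System.out.println(toClientView(new IllegalArgumentException("bad id"), true).clientMessage());
        System.out.println(toClientView(new RuntimeException("db down"), true).clientMessage());
        System.out.println(toClientView(new RuntimeException("db down"), false).clientMessage());
    }
}
```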

Message Queue Where Messages Can Be Filtered By Tags

I'm in need of a message queue where I can associate messages with tags and receive only the messages that are associated with a certain tag.
For example, let's say {id: 1, tags: ["tag1", "tag2"]} is a message with id 1 associated with tags "tag1" and "tag2". I would like to receive message 1 when I ask the queue for "tag1" or "tag2", but not for "tag3".
I also need this feature to support one-time delivery: once I receive the message above, it won't be served again when asked for "tag1" or "tag2" (at least within the visibility timeout).
An MQ which enables filtering messages by a user-defined property would also work, but it should guarantee one-time delivery of the message. So routing in AMQP (such as in RabbitMQ) would not work for me, since I believe it creates a copy of the message in each queue.
I've investigated several MQ implementations (RabbitMQ, ActiveMQ, SQS, MSMQ, etc.) but failed to find this feature. Is there an MQ which supports this type of message filtering?
Since you were looking at RabbitMQ, ActiveMQ, SQS, and MSMQ, you may also be interested in ZeroMQ, nanomsg, or YAMI4.
They have PUB/SUB mechanisms with filtering capabilities on the client side.
The client could receive messages with a particular tag.
Listening to many tags could be arranged by using dedicated threads or multiple connections.
PUB/SUB in ZeroMQ
PUB/SUB in nanomsg
PUB/SUB in YAMI4
I use nanomsg in production in a C/C++ application and in a Java app via the Java bindings.
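The client-side filtering these libraries do can be sketched in plain Java. This is a simulation of the idea, not the actual ZeroMQ/nanomsg API, and note that true one-time delivery across multiple competing consumers would still need broker support; this tracks delivery per client:

```java
import java.util.*;

// Plain-Java simulation of client-side tag filtering (the idea behind
// ZeroMQ/nanomsg SUB-socket subscriptions); not the real ZeroMQ API.
public class TagFilter {
    static class Message {
        final int id;
        final Set<String> tags;
        Message(int id, String... tags) {
            this.id = id;
            this.tags = new HashSet<>(Arrays.asList(tags));
        }
    }

    private final Set<String> subscriptions = new HashSet<>();
    private final Set<Integer> delivered = new HashSet<>(); // one-time delivery per client

    void subscribe(String tag) { subscriptions.add(tag); }

    // Deliver the message if any tag matches and it was not delivered before.
    boolean receive(Message m) {
        if (Collections.disjoint(subscriptions, m.tags)) return false; // no matching tag
        return delivered.add(m.id); // false if this id was already delivered once
    }

    public static void main(String[] args) {
        TagFilter client = new TagFilter();
        client.subscribe("tag1");
        Message m1 = new Message(1, "tag1", "tag2");
        System.out.println(client.receive(m1));                     // true: matches tag1
        System.out.println(client.receive(m1));                     // false: already delivered
        System.out.println(client.receive(new Message(2, "tag3"))); // false: no match
    }
}
```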

Put message works in spite of catching MQException with MQ Code 2009

I have a strange issue which is causing a serious double-booking problem for us.
We have MQ.NET code written in C# running on a Windows box with MQ Client v7.5. The code places messages on an MQ queue. Once in a while the "put" operation works and the message is placed on the queue, but an MQException is still thrown with reason code 2009.
In this case, the program assumes the put operation failed and places the same message on the queue again, which is not a desirable scenario. The assumption is that if the "put" resulted in an MQException, the operation failed. Any idea how to prevent this issue? See the client code below.
queue = queueManager.AccessQueue(queueName, MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);
queueMessage = new MQMessage();
queueMessage.CharacterSet = 1208; // UTF-8
// WriteString encodes using the message's CharacterSet; the original
// GetBytes/GetString round trip (and the unused UTF8Encoding) were redundant.
queueMessage.WriteString(strInputMsg);
queuePutMessageOptions = new MQPutMessageOptions();
queue.Put(queueMessage, queuePutMessageOptions);
Exception:
MQ Reason code: 2009, Exception: Error in the application.
StackTrace: at IBM.WMQ.MQBase.throwNewMQException()
at IBM.WMQ.MQDestination.Open(MQObjectDescriptor od)
at IBM.WMQ.MQQueue..ctor(MQQueueManager qMgr, String queueName, Int32 openOptions, String queueManagerName, String dynamicQueueName, String alternateUserId)
at IBM.WMQ.MQQueueManager.AccessQueue(String queueName, Int32 openOptions, String queueManagerName, String dynamicQueueName, String alternateUserId)
at IBM.WMQ.MQQueueManager.AccessQueue(String queueName, Int32 openOptions)
There is always an ambiguity of outcomes when using any async messaging over the network. Consider the following steps in the API call:
The client sends the API call to the server.
The server executes the API call.
The result is returned to the client.
Let's say the connection is lost prior to or during #1 above. The application gets the 2009 and the message is never sent.
But what if the connection is lost after #1? The outcome of #2 cannot possibly be returned to the calling application. Whether the PUT succeeded or failed, it always gets back a 2009. Maybe the message was sent and maybe it wasn't. The application probably should take the conservative option, assume it wasn't sent, then resend it. This results in duplicate messages.
Worse is the case where the application is getting a message. If the channel agent successfully gets the message but can't return it to the client, that message is irretrievably lost. Since the application didn't specify syncpoint, it wasn't MQ that lost the message but rather the application.
This is intrinsic to all types of async messaging. So much so that the JMS 1.1 specification specifically addresses it in section 4.4.13, Duplicate Production of Messages, which states:
If a failure occurs between the time a client commits its work on a Session and the commit method returns, the client cannot determine if the transaction was committed or rolled back. The same ambiguity exists when a failure occurs between the non-transactional send of a PERSISTENT message and the return from the sending method.
It is up to a JMS application to deal with this ambiguity. In some cases, this may cause a client to produce functionally duplicate messages.
A message that is redelivered due to session recovery is not considered a duplicate message.
This can be addressed in part by using syncpoint. Any PUT or GET under syncpoint will be rolled back if the call fails. The application can safely assume that it needs to PUT or GET the message again and no dupes or lost messages will result.
However, there is still the possibility that 2009 will be returned on the COMMIT. At this point you do not know whether the transaction completed or not. If it is a two-phase commit (XA), the transaction manager will reconcile the outcome correctly. But if it is a one-phase commit, then you are back to not knowing whether the call succeeded or failed.
In the case that the app got a message under syncpoint, it will at least have either been processed or rolled back. This completely eliminates the possibility of losing persistent messages due to ambiguous outcomes. However if the app received a message and gets 2009 on the COMMIT then it may receive the same message again, depending on whether the connection failure occurred in #1 or #3 in the steps above. Similarly, a 2009 when committing a PUT can only be dealt with by retrying the PUT. This also potentially results in dupe messages.
So, short of using XA, any async messaging faces the possibility of duplicate messages due to connection exception and recovery. TCP/IP has become so reliable since MQ was invented that most applications ignore this architectural constraint without detrimental effects. Although that increased reliability in the network makes it less risky to design apps that don't gracefully handle dupes, it doesn't actually address the underlying architectural constraint. That can only be done in code, XA being one example of that. Many apps are written to gracefully handle dupe messages and do not need XA to address this problem.
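One common way to handle dupes gracefully without XA is an idempotent consumer: track the IDs of messages already processed and discard repeats. A minimal plain-Java sketch of the idea (in practice the seen-ID set would live in a durable store such as a database table, not in memory):

```java
import java.util.*;

// Idempotent consumer sketch: duplicates produced by a resend-after-2009
// are detected by message ID and dropped, avoiding double processing.
public class IdempotentConsumer {
    private final Set<String> seen = new HashSet<>();
    private final List<String> processed = new ArrayList<>();

    // Returns true if the message was processed, false if dropped as a dupe.
    public boolean onMessage(String messageId, String body) {
        if (!seen.add(messageId)) return false; // already handled: drop
        processed.add(body);                    // business logic goes here
        return true;
    }

    public List<String> processed() { return processed; }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        System.out.println(c.onMessage("MSG-1", "book seat 12A")); // true
        // The producer got a 2009 and resent the same message:
        System.out.println(c.onMessage("MSG-1", "book seat 12A")); // false: no double booking
    }
}
```

This is also the pattern that addresses the double-booking problem in the question above: the resend after a 2009 is harmless if the consumer recognizes the repeated message ID.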
Note: Paul Clarke (the guy who wrote much of the MQ channel code) is quick to point out that the ambiguity also exists when using bindings-mode connections. In 20 years of using WMQ I have yet to see a 2009 on a bindings-mode connection, but he says the shorter path to the QMgr doesn't eliminate the underlying architectural constraint any more than the reliable network does.

reliability of spring integration esb

How is the reliability of message transmission protected in Spring Integration?
For example, what happens if the server crashes while a message is passing through a router, or a message fails processing in a splitter or transformer?
How does the mechanism handle those situations? Are there any references or documents?
Any help will be appreciated!
Also, if your entry point is a channel adapter or gateway that supports transactions (e.g. JMS, AMQP, JDBC, JPA,..) and you use default channels, the entire flow will take place within the scope of that transaction, as the transaction context is bound to the thread. If you add any buffering channels or a downstream aggregator, then you would want to consider what Gary mentioned so that you are actually completing the initial transaction by handing responsibility to another reliable resource (as opposed to leaving a Message in an in-memory Map and then committing, for example).
Hope that makes sense.
Shameless plug: there's a good overview of transactions within the Spring Integration in Action book, available now through MEAP: http://manning.com/fisher/
Regards,
Mark
By default, messages are held in memory but you can declare channels to be persistent, as needed. Persistent channels use JMS, AMQP (rabbit), or a message store. A number of message stores are provided, including JDBC, MongoDB, Redis, or you can construct one that uses a technology of your choice.
http://static.springsource.org/spring-integration/docs/2.1.1.RELEASE/reference/html/
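For illustration, declaring a persistent, JDBC-backed channel in the XML configuration of that era might look roughly like the sketch below. The bean names and the dataSource reference are placeholders; check the reference guide linked above for the exact schema of your version:

```xml
<!-- Sketch: a queue channel backed by a JDBC message store instead of memory,
     so buffered messages survive a server crash. -->
<int:channel id="reliableChannel">
    <int:queue message-store="messageStore"/>
</int:channel>

<bean id="messageStore"
      class="org.springframework.integration.jdbc.JdbcMessageStore">
    <constructor-arg ref="dataSource"/> <!-- dataSource defined elsewhere -->
</bean>
```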

Preservation of exception cause when redelivering failed activemq jms messages processed by Mule ESB

I have built several Mule processes that consume messages from JMS queues (ActiveMQ). Whenever a Mule component throws an exception, the transaction that consumed the message rolls back and the message gets redelivered to the original queue. After a few tries, it is sent to a dead letter queue (DLQ.queueName).
We have this working OK, but we are missing the exception thrown, either the first one or the last one, we don't care (it'll probably be the same). This is something that can be done on other brokers (like WebLogic JMS), but I've been struggling with it for a while to no avail.
Does anybody know if this can be configured, or do I need to build a specific Mule exception handler or policy for ActiveMQ?
TIA,
Martin
That exception is lost in ActiveMQ at the moment (I don't know about Mule), but it is reported to the log as an error.
It would make a good enhancement: remember the string form of the exception in the ActiveMQConsumer and pass it back to the broker with the poison ack that forces the message to the DLQ. That way it could be recorded as a message property on the resulting DLQ message.
How would you like to handle the exception, have it reported to a connection exception listener or have it recorded in the DLQ message?