I use AMQ 6 (ActiveMQ) on OpenShift, and I use a queue with re-delivery with exponentialBackoff (set in connection query params).
When I have one consumer and two messages and the first message gets processed by my single consumer and does NOT get an ACK...
Will the broker deliver the 2nd message to the single consumer?
Or will the broker wait for the redelivery, to preserve message order?
This documentation states:
...Typically a consumer handles redelivery so that it can maintain message order while a message appears as inflight on the broker. ...
I don't want to have my consumer wait for re-delivery. It should consume other messages. Can I do this without multiple consumers? If so, how?
Note: In my connection query params I don't have the ActiveMQ exclusive consumer set.
I have read the Connection Configuration URI docs, but jms.nonBlockingRedelivery isn't mentioned there.
Can the resource adapter use it via a query param?
If you set jms.nonBlockingRedelivery=true on your client's connection URL then messages will be delivered to your consumer while others are in the process of redelivery. This is false by default.
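For example (the broker host and the redelivery settings are illustrative; the flag simply rides along with the other jms.* query parameters on the connection URI):

```
failover:(tcp://broker:61616)?jms.nonBlockingRedelivery=true&jms.redeliveryPolicy.useExponentialBackOff=true&jms.redeliveryPolicy.maximumRedeliveries=6
```

With this set, a message in redelivery no longer blocks the consumer's prefetch, at the cost of losing strict message order for that consumer.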
Related
I'm currently working with Apache Camel and its MQTT component. I have a route consuming messages from the broker (Apache ActiveMQ Artemis) and another one sending messages to it. The problem is that no exception is thrown when the message broker is not available. Moreover, all the messages that are not sent are kept in memory waiting for an eventual restart of the server, causing memory overflows. I don't know if this is related to the MQTT protocol itself or to the configuration of the endpoint.
Here is my configuration:
MQTTEndpoint mqttEndpoint = (MQTTEndpoint) mqttComponent.createEndpoint(MQTT_BROKER);
mqttEndpoint.getConfiguration().setHost(properties.getBrokerAddress());
mqttEndpoint.getConfiguration().setPublishTopicName(publishTopicName);
//mqttEndpoint.getConfiguration().setSubscribeTopicNames(subscribreTopicNames);
mqttEndpoint.getConfiguration().setUserName(properties.getBrokerUsername());
mqttEndpoint.getConfiguration().setPassword(properties.getBrokerPassword());
mqttEndpoint.getConfiguration().setSslContext(createSSLContext());
mqttEndpoint.getConfiguration().setByDefaultRetain(false);
mqttEndpoint.getConfiguration().setQualityOfService(QoS.AT_MOST_ONCE.toString());
mqttEndpoint.getConfiguration().setConnectAttemptsMax(1);
mqttEndpoint.getConfiguration().setConnectWaitInSeconds(5);
mqttEndpoint.getConfiguration().setReconnectBackOffMultiplier(1);
mqttEndpoint.getConfiguration().setDisconnectWaitInSeconds(3);
mqttEndpoint.setCamelContext(camelCtx);
So this is correct behaviour for the QoS handling. You are setting the QoS flag to QoS.AT_MOST_ONCE.toString(). (Note: in the MQTT specification AT_MOST_ONCE is QoS level 0, i.e. fire and forget; the four-message handshake summarised below belongs to QoS level 2, EXACTLY_ONCE.)
A Small Summary Of QoS 2 – Exactly Once
This level guarantees that the message will be delivered exactly once. If there are networking issues and the message cannot be delivered, it will stay in the client queue until delivery is possible. This is the slowest QoS level, as it requires four messages.
The sender publishes the message and waits for an acknowledgement (PUBREC).
The receiver sends a PUBREC message.
If the sender doesn't receive the PUBREC, it resends the message with the DUP flag set.
When the sender receives the PUBREC, it sends a message release (PUBREL).
If the receiver doesn't receive the PUBREL, it resends the PUBREC message.
When the receiver receives the PUBREL, it can forward the message on to any subscribers.
The receiver then sends a publish complete (PUBCOMP).
If the sender doesn't receive the PUBCOMP, it resends the PUBREL message.
When the sender receives the PUBCOMP, the handshake is complete and it can delete the message from the outbound queue.
See this blog entry for more information.
The most important part is that in your case the receiver is not available, so the MQTT QoS 2 handshake cannot complete.
Here is what I gathered from the RFC 5321:
4.1.1.5. RESET (RSET)
This command specifies that the current mail transaction will be aborted. Any stored sender, recipients, and mail data MUST be discarded, and all buffers and state tables cleared. The receiver MUST send a "250 OK" reply to a RSET command with no arguments. A reset command may be issued by the client at any time. It is effectively equivalent to a NOOP (i.e., it has no effect) if issued immediately after EHLO, before EHLO is issued in the session, after an end of data indicator has been sent and acknowledged, or immediately before a QUIT.
The emphases are mine. This says that if we receive the RSET after the end-of-data indicator ".", but before we have sent the acknowledgement, then we must discard the content of the message currently being delivered. This does not seem practical. Moreover, the server can easily act as if it received the RSET after it sent the acknowledgement - the client would not be able to tell the difference. Trying to find out what is usually done, I found this discussion https://www.ietf.org/mail-archive/web/ietf-smtp/current/msg00946.html where they say:
Under a RFC5321 compliant "No Quit/Mail" cancellation implementation, after
completing the DATA state, the server is waiting for a pending RSET, MAIL
or QUIT command:
QUIT - complete transaction, if any
MAIL - complete transaction, if any
perform a "reset"
RSET - cancel any pending DATA transaction delivery,
perform a "reset"
drop - cancel any pending DATA transaction delivery
We added this support in 2008 as a local policy option (EnableNoQuitCancel)
which will alter your SMTP state flow, your optimization and now you MUST
follow RSET vs QUIT/MAIL correctly. RSET (after DATA) aborts the
transaction, QUIT/MAIL (after DATA) does not. RSET is not an NOOP at this
point.
The specification says that discarding is a MUST. However, the extract above suggests that in practice it is treated as a MAY. I could look at the code of known SMTP/LMTP implementations, such as Dovecot, but perhaps someone has already reviewed that, which would save me time.
The text says "end of data indicator has been sent and acknowledged" which suggests that the client has received the server's response to the DATA command. Since the base protocol doesn't support command pipelining, I don't think sending anything after DATA but before the server's response (after the dot which terminates the DATA but before you receive a reply from the server) is well-defined behavior.
Personally, I can't think of any more reasonable server behavior than "pretend it didn't happen."
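To make the ambiguity concrete, here is a hypothetical exchange in which the client pipelines a RSET after the terminating dot (something the base protocol does not sanction):

```
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: (message content)
C: .
C: RSET               <- sent before the server has replied to the final "."
S: 250 OK             <- end-of-data acknowledgement: was the message committed?
S: 250 OK             <- reply to RSET
```

From the client's side there is no way to prove whether the server processed the RSET before or after committing the transaction, which is why "pretend it didn't happen" is a defensible server behavior.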
The answer is here: https://www.rfc-editor.org/rfc/rfc1047. It basically says that you can acknowledge before you start processing, and that doing so is actually recommended. This does not violate RFC 5321. Of course, more information on this issue would be useful, but I am happy with RFC 1047.
I've set up a small ActiveMQ Network of Brokers to increase reliability. It consists of 3 nodes with the following properties (full config template file is available here):
ActiveMQ Version 5.13.3 (latest as of July 16)
Local LevelDB persistence adapter
NetworkConnector uri="static:(tcp://${OTHER_NODE1}:61616,tcp://${OTHER_NODE2}:61616)" with the two variables set, e.g. for node2, to node1 and node3 (unidirectional connections between all nodes).
Clients connect with failover:(tcp://node1:61616,tcp://node2:61616,tcp://node3:61616), send and retrieve messages as needed.
The failover protocol randomizes the target machine, so messages might be sent back and forth inside the cluster.
There are two (failing) scenarios:
As it is described now, some messages are not delivered, because they are not allowed to go "back". This is done to avoid loops and described in this blog post.
Activating the replayWhenNoConsumers flag as described in the blog and in NoB: Stuck Messages causes those messages to be recognized as duplicates. With enableAudit enabled, I get "cursor got duplicate send ID"; disabling it gives me "<MSG> paged in, is cursor audit disabled? Removing from store and redirecting to dlq".
Maybe this is trivial to fix - does anybody have an idea?
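For reference, the flag in question is set per destination in activemq.xml; a minimal sketch (the catch-all queue pattern and the replayDelay value are illustrative, not recommendations):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- enableAudit="false" disables the duplicate-detection audit mentioned above -->
      <policyEntry queue=">" enableAudit="false">
        <networkBridgeFilterFactory>
          <!-- allow messages to be replayed back to the origin broker
               when no local consumers are present -->
          <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"
                                                 replayDelay="1000"/>
        </networkBridgeFilterFactory>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```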
Thinking of 'production' usage of the Orion Context Broker, I wonder what kind of guarantees the Orion Context Broker provides in terms of message delivery -- from both the producer and consumer perspective -- particularly keeping in mind various possible failure scenarios (CB failure/restart, transient network failure, consumer failure/restart, etc.), as well as the possibility of resource congestion in the CB. A few examples:
1) if a context update operation succeeds, is it guaranteed that consequent queries will return the latest data (e.g., even if CB failed right after acknowledging the update request, and then restarted)?
2) if a consumer subscribed for certain context information, is it guaranteed that it will receive all the relevant updates -- exactly once, at least once, or even at all? (e.g., in case of transient network failure between CB and the consumer)
3) if a consumer updated its subscription, is it guaranteed that the consequent updates will accurately reflect it? (e.g., if CB failed right after acknowledging the subscription request, and then restarted)
4) if a consumer is subscribed for context changes ('onchange', no throttling), and there are multiple consequent updates from the producer affecting the same attribute, is it guaranteed that each of the changes will be sent (or some might be skipped -- e.g., due to too many notifications that CB needs to send during a certain period of time), in any particular order?
etc...
Thanks!
Answering bullet by bullet:
In general, if the client receives a 2xx response (inside of the response payload in the case of NGSIv1, HTTP response code in the case of NGSIv2) it can assume that the update has been persisted in context database, so subsequent queries will return that data (except in the case of running CB with -writeConcern 0 if the DB fails before the update can be persisted from DB memory to disk).
In order to keep things simpler, CB uses a "fire and forget" notification policy. However, CB can be combined with HTTP relaying software (e.g. Rush, event buses, etc.) in order to implement retries, etc.
Similar to case 1, if the client receives a 2xx response (inside of the response payload in the case of NGSIv1, HTTP response code in the case of NGSIv2) it can assume that the update has been persisted in context database (except in the case of running CB with -writeConcern 0 if the DB fails before the update can be persisted from DB memory to disk), so notifications of such data (due to either existing subscriptions or new ones) will use the new value.
All notifications will be sent as long as thread saturation (in the case of -notificationMode transient) or queue saturation (in the case of -notificationMode threadpool:q:n) does not occur. You can find more information about notification modes in the Orion documentation.
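For instance, the queue-based notification mode is selected at startup like this (the queue length and thread count below are illustrative, not recommendations):

```shell
# threadpool:q:n -> a shared queue of up to q pending notifications
# served by n worker threads
contextBroker -notificationMode threadpool:10000:8
```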
I have an orchestration that receives an XML with some email properties (to, from, cc, subject, etc.).
Then I want to send the email message through a dynamic port (I assign some of the values according to the input XML). After the email has been sent, I want to do some further processing, but that processing may only execute once the mail has been delivered successfully to the SMTP server.
The functional design calls for one retry per hour for a maximum of one day; after that period, a message must be written to the EventLog if it cannot be delivered successfully.
Therefore I set the dynamic port with the context properties BTS.RetryCount to 23 and BTS.RetryInterval to 60.
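In a Message Assignment shape that would look roughly like this (the message and port names, and the target address, are placeholders; XLANG/s):

```
// Message Assignment shape (XLANG/s) - names are illustrative
Msg_Email = Msg_In;
Msg_Email(BTS.RetryCount) = 23;      // 23 retries after the initial attempt
Msg_Email(BTS.RetryInterval) = 60;   // minutes between attempts
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "mailto:recipient@example.com";
```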
I have set the dynamic SMTP port delivery notification to "Transmitted" and I have a catch exception block to catch the DeliveryFailureException.
Is this enough ?
It is a little bit confusing for me, reading several blogs, whether I should mark the scope Synchronized...
Patrick,
You're right, the documentation on this aspect of BizTalk delivery notification is scarce and confusing. After extensive testing, I have not been able to identify any difference whether the Scope is set to Synchronized = true or not.
The official documentation for the Synchronized setting only applies to shared variables when used in both branches of a Parallel execution.
As for the Delivery Notification itself, I'm currently facing a problem in production where the FILE adapter produces its ACK event before the entire contents of the file have been written to the output folder - it renders this part of the solution useless!