Messages stuck or lost in ActiveMQ cluster - configuration

I've set up a small ActiveMQ Network of Brokers to increase reliability. It consists of 3 nodes with the following properties (full config template file is available here):
ActiveMQ Version 5.13.3 (latest as of July 2016)
Local LevelDB persistence adapter
NetworkConnector uri="static:(tcp://${OTHER_NODE1}:61616,tcp://${OTHER_NODE2}:61616)", with the two variables set, e.g. on node2, to node1 and node3 (unidirectional connections between all nodes; see the sketch below).
Clients connect with failover:(tcp://node1:61616,tcp://node2:61616,tcp://node3:61616), send and retrieve messages as needed.
The failover protocol randomizes the target machine, so messages might be sent back and forth inside the cluster.
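For reference, a minimal sketch of the corresponding networkConnector element as it would appear in node2's broker configuration (host names substituted for the template variables, all other attributes left at their defaults):

<networkConnectors>
  <!-- on node2: one-way bridges to the two other brokers -->
  <networkConnector uri="static:(tcp://node1:61616,tcp://node3:61616)"/>
</networkConnectors>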
There are two (failing) scenarios:
As described so far, some messages are not delivered because they are not allowed to go "back". This behavior avoids loops and is described in this blog post.
Activating the replayWhenNoConsumers flag as described in the blog and in NoB: Stuck Messages causes those messages to be recognized as duplicates. With enableAudit enabled, I get cursor got duplicate send ID; disabling it gives me a <MSG> paged in, is cursor audit disabled? Removing from store and redirecting to dlq.
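For context, the combination being discussed lives in the broker's destinationPolicy; a sketch along the lines of the blog post (wildcard queue name assumed):

<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- replayWhenNoConsumers allows messages to be replayed "back" across
           a network bridge; enableAudit="false" keeps the cursor's duplicate
           detection from rejecting the replayed messages -->
      <policyEntry queue=">" enableAudit="false">
        <networkBridgeFilterFactory>
          <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
        </networkBridgeFilterFactory>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>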
Maybe this is trivial to fix - does anybody have an idea?


OpenShift AMQ6 - message order - queue

I use AMQ 6 (ActiveMQ) on OpenShift, with a queue that uses redelivery with exponentialBackoff (set in the connection query params).
When I have one consumer and two messages, and the first message gets processed by my single consumer but does NOT get an ACK...
Will the broker deliver the 2nd message to the single consumer?
Or will the broker wait for the redelivery to preserve message order?
This documentation states:
...Typically a consumer handles redelivery so that it can maintain message order while a message appears as inflight on the broker. ...
I don't want to have my consumer wait for re-delivery. It should consume other messages. Can I do this without multiple consumers? If so, how?
Note: In my connection query params I don't have the ActiveMQ exclusive consumer set.
I have read the Connection Configuration URI docs, but jms.nonBlockingRedelivery isn't mentioned there.
Can the resource adapter use it via a query param?
If you set jms.nonBlockingRedelivery=true on your client's connection URL then messages will be delivered to your consumer while others are in the process of redelivery. This is false by default.
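A minimal Java sketch of this (the broker host in the failover URL is a placeholder; adjust to your environment):

import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.activemq.ActiveMQConnectionFactory;

public class NonBlockingRedeliveryExample {
    public static void main(String[] args) throws JMSException {
        // jms.* URL options configure the ConnectionFactory itself, so the
        // flag can be enabled purely through the connection URL.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker:61616)?jms.nonBlockingRedelivery=true");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions and consumers as usual; messages awaiting
        // redelivery no longer block later messages to the same consumer.
    }
}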

Why does my Soffid JSON REST Web Services Connector not update an object in the target system?

I am trying to connect my Soffid 3 server with our custom web application named Schrift. I am using a JSON REST Web Services Connector for this purpose. I added the REST Web service plugin and then configured an agent with the JSON/XML/SOAP REST web service type.
Loading of objects works fine. My REST connector connects to the web service successfully and retrieves the account data.
The problem is that when I try to update some data (for example, when I try to lock an account), nothing happens. And unfortunately I don't know what should be happening: when should the REST connector send updated data to the managed system, and in which way? I didn't find any log entries saying that the REST connector was trying to update an object on the managed system. Maybe I did something wrong or missed something.
I would appreciate any help. I can post any config or log details if you need them.
Update #1
(I did some investigation after the first answer)
I checked the agent settings: Read only and Manual account creation are set to no.
The account was set to the unmanaged type, but I succeeded in changing its type to shared and then to single without getting an error. Now it is set to single.
The task queue is empty.
Also, I've checked that the update method is present and that the update properties are set correctly. updateParams is not set (which means that all attributes should be sent to the managed system).
But when I change the status of the account (from Enable to Disable), nothing happens.
In the console log I can see only these lines:
14-Sep-2021 13:26:29.708 INFO [BPM-Scheduler:192.168.7.121:1] com.soffid.iam.bpm.job.JobExecutorThread.run No job to execute
When I manually run the task Analize impact for changes on Schrift, the Execution log shows:
Changes detected for accounts
=============================
NO CHANGE DETECTED
Changes detected for roles
=============================
NO CHANGE DETECTED
Update #2
After many attempts I made some progress. Now, when I make some changes to the account, a task named UpdateAccount baklykov@irf.com.ua@Schrift appears, but it runs with an error.
At first it was a 415 Unsupported Media Type error, as I wrote in the comments, but now it looks a little different:
Throws exception updating object : Extensible object [type = account]
EmployeeEmail: baklykov@irf.com.ua
IsLockedOut: true (log truncated) ...
caused by Unexpected response, Content-Type: null
Update #3
I found out that Soffid's request for updating the object was in an improper format (all the parameters were passed as HTTP request parameters instead of being put in the JSON body).
After some research I found a method property called Encoding and set it to the application/json value.
Now the parameters are passed in the JSON body (which is what I need), but the problem is that Soffid puts all the parameters in the JSON body, including the key parameter by which the object to update should be determined. My guess is that this is the reason why the object in the target system is still not updated.
In other words my application expects a request like this:
https://myapp.mysite.com/api/v1/Soffid/Employees?EmployeeEmail=baklykov%40irf.com.ua :
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan"}
but Soffid sends this:
https://myapp.mysite.com/api/v1/Soffid/Employees:
{"EmployeeLastName":"Baklykov","EmployeeFirstName":"Ivan","EmployeeEmail":"baklykov#irf.com.ua"}
The system should have created an UpdateAccount task in the task queue. Please verify:
The task engine is in automatic mode. In read-only or manual mode, no task will be created.
If you are updating an account, check that the account is not set as unmanaged. In that case, no task is created.
Finally, verify the task queue has not held the task up.
Have you checked the engine mode? Look at Main Menu > Administration > Configure Soffid > Integration engine > Smart engine settings
It should be set to automatic.

How to use the option Arbitration=WaitExternal in MySQL Cluster?

I'm currently reading the MySQL Reference Manual and noticed an option in the NDB config: Arbitration=WaitExternal. The question is: how do I use this option, and how do I implement an external cluster manager?
The Arbitration parameter also makes it possible to configure arbitration in such a way that the cluster waits until after the time determined by ArbitrationTimeout has passed for an external cluster manager application to perform arbitration instead of handling arbitration internally. This can be done by setting Arbitration = WaitExternal in the [ndbd default] section of the config.ini file. For best results with the WaitExternal setting, it is recommended that ArbitrationTimeout be 2 times as long as the interval required by the external cluster manager to perform arbitration.
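Per the quoted passage, enabling this amounts to two lines in config.ini (the timeout value below is illustrative; size it to roughly twice the interval at which your external manager checks):

[ndbd default]
Arbitration = WaitExternal
ArbitrationTimeout = 6000    # illustrative value, in milliseconds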
A bit of git annotate and some searching of original design docs says the following:
When a node is about to send an arbitration message to the arbitrator, it will instead issue the following log message:
// From the NDB source: emitted instead of performing internal arbitration
// when Arbitration = WaitExternal is configured.
case ArbitCode::WinWaitExternal: {
  char buf[8*4*2+1];
  sd->mask.getText(buf);
  BaseString::snprintf(m_text, m_text_len,
                       "Continuing after wait for external arbitration, "
                       "nodes: %s", buf);
  break;
}
So e.g.
Continuing after wait for external arbitration, nodes: 1,2
The external clusterware should check for this message at the same interval as the ArbitrationTimeout. When it discovers the message, the external clusterware should kill the data node(s) that it decides should lose the arbitration. This kill will be noticed by the NDB data nodes and will settle the matter of which nodes survive.
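A rough Java sketch of such a watcher, under stated assumptions (the log path, the polling period, and the node-killing step are all placeholders; it simply tails the cluster log for the message above):

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ExternalArbitrationWatcher {
    private static final String MARKER =
            "Continuing after wait for external arbitration, nodes: ";

    public static void main(String[] args) throws IOException, InterruptedException {
        // Placeholder path: point this at the management node's cluster log.
        Path clusterLog = Paths.get("/var/lib/mysql-cluster/ndb_1_cluster.log");
        try (BufferedReader reader = Files.newBufferedReader(clusterLog)) {
            while (true) {
                String line = reader.readLine();
                if (line == null) {         // caught up with the log
                    Thread.sleep(3000);     // poll at roughly ArbitrationTimeout
                    continue;
                }
                int idx = line.indexOf(MARKER);
                if (idx >= 0) {
                    String nodes = line.substring(idx + MARKER.length());
                    // A real manager would now decide which side loses and
                    // kill its ndbd process(es); the survivors carry on.
                    System.out.println("External arbitration needed for nodes: " + nodes);
                }
            }
        }
    }
}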

mysql-proxy 0.8.3 load balancing does not work

I have three MySQL nodes, listed below:
Master Address: 192.168.1.77:3306
Slave1 Address: 192.168.1.76:3306
Slave2 Address: 192.168.1.69:3306
After installing mysql-proxy version 0.8.3 on 192.168.1.67, I created my configuration as below:
[mysql-proxy]
admin-username=proxy
admin-password=proxy
admin-lua-script=/local/software/mysql-proxy/lib/mysql-proxy/lua/admin.lua
proxy-read-only-backend-addresses = 192.168.1.76:3306,192.168.1.69:3306
proxy-backend-addresses=192.168.1.77:3306
proxy-lua-script=/local/software/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua
log-file=/local/software/mysql-proxy/log/mysql-proxy.log
plugin-dir=/local/software/mysql-proxy/lib/mysql-proxy/plugins
plugins=proxy,admin,debug,replicant
log-level=debug
keepalive=true
I edited the file rw-splitting.lua:
min_idle_connections = 1,
max_idle_connections = 2,
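(For context, in the stock rw-splitting.lua these values sit in a config table near the top of the script, roughly like this:)

if not proxy.global.config.rwsplit then
    proxy.global.config.rwsplit = {
        min_idle_connections = 1,
        max_idle_connections = 2,
        is_debug = false
    }
end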
Then I start mysql-proxy like this:
./bin/mysql-proxy --defaults-file=mysql-proxy.cnf
and log on to the proxy:
mysql -uproxy -ppassword -P4040 -h192.168.1.67
When I execute SELECT statements again and again from different mysql-proxy sessions on port 4040, the log shows that all the SELECT queries are sent to the same server, .76; only if I shut down .76 does it send the queries to slave .69. I don't know why the load balancing doesn't work. Did I make a mistake somewhere? Thank you in advance.
rw-splitting.lua seems to leave some of the implementation as an exercise for the reader. There is a comment saying 'pick a random backend', but I see no implementation of it, nor of a round-robin technique. The code seems to fill the backend servers from the top, moving on to the next one in the array only when there are no idle connections.
If there are always idle connections at the master, the current implementation prefers to go there. After that it uses the first idle connection in the read-only backend servers list: in this case .76, until you shut it down, at which point it moves on to .69. I can't see why .77, the read/write backend, is not being preferred; possibly this is related to the number of idle connections available.
It would seem that picking the backend with the lowest proxy.global.backends.connected_clients, the number of connections currently active on the backend, would be a good way to prioritize the backend used; a sketch follows below.
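A hypothetical helper along those lines for rw-splitting.lua, written against the mysql-proxy 0.8 Lua API as exposed to the script (treat this as a sketch, not tested code):

-- Pick the read-only backend with the fewest currently connected clients,
-- instead of the first backend that happens to have an idle connection.
local function least_loaded_ro_backend()
    local best_ndx = nil
    for i = 1, #proxy.global.backends do
        local s = proxy.global.backends[i]
        if s.type == proxy.BACKEND_TYPE_RO and
           s.state ~= proxy.BACKEND_STATE_DOWN then
            if best_ndx == nil or
               s.connected_clients < proxy.global.backends[best_ndx].connected_clients then
                best_ndx = i
            end
        end
    end
    return best_ndx  -- nil if no read-only backend is up
end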
You should also take a look at the balance module lib/mysql-proxy/lua/proxy/balance.lua

Send mail with SMTP adapter with retry, retry interval and delivery notification

I have an orchestration that receives an XML message with some email properties (like: to, from, cc, subject, etc.).
Then I want to send the email message through a dynamic port (I assign some of the values according to the input XML). After the email has been sent, I want to do some further processing, but that processing may only execute once the mail has been delivered successfully to the SMTP server.
The functional design calls for one retry per hour for a maximum of one day; after that period, a message must be written to the EventLog if the mail cannot be delivered successfully.
Therefore I configured the dynamic port with the context property BTS.RetryCount set to 23 and BTS.RetryInterval set to 60.
I have set the dynamic SMTP port's delivery notification to "Transmitted" and I have a catch exception block to catch the DeliveryFailureException.
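For reference, those assignments amount to something like this in an orchestration expression shape (EmailMessage and DynamicSendPort are placeholder names); note that 23 retries at 60-minute intervals plus the initial attempt gives 24 hourly attempts, i.e. roughly one day:

// Hypothetical names: EmailMessage is the outgoing message,
// DynamicSendPort is the dynamic send port.
EmailMessage(BTS.RetryCount) = 23;      // 23 retries after the first attempt
EmailMessage(BTS.RetryInterval) = 60;   // minutes between attempts
DynamicSendPort(Microsoft.XLANGs.BaseTypes.Address) = "mailto:user@example.com";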
Is this enough?
It is a little bit confusing for me, reading several blogs, whether I should mark the scope as Synchronized...
Patrick,
You're right, the documentation on this aspect of BizTalk delivery notification is scarce and confusing. After extensive testing, I have not been able to identify any difference whether the Scope is set to Synchronized = true or not.
The official documentation for the Synchronized setting covers only shared variables used in both branches of a Parallel execution.
As for the Delivery Notification itself, I'm currently facing a problem in production where the FILE adapter produces its ACK event before the entire contents of the file are written to the output folder, which renders this part of the solution useless!