I'm currently reading the MySQL Reference Manual and noticed an NDB config option -- Arbitration=WaitExternal. The question is how to use this option and how to implement an external cluster manager?
The Arbitration parameter also makes it possible to configure arbitration in such a way that the cluster waits until after the time determined by ArbitrationTimeout has passed for an external cluster manager application to perform arbitration instead of handling arbitration internally. This can be done by setting Arbitration = WaitExternal in the [ndbd default] section of the config.ini file. For best results with the WaitExternal setting, it is recommended that ArbitrationTimeout be 2 times as long as the interval required by the external cluster manager to perform arbitration.
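For reference, the relevant settings would look roughly like this in config.ini (a sketch; the timeout value is an assumption and should be sized to about twice your cluster manager's arbitration interval):

[ndbd default]
Arbitration=WaitExternal
# Assumed value: roughly 2x the interval the external cluster manager
# needs to perform arbitration, in milliseconds.
ArbitrationTimeout=6000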
A bit of git annotate and some searching of the original design docs turn up the following:
When a node is about to send an arbitration message to the arbitrator, it will instead issue the following log message:
case ArbitCode::WinWaitExternal: {
  char buf[8 * 4 * 2 + 1];
  sd->mask.getText(buf);
  BaseString::snprintf(m_text, m_text_len,
                       "Continuing after wait for external arbitration, "
                       "nodes: %s", buf);
  break;
}
So e.g.
Continuing after wait for external arbitration, nodes: 1,2
The external clusterware should check for this message at the same interval as the ArbitrationTimeout. When it discovers this message, the external clusterware should kill the data node that it decides should lose the arbitration. The NDB data nodes will notice this kill, and it will settle the matter of which node is to survive.
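Put together, the external arbitration loop could look something like this hypothetical Python sketch (the log path, check interval, victim-selection policy, and the use of ndb_mgm to stop the losing node are all assumptions; only the log message itself comes from the source above):

import re
import subprocess
import time

LOG_PATH = "/var/lib/mysql-cluster/ndb_1_cluster.log"  # assumed location
PATTERN = re.compile(r"Continuing after wait for external arbitration, nodes: ([\d,]+)")
CHECK_INTERVAL = 7.0  # seconds; keep in step with ArbitrationTimeout

def pick_loser(nodes):
    # Placeholder policy: a real cluster manager decides which side
    # loses arbitration (quorum rules, fencing, etc.).
    return max(nodes)

offset = 0
while True:
    with open(LOG_PATH) as f:
        f.seek(offset)
        for line in f:
            m = PATTERN.search(line)
            if m:
                nodes = [int(n) for n in m.group(1).split(",")]
                victim = pick_loser(nodes)
                # Kill the losing data node; the surviving nodes observe
                # the failure and arbitration is decided.
                subprocess.run(["ndb_mgm", "-e", "%d STOP" % victim])
        offset = f.tell()
    time.sleep(CHECK_INTERVAL)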
I've set up a small ActiveMQ Network of Brokers to increase reliability. It consists of 3 nodes with the following properties (full config template file is available here):
ActiveMQ Version 5.13.3 (latest as of July 16)
Local LevelDB persistence adapter
NetworkConnector uri="static:(tcp://${OTHER_NODE1}:61616,tcp://${OTHER_NODE2}:61616)" with the two variables set, for e.g. node2, to node1 and node3 (uni-directional connections between all nodes).
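For node2, the resulting broker configuration element would look roughly like this (a sketch with the variables substituted; other attributes left at their defaults):

<networkConnectors>
  <networkConnector uri="static:(tcp://node1:61616,tcp://node3:61616)"/>
</networkConnectors>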
Clients connect with failover:(tcp://node1:61616,tcp://node2:61616,tcp://node3:61616), send and retrieve messages as needed.
The failover protocol randomizes the target machine, so messages might be sent back and forth inside the cluster.
There are two (failing) scenarios:
As it is described now, some messages are not delivered, because they are not allowed to go "back". This is done to avoid loops and is described in this blog post.
Activating the replayWhenNoConsumers flag as described in the blog and in NoB: Stuck Messages causes those messages to be recognized as duplicates. With enableAudit enabled, I get cursor got duplicate send ID; disabling it gives me a <MSG> paged in, is cursor audit disabled? Removing from store and redirecting to dlq.
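For reference, the flag from the blog post is set per destination in the broker's policy map, roughly like this (a sketch; the destination pattern and the audit setting are assumptions):

<policyEntry queue=">" enableAudit="false">
  <networkBridgeFilterFactory>
    <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
  </networkBridgeFilterFactory>
</policyEntry>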
Maybe this is trivial to fix - does anybody have an idea?
While developing in Fi-Cloud's CEP I've been running into an issue that happens repeatedly. As I'm trying to develop a definition to perform a task, CEP's server and Authoring Tool stop responding, although ssh is still responsive.
This issue happens as I develop. I'm using the Authoring Tool to alter the definition bit by bit and then re-upload it to the server through the Authoring Tool's export feature.
To reinitiate the Proton engine with the new definition each time I alter it, I use Google's Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
Content-Type: application/json

{"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three ssh instances: one to monitor the files being created on /opt/tomcat10/sample/ and other things, and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By "tailing" tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
GET http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
I get no response back and tomcat logs the following response:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
Ssh is still responsive and I can look at tomcat's log this way.
To get over this and continue, I exit the ssh connections and restart the CEP instance in the Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to developing?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
1. Export the definition using the authoring tool's export feature (as you did).
2. Stop the engine run, using REST PUT:
PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
3. Start the engine, using REST PUT:
PUT http://host:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name the engine is already working with.
Activating the "ChangeDefinitions" action only influences the next run of the CEP, and has no influence on the current run.
This answers your question about how you should update a CEP definition.
Hope it will solve your issue.
I'm collecting some analytic data on my client device which does not require any initial data from the server database.
Is it possible to start with an empty database, add some analytic documents, and then, when I'm ready, use push replication to add those documents to my server database with the sync gateway?
I'm going to have an analytics channel, but I don't want to pull EVERYTHING from that channel into my client database since it doesn't care about what's there already; it only wants to add to it.
I would be asking this question on the Couchbase forums but it is currently down.
Sure, push and pull replications are entirely separate, so as long as you do not create a pull replication you won't receive any data from Sync Gateway.
Use the following API from CBLDatabase to upload data to the server:
/** Creates a replication that will 'push' this database to a remote database at the given URL.
This always creates a new replication, even if there is already one to the given URL.
You must call -start on the replication to start it. */
- (CBLReplication*) createPushReplication: (NSURL*)url;
Here's an example of how you can set up push replication:
NSURL* url = [NSURL URLWithString: @"https://example.com/mydatabase/"];
CBLReplication *push = [database createPushReplication: url];
push.continuous = YES; // NO for One-shot replication
//After authenticating and adding progress observers here, call -start
[push start];
You can set up pull replication (if needed) in a similar way by using -createPullReplication:. Read more in the docs here - Replication.
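For completeness, a minimal sketch of the pull side, mirroring the push example above (channel filtering and authentication omitted):

NSURL* url = [NSURL URLWithString: @"https://example.com/mydatabase/"];
CBLReplication *pull = [database createPullReplication: url];
pull.continuous = YES; // NO for one-shot replication
// Optionally restrict to specific channels, e.g. pull.channels = @[@"analytics"];
[pull start];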
I have three MySQL nodes listed below:
Master Address: 192.168.1.77:3306
Slave1 Address: 192.168.1.76:3306
Slave2 Address: 192.168.1.69:3306
After installing mysql-proxy version 0.8.3 on 192.168.1.67, I created the configuration below:
[mysql-proxy]
admin-username=proxy
admin-password=proxy
admin-lua-script=/local/software/mysql-proxy/lib/mysql-proxy/lua/admin.lua
proxy-read-only-backend-addresses = 192.168.1.76:3306,192.168.1.69:3306
proxy-backend-addresses=192.168.1.77:3306
proxy-lua-script=/local/software/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua
log-file=/local/software/mysql-proxy/log/mysql-proxy.log
plugin-dir=/local/software/mysql-proxy/lib/mysql-proxy/plugins
plugins=proxy,admin,debug,replicant
log-level=debug
keepalive=true
I also edited rw-splitting.lua:
min_idle_connections = 1,
max_idle_connections = 2,
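(For context, in the 0.8.x script these settings live in the config table near the top of rw-splitting.lua, roughly like this, quoted from memory, so treat it as a sketch:)

if not proxy.global.config.rwsplit then
    proxy.global.config.rwsplit = {
        min_idle_connections = 1,
        max_idle_connections = 2,
        is_debug = false
    }
end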
Then I started mysql-proxy like this:
./bin/mysql-proxy --defaults-file=mysql-proxy.cnf
and logged on to the proxy:
mysql -uproxy -ppassword -P4040 -h192.168.1.67
When I execute SELECT statements again and again from different mysql-proxy 4040 windows, the log shows that all the SELECT queries are sent to the same server, 76; only if I shut down 76 does it send the queries to slave 69. I don't know why the load balancing doesn't work. Is there some place where I made a mistake? Thank you in advance.
rw-splitting.lua seems to leave some of the implementation as an exercise for the reader. There is a comment 'pick a random backend', but I see no implementation of it, nor a round-robin technique. The code seems to fill the backend servers from the top, moving on to the next in the array when there are no idle connections.
If there are always idle connections at the master then the current implementation prefers to go there. After that it uses the first idle connection in the read-only backend servers list. In this case 76, until you shut it down, when it moved on to 69. I can't see why 77, the read/write backend, is not being preferred. Possibly this is related to the number of idle connections available.
It would seem that seeking the lowest proxy.global.backends.connected_clients, the number of connections currently active on the backend, would be a good way to prioritize the backend used.
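A least-connections pick could look roughly like this in the script's Lua (a sketch, not tested against 0.8.3; it assumes the standard proxy.global.backends fields):

-- Hypothetical helper: pick the read-only backend with the fewest
-- currently connected clients, skipping backends that are down.
local function pick_least_loaded_ro()
    local best_ndx, best_count
    for i = 1, #proxy.global.backends do
        local b = proxy.global.backends[i]
        if b.type == proxy.BACKEND_TYPE_RO and b.state ~= proxy.BACKEND_STATE_DOWN then
            if best_count == nil or b.connected_clients < best_count then
                best_ndx, best_count = i, b.connected_clients
            end
        end
    end
    return best_ndx -- index into proxy.global.backends, or nil
end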
You should also take a look at the balance module, lib/mysql-proxy/lua/proxy/balance.lua.
I am using NServiceBus for the first time and have a small, simple application where a user submits a form, the form fields are then sent to the queue, and the handler collects this data and writes it to the database using linq-to-sql.
Any changes within Component Services are a complete no-no as far as the DBA is concerned, so I'm now looking for an alternative to DTC (which is not enabled on the DB server), but using AsA_Server so that messages do not get purged.
I have tried removing AsA_Server after IConfigureThisEndpoint and specifying the configuration myself, but this doesn't seem to work (the console appears and the page loads, but nothing happens; it doesn't even stop at breakpoints). AsA_Client does work, but as I understand it the messages will be purged at startup, which I need to avoid.
Any suggestions?
Thanks,
OMK
EDIT: This has now been resolved by wrapping the call to the database in a suppressed transaction scope, which allows the database work to be done with no ambient transaction to enlist in:
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Suppress))
{
    // code here
    sc.Complete();
}
When you use AsA_Server, you are specifying that you want durable queues, and you will need to configure transactional queues.
With a transactional send/receive, MSMQ requires you to send, transmit, receive, and process as part of one transaction. However, all these stages actually take place in their own transactions.
For example, the send transaction is complete when the sender sends a message onto their local MSMQ subsystem (even if the queue address is remote, the sender still sends to a local queue which acts as a kind of proxy to the remote queue).
The transmit transaction is complete when the MSMQ subsystem on the sender's machine successfully transmits the message to the MSMQ subsystem on the receiver's machine.
Even though this may all happen on one machine, I am guessing that your Handle() method is writing to a database on a different machine.
The problem here is that, for the receive operation to complete satisfactorily from a transaction perspective, your call to the database must be successful. Only then will the message be de-queued from your input queue. This prevents any chance of the message being lost during a processing failure.
However, in order to enforce that across the network you need to involve DTC to coordinate the distributed transaction to the database.
Bottom line, if you want durable queues in a distributed environment then you will need to use MSDTC.
Hope this helps.
There is an alternative. In your connection string you can add an option to not enlist in a distributed transaction, and this will have your DB connection ignored by the DTC.
Of course, if this is set in the config then all database transactions for the application are ignored by the DTC, rather than just a specific one.
Example:
<add key="DatabaseConnectionString" value="Data Source=SERVERNAME;Initial Catalog=DBNAME;Integrated Security=True;Enlist=False"/>
With NServiceBus 4.0 you can now do the following, which finally worked for me:
Configure.Transactions.Advanced(t =>
{
    t.DisableDistributedTransactions();
    t.DoNotWrapHandlersExecutionInATransactionScope();
});
When you use the As (AsA_Client, AsA_Server) interfaces, the configuration is applied after Init(), so all the settings that you make there regarding MsmqTransport and UnicastBus are overridden.
It's possible to override those settings using IWantTheEndpointConfig in an IHandleProfile implementation. You get the configuration after the default roles are applied but before the bus is started.
This way you can change the default profile settings and tailor them to your needs: deactivate transactions, enable impersonation...
Example:
public class DeactivateTransactions : IHandleProfile<Lite>, IWantTheEndpointConfig
{
    private IConfigureThisEndpoint configure;

    public IConfigureThisEndpoint Config
    {
        get { return configure; }
        set
        {
            this.configure = value;
            Configure.Instance.MsmqTransport()
                .PurgeOnStartup(false)
                .IsTransactional(false); // Or other changes
        }
    }

    public void ProfileActivated()
    {
    }
}