mysql-proxy 0.8.3 load balancing does not work - mysql

I have three MySQL nodes, listed below:
Master Address: 192.168.1.77:3306
Slave1 Address: 192.168.1.76:3306
Slave2 Address: 192.168.1.69:3306
I installed mysql-proxy version 0.8.3 on 192.168.1.67 and created the configuration below:
[mysql-proxy]
admin-username=proxy
admin-password=proxy
admin-lua-script=/local/software/mysql-proxy/lib/mysql-proxy/lua/admin.lua
proxy-read-only-backend-addresses = 192.168.1.76:3306,192.168.1.69:3306
proxy-backend-addresses=192.168.1.77:3306
proxy-lua-script=/local/software/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua
log-file=/local/software/mysql-proxy/log/mysql-proxy.log
plugin-dir=/local/software/mysql-proxy/lib/mysql-proxy/plugins
plugins=proxy,admin,debug,replicant
log-level=debug
keepalive=true
edited file: rw-splitting.lua
min_idle_connections = 1,
max_idle_connections = 2,
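(These sit in the proxy.global.config.rwsplit table near the top of the script; roughly like this, assuming the stock 0.8.x file:)
if not proxy.global.config.rwsplit then
    proxy.global.config.rwsplit = {
        min_idle_connections = 1,
        max_idle_connections = 2,
        is_debug = false
    }
end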
then started mysql-proxy like this:
./bin/mysql-proxy --defaults-file=mysql-proxy.cnf
and logged on to the proxy:
mysql -uproxy -ppassword -P4040 -h192.168.1.67
I open several mysql sessions against the proxy on port 4040 and execute SELECT statements again and again, but from the log I found that all the SELECT queries are sent to the same server, .76; only if I shut down .76 does it send the queries to slave .69. I don't know why load balancing is not working; is there some place where I made a mistake? Thank you in advance.

rw-splitting.lua seems to leave some of the implementation as an exercise for the reader. There is a comment 'pick a random backend', but I see no implementation of it, nor of a round-robin technique. The code seems to fill the backend servers from the top, moving on to the next in the array only when there are no idle connections.
If there are always idle connections at the master, the current implementation prefers to go there. After that it uses the first idle connection in the read-only backend list; in this case .76, until you shut it down, at which point it moved on to .69. I can't see why .77, the read/write backend, is not being preferred; possibly this is related to the number of idle connections available.
It would seem that seeking the lowest proxy.global.backends.connected_clients, the number of connections currently active on a backend, would be a good way to prioritize which backend is used.
You should also take a look at the balance module, lib/mysql-proxy/lua/proxy/balance.lua.
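A minimal sketch of that idea, assuming the backend fields (connected_clients, type, state) that the admin plugin already exposes; pick_ro_backend is a hypothetical helper, not part of the shipped script:
-- pick the read-only backend with the fewest connected clients
function pick_ro_backend()
    local best_ndx, best_clients
    for i = 1, #proxy.global.backends do
        local s = proxy.global.backends[i]
        -- skip the read/write master and anything marked down
        if s.type == proxy.BACKEND_TYPE_RO and
           s.state ~= proxy.BACKEND_STATE_DOWN then
            if best_clients == nil or s.connected_clients < best_clients then
                best_ndx, best_clients = i, s.connected_clients
            end
        end
    end
    return best_ndx  -- nil when no read-only backend is usable
end
connect_server() in rw-splitting.lua would then set proxy.connection.backend_ndx to the returned index instead of taking the first backend that has an idle connection.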

Related

How to use option Arbitration=WaitExternal in MySQL Cluster?

I'm currently reading the MySQL Reference Manual and noticed that there is an NDB config option, Arbitration=WaitExternal. The question is how to use this option and how to implement an external cluster manager.
The Arbitration parameter also makes it possible to configure arbitration in such a way that the cluster waits until after the time determined by ArbitrationTimeout has passed for an external cluster manager application to perform arbitration instead of handling arbitration internally. This can be done by setting Arbitration = WaitExternal in the [ndbd default] section of the config.ini file. For best results with the WaitExternal setting, it is recommended that ArbitrationTimeout be 2 times as long as the interval required by the external cluster manager to perform arbitration.
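The relevant config.ini entries would then look like the following sketch; the timeout value is only illustrative, following the "two times the manager's interval" recommendation above:
[ndbd default]
Arbitration=WaitExternal
ArbitrationTimeout=6000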
A bit of git annotate and some searching of original design docs says the following:
When a node is about to send an arbitration message to the arbitrator, it will instead issue the following log message:
case ArbitCode::WinWaitExternal: {
    char buf[8*4*2+1];
    sd->mask.getText(buf);
    BaseString::snprintf(m_text, m_text_len,
                         "Continuing after wait for external arbitration, "
                         "nodes: %s", buf);
    break;
}
So e.g.
Continuing after wait for external arbitration, nodes: 1,2
The external clusterware should check for this message at the same interval as the ArbitrationTimeout.
When it discovers this message, the external clusterware should kill the data node(s) that it decides should lose the arbitration.
This kill will be noted by the NDB data nodes and will settle the matter of which node is to survive.
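For illustration only, the external clusterware's watcher could be a small script that tails the management node's cluster log and fences the loser. The log path, the poll interval, the node-selection rule, and fencing via ndb_mgm are all assumptions here, not part of the referenced design docs:
-- hypothetical external arbitration watcher (sketch)
local LOG = "/var/lib/mysql-cluster/ndb_1_cluster.log"  -- assumed path
local POLL = 3  -- seconds; keep in step with ArbitrationTimeout

local offset = 0
while true do
    local f = assert(io.open(LOG, "r"))
    f:seek("set", offset)
    for line in f:lines() do
        local nodes = line:match(
            "Continuing after wait for external arbitration, nodes: ([%d,]+)")
        if nodes then
            -- naive rule: the last-listed node loses; a real manager would
            -- decide from its own view of the cluster
            local loser = nodes:match("(%d+)$")
            os.execute("ndb_mgm -e '" .. loser .. " STOP'")
        end
    end
    offset = f:seek()
    f:close()
    os.execute("sleep " .. POLL)
end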

Agile PLM Unable to extract {0}, see the log for details

I am new to Agile PLM.
I am getting an error, "Unable to extract {0}, see the log for details", in ATOs.
Can anyone help me resolve this and find the root cause of this issue?
Log in to the Java client and try to do a destination reset.
Please check for space on your managed servers; generally this error appears when occupancy on a server is more than 80%.
A third approach you can try:
Disable all the subscribers.
Test whether the destination works or not.
A fourth approach:
If you have a clustered environment, check the ACS configuration: if ACS.Skipserver is true on all the servers, that is not a valid scenario; it should be true on all but one, so that ACS runs on a single server.
Apart from the above suggestions, if your issue is still not resolved, check whether any exception occurs during the extract. If you are using a clustered environment, ACS should be enabled in only one managed server; assuming that, check the STDOUT log of the particular managed server (if using WLS) where ACS is enabled. If the WLS managed servers are installed as Windows services, you need to edit the registry entries for each of them and modify the server startup parameters to set ACS.Skipserver=true on all but one.
Now, to print the logs, log in to the web client and go to Tools & Settings -> Administration -> Logging Configuration, then set the following entries to DEBUG:
com.agile.acs.PCExtractTask
com.agile.acs.ScheduleMaster
com.agile.acs.ScheduledEventTask
com.agile.extract.server
com.agile.extract.server.ExtractService

Messages stuck or lost in ActiveMQ cluster

I've set up a small ActiveMQ Network of Brokers to increase reliability. It consists of 3 nodes with the following properties (full config template file is available here):
ActiveMQ Version 5.13.3 (latest as of July 16)
Local LevelDB persistence adapter
NetworkConnector uri="static:(tcp://${OTHER_NODE1}:61616,tcp://${OTHER_NODE2}:61616)" with the two variables set, e.g. for node2, to node1 and node3 (unidirectional connections between all nodes).
Clients connect with failover:(tcp://node1:61616,tcp://node2:61616,tcp://node3:61616), send and retrieve messages as needed.
The failover protocol randomizes the target machine, so messages might be sent back and forth inside the cluster.
There are two (failing) scenarios:
As it is described now, some messages are not delivered because they are not allowed to go "back". This is done to avoid loops and is described in this blog post.
Activating the replayWhenNoConsumers flag as described in the blog and in NoB: Stuck Messages causes those messages to be recognized as duplicates. With enableAudit enabled, I get "cursor got duplicate send ID"; disabling it gives me "<MSG> paged in, is cursor audit disabled? Removing from store and redirecting to dlq".
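For reference, those two settings end up in activemq.xml roughly like this (a sketch following the referenced blog post; the queue=">" wildcard is just an example):
<policyEntry queue=">" enableAudit="false">
  <networkBridgeFilterFactory>
    <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
  </networkBridgeFilterFactory>
</policyEntry>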
Maybe this is trivial to fix - does anybody have an idea?

mysql max_user_connections bot traffic

Every month or two a swarm of robots visits my site and opens connections so fast that my current max_user_connections value of 25 (I will increase it to 75) is reached. Currently I restart the server and it works fine again until the next swarm comes. It is a webshop programmed in Perl which gets its data using DBI connect.
So I have some questions:
Will the problem resolve itself after some time, or will the open processes run until a restart, trying to get info from the locked DB?
Is it possible to do a small query to check the number of connections on the DB and exit if it is too high?
Any other ideas for protection against DoS attacks or bot swarms (I thought about restricting Asian IPs in .htaccess)?
You can detect robots with a module, for example HTTP::BrowserDetect:
use HTTP::BrowserDetect;

# in a CGI environment the user agent string comes from %ENV
my $user_agent_string = $ENV{'HTTP_USER_AGENT'};
my $browser = HTTP::BrowserDetect->new($user_agent_string);

if ( $browser->robot() ) {
    # don't open a MySQL connection;
    # return a cached version of the requested page,
    # or something like that
    ...
}

MySQL listen notify equivalent

Is there an equivalent of PostgreSQL's NOTIFY and LISTEN in MySQL? Basically, I need to listen to triggers in my Java application server.
OK, so what I found is that you can create UDFs (user-defined functions) in MySQL that can do almost anything, but they need to be written in C/C++. They can then be called from triggers on updates in the database and notify your application when an update has happened. I saw that there are some security concerns. I have not used this myself, but from what I can see it looks like something that could accomplish what you want to do, and more.
http://dev.mysql.com/doc/refman/5.6/en/adding-udf.html
The GitHub project mysql-notification provides a MySQL user-defined function, MySQLNotification(), as a plugin to MySQL that sends notification events via a socket interface. The project includes a sample NodeJS test server that receives the notification events, which could be adapted for Java or any other socket service.
Example use:
DELIMITER ##
CREATE TRIGGER <triggerName> AFTER INSERT ON <table>
FOR EACH ROW
BEGIN
    SELECT MySQLNotification(NEW.id, 2) INTO @x;
END##
The project includes full source code and installation instructions for OS X and Linux. The license is GPLv3.
No, there aren't any built-in functions like these yet.
You need to poll the database (say, every 1-5 seconds), selecting rows by a premade flag column such as read = 0/1. After
SELECT * FROM mytable WHERE read = 0
update the rows you fetched with read = 1.
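A sketch of that polling pattern; note that read is a reserved word in MySQL, so a flag column with that name has to be backquoted (the table and column names are just examples):
-- fetch unseen rows
SELECT * FROM mytable WHERE `read` = 0;
-- then mark them as handled so the next poll skips them
UPDATE mytable SET `read` = 1 WHERE `read` = 0;
To avoid marking rows that arrive between the two statements, run both inside one transaction with SELECT ... FOR UPDATE.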
I needed to do this, so I designed my application to send the update notices itself.
E.g.
--Scenario--
User A is looking at record 1
User B saves an update to record 1 while User A has it open.
Process:
I wrote my own socket server as a Windows service. I designed a queue-like system which is basically
EntityType EntityID NoticeType
where EntityType is the type of POCO in my data layer that needs to send out notices, EntityID is the primary-key value of the row that changed in SQL (the values of the POCO), and NoticeType is 1 = Updated, 2 = Inserted, and 3 = Deleted.
The socket server accepts connections from the server-side application code on a secure connection, meaning client-side code cannot make requests that are designed to be sent by the server-side application code.
The socket server accepts a message like
900 1 1023 1
which means the server needs to notify concerned client connections that entity type 1 ("Person") with ID 1023 was updated.
The server knows which users need to be notified because, when users look at a record, they are registered in the socket server as having an interest in the record and the record's ID; this is done by the WebSocket code in the client-side JavaScript.
Record 1 is a POCO in my app code that has IsNew and IsDirty fields (using Entity Framework 6 and MySQL). If User B's save caused an actual change (and not just a save of existing data), IsDirty will be true on the postback of User B's POCO.
The application code sees the record is dirty, then notifies the socket server with a server-side socket message (which is allowed) that says entity 1 with ID 1023 was updated.
The socket server sees it and puts it in the queue.
Being .NET, I have a class for concerned users that uses the same POCOs from the data layer, running in the socket server Windows service. I use LINQ to select users who are working with an entity matching the entity type and primary-key ID of the entity in the queue.
It then loops through those users and sends each of them a socket message like
901 1 1023 1
letting them know the entity was updated.
The JavaScript on the client side receives it, causing User B's page to do an AJAX postback on record 1; but what happens for User A is different.
If User A was in the process of making a change, they get a pop-up showing what changed and what their new value will be if they click save, and asking which change they want to keep. If User A doesn't have a change in progress, the page does an AJAX postback with a notification bar at the top that says "Record Change: Refreshed Automatically" and expires after a few seconds.
The cons of this:
1. It's very complex.
2. It won't catch insert/update/delete operations that happen outside of the application.
In my case, 2 won't happen, and if it does it will be by myself or another dev who knows how to manually create the notify-queue requests (I am building an admin page for that).
You can use https://maxwells-daemon.io to do this.
It is based on the MySQL binlog: when a change occurs in the database, it sends a JSON message with the update to Kafka, RabbitMQ, or other streaming platforms.
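For example, an insert might produce a message like the following (field layout per Maxwell's documented JSON format; the database, table, and values here are made up):
{"database": "shop", "table": "orders", "type": "insert", "ts": 1526310000, "xid": 23396, "commit": true, "data": {"id": 1, "status": "new"}}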