mysql max_user_connections bot traffic

Every month or two a swarm of robots visits my site and opens connections so fast that my current max_user_connections value of 25 (I will increase it to 75) is reached. Currently I restart the server and it works fine again until the next swarm comes. It is a webshop programmed in Perl which gets its data using DBI connect.
So I have some questions:
Will the problem solve itself after some time, or will the open processes run until a restart, trying to get info from the locked DB?
Is it possible to do a small query to check the current number of connections on the DB, and exit if it is too high?
Any other ideas for protection against DoS attacks or bot swarms (I thought about restricting Asian IPs in .htaccess)?

You can detect robots with a module, for example HTTP::BrowserDetect.
use HTTP::BrowserDetect;

my $browser = HTTP::BrowserDetect->new($user_agent_string);

if ( $browser->robot() ) {
    # don't open a MySQL connection;
    # return a cached version of the requested page
    # or something like that
    ...
}
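Regarding the second question: yes, you can ask the server how many connections your account currently holds and compare that against max_user_connections before doing any real work. A minimal sketch in Python rather than Perl (the user name, credentials and headroom threshold are placeholders; note the check itself costs one connection, so keep some headroom):

import pymysql

conn = pymysql.connect(host='localhost', user='shop', password='secret')
with conn.cursor() as cur:
    # Connections currently held by this account
    cur.execute("SELECT COUNT(*) FROM information_schema.processlist "
                "WHERE user = %s", ('shop',))
    in_use = cur.fetchone()[0]
    # The per-account cap (25 now, 75 after the planned increase)
    cur.execute("SHOW VARIABLES LIKE 'max_user_connections'")
    cap = int(cur.fetchone()[1])
conn.close()

# Bail out early instead of piling up more connections.
if cap and in_use >= cap - 5:
    print('Status: 503 Service Unavailable')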

Related

Google Cloud SQL No Response

We are running a Sails.js API on Google Container Engine with a Cloud SQL database and recently we've been finding some of our endpoints have been stalling, never sending a response.
I had a health check monitoring /v1/status and it registered 100% uptime when I had the following simple response:
status: function( req, res ){
  res.ok('Welcome to the API');
}
As soon as we added a database query, the endpoint started timing out. It doesn't happen all the time, but seemingly at random intervals, sometimes for hours on end. This is what we changed the query to:
status: function( req, res ){
  Email.findOne({ value: "someone@example.com" }).then(function( email ){
    res.ok('Welcome to the API');
  }).fail(function(err){
    res.serverError(err);
  });
}
Rather suspiciously, this all works fine in our staging and development environments; it's only when the code is deployed to production that the timeout occurs, and it only occurs some of the time. The only things that change between staging and production are the database we are connecting to and the load on the server.
As I mentioned earlier, we are using Google Cloud SQL and the Sails-MySQL adapter. We have the following error stacks from the production server:
AdapterError: Invalid connection name specified
at getConnectionObject (/app/node_modules/sails-mysql/lib/adapter.js:1182:35)
at spawnConnection (/app/node_modules/sails-mysql/lib/adapter.js:1097:7)
at Object.module.exports.adapter.find (/app/node_modules/sails-mysql/lib/adapter.js:801:16)
at module.exports.find (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:120:13)
at module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/adapter/dql.js:163:10)
at _runOperation (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:408:29)
at run (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/operations.js:69:8)
at bound.module.exports.findOne (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/finders/basic.js:78:16)
at bound [as findOne] (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at Deferred.exec (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:501:16)
at tryCatcher (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/util.js:26:23)
at ret (eval at <anonymous> (/app/node_modules/sails/node_modules/waterline/node_modules/bluebird/js/main/promisify.js:163:12), <anonymous>:13:39)
at Deferred.toPromise (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:510:61)
at Deferred.then (/app/node_modules/sails/node_modules/waterline/lib/waterline/query/deferred.js:521:15)
at Strategy._verify (/app/api/services/passport.js:31:7)
at Strategy.authenticate (/app/node_modules/passport-local/lib/strategy.js:90:12)
at attempt (/app/node_modules/passport/lib/middleware/authenticate.js:341:16)
at authenticate (/app/node_modules/passport/lib/middleware/authenticate.js:342:7)
at Object.AuthController.login (/app/api/controllers/AuthController.js:119:5)
at bound (/app/node_modules/sails/node_modules/lodash/dist/lodash.js:729:21)
at routeTargetFnWrapper (/app/node_modules/sails/lib/router/bind.js:179:5)
at callbacks (/app/node_modules/sails/node_modules/express/lib/router/index.js:164:37)
Error (E_UNKNOWN) :: Encountered an unexpected error :
Could not connect to MySQL: Error: Pool is closed.
at afterwards (/app/node_modules/sails-mysql/lib/connections/spawn.js:72:13)
at /app/node_modules/sails-mysql/lib/connections/spawn.js:40:7
at process._tickDomainCallback (node.js:381:11)
Looking at the errors alone, I'd be tempted to say that we have something misconfigured. But the fact that it works some of the time (and has previously been working fine!) leads me to believe that there's some other black magic at work here. Our Cloud SQL instance is D0 (though we've tried upping the size to D4) and our activation policy is "Always On".
EDIT: I had seen others complain about Google Cloud SQL (e.g. this SO post) and I was suspicious, but we have since moved our database to Amazon RDS and we are still seeing the same issues, so it must be a problem with Sails and the MySQL adapter.
This issue is leading to hours of downtime a day, we need it resolved, any help is much appreciated!
This appears to be a sails issue, and not necessarily related to Cloud SQL.
Is there any way the QPS limit for Google Cloud SQL is being reached? See here: https://cloud.google.com/sql/faq#sizeqps
Why is my database instance sometimes slow to respond?
In order to minimize the amount you are charged for instances on per use billing plans, by default your instance becomes passive if it is not accessed for 15 minutes. The next time it is accessed there will be a short delay while it is activated. You can change this behavior by configuring the activation policy of the instance. For an example, see Editing an Instance Using the Cloud SDK.
It might be related to your activation policy setting. If it is set to ON_DEMAND, the instance sleeps to save you money, so the first query that has to wake the instance is slow. This might cause the timeout.
https://cloud.google.com/sql/faq?hl=en
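If a sleeping instance is indeed the cause, a common client-side mitigation is to retry the first failing query with a short backoff while the instance activates. A minimal sketch in Python (connection details and retry limits are placeholders, not from the original post):

import time
import pymysql

def query_with_retry(sql, params=None, attempts=3, delay=1.0):
    # Retry transient connection failures, e.g. while a slept
    # instance is being activated.
    for attempt in range(attempts):
        try:
            conn = pymysql.connect(host='db.example.com', user='app',
                                   password='secret', database='appdb',
                                   connect_timeout=10)
            try:
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            finally:
                conn.close()
        except pymysql.err.OperationalError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay * (attempt + 1))  # linear backoff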

MySQL proxy redirect Read/Write

We have a system with a master and multiple slaves.
Currently everything happens on the master, and the slaves are just there for backup.
We use CodeIgniter as a development platform.
Now we have decided to use the slaves for the read queries and the master for the write queries.
I have been told that this is not doable without modifying the source code, because the proxy can't know the type of the query.
Any idea how to proceed with this without causing too much damage to a perfectly working system?
We will use this: http://dev.mysql.com/downloads/mysql-proxy/
It does exactly what we want. More info here:
http://jan.kneschke.de/2007/8/1/mysql-proxy-learns-r-w-splitting/
http://www.infoq.com/news/2007/10/mysqlproxyrwsplitting
http://archive.oreilly.com/pub/a/databases/2007/07/12/getting-started-with-mysql-proxy.html
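For intuition, a proxy can in fact know the type of a query by inspecting the first keyword of each statement, which is the heart of read/write splitting. A toy sketch in Python (the backend addresses are placeholders; real splitters such as rw-splitting.lua must also keep transactions and SELECT ... FOR UPDATE on the master):

import itertools

READ_BACKENDS = ['slave1:3306', 'slave2:3306']  # placeholder addresses
WRITE_BACKEND = 'master:3306'

_rr = itertools.cycle(READ_BACKENDS)  # naive round robin over the slaves

def route(sql):
    # Plain reads go to a slave; everything else goes to the master.
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb in ('SELECT', 'SHOW', 'DESCRIBE', 'EXPLAIN'):
        return next(_rr)
    return WRITE_BACKEND

print(route('SELECT * FROM products'))       # -> a slave
print(route("UPDATE products SET qty = 0"))  # -> master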
Something I was also looking for. A few months back I did something like this, but I added three web servers with master/slave MySQL servers: the first web server has mod_proxy enabled and all requests come to it. If a POST, PUT or DELETE request comes in, it is redirected to the write server; all GET and other requests go to the read server.
Here you can find the mod_proxy settings I used:
http://pastebin.com/a30BRHFq
Here you can read about load balancing:
http://www.rackspace.com/knowledge_center/article/simple-load-balancing-with-apache
I am still looking for a better solution with less hardware involved.
I figured out another solution through CI: create two database connections in the database.php file. Keep the slave MySQL server as the default database connection, and add another connection for the write-only server.
You can use this base model:
https://github.com/jamierumbelow/codeigniter-base-model
You need to extend your models with it. It has functionality for callbacks before and after insert, update, delete and get queries; you only need to add one custom method as a callback, change_db_group.
// this method goes in MY_Model
function change_db_group()
{
    $this->_database = $this->load->database('writedb', TRUE);
}
Now your example model:
class Example_Model extends MY_Model {
    protected $_table = 'example_table';
    protected $before_create = array('change_db_group');
    protected $before_update = array('change_db_group');
    protected $before_delete = array('change_db_group');
}
Your database connection will be changed before insert, update or delete queries are executed.

MySQL listen notify equivalent

Is there an equivalent of PostgresQL's notify and listen in MySQL? Basically, I need to listen to triggers in my Java application server.
OK, so what I found is that you can write UDFs (user-defined functions) in MySQL that can do anything, but they need to be written in C/C++. They can then be called from triggers on updates in the database and notify your application when an update happened. I saw that there are some security concerns. I have not used it myself, but from what I can see it looks like something that could accomplish what you want to do, and more.
http://dev.mysql.com/doc/refman/5.6/en/adding-udf.html
The github project mysql-notification provides a MySQL user defined function MySQLNotification() as a plugin to MySQL that will send notification events via a socket interface. This project includes a sample NodeJS test server that receives the notification events that could be adapted for Java or any other socket service.
Example use:
DELIMITER ##
CREATE TRIGGER <triggerName> AFTER INSERT ON <table>
FOR EACH ROW
BEGIN
    SELECT MySQLNotification(NEW.id, 2) INTO @x;
END##
The project includes full source code and installation instructions for OSX and Linux. The license is GPLv3.
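As a stand-in for the sample NodeJS server, a bare-bones socket listener might look like this in Python (the port and the wire format here are assumptions; check the mysql-notification README for the actual protocol):

import socket

HOST, PORT = '0.0.0.0', 7777  # hypothetical port

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)
while True:
    conn, addr = srv.accept()
    with conn:
        data = conn.recv(4096)
        if data:
            # React to the notification, e.g. refresh a cache
            print('notification from', addr, ':', data.decode(errors='replace'))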
No, there aren't any built-in functions like these yet.
You need to "ping" the database (every 1-5 seconds), selecting rows with a premade flag like read = 0/1. After
SELECT * FROM mytable WHERE read = 0
update the rows with read = 1.
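A minimal polling loop along these lines in Python (table, column and connection details are placeholders; note that read is a reserved word in MySQL, hence the backticks):

import time
import pymysql

def handle_change(row_id):
    # Hypothetical application callback
    print('row changed:', row_id)

conn = pymysql.connect(host='localhost', user='app', password='secret',
                       database='appdb', autocommit=True)

while True:
    with conn.cursor() as cur:
        cur.execute("SELECT id FROM mytable WHERE `read` = 0")
        for (row_id,) in cur.fetchall():
            handle_change(row_id)
            cur.execute("UPDATE mytable SET `read` = 1 WHERE id = %s",
                        (row_id,))
    time.sleep(2)  # poll every few seconds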
I needed to do this, so I designed my application to send the update notices itself.
E.g.
--Scenario--
User A is looking at record 1
User B saves an update to record 1 while User A has it open.
Process:
I wrote my own socket server as a Windows service. I designed a queue-like system which is basically:
EntityType EntityID NoticeType
where EntityType is the type of POCO in my data layer that needs to send out notices, EntityID is the primary key value of the row that changed in SQL (the values of the POCO), and NoticeType is 1 = Updated, 2 = Inserted and 3 = Deleted.
The socket server accepts connections from the server-side application code on a secure connection, meaning client-side code cannot make requests that are designed to be sent by the server-side application code.
The socket server accepts a message like
900 1 1023 1
Which would mean the server needs to notify concerned client connections that Entity Type 1 "Person" with ID 1023 was Updated.
The server knows which users need to be notified because when users look at a record, they are registered in the socket server as having an interest in the record and the record's ID, which is done by the WebSocket code in the client-side JavaScript.
Record 1 is a POCO in my app code that has IsNew and IsDirty fields (using Entity Framework 6 and MySQL). If User B's save caused an actual change (and not just saving existing data), IsDirty will be true on the postback of User B's POCO.
The application code sees that the record is dirty, then notifies the socket server with a server-side socket message (which is allowed) that says Entity 1 with ID 1023 was updated.
The socket server sees it and puts it in the queue.
Being .NET, I have a class for concerned users that uses the same POCOs from the data layer running in the socket server Windows service. I use LINQ to select users who are working with an entity matching the entity type and primary key ID of the entity in the queue.
It then loops through those users and sends them a socket message like
901 1 1023 1
letting them know the entity was updated.
The JavaScript on the client side receives it, causing User B's page to do an AJAX postback on Record 1. But what happens with User A is different.
If User A was in the process of making a change, they will get a pop-up showing them what changed and what their new value will be if they click save, and asking which change they want to keep. If User A doesn't have a change, the page does an AJAX postback with a notification bar at the top that says "Record Change: Refreshed Automatically" and expires after a few seconds.
The cons of this:
1. It's very complex.
2. It won't catch insert/update/delete operations performed outside of the application.
In my case, 2 won't happen, and if it does, it's by myself or another dev who knows how to manually create the notify queue requests (I'm building an admin page for that).
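For illustration, the opcode/EntityType/EntityID/NoticeType message described above could be parsed like this (a hypothetical sketch in Python; the author's service is .NET and the exact framing is not given):

NOTICE_TYPES = {1: 'Updated', 2: 'Inserted', 3: 'Deleted'}

def parse_notice(message):
    # e.g. "900 1 1023 1" -> opcode 900, entity type 1, ID 1023, Updated
    opcode, entity_type, entity_id, notice = (int(p) for p in message.split())
    return {
        'opcode': opcode,
        'entity_type': entity_type,
        'entity_id': entity_id,
        'notice': NOTICE_TYPES.get(notice, 'Unknown'),
    }

print(parse_notice('900 1 1023 1'))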
You can use https://maxwells-daemon.io to do so.
It is based on the MySQL binlogs; when a change occurs in the database, it sends a JSON message with the updates to Kafka, RabbitMQ or other streaming platforms.
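A minimal consumer sketch in Python using kafka-python, assuming Maxwell's default topic name of maxwell and a local broker (both are configurable):

import json
from kafka import KafkaConsumer

# Maxwell publishes one JSON document per row change.
consumer = KafkaConsumer('maxwell', bootstrap_servers='localhost:9092')
for msg in consumer:
    event = json.loads(msg.value)
    # Typical fields: database, table, type (insert/update/delete), data
    print(event['type'], event['database'], event['table'], event.get('data'))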

Need to be able to Insert/Delete New Groups in openfire via HTTP or MySQL

I know how to insert a new group via MySQL, and it works, to a degree. The problem is that the database changes are not loaded into memory if you insert the group manually. Sending a HUP signal to the process does work, but it is kludgy and a hack. I desire elegance :)
What I am looking to do, if possible is to make changes (additions/deletions/changes) to a group via MySQL, and then send an HTTP request to the openfire server to read the new changes. Or in the alternative, add/delete/modify groups similar to how the User Service works.
If anyone can help I would appreciate it.
It seems to me that if sending a HUP signal works for you, then that's actually quite a simple, elegant and efficient way to get Openfire to read your new group, particularly if you do it with the following command on the Openfire server (and assuming it's running a Linux/Unix OS):
pkill -f -HUP openfire
If you still want to send an HTTP request to prompt Openfire to re-read the groups, the following Python script should do the job. It is targeted at Openfire 3.8.2, and depends on Python's mechanize library, which in Ubuntu is installed with the python-mechanize package. The script logs into the Openfire server, pulls up the Cache Summary page, selects the Group and Group Metadata Cache options, enables the submit button and then submits the form to clear those two caches.
#!/usr/bin/python
import mechanize
import cookielib
# Customize to suit your setup
of_host = 'http://openfire.server:9090'
of_user = 'admin_username'
of_pass = 'admin_password'
# Initialize browser and cookie jar
br = mechanize.Browser()
br.set_cookiejar(cookielib.LWPCookieJar())
# Log into Openfire server
br.open(of_host + '/login.jsp')
br.select_form('loginForm')
br.form['username'] = of_user
br.form['password'] = of_pass
br.submit()
# Select which cache items to clear in the Cache Summary page
# On my server, 13 is Group and 14 is Group Metadata Cache
br.open(of_host + '/system-cache.jsp')
br.select_form('cacheForm')
br.form['cacheID'] = ['13','14']
# Activate the submit button and submit the form
c = br.form.find_control('clear')
c.readonly = False
c.disabled = False
r = br.submit()
# Uncomment the following line if you want to view results
#print r.read()

mysql-proxy 0.8.3 load balancing does not work

I have three MySQL nodes, listed below:
Master Address: 192.168.1.77:3306
Slave1 Address: 192.168.1.76:3306
Slave2 Address: 192.168.1.69:3306
After I installed mysql-proxy version 0.8.3 on 192.168.1.67, I created the configuration below:
[mysql-proxy]
admin-username=proxy
admin-password=proxy
admin-lua-script=/local/software/mysql-proxy/lib/mysql-proxy/lua/admin.lua
proxy-read-only-backend-addresses = 192.168.1.76:3306,192.168.1.69:3306
proxy-backend-addresses=192.168.1.77:3306
proxy-lua-script=/local/software/mysql-proxy/share/doc/mysql-proxy/rw-splitting.lua
log-file=/local/software/mysql-proxy/log/mysql-proxy.log
plugin-dir=/local/software/mysql-proxy/lib/mysql-proxy/plugins
plugins=proxy,admin,debug,replicant
log-level=debug
keepalive=true
I edited rw-splitting.lua:
min_idle_connections = 1,
max_idle_connections = 2,
Then I start mysql-proxy like this:
./bin/mysql-proxy --defaults-file=mysql-proxy.cnf
and log on to the proxy:
mysql -uproxy -ppassword -P4040 -h192.168.1.67
When I execute SELECT statements again and again in different mysql-proxy (port 4040) windows, the log shows that all the SELECT queries are sent to the same server, 76. Only if I shut down 76 does it send the queries to slave 69. I don't know why load balancing is not working. Is there some place where I made a mistake? Thank you in advance.
rw-splitting.lua seems to leave some of the implementation as an exercise for the reader. There is a comment 'pick a random backend', but I see no implementation of it, nor of a round-robin technique. The code seems to fill the backend servers from the top, moving on to the next in the array when there are no idle connections.
If there are always idle connections at the master, then the current implementation prefers to go there. After that it uses the first idle connection in the read-only backend list, in this case 76, until you shut it down, at which point it moves on to 69. I can't see why 77, the read/write backend, is not being preferred; possibly this is related to the number of idle connections available.
It would seem that seeking the lowest proxy.global.backends.connected_clients, the number of connections currently active on a backend, would be a good way to prioritize the backend used.
You should also take a look at the balance module, lib/mysql-proxy/lua/proxy/balance.lua.
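To illustrate the least-connections idea, picking the backend with the lowest connected_clients might look like this (a sketch in Python; the real logic would live in the Lua script and read proxy.global.backends):

# Hypothetical snapshot of the read-only backends
backends = [
    {'address': '192.168.1.76:3306', 'connected_clients': 4},
    {'address': '192.168.1.69:3306', 'connected_clients': 1},
]

def pick_backend(backends):
    # Prefer the backend with the fewest active connections.
    return min(backends, key=lambda b: b['connected_clients'])

print(pick_backend(backends)['address'])  # -> 192.168.1.69:3306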