Spymemcached client auto-reconnect to another server in Couchbase Cluster?

I read the Couchbase rebalancing document (http://blog.couchbase.com/rebalancing-couchbase-part-i), which says: "A client losing its connection to the cluster will attempt to reestablish (configurable). Anytime it reconnects (first time or not) it gets the latest map that the cluster has. Ironically, a flaky network in theory might just help here to keep the map constantly updated during a rebalance, but that's for a different discussion."
I use Spymemcached 2.7.3; how can I achieve that?
To give an example: my Java client adds two servers (10.0.0.40 and 10.0.0.15, as URLs) to connect to the Couchbase cluster. But in reality, when 10.0.0.40 goes down, the persistent connection is not kept. I have to restart my client to switch to 10.0.0.15. How can my client reconnect to 10.0.0.15 when 10.0.0.40 goes down, without restarting the application?
Updated:
I use the code below to connect to the Couchbase cluster:
ArrayList<URI> listAddr = new ArrayList<>();
listAddr.add(new URI("http://10.0.0.40:8091/pools"));
listAddr.add(new URI("http://10.0.0.15:8091/pools"));
listAddr.add(new URI("http://10.0.0.16:8091/pools"));
client = new MemcachedClient(new BinaryConnectionFactory(), listAddr, "test", "test", "");
I want my Java client to automatically reconnect to another server in the pool (.40, .15, .16) to get the topology (while the client is still running) if the first server in the pool (.40) fails.
Can I achieve this with spymemcached, or do I have to move to the Couchbase Java SDK?

The spymemcached Java client does not handle Membase/Couchbase failover for a particular node.
You can check here.
If you update your Java client to the Couchbase Java client, you can handle failover by removing the failed node from the cluster.
For more information you can check here or here.
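For illustration, here is a minimal sketch of what the move to the (1.x-era) couchbase-client library could look like; it bootstraps from the same URI list as in the question and keeps the cluster map updated across failover and rebalance. The bucket name and empty password are carried over from the question as assumptions, and the exact constructor may differ between SDK versions.
import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class CouchbaseConnect {
    public static void main(String[] args) throws Exception {
        // Bootstrap list: only one of these nodes needs to be reachable for the
        // client to download the cluster map; afterwards it tracks topology changes.
        List<URI> baseUris = new ArrayList<URI>();
        baseUris.add(new URI("http://10.0.0.40:8091/pools"));
        baseUris.add(new URI("http://10.0.0.15:8091/pools"));
        baseUris.add(new URI("http://10.0.0.16:8091/pools"));

        // Bucket name/password taken from the question (assumed here).
        CouchbaseClient client = new CouchbaseClient(baseUris, "test", "");

        client.set("someKey", 0, "someValue").get();
        System.out.println(client.get("someKey"));

        client.shutdown();
    }
}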

Related

VerneMQ plugin_chain_exhausted Authentication MySQL

I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes, using MySQL (Cloud SQL) for auth. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection as "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but every time it gets harder and harder until it just won't connect anymore.
If I restart VerneMQ, everything goes back to normal.
I have at most 3 clients connected at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's like the server degrades over time. The status web page reports no problems.
My VerneMQ server was built from the Git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be let in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq_passwd and vmq_acl plugins?
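As a rough sketch only (verify the key names against your own vernemq.conf, since they can differ between VerneMQ versions), a configuration that leaves the MySQL-backed vmq_diversity plugin as the only auth plugin in the chain would look something like this:
# vernemq.conf sketch: disable the file-based auth plugins so only
# vmq_diversity (MySQL) remains in the auth chain
plugins.vmq_passwd = off
plugins.vmq_acl = off
plugins.vmq_diversity = on
vmq_diversity.auth_mysql.enabled = on
allow_anonymous = off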

MarkLogic Cluster - Add data in 1st host & update in 2nd host throws error

The MarkLogic setup is as follows:
3 hosts
Data configuration:
- 1 master forest on each host
- 1 replica for each host on a different host
We have a MarkLogic cluster (3 hosts, with failover) deployed on Azure VMs.
We are using MarkLogic Content Pump (MLCP) to ingest data into MarkLogic.
This is what we have implemented:
- Installed Java on the 1st host
- Copied the MLCP tool
- Ingested data by providing the 1st server as the host parameter
Now we have a batch of XMLs to update back into MarkLogic.
With the failover setup in place, the 1st host became unavailable for some reason, so when I tried to ingest the data through the 2nd host, I started getting an error that the record was ingested on a different host, so the update can't happen from here.
So I would like to know the best practices to follow for the ingestion process.
To enable the system to fail over reliably, you will also need to set up replicas for the Security, App Services, and any other system databases you may be using as part of your architecture.
The reason you are unable to connect to the other hosts is that the Security database is on host 1, so you are unable to authenticate. Once that database is configured for failover, you should no longer run into those issues.
The documentation covers that setup here:
https://docs.marklogic.com/guide/cluster/config-both-failover#id_57935

After Aurora Cluster DB failover, unable to write to DB

Right now I am connecting to a cluster endpoint that I have set up for an Aurora DB-MySQL compatible cluster, and after I do a "failover" from the AWS console, my web application is unable to properly connect to the DB that should be writable.
My setup is like this:
Java web app (Tomcat 8) with HikariCP as the connection pool and Connector/J as the MySQL driver. I am evaluating Aurora-MySQL to see if it will satisfy some of the application's needs. The web app sits in an EC2 instance that is in the same VPC and SG as the Aurora-MySQL cluster. I am connecting through the cluster endpoint to get to the database.
After a failover, I would expect HikariCP to break connections (it does) and then attempt to reconnect (it does); however, the application must be connecting to the wrong server, because any time a write hits the database, a SQLException is thrown that says:
The MySQL server is running with the --read-only option so it cannot execute this statement
What is the solution here? Should I rework my code to flush DNS after all connections go down, or after I start receiving this error, and then try to re-initiate connections after that? That doesn't seem right...
I don't know why I keep asking questions if I just answer them (I should really be more patient), but here's an answer in case anyone stumbles upon this in a Google search:
RDS uses DNS changes behind the cluster endpoint to make failover look "seamless". Since the IP behind the hostname can change, if there is any sort of DNS caching going on, you can see pretty quickly how a change won't be reflected. Here's a page from the AWS docs that goes into it a bit more: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html
To resolve my issue, I went into the JVM's security file and changed the DNS cache TTL (networkaddress.cache.ttl) to 0 just to verify that this was what was happening. It was. Now I just need to figure out how to do it properly...
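The AWS page linked above also shows how to set the TTL programmatically instead of editing the java.security file on disk; a minimal sketch (the 60-second value is just an example, not a recommendation) might be:
import java.security.Security;

public class JvmDnsTtl {
    public static void main(String[] args) {
        // Set the JVM-wide DNS cache TTL (in seconds) before any connections are
        // opened, so the DNS change behind the Aurora cluster endpoint is picked
        // up after a failover instead of being cached forever.
        Security.setProperty("networkaddress.cache.ttl", "60");

        // ... initialize HikariCP / open connections after this point ...
    }
}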

PHP + MySQL connection pool [duplicate]

Is it possible to cache database connections when using PHP like you would in a J2EE container? If so, how?
There is no connection pooling in PHP.
mysql_pconnect and connection pooling are two different things.
There are many problems associated with mysql_pconnect; you should read the manual first and use it carefully, but it is not connection pooling.
Connection pooling is a technique where the application server manages the connections. When the application needs a connection, it asks the application server for one, and the application server returns one of the pooled connections if any is free.
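To make the contrast concrete, here is a rough sketch, in Java (where this pattern comes from), of application-managed pooling using a pool library such as HikariCP; the JDBC URL and credentials are placeholders:
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/test"); // placeholder
        config.setUsername("user");                            // placeholder
        config.setPassword("secret");                          // placeholder
        config.setMaximumPoolSize(10); // the pool, not the script, caps open connections

        HikariDataSource pool = new HikariDataSource(config);

        // Borrow a connection from the pool; closing it returns it to the pool
        // instead of tearing down the underlying connection.
        try (Connection conn = pool.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println(rs.getInt(1));
        }

        pool.close();
    }
}
PHP has no equivalent long-lived process to own such a pool between requests, which is the point being made above.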
We can do connection scaling in PHP; for that, please go through the following link: http://www.oracle.com/technetwork/articles/dsl/white-php-part1-355135.html
So: no connection pooling in PHP.
As Julio said, Apache releases all resources when the current request ends. You can use mysql_pconnect, but you are limited with that function and you must be very careful. Another choice is to use the singleton pattern, but none of this is pooling.
This is a good article: https://blogs.oracle.com/opal/highly-scalable-connection-pooling-in-php
Also read this one http://www.apache2.es/2.2.2/mod/mod_dbd.html
Persistent connections are nothing like connection pooling. A persistent connection in PHP will only be reused if you make multiple DB connects within the same request/script execution context. In most typical web-dev scenarios you'll max out your connections much faster with mysql_pconnect, because your script has no way to get a reference to any open connections on the next request. The best way to use DB connections in PHP is to make a singleton instance of a DB object so the connection is reused within your script's execution. This still incurs at least one DB connect per request, but it's better than making multiple DB connects per request.
There is no real DB connection pooling in PHP due to the nature of PHP. PHP is not an application server that can sit there between requests and manage references to a pool of open connections, at least not without some kind of major hack. I think in theory you could write an app server in PHP and run it as a command-line script that would just sit there in the background, keep a bunch of DB connections open, and pass references to them to your other scripts, but I don't know if that would be possible in practice, how you'd pass the references from your command-line script to other scripts, and I rather doubt it would perform well even if you could pull it off. Anyway, that's mostly speculation. I did just notice the link someone else posted to an Apache module that allows connection pooling for prefork servers such as PHP. Looks interesting:
https://github.com/junamai2000/mod_namy_pool#readme
I suppose you're using mod_php, right?
When a PHP file finishes executing, all its state is destroyed, so there's no way (in PHP code) to do connection pooling. Instead you have to rely on extensions.
You can use mysql_pconnect so that your connections won't get closed after the page finishes; that way they get reused on the next request.
This might be all you need, but it isn't the same as connection pooling, as there's no way to specify the number of connections to keep open.
You can use MySQLi.
For more info, scroll down to the "Connection pooling" section at http://www.php.net/manual/en/mysqli.quickstart.connections.php#example-1622
Note that Connection pooling is also dependent on your server (i.e. Apache httpd) and its configuration.
mysqli opens a new connection only if an unused persistent connection for a given combination of "host, username, password, socket, port and default database" cannot be found in the open connection pool; otherwise it reuses an already-open persistent connection, which is similar in spirit to connection pooling. The use of persistent connections can be enabled and disabled with the PHP directive mysqli.allow_persistent. The total number of connections opened by a script can be limited with mysqli.max_links (this may be interesting if you are hitting your hosting server's max_user_connections limit). The maximum number of persistent connections per PHP process can be restricted with mysqli.max_persistent.
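As a small sketch of the mechanism described above (see the mysqli quickstart page linked earlier): prefixing the host name with "p:" asks mysqli for a persistent connection. The host and credentials below are placeholders.
<?php
// "p:" requests a persistent connection; mysqli will reuse an idle one with the
// same host/user/password/socket/port/database combination if it exists.
$mysqli = new mysqli('p:127.0.0.1', 'user', 'secret', 'test');
if ($mysqli->connect_errno) {
    die('Connect failed: ' . $mysqli->connect_error);
}
$result = $mysqli->query('SELECT 1');
var_dump($result->fetch_row());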
In a wider programming context this is the job of the web/app server, but here it is handled by PHP's mysqli directives in a way that supports connection reuse. You may also implement a singleton class to get a static connection instance to reuse, just as in Java. As a reminder, Java also doesn't support connection pooling as part of standard JDBC; pools are separate modules/layers on top of JDBC drivers.
Coming to PHP, the good thing is that for the common databases in the PHP ecosystem it does support persistent database connections, which can keep a connection alive across requests (subject to limits such as the max_requests setting in your PHP configuration) and avoid creating a new connection on each request. So check the docs in detail; it solves most of your challenges. Note that PHP is not as sophisticated as Java when it comes to extensive multi-threading, concurrent processing, and powerful asynchronous event handling, so it is much less natural for PHP to have such a built-in pooling mechanism.
You cannot instantiate connection pools manually.
But you can use the "built-in" connection pooling of the mysql_pconnect function.
I would like to suggest PDO::ATTR_PERSISTENT
Persistent connections are links that do not close when the execution of your script ends. When a persistent connection is requested, PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link.
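A minimal sketch of that option (the DSN and credentials are placeholders):
<?php
// PDO::ATTR_PERSISTENT => true makes PHP look for an existing persistent
// connection with the same DSN/credentials before opening a new one.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=test', // placeholder DSN
    'user',                             // placeholder
    'secret',                           // placeholder
    array(PDO::ATTR_PERSISTENT => true)
);
$stmt = $pdo->query('SELECT 1');
var_dump($stmt->fetchColumn());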
Connection pooling works on the MySQL server side like this:
If persistent connections are enabled in the MySQL server config, MySQL keeps a connection open and in a sleep state after the requesting client (the PHP script) finishes its work and dies.
When a 2nd request comes in with the same credentials (same user name, same password, same connection parameters, same database name, maybe from the same IP, I am not sure about the IP), MySQL brings the previous connection from the sleep state back to the active state and lets the client use it. This helps MySQL save the initial setup cost of a connection and reduces the total number of connections.
So the connection pooling option is actually available on the MySQL server side. On the PHP side there is no such option; mysql_pconnect() is just a wrapper that tells PHP not to send the connection-close signal at the end of the script run.
For features such as connection pooling you need to install the Swoole extension first: https://openswoole.com/
It adds async features to PHP.
After that it's trivial to add MySQL and Redis connection pooling:
https://github.com/open-smf/connection-pool
Some PHP frameworks come with pooling built in: https://hyperf.wiki/2.2/#/en/pool

How to keep server and application separate

I have a Node.js application running on a server with node-mysql and Express. At first I faced a problem where some exceptions were not handled and the application would go down on network connectivity issues.
I handled all uncaught exceptions, and this time the server wouldn't go down, but instead it would hang. I figured it was because I returned a response only if the query didn't raise an exception, so I handled all query-related exceptions too.
Next, if the MySQL server terminated the connection for some reason, my application wouldn't reconnect. I tried reconnecting, but it would give an error related to "enqueue connection handshake" or something. From another Stack question I gathered that I was supposed to use a connection pool so that if the server terminates a connection the app regains connectivity somehow, which I did.
My question here is that each time I faced an issue I had to shut down the whole application, and thanks to Node.js, where the server is configured programmatically, the server goes down too. Can I, or better yet how can I, decouple my server and application almost completely, so that if I make a change in my application I don't have to redeploy?
Especially for the case right now: my application is constantly giving me a connection pool error on the server while the development version works fine, so even if I restart the application I am not sure when I will hit this problem again, which makes it hard to properly diagnose.
Let me know if anyone needs more info regarding my question.
Are you using a front-end framework to serve your application, or are you serving it all from server calls?
So fundamentally, if your server barfs for any reason (e.g. a 500 error), you WANT to shut down and restart, because once your server is in that state, all of your in-transit data and your stack are in an unknown state. There's no way to correctly recover from that, so you are safer, from both a server and an end-user point of view, to shut down the process and restart.
You can minimise the impact of this by using something like Node's Cluster module, which allows you to fork child processes of your server and generate multiple instances of the same server, connected to the same database and listening on the same port. Then, if a user (or the server itself) hits an unhandled exception, that worker process can die and be restarted without shutting down your entire server.
Edit: Here's a snippet:
var cluster = require('cluster');
var threads = require('os').cpus().length;

if (cluster.isMaster) {
    // Fork one worker per CPU core.
    for (var i = 0; i < threads; i++) {
        cluster.fork();
    }

    // When a worker dies, log it and start a replacement so the server keeps serving.
    cluster.on('exit', function(dead, code, signal) {
        console.log('worker ' + dead.process.pid + ' died.');
        var worker = cluster.fork();
        console.log('worker ' + worker.process.pid + ' started');
    });
} else {
    //
    // do your server logic in here
}
That being said, there's no way for you to run your application and server separately if Node is serving your client content. Once you terminate your server, your endpoints are down. If you really want to be able to keep a client-side application active and reboot your server, you'd have to separate the logic entirely, i.e. have your application in a different project from your server, and use your server for API endpoints only.
As for connection pools in node-mysql: I have never used that module, so I couldn't say what best practice is there.