I have a single-node Couchbase cluster with only the data service enabled.
According to the documentation, we need to add one more node to enable the FTS service. But I would like to enable FTS on the existing single-node setup without deleting the cluster/data or adding another node.
Can I enable FTS or any other service on the existing setup? If so, please let me know how.
As far as I know, there is no way to do what you are asking for. At best you can back up your data outside the node (e.g. with cbbackup), reconfigure the node from scratch with the services you want, and reload your data (with cbrestore). It's somewhat surprising that this isn't possible, but in production there is typically only one service per node anyway, so having to add a new server when you want a new service type is reasonable.
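For illustration, a rough sketch of that backup/reload cycle using the cbbackup and cbrestore tools that ship with Couchbase 4.x (host, credentials, paths, and bucket name are placeholders):

cbbackup http://localhost:8091 /tmp/cb-backup -u Administrator -p password
# re-initialize the node with both the data and FTS services enabled, then:
cbrestore /tmp/cb-backup http://localhost:8091 -b mybucket -u Administrator -p password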
We are going to use MaxScale as a SQL proxy in front of our MariaDB database, running a Galera cluster.
In a Galera cluster, when quorum is not achieved and a split-brain condition happens, some nodes become non-primary. The non-primary nodes start rejecting queries sent to them (as per the documentation).
Does MaxScale handle this automatically and stop sending queries to non-primary nodes until they rejoin the primary component?
I have tested that if a node goes down, MaxScale handles it properly and stops sending queries to that node. My question is: does it do the same for non-primary nodes? If not, how should this be handled?
PS: I am not actually able to test the non-primary case myself, which is why I am asking here. It would be great if somebody could help me reproduce and test this situation too.
Yes, the Galera monitoring in MaxScale (galeramon) will handle split-brain situations. The monitor uses the cluster UUID to detect which nodes are part of the cluster.
For more information, refer to the galeramon documentation.
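As a rough sketch, a galeramon monitor section in maxscale.cnf looks something like this (server names, credentials, and the interval are placeholders; option names may vary between MaxScale versions):

[Galera-Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxscale
password=maxscale_pw
monitor_interval=2000

With this monitor in place, the routers only send queries to nodes that galeramon reports as healthy members of the primary component.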
I've set up a Couchbase cluster with 2 nodes containing 300k docs across 4 buckets. The replicas option is set to 1, as there are only 2 machines.
But the documents are split, half on one node and half on the other. I need a copy of every document on both nodes, so that if one node goes down the other can still supply all the data to my app.
Is there a setting I missed when creating the cluster?
Can I still set the cluster to replicate all documents?
I hope someone can help.
Thanks
PS: I'm using Couchbase Community 4.5
UPDATE:
I'm adding screenshots of the cluster web interface and cbstats output:
the first shows the state with only one node up;
the next shows both nodes up;
then the cbstats results on both nodes while both are up and running.
As you can see, with only one node up, only half of the items are displayed. Does that mean the other half resides there as replicas but is not shown?
Can I still run my app consistently with only one node?
UPDATE:
I had to click fail-over manually to see the replicas become active on the remaining node, since with just two nodes in the cluster auto fail-over is disabled!
Couchbase Server will partition or shard the documents across the two nodes, as you observed. It will also place replicas on those nodes, based on your one-replica configuration.
To access a replica, you must use one of the Client SDKs.
For example, this Java code will attempt to retrieve a replica (getFromReplica("id", ReplicaMode.ALL)) if retrieval of the active document (get("id")) fails.
// Try the active copy first; if that errors, fall back to the replicas.
bucket.async()
      .get("id")
      .onErrorResumeNext(bucket.async().getFromReplica("id", ReplicaMode.ALL))
      .subscribe();
The ReplicaMode.ALL tells Couchbase to try all nodes with replicas and the active node.
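If you prefer the blocking API, a similar fallback can be sketched roughly like this (error handling simplified; assumes an already-opened Bucket named bucket):

import java.util.List;
import com.couchbase.client.java.ReplicaMode;
import com.couchbase.client.java.document.JsonDocument;

JsonDocument doc;
try {
    doc = bucket.get("id"); // read the active copy
} catch (RuntimeException e) {
    // Active node unreachable: fall back to the replica copies.
    List<JsonDocument> copies = bucket.getFromReplica("id", ReplicaMode.ALL);
    doc = copies.isEmpty() ? null : copies.get(0);
}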
So what was happening with only two nodes in the cluster was that auto fail-over didn't kick in automatically, as specified here:
https://developer.couchbase.com/documentation/server/current/clustersetup/automatic-failover.html
This means the data replicas were not activated on the remaining node unless fail-over was triggered manually.
The best thing is to have more than two nodes in the cluster before going into production.
To be honest, I should have read the documentation very carefully before asking any questions.
Thanks Jeff Kurtz for your help, you pushed me towards the solution (understanding how the Couchbase replica policy works).
I am trying the Apache Ignite data grid to query cached data using SQL.
I can load data into Ignite caches on startup from MySQL and CSV files, and I am able to query them using SQL.
To deploy in production, in addition to loading the caches on startup, I want to keep updating the various caches whenever new data is available in MySQL or new CSVs are created for some caches.
I cannot use read-through, as I will be using SQL queries.
How can this be done in Ignite?
Read-through cannot be configured for SQL queries. You can go through this discussion on the Apache Ignite users forum:
http://apache-ignite-users.70518.x6.nabble.com/quot-Read-through-quot-implementation-for-sql-query-td2735.html
If you elaborate a bit on your use case, I can suggest an alternative.
If you're updating the database directly, the only way to achieve this is to reload the data manually. You can have a trigger on the DB that somehow initiates the reload, or a mechanism that periodically checks whether there were any changes.
However, the preferable approach is to never update the DB directly, but to always go through the Ignite API with write-through enabled. This way you guarantee that the cache and the DB are always consistent.
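For illustration, a minimal write-through sketch (Person and PersonCacheStore are hypothetical names; the store would implement the JDBC mapping to your MySQL table):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonCacheStore.class));
cfg.setWriteThrough(true);
cfg.setIndexedTypes(Long.class, Person.class); // make the cache SQL-queryable

Ignite ignite = Ignition.start();
IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

// This put() updates the cache and, through the store, MySQL,
// so SQL queries against the cache always see current data.
cache.put(1L, new Person(1L, "Alice"));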
Here is the scenario.
I have two nodes in my Couchbase cluster, Node A and Node B. I have replication on, so B will act as the node where the replicated data of A goes.
Let's say I add a new record and it happens to get saved on Node A. Node A saves this data in RAM and on its disk successfully, but unfortunately it crashes before this data can be replicated to Node B.
If I have configured automatic fail-over, all requests for Node A's data will now go to Node B.
My question is: will I be able to get this new data, which could not be replicated to Node B but was successfully written to Node A's disk, considering that Node A is down and all I have is Node B to communicate with?
If yes, please explain how. If no, is there any official Couchbase doc mentioning this behavior?
I tried looking for an answer in the official documentation, and it mostly looks like the answer is no, but I thought I'd discuss it here before concluding that it's data loss for sure.
Thanks in advance
In the scenario you described, yes, the data will not be available, assuming you didn't verify that it had been successfully replicated. Note, however, that replication will typically complete before persistence, as the network is typically faster than disk.
Couchbase provides an observe API which allows you to verify that a particular mutation has been replicated and/or persisted. See Monitoring data using observe in the Couchbase developer guide.
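With the Java SDK, for example, the observe-based durability requirements can be attached directly to a mutation. A minimal sketch (document ID and contents are placeholders; assumes an already-opened Bucket named bucket):

import com.couchbase.client.java.PersistTo;
import com.couchbase.client.java.ReplicateTo;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

JsonDocument doc = JsonDocument.create("id", JsonObject.create().put("field", "value"));
// Blocks until the write has been persisted on the active node AND
// replicated to at least one replica; throws if that can't be met,
// so an acknowledged write survives the loss of a single node.
bucket.upsert(doc, PersistTo.MASTER, ReplicateTo.ONE);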
I have a small Hadoop/Hive cluster (6 nodes in total).
Using "hadoop dfsadmin -report" I can see that all datanodes are working well and connected.
Additionally, when I add data to a Hive table I can see that the data is being distributed across all the nodes (easy to check, as the disk space used increases).
I am trying to create some indexes on one table. From the JobTracker HTTP interface, I see only one node available. I tried to run multiple queries (I use MySQL for the metastore), but they appear to run only on the node where Hive is installed.
Basically, my question is how to make the JobTracker utilize the other nodes as well.
From what you describe, it looks like:
Datanodes are properly running on all nodes and are able to communicate with the namenode.
Task trackers are not running on any node except one, or are unable to communicate with the JobTracker for some reason.
After checking that the task trackers are indeed running, read their logs to find out why they fail to communicate with the JobTracker; a rough checklist follows below.
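On each worker node (Hadoop 1.x layout assumed; paths are placeholders):

# verify a TaskTracker JVM is running
jps | grep TaskTracker
# if it is not, start it
$HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker
# then check its log for errors connecting to the JobTracker
tail -n 100 $HADOOP_HOME/logs/hadoop-*-tasktracker-*.log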