I know it's not possible with Couchbase 2.2, but is it possible to change the number of replicas in 2.5?
Thank you
Yes, you can change the replica count in 2.5 from the web console. The steps are listed below.
Click on the Data Buckets link.
Click the arrow to the left of the bucket name to expand the bucket details.
Click the Edit button.
The replica count appears in the Replicas section. Change the quantity there.
Click Save.
Click the Server Nodes link. After a short time (refresh if necessary), you will see a red message indicating that a rebalance is required. Rebalance your cluster from the button on that page. A rebalance is required so that Couchbase can distribute the new set of replica documents across the cluster.
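If you would rather script the change than click through the console, the bucket settings can also be rewritten through an SDK. Below is a minimal sketch using the ClusterManager API of the 2.x Java SDK (host, credentials, bucket name, and quota are placeholders, and you should check SDK/server version compatibility first); note that updateBucket rewrites the whole settings object, and the same rebalance is still required afterwards:

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.cluster.ClusterManager;
import com.couchbase.client.java.cluster.DefaultBucketSettings;

public class ChangeReplicaCount {
    public static void main(String[] args) {
        // Placeholder host, credentials, and bucket name -- substitute your own.
        Cluster cluster = CouchbaseCluster.create("127.0.0.1");
        ClusterManager manager = cluster.clusterManager("Administrator", "password");

        // Rewrite the bucket settings with a new replica count.
        manager.updateBucket(DefaultBucketSettings.builder()
                .name("default")
                .quota(100)     // RAM quota in MB; keep your existing value
                .replicas(2)    // the new replica count
                .build());

        cluster.disconnect();
        // A rebalance is still required after this, exactly as in the console flow.
    }
}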
You can also find info about a 'working but not officially supported' way to change the settings in 2.2 at https://groups.google.com/forum/#!topic/couchbase/ClqBDavQIkk.
I am trying to set up a custom dashboard for my Compute Engine instances. One of the metrics that I want to report on is the amount of free disk space available on each VM. I noticed that "disk bytes used" is one of the available metrics, but it is not actually available for me to select unless I disable the "Only show active" filter.
I have the "OS Agent" (recently released) installed and running on the VMs.
I can't seem to find any documentation referencing this particular metric and how to get it working.
Has anyone tried this and figured out the magic solution?
Here is what I did in order to get the metrics working in a replicated environment:
1. I created 2 GCE instances (Debian and Red Hat).
2. I navigated to the Monitoring section and selected Dashboards.
3. I selected the VM Instances dashboard from the dashboard list.
4. From the Instances section, I selected both instances and clicked Install Agents; this opens a Cloud Shell VM and auto-populates the command to install the Ops Agent.
5. You might need to wait up to 10 minutes for the agents to connect to the Monitoring dashboard.
6. Once you see the Ops Agent running on the instances, select the Infrastructure Summary dashboard.
7. Scroll down the dashboard and you will see the Top Disk Used (Agent) section populated.
If you prefer, you can also create a custom Dashboard.
1. On the left panel, navigate to the Metrics Explorer section.
2. For the resource type, select VM Instance (gce_instance) and, at the bottom, unselect the "Only show active" checkbox.
3. In the Metric dropdown menu, select Disk usage and again unselect the "Only show active" checkbox.
4. Wait at least 1 minute for the chart to populate.
Here is the full list of metrics accepted for gce_instance.
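If you would rather pull these values programmatically than view them in the console, something like this sketch should work with the Cloud Monitoring Java client. The project ID is a placeholder, and the metric type used here (agent.googleapis.com/disk/bytes_used) is an assumption to verify against that metrics list:

import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.util.Timestamps;

public class DiskUsageQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder project ID -- substitute your own.
        ProjectName project = ProjectName.of("my-project-id");

        long now = System.currentTimeMillis();
        TimeInterval interval = TimeInterval.newBuilder()
                .setStartTime(Timestamps.fromMillis(now - 3_600_000L)) // last hour
                .setEndTime(Timestamps.fromMillis(now))
                .build();

        ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
                .setName(project.toString())
                // Assumed Ops Agent metric type; confirm it in the metrics list.
                .setFilter("metric.type=\"agent.googleapis.com/disk/bytes_used\"")
                .setInterval(interval)
                .build();

        try (MetricServiceClient client = MetricServiceClient.create()) {
            for (TimeSeries ts : client.listTimeSeries(request).iterateAll()) {
                System.out.println(ts.getResource().getLabelsMap().get("instance_id")
                        + ": " + ts.getPointsCount() + " points");
            }
        }
    }
}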
I have a deployment with 2 pods of a web application. The web application requires a logon and maintains a session.
After I kill the first pod, I am automatically redirected to the logon page of the second pod, but when the first pod comes back up I am redirected back to it.
I have tried using the HAProxy "balance source" algorithm and cookies.
Any idea why it doesn't stay with the second pod?
balance source uses a hashing algorithm that changes the workload distribution every time the number of available backends changes, because that is what it's designed to do. If you had more than 2 backends, you would also find that taking down any one backend will cause some traffic that wasn't even hitting the impacted backend to shift to another, because of this redistribution.
If the hash result changes due to the number of running servers changing, many clients will be directed to a different server.
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4-balance
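To make the redistribution concrete, here is a small self-contained sketch (plain Java, not HAProxy's actual implementation) of why a map-based source hash moves clients around when the backend count changes:

import java.util.Arrays;
import java.util.List;

public class SourceHashDemo {
    // A map-based source hash picks a backend as hash(source) % backend_count,
    // which is the scheme "balance source" uses by default.
    static int pick(String clientIp, int backendCount) {
        return Math.floorMod(clientIp.hashCode(), backendCount);
    }

    public static void main(String[] args) {
        List<String> clients = Arrays.asList("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4");
        for (String c : clients) {
            // When the backend count drops from 3 to 2, the modulus changes,
            // so many clients land on a different backend -- including clients
            // whose original backend is still healthy.
            System.out.printf("%s -> backend %d (3 up) vs backend %d (2 up)%n",
                    c, pick(c, 3), pick(c, 2));
        }
    }
}

Consistent hashing (hash-type consistent) shrinks this movement, and cookie-based persistence sidesteps the hash entirely for returning clients.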
For an explanation of why you didn't see the expected behavior when using cookies instead of balance source, we'd need to see your configuration.
I've set up a Couchbase cluster with 2 nodes containing 300k docs across 4 buckets. The replicas option is forced to 1, as there are only 2 machines.
But the documents are split half on one node and half on the other. I need a full copy of each document on both nodes, so that if one node goes down the other can still supply all the data to my app.
Is there a setting I missed in creating the cluster?
Can I still set the cluster to replicate all documents?
I hope someone can help.
Thanks.
PS: I'm using Couchbase Community 4.5.
UPDATE:
I've added screenshots of the cluster web interface and cbstats output:
The following is the state with one node only.
Next, the state with both nodes up:
Then, the cbstats results on both nodes when both are up and running:
As you can see, with only one node up, half of the items are displayed. Does that mean the other half resides as replicas but is not shown?
Can I still run my app consistently with only one node?
UPDATE:
I had to click fail-over manually to see the replicas become active on the remaining node, because with a cluster of just two nodes auto fail-over is disabled!
Couchbase Server will partition or shard the documents across the two nodes, as you observed. It will also place replicas on those nodes, based on your one-replica configuration.
To access a replica, you must use one of the Client SDKs.
For example, this Java code will attempt to retrieve a replica (getFromReplica("id", ReplicaMode.ALL)) if retrieval of the active document (get("id")) fails.
bucket.async()
    .get("id")  // try the active copy first
    // if that fails, fall back to whatever replica copies can be reached
    .onErrorResumeNext(bucket.async().getFromReplica("id", ReplicaMode.ALL))
    .subscribe();
ReplicaMode.ALL tells the SDK to try all of the nodes holding replicas, as well as the active node.
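For completeness, a blocking version might look like the following sketch, assuming the same 2.x Java SDK. Here getFromReplica returns a list of every copy it could reach, so we simply take the first:

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.ReplicaMode;
import com.couchbase.client.java.document.JsonDocument;
import java.util.List;

public class ReplicaFallbackRead {
    // Sketch: read the active copy, falling back to replicas if the read fails.
    static JsonDocument getWithFallback(Bucket bucket, String id) {
        try {
            return bucket.get(id);  // active copy first
        } catch (Exception e) {
            // getFromReplica returns every copy it could reach; take the first.
            List<JsonDocument> copies = bucket.getFromReplica(id, ReplicaMode.ALL);
            return copies.isEmpty() ? null : copies.get(0);
        }
    }
}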
So what was happening was that, with only two nodes in the cluster, auto fail-over didn't start automatically, as specified here:
https://developer.couchbase.com/documentation/server/current/clustersetup/automatic-failover.html
This means the data replicas were not activated on the remaining node unless fail-over was triggered manually.
The best thing is to have more than two nodes in the cluster before going into production.
To be honest, I should have read the documentation carefully before asking any questions.
Thanks, Jeff Kurtz, for your help; you pushed me towards the solution (the understanding of how the Couchbase replica policy works).
So I am facing this problem whereby, whenever I stop my MySQL server (which is running on an EC2 free-tier micro instance), my non-root users' passwords get changed by themselves!
I need to reset their respective passwords every time I stop and reboot my MySQL EC2 instance.
See the following screenshot:
Perform the Image / Create Image functionality. Give it a meaningful image name and description. For the description, help yourself later by being as verbose as possible, like "from 20160401 build plus Scala 2.12 and vsftpd configured". The request to save the custom AMI will be received and may take a short time to complete. Typically, when you are just starting with small instances, it will complete in a few minutes. When completed, it will be visible in the left pane under Images / AMIs.
See the AWS manual page entitled Step 3: Deploy Your App, specifically the section "Create a Custom AMI" at the bottom.
In short, without saving your work and the current state of your server as an image, all work is lost on a stop and reboot. You also need to manage, clean up, and discard prior AMIs that would otherwise cause confusion later; that is why the description field is your best friend. Naturally, only discard things that are not of value.
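If you want to automate that snapshot step, the same operation is available through the EC2 API. Here is a minimal sketch with the AWS SDK for Java (v1); the instance ID, image name, and description are placeholders:

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateImageRequest;
import com.amazonaws.services.ec2.model.CreateImageResult;

public class SaveAmi {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Placeholder instance ID, name, and description -- substitute your own.
        CreateImageRequest request = new CreateImageRequest()
                .withInstanceId("i-0123456789abcdef0")
                .withName("mysql-20160401-build")
                .withDescription("from 20160401 build plus Scala 2.12 and vsftpd configured");

        CreateImageResult result = ec2.createImage(request);
        System.out.println("New AMI id: " + result.getImageId());
    }
}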
I attempted to start a VM instance using a predefined disk in zone europe-west1-a. I have been using the disk for a number of weeks. The VM startup never completed (the start activity did not complete and the instance never appeared in the VM list, so presumably the VM failed to start up).
When I tried to start the VM a second time, the disk was no longer available. The disk is also not listed under the "Disks" tab of compute engine.
I have the bronze support package, so I can't create a ticket with Google.
Any suggestions on what to do?
You should send a question about this using the grey "Send feedback" link at the bottom right of Developers Console page. This may require looking at the logs for your specific project/account and is not something that we can solve here on StackOverflow.