How to check a particular bucket's capacity in Couchbase?

How can I check a particular bucket's capacity in Couchbase? If it is about to reach its capacity, I want to create a new bucket and start inserting data into that one instead. Can that be done with the Java API?
My question is: if we have a big data set that we need to insert into Couchbase, and the bucket size is not enough, and I am using the REST API to make that happen, is there a way I can do something like:
    if (bucket has reached capacity)
        create another bucket dynamically: createBucket()
        insert the data into this newly created bucket

In addition to @m03geek's answer:
First of all: do not use buckets to manage space. The data directory is the same for all buckets (you can only select one directory for data and another one for indexes, shared by every bucket on the node).
So the limit is the size of your disk. Remember also that the data is distributed across many nodes, so the space available to your database is the sum of the free space on all nodes.
It is also important to remember that Couchbase has two types of files:
data
index
(and some replicas)
All of these files are managed using an append-only approach: the files grow and are then compacted. You can find more information about this here:
http://www.couchbase.com/docs/couchbase-manual-2.1.0/couchbase-admin-tasks-compaction.html
http://blog.couchbase.com/compaction-magic-couchbase-server-20
Also, when you create a bucket you have to set a RAM quota, which limits the size of the cache used to store all the metadata and to cache the values. Once again this is distributed across all nodes of the cluster; for example, if you have a 5-node cluster and you set a 2GB RAM quota for your bucket, you have 10GB of RAM available for this bucket.
This space is managed automatically by Couchbase, which ejects data from RAM (after it has been persisted to disk) when necessary.
Finally, if you are looking for some stats from your cluster you can access many of them using the REST API documented here:
http://www.couchbase.com/docs/couchbase-manual-2.1.0/couchbase-admin-restapi.html
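As a minimal Java sketch of the flow the original question asks for, here is how you might poll a bucket's stats over the REST API and create a new bucket when some usage threshold is crossed. The endpoints (GET /pools/default/buckets/<name> and POST /pools/default/buckets) are from the 2.x REST documentation, but the host, credentials and quota values below are placeholders, and JSON parsing is left out:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class BucketCapacityCheck {

        static final String HOST = "http://localhost:8091"; // placeholder cluster address
        static final String AUTH = Base64.getEncoder()
                .encodeToString("Administrator:password".getBytes(StandardCharsets.UTF_8));

        // GET /pools/default/buckets/<name> returns a JSON document whose
        // "basicStats" object contains memUsed, diskUsed, quotaPercentUsed, etc.
        static String bucketStatsJson(String bucket) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(HOST + "/pools/default/buckets/" + bucket).openConnection();
            conn.setRequestProperty("Authorization", "Basic " + AUTH);
            StringBuilder sb = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                for (String line; (line = in.readLine()) != null; ) sb.append(line);
            }
            return sb.toString(); // parse with your favorite JSON library
        }

        // POST /pools/default/buckets creates a bucket; note that ramQuotaMB is a
        // per-node RAM quota, not a disk limit.
        static void createBucket(String name, int ramQuotaMb) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(HOST + "/pools/default/buckets").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Authorization", "Basic " + AUTH);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            String body = "name=" + name + "&ramQuotaMB=" + ramQuotaMb
                    + "&authType=sasl&saslPassword=";
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("createBucket -> HTTP " + conn.getResponseCode());
        }
    }

That said, as explained above, creating more buckets does not buy you more disk space: all buckets share the same data directory, so the real fix for running out of space is adding disks or nodes, not adding buckets.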

A Couchbase bucket's size is equal to the free disk space. So if you have, e.g., a 1TB HDD and create one bucket, its capacity will be 1TB. If you create 2 buckets or 100 buckets, the capacity of your HDD will not change. So if you run out of bucket space, buy more HDDs. The bucket size that you set in the admin console is how much RAM Couchbase can consume to store frequently requested keys, not how much data it can store on disk.

Related

Where can I find the clear definitions for a Couchbase Cluster, Couchbase Node and a Couchbase Bucket?

I am new to Couchbase and NoSQL terminology. From my understanding, a Couchbase node is a single system running a Couchbase Server application, and a collection of such nodes holding the same data through replication forms a Couchbase Cluster.
Also, a Couchbase Bucket is somewhat like a table in an RDBMS, in which you put your documents. But how can I relate the Node to the Bucket? Can someone please explain it to me in simple terms?
a Node is a single machine (one IP/hostname) that runs Couchbase Server
a Cluster is a group of Nodes that talk together. Data is distributed between the nodes automatically, so the load is balanced. The cluster can also provide replication of data for resilience.
a Bucket is the "logical" entity where your data is stored. It is both a namespace (like a database schema) and a table, to some extent. You can store multiple types of data in a single bucket; it doesn't care what form the data takes as long as it is a key and its associated value (so you can store users, apples and oranges in the same Bucket).
The bucket gives the level of granularity for things like configuration (how much of the available memory do you want to dedicate to this bucket?), replication factor (how many backup copies of each document do you want on other nodes?), password protection...
Note that I said Buckets were a "logical" entity? They are in fact divided into 1024 virtual fragments (vBuckets) which are spread across all the nodes of the cluster (that's how data distribution is achieved).
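To illustrate that distribution mechanism, here is a rough Java sketch of how a client maps a key to one of the 1024 vBuckets. It mirrors the CRC32-based hashing used by the official clients, though the exact folding and masking details may vary between SDK versions:

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    public class VBucketMapping {

        static final int NUM_VBUCKETS = 1024; // fixed per cluster

        // Map a document key to a vBucket id; the cluster map then tells the
        // client which node currently hosts that vBucket.
        static int vbucketId(String key) {
            CRC32 crc = new CRC32();
            crc.update(key.getBytes(StandardCharsets.UTF_8));
            // Clients fold the CRC32 digest and mask it down to the vBucket
            // range (1024 is a power of two, so the mask is NUM_VBUCKETS - 1).
            return (int) ((crc.getValue() >> 16) & 0x7FFF) & (NUM_VBUCKETS - 1);
        }

        public static void main(String[] args) {
            System.out.println("user::42   -> vBucket " + vbucketId("user::42"));
            System.out.println("order::911 -> vBucket " + vbucketId("order::911"));
        }
    }

Because every client keeps an up-to-date cluster map of which node hosts which vBucket, requests go straight to the right node without an intermediate router.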

Can I really persist data on disk using Couchbase?

I have a lot of data that needs to be stored on disk.
Since it is only key-value pairs, I want to use Couchbase for it.
The data is several GB and I have only allocated 1 GB of RAM to the bucket.
I thought RAM in Couchbase was only a cache.
But after inserting a lot of data I got:
Hard Out Of Memory Error. Bucket "test2" on node 100.66.32.169 is full. All memory allocated to this bucket is used for metadata.
when I open the Couchbase web console.
Can Couchbase be used as a database that stores data on disk, or is it RAM-oriented?
Update:
OK, let me make the question more specific:
In Couchbase:
1. If I allocate the RAM of a bucket to be 1 GB, can I store 10 GB of data in that bucket?
2. If 1 is possible, can I consider the 1 GB of RAM a kind of cache for the 10 GB of data (just as a CPU's L2 cache is a cache for RAM)?
By default, Couchbase stores all keys (and some metadata) in RAM and fills whatever remains with values. Starting with version 3.0, you can set your bucket to full-eviction mode, which only keeps the keys of cached documents in RAM. This lets you store much more data than you have memory, but at a performance cost for some read operations, especially attempts to retrieve keys that don't exist.
To solve your specific problem, edit the bucket and set it to full eviction. Note that this will restart the bucket.
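As a sketch, the switch can also be made over the REST API by posting evictionPolicy=fullEviction to the bucket. The parameter name is from the 3.0-era REST documentation; the host, bucket name and credentials below are placeholders, and depending on the server version you may need to re-submit other bucket settings (such as ramQuotaMB) in the same request:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class SetFullEviction {
        public static void main(String[] args) throws Exception {
            String auth = Base64.getEncoder()
                    .encodeToString("Administrator:password".getBytes(StandardCharsets.UTF_8));
            // POST /pools/default/buckets/<bucket> edits an existing bucket.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8091/pools/default/buckets/test2").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            try (OutputStream out = conn.getOutputStream()) {
                out.write("evictionPolicy=fullEviction".getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode()); // expect 200; the bucket restarts
        }
    }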
Couchbase tries to keep as much of the "live dataset" (i.e. the most used / requested keys) as possible in the nodes' memory. This is key to the performance of the database and part of its design, so good memory sizing of your nodes and quotas of your buckets is essential.
It does offer persistence, but I'd say it is not a disk-first oriented database.
Persistence to disk is mainly for two things: making data durable and resilient to node shutdown (of course), and off-loading data (least-used data first) from RAM to disk.
I think you're asking a bunch of different questions here.
Specifically about the error message: looks like your bucket is simply too small to hold all the data you're storing in it.
About persisting to disk: you can force Couchbase to write to disk (and even configure the number of nodes that docs are replicated to), but as noted above, that would probably hurt your performance a little.
Have a look, for example, at the persist_to flag in the set() API of the Couchbase client for Python.
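For comparison, here is a sketch of the same idea with the 2.x Java SDK, where the durability requirement is expressed through the PersistTo argument on mutation operations (the host, bucket and document below are made up):

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.Cluster;
    import com.couchbase.client.java.CouchbaseCluster;
    import com.couchbase.client.java.PersistTo;
    import com.couchbase.client.java.document.JsonDocument;
    import com.couchbase.client.java.document.json.JsonObject;

    public class PersistToExample {
        public static void main(String[] args) {
            Cluster cluster = CouchbaseCluster.create("localhost");
            Bucket bucket = cluster.openBucket("test2");

            JsonDocument doc = JsonDocument.create("user::1",
                    JsonObject.create().put("name", "alice"));

            // Block until the document has been written to disk on the master
            // node, not just accepted into the managed cache.
            bucket.upsert(doc, PersistTo.MASTER);

            cluster.disconnect();
        }
    }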

Couchbase RAM quota and vBucket detail questions

I have a cluster which includes three nodes. We created a bucket on it and set the number of bucket replicas to 2. The RAM quota is set to 10G per node; that is, the total RAM quota is 30G.
I used the client side to save data into this bucket. Hours later, the client printed Temporary failure error, and the Couchbase web console showed that the bucket's RAM usage had reached 29G. I repeatedly triggered compaction, but RAM usage did not decrease.
My questions are organized as follows.
1. I guess the keys in a bucket can only be saved in RAM, not in hardware. Right or wrong?
2. Is the 29G of data that cannot be flushed to disk made up of keys?
3. Is the replica information each node saves of other nodes stored on disk? If not, how is it stored?
4. Every time the client saves data, it uses a hash function to compute the vBucket, in order to determine which node the data will be saved to. Is this process carried out on the client side?
In response to your specific questions:
1. I guess the keys in a bucket can only be saved in RAM, not in hardware. Right or wrong?
If by hardware you mean disk, then yes: currently Couchbase must hold all document keys (along with some additional metadata) in RAM. This is to ensure that any request for a key can be answered immediately, both in the positive ("yes, this key exists and here's its value") and the negative ("no, such a key doesn't exist").
2. Is the 29G of data that cannot be flushed to disk made up of keys?
Some of this is probably the metadata. If you go to the Bucket tab and display a bucket's statistics by clicking on its name, you can see the amount of memory used; specifically, look under the VBucket Resources section to see how much is used for metadata and for user data. See the Couchbase Admin Guide - Viewing Bucket and cluster statistics for more details.
3. Is the replica information each node saves of other nodes stored on disk? If not, how is it stored?
The replica metadata is also always kept in RAM, but replica values (like active values) can be ejected to disk to free up memory.
4. Every time the client saves data, it uses a hash function to compute the vBucket, in order to determine which node the data will be saved to. Is this process carried out on the client side?
Yes, the vBucket hashing is done on the client - see the Architecture and Concepts - vBuckets section in the Admin guide.
In general, you may want to review the Sizing chapter in the Admin guide to determine how much of your memory is being used for storing key metadata - specifically the Memory Sizing section. The exact calculation depends on the version of Couchbase (so I won't duplicate it here).

Does Couchbase actually support datasets larger than memory?

Couchbase documentation says that "Disk persistence enables you to perform backup and restore operations, and enables you to grow your datasets larger than the built-in caching layer," but I can't seem to get it to work.
I am testing Couchbase 2.5.1 on a three node cluster, with a total of 56.4GB memory configured for the bucket. After ~124,000,000 100-byte objects -- about 12GB of raw data -- it stops accepting additional puts. 1 replica is configured.
Is there a magic "go ahead and spill to disk" switch that I'm missing? There are no suspicious entries in the error log.
It does support data greater than memory - see Ejection and working set management in the manual.
In your instance, what errors are you getting from your application? When memory usage reaches the high water mark, items need to be ejected from memory (down to the low water mark) to make room for newer items.
Depending on the disk speed and the rate of incoming items, this can result in TEMP_OOM errors being sent back to the client, telling it to temporarily back off before retrying the operation, but these should generally be rare in most instances. Details on handling them can be found in the Developer Guide.
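As a sketch of the kind of back-off handling the Developer Guide describes, here is what it might look like with the 2.x Java SDK, which surfaces this condition as a TemporaryFailureException (the retry count and delays below are arbitrary):

    import com.couchbase.client.java.Bucket;
    import com.couchbase.client.java.document.JsonDocument;
    import com.couchbase.client.java.error.TemporaryFailureException;

    public class BackoffWriter {

        // Retry a write with exponential back-off when the server reports a
        // temporary out-of-memory condition (it needs time to eject items to disk).
        static void setWithBackoff(Bucket bucket, JsonDocument doc) throws InterruptedException {
            long delayMs = 50;
            for (int attempt = 0; attempt < 8; attempt++) {
                try {
                    bucket.upsert(doc);
                    return;
                } catch (TemporaryFailureException e) {
                    Thread.sleep(delayMs); // give the node time to free memory
                    delayMs = Math.min(delayMs * 2, 2000);
                }
            }
            throw new IllegalStateException("bucket still rejecting writes after retries");
        }
    }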
My guess would be that it's not the raw data that is filling up your memory, but the metadata associated with it. Couchbase 2.5 needs 56 bytes per key, so in your case that would be approximately 7GB of metadata - much less than your memory quota.
But... metadata can become fragmented in memory. If you batch-inserted all 124M objects in a very short time, I would assume you ended up with at least 90% fragmentation. That means that with only 7GB of useful metadata, the space required to hold it has filled up your RAM, with lots of unused space in each allocated block.
The solution to your problem is to defragment the data. This can either be done manually or triggered as needed:
manually: by triggering a compaction of the bucket from the web console or through the REST API;
automatically: by configuring the bucket's (or the cluster's) auto-compaction thresholds.
If you need more insights about why compaction is needed, you can read this blog article from Couchbase.
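For the manual route, here is a sketch of triggering a bucket compaction through the REST API (the endpoint is from the 2.x REST reference; the host, bucket name and credentials are placeholders):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class CompactBucket {
        public static void main(String[] args) throws Exception {
            String auth = Base64.getEncoder()
                    .encodeToString("Administrator:password".getBytes(StandardCharsets.UTF_8));
            // POST /pools/default/buckets/<bucket>/controller/compactBucket
            HttpURLConnection conn = (HttpURLConnection) new URL(
                    "http://localhost:8091/pools/default/buckets/default/controller/compactBucket")
                    .openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Basic " + auth);
            System.out.println("HTTP " + conn.getResponseCode()); // 200 = compaction started
        }
    }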
Even if none of your documents is stored in RAM, Couchbase still stores all the document IDs and metadata in memory (this changes in version 3.0), and it also needs some available memory to run efficiently. The relevant section in the docs:
http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#memory-quota
Note that when you use a replica you need twice as much RAM. The formula is roughly:
(56 + avg_size_of_your_doc_ID) * nb_docs * 2 (replica) * (1 + headroom) / (high_water_mark)
So depending on your configuration, it's quite possible that 124,000,000 documents require 56GB of memory.
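Plugging the question's numbers into that formula shows how. Only the document count below comes from the question; the other inputs (document ID size, headroom, high water mark) are illustrative assumptions:

    public class MetadataSizing {
        public static void main(String[] args) {
            long nbDocs = 124_000_000L;  // from the question above
            int metadataPerKey = 56;     // bytes per key in Couchbase 2.5
            int avgDocIdSize = 100;      // assumed; not stated in the question
            int replicas = 1;            // one replica => 2 copies of each item
            double headroom = 0.25;      // assumed sizing headroom
            double highWaterMark = 0.85; // assumed high water mark

            double bytes = (metadataPerKey + avgDocIdSize) * (double) nbDocs
                    * (replicas + 1) * (1 + headroom) / highWaterMark;
            System.out.printf("estimated RAM: %.1f GB%n", bytes / 1e9);
            // => roughly 57 GB, in line with the ~56 GB quota being exhausted
        }
    }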

If the Couchbase key size is greater than the RAM quota, how can I move some keys from RAM to disk?

For example: I have a huge amount of data I want to save in a bucket. The total size of all keys is 4G, and the bucket's RAM quota is 3G. Can I still save the data in the bucket? Is there some mechanism by which keys that do not fit in RAM overflow to disk, and how can I enable it?
This isn't possible (as of Couchbase 2.5.1). Currently, metadata (including the key) has to be held in RAM, to allow client requests to quickly determine whether a key exists.
Therefore, in your example you wouldn't be able to store more than 3GB of keys (the bucket quota). Note that even then you'd have no RAM left for the actual document values, so they would always have to be read from disk.
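To make that concrete, here is a quick back-of-the-envelope sketch using the 56-bytes-per-key figure from the sizing discussion above (the average key length is an assumption):

    public class MaxKeysForQuota {
        public static void main(String[] args) {
            long quotaBytes = 3L * 1024 * 1024 * 1024; // 3 GB bucket RAM quota
            int metadataPerKey = 56;                   // per-key overhead (Couchbase 2.5)
            int avgKeySize = 32;                       // assumed average key length

            long maxKeys = quotaBytes / (metadataPerKey + avgKeySize);
            System.out.println("absolute ceiling: ~" + maxKeys + " keys");
            // ~36.6 million keys, and that leaves no RAM at all for cached
            // values or runtime overhead, so the practical limit is far lower.
        }
    }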