Couchbase document size limit - couchbase

I saw that the Couchbase document size limit is 20 MB.
So if I have a user document, and that user has maybe 1,000 products, and the document grows larger than 20 MB, how do I handle this situation?
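The thread doesn't include an answer for this, but one common way to stay under the per-document limit is to stop embedding the whole product list in the user document and instead store each product as its own small document that references the user. Below is a minimal sketch with the Couchbase Java SDK 2.x; the host, bucket name, key scheme and fields are assumptions for illustration, not anything from the original question.

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.JsonDocument;
import com.couchbase.client.java.document.json.JsonObject;

public class SplitUserProducts {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("localhost"); // hypothetical host
        Bucket bucket = cluster.openBucket("default");          // hypothetical bucket name

        String userId = "user::1001"; // hypothetical key scheme

        // Small user document: profile data only, nothing that grows without bound.
        bucket.upsert(JsonDocument.create(userId,
                JsonObject.create().put("type", "user").put("name", "Alice")));

        // Each product lives in its own document, keyed so it can be traced back to the user.
        for (int i = 0; i < 1000; i++) {
            String productId = "product::1001::" + i;
            bucket.upsert(JsonDocument.create(productId,
                    JsonObject.create()
                            .put("type", "product")
                            .put("userId", userId)
                            .put("name", "product " + i)));
        }

        cluster.disconnect();
    }
}
```

With this layout no single document ever approaches the 20 MB limit, and the products for a user can be fetched by key pattern or via a query/index on the `userId` field.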

Related

How to increase sqflite table row length?

Devs!
I have a fairly common problem with the sqflite database. We receive a large amount of data from the server, so we use sqflite to store that data locally.
But since, as mentioned, the data set is large, we get the following errors when we read it back from the database:
W/CursorWindow(15261): Window is full: requested allocation 1310926 bytes, free space 904042 bytes, window size 2097152 bytes
E/SQLiteQuery(15261): exception: Row too big to fit into CursorWindow required Pos=0, totalRows=1; query: SELECT * FROM my_package
As far as I can tell, we are retrieving more than 1 MB of data from the table in a single query, and because the whole result has to fit into the CursorWindow (2,097,152 bytes, as the log shows), we are facing this issue.
So my question is: how do we increase this limit in sqflite?
I believe there is no way to increase the CursorWindow size limit, as mentioned here: an issue raised on the sqflite tracker has a comment saying the limit cannot be changed.
Feel free to try any of the solutions suggested in this Stack Overflow answer, which lists various ways to work around the issue in the question.
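The usual workarounds boil down to selecting only the columns you actually need and reading the table in pages, so no single query result has to fit into one CursorWindow. The question is about sqflite (Dart), but the CursorWindow comes from Android's SQLite layer, so here is a sketch of the same idea in Java against Android's SQLiteDatabase; the `name` column and the page size are assumptions, and the approach translates directly to sqflite.

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

import java.util.ArrayList;
import java.util.List;

public class PackagePager {

    /**
     * Reads the table in pages with LIMIT/OFFSET so no single result set
     * has to fit into one CursorWindow. Selecting only the needed columns
     * (instead of SELECT *) keeps each row small as well.
     */
    public static List<String> loadNames(SQLiteDatabase db) {
        final int pageSize = 200;  // assumed page size; tune for your row size
        List<String> names = new ArrayList<>();
        int offset = 0;
        while (true) {
            Cursor cursor = db.rawQuery(
                    "SELECT name FROM my_package LIMIT ? OFFSET ?",
                    new String[]{String.valueOf(pageSize), String.valueOf(offset)});
            try {
                if (!cursor.moveToFirst()) {
                    break;  // no more rows
                }
                do {
                    names.add(cursor.getString(0));
                } while (cursor.moveToNext());
            } finally {
                cursor.close();
            }
            offset += pageSize;
        }
        return names;
    }
}
```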

Why do databases limit row/value size?

I have been reading about databases, and it looks like most of them impose a limit on the size of a value (key-value/document stores*) or on the size of a row (relational databases*). I understand the limit on the size of the key/primary key: it increases the branching factor of the B-tree, so that each B-tree node can be fetched in one read of a file-system block.

For values, I assumed the keys store just a pointer to the file containing the value, which would allow values to be arbitrarily large. Is that pointer scheme used only for TEXT/BLOB-type data, while other values are stored inside the B-tree node itself? Storing values in the B-tree node only saves one I/O (the read of the file the pointer refers to), so the size restriction seems like a lot to pay for that trade-off.
References:
Limit on mysql: https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.7/en/column-count-limit.html
Limit on dynamodb: Maximum size of DynamoDB item
Cursor-based result-set traversal is one reason I'd suggest: DB clients never fetch half a row at a time, so if there were no limit on row size, the client-side library would have to be prepared to receive arbitrarily long binary streams, which makes it much harder to design a wire protocol for client/server communication that is both efficient and correct.
But I don't think that's the whole story; many other concerns count as well.
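To make the inline-versus-pointer question above concrete, here is a toy sketch of the decision a storage engine makes: values that fit comfortably in a fixed-size page stay inside the leaf record, larger values are written elsewhere and only a reference is kept inline. The threshold and record layout are made up for illustration; real engines (e.g. InnoDB's off-page BLOB storage) are far more involved.

```java
import java.nio.charset.StandardCharsets;

class LeafRecord {
    static final int PAGE_SIZE = 16 * 1024;
    static final int INLINE_LIMIT = PAGE_SIZE / 2;   // "about half a page", as in InnoDB's rule of thumb

    final long key;
    final byte[] inlineValue;       // non-null when the value fits inline
    final Long overflowPageId;      // non-null when the value was pushed off-page

    private LeafRecord(long key, byte[] inlineValue, Long overflowPageId) {
        this.key = key;
        this.inlineValue = inlineValue;
        this.overflowPageId = overflowPageId;
    }

    static LeafRecord insert(long key, String value, OverflowStore store) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= INLINE_LIMIT) {
            // Reading this row later costs one I/O: just the leaf page.
            return new LeafRecord(key, bytes, null);
        }
        // Reading this row later costs an extra I/O to follow the pointer.
        long pageId = store.write(bytes);
        return new LeafRecord(key, null, pageId);
    }

    interface OverflowStore {
        long write(byte[] value);   // returns the page id where the value was stored
    }
}
```

The row-size limit essentially bounds how much data the engine ever has to keep inline, which in turn keeps leaf pages densely packed with rows.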

circular buffer "requestBufferSize" of couchbase

What is requestBufferSize in Couchbase 2.0? The docs say it is a circular buffer for I/O, but is it a number of key-value pairs or a size of key-value pairs? If it is a size, is it in KB or MB?
I think it is the size in bytes. In several discussions/bug reports I've seen it described as the "default 16K buffer".
Now looking at the description:
Default: 16384. The size of the request ring buffer where all request initially are
stored and then picked up to be pushed onto the I/O threads. Tuning
this to a lower value will more quickly lead to BackpressureExceptions
during overload or failure scenarios. Setting it to a higher value
means backpressure will take longer to occur, but more requests will
potentially be queued up and more heap space is used.
You can see that it is referred to as the "default 16K buffer".
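For reference, the setting is exposed on the SDK's environment builder. Here is a minimal sketch with the Couchbase Java SDK 2.x; the host, bucket name and the value 32768 are assumptions for illustration, and the ring buffer size should stay a power of two.

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class TunedEnvironment {
    public static void main(String[] args) {
        // Raise the request ring buffer from the default of 16384.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .requestBufferSize(32768)   // illustrative value
                .build();

        Cluster cluster = CouchbaseCluster.create(env, "localhost"); // hypothetical host
        Bucket bucket = cluster.openBucket("default");               // hypothetical bucket

        // ... use the bucket ...

        cluster.disconnect();
        env.shutdown();
    }
}
```

As the quoted description notes, a larger value delays BackpressureExceptions under overload at the cost of more queued requests and more heap usage.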

Why is the Couchbase document size limit just 2.5KB?

The default document size allowed is just 2.5 KB. What is the reason for that? I found an answer explaining how to increase it, but are documents normally smaller than 2.5 KB?
The 2.5 KB size limit applies only to CRUD operations performed through the web console, not to REST-based CRUD operations.
To increase the limit, open the file
/opt/couchbase/lib/ns_server/erlang/lib/ns_server/priv/public/js/documents.js
and search for docBytesLimit; its value will be 2500. Change it to the value you want.

MySQL: What is a page?

I can't for the life of me remember what a page is, in the context of a MySQL database. When I see something like 8KB/page, does that mean 8KB per row or ...?
Database pages are the basic internal structure used to organize the data in the database files. Below is some information about the InnoDB model:
From 13.2.11.2. File Space Management:
The data files that you define in the configuration file form the InnoDB tablespace. The files are logically concatenated to form the tablespace. [...] The tablespace consists of database pages with a default size of 16KB. The pages are grouped into extents of size 1MB (64 consecutive pages). The “files” inside a tablespace are called segments in InnoDB.
And from 13.2.14. Restrictions on InnoDB Tables
The default database page size in InnoDB is 16KB. By recompiling the code, you can set it to values ranging from 8KB to 64KB.
Further, to put rows in relation to pages:
The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a database page. That is, the maximum row length is about 8000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.
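To connect the quotes above: the ~8000-byte row limit is simply "slightly less than half" of the default 16 KB page, so at least two rows fit on each B-tree page. A quick way to check the page size on a running server is shown below; this is a sketch using JDBC, and the connection URL and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PageSizeCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'innodb_page_size'")) {
            if (rs.next()) {
                long pageSize = rs.getLong("Value");
                // With the default 16384-byte page, "slightly less than half a page"
                // works out to roughly 8000 bytes of non-BLOB/TEXT row data.
                System.out.println("InnoDB page size: " + pageSize + " bytes");
                System.out.println("Approx. max inline row length: ~" + (pageSize / 2) + " bytes");
            }
        }
    }
}
```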
Well,
it's not really a question about MySQL; it's more about what a page is in general, in memory management.
You can read about that here: http://en.wikipedia.org/wiki/Page_(computer_memory)
Simply put, it's the smallest unit of data that is exchanged/stored.
The default page size is 4 KB, which is probably fine.
If you have large data sets, or only very few write operations, raising the page size may improve performance.
Have a look here: http://db.apache.org/derby/manuals/tuning/perf24.html
Why? Because more data can be fetched/addressed at once.
If the probability is high that the data you want next is in close proximity to the data you just fetched (or directly after it), it can be fetched along in the same operation, which takes better advantage of caching and prefetching, particularly on your hard drive.
On the other hand, you waste space if your data doesn't fill up a page, or exceeds it by just a little.
I personally never had a case where tuning the page size was important. There were always better approaches to optimize performance, and if not, it was already more than fast enough.
It's the unit size in which data is stored, read, and written, both on disk and in memory.
Different page sizes might work better or worse for different workloads/data sets; i.e. sometimes you might want more rows per page, or fewer rows per page. Having said that, the default page size is fine for the majority of applications.
Note that "pages" aren't unique to MySQL; the concept (and a page-size parameter) exists in essentially all databases.