Why is the Couchbase document size limit just 2.5KB?

The default document size allowed is just 2.5KB. What is the reason for that? I found an answer on how to increase it, but are documents normally smaller than 2.5KB?

The 2.5KB size limit applies only to CRUD operations performed through the web console, not to REST-based CRUD operations.
To increase the limit, open the file
/opt/couchbase/lib/ns_server/erlang/lib/ns_server/priv/public/js/documents.js
and search for docBytesLimit; its value will be 2500. Change it to the value you want.
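As a sanity check that the limit is console-only, here is a minimal sketch using the Couchbase Python SDK (the bucket name and credentials are assumptions; adjust them to your setup) that stores a document far larger than 2.5KB:

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Assumed connection details -- replace with your own.
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)
collection = cluster.bucket("default").default_collection()

# A ~10KB document: far over the 2.5KB web-console editing limit,
# but stored without complaint when going through the SDK.
doc = {"payload": "x" * 10_000}
collection.upsert("big-doc", doc)

print(len(collection.get("big-doc").content_as[dict]["payload"]))  # 10000
```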

Related

Does SSRS have a page limit when rendering?

I have a dataset of 4 million records. Ideally a page should contain 40 rows maximum, so is it possible to generate 100,000+ pages of data using SSRS?
I waited for 2 hours and it still had not been generated. Does SSRS have any limit?
This is taken from the SSRS documentation:
When you run a report, report size is equal to the amount of data that is returned in the report plus the size of the output stream. Reporting Services does not impose a maximum limit on the size of a rendered report. System memory determines the upper limit on size (by default, a report server uses all available configured memory when rendering a report), but you can specify configuration settings to set memory thresholds and memory management policies.
However, I would question the usefulness of a report with 100,000+ pages. Who would look at it?
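For reference, the memory thresholds mentioned above live in RSReportServer.config. A hedged illustration, with made-up values rather than recommendations:

```xml
<!-- Fragment of RSReportServer.config; values are illustrative only. -->
<!-- MemorySafetyMargin / MemoryThreshold are percentages of WorkingSetMaximum. -->
<MemorySafetyMargin>80</MemorySafetyMargin>
<MemoryThreshold>90</MemoryThreshold>
<!-- WorkingSetMaximum / WorkingSetMinimum are in kilobytes. -->
<WorkingSetMaximum>4000000</WorkingSetMaximum>
<WorkingSetMinimum>2400000</WorkingSetMinimum>
```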

What should I set for 'json.maxItemsComputed' in VS Code?

The default is 5000 symbols. Is this dependent on the machine's performance, or something else?
From VS Code settings:
JSON: Max Items Computed
The maximum number of outline symbols and folding regions computed (limited for performance reasons).
This relates to the GUI buttons in the editor that you can use to fold down the JSON tree.
If there are more items than the maximum, the routine that prepares those buttons will just give up, and the buttons won't be displayed. I've experienced this with a package-lock.json in a project with lots of dependencies.
A reasonable approach for this would be:
Leave it at the default
If you encounter a JSON file that doesn't show the folding controls, consider whether you actually need them:
if you don't need them - no problem
if you do need them - increase the limit (perhaps temporarily)
if you then experience performance problems editing JSON files, reduce the limit
To avoid performance issues with large JSON files, JSON language support now has an upper limit on the number of folding regions and document symbols it computes (for the Outline view and breadcrumbs). By default, the limit is 5000 items, but you can change it with the setting json.maxItemsComputed.
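If you do need to raise it, it's a one-line change in your settings.json (10000 below is just an arbitrary example):

```json
{
    // settings.json -- raise the outline/folding computation cap.
    // The default is 5000; pick whatever your machine handles comfortably.
    "json.maxItemsComputed": 10000
}
```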

Why do databases limit row/value size?

I have been reading about databases, and it looks like most of them impose a limit on the size of a value (key-value/document stores) or the size of a row (relational databases). I understand the limitation on the size of the key/primary key: it increases the branching factor of the BTree, so that each BTree node can be fetched in a single read of a file-system block. For values, though, I assumed the keys store just a pointer to the file containing the value, which would allow values to be arbitrarily large. Is the pointer approach used only for text/blob data, with all other values stored in the BTree node itself? Storing values in the BTree node only saves a single IO (the one needed to follow the pointer and start reading the file), so the size restriction seems like a lot to give up for that trade-off.
References:
Limit in MySQL: https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.7/en/column-count-limit.html
Limit in DynamoDB: Maximum size of DynamoDB item
Consider cursor-based result-set traversal: DB clients won't fetch half a row at a time, so if there were no limit on row size, the client-side library would have to be prepared to handle arbitrarily long binary streams, which obviously makes it much harder to design an efficient yet correct wire protocol for client/server communication.
But I don't think that's the whole story; many other concerns come into play as well.
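To make the branching-factor argument from the question concrete, here is a rough back-of-the-envelope calculation in Python; the sizes are illustrative assumptions, not any engine's actual on-page layout:

```python
# Illustrative sizes only -- real engines add page headers, slot arrays, etc.
page_size = 16 * 1024        # 16KB page (InnoDB's default)
key_size = 8                 # e.g. a BIGINT primary key
child_pointer_size = 6       # assumed size of an on-page child pointer

# How many (key, child-pointer) entries fit in one interior node:
branching_factor = page_size // (key_size + child_pointer_size)
print(branching_factor)      # -> 1170

# With that fan-out, a tree only three interior levels deep can
# address over a billion leaf pages:
print(branching_factor ** 3)  # -> 1601613000

# If keys (or inlined values) were 100x larger, the fan-out -- and with
# it the data reachable per IO -- would collapse accordingly:
print(page_size // ((key_size + child_pointer_size) * 100))  # -> 11
```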

max length in CKEditor plugin

I'm using the CKEditor plugin in Grails 2.4.3, the latest version.
However, I cannot find an option to set a max length in this editor.
Any help would be appreciated.
No such option exists. Part of the problem is that it is hard to define what constitutes "max length" across different sites; you must build your own solution based on what "max length" means for you.
Sometimes users want the text to contain no more than N characters, sometimes the source must not be longer than M characters, and sometimes the content must not exceed a certain graphical boundary (height/width); it all depends.
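To illustrate how those definitions diverge, here is a small Python sketch (standard library only) contrasting the source length of some editor content with its visible-text length:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the visible text nodes of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

html = "<p><strong>Hello</strong> world</p>"  # what the editor stores
extractor = TextExtractor()
extractor.feed(html)
text = "".join(extractor.parts)               # what the user sees

print(len(html))  # source length: 35
print(len(text))  # visible-text length: 11
```

The same content would fail a 20-character limit on the source while easily passing it on the visible text, which is why the plugin leaves the definition to you.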

MySQL: What is a page?

I can't for the life of me remember what a page is, in the context of a MySQL database. When I see something like 8KB/page, does that mean 8KB per row or ...?
Database pages are the basic internal structure used to organize data in the database files. Below is some information about the InnoDB model:
From 13.2.11.2. File Space Management:
The data files that you define in the configuration file form the InnoDB tablespace. The files are logically concatenated to form the tablespace. [...] The tablespace consists of database pages with a default size of 16KB. The pages are grouped into extents of size 1MB (64 consecutive pages). The “files” inside a tablespace are called segments in InnoDB.
And from 13.2.14. Restrictions on InnoDB Tables:
The default database page size in InnoDB is 16KB. By recompiling the code, you can set it to values ranging from 8KB to 64KB.
Further, to put rows in relation to pages:
The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a database page. That is, the maximum row length is about 8000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.
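If you just want to see the page size your server is actually using, you can query it. A minimal sketch, assuming the mysql-connector-python package and placeholder credentials:

```python
import mysql.connector  # pip install mysql-connector-python

# Assumed connection details -- replace with your own.
conn = mysql.connector.connect(
    host="localhost", user="root", password="secret"
)
cur = conn.cursor()

# innodb_page_size is read-only at runtime; on modern versions it is
# chosen when the data directory is initialized (default 16KB).
cur.execute("SHOW VARIABLES LIKE 'innodb_page_size'")
print(cur.fetchone())  # e.g. ('innodb_page_size', '16384')

cur.close()
conn.close()
```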
Well, it's not really a question about MySQL; it's more about what a page is in general in memory management.
You can read about that here: http://en.wikipedia.org/wiki/Page_(computer_memory)
Simply put, it's the smallest unit of data that is exchanged/stored.
The default page size is 4K, which is probably fine.
If you have large data sets or only very few write operations, raising the page size may improve performance.
Have a look here: http://db.apache.org/derby/manuals/tuning/perf24.html
Why? Because more data can be fetched/addressed at once.
If the probability is high that the desired data is in close proximity to the data you just fetched (not proximity in physical space, but I think you get what I mean), you can fetch it along in the same operation and take better advantage of caching and prefetching, particularly on a hard drive.
On the other hand, you waste space whenever your data doesn't fill up a page, or overflows one by just a little.
I personally never had a case where tuning the page size was important. There were always better approaches to optimize performance, and if not, it was already more than fast enough.
It's the unit size in which data is stored, read, and written, both on disk and in memory.
Different page sizes might work better or worse for different workloads/data sets; i.e., sometimes you might want more rows per page, sometimes fewer. Having said that, the default page size is fine for the majority of applications.
Note that "pages" aren't unique to MySQL; the concept is a tunable aspect of virtually all databases.