Following this guide: https://developer.chrome.com/apps/offline_storage#asking_more
When I execute
navigator.webkitPersistentStorage.requestQuota(Number.MAX_SAFE_INTEGER, console.log.bind(console));
I receive the output 10737418240 bytes (about 10.74 GB, i.e. exactly 10 GiB), which is the calculated maximum size when using temporary storage:
(available storage space + storage being used by apps) * .5
However, I did press OK to allow allocating more storage than that. So why don't I get the requested storage?
An application can have a larger quota for persistent storage than temporary storage, but you must request storage using the Quota Management API and the user must grant you permission to use more space.
I have the same problem as the one posted here: Cannot allocate more than 10GB of HTML5 persistent storage
Conclusion: the maximum persistent storage quota is 10GB, and the documentation's claim that it is limited only by the user's free disk space is incorrect.
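Supporting this, the granted figure is not a rounded disk-derived number but exactly 10 GiB; a quick Python check of the arithmetic:

>>> 10 * 1024 ** 3         # 10 GiB in bytes
10737418240
>>> 10737418240 / 10 ** 9  # the same figure in decimal GB
10.73741824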
Related
When I create a boot disk with gcloud that is less than 200GB in size, I see this warning:
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks/persistent-disks#pdperformance.
I don't, however, see the details about this 200GB threshold anywhere on the page at that URL.
Should I care about this warning at all? I wonder if it is more of a ploy to make more money by encouraging you to lease more space.
Note: I'm using a standard disk, not a solid-state one. The only disk access performance I care about is via MySQL, with very small reads/writes 99% of the time and the occasional blob in the range of roughly 1 to 100 MB.
It looks like the documentation has shifted around a little, and the warning is out of date.
There is a section of the Block Storage page that explains the relationship between persistent disk size and performance.
We'll fix the URL in gcloud.
I have a lot of data that needs to be stored on disk.
Since it is only key-value pairs, I want to use Couchbase for it.
The data is several GB, and I allocated only 1 GB of RAM to the bucket.
I thought RAM was only a cache to Couchbase.
But after inserting a lot of data, I got:
Hard Out Of Memory Error. Bucket "test2" on node 100.66.32.169 is full. All memory allocated to this bucket is used for metadata.
when I opened the Couchbase web console.
Can Couchbase be used as a database that stores data on disk, or is it RAM-oriented?
Update:
OK, let me make the question more specific:
In Couchbase:
1. If I allocate 1 GB of RAM to a bucket, can I store 10 GB of data in that bucket?
2. If I can do 1., can I consider the 1 GB of RAM a kind of cache over the 10 GB of data (just as a CPU's L2 cache is a cache over RAM)?
By default, Couchbase stores all keys (and some metadata) in RAM and fills whatever remains with values. Starting with version 3.0, you can set your bucket to full-eviction mode, which keeps keys in RAM only for cached documents. This lets you store much more data than you have memory, but at a performance cost for some read operations, especially attempts to retrieve keys that don't exist.
To solve your specific problem, edit the bucket and set it to full (metadata) eviction. Note that this will restart the bucket.
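If you'd rather script the change than click through the web console, a minimal sketch using the REST API might look like this (the credentials are placeholders, the host and bucket name are taken from your error message, and I'm assuming Couchbase 3.0+, where the evictionPolicy bucket setting exists):

import requests

# Switch the bucket to full eviction via the admin REST port (8091).
# Note: as mentioned above, this restarts the bucket.
resp = requests.post(
    'http://100.66.32.169:8091/pools/default/buckets/test2',
    auth=('Administrator', 'password'),  # placeholder credentials
    data={'evictionPolicy': 'fullEviction'},
)
resp.raise_for_status()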
Couchbase tries to keep as much of the "live dataset" (i.e. the most used / most requested keys) in the node's memory as possible. This is central to the performance of the database and part of its design, so good memory sizing of your nodes and good quotas for your buckets are essential.
It does offer persistence, but I'd say it is not a disk-first oriented database.
Persistence to disk is mainly for two things: making data durable and resilient to node shutdown (of course), and off-loading data (least-used data first) from RAM to disk.
I think you're asking a bunch of different questions here.
Specifically about the error message: looks like your bucket is simply too small to hold all the data you're storing in it.
About persisting to disk: you can force Couchbase to write to disk (and even configure the number of nodes that docs are replicated to), but as noted above, that will probably hurt your performance a little.
Have a look, for example, at the persist_to flag in the set() API of the Python client for Couchbase.
couchbase client for python
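A minimal sketch of what that looks like, assuming the 1.x-era Python SDK (the host, bucket name, key and document are placeholders):

from couchbase import Couchbase

# Connect to a bucket (placeholder host and bucket).
cb = Couchbase.connect(host='localhost', bucket='default')

# Block until the item has been persisted to disk on at least one
# node (persist_to=1) and replicated to at least one replica
# (replicate_to=1) before returning.
cb.set('some_key', {'answer': 42}, persist_to=1, replicate_to=1)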
I set up replication on a Microsoft Azure virtual machine.
I'm using MySQL and working with SQL Workbench (Windows).
Yesterday I discovered that my 250 GB of storage was full and replication had stopped.
The log says:
Timestamp, Thread, Type, Details
2015-07-29 23:26:44, 1672, Warning, Disk is full writing '.\database123-relay-bin.000164' (Errcode: 28 - No space left on device). Waiting for someone to free space...
I then created another 250 GB external storage disk.
I have two questions:
1. How can I run queries and use data across the two different storage disks?
2. Is creating another disk the right thing to do, or is there a way to create flexible (expandable) storage?
What I found is this: http://www.psce.com/blog/2012/05/07/running-out-of-disk-space-on-mysql-partition-a-quick-rescue/ but it did not help. I need help and guidance.
Another option that I found:
how to extend C drive size of Azure VM
Use data disks for storing application data from your Azure VMs. The largest data disk size available is 1TB. If you require more space, you can stripe more than one data disk together. Avoid using the OS disk for storing application data because you will run into this issue of limited space. Also avoid using the temporary disk for storing application data as it is not persistent. You cannot extend the OS disk size, but if you use data disks, you can start with a smaller data disk and increase its size as your application grows.
Learn more about Azure disks here: https://msdn.microsoft.com/en-us/library/azure/Dn790303.aspx
I'm working on a digital asset management deployment, and it's likely we'll need more than the 10TB maximum size for a persistent disk. The DAM software does not support multiple storage points and likely won't in the future.
How are people coping with this limitation? Is the 10TB max likely to increase as Google Compute Engine matures?
As of 1 Feb 2016, you can create a persistent disk of up to 64 TB; see the blog post for more details.
According to the official doc:
Instances with shared-core machine types are limited to a maximum of 16 persistent disks.
For custom machine types or predefined machine types that have a minimum of 1 vCPU, you can attach up to 128 persistent disks.
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Each instance can attach only a limited amount of total persistent disk space and a limited number of individual persistent disks. Predefined machine types and custom machine types have the same persistent disk limits.
Most instances can have up to 64 TB of total persistent disk space attached. Shared-core machine types are limited to 3 TB of total persistent disk space. Total persistent disk space for an instance includes the size of the boot persistent disk.
Couchbase documentation says that "Disk persistence enables you to perform backup and restore operations, and enables you to grow your datasets larger than the built-in caching layer," but I can't seem to get it to work.
I am testing Couchbase 2.5.1 on a three node cluster, with a total of 56.4GB memory configured for the bucket. After ~124,000,000 100-byte objects -- about 12GB of raw data -- it stops accepting additional puts. 1 replica is configured.
Is there a magic "go ahead and spill to disk" switch that I'm missing? There are no suspicious entries in the errors log.
It does support data greater than memory - see Ejection and working set management in the manual.
In your instance, what errors are you getting from your application? When memory usage reaches the high water mark, items need to be ejected from memory (until usage falls back to the low water mark) to make room for newer items.
Depending on the disk speed and the rate of incoming items, this can result in TEMP_OOM errors being sent back to the client, telling it to temporarily back off before retrying the set, but these should generally be rare in most instances. Details on handling them can be found in the Developer Guide.
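For example, with the Python SDK a TEMP_OOM failure surfaces as an exception you can back off and retry on. A minimal sketch (the exception name is from the 2.x Python client; the retry count and delays are arbitrary):

import time
from couchbase.exceptions import TemporaryFailError

def set_with_backoff(bucket, key, value, retries=5):
    # Retry with exponential backoff when the server reports a
    # temporary out-of-memory condition (TEMP_OOM).
    for attempt in range(retries):
        try:
            return bucket.set(key, value)
        except TemporaryFailError:
            time.sleep(0.1 * 2 ** attempt)
    raise RuntimeError('still getting TEMP_OOM after %d retries' % retries)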
My guess would be that it's not the raw data that is filling up your memory, but the metadata associated with it. Couchbase 2.5 needs 56 bytes per key, so in your case that would be approximately 7GB of metadata, which is much less than your memory quota.
But... metadata can become fragmented in memory. If you batch-inserted all 124M objects in a very short time, I would assume you got at least 90% fragmentation. That means that even with only 7GB of useful metadata, the space required to hold it has filled up your RAM, with lots of unused space in each allocated block.
The solution to your problem is to defragment the data by compacting the bucket. Compaction can either be run manually or triggered automatically as needed:
manually: by clicking the Compact button for the bucket in the web console's Data Buckets view;
automatically: by configuring auto-compaction (under Settings in the web console) to kick in once fragmentation passes a given threshold.
If you need more insights about why compaction is needed, you can read this blog article from Couchbase.
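If you want to trigger compaction from a script rather than the console, the REST API exposes an endpoint for it; a minimal sketch (host, credentials and bucket name are placeholders):

import requests

# Kick off compaction of one bucket via the admin REST port (8091).
resp = requests.post(
    'http://localhost:8091/pools/default/buckets/mybucket/controller/compactBucket',
    auth=('Administrator', 'password'),  # placeholder credentials
)
resp.raise_for_status()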
Even if none of your documents is stored in RAM, Couchbase still keeps all the document IDs and metadata in memory (this will change in version 3), and it also needs some available memory to run efficiently. The relevant section in the docs:
http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#memory-quota
Note that when you use a replica you need twice as much RAM. The formula is roughly:
(56 + avg_size_of_your_doc_ID) * nb_docs * 2 (replica) * (1 + headroom) / (high_water_mark)
So depending on your configuration, it's quite possible that 124,000,000 documents require 56 GB of memory.
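Plugging illustrative numbers into that formula shows how you get there. A quick sketch in Python (the 25% headroom and 85% high water mark are typical values from Couchbase's sizing guidelines; the 100-byte average document ID is purely an assumption):

nb_docs = 124 * 10 ** 6     # ~124,000,000 documents
meta_per_doc = 56           # bytes of metadata per key in Couchbase 2.5
avg_doc_id = 100            # assumed average document ID length, in bytes
replica_copies = 2          # 1 replica means every item is held twice
headroom = 0.25             # assumed sizing headroom
high_water_mark = 0.85      # assumed high water mark

ram = (meta_per_doc + avg_doc_id) * nb_docs * replica_copies \
      * (1 + headroom) / high_water_mark
print(ram / 10 ** 9)        # ~56.9 (decimal GB), matching the estimate above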