Making localStorage and/or IndexedDB data offline permanent? - html

Is it possible to make localStorage and/or IndexedDB offline data permanent?
I am planning to make a completely offline HTML5 app and want the data never to be deleted unless the user knowingly does so.
I do not want the data to be deleted even after the app is closed, the system is shut down, or something like CCleaner is run.

What you are looking for is persistent storage, as defined in the Quota Management API. Currently no browser implements it.
However, IndexedDB data, even though it lives in temporary storage, persists across the application life cycle.
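For completeness, a minimal sketch using the newer Storage API (navigator.storage), which superseded that proposal and lets a page ask the browser to mark its storage as persistent; support and the granting behaviour vary by browser, so treat this as an illustration rather than a guarantee:

    // Ask the browser not to evict this origin's storage under pressure.
    // Support varies; the promise resolves to whether persistence was granted.
    if (navigator.storage && navigator.storage.persist) {
      navigator.storage.persist().then((granted) => {
        console.log(granted
          ? 'Storage will only be cleared by explicit user action'
          : 'Storage may still be evicted under storage pressure');
      });
    }

Even when granted, this does not protect against tools like CCleaner that wipe the browser's profile data from outside the browser.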

Related

Is storing data in Local Storage good practice?

I'm using local storage to store data (values, menus, etc.). Is it good practice in programming?
Local storage is inherently no more secure than using cookies. It is very similar to cookies, but can only be read client-side, whereas cookies are sent to the server when visiting a site that has them. This could, in some systems, make local storage less secure, as it can only be read by client-side JS code, which can be modified.
There is no security on either cookies or local storage, so you have to build your own security if you want it. Both are good practice to use in certain situations, mainly when security is not a priority and you want to, for example, save settings or count visits to a site.
FYI: Local Storage is not a Chrome-only feature. The Chrome Storage API is a Chrome only storage mechanism used for Chrome and Chromium extensions. It works the same as local storage, but with a few differences. Be sure not to be confused by the two.
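As a quick illustration of the non-sensitive use cases mentioned above (settings, visit counts); the key names here are made up:

    // Persist non-sensitive UI settings as a JSON string.
    localStorage.setItem('settings', JSON.stringify({ theme: 'dark', lang: 'en' }));

    // Count visits: localStorage only stores strings, so convert explicitly.
    const visits = Number(localStorage.getItem('visits') || 0) + 1;
    localStorage.setItem('visits', String(visits));

    // Read the settings back.
    const settings = JSON.parse(localStorage.getItem('settings'));

Anything stored this way is readable and editable by any script running on the same origin, which is why it should never hold secrets.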

Using Couchbase SDK vs Sync Gateway API

I have a full deployment of couchbase (server, sync gateway and lite) and have an API, mobile app and web app all using it.
It works very well, but I was wondering if there are any advantages to using the Sync Gateway API over the Couchbase SDK. Specifically, I would like to know if Sync Gateway would handle larger numbers of operations better than the SDK, perhaps via an internal queue/cache system, but I can't seem to find definitive documentation for this.
At the moment the API uses the C# Couchbase SDK, and we use Sync Gateway very little (only really for synchronising the mobile app).
First, some relevant background info :
Every document that needs to be synced over to Couchbase Lite (CBL) clients needs to be processed by the Sync Gateway (SGW). This is true whether a doc is written via the SGW API or whether it comes in via a server write (N1QL or SDK). The latter case is referred to as "import processing", wherein the document that is written to the bucket (via N1QL/SDK) is read by SGW via the DCP feed. The document is then processed by SGW and written back to the bucket with the relevant sync metadata.
Prerequisite :
In order for the SGW to import documents written directly via N1QL/SDK, you must enable “shared bucket access” and import processing as discussed here
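As a rough sketch (the database and bucket names are made up, and the exact option names vary by Sync Gateway version, so check the docs linked above), the relevant part of the Sync Gateway config looks something like:

    {
      "databases": {
        "mydb": {
          "bucket": "my-bucket",
          "enable_shared_bucket_access": true,
          "import_docs": true
        }
      }
    }

With this in place, documents written directly via the SDK or N1QL are picked up by SGW's import processing and become visible to CBL clients.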
Non-mobile documents :
If you have documents that are never going to be synced to the CBL clients, then the choice is obvious: use server SDKs or N1QL.
Mobile documents (docs to sync to CBL clients) :
Assuming you are on SGW 2.x syncing with CBL 2.x clients
If you have documents written at server end that need to be synced to CBL clients, then consider the following
Server side write rate:
If you are looking at writes on the server side coming in at sustained rates significantly exceeding 1.5K/sec (let's say 5K/sec), then you should go the SGW API route. While it's easy enough to do a bulk update via a server N1QL query, remember that SGW still needs to keep up and do the import processing (as discussed in the background above).
This means that if you are doing high-volume updates through the SDK/N1QL, you will have to rate-limit them so the SGW can keep up (do batched updates via the SDK; see the sketch at the end of this subsection).
That said, it is important to consider that if SGW can't keep up with the write throughput on the DCP feed, it's going to result in latency no matter how the writes are happening (SGW API or N1QL).
If your sustained write rate on the server isn't expected to be significantly high, then go with N1QL.
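A rough sketch of the batched, rate-limited SDK writes mentioned above. upsertViaSdk is a stand-in for whatever upsert call your SDK exposes, and the batch size and pause are illustrative numbers, not recommendations:

    // Write documents in batches with a pause between them so Sync Gateway's
    // import processing can keep up with the DCP feed.
    // upsertViaSdk(id, doc) is a hypothetical wrapper around your SDK's upsert.
    async function writeInBatches(docs, batchSize = 500, pauseMs = 500) {
      for (let i = 0; i < docs.length; i += batchSize) {
        const batch = docs.slice(i, i + batchSize);
        await Promise.all(batch.map((doc) => upsertViaSdk(doc.id, doc)));
        // Back off briefly before the next batch to limit sustained write rate.
        await new Promise((resolve) => setTimeout(resolve, pauseMs));
      }
    }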
Deletes Handling:
Does not matter. Under shared-bucket-access, deletes coming in via SDK or SGW API will result in a tombstone. Read more about it here
SGW specific config :
Naturally, if you are dealing with SGW-specific config, such as creating SGW users and roles, then you will use the SGW API for that.
Conflict Handling :
In 2.x, it does not matter. Conflicts are handled on the CBL side.
Challenge with SGW API
Probably the biggest challenge in a real-world scenario is that using the SG API path means either storing information about SG revision IDs in the external system, or performing every mutation as a read-then-write (since we don't have a way to PUT a document without providing a revision ID).
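For illustration, a hedged sketch of that read-then-write pattern against the Sync Gateway REST API; host, port, database name and authentication are placeholders:

    // Fetch the current revision from Sync Gateway, then PUT the update with it.
    // Base URL and database name are made-up placeholders.
    async function updateViaSyncGateway(docId, changes) {
      const base = 'http://localhost:4984/mydb';
      const current = await (await fetch(`${base}/${docId}`)).json(); // includes _rev
      const updated = { ...current, ...changes };
      return fetch(`${base}/${docId}?rev=${current._rev}`, {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(updated),
      });
    }

The extra GET per mutation is exactly the overhead the answer above is pointing at.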
The short answer is that for backend operations, the Couchbase SDK is your choice, and it will perform much better. Sync Gateway is meant to be used by mobile clients, with a few exceptions (*).
Bulk/Batch operations
In my performance tests using the Java Couchbase SDK and bulk operations from AsyncBucket (link), I have updated up to 8 thousand documents per second. In .NET you can do batch operations too (link).
Sync Gateway also supports bulk operations, yet it is much slower because it relies on a REST API and it requires you to provide a _rev from the previous version of each document you want to update. This will usually result in the backend having to do a GET before doing a PUT. Also, keep in mind that Sync Gateway is not a storage unit. It just works as a proxy to Couchbase, managing mobile client access to segments of data based on the channels registered for each user, and it writes all of its metadata documents into the Couchbase Server bucket, including channel indexing, user registrations, document revisions and views.
Querying
Views are indexed, so queries over large data sets can respond very fast. Whenever a document is changed, the map function of every view has the opportunity to map it. But when a view is created through the Sync Gateway REST API, some code is added to your map function to handle user channels/permissions, making it slower than plain code created directly in the Couchbase Admin UI. Querying views with compound keys using startKey/endKey parameters is very powerful when you have hierarchical data, but this functionality and the use of the reduce function are not available to mobile clients.
N1QL can also be very fast when your query takes advantage of Couchbase indexes.
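For example (the bucket, index and field names are made up), a query whose predicate is covered by a matching index:

    -- Illustrative N1QL: create an index on the fields the query filters on.
    CREATE INDEX idx_type_city ON `travel-data`(type, city);

    SELECT META().id, name
    FROM `travel-data`
    WHERE type = 'hotel' AND city = 'Paris';

Without an index covering the WHERE clause, the same query falls back to a much slower scan.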
Notes
(*) One exception to the rule is when you want to delete a document and have this reflected on mobile phones. The DELETE operation leaves an empty document with a _deleted: true attribute and can only be done through Sync Gateway. The next time the mobile device synchronizes and finds this hint, it will delete the document from local storage. You can also set this attribute through a PUT operation, and you may also add an _exp: "2019-12-12T00:00:00.000Z" attribute to schedule a purge of the document at a future date, so that the server also gets cleaned up. However, just purging a document through Sync Gateway is equivalent to deleting it through the Couchbase SDK, and this won't be reflected on mobile devices.
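A hedged sketch of that soft delete plus scheduled purge via a PUT; the host, database name and revision handling are placeholders:

    // Mark the document deleted and schedule a server-side purge for a future date.
    async function softDeleteViaSyncGateway(docId, currentRev) {
      return fetch(`http://localhost:4984/mydb/${docId}?rev=${currentRev}`, {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ _deleted: true, _exp: '2019-12-12T00:00:00.000Z' }),
      });
    }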
NOTE: Prior to Sync Gateway 1.5 and Couchbase 5.0, all backend operations had to be done directly through Sync Gateway so that Sync Gateway and mobile clients could detect those changes. This has changed since the shared_bucket_access option was introduced. More info here.

Locally store large amounts of data

The main purpose is to store data locally so it can be accessed without internet connection.
In my React application I will need to fetch JSON data (such as images, text and videos) from the internet and display it for a certain amount of time.
To add flexibility, this should work offline as well.
I've read about options such as localStorage and Firebase, but all of them so far either require access to the Internet or are limited to 10 MB, which is too low for what I'll need.
What would be my best option to persist data in some sort of offline database or file through React?
I'd also be thankful if you could point me to a good tutorial about any provided solution.
To store large amounts of data on the client side you can use IndexedDB.
IndexedDB is a low-level API for client-side storage of significant amounts of structured data, including files/blobs.
You can read more about the IndexedDB API here.
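A minimal sketch of caching a record in IndexedDB for offline use; the database, store and record names are made up, and in practice the record would be the JSON you fetched:

    // Open (or create) the database; the store is created on first run.
    const request = indexedDB.open('offline-cache', 1);

    request.onupgradeneeded = () => {
      request.result.createObjectStore('items', { keyPath: 'id' });
    };

    request.onsuccess = () => {
      const db = request.result;
      const tx = db.transaction('items', 'readwrite');
      // Store a record; large blobs (images, video) can be stored the same way.
      tx.objectStore('items').put({ id: 'article-1', title: 'Hello', body: '...' });
      tx.oncomplete = () => db.close();
    };

In a React app, wrappers like localForage or idb are commonly used on top of this API, but the raw API above is all the browser actually requires.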

How secure is data stored in sessionStorage and localStorage in browsers?

I am a newbie in web application development. Recently I got a chance to use sessionStorage and localStorage. I have seen most browsers saving the data in the browser cache in SQLite, or as web storage either in Base64 or in plain-text format.
How secure is this, or do we need to implement some encryption before saving it into the storage?
There are two types of web storage, which differ in scope and lifetime:
Local storage (window.localStorage) — The local storage uses the localStorage object to store data for your entire website, permanently. That means the stored local data will be available on the next day, the next week, or the next year unless you remove it.
Session storage (window.sessionStorage) — The session storage uses the sessionStorage object to store data on a temporary basis, for a single window (or tab). The data disappears when the session ends, i.e. when the user closes that window (or tab).
Unlike cookies, the storage limit is far larger (at least 5MB) and information is never transferred to the server.
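A quick illustration of the shared API and the differing lifetimes (the key names are made up); note that both stores hold plain text readable by any script on the same origin, so anything sensitive should be encrypted or kept server-side:

    // Same synchronous API; only scope and lifetime differ.
    localStorage.setItem('theme', 'dark');         // survives browser restarts
    sessionStorage.setItem('wizardStep', '2');     // gone when the tab closes

    console.log(localStorage.getItem('theme'));        // "dark"
    console.log(sessionStorage.getItem('wizardStep')); // "2"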
Also check this thread
Does HTML5 web storage (localStorage) offer a security advantage over cookies?

Can I keep websql database open to improve performance?

I have an HTML5 mobile app running on iOS and Android. Users will normally have a little bit of local data stored in a few tables. Let's say five tables with an average of three records.
Performance of websql is really bad. I read in this post that much of the delay is probably in opening and closing the database for each transaction. My users will normally only do one transaction at a time, so the time needed to open and close the database for each operation will usually be a relatively big chunk of total time needed.
I am wondering if I could just open the database once, dispense with all the transaction wrappers and just execute the sql straight away?
The table is never used by any other person or process than the user updating their data, or the app reading the data after an update and sending the data to a server for calculations and statistics.
Most crucially: if I follow the above strategy, and the database is never closed, but the user or the OS closes the app (properly speaking: the webview), will the changed data persist or be lost?
Okay, I found the problem. I use the persistenceJS framework to deal with the local database. This keeps a copy of the websql data stored in a js object and keeps database and js object in sync. That's a process that takes a while, and I was putting everything in the "flush" handler, which comes after the sync.
I also keep the connection open. For IndexedDB, I could keep it open on the UI and a background thread at the same time without observing any problem. I believe WebSQL will be the same. If you are using just a JS file, you could try out my own JavaScript library; it is a very thin wrapper for both IndexedDB and WebSQL, but the library is written in an IndexedDB style.
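A rough sketch of that "open once, reuse" pattern, shown here with IndexedDB (the database and store names are illustrative); each operation still gets its own transaction, but the open cost is paid only once, and committed transactions persist even if the OS later kills the webview:

    // Lazily open the database once and reuse the connection everywhere.
    let dbPromise = null;

    function getDb() {
      if (!dbPromise) {
        dbPromise = new Promise((resolve, reject) => {
          const req = indexedDB.open('app-db', 1);
          req.onupgradeneeded = () => req.result.createObjectStore('records');
          req.onsuccess = () => resolve(req.result);
          req.onerror = () => reject(req.error);
        });
      }
      return dbPromise;
    }

    // Each write is still wrapped in a short-lived transaction.
    async function save(key, value) {
      const db = await getDb();
      return new Promise((resolve, reject) => {
        const tx = db.transaction('records', 'readwrite');
        tx.objectStore('records').put(value, key);
        tx.oncomplete = resolve;
        tx.onerror = () => reject(tx.error);
      });
    }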