IndexedDB flush to disk on Chrome

I'm facing an issue with IndexedDB on Chrome where I reload my page once the transaction reports a successful write.
The problem is that sometimes the data does not show up after the reload. I can work around this by waiting about 100 ms before reloading, which leads me to believe that the data is not flushed to disk every time.
Firefox has an experimental readwriteflush transaction mode which ensures data is flushed to disk before the success callback fires, but I can't seem to find anything similar for Chrome. Any suggestions?
Here's my insert code:
const data = {type: type, value: value};
const objectStore = StorageService.db.transaction(['localData'], 'readwrite').objectStore('localData');
// readwriteflush doesn't work in Chrome
// const objectStore = StorageService.db.transaction(['localData'], 'readwriteflush').objectStore('localData');
const requestSet = objectStore.put(data);

requestSet.onerror = function (event) {
    alert('Error in saving data locally');
};

requestSet.onsuccess = function (event) {
    console.log('Data was successfully saved locally: ' + type);
    if (callback != undefined) {
        callback();
    }
};
The callback (among other things) executes location.reload = '/';, so the page reloads after onsuccess has fired.
After the page reloads I cannot see any data in my IndexedDB store, either from code or in the developer tools. This does not always happen, however; I've observed it only when the data is larger than usual.

"success" fired at a request does not indicate that the transaction has committed successfully. The transaction could later fail due to a separate failed request (e.g. a conflicting add call), I/O error, or e.g. power loss.
You need to wait for the "complete" event to be fired at the transaction. Chrome flushes to disk before firing the "complete" event.
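For example, a minimal sketch based on the insert code from the question (StorageService, type, value and callback are the asker's names); the reload is deferred until the transaction's "complete" event fires:

const tx = StorageService.db.transaction(['localData'], 'readwrite');
const objectStore = tx.objectStore('localData');
const requestSet = objectStore.put({ type: type, value: value });

requestSet.onerror = function () {
    alert('Error in saving data locally');
};

tx.oncomplete = function () {
    // The transaction has committed (and Chrome has flushed to disk),
    // so it is now safe to reload the page.
    console.log('Data was successfully saved locally: ' + type);
    if (callback !== undefined) {
        callback();
    }
};

tx.onabort = function () {
    console.error('Transaction aborted:', tx.error);
};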

Related

net::ERR_CONNECTION_RESET with service worker in Chrome

I have a very simple service worker to add offline support. The fetch handler looks like
self.addEventListener("fetch", function (event) {
var url = event.request.url;
event.respondWith(fetch(event.request).then(function (response) {
//var cacheResponse: Response = response.clone();
//caches.open(CURRENT_CACHES.offline).then((cache: Cache) => {
// cache.put(url, cacheResponse).catch(() => {
// // ignore error
// });
//});
return response;
}).catch(function () {
// check the cache
return getCachedContent(event.request);
}));
});
Intermittently we are seeing a net::ERR_CONNECTION_RESET error for a particular script we load into the page while online. The error is not coming from the server, as the service worker is picking up the file from the browser cache. Chrome's network tab shows that the service worker successfully fetched the file from the disk cache, but the request from the browser to the service worker shows as (failed).
Does anyone know the underlying issue causing this? Is there a problem with my service worker implementation?
This is likely due to a bug in Chrome (and potentially other browsers as well) in which a garbage collection pass can remove a reference to the response stream while it is still being read.
The fix in Chrome is tracked at https://bugs.chromium.org/p/chromium/issues/detail?id=934386.

Chrome doesn't use cache after power loss?

I am creating a digital signage player that uses Chrome as its display engine. We need to be able to muddle along without too much interruption if the network goes down.
Chrome caches images fine, and I've set the "Expires" header to a month after access. I can take the player computer offline and the app runs for days with no problem. If I reboot the machine the right way (Start -> Shut Down), caching still works as expected.
The issue is that when Chrome exits abnormally, either through a crash or a power loss, it ignores the cache on reboot and refuses to load images. This happens even if I cut power five minutes after it loads the page, so the content is not expiring.
My guess is that Chrome ignores the cache after an abnormal exit to prevent a corrupted cache from repeatedly crashing the browser. However, this behavior is not what I need.
Does anyone know of a command-line argument or flag I can set to keep this from happening?
Thanks for your help.
I tried everything I could think of to make Chrome not invalidate the local cache after a system failure, and came up empty. A few other people have asked the same question, and I didn't see an answer.
Here's what I did to make this work; if someone else is having the same problem, it might be the workaround you need.
I added a service worker that caches images. The code below isn't perfect yet, but it should be a starting place for someone... (FYI, I learned this 5 minutes ago, so if someone wants to give me a pointer or two on how to make this more elegant, I'm all ears.)
We cache anything that has a response type of "cors", so we cache only images coming from the remote server. Note that your images must be loaded via https for this to work.
Taken (mostly) from: https://developers.google.com/web/fundamentals/getting-started/primers/service-workers
var CACHE_NAME = 'shine_cache';
var urlsToCache = [
    '/'
];

self.addEventListener('install', function(event) {
    // Perform install steps
    event.waitUntil(
        caches.open(CACHE_NAME)
            .then(function(cache) {
                console.log('Opened cache');
                return cache.addAll(urlsToCache);
            })
    );
});

self.addEventListener('fetch', function(event) {
    //console.log('Handling fetch event for', event.request);
    if (event.request.method == 'POST') {
        //console.log("Skipping POST");
        event.respondWith(fetch(event.request));
        return;
    }

    if (event.request.headers.get('Accept').indexOf('image') !== -1) {
        event.respondWith(
            caches.match(event.request)
                .then(function(response) {
                    // Cache hit - return the cached response
                    if (response) {
                        console.log("Returning from cache.", event.request);
                        return response;
                    }

                    // IMPORTANT: Clone the request. A request is a stream and
                    // can only be consumed once. Since we are consuming it
                    // once by the cache and once by the browser for fetch,
                    // we need to clone it.
                    var fetchRequest = event.request.clone();

                    return fetch(fetchRequest).then(
                        function(response) {
                            console.log("Have a response.", response);
                            // Check if we received a valid response
                            if (!response || response.status !== 200 || response.type !== 'cors') {
                                return response;
                            }

                            // IMPORTANT: Clone the response. A response is a stream,
                            // and because we want the browser to consume the response
                            // as well as the cache, we need to clone it so we have
                            // two streams.
                            var responseToCache = response.clone();

                            caches.open(CACHE_NAME)
                                .then(function(cache) {
                                    console.log("Caching response", event.request);
                                    cache.put(event.request, responseToCache);
                                });

                            return response;
                        }
                    );
                })
        );
    }
});
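For completeness, the page still has to register the worker once; a minimal sketch, assuming the script above is served as /sw.js (a hypothetical path):

if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
        .then(function(registration) {
            console.log('Service worker registered with scope:', registration.scope);
        })
        .catch(function(error) {
            console.error('Service worker registration failed:', error);
        });
}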

Accessing indexedDB in ServiceWorker. Race condition

There aren't many examples demonstrating indexedDB in a ServiceWorker yet, but the ones I saw were all structured like this:
const request = indexedDB.open( 'myDB', 1 );
var db;
request.onupgradeneeded = ...
request.onsuccess = function() {
    db = this.result; // Average 8ms
};

self.onfetch = function(e)
{
    const requestURL = new URL( e.request.url ),
          path = requestURL.pathname;

    if( path === '/test' )
    {
        const response = new Promise( function( resolve )
        {
            console.log( performance.now(), typeof db ); // Average 15ms

            db.transaction( 'cache' ).objectStore( 'cache' ).get( 'test' ).onsuccess = function()
            {
                resolve( new Response( this.result, { headers: { 'content-type': 'text/plain' } } ) );
            }
        });

        e.respondWith( response );
    }
}
Is this likely to fail when the ServiceWorker starts up, and if so what is a robust way of accessing indexedDB in a ServiceWorker?
Opening the IDB every time the ServiceWorker starts up is unlikely to be optimal; you'll end up opening it even when it isn't used. Instead, open the db when you need it. A singleton is really useful here (see https://github.com/jakearchibald/svgomg/blob/master/src/js/utils/storage.js#L5), so you don't need to open IDB twice if it's used twice in its lifetime.
The "activate" event is a great place to open IDB and let any "upgradeneeded" events run, as the old version of the ServiceWorker is out of the way.
You can wrap a transaction in a promise like so:
var tx = db.transaction(scope, mode);
var p = new Promise(function(resolve, reject) {
    tx.onabort = function() { reject(tx.error); };
    tx.oncomplete = function() { resolve(); };
});
Now p will resolve/reject when the transaction completes/aborts. So you can do arbitrary logic in the tx transaction, and p.then(...) and/or pass a dependent promise into e.respondWith() or e.waitUntil() etc.
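Putting that together with a fetch handler, a sketch (reusing the hypothetical getDB() helper above and the '/test' route and 'cache' store from the question):

self.onfetch = function(e) {
    const requestURL = new URL(e.request.url);
    if (requestURL.pathname === '/test') {
        e.respondWith(getDB().then(function(db) {
            return new Promise(function(resolve, reject) {
                const tx = db.transaction('cache');
                const req = tx.objectStore('cache').get('test');
                tx.onabort = function() { reject(tx.error); };
                // Resolve once the transaction has completed, not just the request.
                tx.oncomplete = function() {
                    resolve(new Response(req.result, { headers: { 'content-type': 'text/plain' } }));
                };
            });
        }));
    }
};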
As noted by other commenters, we really do need to promisify IndexedDB. But the combination of its post-task autocommit model and the microtask queues that Promises use makes it... nontrivial to do so without basically replacing the API entirely. But (as an implementer and one of the spec editors) I'm actively prototyping some ideas.
I don't know of anything special about accessing IndexedDB from the context of a service worker vs. accessing IndexedDB from a controlled page.
Promises obviously make your life much easier within a service worker, so I've found using something like https://gist.github.com/inexorabletash/c8069c042b734519680c useful instead of the raw IndexedDB API. But it's not mandatory, as long as you create and manage your own promises to reflect the state of the asynchronous IndexedDB operations.
The main thing to keep in mind when writing a fetch event handler (and this isn't specific to using IndexedDB), is that if you call event.respondWith(), you need to pass in either a Response object or a promise that resolves with a Response object. As long as you're doing that, it shouldn't matter whether your Response is constructed from IndexedDB entries or the Cache API or elsewhere.
Are you running into any actual problems with the code you posted, or was this more of a theoretical question?

chrome.storage.sync vs chrome.storage.local

I was trying to understand how to use the chrome.storage API.
I have included the following in my manifest.json:
"permissions": [
"activeTab","storage"
],
Then I opened a new tab with the devtools and switched the <page context> to that of my chrome-extension. Then I typed:
chrome.storage.sync.set({"foo":"bar"},function(){ console.log("saved ok"); } );
and got:
undefined
saved ok
Then I tried getting the stored value:
chrome.storage.sync.get("foo",function(data){ console.log(data); } );
but this got me:
undefined
Object {}
Then I did the same, but with local instead of sync, and this worked as expected:
chrome.storage.local.set({"foo":"bar"},function(){ console.log("saved ok"); } );
..and the retrieval:
chrome.storage.local.get("foo",function(data){ console.log(data); } );
Which got me: Object {foo: "bar"} as it should.
Is this because I am not signed in to my account in Chrome? But in that case, isn't chrome.storage.sync designed to fall back to storing the data locally?
EDIT
Strangely, when I type this straight into the console it seems to work, but the same code doesn't run from the background.js code inside a click listener:
var dataCache = {};

function addStarredPost(post)
{
    var id = getPostId(post);
    var timeStamp = new Date().getTime();
    var user = getUserName();
    dataCache[id] = {"id": id, "post": post, "time": timeStamp, "user": user};
    chrome.storage.sync.set(dataCache, function(){ console.log("Starred!"); });
}
After this runs, chrome.storage.sync.get(null, function(data){ console.log(data); }); returns an empty object, as if the data wasn't stored. :/
The code works perfectly with chrome.storage.local instead.
chrome.runtime.lastError returns undefined.
The max size for chrome.storage.local is 5,242,880 bytes.
To extend the storage you can add to the manifest.json:
"permissions": [
"unlimitedStorage"
]
The max size for chrome sync storage is:
102,400 bytes total
8,192 bytes per item
512 items max
1,800 write operations per hour
120 operations per minute
(source)
Whoops!
The problem was that I was trying to sync data that exceeded the per-item size limit (4,096 bytes per item).
I wasn't seeing chrome.runtime.lastError because I was mistakenly checking it inside the get callback instead of the set callback, which was the call producing the error. Hence I'm posting this answer, so it might help others who share the same confusion.
You should check chrome.runtime.lastError inside each API callback, like so:
chrome.storage.local.set(objectToStore, function()
{
    if (chrome.runtime.lastError)
    {
        /* error */
        console.log(chrome.runtime.lastError.message);
        return;
    }
    // all good. do your thing..
});
This ran OK with chrome.storage.local because, according to the docs, you only have this limitation with sync.
Printing chrome.runtime.lastError gave me: Object {message: "QUOTA_BYTES_PER_ITEM quota exceeded"}
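If you want to guard against this up front, here's a sketch (reusing id and dataCache from the question's snippet; chrome.storage.sync.QUOTA_BYTES_PER_ITEM is exposed by the API, and the per-item size is measured roughly as the key length plus the JSON-stringified value):

var item = {};
item[id] = dataCache[id];
var size = id.length + JSON.stringify(dataCache[id]).length;

if (size <= chrome.storage.sync.QUOTA_BYTES_PER_ITEM) {
    chrome.storage.sync.set(item, function() {
        if (chrome.runtime.lastError) {
            console.log(chrome.runtime.lastError.message);
            return;
        }
        console.log("Starred!");
    });
} else {
    // Too large for a single sync item; fall back to local storage.
    chrome.storage.local.set(item, function() {
        console.log("Starred (stored locally; too large for sync)");
    });
}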

How to solve that AttachAsync of a DownloadOperation does not return immediately?

When using the Background Transfer API we must iterate through the current data transfers and start them again when the app restarts after a termination (i.e. a system shutdown). To get progress information and to be able to cancel the transfers, they must be attached using AttachAsync.
My problem is that AttachAsync only returns when the data transfer is finished. That makes sense in some scenarios. But when there are multiple data transfers, the next transfer in the list is not started until the currently attached one has finished. My solution to this problem was to handle the Task that AttachAsync().AsTask() returns in the classic way (not with await but with continuations):
IReadOnlyList<DownloadOperation> currentDownloads =
    await BackgroundDownloader.GetCurrentDownloadsAsync();

foreach (var downloadOperation in currentDownloads)
{
    Task task = downloadOperation.AttachAsync().AsTask();
    DownloadOperation operation = downloadOperation;

    task.ContinueWith(_ =>
    {
        // Handle success
        ...
    }, CancellationToken.None, TaskContinuationOptions.OnlyOnRanToCompletion,
    TaskScheduler.FromCurrentSynchronizationContext());

    task.ContinueWith(_ =>
    {
        // Handle cancellation
        ...
    }, CancellationToken.None, TaskContinuationOptions.OnlyOnCanceled,
    TaskScheduler.FromCurrentSynchronizationContext());

    task.ContinueWith(t =>
    {
        // Handle errors
        ...
    }, CancellationToken.None, TaskContinuationOptions.OnlyOnFaulted,
    TaskScheduler.FromCurrentSynchronizationContext());
}
It kind of works (in the actual code I add the downloads to a ListBox). The loop iterates through all downloads and executes StartAsync, but the downloads are not really started all at the same time: only one is running at a time, and only when it finishes does the next one continue.
Any solution for this problem?
The whole point of Task is to give you the option of parallel operations. If you await, you are telling the code to serialize the operations; if you don't await, you are telling it to parallelize.
What you can do is add each download task to a list, telling the code to parallelize, and then wait for the tasks to finish, one by one.
How about something like:
IReadOnlyList<DownloadOperation> currentDownloads =
    await BackgroundDownloader.GetCurrentDownloadsAsync();

if (currentDownloads.Count > 0)
{
    List<Task<DownloadOperation>> tasks = new List<Task<DownloadOperation>>();

    foreach (DownloadOperation downloadOperation in currentDownloads)
    {
        // Attach progress and completion handlers without waiting for completion
        tasks.Add(downloadOperation.AttachAsync().AsTask());
    }

    while (tasks.Count > 0)
    {
        // wait for ANY download task to finish
        Task<DownloadOperation> task = await Task.WhenAny<DownloadOperation>(tasks);
        tasks.Remove(task);

        // process the completed task...
        if (task.IsCanceled)
        {
            // handle cancel
        }
        else if (task.IsFaulted)
        {
            // handle exception
        }
        else if (task.IsCompleted)
        {
            DownloadOperation dl = task.Result;
            // handle completion (e.g. add to your listbox)
        }
        else
        {
            // should never get here....
        }
    }
}
I hope this is not too late, but I know exactly what you are talking about. I'm also trying to resume all downloads when the application starts.
After hours of trying, here's the solution that works.
The trick is to let the download operation resume first, before attaching the progress handler:
downloadOperation.Resume();
await downloadOperation.AttachAsync().AsTask(cts.Token);