How to handle prompts for increased web storage?

When you reach the size limit for your Web SQL store, Mobile Safari (and the Android browser) will prompt you to increase the storage size. Once that happens, neither the transaction nor the onSuccess or onError callback is executed. I cannot seem to catch any exceptions here either, so how am I supposed to handle prompts for increased storage?
All the operations are async, so the only thing I can think of is setting a timeout and checking whether the transaction completed after some time has passed. Which of course is a nasty, bug-ridden hack. And in addition to that, I have to re-run the transaction to actually check whether the space was increased or not.
Fiddle for verifying on mobile browser
// open the jsfiddle in safari
// refuse when prompted to increase space to show the bug
db.transaction(function (tx) {
  tx.executeSql("INSERT INTO key_value(key, value) VALUES(?,?);", ["mykey", buildBigString(3 * Math.pow(10, 6))], function (tx, result) {
    // will never be called
    done();
  }, function (tx, error) {
    // will never be called
    done();
  });
});

The problem with the above code was basically that I was missing an error callback on the transaction wrapping the SQL insertion. For more info on how to handle user prompts in general, see this elaborated blog post on the matter.
From the specification on the Asynchronous Database API
void transaction(in SQLTransactionCallback callback,
                 in optional SQLTransactionErrorCallback errorCallback,
                 in optional SQLVoidCallback successCallback);
No example code showed this, so I kept assuming the transaction method was used without callbacks.
So instead of writing
db.transaction(function (tx) {
  // if this prompts the user to increase the size, it will not execute the sql or run any of the callbacks
  tx.executeSql('INSERT INTO foo (id, text) VALUES (1, ?)', [createReallyBigString('4MB')], onComplete, onError);
});
Do this
// Note the reverse order of the callbacks!
db.transaction(function (tx) {
  // this will prompt the user to increase the size and the sql will not be executed
  tx.executeSql('INSERT INTO foo (id, text) VALUES (1, ?)', [createReallyBigString('4MB')]);
}, onError, onComplete);
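For completeness, here is a minimal sketch of what those two transaction-level callbacks could look like. The QUOTA_ERR check is an assumption based on the Web SQL spec's SQLError codes, not something the original snippet included:

// Hypothetical handlers for the snippet above
function onError(error) {
  // SQLError.QUOTA_ERR (4) is reported when the quota is exceeded,
  // e.g. when the user refuses to grant more storage (assumption from the spec)
  if (error.code === error.QUOTA_ERR) {
    console.log('User refused to increase storage:', error.message);
  } else {
    console.log('Transaction failed:', error.message);
  }
}

function onComplete() {
  console.log('Transaction completed; the data fit into the granted quota.');
}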

Related

How does an incoming request to a nodejs server get handled when the event loop is waiting for a DB operation

I have a route in my API; as an example, let's call it /users/:userId/updateBalance. This route will fetch the user's current balance, add whatever comes from the request, and then update the balance with the newly calculated value. A request like this comes into the server for a specific user every 30 minutes, so until recently I thought a concurrency issue was impossible.
What ended up happening is that somewhere a request failed and was only sent again 30 minutes later, within about a second of the other request. The result, as I can see it in the database, was that both of these requests fetched the same balance from the DB and both added their respective amounts. Essentially, the second request read a stale balance, since normally it would execute after request 1 has finished.
To give a numerical example for clarity: let's say request 1 was to add $2 to the balance, request 2 was to add $5, and the user had a balance of $10. If the requests run in parallel, the user's balance ends up at either $12 or $15, depending on whether request 1 or request 2 finished first, because both requests fetch a balance of $10 from the DB. The expected behaviour, obviously, is that request 1 executes and updates the balance to $12, and then request 2 executes and updates the balance from $12 to $17.
To give some better perspective on the overall execution of this process: the request is received, a function is called, the function has to wait for the balance from the DB, and the function then calculates the new balance and updates the DB, after which execution is completed.
So I have a few questions on this. The first: how does Node handle incoming requests while it is waiting for an asynchronous operation like a MySQL database read? Given the results I have observed, I assume that while the first request is waiting for the DB, the second request can start being processed? Otherwise I am uncertain how such asynchronous behaviour arises in a single-threaded environment like Node.
Secondly, how do I go about controlling and preventing this? I had wanted to use a MySQL transaction with a FOR UPDATE lock, but that turns out not to be possible with the way the code is currently written. Is there a way to tell Node that a certain block of code cannot be executed "in parallel"? Or any other alternatives?
You are right: while Node waits for the database query to return, it will handle any incoming requests and start that request's database call before the first one finishes.
The easiest way to prevent this, IMO, would be to use queues. Instead of processing the balance update directly in the route handler, that route handler could push an event to a queue (in Redis, AWS SQS, RabbitMQ, etc.), and somewhere else in your app (or even in a completely different service) you would have a consumer that listens for new events on that queue; see the sketch below. If an update fails, add it back to the beginning of the queue, add some wait time, and then try again.
This way, no matter how many times your first request fails, your balance will be correct, and pending changes to that balance will be applied in the correct order. If an event in the queue fails repeatedly, you could even send an email or a notification to someone to have a look at it; while the problem is being fixed, pending changes to the balance keep accumulating in the queue, and once it's fixed, everything is processed correctly.
You could even read that queue and display information to your user, for instance telling them the balance has pending updates, so it might not be accurate.
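As a minimal sketch of the idea (not a Redis/SQS/RabbitMQ setup, just an in-process promise chain per user to show the serialization; the in-memory balances map stands in for the real DB calls):

// In-memory stand-ins for the database (hypothetical, for illustration only)
const balances = new Map([['user-1', 10]]);
const getBalance = async (userId) => balances.get(userId) || 0;
const setBalance = async (userId, value) => { balances.set(userId, value); };

// One chain per user: each update waits for the previous one,
// so two requests for the same user can never read the same stale balance.
const queues = new Map();

function enqueueBalanceUpdate(userId, amount) {
  const previous = queues.get(userId) || Promise.resolve();
  const next = previous
    .catch(() => {}) // keep the chain alive if an earlier update failed
    .then(async () => {
      const current = await getBalance(userId);
      await setBalance(userId, current + amount);
    });
  queues.set(userId, next);
  return next;
}

// Two near-simultaneous requests now apply in order: 10 -> 12 -> 17
Promise.all([
  enqueueBalanceUpdate('user-1', 2),
  enqueueBalanceUpdate('user-1', 5),
]).then(() => console.log(balances.get('user-1'))); // 17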
Hope this helps!
The first being, how does node handle incoming requests when it is waiting for an asynchronous request like a MySQL database read
The event loop of Node.js is what makes this happen; otherwise you'd have a completely synchronous program with very poor performance.
Every async function invoked in a context will be executed after the context itself has finished executing.
Between the end of the context's execution and the execution of the async function, other async functions can be scheduled for execution (this "insertion" is managed by the event loop).
If an async function is awaited, the remaining code of the context is scheduled to run somewhere after the execution of the async function.
It's clearer when you play with it. Example 1:
// Expected result: 1, 3, 4, 2
function asyncFunction(x) {
  // setTimeout as example of async operation
  setTimeout(() => console.log(x), 10)
}

function context() {
  console.log(1)
  asyncFunction(2)
  console.log(3)
}

context()
console.log(4)
Example 2:
// Expected result: 1, 2, 3
function asyncFunction(x) {
  // Promise as example of async operation
  return new Promise((resolve) => {
    console.log(x)
    resolve()
  })
}

async function context() {
  console.log(1)
  await asyncFunction(2)
  console.log(3)
}

context()
Example 3 (more similar to your situation):
// Expected result: 1, 2, 4, 5, 3, 6
function asyncFunction(x) {
  // Promise as example of async operation
  return new Promise((resolve) => {
    console.log(x)
    resolve()
  })
}

async function context(a, b, c) {
  console.log(a)
  await asyncFunction(b)
  console.log(c)
}

context(1, 2, 3)
context(4, 5, 6)
In your example:
when the server receives a connection, the execution of the handler is scheduled
when the handler is executed, it schedules the execution of the query, and the remaining portion of the handler context is scheduled to run after that
In between those scheduled executions anything can happen, as the sketch below illustrates.
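To make that concrete, here is a minimal sketch (not the poster's actual code) of how two overlapping handlers can both read the same stale balance; fakeDbRead and fakeDbWrite are hypothetical stand-ins for the MySQL calls:

let storedBalance = 10;

// Hypothetical async DB stand-ins: each takes ~50ms, like a real query
const fakeDbRead = () =>
  new Promise((resolve) => setTimeout(() => resolve(storedBalance), 50));
const fakeDbWrite = (value) =>
  new Promise((resolve) => setTimeout(() => { storedBalance = value; resolve(); }, 50));

async function updateBalance(amount) {
  const balance = await fakeDbRead();   // both handlers read 10 here
  await fakeDbWrite(balance + amount);  // the later write clobbers the earlier one
}

// Two overlapping "requests"
Promise.all([updateBalance(2), updateBalance(5)])
  .then(() => console.log(storedBalance)); // 12 or 15, never the expected 17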

Service Worker slow response times

In the Windows and Android Google Chrome browsers (I haven't tested others yet), the response time from a service worker increases linearly with the number of items stored in that specific cache storage when you use the Cache.match() function with the following option:
ignoreSearch = true
Dividing items across multiple caches helps, but it is not always convenient to do so. Plus, even a small increase in the number of items stored makes a big difference in response times. According to my measurements, the response time roughly doubles for every tenfold increase in the number of items in the cache.
Official answer to my question in chromium issue tracker reveals that the problem is a known performance issue with Cache Storage implementation in Chrome which only happens when you use Cache.match() with ignoreSearch parameter set to true.
As you might know ignoreSearch is used to disregard query parameters in URL while matching the request against responses in cache. Quote from MDN:
...whether to ignore the query string in the url. For example, if set to
true the ?value=bar part of http://example.com/?value=bar would be ignored
when performing a match.
Since it is not really convenient to stop matching on query parameters, I have come up with the following workaround, and I am posting it here in the hope that it will save someone some time:
// if the request has query parameters, `hasQuery` will be set to `true`
var hasQuery = event.request.url.indexOf('?') != -1;

event.respondWith(
  caches.match(event.request, {
    // ignore query section of the URL based on our variable
    ignoreSearch: hasQuery,
  }).then(function (response) {
    // handle the response
  })
);
This works great because it handles every request with a query parameter correctly, while still handling all the others at lightning speed. And you do not have to change anything else in your application.
According to the response in that bug report, the issue is tied to the number of items in a cache. I made a solution and took it to the extreme, giving each resource its own cache:
var cachedUrls = [
  /* CACHE INJECT FROM GULP */
];

// update the cache
// don't worry StackOverflow, I call this only when the site tells the SW to update
function fetchCache() {
  return Promise.all(
    // for all urls
    cachedUrls.map(function (url) {
      // add a cache
      return caches.open('resource:' + url).then(function (cache) {
        // add the url
        return cache.add(url);
      });
    })
  );
}
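On the fetch side nothing special is needed, since caches.match() searches across all cache objects, so the per-resource caches created above are matched transparently. A minimal sketch of the corresponding handler (an assumption about how the rest of the service worker might look, not part of the original answer):

self.addEventListener('fetch', function (event) {
  event.respondWith(
    // caches.match() without a specific cache searches every cache,
    // including the per-resource ones created by fetchCache() above
    caches.match(event.request).then(function (response) {
      return response || fetch(event.request);
    })
  );
});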
In the project we have here, there are static resources served with high cache expirations set, and we use query parameters (repository revision numbers, injected into the html) only as a way to manage the [browser] cache.
It didn't really work to use your solution to selectively use ignoreSearch, since we'd have to use it for all static resources anyway so that we could get cache hits!
However, not only did I dislike this hack, but it still performed very slowly.
Okay, so, given that it was only a specific set of resources I needed ignoreSearch for, I decided to take a different route:
just remove the parameters from the URL requests manually, instead of relying on ignoreSearch.
self.addEventListener('fetch', function (event) {
  // find urls that only have numbers as parameters
  // yours will obviously differ, my queries to ignore were just repo revisions
  var shaved = event.request.url.match(/^([^?]*)[?]\d+$/);
  // extract the url without the query
  shaved = shaved && shaved[1];

  event.respondWith(
    // try to get the url from the cache.
    // if this is a resource, use the shaved url,
    // otherwise use the original request
    // (I assume it [can] contain post-data and stuff)
    caches.match(shaved || event.request).then(function (response) {
      // respond
      return response || fetch(event.request);
    })
  );
});
I had the same issue, and the previous approaches caused some errors with requests that should have ignoreSearch:false. An easy approach that worked for me was to simply apply ignoreSearch:true to certain requests by using url.includes('A') && ... See the example below:
self.addEventListener("fetch", function(event) {
var ignore
if(event.request.url.includes('A') && event.request.url.includes('B') && event.request.url.includes('C')){
ignore = true
}else{
ignore = false
}
event.respondWith(
caches.match(event.request,{
ignoreSearch:ignore,
})
.then(function(cached) {
...
}

DalekJS and Mithril: Tests are too fast

I use Dalek to test my sample to-do application, written with the help of the Mithril framework.
Everything goes fine until .type() comes in.
If I .type() something into an input that has a bi-directional binding (m.prop with m.withAttr) and then assert the value of that field, I get strange behaviour. Instead of "test title" I get "tsttle". It seems the tests run too quickly for Mithril to capture the changes and render them back to the DOM.
If the assertion for input equality is removed, everything works just fine.
Is there any workaround? Can I slow down the typing process?
P.S. I use the Chrome browser as the test runner.
That definitely is an interesting issue. The problem, though, is that Dalek can't control the speed at which the letters are typed; the JSON Wire Protocol does not give us a way to handle that, see here.
One thing you could do, even if it seems like overkill, is to add a long function chain with explicit waits, like this:
.type('#selector', 'H')
.wait(500)
.type('#selector', 'e')
.wait(500)
.type('#selector', 'l')
.wait(500)
.type('#selector', 'l')
.wait(500)
.type('#selector', 'o')
You could also go ahead and write a utility function that handles that for you:
function myType(selector, keys, test, wait) {
  var keysArr = keys.split('');
  keysArr.forEach(function (key) {
    test.type(selector, key).wait(wait);
  });
  return test;
}
And then use it in your test like this:
module.exports = {
  'my test': function (test) {
    test.open('http://foobar.com');
    myType('#selector', 'Hello', test, 500);
    test.done();
  }
};
Mithril, as of when I'm writing this, does a re-render on onkey* events. An option to avoid this is coming.
At present you could use the config attr to handle the onkey* events, as this will not cause a re-render. For example:
m('input', {config: addHandler});

function addHandler(el, isInitialized, context) {
  if (!isInitialized) {
    el.addEventListener('keyup', keyHandler, false);
  }
}

function keyHandler(event) { /* do something with the key press */ }
It's possible that {config: addHandler, onchange: m.withAttr('value', mpropData)} will do what you want, but I don't know Dalek. If it doesn't, you can consider updating mpropData inside keyHandler.
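A hedged sketch of that last suggestion, assuming mpropData is the m.prop from the poster's model that is bound to the input:

// mpropData is assumed to be the model's m.prop for this field
function keyHandler(event) {
  // update the model directly from the key event, without triggering a redraw
  mpropData(event.target.value);
}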
Mithril renders asynchronously in response to event handlers (basically so that related groups of events like keypress/input all get a chance to run before redrawing).
You could try a few things:
if you have access to your data model from your test, you could run your assertion against that model value (which is updated synchronously), as opposed to using the DOM value, which only gets updated on the next animation frame
otherwise, you could force a synchronous redraw by explicitly calling m.render (yes, render, not redraw) before running the assertion, to ensure the view is actually in sync with the data model
alternatively, you could try waiting for one animation frame (or two) before running the assertion; see the sketch below
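As a sketch of that last option (a generic helper, not a Dalek or Mithril API), wait a couple of animation frames so the redraw has happened before asserting:

// run `callback` after the browser has had two animation frames to redraw
function afterRedraw(callback) {
  requestAnimationFrame(function () {
    requestAnimationFrame(callback);
  });
}

// usage:
afterRedraw(function () {
  // run the input-value assertion here
});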

Lifetime of a Web SQL transaction

What's the lifetime of a Web SQL transaction, or, if it's dynamic, what does it depend on?
From my experience, opening a new transaction takes a considerable amount of time, so I was trying to keep the transaction open for as long as possible.
I also wanted to keep the code clean, so I was trying to separate the JS into abstract functions and pass a transaction as a parameter; something I'm sure is not good practice, but it sometimes greatly improves performance when it works.
As an example:
db.transaction(function (tx) {
  // First question: how many tx.executeSql
  // calls are allowed within one transaction?
  tx.executeSql('[some query]');
  tx.executeSql('[some other query]', [], function (tx, results) {
    // Do something with results
  });

  // Second question: passing the transaction
  // works some times, but not others. Is this
  // allowed by the spec, good practice, and/or
  // limited by any external factors?
  otherFunction(tx, 'some parameter');
});

function otherFunction(tx, param) {
  tx.executeSql('[some query]');
}
Any suggestions on techniques for speedy access to the Web SQL database would be welcome as well.

Node js: Assign mysql result to requests

Previously I was a PHP developer, so this question might seem stupid to some of you.
I am using MySQL with Node.js.
client.query('SELECT * FROM users where id="1"', function selectCb(err, results, fields) {
  req.body.currentuser = results;
});

console.log(req.body.currentuser);
I tried to assign the result set (results) to a variable (req.body.currentuser) to use it outside the function, but it is not working.
Can you please let me know a way around it?
The query call is asynchronous, hence selectCb is executed later than your console.log call. If you put the console.log call inside selectCb, it'll work.
In general, you want to call everything that depends on the results of the query from the selectCb callback; see the sketch below. It's one of the basic architectural principles of Node.js.
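A minimal sketch of that principle (getUser is a hypothetical helper wrapping the same client.query call; the error-first callback convention is standard Node style):

// wrap the query so callers pass a callback instead of expecting a return value
function getUser(id, callback) {
  client.query('SELECT * FROM users WHERE id = ?', [id], function (err, results) {
    if (err) return callback(err);
    callback(null, results[0]);
  });
}

// anything that needs the user runs inside the callback
getUser(1, function (err, user) {
  if (err) return console.error(err);
  req.body.currentuser = user;
  console.log(req.body.currentuser); // defined here, because the query has finished
});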
The client.query call, like nearly everything in Node.js, is asynchronous. This means that the method just initiates a request, and execution continues. So when it gets to the console.log, nothing has been assigned to req.body.currentuser yet.
You can see that if you move the console.log inside the callback, it will work:
client.query('SELECT * FROM users where id="1"', function selectCb(err, results, fields) {
  req.body.currentuser = results;
  console.log(req.body.currentuser);
});
So you need to structure your code around this requirement. Event-driven functional programming (which is what this is) can be difficult to wrap your head around at first, but once you get it, it makes a lot of sense.