dojo jsonrest store with periods of no internet connectivity - google-chrome

At the moment I don't know if this is a Dojo issue, a browser issue, or both.
I have a dojo.store.JsonRest data store of items:
// Assumes the store modules are loaded, e.g.:
// require(["dojo/store/JsonRest", "dojo/store/Memory", "dojo/store/Cache", "dojo/store/Observable"], ...)
//Create the JsonRest store
var json = new JsonRest(options);
//Observable memory store used as the local cache
var memory = Observable(new Memory({}));
//Cache that fronts the JsonRest store with the memory store
var cache = new Cache(json, memory);
The store of items may be shared with different users simultaneously, so the store is periodically updated by issuing something like:
store.query({..})
When I want to add a new item to it, I use
dojo.xhr('POST', {
    url: ...,
    postData: ...,
    handleAs: 'json',
    headers: {...},
    failOk: true,
    timeout: 15 * 1000
});
This works fine. However, I would like to gracefully handle the case where the POST occurs during a loss of internet connectivity. In particular, I do not want the store to automatically try to post again when a connection is re-established; I want the user to retry manually.
In Chrome, it appears that the POST is aborted, and even after an internet connection is subsequently re-established, the deferred object from the POST appears to be discarded and the new item is never added to the datastore.
In Firefox, it appears that the POST is aborted. But when the datastore is refreshed, e.g. by invoking:
store.query({...})
the new item, whose POST was aborted, is then added to the store. It's as if the query() call is quietly adding the new item to the datastore when an internet connection is established again.
I'm not observing this behavior in Chrome. In order to get uniform behavior across different browsers, I would like to know if there is a way to ensure that once a POST is aborted, its existence and memory are completely obliterated in Firefox.

I'm not sure why you have a separate method of adding an item to the store?
When you do a get on the Cache, if the item does not exist in the memory store, then a request is made by the JsonRest store and the returned item is stored in the memory store. There should be no need to do a separate xhr request; in fact, query() calls on the cache always go through the JsonRest store, so you'd need to override the Cache's query function to make it check the memory store first. If you just want to add to the store, then a put or add would suit your needs and do what your xhr post is doing.
Also if you are not happy with the way failed requests are handled by the JsonRest store, just extend it and override its get, query, add, remove or put function(s), handling request responses in your own way.
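For example, a minimal sketch (not from the original post, assuming the cache store and a newItem object from the question) of adding through the Cache store and handling a failed request yourself, so that nothing is retried automatically:
var promise = cache.add(newItem);
promise.then(function (result) {
    // The JsonRest store POSTed the item and the memory store was updated.
    console.log("item added", result);
}, function (error) {
    // Timed out or aborted (e.g. no connectivity): nothing is retried here,
    // so surface the error and let the user retry manually.
    console.warn("add failed, ask the user to retry", error);
});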

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the number I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints in a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas, or has anyone experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the query is for semantic type detection, the request will feature sampleExtraction: true.
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimensions/metrics configurations, then GDS might fire a separate getData call for each of them.
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter retrieves data via API calls to third-party services, and that data is agnostic to the request.fields property sent by GDS, then those API calls are multiplied by N+1 (where N = the amount of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using the cache.
The graph's getData request (typically requesting more fields than the search filters) will be the only one allowed to query the API endpoint. Before starting to do so, it stores a key in the cache, "cache_{hashOfReportParameters}_building" => true:
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It then retrieves the API responses, paginating in a loop, and buffers the results.
Once it has finished, it deletes the cache key "cache_{hashOfReportParameters}_building" and caches the final merged results it has buffered inside "cache_{hashOfReportParameters}_final".
The filters also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing prior to the primary getData call, so we add a little bit of a delay for requests that look like the search filters / widgets that are after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (the date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
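As a rough illustration only (computeReportHash is a hypothetical helper; the request properties used are the standard getData request fields), the hash could be computed like this:
// Hypothetical helper: hash the parts of the request that influence the data.
function computeReportHash(request) {
  var parts = [
    request.dateRange.startDate,
    request.dateRange.endDate,
    JSON.stringify(request.configParams || {})
  ].join('|');
  // MD5-digest the string and hex-encode it for use inside cache keys.
  return Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, parts)
    .map(function (b) { return ((b & 0xFF) + 0x100).toString(16).slice(1); })
    .join('');
}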
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final" -- and in case we fail, it's always a good idea to have a backup plan, which would be to allow the request to traverse the API again. We have encountered a ~2% error rate retrieving data we cached...
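A sketch of that retrieval-with-fallback step (same EnhancedCache-style wrapper as above; fetchAllPagesFromApi is a hypothetical helper standing in for your API traversal):
var finalKey = 'cache_' + hashOfReportParameters + '_final';
var rows = null;
try {
  var cached = cache.getString(finalKey);
  rows = cached ? JSON.parse(cached) : null;
} catch (e) {
  rows = null;
}
if (!rows) {
  // Backup plan: traverse the API again (hypothetical helper).
  rows = fetchAllPagesFromApi(request);
}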
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
As you start implementing this, you'll notice yet another problem... the Google cache is limited to a max of 100KB per key. There is, however, no limit on the number of keys you can cache... and fortunately others have encountered similar needs in the past and have come up with a smart solution: splitting up the one big chunk you need cached into multiple cache keys, and gluing them back together into one object when retrieval is necessary.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general, to avoid unnecessary round trips and server load, if near-realtime data is good enough for your needs.

How to bulk insert rows into realm database from web (JSON) in Xamarin

This is my very first attempt to work with Xamarin Studio and Realm (to make an app for both iOS and Android), and I have been stuck on this for the last 24 hours.
My online database table has 30,000 rows. Earlier, when I worked in Android Studio, I would import those rows on the app's first run with the help of JSON and GSON and insert them into a SQLite DB.
But I am unable to do so with Realm and Xamarin. I know I have not provided any code snippet (my effort), but honestly, even after searching a lot about this, I couldn't find out how I should proceed.
I've already answered that in the GitHub issue, but in case someone else stumbles across it, the best way to do that without blocking the UI thread is to use the Realm.WriteAsync API. Basically, you'll do something like:
var items = await service.GetAllItems();
// I assume items are already deserialized RealmObject-s
var realm = Realm.GetInstance();
await realm.WriteAsync(r =>
{
    foreach (var item in items)
    {
        r.Manage(item);
    }
});

/* Data is loaded, show message or process it in other ways */
One thing to note is that within the WriteAsync lambda, we're using the r instance and not the original realm one. The reason is that realms are not thread safe and the asynchronous write will happen on another thread, so WriteAsync implicitly creates another instance and passes it as an argument to the action parameter.

How can I configure Polymer's platinum-sw-* to NOT cache one URL path?

How can I configure Polymer's platinum-sw-cache or platinum-sw-fetch to cache all URL paths except for /_api, which is the URL for Hoodie's API? I've configured a platinum-sw-fetch element to handle the /_api path, then platinum-sw-cache to handle the rest of the paths, as follows:
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-import-script href="custom-fetch-handler.js"></platinum-sw-import-script>
  <platinum-sw-fetch handler="HoodieAPIFetchHandler"
                     path="/_api(.*)"></platinum-sw-fetch>
  <platinum-sw-cache default-cache-strategy="networkFirst"
                     precache-file="precache.json"></platinum-sw-cache>
</platinum-sw-register>
custom-fetch-handler.js contains the following. Its intent is simply to return the results of the request the way the browser would if the service worker was not handling the request.
var HoodieAPIFetchHandler = function(request, values, options){
  return fetch(request);
}
What doesn't seem to be working correctly is that after user 1 has signed in and then signed out, and user 2 signs in, the Chrome Dev Tools Network tab shows that Hoodie regularly continues to make requests to BOTH users' API endpoints, like the following:
http://localhost:3000/_api/?hoodieId=uw9rl3p
http://localhost:3000/_api/?hoodieId=noaothq
Instead, it should be making requests to only ONE of these API endpoints. In the Network tab, each of these URLs appears twice in a row, and in the "Size" column the first request says "(from ServiceWorker)," and the second request states the response size in bytes, in case that's relevant.
The other problem which seems related is that when I sign in as user 2 and submit a form, the app writes to user 1's database on the server side. This makes me think the problem is due to the app not being able to bypass the cache for the /_api route.
Should I not have used both platinum-sw-cache and platinum-sw-fetch within one platinum-sw-register element, since the docs state they are alternatives to each other?
In general, what you're doing should work, and it's a legitimate approach to take.
If there's an HTTP request made that matches a path defined in <platinum-sw-fetch>, then that custom handler will be used, and the default handler (in this case, the networkFirst implementation) won't run. The HTTP request can only be responded to once, so there's no chance of multiple handlers taking effect.
I ran some local samples and confirmed that my <platinum-sw-fetch> handler was properly intercepting requests. When debugging this locally, it's useful to either add in a console.log() within your custom handler and check for those logs via the chrome://serviceworker-internals Inspect interface, or to use the same interface to set some breakpoints within your handler.
What you're seeing in the Network tab of the controlled page is expected—the service worker's network interactions are logged there, whether they come from your custom HoodieAPIFetchHandler or the default networkFirst handler. The network interactions from the perspective of the controlled page are also logged—they don't always correspond one-to-one with the service worker's activity, so logging both does come in handy at times.
So I would recommend looking deeper into the reason why your application is making multiple requests. It's always tricky thinking about caching personalized resources, and there are several ways that you can get into trouble if you end up caching resources that are personalized for a different user. Take a look at the line of code that's firing off the second /_api/ request and see if it's coming from a cached resource that needs to be cleared when your users log out. <platinum-sw> uses the sw-toolbox library under the hood, and you can make use of its uncache() method directly within your custom handler scripts to perform cache maintenance.
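For instance, a hedged sketch (the function name and URL pattern are placeholders) of clearing a user-specific /_api entry from inside a custom handler script; toolbox here is the sw-toolbox global that <platinum-sw> loads under the hood:
// Sketch: remove a personalized /_api response from the sw-toolbox cache,
// e.g. when a user signs out.
function clearUserApiCache(hoodieId) {
  return toolbox.uncache('/_api/?hoodieId=' + hoodieId)
    .then(function (wasRemoved) {
      console.log('cached /_api entry removed:', wasRemoved);
    });
}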

How to extend AFNetworking 2.0 to perform request combining

I have a UI where the same image URL could be requested by several UIImageViews at varying times. Obviously if a request from one of them has finished then returning the cached version works as expected. However, especially with slower networks, I'd like to be able to piggy-back requests for an image URL onto any currently running/waiting HTTP request for the same URL.
On an HTTP server this is called request combining, and I'd love to do the same in the client: combine the different requests for the same URL into a single request and then call back separately to each of the callers. The requests for that URL don't happen to start at the same time.
What's the best way to accomplish this?
I think re-writing UIImageView+AFNetworking might be the easiest way:
check the af_sharedImageRequestOperationQueue to see if it has an operation with the same request
if there is already such an operation in the queue or running, add myself to some list of callbacks/blocks to be called on success/failure
if there is no such operation, create it as normal
in setCompletionBlockWithSuccess, call each of the blocks in turn
Any simpler alternatives?
I encountered a similar problem and decided that your way was the most straightforward. One added bit of complexity is that these downloads require special credentials and so must go through their own operation queue. Here's the code from my UIImageView category to check whether a particular URL is in flight:
NSUInteger foundOperation = [[ConnectionManager sharedConnectionManager].operationQueue.operations indexOfObjectPassingTest:^BOOL(AFHTTPRequestOperation *obj, NSUInteger idx, BOOL *stop) {
    BOOL URLAlreadyInFlight = [obj.request.URL.absoluteString isEqualToString:URL.absoluteString];
    if (URLAlreadyInFlight) {
        NSBlockOperation *updateUIOperation = [NSBlockOperation blockOperationWithBlock:^{
            [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                self.image = [[ImageCache sharedImageCache] cachedImageForURL:URL];
            }];
        }];
        // Makes updating the UI dependent on the completion of the matching operation.
        [updateUIOperation addDependency:obj];
    }
    return URLAlreadyInFlight;
}];
Were you able to come up with a better solution?
EDIT: Well, it looks like my method of updating the UI just can't work, as the operation's completion blocks are run asynchronously, so the operation finishes before the blocks are run. However, I was able to modify the image cache to be able to add callbacks for when certain URLs are cached, which seems to work correctly. So this method will properly detect when certain URLs are in flight and be able to take action with that knowledge.

AngularJS form wizard save progress

I have a service in AngularJS that generates all the steps needed, the current state of each step (done, current, show, etc.) and an associated directive that actually implements the service and displays its data. But there are 2 steps that are divided into 4 and 3 sub-steps respectively:
Step one
Discounts
Activities
Duration
Payment Length
Step two
Identification
Personal data
Payment
How can I "save" the state of my form in case the person leaves the site and comes back later? Is it safe to use localStorage? I'm no providing support for IE6 or 7. I thought of using cookies, but that can end up being weak (or not)
Either local storage or cookies should be fine. I doubt this will be an issue, but keep in mind that both have a size limit. Also, it goes without saying that the form state will only be restored if the user returns on the same browser, and without having deleted cookies / local storage.
Another option could be to save the information server side. If the user is signed in, you can make periodic AJAX calls with the data and store the state on the server. When the user finishes all steps, you can make an AJAX call telling the server to delete any saved data it might have. This allows you to restore state even if the user returns on a different browser, as long as he is signed in.
Regardless of what direction you go with this, you can use jQuery's serialize method to serialize the form into a string and save it using your choice of storage.
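For instance, a minimal sketch (with hypothetical function and key names) of saving and restoring the serialized form with localStorage:
var KEY = 'wizardFormState';

function saveProgress(formElement) {
  // jQuery's serialize() turns the form fields into a query string.
  localStorage.setItem(KEY, $(formElement).serialize());
}

function restoreProgress() {
  // Returns the saved query string, or null if nothing was stored.
  return localStorage.getItem(KEY);
}

function clearProgress() {
  // Call this once the user has completed every step.
  localStorage.removeItem(KEY);
}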