Google Maps Data Layer: loading a second file using map.data.loadGeoJson much slower - google-maps

I am writing an application that displays zipcode boundaries and other layers on a Google map. For the zipcodes I have a simplified GeoJSON file for the upper zoom levels and a more detailed database of features that get added when zoomed in. This works fine on the first load: the map loads at a high zoom level and loads the simplified GeoJSON in ~10 seconds. However, after I zoom in and then zoom back out, reloading the simplified GeoJSON takes ~30 seconds. This is after clearing all previous data with
map.data.forEach(function(feature) {map.data.remove(feature); });
At first I believed this to be a memory leak caused by something else in my code, so I ran a simple test: a map that just loaded the simplified file, cleared it, then loaded it again. I got the same result of roughly triple the load time. I then created a much smaller GeoJSON file with just 3 zipcode boundaries and tried loading that tiny file, clearing it, then loading the simplified file; the load still takes almost as long, ~30 seconds. So it would seem that the simple act of loading a second file triggers some sort of extra processing inside Google Maps. I have even tried loading the new file into a new layer with
var high_zoom_zips = new google.maps.Data({map: map});
high_zoom_zips.loadGeoJson('zips.json');
however the longer load time remains. Reloading it a third or fourth time does not seem to increase the load time significantly.
Has anyone else run into this problem? Is there anything else that needs to be reset within Google Maps to prevent this extra processing?
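One pattern worth trying (a sketch, not a confirmed fix): rather than clearing and re-fetching GeoJSON on every zoom change, load each zoom band into its own google.maps.Data layer once and toggle visibility with setMap(map) / setMap(null). The helper below is generic over any object exposing setMap, so only the commented usage touches the real Maps API; makeLayerSwitcher and the layer names are hypothetical.

```javascript
// Cache each zoom band in its own layer and switch between them, instead of
// removing features and re-parsing the GeoJSON file on every zoom change.
function makeLayerSwitcher(layers) {
  // layers: { name: layer } where each layer exposes setMap(mapOrNull)
  var active = null;
  return function show(name, map) {
    if (active && active !== name) layers[active].setMap(null); // hide old band
    layers[name].setMap(map);                                   // show new band
    active = name;
    return active;
  };
}

// Intended browser usage (loadGeoJson/setMap are real google.maps.Data methods;
// the file names are placeholders):
//   var layers = { low: new google.maps.Data(), high: new google.maps.Data() };
//   layers.low.loadGeoJson('zips_simplified.json');
//   layers.high.loadGeoJson('zips.json');
//   var show = makeLayerSwitcher(layers);
//   show('low', map);   // on zoom out
//   show('high', map);  // on zoom in
```

Since each file is parsed only once, zooming back out avoids the slow reload entirely, at the cost of keeping both feature sets in memory.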

Related

Change level in Cesium imageryProvider faster

I have a Cesium viewer using a local imageryProvider. I have 8 levels available, but when I zoom in, Cesium takes longer than it should to request the next level: I zoom in, my map gets blurry, and only then does it change levels.
This is my viewer code:
var viewer = new Cesium.Viewer(mapID, {
    imageryProvider: new Cesium.UrlTemplateImageryProvider({
        url: '../../app/CesiumUnminified/Assets/Textures/myTiles/{z}/{x}/{y}.png',
        maximumLevel: 8
    })
});
Is there any way to call the next level faster so my map doesn't get blurred?
Thank you!
The next level is blurred while waiting for a response from your server. In this case it looks like you're serving the images locally, so the server could be a development or stripped-down server that might not have the performance of a production server.
Most browsers these days have a "Developer tools" section with a network tab that shows traffic between the client and server; Google Chrome, Firefox, IE, and Edge all document how to interpret this display. Take a look at the timing of the tile responses, and see if there's anything that can be done on the server side to speed things up.
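If you'd rather measure tile latency programmatically than eyeball the network tab, a generic timing wrapper like the sketch below works (timed is a hypothetical helper, not a Cesium API; fetchTile stands in for whatever request function you use).

```javascript
// Wrap any promise-returning request and report how long it took, so slow
// tiles can be logged or aggregated.
function timed(fn) {
  var start = Date.now();
  return Promise.resolve().then(fn).then(function (value) {
    return { value: value, ms: Date.now() - start };
  });
}

// Hypothetical usage in the browser:
//   timed(function () { return fetchTile('/myTiles/3/4/2.png'); })
//     .then(function (r) { if (r.ms > 200) console.warn('slow tile', r.ms); });
```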

Polymer - Don't load app until geolocation is set

I'm currently working on a web app with the Polymer framework.
However, the whole application depends on the geolocation of the device. I'm setting the location in the app-globals file to use it globally, but it takes some time (around 500ms).
My question now is: how can I tell Polymer to start only after the geolocation has been set?
Thanks!
I think it depends on the specific behavior you're looking for. A few cases:
1. Everything still loads when the page loads, including perhaps some app UI, and you show a loading icon or similar to indicate that the app is still starting up while you wait for the location.
2. You don't want to show the app until you have the location. Perhaps you don't want the app resources to be loaded at this time either.
3. You don't even want Polymer to be loaded until you have the location.
There are a lot of options, and they may depend a bit on your app architecture, but as a general structure (matching the cases above):
1. Show a loader icon on startup and just wait to populate any templates you have until the location is set.
2. Perhaps keep the hidden attribute on your main app element until the location is available, and/or don't append the element to the DOM until that point (although with a resource-loading caveat similar to 3 below).
3. You could wait to add Polymer and related elements until the location is available, but it generally seems silly to do this, since your app could be doing that work while the user is waiting for a location to be found.
If you've got a more specific need, then more details would help.
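For case 2, one concrete sketch (assuming a Promise-capable environment; waitForLocation and startApp are hypothetical names, not Polymer APIs): wrap the geolocation lookup in a Promise and delay populating the app until it resolves. The geolocation object is injected so the pattern is testable outside a browser; in a page you would pass navigator.geolocation.

```javascript
// Resolve with the position, or reject on error/timeout, so app startup can
// simply chain off the returned promise.
function waitForLocation(geo, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var timer = setTimeout(function () {
      reject(new Error('geolocation timed out'));
    }, timeoutMs);
    geo.getCurrentPosition(function (pos) {
      clearTimeout(timer);
      resolve(pos);
    }, function (err) {
      clearTimeout(timer);
      reject(err);
    });
  });
}

// Hypothetical browser usage:
//   waitForLocation(navigator.geolocation, 5000)
//     .then(function (pos) { startApp(pos.coords); })  // your own bootstrap
//     .catch(function (err) { /* show a location-error message */ });
```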

Chrome chokes on TOO many XHR2 requests

I have a "JSFiddle-like" demo that fetches PNGs (binary Blobs) in a tight loop using XHR2. The first demo grabs 341 PNG images and then saves them in IndexedDB (using PouchDB).
This demo works fine: http://codepen.io/DrYSG/pen/hpqoD
(To reproduce: first press [Delete DB], reload the page, wait for Status = Ready (you should see that it plans to fetch 341 tiles), then press [Download tiles].)
The next demo is the same code (identical JS, CSS, HTML), but it tries to fetch 6163 PNG files (again from Google Drive). This time you will see many XHR errors (status 0) in the console log.
http://codepen.io/DrYSG/pen/qCGxi
The Algorithm it uses is as follows:
Test for the presence of XHR2, IndexedDB, and Chrome (which at the time could not store binary Blobs in IndexedDB, so Base64 is used instead), and show this status info.
Fetch a JSON manifest of PNG tiles from Google Drive (I have 171 PNG tiles, each 256x256 in size). The manifest lists their names and sizes.
Store the JSON manifest in the DB
MVVM and UI controls are from KendoUI (This time I did not use their superb grid control, since I wanted to explore CSS3 Grid Styling).
I am using the nightly build of PouchDB
All PNG files are fetched from Google Drive (NASA Blue Marble).
I created the tile pyramid with Safe FME 2013 Desktop.
My guess is that all these XHR2 requests are being fired asynchronously, placed on a thread separate from the JavaScript thread, and when there are too many pending requests, Chrome chokes.
FireFox does not have this issue, nor does IE10.
You can fork the code and try different values on line 10 (the maximum number of tiles to fetch).
I have submitted a bug to Chromium Bugs, but does anyone have experience throttling async XHR2 fetches for large data downloads in the Chrome browser?
The Chromium folks acknowledge this is something they should fix: https://code.google.com/p/chromium/issues/detail?id=244910, but in the meantime I have implemented throttling using jQuery Deferred objects (defer/resolve) to keep the number of concurrent requests low.
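A modern Promise-based stand-in for that jQuery defer/resolve throttle might look like the sketch below (throttleAll is a hypothetical name, and this is the generic pattern rather than the answerer's actual code): at most `limit` requests are in flight at once, and the next task starts as each one finishes.

```javascript
// Run promise-returning tasks with bounded concurrency, preserving the
// ordering of results.
function throttleAll(tasks, limit) {
  // tasks: array of zero-argument functions returning promises
  var results = new Array(tasks.length);
  var next = 0;
  function worker() {
    if (next >= tasks.length) return Promise.resolve();
    var i = next++;                       // claim the next task index
    return Promise.resolve()
      .then(tasks[i])
      .then(function (value) { results[i] = value; })
      .then(worker);                      // pick up another task when done
  }
  var workers = [];
  for (var k = 0; k < Math.min(limit, tasks.length); k++) workers.push(worker());
  return Promise.all(workers).then(function () { return results; });
}

// Hypothetical usage for the tile demo (fetchTile is a placeholder):
//   throttleAll(urls.map(function (u) { return function () { return fetchTile(u); }; }), 6)
//     .then(function (blobs) { /* store in PouchDB */ });
```

A limit around the browser's per-host connection count (typically 6) keeps the request queue from ballooning into thousands of pending XHRs.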
Update: I am going to delete my codepen, since I don't need to show this error anymore.

Searching for recently created folders yields no result

I am able to successfully create a folder within the root of my Box account via the v2 API. However, if I immediately issue a search request for it, I get no results. If I wait for some period of time (maybe 20 mins) and then issue the search request, the folder I created is returned.
Is there some caching going on on the Box side? If so, is there a way to invalidate the cache via the API, or some workaround for this?
Thanks!
What is going on is background processing of your file on the backend. Just as a new website won't show up in a Google search until Google has had time to learn about it, Box's search engine has to process the file and add the text version of its contents to the search index. Exactly how long that takes depends on a lot of variables, including the size and format of the file.
You'll see pretty much the same behavior if you upload a large document to Box and then try to preview it immediately. Box goes off and does some magic to convert your file to a previewable format. Except in the case of the preview, the Box website gives you a little bit of feedback saying "Generating preview." The search bar doesn't tell you "adding new files to search index."
This is mostly because it is more important for Box to receive your file, make sure it is stored safely, and let you know that Box has it. A few milliseconds later we start processing your file for full-text search and all the other processing that we do.

How long is the delay before the thumbnail url is available on a new file?

I'm inserting a new file and using the returned File object to store a thumbnail. Intermittently, getThumbnail() returns null for .pdf files.
I'm guessing the explanation is that the thumbnail is generated asynchronously, and there are times when processing is incomplete before the insert() call returns, yielding an incomplete File object.
Is there any way I can make this behave more deterministically?
Alternatively, does anybody know if the subsequent processing of the thumbnail constitutes a "change" that would be returned by a get-changes call?
AFAIK, yes, the thumbnails are generated asynchronously. The delay can vary based on server load, file type, and file size, but in my testing the thumbnails for PDFs were available very shortly after the file was created.
Probably the best you can do at this point is retry the request until you get a thumbnail, but don't forget to use exponential back-off so you don't overload the server and kill your quota.
I don't think the thumbnail becoming ready counts as a change in the changes feed.
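The retry-with-back-off suggestion above can be sketched like this (pollWithBackoff and the null-means-not-ready convention are assumptions for illustration, not the Drive API's actual contract): poll, and double the delay after each miss up to a retry cap.

```javascript
// Poll a function until it returns a non-null value, doubling the wait
// between attempts; give up after `retries` misses.
function pollWithBackoff(fn, retries, baseDelayMs) {
  return Promise.resolve().then(fn).then(function (value) {
    if (value !== null) return value;                 // thumbnail is ready
    if (retries <= 0) throw new Error('gave up waiting for thumbnail');
    return new Promise(function (resolve) {
      setTimeout(resolve, baseDelayMs);               // wait before retrying
    }).then(function () {
      return pollWithBackoff(fn, retries - 1, baseDelayMs * 2);
    });
  });
}

// Hypothetical usage (getThumbnailUrl is a placeholder for your own call that
// fetches the file's metadata and returns the thumbnail link or null):
//   pollWithBackoff(function () { return getThumbnailUrl(fileId); }, 6, 500)
//     .then(saveThumbnail);
```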