We have a page with a load time of 5.8 seconds. The long load time seems to be related to Google's QuotaService.RecordEvent, which has a latency of 60 ms. Any thoughts on what might be causing this latency? And what could be done to speed it up?
In Chrome (v31) when the search page is loading the maps, the entire page is not scrollable. This is not the case in Safari (v7) or Firefox (v26).
Issue can be replicated using this link.
http://voradius.nl/search?product=de+g+van+geluk&location=1083xh&submit=Zoeken
The QuotaService counts things that are subject to usage limits (e.g. map loads).
This does not happen when you load the API; a map load occurs (and is counted) once you have successfully created a google.maps.Map instance (let's say when the tilesloaded event fires for the first time). Of course there is some latency, because it takes time to initialize a map.
But this will not slow down the loading of the page, because you usually initialize the map once the document/window has finished loading.
The reason must be something else.
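For reference, a minimal sketch of that deferred-initialization pattern; the element id, coordinates, and zoom level are assumptions, not taken from the page in question:

// Minimal sketch (assumed element id, coordinates and zoom): create the map
// only after the window has finished loading, so the map load (and the
// QuotaService accounting that goes with it) cannot delay the initial render.
function initMap() {
  var map = new google.maps.Map(document.getElementById('map'), {
    center: new google.maps.LatLng(52.33, 4.87),  // placeholder coordinates
    zoom: 14
  });

  // The map load is counted once the map is actually up; the first
  // tilesloaded event is a reasonable marker for that moment.
  google.maps.event.addListenerOnce(map, 'tilesloaded', function() {
    console.log('map tiles loaded');
  });
}

window.addEventListener('load', initMap);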
I am trying to improve the FID measure for my WordPress website, with no success.
Whatever I do, the PageSpeed Insights tool shows an FID over 100 ms on the Mobile tab.
The desktop tab shows ~4ms.
Finally, I created an HTML file with one line of text: no JS, no CSS, just a single line of text.
I have uploaded the file to the same server and the FID is still over 100ms on Mobile.
How can it be?
https://www.extra.co.il/wp-content/themes/extra/test.html
https://developers.google.com/speed/pagespeed/insights/?hl=iw&url=https%3A%2F%2Fwww.extra.co.il%2Fwp-content%2Fthemes%2Fextra%2Ftest.html
Origin Summary is taken over a 28 day rolling average across every page on the site.
Simply uploading a test file will not change that as it is based on real world data and is not part of the synthetic test.
In the "Lab" section (the synthetic test) you will see "Total Blocking Time", that is the equivalent metric for First Input Delay and you will see it is at 0ms (as there is no JavaScript blocking your test page).
The reason the desktop performs better is down to processing power, mobile phones typically run 2x-8x times slower than a desktop CPU so it is expected that the First Input Delay is going to be higher on mobile.
Yet again that is real world data on a 28 day rolling average so you can't make a change today and see the results immediately.
For this reason I would suggest using the web-vitals library to gather real-world data in real time rather than relying on the data coming from the CrUX dataset (which is what PageSpeed Insights uses to display the "Origin Summary").
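As a rough sketch of that approach, assuming the web-vitals npm package and a placeholder /analytics endpoint, collecting real-user FID yourself could look something like this:

import {getFID} from 'web-vitals';

// Report each user's real First Input Delay to your own endpoint
// ('/analytics' is a placeholder) instead of waiting for the CrUX data.
getFID(function (metric) {
  var body = JSON.stringify({name: metric.name, value: metric.value, id: metric.id});
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/analytics', body);   // survives page unload
  } else {
    fetch('/analytics', {method: 'POST', body: body, keepalive: true});
  }
});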
We are currently developing an application which uses dygraphs for plotting data retrieved from a server at regular intervals (1 second). dygraphs has served this application well and we have had no major issues with performance. Now we are trying to take large chunks of data (5 sets of 5000 points) and plot them on a single dygraph plot, and the system seems to be bogging down in rendering the plot (taking on the order of 2 seconds to return). From what I understand, dygraphs should be fairly fast, so it is likely that I am doing something wrong. Does anyone have any thoughts on how to improve the performance of the application?
You can find a performance timeline here.
A few ideas:
You're using dygraph-dev.js rather than the production bundle. The dev version includes some debugging code that can slow down chart rendering.
The profile indicates that each frame takes ~500ms to render. If you're seeing two second renders, then perhaps you're updating the charts too frequently.
I see the dreaded "Not optimized: Optimized too many times" warning on the stroke calls. It would be interesting to see if this happened before.
It's hard to say much else without seeing a live demo.
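On the second idea, a minimal sketch of batching updates (the names, seed row, and one-second redraw interval are assumptions): buffer incoming samples and push them to the chart on a timer instead of redrawing per sample.

var data = [[new Date(), 0, 0, 0, 0, 0]];   // seed row: [time, s1..s5]
var g = new Dygraph(document.getElementById('graph'), data, {
  labels: ['time', 's1', 's2', 's3', 's4', 's5']
});
var dirty = false;

function onSample(row) {        // called whenever the server delivers a new row
  data.push(row);
  dirty = true;                 // note it, but don't redraw yet
}

setInterval(function () {
  if (dirty) {
    g.updateOptions({file: data});   // one redraw for all buffered rows
    dirty = false;
  }
}, 1000);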
I use the ko.mapping plugin for binding a complex model (e.g. a Project having a list of Branches, each having a list of Tasks). The data is refreshed, say, every 10 seconds with an AJAX request and updated by the following call: ko.mapping.fromJS(data, viewModel);
After several updates, leaving the page (or refreshing it) in Chrome keeps one core of my CPU busy, and leaving the page takes anywhere from about 10 seconds to a minute. The situation in Firefox looks better; the problem is observed only rarely there.
Thanks
Have you looked in the debugger to see how many branches are in your Project object when the CPU is getting hammered? It'd be worth checking that mapping is updating the branches, rather than creating new ones with each update.
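If the branches are being recreated, giving the mapping a key may help. A minimal sketch, assuming each branch has an id property and that the refresh uses jQuery (both assumptions):

// Tell ko.mapping how to identify a branch, so repeated fromJS calls update
// existing items in place instead of rebuilding the observable tree each time.
var mapping = {
  'branches': {
    key: function (branch) {
      return ko.utils.unwrapObservable(branch.id);   // assumed id property
    }
  }
};

var viewModel = ko.mapping.fromJS(initialData, mapping);   // initial load
ko.applyBindings(viewModel);

setInterval(function () {
  $.getJSON('/project', function (data) {      // placeholder URL / jQuery AJAX
    ko.mapping.fromJS(data, viewModel);        // reuses the stored mapping options
  });
}, 10000);

// Nested arrays (e.g. the Tasks inside each Branch) may need their own
// key/create options as well.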
I am using the Max Zoom Service of the Google Maps API to get the maximum zoom level for a given set of coordinates. Most of the time it is fast and each request only takes around 150 ms. However, there have been a few occasions when the service became extremely slow, around 20 seconds.
maxZoomService = new google.maps.MaxZoomService();
maxZoomService.getMaxZoomAtLatLng(center, function(response) {
  // my process
});
Have you experienced similar issue?
Yes we have experienced the same problems.
We're using wkhtmltoimage to generate map images for inclusion into PDF files using wkhtmltopdf.
We have to set a maximum time in which the image is generated. If it's too short, you often will not have all the map tiles downloaded in time to create the image. Too long, and there's a really noticeable delay for users (with our connection, 5 seconds seems optimal).
We wanted to make sure that generated satellite maps do not exceed the maximum zoom level using the MaxZoomService and that's when we ran into problems with the service.
As it is an asynchronous service, ideally we wait for the service to report the max zoom level BEFORE creating the map and therefore triggering the correct map tile downloads.
Setting a default "fallback" zoom for the map in case the service is being really slow is not really an option, as subsequently updating the zoom level when the service does return a value will in most cases cause new tiles to be loaded, requiring more delay...
If like us you're interested in specific, repeatable locations (e.g. from a database) then one option might be to cache (store) the max zoom levels in advance and periodically check for updated values.
In our specific case the only other option would be to allow the service a specific amount of time (say 2 seconds) and if it does not respond then fall back to a default zoom.
Not ideal, but it can handle the service's "bad hair days"...
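A minimal sketch of that timeout fallback (the 2-second budget, default zoom, desired zoom, and element id are all assumptions; center is assumed to be an existing google.maps.LatLng):

var DEFAULT_ZOOM = 17;   // assumed fallback
var desiredZoom = 18;    // zoom we would like to use if the service allows it

function getMaxZoomWithTimeout(center, timeoutMs, callback) {
  var settled = false;

  var timer = setTimeout(function () {
    if (!settled) {
      settled = true;
      callback(DEFAULT_ZOOM);            // service too slow: use the fallback
    }
  }, timeoutMs);

  new google.maps.MaxZoomService().getMaxZoomAtLatLng(center, function (response) {
    if (settled) return;                 // the timeout already won
    settled = true;
    clearTimeout(timer);
    callback(response.status === google.maps.MaxZoomStatus.OK
      ? response.zoom
      : DEFAULT_ZOOM);
  });
}

// Create the map only once a zoom level is known, so tiles are loaded exactly once.
getMaxZoomWithTimeout(center, 2000, function (maxZoom) {
  var map = new google.maps.Map(document.getElementById('map'), {
    center: center,
    zoom: Math.min(desiredZoom, maxZoom),
    mapTypeId: google.maps.MapTypeId.SATELLITE
  });
});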
I'm writing an application which pulls up to several dozen images from a server using Loader objects. It works fine in all browsers except Firefox, where I'm finding that, with over 6 or so connections, some simply never load, and I cease to get progress events (and can detect no errors/error events).
I extended the Loader class so that it will kill and reopen the transfer if it takes longer than ten seconds, but this temporary hack has created a new problem: when there are quite a few connections open, many of them will load 90-odd percent of the image, get killed for exceeding the time limit, open again, load 90-odd percent, and so on until the traffic is low enough for the load to actually complete. This means I'm getting transfers of many times the amount of data that is actually being requested!
It doesn't happen in any other browser (I was anticipating IE errors, so for Firefox to be the anomaly was unexpected!), I can write a class to manage Loaders, but wondered if anyone else had seen this problem?
Thanks in advance for any help!
Maybe try to limit the number of concurrent connections.
Instead of loading all assets at once (in which case Flash Player or the browser manages the connections), try to build a queue.
Building a simple queue is fairly easy: just create an array of URLs and shift or pop a value every time the loader has finished loading the previous asset.
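A minimal sketch of such a queue, written in JavaScript for brevity (the shift-and-load pattern carries over directly to an ActionScript Loader, where you would start the next load from the complete handler):

// Load at most maxConnections images at a time; each finished (or failed)
// download starts the next URL in the queue.
function loadQueued(urls, maxConnections, onAllDone) {
  var queue = urls.slice();          // copy so we can shift() freely
  var active = 0;

  function next() {
    if (queue.length === 0) {
      if (active === 0) onAllDone(); // nothing left and nothing in flight
      return;
    }
    active++;
    var img = new Image();
    img.onload = img.onerror = function () {
      active--;
      next();                        // kick off the following download
    };
    img.src = queue.shift();
  }

  var initial = Math.min(maxConnections, queue.length);
  if (initial === 0) { onAllDone(); return; }
  for (var i = 0; i < initial; i++) {
    next();                          // prime the allowed number of connections
  }
}

// Usage: at most 4 images in flight at once.
loadQueued(imageUrls, 4, function () { console.log('all images loaded'); });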
You might use an existing loader manager like LoaderMax or BulkLoader - they allow you to create a queue, limit the number of connections, and are fairly robust. LoaderMax is my favourite.