I am planning to use the Google Geocoding API, and I was wondering what latency I should expect when getting a response back. I cannot find these details on the website.
Is anyone aware of the actual latency of the Google Geocoding API, i.e. how long it takes to get a response back from it?
We have a live app in the Play Store that gets roughly 120-150 hits per hour. Our median latency is around 210 ms and our 98th-percentile latency is 510 ms.
We have an application running 24x7 with ~2 requests per second.
Median: 197.08 ms
98th percentile (slowest 2%): 490.54 ms
This could be a significant bottleneck for your application, so use some strategies to mitigate it (see the sketch after this list):
Memory cache
Secondary cache
Batch persistence
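As a sketch of the first and third ideas: an in-memory cache that persists results to disk in batches. Here geocode_remote stands in for whatever function actually calls the Geocoding API, and the file path and batch size are arbitrary placeholders:

```python
import json

class GeocodeCache:
    """In-memory cache for geocoding results with simple batch persistence.
    geocode_remote is assumed to be the function that performs the actual
    Geocoding API request; the batch size of 100 is illustrative."""

    def __init__(self, geocode_remote, path="geocode_cache.json", batch_size=100):
        self.geocode_remote = geocode_remote
        self.path = path
        self.batch_size = batch_size
        self.cache = {}
        self.pending = 0
        try:
            with open(self.path) as f:
                self.cache = json.load(f)  # warm the cache from disk on startup
        except FileNotFoundError:
            pass

    def lookup(self, address):
        if address not in self.cache:        # cache miss: pay the API latency once
            self.cache[address] = self.geocode_remote(address)
            self.pending += 1
            if self.pending >= self.batch_size:
                self.flush()                 # persist in batches, not per request
        return self.cache[address]

    def flush(self):
        with open(self.path, "w") as f:
            json.dump(self.cache, f)
        self.pending = 0
```

A secondary cache (e.g. a shared store such as Redis) would sit between the in-memory dict and the remote call, so that multiple app instances can share results.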
I am a Google Cloud user who has built a cloud HPC system. The web application is very demanding in terms of hardware resources (it is quite common for me to have 4-5 N1 instances allocated with 96 cores each, for a total of more than 400 cores).
I found that currently, in certain zones, it is possible to use N2D and C2 instances, the former offering more CPU and the latter dedicated to compute workloads. Unfortunately I cannot use these two instance types because, for some reason, I have trouble increasing the N2D_CPUS and C2_CPUS quotas above the default value of 24 (which is nothing considering my needs).
Can anyone help me?
The only way to increase the quota is to submit a Quota Increase request. Once you do, you should receive an email confirming that the request has been received and is under review.
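In the meantime, you can inspect your current regional quotas programmatically. A minimal sketch using the google-api-python-client library with application-default credentials; the project ID and region are placeholders:

```python
from googleapiclient import discovery

# "my-project" and "us-central1" are placeholders for your own project and region.
compute = discovery.build("compute", "v1")
region = compute.regions().get(project="my-project", region="us-central1").execute()

for quota in region["quotas"]:
    # N2D_CPUS and C2_CPUS are per-region metrics
    if quota["metric"].endswith("CPUS"):
        print(f'{quota["metric"]}: {quota["usage"]:.0f} / {quota["limit"]:.0f}')
```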
I know Google Analytics has its own way of handling rate limits; how does Firebase handle this (is there a maximum number of hits per second)?
There is no rate limit. However, unless you are in debug mode, events will generally be bundled and sent about once per hour. (On Android, debug mode can be enabled with adb shell setprop debug.firebase.analytics.app <package_name>, after which events are sent with minimal delay.)
I am running a Tornado web server on Google Compute Engine. The web server returns a very simple JSON response. When I test the throughput capacity of this server, it seems to be capped at 20 req/s; I cannot achieve a higher throughput than that.
I know that there is a Google Compute Engine API rate limit of 20 req/s. Is there some sort of network/instance rate limit that prevents my server from fulfilling more than 20 req/s? How do I increase this limit?
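For reference, a minimal Tornado server along the lines described; the endpoint path, port, and payload are placeholders, not the asker's actual code:

```python
import tornado.ioloop
import tornado.web

class PingHandler(tornado.web.RequestHandler):
    def get(self):
        # Tornado serializes a dict to JSON and sets the Content-Type header
        self.write({"status": "ok"})

def make_app():
    return tornado.web.Application([(r"/ping", PingHandler)])

if __name__ == "__main__":
    make_app().listen(8080)
    tornado.ioloop.IOLoop.current().start()
```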
The rate limit of 20 requests per second is not on your server; it is on the GCE API, i.e. the API you call (for example via gcloud) to create instances (gcloud calls the GCE API under the covers).
As documented here, the network bandwidth of a GCE VM is limited mainly by the software you run on it, and to some extent by the size of the VM (VMs get up to 2 Gbps per core, up to 8 cores, for a maximum rate of 16 Gbps). Nothing in the VM subsystem knows anything about requests or responses; it's all just IP traffic to us.
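As a worked version of the sizing rule above, a quick sketch of the per-VM egress cap it describes:

```python
def egress_cap_gbps(vcpus: int) -> int:
    """Per-VM network egress cap per the answer above: 2 Gbps per core,
    topping out at 16 Gbps once the VM has 8 or more cores."""
    return min(2 * vcpus, 16)

for cores in (1, 2, 4, 8, 16):
    print(cores, "cores ->", egress_cap_gbps(cores), "Gbps")  # 2, 4, 8, 16, 16
```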
I've been using GCE for several weeks with free credits and have repeatedly found that the quota values keep changing. The default CPU quota is 24 per region, but, depending on which other APIs I enable and in what order, that can silently change to 2 or 8. And today when I went to use it, the CPU quota had again changed from 24 to 2 even though I hadn't changed which APIs were enabled. Disabling and then re-enabling Compute Engine put the quota back to 24, but that is not a very satisfactory solution. This seems like a bug to me. Has anyone else had this problem and perhaps found a solution? I know about the quota increase request form, but it says that if I request an increase then that is the end of my free credits.
The free trial in GCE has some limitations, such as only 2 concurrent cores at a time, so if for some reason you were able to change it to 24 cores, it is expected that it will go back to 2 cores.
I'd like to calculate the matrix of travel times between US zip codes. There are about 30k visible zip codes, so this is 900 million calculations (or 450 million if travel time is assumed to be the same in both directions).
I haven't used GraphHopper before, but it seems suited to the task. My questions are:
What's the best way of doing it?
Will this overload the graphhopper servers?
How long will it take?
I can supply the latitude and longitude for each zip code.
Thanks - Steve
I've not tested GraphHopper yet with such a large number of points, but it should be possible.
What's the best way of doing it?
It would probably be faster to avoid the HTTP overhead and use the Java lib directly, as in this example. Be sure to assign enough RAM, as the matrix itself is already about 2 GB if you use only a short value for the distance or time (see the sketch below). See also this question.
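A back-of-the-envelope check of those numbers, assuming 30,000 zip codes and one 2-byte short per matrix entry:

```python
# Estimate the in-memory size of the travel-time matrix.
n = 30_000
pairs_full = n * n                     # 900,000,000 entries
pairs_symmetric = n * (n - 1) // 2     # ~450,000,000 if time(a,b) == time(b,a)
bytes_per_entry = 2                    # a Java short

print(f"full matrix: {pairs_full * bytes_per_entry / 1e9:.1f} GB")         # ~1.8 GB
print(f"symmetric half: {pairs_symmetric * bytes_per_entry / 1e9:.1f} GB") # ~0.9 GB
```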
Will this overload the graphhopper servers?
The API is not allowed to be used without an API key, which you can grab here. Alternatively, set up your own GraphHopper server.
How long will it take?
It will probably take some days, though.
Warning - enterprisy note: we provide support for setting up your own servers or for your use case. We also sell a matrix add-on that makes those calculations at least 10 times faster.