GraphHopper local instance travel time

I have a local install of GraphHopper. We use it to get the travel time between point A and point B.
In a local instance, how is the travel time computed by GraphHopper? I know it uses OSM data, but is the returned travel time based on some historical time data?
Thanks

If you use the open source routing engine, the travel time comes from heuristics based on the maximum speed, road type and other attributes, which produce an average speed estimate for each street.
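For illustration, here is a rough Python sketch of that kind of heuristic: a per-road-type default speed, capped by the maxspeed tag when one is present, turns each segment's length into a travel time. The road types and speeds below are illustrative placeholders, not GraphHopper's actual values.

```python
# Illustrative sketch of a GraphHopper-style average-speed heuristic.
# The speeds below are placeholder values, not GraphHopper's real defaults.
DEFAULT_SPEED_KMH = {
    "motorway": 100,
    "primary": 65,
    "residential": 30,
    "track": 15,
}

def segment_travel_time_seconds(length_m, highway_type, maxspeed_kmh=None):
    """Estimate the travel time for one OSM way segment."""
    speed = DEFAULT_SPEED_KMH.get(highway_type, 25)  # fallback for unknown road types
    if maxspeed_kmh:                                  # respect a tagged speed limit
        speed = min(speed, maxspeed_kmh)
    return (length_m / 1000.0) / speed * 3600.0       # km / (km/h) -> hours -> seconds

# Example: a 1.2 km residential street tagged maxspeed=30
print(segment_travel_time_seconds(1200, "residential", 30))  # -> 144.0 seconds
```

The route's total time is then just the sum of the per-segment estimates along the chosen path; no historical traffic data is involved.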

Related

How to lower costs of having MySQL db in Google Cloud

I set up Google Cloud MySQL. I store there just one user (email, password, address) and I query it quite often for testing purposes of my website. I set up minimal zone availability, the lowest SSD storage, 3.75 GB memory, 1 vCPU, and automatic backups disabled, but running that database for the last 6 days has cost me £15... How can I decrease the cost of having a MySQL database in the cloud? I'm pretty sure paying that amount is way too much. Where is my mistake?
I suggest using the Google Pricing Calculator to check the different configurations and pricing you could have for a MySQL database in Cloud SQL.
Choosing Instance type
As you've said in your question, you're currently using the lowest standard instance, which is based on CPU and memory pricing.
As you're currently using your database for testing purposes, I would suggest configuring it with the lowest Shared-Core machine type, which is db-f1-micro, as shown here. But note that
The db-f1-micro and db-g1-small machine types are not included in the Cloud SQL SLA. These machine types are designed to provide low-cost test and development instances only. Do not use them for production instances.
Choosing Storage type
As you have selected the lowest allowed disk space, you could lower costs by changing the storage type to HDD instead of SSD, if you haven't done so already, as stated in the documentation:
Choosing SSD, the default value, provides your instance with SSD storage. SSDs provide lower latency and higher data throughput. If you do not need high-performance access to your data, for example for long-term storage or rarely accessed data, you can reduce your costs by choosing HDD.
Note that the storage type can only be selected when you're creating the instance and cannot be changed later, as stated in the message shown when creating your instance.
Choice is permanent. Storage type affects performance.
Stop the instance when it is not in use
Finally, you could lower costs by stopping the database instance when it is not in use, as pointed out in the documentation.
Stopping an instance suspends instance charges. The instance data is unaffected, and charges for storage and IP addresses continue to apply.
Using Google Pricing Calculator
The following information is presented as a calculation exercise based on the Google Pricing Calculator:
The estimated fees provided by Google Cloud Pricing Calculator are for discussion purposes only and are not binding on either you or Google. Your actual fees may be higher or lower than the estimate. A more detailed and specific list of fees will be provided at time of sign up
Following the suggestions above, you could get a monthly estimate of 6.41 GBP, based on an instance running 24 hours a day, 7 days a week.
Using an SSD instead, this increases to 7.01 GBP. As said before, the only way to change the storage type would be to create a new instance and load your data into it.
This could drop to 2.04 GBP if you only run the instance 8 hours a day, 5 days a week, on HDD.
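As a rough illustration of why the running schedule changes the bill while storage keeps accruing, here is a small Python sketch. The hourly and per-GB rates are made-up placeholders; only the Pricing Calculator gives real figures for a specific configuration.

```python
# Rough monthly cost estimate for a small Cloud SQL instance.
# All rates below are hypothetical placeholders; use the Google Pricing
# Calculator for real numbers.
COMPUTE_RATE_PER_HOUR = 0.015      # GBP per running hour (placeholder)
STORAGE_RATE_PER_GB_MONTH = 0.07   # GBP per GB per month, billed even when stopped (placeholder)

def monthly_estimate(hours_per_day, days_per_week, storage_gb):
    running_hours = hours_per_day * days_per_week * 52 / 12  # average hours per month
    compute = running_hours * COMPUTE_RATE_PER_HOUR
    storage = storage_gb * STORAGE_RATE_PER_GB_MONTH          # accrues regardless of uptime
    return round(compute + storage, 2)

print(monthly_estimate(24, 7, 10))  # always-on instance
print(monthly_estimate(8, 5, 10))   # office-hours-only instance
```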

Best FIWARE architecture?

We are developing a FIWARE city sensor network that:
processes data in real time inside the sensor and publishes the average every N minutes to our server;
does some server-side math with those reported averages, which will generate new fields or averages of already reported fields (e.g. the average by day);
in the end, shows a Wirecloud component with a map of every sensor's location and a plot of the several fields acquired, per sensor.
Additionally, sensors can raise alarms, every server and sensor access must be secure, and server database scalability is a future concern. At the moment we have this architecture (OCB stands for Orion Context Broker):
The "Webservice" and "Processing" components are made in-house, but after reading a little bit more about the FIWARE components (particularly the IoT stack) I've realised that there are more components we might integrate here.
What are you using for a solution like this? It looks fairly generic (secure attribute publishing, storage, post-processing and value plotting).
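For reference, a minimal Python sketch of the publish step against the Orion Context Broker's NGSI v2 API is below. The endpoint, entity id and attribute name are hypothetical, and a real FIWARE deployment would typically put an IoT Agent and a PEP proxy in front of Orion to cover the secure-access requirement.

```python
# Minimal sketch: push a sensor's N-minute average to an Orion Context Broker
# via its NGSI v2 API. Host, entity id and attribute name are hypothetical.
import requests

ORION = "http://orion.example.org:1026"  # placeholder OCB endpoint

def publish_average(sensor_id, avg_value):
    entity = {
        "id": sensor_id,
        "type": "CitySensor",
        "average": {"value": avg_value, "type": "Number"},
    }
    # Try to create the entity; if it already exists, update its attributes instead.
    r = requests.post(f"{ORION}/v2/entities", json=entity)
    if r.status_code == 422:  # Orion returns 422 "Already Exists" for duplicates
        r = requests.patch(
            f"{ORION}/v2/entities/{sensor_id}/attrs",
            json={"average": {"value": avg_value, "type": "Number"}},
        )
    r.raise_for_status()

publish_average("urn:sensor:001", 21.4)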

Getting historical data from Google Maps traffic

Is it possible to get historical data for the traffic layer in google maps, i.e. specify a time and location and receive whether traffic was normal or slow etc? I don't need the typical traffic conditions given a time and day, I need concrete data as described above.
Some APIs provide historical data only for traffic incidents like congestion and accidents.

Find out a crash from OBD (On-Board Diagnostics) data?

In our application we are collecting OBD data continuously from the vehicle's port. Our requirement is to detect a vehicle crash scenario (accidents etc.). Currently we are reading the OBD parameters speed, temperature, rpm etc. Is it possible to identify a vehicle accident from these parameters, or do we need to use more parameters?
Please share your knowledge, thanks in advance.
I think you can check for a sudden deviation in speed. We can get the speed and rpm information from the vehicle.
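A hedged sketch of that idea in Python: sample the OBD speed periodically and flag a possible crash when the deceleration between consecutive samples exceeds a threshold. The threshold and sampling interval are illustrative, not calibrated values, and real crash detection usually also uses an accelerometer, since OBD speed alone can lag or drop out.

```python
# Sketch: flag a possible crash from successive OBD speed readings.
# Threshold and interval are illustrative placeholders.
SAMPLE_INTERVAL_S = 1.0          # seconds between OBD speed samples
CRASH_DECEL_THRESHOLD = 10.0     # deceleration in m/s^2 considered crash-like (~1 g)

def kmh_to_ms(v_kmh):
    return v_kmh / 3.6

def detect_crash(speed_samples_kmh):
    """Return the sample indices where the deceleration looks crash-like."""
    events = []
    for i in range(1, len(speed_samples_kmh)):
        dv = kmh_to_ms(speed_samples_kmh[i]) - kmh_to_ms(speed_samples_kmh[i - 1])
        decel = -dv / SAMPLE_INTERVAL_S
        if decel >= CRASH_DECEL_THRESHOLD:
            events.append(i)
    return events

# Example: cruising at ~60 km/h, then an abrupt stop between two samples
print(detect_crash([60, 61, 60, 2, 0]))  # -> [3]
```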

Given a location's coordinate, locally find out the zip-code without any Web API

I'm working on the visualization of a large spatial data set (around 2.5 million records) using Google Maps on a web site. The data points correspond to geographic coordinates within a city. I'm trying to group the data points based on their zip codes so that I can visualize some form of statistical information for each of the zip codes within the city.
The Google API provides an option for "reverse geocoding" a point to get the zip code, but it enforces a usage limit of 2,500 requests per IP per day. Given the size of my data set, I don't think Google or any other rate-limited Web API is practical.
Now, after a few rounds of Google search I've found out that it is possible to set up a spatial database if we have the boundary data for the zip codes. I've downloaded the said boundary data but they're in what is known as "shapefile" format. I'm trying to use PostGIS to set up a spatial database that I can query on for each data point to find out which zipcode it belongs to.
Am I headed in the right direction?
If yes, can someone please share any known examples that have done something similar as I seem to have reached a dead end with PostGIS.
If no, can you please point me to the right course of action given my requirements.
Thanks.
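For illustration, once the boundary shapefile is loaded into PostGIS (e.g. with shp2pgsql or ogr2ogr), a point-in-polygon query can assign each coordinate a zip code. The Python sketch below assumes a table named zip_boundaries with a zcta code column and a geom geometry column in SRID 4326; adjust the names to whatever your import actually produces.

```python
# Sketch: look up the zip code containing a point with PostGIS.
# Assumes the boundary shapefile was loaded (e.g. via shp2pgsql) into a table
# "zip_boundaries" with columns zcta (zip code) and geom (SRID 4326 polygons).
import psycopg2

conn = psycopg2.connect("dbname=gis user=gis")  # placeholder connection string

def zip_for_point(lon, lat):
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT zcta
            FROM zip_boundaries
            WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326))
            LIMIT 1
            """,
            (lon, lat),
        )
        row = cur.fetchone()
        return row[0] if row else None

print(zip_for_point(-73.9857, 40.7484))  # example coordinate
```

With around 2.5 million lookups, a spatial index on the geometry column (CREATE INDEX ... USING GIST (geom)) makes a large difference; batching the points into a single spatial join rather than one query per point is also worth considering.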