GraphHopper: Adding weight to edges using OSM id

I have a pbf file of the greater Chicago area and have been able to load this file using the graphhopper web viewer.
I would like to improve bike routing using additional usage data that I have for road segments (each identified by an OSM id).
From this SO answer how-to-quickstart-graphhopper-with-my-own-multimodal-graph, I understand that I will need to feed the usage data I have into the GraphHopperStorage. A few questions about doing so:
1) My usage data references OSM ids. How do I match edges in my pbf file to an OSM id? I also have the lat/lon coordinates of the head/tail of each segment; from what I've read, I believe I will have to use these for the mapping, is that correct?
2) Once I have created MyGraphHopperStorage, can I persist the addition of the weighting so that I do not have to perform this at every run?
Thanks
tom

My usage data references OSM ids. How do I reference edges in my pbf file to an OSM id?
You'll need to keep a (Hash)Map during import to convert between internal and OSM IDs (either edge or node IDs). To keep using OSM IDs even after the import, see this example project.
can I persist the addition of the weighting so that I do not have to perform this at every run?
You can store this in the edge flags and call graph.flush(); see my recent traffic data post.

Is it possible to get URNs of models which are translated as references via zip translation?

When I upload and translate a zip file with one rootFile and some models which act as references to Autodesk Forge, I can only find one model URN afterwards. Are all models uploaded separately under the hood, and is it possible to get the URNs of each model?
One use case would be to open a model from the package other than the predefined root, e.g. to view the 2D sheets from that model.
Another use case would be to save data in relation to elements/referenced models with their dbId/GUID and URN.
I was expecting to get each model's URN by selecting parts from different models and running this.viewer.getAggregateSelection().lastItem.model, as that would do the trick if I had translated them separately and aggregated the view. But this way there's just one URN for all elements.
I also tried inspecting the buckets and objects via the awesome "Autodesk Forge Tools" extension for VSCode, but couldn't get any deeper than the .zip file as an object in the bucket.
Is the only possibility to upload/translate the same .zip package for every model I want to open, with a newly defined rootFilename each time? Is this still the only option, as stated in an answer from 2016? (https://stackoverflow.com/a/38720162/19956654)
Appreciate any help with this one, thanks in advance!
Unfortunately, one ZIP will have one URN only. So, you will need to have the ZIP uploaded with different names and request translations with different rootFilenames separately.
However, you don't really need to upload the same file several times. Just call PUT buckets/:bucketKey/objects/:objectKey/copyto/:newObjectKey to duplicate the uploaded ZIP with different names.
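For illustration, a minimal Node.js (18+) sketch of that copy call might look like this (the token, bucket key and object names are placeholders, and a 2-legged token with data:write scope is assumed):
// Duplicate the uploaded ZIP under a new object name so it can be
// translated again with a different rootFilename (Node 18+, built-in fetch).
const token = process.env.FORGE_ACCESS_TOKEN;   // placeholder: 2-legged token, data:write scope
const bucketKey = 'my-bucket';                  // placeholder
const objectKey = 'models.zip';                 // the existing object
const newObjectKey = 'models-copy.zip';         // the duplicate to translate separately
async function copyZip() {
  const url = `https://developer.api.autodesk.com/oss/v2/buckets/${bucketKey}` +
              `/objects/${encodeURIComponent(objectKey)}/copyto/${encodeURIComponent(newObjectKey)}`;
  const res = await fetch(url, { method: 'PUT', headers: { Authorization: `Bearer ${token}` } });
  if (!res.ok) throw new Error(`copyto failed: ${res.status}`);
  return res.json(); // details of the new object, including its objectId
}
copyZip().then(obj => console.log(obj.objectId));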

Autodesk-Forge bucket system: New versioning

I am wondering what the best practice is for handling new versions of the same model in the Data Management API bucket system.
Currently, I have one bucket per user, and files with the same name overwrite the existing model when doing an SVF/SVF2 conversion.
In order to handle model versioning in the best manner, should I:
1) create one bucket per file converted
or
2) continue with one bucket per user?
If 1): is there a limit on the number of buckets it is possible to create?
If 2): how do I get the translation to accept a bucketKey different from the file name? (As it is now, the uploaded file needs to keep the file name for the translation to go through.)
In advance, cheers for the assistance.
In order to translate a file, you do not have to keep the original file name, but you do need to keep the file extension (e.g. *.rvt), so that the Model Derivative service knows which translator to use. So you could just create files with different names: perhaps add a suffix like "_v1" etc or generate random names and keep track of which file is what version of what model in a database. Up to you.
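As a rough sketch of such a naming scheme (the helper functions below are hypothetical, not part of any Forge SDK), you could derive a versioned object key and base64-encode the resulting objectId into the URN for the translation job:
// Hypothetical helpers: build a versioned object key that keeps the original
// extension, and derive the Model Derivative URN from the objectId.
function versionedObjectKey(fileName, version) {
  const dot = fileName.lastIndexOf('.');
  // keep the extension (e.g. ".rvt") so the right translator is picked
  return `${fileName.slice(0, dot)}_v${version}${fileName.slice(dot)}`;
}
function toUrn(objectId) {
  // the URN is the base64-encoded objectId (URL-safe, unpadded)
  return Buffer.from(objectId).toString('base64')
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}
const objectId = `urn:adsk.objects:os.object:my-bucket/${versionedObjectKey('house.rvt', 2)}`;
console.log(toUrn(objectId)); // use this as "urn" in the translation POST job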
There is no limit on the number of buckets, but it might be overkill to have a separate one for each file.

How to load longitude and latitude information into cesium kmlDataSource?

I want to display a KML file in the Cesium globe, and as described in Cesium's workshop code, we need to load the file by passing its location (or URL). The following is the specific line of code where Cesium loads the KML:
var geocachePromise = Cesium.KmlDataSource.load('./Source/SampleData/sampleGeocacheLocations.kml', kmlOptions);
The entire code is available here
My question is: I have the longitude and latitude (and height) information saved in variables, and instead of always saving them into a .kml file and then loading that file from a folder, I want to pass this information to Cesium's KmlDataSource (the code above) directly.
It would be great if anyone has any solution to this.
Please let me know if further information or a code snippet is required. Thanks
If you already have the information you need stored in JavaScript variables, there's no need to export to KML and import it back into Cesium. You can directly add the indicators you need as Cesium Entities, which is what the KML loader is creating when it reads a KML.
Typically, a KML-like pin is represented by a Cesium Entity containing either a point or a billboard, and optionally an associated label.
Here are some relevant demos that show how this is done:
Billboard demo
Map Pin demo
Label demo
Each of these demos calls viewer.entities.add({ ... }) with a position for the Entity and some sort of graphical indication(s) to display to the user. You may place several of these on one Entity; for example, a billboard and a label are often both defined when adding a typical KML-like Entity.
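For example, a minimal sketch might look like the following (it assumes a Cesium.Viewer instance named viewer and uses example coordinates; swap in your own variables):
// Add a KML-like pin directly from variables: a point plus a label on one Entity.
const lon = -87.6298, lat = 41.8781, height = 0; // example coordinates
viewer.entities.add({
  position: Cesium.Cartesian3.fromDegrees(lon, lat, height),
  point: { pixelSize: 10, color: Cesium.Color.YELLOW },
  label: {
    text: 'My location',
    font: '14pt sans-serif',
    verticalOrigin: Cesium.VerticalOrigin.BOTTOM,
    pixelOffset: new Cesium.Cartesian2(0, -12),
  },
});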
If your data is stored on the server however, you will need some mechanism to stream it to the browser. CZML is Cesium's native format for doing so, but KML is also available as an alternative for certain kinds of graphics. You may also use any API of your own design, and create Entities when the data becomes available in JavaScript.

Autodesk Forge - Post Jobs - Must files be in buckets and proper URN

I am working on submitting a POST job and I am confused about where files need to be in order to run the job, and about the proper URN.
The examples all use a file that the user uploads to a bucket. I am trying to run the POST job on a file that a user has created in Fusion 360 and selected through a GUI I created. The URN in question is obtained by letting the user select the hub, project, folder(s), and file. I then use this file URN in the POST job.
I keep getting back the response:
Failed to download the design description for the input design.
My questions are:
Is it possible to do this from a users hub or do all items have to be in buckets?
Where are those translated files stored once created? If I want to get data like volume and mass without storing the translated file, is that possible?
I took the "urn:" off the front of the urn and got a different error, which I believe meant that it couldn't find any file.
Invalid 'design' parameter.
So, it looks like the urn I am using is finding a file but there is an issue somewhere that is preventing that file from being accessed or translated or something.
I keep getting back the response: Failed to download the design description for the input design.
For Fusion 360 files, make sure the extension name of the object is f2d/f3d. BTW, the Forge Viewer supports these two formats directly, so you don't have to translate them to SVF for the Viewer to visualize them.
Is it possible to do this from a users hub or do all items have to be in buckets?
For hub project items, use the Data Management API to obtain the object ID, and be sure to include the version parameter in your URN. See GET projects/:project_id/folders/:folder_id/contents and use the id of the item version as your URN, as well as the tutorial here to help you understand how project folder items work.
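As a rough sketch of that lookup (the project and folder ids below are placeholders, and a 3-legged token with data:read scope is assumed), you could fetch the folder contents and base64-encode an item's tip version id into a Model Derivative URN:
const token = process.env.FORGE_ACCESS_TOKEN;          // placeholder
const projectId = 'b.xxxxxxxx';                        // placeholder
const folderId = 'urn:adsk.wipprod:fs.folder:co.xxxx'; // placeholder
async function tipVersionUrns() {
  const url = `https://developer.api.autodesk.com/data/v1/projects/${projectId}` +
              `/folders/${encodeURIComponent(folderId)}/contents`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  const body = await res.json();
  // the "included" array carries the tip version of each item; its id ends in ?version=N
  return (body.included || [])
    .filter(v => v.type === 'versions')
    .map(v => Buffer.from(v.id).toString('base64')     // base64-encode the version id
      .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, ''));
}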
Where are those translated files stored once created? If I want to get data like volume and mass without storing the translated file, is that possible?
The translated derivatives are stored separately, and you can access them through the derivative manifest. Use GET :urn/metadata/:guid/properties to query derivative properties, but you will need to translate the model first (any target format will do) in order to extract properties; see the tutorial here.
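A minimal sketch of that properties query might look as follows (the urn is the base64-encoded version id from above, the guid comes from GET :urn/metadata, and the token is a placeholder):
const token = process.env.FORGE_ACCESS_TOKEN; // placeholder
async function getProperties(urn, guid) {
  const url = `https://developer.api.autodesk.com/modelderivative/v2/designdata` +
              `/${urn}/metadata/${guid}/properties`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
  if (res.status === 202) return null;        // extraction still in progress; retry later
  const body = await res.json();
  return body.data.collection;                // objects with their property maps (volume, mass, ...)
}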

Combining additional data with Shapefile using GeoJSON and Gdal

I used Mike Bostock's great tutorial to make a simple map using downloaded shapefiles and processing them with GDAL into GeoJSON files.
http://bost.ocks.org/mike/map/
I'm trying to build on this learning by taking a county-level shapefile map and marrying it with additional demographic data (CSV) so that I can load a single GeoJSON file and not have to use JavaScript to merge the data at runtime. The goal is to have a county-level heatmap.
The CSV file has an ID column that looks like this: 01348, while the shapefile has two ID columns that are 01 and 348.
Is it possible to use GeoJSON to store this kind of data? If so, what kind of terminal commands must I use to combine the two?
A little trick:
When converting from the shapefile to GeoJSON, keep "id-a":"01","id-b":"348" as neighbors in this order.
Use a simple regex to delete every ","id-b":" and thus obtain "id-a":"01348".
Then inject your CSV properties given the common ID; see: How to add properties to topojson file?
That should work; a sketch of the whole merge is shown below.
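As a hedged Node.js sketch of those steps (the file names and the id-a/id-b field names come from the example above and may differ in your shapefile; the ID concatenation is done programmatically here instead of with a regex):
const fs = require('fs');
// Steps 1+2: concatenate the two shapefile ID fields into one key.
const geojson = JSON.parse(fs.readFileSync('counties.json', 'utf8'));
for (const f of geojson.features) {
  f.properties.id = f.properties['id-a'] + f.properties['id-b']; // "01" + "348" -> "01348"
}
// Step 3: inject CSV columns keyed on the combined ID.
// Assumes a simple comma-separated file with a header row, e.g. id,population,income
const rows = fs.readFileSync('demographics.csv', 'utf8').trim().split('\n');
const header = rows.shift().split(',');
const byId = new Map(rows.map(line => {
  const cols = line.split(',');
  return [cols[0], Object.fromEntries(header.map((h, i) => [h, cols[i]]))];
}));
for (const f of geojson.features) {
  Object.assign(f.properties, byId.get(f.properties.id) || {});
}
fs.writeFileSync('counties-with-data.json', JSON.stringify(geojson));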