I am trying to convert a .dwg file to .shp with ArcGIS for AutoCAD and QGIS.
I defined the coordinate system as EPSG:4326, like this:
and it displays fine in QGIS:
But with GeoServer, the rendering repeats:
The coordinate system I set:
But when I set the SRS to EPSG:3857, it works fine!
What happened? Can you help me?
The data is not in EPSG:4326.
With that projection (4326), the bounds are ±180 and ±90 degrees. Your first screenshot and the GeoServer one show coordinates with values around 73,000.
--> the data source has the wrong coordinate system
--> QGIS manages to display it anyway
--> GeoServer fails at this
You need to fix the coordinate system at the source.
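To make the diagnosis concrete, here is a minimal Python sketch of the sanity check described above. The 73,000 value comes from the question; the other coordinates are made-up examples. EPSG:4326 values are degrees, so anything outside ±180 / ±90 cannot be longitude/latitude and is almost certainly in a projected CRS (meters or feet):

```python
def plausible_epsg4326(x, y):
    """EPSG:4326 coordinates are degrees:
    longitude in [-180, 180], latitude in [-90, 90]."""
    return -180.0 <= x <= 180.0 and -90.0 <= y <= 90.0

# Values like those in the screenshots: clearly not degrees,
# so the layer is mislabeled as 4326.
print(plausible_epsg4326(73000.0, 21000.0))

# A genuine lon/lat pair (hypothetical) passes the check.
print(plausible_epsg4326(121.5, 31.2))
```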
For Forge's Autodesk.AEC.LevelsExtension, where is the cut plane? It looks like it doesn't follow the view range in Revit.
To add to what Cyrille mentioned:
Revit's AEC model data is generated automatically during the Forge translation. The level info is dumped from Revit Level elements, e.g. the name, GUID, elevation, and extents of the Revit Level, which you can also see with the Revit API. The Levels extension just uses it to rebuild level ranges.
For example, based on my research, the cutting range of level 1 runs from level 1's elevation to (level 2's elevation - a height adjustment). The height adjustment avoids cutting right on top of floors, which gives a better view. So it does not follow the view range of Revit's floor plan view.
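As an illustration of that rule (not the extension's actual code), here is a small Python sketch; the level names, elevations, and the 0.5 height adjustment are all made-up values:

```python
def level_cut_ranges(levels, height_adjustment=0.5):
    """levels: list of (name, elevation) pairs sorted by elevation.
    Returns (name, bottom, top) per level: each level's range runs from
    its own elevation up to the next level's elevation minus a small
    adjustment, so the cut lands just below the floor above.
    The topmost level's range is open-ended."""
    ranges = []
    for i, (name, elevation) in enumerate(levels):
        if i + 1 < len(levels):
            top = levels[i + 1][1] - height_adjustment
        else:
            top = float("inf")
        ranges.append((name, elevation, top))
    return ranges

# Hypothetical elevations in meters.
print(level_cut_ranges([("Level 1", 0.0), ("Level 2", 3.0), ("Roof", 6.0)]))
```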
If the Levels extension doesn't match your needs, you may check out my level section tool. This sample demonstrates the concept of creating cut planes by levels.
In that extension, floor cut planes are defined based on the bounding box of the floor node in the scene instance tree, and floors themselves are defined in a supplementary AEC data JSON file.
I'm new to the Autodesk Viewer and the other APIs, and I could use some help figuring out which tools are best for what I want to do.
I'm using the Autodesk Viewer to let users generate 2D views of cut planes. To do this, I simply use the viewer's getScreenshot function and save the result as a blueprint in my app.
What I would like now is for my 2D views to be regenerated automatically whenever the user updates their 3D model.
Currently the only solution I have come up with is to store the position of the camera when taking the screenshot, and then, when the 3D model is updated, have another machine open the viewer in the background and take the screenshots again from the same location.
This does not seem like a very elegant solution, so I would like to know if there's an alternative, like a way to generate 2D views from an API call, or maybe using the Design Automation API with the viewer to take the screenshots?
Another thing I'm struggling with is getting precise measurements of the 2D views I'm generating. My current solution is to calculate the distance between the camera and the cut plane and then use the FOV to get an approximate measurement; the formula looks like this:
Math.tan((viewer.getCamera().fov / 2) * Math.PI / 180) * distanceBetweenCameraAndPlaneCut * 4;
but it is very dependent on the user facing the cut plane at a 90° angle, and I suspect there is a better way to do this with the measure tool.
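For what it's worth, the geometry behind that formula can be written out as follows. This is an illustrative Python sketch, assuming fov is the vertical field of view in degrees (as in three.js perspective cameras); the factor of 2 gives the full height of the visible area at the plane, and the * 4 in the original expression presumably folds in an extra application-specific scale factor:

```python
import math

def visible_height_at_plane(fov_degrees, distance):
    """Vertical extent of the view frustum at `distance` from the camera.
    Assumes a perspective camera looking straight at the plane (the 90°
    condition mentioned in the question)."""
    half_height = math.tan(math.radians(fov_degrees / 2.0)) * distance
    return 2.0 * half_height

# A 90-degree FOV at 10 units away: tan(45°) = 1, so the view is ~20 units tall.
print(visible_height_at_plane(90.0, 10.0))
```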
Thanks a lot for your time!
You should be able to run the Viewer on a server using puppeteer (without client-side components) to generate screenshots: https://forge.autodesk.com/blog/running-forge-viewer-headless-chrome-puppeteer
Note: you could also use the Design Automation API to do something similar, but then you are limited to the file formats the given product (e.g. AutoCAD) supports as input.
You could also simply save the state of the Viewer and reset it next time you load the same model in order to take a screenshot of the exact same area using getState()/restoreState(): https://adndevblog.typepad.com/cloud_and_mobile/2015/02/managing-viewer-states-from-the-api.html
Why are you trying to measure the distance between the cut plane and the camera position? Is that in order to restore the viewer state/camera? If so, then the getState()/restoreState() solution above should help.
I am training a Faster R-CNN (VGG-16 architecture) on the INRIA Person dataset. It was trained for 180,000 training steps. But when I evaluate the network, it gives varying results for the same image.
Here are the images:
I am not sure why it gives different results for the same set of weights. The network is implemented in Caffe.
Any insight into the problem is much appreciated.
The following image shows the different network losses:
Recently, I also prepared my own dataset for training and got results similar to yours.
Here is my experience, which I'll share with you:
First, check the input format, including the images and your bounding-box CSV or XML files (usually placed in the Annotations folder): are all bounding boxes (x1, y1, x2, y2) correct?
Then check the roidb/imdb loading Python script (located at FasterRCNN/lib/datasets/pascal_roi.py; yours is probably inria.py), and make sure _load_xxx_annotation() loads all bounding boxes correctly by printing bounding_box and filename. Importantly, if your script was copied and modified from pascal_roi.py or another prototype script, check whether it saves all ROI and image info into a cache file; if so, you need to delete that cache file whenever you change any configuration files and retry.
Finally, make sure all bounding boxes are generated correctly while the network is training (e.g. print the minibatch variable in FasterRCNN/lib/roi_data_layer/layer.py to show the filename and the corresponding x1, y1, x2, y2). If the ROI generator works correctly, the bounding boxes will not differ greatly from the ones you selected manually.
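The annotation check in the steps above can be sketched like this. The XML layout here is a hypothetical Pascal-VOC-style file, so adapt the tag names to whatever your INRIA conversion produces:

```python
import xml.etree.ElementTree as ET

# Hypothetical Pascal-VOC-style annotation standing in for one of your files.
ANNOTATION = """<annotation>
  <filename>person_001.png</filename>
  <object><name>person</name>
    <bndbox><xmin>48</xmin><ymin>20</ymin><xmax>110</xmax><ymax>200</ymax></bndbox>
  </object>
</annotation>"""

def check_boxes(xml_text):
    """Parse one annotation and verify every box satisfies x1 < x2, y1 < y2."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    boxes = []
    for obj in root.findall("object"):
        bb = obj.find("bndbox")
        x1, y1, x2, y2 = (int(bb.findtext(t))
                          for t in ("xmin", "ymin", "xmax", "ymax"))
        assert 0 <= x1 < x2 and 0 <= y1 < y2, \
            f"bad box in {filename}: {(x1, y1, x2, y2)}"
        boxes.append((x1, y1, x2, y2))
    return filename, boxes

print(check_boxes(ANNOTATION))
```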
Other similar issues may cause this problem as well.
I am trying to apply a polygon feature attribute (a zone code) from a shapefile to the patches. It should be easy enough with gis:apply-coverage, but the value prints out as NaN for all the patches (it should be 0, 1, 2, 3, etc., the value of the Zone_Code attribute).
I have already tried changing the minimum threshold value to 0.000001, and the zones are fairly large, so I don't think that is the problem. The rest of the shapefiles have worked with no problems, although I haven't used apply-coverage with them. I'm using NetLogo version 5.3.
code:
gis:set-coverage-minimum-threshold 0.000001
gis:apply-coverage zones-dataset "ZONE_CODE" landuse_type
ask patches
[ print landuse_type ]
It does seem like that should work, provided your shapefile's polygons cover some of your patches. Have you checked that the loaded zone shapefile has the correct projection/envelope, etc.? If the shapefile is offset from the NetLogo world, it may not overlay any patches. You can quickly confirm that your shapefile is at least intersecting your patches:
ask patches gis:intersecting zones-dataset [
set pcolor blue
]
If that doesn't turn any patches blue, your world envelope and the envelope of zones-dataset probably do not overlap. Otherwise, your code looks fine and worked for me with a made-up polygon shapefile.
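The envelope-overlap idea can be illustrated outside NetLogo. This Python sketch just shows the rectangle test, with made-up numbers standing in for what gis:world-envelope and gis:envelope-of zones-dataset would report (both give a list of minimum x, maximum x, minimum y, maximum y):

```python
def envelopes_overlap(a, b):
    """Envelopes as (min_x, max_x, min_y, max_y) tuples.
    True when the two rectangles intersect."""
    return a[0] <= b[1] and b[0] <= a[1] and a[2] <= b[3] and b[2] <= a[3]

world = (0.0, 100.0, 0.0, 100.0)            # hypothetical NetLogo world envelope
zones = (500000.0, 510000.0, 4.0e6, 4.1e6)  # e.g. a shapefile still in UTM meters

# No overlap: apply-coverage would hit no patches and yield NaN everywhere.
print(envelopes_overlap(world, zones))
```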
So here is my problem. I plan to implement a localized map for my college presenting all the locations, such as the main block, tech park, etc. Not only do I plan to develop a GUI, but I also want to run my own algorithms, such as finding the quickest route from one block to another (note: I will be writing the algorithm myself, since I don't want to treat the shortest route as the quickest, but want to add my own parameters as weights). I want to host the map locally (say, on an in-house system), and it should be able to handle real-time requests (displaying the route to the nearest cafeteria) and display current data (such as what event is taking place in which corner of the campus). I know the Google Maps API or the OpenStreetMap/OpenLayers APIs will enable me to build my own map, but can I run my own algorithms on them? Also, can I add elements that I have created and replace the traditional building/office components with my own?
You can do the following :
1. Export a part of OpenStreetMap from their website (go to the Export tab).
2. Use ElementTree in Python to parse the exported XML data.
3. Use networkx to add the parsed data to a graph.
4. Run your algorithms on it.
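The four steps above can be sketched end-to-end in Python. To keep the sketch self-contained, it uses a tiny hand-written OSM extract and a small stdlib Dijkstra in place of networkx; the node IDs and coordinates are made up, and the edge-weight function is exactly where you would plug in your own "quickest route" parameters:

```python
import heapq
import math
import xml.etree.ElementTree as ET

# Tiny hand-made OSM-style extract standing in for the XML you get
# from the openstreetmap.org Export tab (coordinates are fictitious).
OSM_XML = """<osm>
  <node id="1" lat="0.0" lon="0.0"/>
  <node id="2" lat="0.0" lon="1.0"/>
  <node id="3" lat="1.0" lon="1.0"/>
  <way id="10"><nd ref="1"/><nd ref="2"/><nd ref="3"/></way>
  <way id="11"><nd ref="1"/><nd ref="3"/></way>
</osm>"""

def build_graph(xml_text):
    """Step 2 + 3: parse nodes and ways, build an adjacency-dict graph."""
    root = ET.fromstring(xml_text)
    coords = {n.get("id"): (float(n.get("lat")), float(n.get("lon")))
              for n in root.findall("node")}
    graph = {node_id: {} for node_id in coords}
    for way in root.findall("way"):
        refs = [nd.get("ref") for nd in way.findall("nd")]
        for a, b in zip(refs, refs[1:]):
            # Plain Euclidean distance as the weight; replace this with
            # your own cost function (crowding, stairs, events, ...).
            w = math.dist(coords[a], coords[b])
            graph[a][b] = graph[b][a] = w
    return graph

def quickest_route(graph, start, goal):
    """Step 4: Dijkstra over the weighted graph."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = build_graph(OSM_XML)
# The direct way (1 -> 3) is shorter than going via node 2.
print(quickest_route(graph, "1", "3"))
```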