So I was playing around with the Autodesk Forge demos on https://extract.autodesk.io/ and uploaded a few test files.
I noticed that the 3D viewer appears to struggle when rendering objects at large x, y, z values. For lack of a better word, normally straight/curved lines become "squiggly" and jump around wildly. I've done a lot of three.js work, and this happens when the original data isn't moved to local coordinates before rendering, so the single-precision vertex values lose accuracy at large magnitudes.
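To illustrate what I mean by local coordinates, here is a minimal three.js-style sketch of the recentering I'd normally do (the geometry and mesh variables are just illustrative, not Forge viewer internals):

    // Sketch: recenter the vertex data around a local origin so the Float32
    // vertex buffer keeps its precision, then carry the large offset in the
    // object's transform instead (composed in double precision on the CPU).
    const positions = geometry.getAttribute('position'); // THREE.BufferAttribute
    const offset = new THREE.Vector3();
    new THREE.Box3().setFromBufferAttribute(positions).getCenter(offset);

    for (let i = 0; i < positions.count; i++) {
      positions.setXYZ(
        i,
        positions.getX(i) - offset.x,
        positions.getY(i) - offset.y,
        positions.getZ(i) - offset.z
      );
    }
    positions.needsUpdate = true;

    // Put the mesh back at its original world location via its transform.
    mesh.position.copy(offset);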
Here is the original file:
https://extract.autodesk.io/explore/90377-test2dwg
Here is a more local version that I also uploaded:
https://extract.autodesk.io/explore/84705-testdwg
The strange thing is that in 2D they are both fine, but in 3D the first one renders very poorly. I was hoping that oViewer.model.getGeometryList().geoms[id].vb would hold local values so I could apply an offset to get the real x, y, z data back, but that doesn't seem to be the case.
Any idea if there is a fix for this coming soon? Is the vertex list in geoms used for rendering, or is it just a copy of the values created at render time? If I can edit some code and change it to a Float64Array, I'd be fine with that.
Thanks.
Final edit: this turned out to be a routing regression in AnyLogic and is being fixed in the next update.
I'm developing a town simulation with pedestrian agents moving around it. The GIS region data I use is loaded from a .osm file in foot-traffic mode, and I randomly spawn building agents around the region.
They walk around the city fine at first, but around halfway to their destination, they suddenly fly across the town at high speed in a straight line to where they're going. They seem to arrive at the GIS node closest to the building, and then walk the last couple of meters inside. I believe this occurs with every pedestrian I've tried.
I am using Dijkstra bidirectional path-finding (built into AnyLogic). I have tried A* bidirectional with the same results, as well as using a different .osm/.pbf file and trying both the fast and short route modes, with no luck.
I have a feeling it is a bug in the bidirectional path-finding; however, AnyLogic doesn't allow non-bidirectional routing or offer a way to implement your own routing for GIS regions...
Any ideas would be appreciated.
-edit-
I came back to this bug and have determined it is 100% a path-finding bug. When creating a GIS route between 2 points using the AnyLogic online server, it works as expected and we can see a completed route. However, when using a loaded offline OSM or PBF file (I tried different map sources), I observe that the route it draws goes correctly halfway but then draws a straight line to the node closest to the destination, then a straight line to the destination. I have attached 2 pictures to demonstrate this.
Note that the break isn't at the halfway point by distance; it's at the halfway point by node count.
We can see from the pictures below that the nodes exist, but the routing ignores the second half of them. I'm sure it's not a disconnected network; I tested a lot, and sometimes it would route over a section of road fine, while other times it would fail on that same section.
So I suppose now my question is: how do I prevent or get around this? I tried using a custom GraphHopper router to get around the bidirectional routing and just use a regular A* search, but I didn't have any luck, as the other algorithms I tried just drew straight lines.
This is what happens when making the first point at the bottom:
This is what happens when making the first point at the top:
This is with online routing:
As Benjamin already stated, this probably happens because you have multiple networks; routing can act in unpredictable ways when the networks are not defined the way you want.
Search for "network" in the Projects section and make sure there is only one network.
For instance, in the following image you can see that there are two networks... find a way to connect things so that you end up with one network...
This has just been confirmed by AnyLogic to be a path-finding regression. It is being fixed in the next update.
I have a hard time understanding how to use vis.js network with a large amount of dynamically generated data. From what I read in the documentation, there are only two easy ways to import data: from Gephi or in the DOT language, right? Isn't that a bit restrictive?
I have no knowledge of Gephi or the DOT language, so I decided to use my MySQL database, which I am used to working with.
So I query my data with PHP and generate JavaScript to build the nodes and edges for the network.
But so far I only have about 200 nodes and edges (roughly 1/5 of the data I'll have in the end), and it's already very slow to load. It seems to take a lot of resources to display the network (my MacBook Pro gets really loud any time I open the network page), even though vis.js is supposed to be quick and lightweight.
Is that because all the nodes and edges are "written" in the code of the page? Or is it the fact that I use PHP to query the MySQL data?
I'm not against working with a JSON file or the DOT language, I just have no idea how to do that... but if it gets me better performance, I'd like to learn how. Can anyone explain in detail how it all works? And with either of these methods, can I get different sizes and colors for the nodes and edges according to the data I need to show (right now I do that in PHP after querying the data from the database)?
The format required by Vis Network can be serialized and deserialized using const object = JSON.parse(string); and const string = JSON.stringify(object);. There's no need to use Gephi or DOT just to store data in the database.
Nodes have a size property to change their size, and both nodes and edges have a color property to change their color. Edges can also inherit their color from connected nodes. For more details see the docs for nodes at https://visjs.github.io/vis-network/docs/network/nodes.html and edges at https://visjs.github.io/vis-network/docs/network/edges.html.
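As an example, here's a minimal sketch of feeding Vis Network with plain JSON fetched from the server instead of generating the JavaScript inline (the /graph.json endpoint and the weight/category fields are placeholders for whatever your PHP script returns):

    // Fetch the graph as JSON from the server instead of embedding it in the page.
    fetch('/graph.json') // hypothetical PHP endpoint returning {"nodes": [...], "edges": [...]}
      .then((response) => response.json())
      .then((data) => {
        const nodes = new vis.DataSet(
          data.nodes.map((n) => ({
            id: n.id,
            label: n.label,
            shape: 'dot',
            size: n.weight,                      // node size driven by your data
            color: n.category === 'A' ? '#97c2fc' : '#fb7e81'
          }))
        );
        const edges = new vis.DataSet(
          data.edges.map((e) => ({
            from: e.from,
            to: e.to,
            color: { inherit: 'from' }           // or a fixed color string
          }))
        );
        const container = document.getElementById('mynetwork');
        new vis.Network(container, { nodes, edges }, {});
      });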
Regarding performance, there is not much I can tell you without some sample code and data to play with. I tried putting more than 200 nodes into https://thomaash.github.io/me/#/canvas, which was built with Vis Network. As I expected, it loads instantly and works just fine, but I have no idea how fast or slow a MacBook Pro is compared to my machine.
I don't know if this is the right place to ask, but since Autodesk redirects here from their 'get help' page, I'm trying anyway...
We have a couple of Autodesk models that we display using their viewer. We also have a couple of 'presets' configured: camera positions based on the XYZ of the camera and the XYZ of the target the camera is looking at. When you activate a preset, the camera moves to that XYZ and the target is set as well.
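To give an idea of how the presets are applied, here is a simplified sketch (savedPreset is our own stored object, not part of the Viewer API):

    // Simplified sketch: savedPreset is our own stored object holding the
    // XYZ of the camera and of the target it looks at.
    function applyPreset(viewer, savedPreset) {
      const position = new THREE.Vector3(
        savedPreset.camera.x, savedPreset.camera.y, savedPreset.camera.z);
      const target = new THREE.Vector3(
        savedPreset.target.x, savedPreset.target.y, savedPreset.target.z);

      // Move the (perspective) camera to the stored position and aim it at the target.
      viewer.navigation.setView(position, target);
    }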
This worked fine until this weekend (23-04-2018): the positions are completely off... e.g. one of the presets used to center the viewable area on a specific part of the model, and now the model appears zoomed out about 50 times and at a different angle (we are using the perspective camera). I'm not sure what the cause is, but if I had to guess, it would be that the source DWG was automatically re-parsed and the center of the derived model shifted, making the stored XYZ coordinates useless.
Obviously we can reinitialise the presets, but since I don't know what caused this, I'm unsure whether that's just wasted time. So the question: does anyone know what the cause of this is, and can we avoid it?
Ahh, OK, it seems to be related to the version of the viewer, although I'm still unsure why it switched (there was no new version released, AFAIK). Manually specifying
<script src="https://developer.api.autodesk.com/modelderivative/v2/viewers/viewer3D.min.js?v=v4.1.0"></script>
fixed it.
There were indeed some changes that might have affected scenes that manipulate positioning (be it the camera or a component).
The scene I usually use to illustrate component transformations, http://giro-watch.tk/, was "broken" when I updated to the latest Viewer version.
If you reference the Forge Viewer library without specifying the version:
... src="https://.../v2/viewers/viewer3D.min.js?v=v4.1"></script>
that is, omitting ?v=v4.1,
you will get the latest version of it, and since the Forge Viewer recently moved from v3 to v4, some breaking changes were expected.
However, between you and the Autodesk server there may be a couple of services that cache this file, which is why your project may have kept working fine even after the Forge Viewer was updated and only broke now, when the caches were cleaned/renewed.
This is why we recommend always pinning the version in production code.
I've been playing with Mapbox's blog post about converting heat maps to contour lines, and I'm stuck at the extraction part of this process.
I used the CSV plugin to create a vector layer of points and then a heat-map raster layer based on it, but I can't seem to see the contour lines generated from that raster layer. When I looked at the layer properties and selected 'Categorized', no symbols were listed.
I'm guessing this is probably some kind of type error, since I had to create the raster layer based on map units rather than meters, but I don't know how to correct it. What am I missing?
Okey dokey. After the holiday weekend I figured it out.
It was basically a type error with the CRS. Although it's not explicitly stated, I got in touch with Mapbox and found out they were working in Google Mercator. That is also a perfectly acceptable CRS to use when you eventually have to import the .shp file into Mapbox/TileMill.
I am looking into plotting a very large data set. I've tried FLOT, FLOTR and PROTOVIS (and other JS-based packages), but there is one constant problem I'm faced with. I've tested 1600, 3000, 5000, 8000 and 10k points on a 1000w × 500h graph, all of which render within a reasonable time in PC browsers (IE and FF). But when rendered in FF/Safari on a Mac, the page becomes significantly slow and/or crashes starting at around 500 data points.
Has anyone come across this issue?
Yes, don't do that. It seems pretty unlikely to me that 10k points are actually going to be visible/useful to the user all at once.
You should aggregate your data (server-side), and then, if the user wants to zoom in on an area of the data, use AJAX requests to fetch that area and replot.
If you use flot, they have examples showing selection, e.g. here: http://people.iola.dk/olau/flot/examples/zooming.html
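A rough sketch of that pattern with flot's selection plugin, assuming a server endpoint (here called /data, purely hypothetical) that returns pre-aggregated points for a given x range, and with initialSeries standing in for whatever aggregated overview data you already have:

    // Plot an aggregated overview first, then re-fetch a finer-grained slice
    // whenever the user selects an x range.
    var options = { selection: { mode: 'x' } };
    var plot = $.plot($('#placeholder'), [initialSeries], options);

    $('#placeholder').on('plotselected', function (event, ranges) {
      $.ajax({
        url: '/data',                          // hypothetical aggregation endpoint
        data: { from: ranges.xaxis.from, to: ranges.xaxis.to },
        dataType: 'json',
        success: function (series) {
          // Replot only the requested window at higher resolution.
          plot = $.plot($('#placeholder'), [series], options);
        }
      });
    });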
(I can't comment on Ryley's answer yet, which is why I'm putting some remarks here.)
What about offline use? HTML is a great format for documents, setting aside the server/client stuff.
JavaScript, Canvas and all those fancy client-side technologies could be used to build nice interactive files, like data reports containing graphs with zoom and pan features ...