I was trying to make a distance-based connection between agents (people). It works in continuous space (see attached) but not in GIS space. In GIS space there is no option to select a network type under "Space and network type", so I created a function to do it myself. That function also works perfectly in continuous space but not in GIS space. Can anybody explain why this happens, or is there any way to do it in GIS space?
My GIS space has 190 regions, each with a random number of people placed at random locations (loaded from a database).
You need to define networks and connections yourself. The option you used so far is only a small helper; the actual capabilities are much larger, also in GIS:
Just use the custom "Agent Link" object and its properties. It is designed to connect agents in any space, also GIS.
Check the help and the example models to learn all about it; it is worth it :)
Final edit: this turned out to be a routing regression bug in AnyLogic and is being fixed in the next update.
I'm developing a town simulation with pedestrian agents moving around it. The GIS region data I use is loaded from a .osm file in foot-traffic mode, and I randomly spawn building agents around the region.
They walk around the city fine at first, but around halfway to their destination, they suddenly fly across the town at high speed in a straight line to where they're going. They seem to arrive at the GIS node closest to the building, and then walk the last couple of meters inside. I believe this occurs with every pedestrian I've tried.
I am using Dijkstra bidirectional path-finding (built into AnyLogic). I have tried A* bidirectional with the same results, as well as using a different .osm/.pbf file and trying both the fast and short route modes, with no luck.
I have a feeling it is a bug in the bidirectional pathfinding; however, AnyLogic doesn't allow non-bidirectional routing or offer a way to implement your own for GIS regions...
Any ideas would be appreciated.
-edit-
I came back to this bug and have determined it is 100% a path-finding bug. When creating a GIS route between 2 points using the AnyLogic online server, it works as expected and we can see a completed route. However, when using a loaded offline OSM or PBF file (I tried different map sources), I observe that the route it draws goes correctly halfway but then draws a straight line to the node closest to the destination, then a straight line to the destination. I have attached 2 pictures to demonstrate this.
Note that it doesn't make it halfway in distance; it is halfway in node count.
We can see from the pictures below that the nodes exist, but the routing ignores the second half of them. I'm sure it's not a disconnected network: I tested a lot, and sometimes it would route over a section of road fine, while other times it would fail on that same section.
So I suppose now my question is: how do I prevent or get around this? I tried using a custom GraphHopper router to get around the bidirectional routing and just use regular A* search, but didn't have any luck as the other algorithms I tried just made straight lines.
This is what happens when making the first point at the bottom:
This is what happens when making the first point at the top:
This is with online routing:
As Benjamin already stated, this probably happens because you have multiple networks; routing sometimes acts in unpredictable ways when the networks are not defined the way you want.
Search for "network" in the Projects section and make sure there is only one network.
For instance, in the following image you can see that there are 2 networks... find a way to connect things so that you end up with 1 network...
AnyLogic has now confirmed this to be a path-finding regression bug. It is being fixed in the next update.
I have to create a fictional map as a visual representation of a particular community in a video game that I would rather not disclose.
The map will have multiple landmarks, each of which needs a radius drawn around it, creating a circle. Each circle should change color depending on certain events, which occur roughly monthly.
The reason it is in Google Sheets is that the data collected from the game needs to be calculated and graphed alongside this interactive map.
I can confidently say that no one has done this sort of thing with Google Sheets before.
My skills are essentially beginner-level, so if some topics are outside the scope of a general amateur skillset, please describe them for me.
I first thought that I could simply overlap images, but an image in Google Sheets can't have its position changed through Apps Script the way it could in a coordinate system. Because there are many landmarks that need circles drawn around them, in several different colors from the palette, it's not practical to pre-render every possible combination of landmark and circle color and then display the correct one out of hundreds of possibilities.
Changing the colors can be dealt with by hand, as monthly changes are not difficult to perform.
If there is another, more suitable program for this job, please let me know!
What you have is GIS data (see GIS community at Stack Exchange), and you need to visualize it.
I don't know what kind of raw data you have, but you can probably load it directly into a GIS program like QGIS (link to site) to visualize it, or even do things like heat maps or aggregations without any programming.
If it doesn't have a specific feature that you need and there's no plugin that provides it, you can also make a content service (see documentation on how to do so) that returns GeoJSON with a point per landmark and the computed data as extra properties, then use the GIS program to visualize it. This obviously requires programming.
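A minimal Apps Script sketch of such a content service might look like the following. It assumes a sheet named 'Landmarks' with name/lat/lng/status columns; adjust the sheet name and column order to whatever your spreadsheet actually uses.

```javascript
// Web app entry point: returns the landmark sheet as GeoJSON.
// Assumes a sheet named "Landmarks" with columns: name, lat, lng, status.
function doGet() {
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Landmarks');
  const rows = sheet.getDataRange().getValues().slice(1); // drop the header row

  const geojson = {
    type: 'FeatureCollection',
    features: rows.map(function (row) {
      return {
        type: 'Feature',
        geometry: { type: 'Point', coordinates: [row[2], row[1]] }, // GeoJSON wants [lng, lat]
        properties: { name: row[0], status: row[3] }
      };
    })
  };

  return ContentService
    .createTextOutput(JSON.stringify(geojson))
    .setMimeType(ContentService.MimeType.JSON);
}
```

Deploy it as a web app and you can then point the GIS program at the deployment URL as a GeoJSON source.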
If for some reason you cannot install a program, you could also make a WebApp (see documentation on how to do so) and use a GIS library (Leaflet, for example) to visualize the data. In this case the data would be fetched via google.script.run. This requires even more programming.
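A rough sketch of that option (as an alternative to the ContentService one above): the server serves an index.html page and exposes a hypothetical getLandmarks() function; the client draws the circles with Leaflet. Since your map is fictional rather than geographic, Leaflet's CRS.Simple with an image overlay is one reasonable way to place things on it; the image name, bounds and column layout below are all placeholders.

```javascript
// Server side (Code.gs): serve the page and expose the sheet data to the client.
function doGet() {
  return HtmlService.createHtmlOutputFromFile('index');
}

function getLandmarks() { // hypothetical helper; rows are [name, y, x, color, radius]
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Landmarks');
  return sheet.getDataRange().getValues().slice(1);
}

// Client side (inside a <script> tag in index.html, after loading Leaflet's CSS/JS):
// one colored circle per landmark on top of the fictional map image.
var map = L.map('map', { crs: L.CRS.Simple, minZoom: -2 });
var bounds = [[0, 0], [1000, 1000]];          // assumed pixel size of the map image
L.imageOverlay('map.png', bounds).addTo(map); // 'map.png' is a placeholder
map.fitBounds(bounds);

google.script.run
  .withSuccessHandler(function (rows) {
    rows.forEach(function (row) {
      L.circle([row[1], row[2]], { radius: row[4], color: row[3] })
        .bindPopup(row[0])
        .addTo(map);
    });
  })
  .getLandmarks();
```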
References
Content Service (Google Developers)
HTML Service: Create and Serve HTML (Google Developers)
google.script.run (Google Developers)
QGIS webpage
Leaflet webpage
I have a hard time understanding how to use vis.js network with a large amount of dynamically generated data. From what I read in the documentation, there are only two easy ways to import data: from Gephi or in the DOT language, right? Isn't that a bit restrictive?
I have no knowledge of Gephi or the DOT language, so I decided to use my MySQL database, which I am used to working with.
So I query my data with PHP and generate JavaScript to build the nodes and edges for the network.
But so far I only have about 200 nodes and edges (roughly 1/5 of the data I'll have in the end) and it's already very slow to load. Displaying the network seems to take a lot of resources (my MacBook Pro gets really loud any time I open the network page), even though vis.js is supposed to be quick and lightweight.
Is that because all the nodes and edges are "written" into the code of the page? Or is it because I use PHP to query the MySQL data?
I'm not against working with a JSON file or the DOT language, I just have no idea how to do that... but if it gets me better performance, I'd like to learn how to do it. Can anyone explain in detail how it all works? And with either of these methods, can I get different sizes and colors for the nodes and edges according to the data I need to show (right now I do that in PHP after querying the database)?
The format required by Vis Network can be serialized and deserialized using const object = JSON.parse(string); and const string = JSON.stringify(object);. There's no need to use Gephi or DOT simply to store data in the database.
Nodes have a size property to change their size, and both nodes and edges have a color property to change their color. Edges can also inherit their color from connected nodes. For more details see the docs for nodes at https://visjs.github.io/vis-network/docs/network/nodes.html and edges at https://visjs.github.io/vis-network/docs/network/edges.html.
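As a rough sketch of the JSON route: have PHP output only data (json_encode of the node and edge arrays, with size/color already set per row) and let the page fetch it, rather than printing hundreds of JavaScript literals into the page source. The '/graph.php' endpoint name below is just a placeholder for your script.

```javascript
// Fetch the graph as JSON from the (placeholder) PHP endpoint and build the network.
fetch('/graph.php') // PHP side: echo json_encode(['nodes' => $nodes, 'edges' => $edges]);
  .then(function (response) { return response.json(); })
  .then(function (graph) {
    var container = document.getElementById('mynetwork');
    var data = {
      nodes: new vis.DataSet(graph.nodes), // each node: { id, label, size, color }
      edges: new vis.DataSet(graph.edges)  // each edge: { from, to, color }
    };
    var options = {
      physics: { stabilization: { iterations: 200 } } // limit stabilization work on bigger graphs
    };
    new vis.Network(container, data, options);
  });
```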
Regarding performance, there is not much I can tell you without some sample code and data to play with. I tried putting more than 200 nodes into https://thomaash.github.io/me/#/canvas, which was built with Vis Network. As expected, it loads instantly and works just fine, but I have no idea how fast or slow a MacBook Pro is compared to my machine.
I have data from which I want to create a GIS-type application with the typical features of adding and removing layers of different types. What is the best architectural approach?
The data consists of property locations in Eastings and Northings.
I also have Ordnance Survey data in GML and shapefiles.
I know this is a very broad question but the subject area also seems very broad to me and I am unsure which direction to go.
I was thinking of using SQL Server 2008 spatial and the Bing Maps Silverlight control to visualise the maps. To do this, would I have to convert the eastings and northings to the WGS84 geography data type? But then, if I converted the shapefiles to GML and imported all the GML files into SQL Server using GeomFromGML, they would be geometry data types. Wouldn't the two types be incompatible?
Also, should the ESRI ArcGIS API for Silverlight feature in the equation? Is this a good environment for creating maps that I can point at SQL Server 2008 as the data source (using a WCF service if necessary)?
Any advice greatly appreciated!
This is something I've done several times, using OS data from SQL Server in both Bing Maps AJAX and Silverlight controls. Some general comments below (in no particular order!):
Don't expect to implement full-blown GIS functionality using Bing Maps. Simple querying, retrieval and display of the data is all fine (+ some simple editing), but after that you'll be struggling with what can be achieved in the browser.
All vector shapes supplied to Bing Maps need to be in (geography) WGS84 coordinates, EPSG:4326. However, all data will be projected and displayed using the (projected) Spherical Mercator system, EPSG:3857.
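If you end up generating your own imagery (see the tile discussion below), you'll need to do that projection step yourself; the standard spherical Mercator formula is simple enough. A minimal sketch in JavaScript:

```javascript
// Convert a WGS84 longitude/latitude (EPSG:4326) into Spherical Mercator
// metres (EPSG:3857), the system the map actually draws in.
function toSphericalMercator(lon, lat) {
  var R = 6378137; // sphere radius used by the Spherical Mercator projection, in metres
  var x = R * (lon * Math.PI / 180);
  var y = R * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI / 180) / 2));
  return { x: x, y: y };
}

// e.g. toSphericalMercator(-0.1276, 51.5072) -> roughly { x: -14205, y: 6711500 } (central London)
```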
In terms of vector shapes, you can expect to achieve a similar level of performance as you get in the SSMS spatial results tab - that is, (with careful architecture) you can plot up to about 5,000 features on the map at once, zoom/pan around them, click on them to bring up various properties and attributes, etc. However, beyond that you'll find the UI becomes fairly unresponsive (which I imagine is the reason why the spatial results tab itself limits you to displaying 5,000 records at once).
If you want to display more features than this, one approach is to rasterize them: project them into the EPSG:3857 projection, create a .PNG/.JPG image of the features, cut that image into tiles according to the Bing Maps quadkey tile numbering system as explained here: http://msdn.microsoft.com/en-us/library/bb259689.aspx, and display them as a tile layer. Tile layers are significantly faster than displaying the equivalent vector shapes, although it means the data is static.
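The quadkey itself is easy to compute from the tile X/Y and zoom level; here's a small sketch following the scheme described at that link:

```javascript
// Build the Bing Maps quadkey for a tile: interleave the bits of tile X and Y,
// one base-4 digit per zoom level, most significant level first.
function tileXYToQuadKey(tileX, tileY, zoom) {
  var quadKey = '';
  for (var i = zoom; i > 0; i--) {
    var digit = 0;
    var mask = 1 << (i - 1);
    if ((tileX & mask) !== 0) digit += 1;
    if ((tileY & mask) !== 0) digit += 2;
    quadKey += digit;
  }
  return quadKey;
}

// e.g. tileXYToQuadKey(3, 5, 3) === "213"
```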
If you do create raster tiles, you can either render them dynamically or pre-render them to improve performance - i.e. you could set up a job to render and update the tileset for slowly-changing data every night/every month etc.
If you're talking about OS MasterMap data, the sheer level of detail involved means that you need to think more carefully about what features you want to display and how you want to display them. Take Greater London, for example, which covers an area of about 50km x 40km. To create raster tiles (each of which is 256px x 256px) at zoom level 19 covering this area, you'd need to render and store 1.3 million separate tiles. If each of those is generated from a database query that takes, say, 200ms to run, it's gonna take a looooonggggg time to prepare all your data. Also, once the files have been generated, you might want to think about storing them in a DB rather than saving them on a file system.
As for loading the OS data into SQL Server in the first place - there are several tools that will import either GML or shapefiles into SQL Server and handle the reprojection from EPSG:27700 (Ordnance Survey National Grid) to WGS84 along the way. Try GDAL/OGR or Safe FME for starters.
I have a blog at http://alastaira.wordpress.com with several posts that you may find useful, describing various aspects of integrating Bing Maps and SQL Server. In particular, you might want to look at:
http://alastaira.wordpress.com/2011/02/16/loading-ordnance-survey-open-data-into-sql-server-2008/
http://alastaira.wordpress.com/2011/01/23/the-google-maps-bing-maps-spherical-mercator-projection/
http://alastaira.wordpress.com/2011/02/21/using-ogr2ogr-to-convert-reproject-and-load-spatial-data-to-sql-server/
I have a JSON file that's roughly 480 MB of geolocation points. I was wondering if someone knows of a good 'pattern' to use when trying to visualise the data. The problem I'm encountering is that the data has to be loaded into Google Maps from the get-go. This is causing all kinds of obvious issues.
I don't have to do this through Google; it just seemed like the obvious choice.
With that much data, it may make more sense to handle it on the server side instead of the client side. You could set up GeoServer with your data points. Using OpenLayers, you could overlay your points from GeoServer on top of Google Maps, or potentially even on top of your own map if you want to cut out Google Maps altogether. The heavy-duty processing then happens on the server, and only images are displayed in the browser. This cuts down on network traffic and the amount of processing the browser has to do. If you set up GeoServer to do caching, the server won't even have to work very hard.
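As a rough sketch of the OpenLayers side (using the ol npm package; the GeoServer URL and the 'myworkspace:points' layer name are placeholders for whatever you publish, and an OSM base layer stands in for the "your own map" option):

```javascript
import Map from 'ol/Map';
import View from 'ol/View';
import TileLayer from 'ol/layer/Tile';
import OSM from 'ol/source/OSM';
import TileWMS from 'ol/source/TileWMS';
import { fromLonLat } from 'ol/proj';

// Base map plus a GeoServer WMS overlay: the points are rendered to image
// tiles on the server, so the browser never has to parse 480 MB of JSON.
const map = new Map({
  target: 'map',
  layers: [
    new TileLayer({ source: new OSM() }),
    new TileLayer({
      source: new TileWMS({
        url: 'http://localhost:8080/geoserver/wms',          // placeholder GeoServer URL
        params: { LAYERS: 'myworkspace:points', TILED: true } // placeholder layer name
      })
    })
  ],
  view: new View({ center: fromLonLat([0, 51.5]), zoom: 6 })
});
```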
It really depends on what kind of data this is.
If these are points for polylines or polygons, you might try encoding the points (http://code.google.com/apis/maps/documentation/utilities/polylinealgorithm.html and http://code.google.com/apis/maps/documentation/utilities/polylineutility.html). There are also functions you can use to encode the points. This will significantly reduce the size of your data.
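For instance, with the Maps API geometry library loaded (add &libraries=geometry to the script URL), you can encode and decode a path like this; the coordinates are the sample points from the algorithm page, and map is assumed to be an existing google.maps.Map:

```javascript
// Encode a path into a compact string, far smaller than a JSON array of coordinates.
const path = [
  new google.maps.LatLng(38.5, -120.2),
  new google.maps.LatLng(40.7, -120.95),
  new google.maps.LatLng(43.252, -126.453)
];
const encoded = google.maps.geometry.encoding.encodePath(path);
// encoded === "_p~iF~ps|U_ulLnnqC_mqNvxq`@" (the example from the algorithm page)

// Decode it again and draw it.
const decoded = google.maps.geometry.encoding.decodePath(encoded);
new google.maps.Polyline({ path: decoded, map: map });
```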
You might also want to consider loading data depending on the zoom level of the map. (I am not sure what you mean by "data has to be loaded from the get-go" - you can load data into the map in response to events, etc.)
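A sketch of that event-driven approach, assuming a hypothetical /points endpoint that filters by bounding box (and optionally zoom level) on the server:

```javascript
// Fetch only the points inside the current viewport, and only once the user
// stops moving the map ('idle' fires after a pan or zoom settles).
map.addListener('idle', () => {
  const bounds = map.getBounds();
  if (!bounds) return;
  const sw = bounds.getSouthWest();
  const ne = bounds.getNorthEast();

  const url = `/points?swLat=${sw.lat()}&swLng=${sw.lng()}` +
              `&neLat=${ne.lat()}&neLng=${ne.lng()}&zoom=${map.getZoom()}`;

  fetch(url)
    .then(response => response.json())
    .then(points => points.forEach(p =>
      new google.maps.Marker({ position: { lat: p.lat, lng: p.lng }, map })
    ));
});
```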
Fusion Tables, mentioned above, will only accept 100 MB of data.
I can be more specific if you explain the nature of your data and what you are trying to do in more detail. Hope this helps.
Try Google Fusion Tables