Is there a way to simplify shapefiles such that smaller polygons have more detail, and larger polygons have less detail?

I have a shapefile representing territories, some large and some small, that I would like to reduce in size. If I simplify the shapefile a lot, the larger territories are still easily recognisable, but the small ones lose a lot of detail and become hard to recognise.
Is there a simplification algorithm that will preferentially simplify the larger polygons?
I have been using https://mapshaper.org/ so far, but I am open to other tools.
Thanks
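One way to get this behaviour is to compute a per-feature simplification tolerance that grows with the polygon's size, then simplify each feature with its own tolerance. A minimal Python sketch (the shoelace area formula is standard; `base_tol` and the square-root scaling are assumptions you would tune for your data):

```python
import math

def polygon_area(ring):
    """Shoelace formula for the area of a closed ring of (x, y) points."""
    area = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def tolerance_for(ring, base_tol=0.001):
    """Scale the simplification tolerance with the polygon's linear size,
    so large territories are simplified more aggressively than small ones.
    The square root makes the tolerance proportional to the polygon's
    diameter rather than its area; base_tol is an arbitrary tuning knob."""
    return base_tol * math.sqrt(polygon_area(ring))
```

The resulting tolerance can be fed to any per-feature simplifier (for example Shapely's `geometry.simplify(tol)` or PostGIS's `ST_Simplify`), or used to bucket features by area into separate mapshaper layers that you simplify at different percentages.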

Related

Slicing up large heterogeneous images with binary annotations

I'm working on a deep learning project and have encountered a problem. The images that I'm using are very large and extremely detailed. They also contain a huge amount of necessary visual information, so it's hard to downgrade the resolution. I've gotten around this by slicing my images into 'tiles,' with resolution 512 x 512. There are several thousand tiles for each image.
Here's the problem: the annotations are binary and the images are heterogeneous. Thus, an annotation can be applied to a tile of the image that has no impact on the actual classification. How can I lessen the impact of tiles that are 'improperly' labeled?
One thought is to cluster the tiles with something like a t-SNE plot and compare the ratio of the binary annotations for different regions (or 'classes'). I could then assign weights to tiles based on where each one is located and use that as an extra layer in my training. I'm very new to all of this, so I wouldn't be surprised if that's an awful idea! Just thought I'd take a stab.
For background, I'm using transfer learning on Inception v3.
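The cluster-ratio weighting idea can be sketched without committing to a particular clustering method. Assuming you already have one cluster label per tile (from t-SNE plus k-means, say, computed elsewhere) and one binary annotation per tile, a hypothetical weighting could down-weight positive tiles that sit in clusters where positives are rare, since those are the ones most likely to have inherited the image-level label without showing the relevant pattern:

```python
from collections import defaultdict

def tile_weights(cluster_labels, annotations):
    """For each cluster of tiles, compute the fraction of positively
    annotated tiles, then weight every positive tile by its cluster's
    positive ratio. Negative tiles keep weight 1. This is a sketch:
    the clustering step and the choice to leave negatives untouched
    are assumptions, not an established recipe."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for c, y in zip(cluster_labels, annotations):
        totals[c] += 1
        positives[c] += y
    ratio = {c: positives[c] / totals[c] for c in totals}
    # A positive tile in a mostly-negative cluster gets a small weight.
    return [ratio[c] if y == 1 else 1.0
            for c, y in zip(cluster_labels, annotations)]
```

The weights could then be passed as per-sample weights to the loss during fine-tuning.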

High quality screen capture

I'm making a manual for a web-based app. I take screenshots and put them into Adobe Illustrator, and they lose their quality extremely fast when zooming in. Is there any way I can take high-resolution or vector-based screenshots that don't lose image quality when zoomed in? This seems to be a problem only with Illustrator; in Photoshop, when I zoom in it gets slightly fuzzy, but that's it.
There is no exact way to capture the screen as a vector, because the capture program has no knowledge of the underlying geometry.
You can, however, capture a raster image and convert it to a vector. There are numerous tools out there that will allow you to do this conversion, and then you will have to do a little bit of tweaking. But in reality you cannot take a capture as a vector, or convert one and have it be "pixel-perfect".
Hope this helps!
Zachary

What is a good size ratio for a 2D tile-based game's chunks? [Screen ratio or 1:1 ratio?]

I am making a 2D tile based game and I expect to have a really big world.
As I want movement between areas to be seamless I will obviously need to load the world in chunks.
So the question is:
- Is it better if my chunk size is based on my game's resolution?
- Is it better if my chunk size is a perfect square?
Let's have an example with simple numbers:
If my game's resolution is 1024x768 and my tiles are 32x32,
I can fit 32x24 tiles on one screen.
Let's say I'd like my chunks a bit bigger than the screen:
- Is it better to have a 128x128-tile chunk?
- Is it better to have a 128x96-tile chunk?
As far as I know either would do, but I'm afraid I might end up facing an unexpected problem if I choose the wrong one.
I think either direction you decide to take with handling chunk size, it is definitely going to be a wise decision to leave it abstracted enough to allow for some flexibility in size (if not for your own unit tests).
That being said, this is really a question of performance. Keeping textures/assets in powers of 2 was a good restriction back before dedicated GPUs were around. I'm not sure you'll see a huge difference between the two nowadays (although you might, since this is Flash), but it's usually a safe route to keep the tiles a power of 2. In the past, when working with rendering, keeping assets to a power of 2 meant everything would always divide evenly, and that saves on some computations.
Hope this helps! :)

Electrically charging edges in a force-based graph drawing algorithm?

I'm attempting to write a short mini-program in Python that plays around with force-based algorithms for graph drawing.
I'm trying to minimize the number of times lines intersect. Wikipedia suggests giving the lines an electrical charge so that they repel each other. I asked my physics teacher how I might simulate this, and she mentioned using calculus with Coulomb's Law, but I'm uncertain how to start.
Could somebody give me a hint on how I could do this? (Or alternatively, another way to tweak a force-based graph drawing algorithm to minimize the number of times the lines cross?) I'm just looking for a hint; no source code please.
In case anybody's interested: my source code and a YouTube video I made about it.
You need to explicitly include a term in your cost function that penalizes edge crossings. For example, for every pair of edges that cross, you incur a fixed penalty, or, if the edges are weighted, a penalty equal to the product of the two weights.
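A concrete form of that penalty term, sketched in Python. The fixed `penalty` constant is an arbitrary assumption (weighted edges would multiply the two edge weights instead), and the intersection test only counts proper crossings, ignoring shared endpoints and collinear overlaps:

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly intersect:
    each segment's endpoints lie strictly on opposite sides of the other."""
    d1 = _orient(p3, p4, p1)
    d2 = _orient(p3, p4, p2)
    d3 = _orient(p1, p2, p3)
    d4 = _orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def crossing_penalty(positions, edges, penalty=1.0):
    """Sum a fixed cost over every pair of distinct edges that cross.
    positions maps node -> (x, y); edges is a list of (u, v) pairs."""
    cost = 0.0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            u1, v1 = edges[i]
            u2, v2 = edges[j]
            if len({u1, v1, u2, v2}) < 4:
                continue  # edges sharing a node touch but do not "cross"
            if segments_cross(positions[u1], positions[v1],
                              positions[u2], positions[v2]):
                cost += penalty
    return cost
```

Adding this term to the layout's energy and re-evaluating it after each perturbation is enough for an annealing-style optimizer; differentiating it for a pure force simulation is harder, which is why the repulsive-charge approximation gets suggested.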

How to simplify (reduce number of points) in KML?

I have a similar problem to this post. I need to display up to 1000 polygons on an embedded Google map. The polygons are in a SQL database, and I can render each one as a single KML file on the fly using a custom HttpHandler (in ASP.NET), like this http://alpha.foresttransparency.org/concession.1.kml .
Even on my (very fast) development machine, it takes a while to load up even a couple dozen shapes. So two questions, really:
What would be a good strategy for rendering these as markers instead of overlays once I'm beyond a certain zoom level?
Is there a publicly available algorithm for simplifying a polygon (reducing the number of points) so that I'm not showing more points than make sense at a certain zoom level?
For your second question: you need the Douglas-Peucker Generalization Algorithm
For your first question: you could calculate the area of each polygon and relate each zoom level to a minimum area, so that as you zoom in or out, polygons disappear and markers appear depending on the zoom level.
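The Douglas-Peucker algorithm mentioned above fits in a few lines. A plain recursive Python sketch (fine for illustration; for a thousand server-side polygons you would likely reach for a library implementation instead):

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        return math.hypot(px - ax, py - ay)
    # |cross((b - a), (a - p))| / |b - a|
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def douglas_peucker(points, tolerance):
    """Keep the point farthest from the chord between the endpoints;
    if it deviates more than tolerance, recurse on both halves,
    otherwise replace the whole run with just the two endpoints."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], a, b)
        if d > dmax:
            index, dmax = i, d
    if dmax <= tolerance:
        return [a, b]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right  # drop the duplicated split point
```

Raising the tolerance with the zoom level (coarser at low zoom) directly answers the "more points than make sense" concern.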
For the second question, I'd use Mark Bessey's suggestion.
I don't know much about KML, but I think the usual solution to question #2 involves iterating over the points and deleting any line segments under a certain size. This will cause some "unfortunate" effects in some cases, but it's relatively fast and easy to do.
I would recommend 2 things:
- Calculate and combine polygons that are touching. This involves a LOT of processing and hard math, but I've done it so I know it's possible.
- Create your own overlay in PNG format instead of using KML, combining the polygons as in the previous suggestion. You'll have to create a LOT of PNGs, but it is blazing fast on the client.
Good luck :)
I needed a solution to your #2 question a while ago, and after looking at a few of the available line-simplification algorithms, I created my own.
The process is simple and it seems to work well, though it can be a bit slow if you don't implement it correctly:
P[0..n] is your array of points.
Let T[i] be defined as the triangle formed by points P[i-1], P[i], P[i+1].
Max is the number of points you are trying to reduce this line to.
1. Calculate the area of every possible triangle T[1..n-1] in the set.
2. Choose the triangle T[i] with the smallest area.
3. Remove the point P[i] to essentially flatten the triangle.
4. Recalculate the areas of the affected triangles T[i-1] and T[i+1].
5. Go to step 2 if the number of points > Max.
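For what it's worth, the procedure described here is essentially the Visvalingam-Whyatt algorithm. A direct, unoptimized Python sketch of the steps above (it rescans all triangles each pass, which is O(n^2); a heap keyed on triangle area is the usual speed-up):

```python
def triangle_area(a, b, c):
    """Area of triangle a-b-c via the cross product."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify_visvalingam(points, max_points):
    """Repeatedly remove the interior point whose triangle with its two
    neighbours has the smallest area, until only max_points remain.
    Endpoints are never removed."""
    pts = list(points)
    while len(pts) > max_points and len(pts) > 2:
        best_i, best_area = None, float("inf")
        for i in range(1, len(pts) - 1):
            area = triangle_area(pts[i - 1], pts[i], pts[i + 1])
            if area < best_area:
                best_i, best_area = i, area
        del pts[best_i]  # "flatten" the smallest triangle
    return pts
```

Usage: `simplify_visvalingam(ring, 50)` keeps the 50 most shape-defining points of a ring, which maps naturally onto a per-zoom-level point budget.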