Effective data structure for overlapping spatial areas - language-agnostic

I'm writing a game where a large number of objects will have "area effects" over a region of a tiled 2D map.
Required features:
- Several of these area effects may overlap and affect the same tile
- It must be possible to very efficiently access the list of effects for any given tile
- The area effects can have arbitrary shapes but will usually be of the form "up to X tiles distance from the object causing the effect" where X is a small integer, typically 1-10
- The area effects will change frequently, e.g. as objects are moved to different locations on the map
- Maps could be potentially large (e.g. 1000*1000 tiles)
What data structure would work best for this?

Providing you really do have a lot of area effects happening simultaneously, and that they will have arbitrary shapes, I'd do it this way:
- when a new effect is created, it is stored in a global list of effects (not necessarily a global variable, just something that applies to the whole game or the current game-map)
- it calculates which tiles it affects, and stores a list of those tiles against the effect
- each of those tiles is notified of the new effect, and stores a reference back to it in a per-tile list (in C++ I'd use a std::vector for this, something with contiguous storage, not a linked list)
- ending an effect is handled by iterating through the interested tiles and removing references to it, before destroying it
- moving it, or changing its shape, is handled by removing the references as above, performing the change calculations, then re-attaching references in the tiles now affected

You should also have a debug-only invariant check that iterates through your entire map and verifies that the list of tiles in the effect exactly matches the tiles in the map that reference it.
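A minimal C++ sketch of this bookkeeping, with hypothetical Tile and Effect types (the shape calculation itself is left out):

#include <algorithm>
#include <vector>

struct Effect;

struct Tile {
    std::vector<Effect*> effects;      // per-tile list, contiguous storage
};

struct Effect {
    std::vector<Tile*> affectedTiles;  // every tile this effect touches
};

// Attach: record the tiles in the effect and the effect in each tile.
void attach(Effect& e, const std::vector<Tile*>& tiles) {
    e.affectedTiles = tiles;
    for (Tile* t : tiles)
        t->effects.push_back(&e);
}

// Detach: remove the back-references before destroying or moving the effect.
void detach(Effect& e) {
    for (Tile* t : e.affectedTiles)
        t->effects.erase(std::remove(t->effects.begin(), t->effects.end(), &e),
                         t->effects.end());
    e.affectedTiles.clear();
}

Moving or reshaping an effect is then just detach, recompute the tile list, attach.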

Usually it depends on the density of your map.
If you know that every tile (or the major part of the tiles) contains at least one effect, you should use a regular grid – a simple 2D array of tiles.
If your map is sparsely filled and there are a lot of empty tiles, it makes sense to use a spatial index such as a quad-tree, R-tree, or BSP tree.

Usually BSP-Trees (or quadtrees or octrees).

Some brute force solutions that don't rely on fancy computer science:
1000 x 1000 isn't too large - just a meg. Computers have gigs. You could have a 2D array. Each bit in the bytes could be a 'type of area'. The 'affected area' that's bigger could be another bit. If you have a reasonable number of different types of areas you can still use a multi-byte bit mask. If that gets ridiculous you can make the array elements pointers to lists of overlapping area-type objects. But then you lose efficiency.
You could also implement a sparse array - using a hashtable keyed off of the coords (e.g., key = 1000*x + y) - but this is many times slower.
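A rough sketch of both layouts, using the sizes and key scheme suggested above (names are just illustrative):

#include <cstdint>
#include <unordered_map>
#include <vector>

constexpr int kMapSize = 1000;

// Dense version: one byte per tile, each bit marking a 'type of area'.
std::vector<std::uint8_t> grid(kMapSize * kMapSize, 0);

void setAreaBit(int x, int y, int areaType) {   // areaType in [0, 7]
    grid[y * kMapSize + x] |= std::uint8_t(1u << areaType);
}

// Sparse version: a hashtable keyed off the coords, as described above.
std::unordered_map<int, std::uint8_t> sparseGrid;

void setSparseBit(int x, int y, int areaType) {
    sparseGrid[1000 * x + y] |= std::uint8_t(1u << areaType);
}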
Of course, if you don't mind coding the fancy computer science ways, they usually work much better!

If you have a known maximum range for each area effect, you could store only the actual effect sources, in a data structure of your choosing that's optimized for normal 2D collision testing.
Then, when checking for effects on a tile, simply check (collision detection style, optimized for your data structure) for all effect sources within the maximum range, and then apply a defined test function (for example, if the area is a circle, check whether the distance is less than a constant; if it's a square, check whether the x and y distances are each within a constant).
If you have a small (<10) number of effect "field" shapes, you can even do a unique collision detection pass for each effect field type, within their pre-computed maximum ranges.
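A minimal sketch of the narrow-phase test function described here, with hypothetical names (the broad-phase query that finds nearby sources is whatever your collision structure provides):

#include <cstdlib>

enum class Shape { Circle, Square };

struct EffectSource {
    int x, y;     // tile position of the source
    int range;    // the constant X: "up to X tiles distance"
    Shape shape;
};

// Applied to each source returned by the broad-phase range query.
bool affectsTile(const EffectSource& s, int tx, int ty) {
    int dx = tx - s.x, dy = ty - s.y;
    switch (s.shape) {
        case Shape::Circle:  // distance less than a constant
            return dx * dx + dy * dy <= s.range * s.range;
        case Shape::Square:  // x and y distances each within a constant
            return std::abs(dx) <= s.range && std::abs(dy) <= s.range;
    }
    return false;
}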

Related

Calculate distance between camera and different sized objects

I have been trying to develop a small object detection system for my college project.
The main idea is that I have a robot that can pick one particular "object" from the surroundings; for this purpose I am using only a single camera, with known intrinsic parameters.
I have already developed an object detection system which can predict bounding box coordinates.
Using these coordinates and the size of the bounding boxes, I am able to estimate perceived depth using the "triangle similarity" method.
The problem that I am facing is that this particular "object" can vary in size, which means that objects located at the same distance can also have different sized bounding boxes.
What other way could I use to get a rough distance estimate from the camera to the object, given that the object doesn't have a fixed size?
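For reference, the triangle-similarity estimate described above reduces to a one-liner; it assumes the object's real-world width is known, which is exactly the assumption that breaks here (names hypothetical):

// Pinhole model: Z = f * W / w, with f the focal length in pixels,
// W the real object width, and w the bounding-box width in pixels.
double estimateDistance(double focalLengthPx, double realWidthMeters,
                        double bboxWidthPx) {
    return focalLengthPx * realWidthMeters / bboxWidthPx;
}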
Cannot be done in general, since scale information is lost in camera projection.
Depending on your particular case, you may be able to use more indirect methods to infer distance. For example, if the subject rests on a ground plane, you may be able to exploit knowledge of the shape and size of patterns on that floor. More sophisticated methods were analyzed many years ago - the general subject goes under the heading of "single-view metrology". A good reference is Antonio Criminisi's 1999 PhD thesis.
As suggested above, you cannot get the absolute depth of objects from a monocular camera (single view).
I would suggest trying the following approaches:
- Use some reference scale attached to each object, e.g. you can add and detect an ArUco marker on each object and find the corresponding object's orientation and depth.
- The above approach might not be feasible if you have an unknown number of objects; in that case, you can use deep learning based models for monocular depth estimation.

Isometric depth sorting issue with big objects

I'm currently building an AS3 isometric game, but I'm having a lot of problems with depth sorting. I've searched for a solution, but didn't find anything that matches my problem (rectangular objects).
Here is a screenshot of my game:
As you can see, depth sorting works well when it's between 1x1 tiles objects. I simply use their x and y coordinates (relative to the isometric map) to sort them.
The problem comes when I have bigger objects, like 2x2 or 1x4 or 4x1.
Any idea how should I handle depth sorting then?
I don't think it is possible to sort a scene based on a single x,y value for each object if some of them can be long enough that one end should be at a different depth than the other. For instance, consider how you'd handle the rendering if the brown chair in your picture was moved one square down-left (to the square between the blue chair and the long couch). It would be deeper in the scene than the red table behind the couch, but would need to be rendered on top of the couch, which would need to be on top of the table.
I think there are two simple solutions:
1. Design your levels using only one sort of overlap for large objects. For instance, you could specify that an object's depth is based on its nearest corner, which would require you to avoid putting things in front of its most distant bits (since it will render on top of them). Or you could stick with your current code (which seems to use the most distant corner for depth) and avoid putting anything behind the nearer parts. You may still have trouble with characters and other objects that move around, though. You might be able to make the troublesome tiles inaccessible if you're careful with your design, but in some cases this may be too restrictive.
2. Break up your large objects into smaller ones which would have their own depths. You will probably want to go right down to 1x1 pieces, each of which will have an unambiguous depth. You might choose to keep the larger objects in the code as invisible containers for the smaller pieces, or they could be eliminated entirely, whichever makes it easier for you to load up and enable interaction with the various bits.
Splitting larger objects into 1x1-sized pieces can also be nice, since you can make them modular. That is, you can build differently sized objects by putting together 1x1 pieces in different combinations. If you cut the 2x1 tables in your image in half vertically, for instance, and created a 1x1 middle tile that fit in between them, you could stretch the design out to 3x1 or 10x1, depending on how many times you repeat the middle tile. There are a lot of other ways to make tiled graphics look good with only a modest amount of art required.
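A minimal sketch of the per-piece sort this implies, assuming each 1x1 piece carries its map coordinates and an altitude within its tile's stack (names hypothetical; the depth key x + y is one common choice for iso maps):

#include <algorithm>
#include <vector>

struct Piece {       // one 1x1 fragment of a (possibly larger) object
    int mapX, mapY;  // isometric map coordinates
    int altitude;    // position in the tile's vertical stack
};

// Painter's algorithm: draw pieces further "up" the map first. For 1x1
// pieces, mapX + mapY is an unambiguous depth key.
void sortForRendering(std::vector<Piece>& pieces) {
    std::sort(pieces.begin(), pieces.end(),
              [](const Piece& a, const Piece& b) {
        if (a.mapX + a.mapY != b.mapX + b.mapY)
            return a.mapX + a.mapY < b.mapX + b.mapY;
        return a.altitude < b.altitude;  // lowest altitude drawn first
    });
}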
Ultima Online emulators (specifically, POL, though there may be others) achieve this through the implementation and usage of the concept of a 'multi' -- a single object comprised of sections of cut-up larger graphics. These cut-up graphics are such that their sprites are vertically-split at the left- and right-corner points of iso grid boundaries.
Other considerations:
- Render 'multi' pieces sorted along the screen-Y axis from top to bottom.
- The southern (i.e. screen bottom-left) component of a 'multi' becomes the anchoring tile position (in the case of your couch, its left-most piece).
- Consider that each map location can also hold its own vertical stack of objects; offsetting each object's render by screen-Y simulates height/altitude, and these must be sorted bottom-to-top (e.g. lowest altitude to highest altitude).
Good luck!

Store a "routine" which, given some input, generates a 3d model

Well, it's the time of the year where I get busy on my next-generation, cutting-edge R&D project (just for the fun of it... and maybe some profit eventually).
This time, I've had a great idea for a service, which unfortunately I can't detail much.
However, a major part of this project is the ability to generate a 3d model out of certain input criteria. The generated model must be different on each generation.
As such, this is much different than the static models used in games - I think I will have to store actual code more than just model coords.
To give an example of some output:
var apple = new AppleGenerator();
apple->set_size_between(30, 50); // these two numbers are just samples...
apple->set_seeds_between(3, 8); // apple must have at least 3 seeds*
var apple_model = apple->generate();
// * I realize seeds may not be exactly part of the model, but I can't think of anything else
So I need to tackle some points here:
How do I store these models as data?
Do you know of any tools that may help?
I need to incorporate a randomness factor (for example, the apples would have slightly different shapes each time)
I suppose math will play a good part here, but since these are complex shapes, it's going to be infeasible to cook up the necessary formulae for each model, right?
Also, textures must be relevant to each part of the model, as well as making the model look random (e.g. I could be detailing a generated apple that's 40 to 60 percent red, with the rest green).
This is in fact not a simple task. The solution varies a LOT depending on the complexity and variety of the objects you are trying to create.
Let's consider a few cases though:
Object is more or less known:
The most simple case is, to have a 3d model in the conventional way, and then randomize it a bit. Take the apple for example. The randomization can vary from the size of the apple to its texture colors to fruit damage.
All your objects can be described using NURBS surfaces:
In this case, you need to store enough data for the surface to be able to be generated, where of course this data can be randomized a bit.
Your objects have rotational symmetry:
In this case, generating a single curve and rotating it around an axis can give you a shape. An apple is an example. You would need to store only the curve data, and randomizing the shape could be done either on the curve (keeping symmetry) or on the final mesh.
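As a sketch of this rotational-symmetry case (a simple lathe, with hypothetical names): sweep a profile curve around the Y axis and jitter the radius slightly so each generated shape differs:

#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

// Sweep a 2D profile (one radius per height step) around the Y axis,
// jittering each radius so every generated apple differs slightly.
std::vector<Vec3> lathe(const std::vector<float>& profileRadii,
                        int segments, float jitter) {
    std::vector<Vec3> vertices;
    for (std::size_t i = 0; i < profileRadii.size(); ++i) {
        float y = static_cast<float>(i);  // height along the axis
        for (int s = 0; s < segments; ++s) {
            float angle = 2.0f * 3.14159265f * s / segments;
            float noise = 1.0f + jitter * (std::rand() / (float)RAND_MAX - 0.5f);
            float r = profileRadii[i] * noise;  // randomized radius
            vertices.push_back({r * std::cos(angle), y, r * std::sin(angle)});
        }
    }
    return vertices;  // stitching adjacent rings into triangles is omitted
}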
On textures
This is way more complicated than the mesh generation, mainly because textures carry much more information than meshes (they are more detailed). You can have many texture generation strategies. In the case of your apple, you could select a few vertices, give them colors (one red, one green, another red, etc.) and interpolate the other vertex colors. This creates a smooth transition of colors which may look nice on an apple. If you are generating a knife, however, that just looks terrible.
In most cases, you need to be aware of which part of your mesh represents what, and generate the texture part by part. In the knife example above, you can generate the mesh in two steps, blade and handle, with each part's texture generated separately.
Conclusion
You can have a mixture of these, of course. A MeshGenerator class can take the data and, based on whichever type it is, generate a mesh accordingly. Perhaps the first solution for object creation is the most suitable, as any complicated object can be more easily defined by its triangles than by NURBS.
Take a look at some of the basic architectural principles used to code Spore, the video game about evolving living creatures: http://chrishecker.com/My_liner_notes_for_spore
Here's an example of how to XML-serialize a mesh, along with some random morph behavior: http://www.ogre3d.org/tikiwiki/Morph+animation#The_XML_format_of_meshes_with_morph_animation
To make your apples all a bit different, you can apply a random transformation (or deformation). See for example: http://wiki.blender.org/index.php/Doc:2.4/Manual/Modifiers/Deform/MeshDeform
You want to use an established file format to avoid strange problems. It's more geometry than pure math. Your generate function would plot the polygons, and then your save method would interact with the formats.
https://stackoverflow.com/questions/441388/most-common-3d-model-format

Element point map for html5 canvas element, need algorithm

I'm currently working on a pure HTML5 canvas implementation of the "flying tag cloud sphere", which many of you have undoubtedly seen as a Flash object on some pages.
The tags are drawn fine, and the performance is satisfactory, but there's one thing about the canvas element that's kind of breaking this idea: you can't identify the objects that you've drawn on a canvas, as it's just a simple flat "image".
What I have to do in this case is catch the click event, and try to "guess" which element was clicked. So I would have to have some kind of matrix, which stores a link to a tag object for each pixel on the canvas, AND I'd have to update this matrix on every redraw. Now this sounds incredibly inefficient, and before I even start trying to implement it, I want to ask the community - is there some "well known" algorithm that would help me in this case? Or maybe I'm just missing something, and the answer is right around the corner? :)
This is called the point location problem, and it's one of the basic topics in computational geometry. There are a lot of methods you could use that would be much faster than the approach you're thinking of, but the details depend on what exactly you want to accomplish.
For example, each text string is contained in a bounding box. Do you just want to test whether the user clicked somewhere in that box? Then simply store the minimum and maximum coordinates of each rendered string, and test the point against each bounding box to see if it's contained in that range. If you have a large number of points to test, you can build any number of data structures to speed this up (e.g. R-trees), but for a single point the overhead of constructing such a structure probably isn't worthwhile.
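A minimal sketch of that bounding-box test (hypothetical types; the coordinates are recorded at render time):

#include <string>
#include <vector>

struct TagBox {   // min/max canvas coordinates stored when the tag was drawn
    std::string tag;
    float minX, minY, maxX, maxY;
};

// Return the clicked tag, or nullptr. Later-drawn boxes are checked first,
// so the topmost tag wins when boxes overlap.
const TagBox* hitTest(const std::vector<TagBox>& boxes, float px, float py) {
    for (auto it = boxes.rbegin(); it != boxes.rend(); ++it) {
        if (px >= it->minX && px <= it->maxX &&
            py >= it->minY && py <= it->maxY)
            return &*it;
    }
    return nullptr;
}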
If you care about whether the point actually falls within the opaque area of the stroked characters, the problem is slightly trickier. One solution would be to use the bounding box approach to first eliminate most of the possibilities, and then render the remaining strings one at a time to an offscreen buffer, checking each time to see if the target point has been touched.

How to simplify (reduce number of points) in KML?

I have a similar problem to this post. I need to display up to 1000 polygons on an embedded Google map. The polygons are in a SQL database, and I can render each one as a single KML file on the fly using a custom HttpHandler (in ASP.NET), like this http://alpha.foresttransparency.org/concession.1.kml .
Even on my (very fast) development machine, it takes a while to load up even a couple dozen shapes. So two questions, really:
What would be a good strategy for rendering these as markers instead of overlays once I'm beyond a certain zoom level?
Is there a publicly available algorithm for simplifying a polygon (reducing the number of points) so that I'm not showing more points than make sense at a certain zoom level?
For your second question: you need the Douglas-Peucker Generalization Algorithm
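For reference, a compact recursive sketch of Douglas-Peucker (eps is the maximum allowed deviation from the chord; names are illustrative):

#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the line through a and b.
static double perpDist(const Pt& p, const Pt& a, const Pt& b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::hypot(dx, dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Keep the farthest point if it deviates more than eps from the chord,
// then recurse on both halves.
static void dp(const std::vector<Pt>& in, std::size_t lo, std::size_t hi,
               double eps, std::vector<Pt>& out) {
    double maxD = 0.0;
    std::size_t idx = lo;
    for (std::size_t i = lo + 1; i < hi; ++i) {
        double d = perpDist(in[i], in[lo], in[hi]);
        if (d > maxD) { maxD = d; idx = i; }
    }
    if (maxD > eps) {
        dp(in, lo, idx, eps, out);
        out.push_back(in[idx]);
        dp(in, idx, hi, eps, out);
    }
}

std::vector<Pt> douglasPeucker(const std::vector<Pt>& pts, double eps) {
    std::vector<Pt> out;
    if (pts.empty()) return out;
    out.push_back(pts.front());
    if (pts.size() > 1) {
        dp(pts, 0, pts.size() - 1, eps, out);
        out.push_back(pts.back());
    }
    return out;
}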
For your first question: could you calculate the area of each polygon and relate each zoom level to a particular minimum area, so that as you zoom in or out, polygons disappear and markers appear depending on the zoom level?
For the second question, I'd use Mark Bessey's suggestion.
I don't know much about KML, but I think the usual solution to question #2 involves iterating over the points, and deleting any line segments under a certain size. This will cause some "unfortunate" effects in some cases, but it's relatively fast and easy to do.
I would recommend 2 things:
- Calculate and combine polygons that are touching. This involves a LOT of processing and hard math, but I've done it so I know it's possible.
- Create your own overlay in PNG format instead of using KML, combining the polygons as in the previous suggestion. You'll have to create a LOT of PNGs, but it is blazing fast on the client.
Good luck :)
I needed a solution to your #2 question a little bit ago and after looking at a few of the available line-simplification algorithms, I created my own.
The process is simple and it seems to work well, though it can be a bit slow if you don't implement it correctly:
P[0..n] is your array of points.
Let T[i] be defined as the triangle formed by points P[i-1], P[i], P[i+1].
Max is the number of points you are trying to reduce this line to.
1. Calculate the area of every possible triangle T[1..n-1] in the set.
2. Choose the triangle T[i] with the smallest area.
3. Remove the point P[i] to essentially flatten the triangle.
4. Recalculate the areas of the affected neighbouring triangles T[i-1] and T[i+1].
5. Go to step 2 if the number of points > Max.
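A direct (if naive, O(n^2)) C++ rendering of those steps, keeping the endpoints and rescanning for the smallest triangle each pass (a heap would make it faster):

#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Area of triangle (a, b, c) via the cross product.
static double triangleArea(const Point& a, const Point& b, const Point& c) {
    return std::fabs((b.x - a.x) * (c.y - a.y) -
                     (c.x - a.x) * (b.y - a.y)) / 2.0;
}

std::vector<Point> simplify(std::vector<Point> pts, std::size_t maxPoints) {
    while (pts.size() > maxPoints && pts.size() > 2) {
        // Steps 1-2: find the interior point whose triangle has least area.
        std::size_t best = 1;
        double bestArea = triangleArea(pts[0], pts[1], pts[2]);
        for (std::size_t i = 2; i + 1 < pts.size(); ++i) {
            double a = triangleArea(pts[i - 1], pts[i], pts[i + 1]);
            if (a < bestArea) { bestArea = a; best = i; }
        }
        // Step 3: remove it; the affected neighbouring triangle areas are
        // recomputed naturally on the next pass.
        pts.erase(pts.begin() + best);
    }
    return pts;
}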