multi-level floor plan graph mapping - language-agnostic

There's a collection of buildings, each with multiple floors that are interconnected by stairs and lifts. I'm attempting to design a system that will find the shortest path between two points anywhere across the buildings, whether they are in the same building or in different buildings.
At the moment each floor is modeled in a graph as follows:
The door of each room is a vertex. The junctions where the edges connecting the rooms meet the main edge (the corridor) are also vertices.
The stairs between the floors are edges.
The question that remains is how I should represent the lifts (elevators), which are right next to the stairs.
Modelling a lift as an edge makes me wonder what weight it should have, given that I'll be running a graph traversal algorithm afterwards to find the shortest path.
Lift(elevator) as edge or as vertex? That is the question.
thanks!

Edges
Using an edge is the most immediate answer, as you do that for stairs. However, while stairs can only go from floor X to floor X+1, a lift can go from any floor to any floor, with slightly different times - I usually find the stairs quicker for two floors, but slower for more than 2. To mirror this you'll need an edge from every floor to every other floor, complete with weightings for each.
Vertices
You could instead have some additional vertices as well as edges. If you had a vertex at each floor of the liftshaft, then you'd only need a single path of edges connecting all the floors together, rather than a combinatorial number of edges.
If you also added an additional vertex outside the doors at each level, then you could add the average delay for getting into a lift and so reflect the fact that a lift can pass multiple floors quickly. However, lifts are going to need average timings at best. At busy times, they can end up stopping at almost every floor anyway, so for a busy campus you wouldn't really gain from these extra vertices.
My vote is for a vertex for each floor of the lift and a single edge to link adjacent floors. It should simplify the graph and reduce the effort of any path-optimisation algorithm as there are fewer paths. Plus it is a more accurate reflection of reality and minimises your workload to set up the edge weights.
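As a rough illustration of the vertex-per-floor model, here is a minimal sketch in Python using networkx; the node names and the edge weights (in seconds) are made-up assumptions, not measured values.

import networkx as nx

G = nx.Graph()

# Stairs: one edge per flight, floor X to floor X+1.
G.add_edge("stairs_f1", "stairs_f2", weight=30)
G.add_edge("stairs_f2", "stairs_f3", weight=30)

# Lift: one vertex per floor in the shaft, a single chain of edges between
# adjacent floors, plus a "lobby" vertex per floor carrying the average
# waiting/boarding delay.
for floor in (1, 2, 3):
    G.add_edge(f"lift_lobby_f{floor}", f"lift_f{floor}", weight=40)   # average wait + doors
for floor in (1, 2):
    G.add_edge(f"lift_f{floor}", f"lift_f{floor + 1}", weight=5)      # ride, one floor

# Corridor junctions connect the room doors to the stairs and the lift lobby.
G.add_edge("room_101_door", "junction_f1", weight=10)
G.add_edge("junction_f1", "stairs_f1", weight=15)
G.add_edge("junction_f1", "lift_lobby_f1", weight=15)
G.add_edge("room_301_door", "junction_f3", weight=10)
G.add_edge("junction_f3", "stairs_f3", weight=15)
G.add_edge("junction_f3", "lift_lobby_f3", weight=15)

print(nx.dijkstra_path(G, "room_101_door", "room_301_door", weight="weight"))

With this layout the path-finding algorithm needs no special knowledge of lifts; they are just another weighted subgraph.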

If the lifts are a possible shortest path from one floor to the next, then they must be edges with weights. The entrances at each level are vertices. If they are close enough to the stairs, then they can possibly be shared with the stair vertices.

I vote for edge.
Say you choose to use an elevator. You walk to it, press the button and wait a bit. You then get in, wait some more, get out and continue your walk. Now, although you are not physically moving much, in time you are moving. Taking a lift between floors is like walking, say, 50 meters.
What I mean is that the time spent standing around waiting for the elevator is equivalent to a distance you would have covered walking. So treat the elevator as an edge that you walk along for the duration that you are using it, and use that distance to compare against, say, walking down the stairs.
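A tiny sketch of that conversion, assuming an average walking speed (the figures here are illustrative, not measured):

WALKING_SPEED_M_PER_S = 1.4

def lift_edge_weight_in_metres(avg_wait_s, ride_s_per_floor, floors):
    # Time spent waiting for and riding the lift, expressed as the
    # distance you could have walked in the same time.
    total_time_s = avg_wait_s + ride_s_per_floor * floors
    return total_time_s * WALKING_SPEED_M_PER_S

# e.g. a 30 s wait plus a 5 s/floor ride over 3 floors ~ 63 m of walking
print(lift_edge_weight_in_metres(30, 5, 3))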

Related

Determining the convex hull in the presence of outliers

I made a software to create and optimize a racing line in a racetrack.
Now I want to integrate it using real data recorded from GPS, so I need to obtain the g-g diagram, where g is the acceleration. The real g-g diagram is a set of points in a scatter graph. I need to obtain the contour of that scatter plot, to use it as the boundary of the limit accelerations.
To obtain data to work with, I recorded myself on two different racetracks.
The code I wrote translates the x-y coordinates to polar R-theta.
Then I divide the circle into a set number of sectors (say, 20).
I calculate the histogram of all the R values in each sector, then from the histogram I take the last value with an acceptable number of samples.
Then I draw these lines, and this is the result:
It's not bad, but this boundary sits a little inside the real data; the real acceleration is a little bigger. I cannot simply take the max value, because that way I would take into account absurd values (like 3g in a right-hand corner, surely an error). Moreover, the limit changes if I change the number of bins in the histogram, and I cannot find a way to choose the right number of bins.
How can I determine the "true" convex hull, ignoring the outliers?
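For reference, a sketch of the procedure described above (sectors, histogram, last acceptable bin) in Python with NumPy; ax and ay are assumed to be arrays of the measured accelerations in g, and n_sectors, n_bins and min_samples are exactly the knobs the question is about:

import numpy as np

def gg_boundary(ax, ay, n_sectors=20, n_bins=30, min_samples=5):
    r = np.hypot(ax, ay)
    theta = np.arctan2(ay, ax)                        # -pi .. pi
    sector_edges = np.linspace(-np.pi, np.pi, n_sectors + 1)
    sector = np.digitize(theta, sector_edges) - 1
    boundary_r = np.full(n_sectors, np.nan)
    for s in range(n_sectors):
        rs = r[sector == s]
        if rs.size == 0:
            continue
        counts, bin_edges = np.histogram(rs, bins=n_bins)
        ok = np.nonzero(counts >= min_samples)[0]
        if ok.size:
            boundary_r[s] = bin_edges[ok[-1] + 1]     # outer edge of last well-populated bin
    centres = 0.5 * (sector_edges[:-1] + sector_edges[1:])
    return centres, boundary_r                        # polar boundary points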

How to separate/partition polygons into existing regions?

I'm facing a problem regarding "partitioning"/subsetting polygons into regions (bigger polygons) so that each region contains disjoint, meaningful elements.
For example, we have the following regions/polygons. At a given time, we know only the form of one region (let's say R1 for now).
It is clear that L3 would belong to R1.
How about L1, L2 and P1?
I thought about creating bounding boxes around them and checking whether the south-east coordinate (minX and minY) belongs to R1.
That way L1 would belong to R2, even though it doesn't even cross R2.
Do you have any concrete idea of what I should look into for this sort of algorithm, or how to solve this space-separation problem?
If your regions and polygons are all described as polygons (discrete sequences of vertices), you can resort to the available polygon clipping techniques.
In particular, have a look at the Sutherland–Hodgman and Weiler–Atherton techniques.
Some optimization is possible if preprocessing of the windows is allowed (when there are many subject polygons for the same windows), using scanline techniques. This is a little more sophisticated.
The case of line segment entities is a little easier.
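For reference, a compact sketch of Sutherland–Hodgman in Python; polygons are lists of (x, y) tuples and the clip window is assumed to be convex and in counter-clockwise order:

def clip_polygon(subject, clip_window):
    def inside(p, a, b):
        # Left of (or on) the directed edge a->b, for a CCW window.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersection(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a and b.
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for i in range(len(clip_window)):
        a, b = clip_window[i], clip_window[(i + 1) % len(clip_window)]
        input_list, output = output, []
        for j in range(len(input_list)):
            current, previous = input_list[j], input_list[j - 1]
            if inside(current, a, b):
                if not inside(previous, a, b):
                    output.append(intersection(previous, current, a, b))
                output.append(current)
            elif inside(previous, a, b):
                output.append(intersection(previous, current, a, b))
        if not output:            # subject lies entirely outside this clip edge
            break
    return output

# Example: clip a triangle against the unit square.
print(clip_polygon([(-0.5, 0.5), (1.5, 0.5), (0.5, 2.0)],
                   [(0, 0), (1, 0), (1, 1), (0, 1)]))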

Make the computer never lose at Tic-Tac-Toe

I am working on a simple Tic-Tac-Toe game in C. I have most of the code finished, but I want the AI to never lose.
I have read about the minimax algorithm, but I don't understand it. How do I use this algorithm to enable the computer to either Win or Draw but never lose?
The way to approach this sort of problem is to explore possible futures. Usually (for a chess or draughts AI) you'd consider futures a certain number of moves ahead, but because tic-tac-toe games are so short, you can explore to the end of the game.
Conceptual implementation
So you create a branching structure:
The AI imagines itself making every legal move
The AI then imagines the user making each legal move they can make after each of its legal moves
Then the AI imagines each of its next legal moves
etc.
Then, going from the most branched end (the furthest forward in time), the player whose turn it is (AI or user) chooses which future is best for it (win, lose or draw) at each branching point. Then it hands over to the player higher up the tree (closer to the present), each time choosing the best future for the player whose imaginary turn it is, until finally you're at the first branching point, where the AI can see futures that play out towards it losing, drawing or winning. It chooses a future where it wins (or, if no win is available, draws).
Actual implementation
Note that conceptually this is what is happening, but it's not necessary to create the whole tree and then judge it like this. You can just as easily work through the tree, getting to the furthest points in time and choosing then.
Here, this approach works nicely with a recursive function. Each level of the function polls all its branches, passing the possible future to them, and returns -1, 0 or +1, choosing the best score for the current player at each point. The top level chooses the move without actually knowing how each future pans out, just how well each pans out.
Pseudo code
I assume in this pseudocode that +1 is the AI winning, 0 is a draw, and -1 is the user winning
determineNextMove(currentStateOfBoard)
    currentBestMove = null
    currentBestScore = -veryLargeNumber
    for each legalMove
        score = getFutureScoreOfMove(stateOfBoardAfterLegalMove, user’sTurn)
        if score > currentBestScore
            currentBestMove = legalMove
            currentBestScore = score
        end
    end
    make currentBestMove
end

getFutureScoreOfMove(stateOfBoard, playersTurn)
    if gameIsOver // a line of three, or no legal moves left
        return +1 if AI wins, 0 if draw, -1 if user wins
    end
    if playersTurn == AI’sTurn
        currentBestScore = -veryLargeNumber // worst case for the AI
    else
        currentBestScore = +veryLargeNumber // worst case for the user
    end
    for each legalMove
        score = getFutureScoreOfMove(stateOfBoardAfterLegalMove, INVERT playersTurn)
        if playersTurn == AI’sTurn AND score > currentBestScore // the AI wants a positive score
            currentBestScore = score
        end
        if playersTurn == User’sTurn AND score < currentBestScore // the user wants a negative score
            currentBestScore = score
        end
    end
    return currentBestScore
end
This pseudo code doesn't care what the starting board is (you call this function every AI move with the current board) and doesn't return what path the future will take (we can't know if the user will play optimally so this information is useless), but it will always choose the move that goes towards the optimum future for the AI.
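To make the idea concrete, here is a minimal runnable version in Python (the question is about C, but the structure carries over directly); the board is assumed to be a list of nine cells containing 'X', 'O' or None, with the AI playing 'X':

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def future_score(board, player):
    # +1: AI ('X') wins, 0: draw, -1: user ('O') wins.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0                                  # draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(future_score(board, 'O' if player == 'X' else 'X'))
        board[m] = None
    return max(scores) if player == 'X' else min(scores)

def best_move(board):
    # The AI picks the move whose worst-case future is best for it.
    moves = [i for i, cell in enumerate(board) if cell is None]
    def score(m):
        board[m] = 'X'
        s = future_score(board, 'O')
        board[m] = None
        return s
    return max(moves, key=score)

print(best_move(['X', 'O', 'X', None, 'O', None, None, None, None]))  # 7: blocks O's column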
Considerations for larger problems
In this case, where you explore to the end of the game, it is obvious which is the best possible future (win, lose or draw), but if you're only going (for example) five moves into the future, you'd have to find some way of determining that; in chess or draughts, piece score is the easiest way to do this, with piece position being a useful enhancement.
I did something like this about 5 years ago and did some research on it. In tic-tac-toe it doesn't take long; you just need to prepare patterns for the first two or three moves.
You need to check how to play:
Computer starts first.
Player starts first.
There are 9 different start positions:
But actually only 3 of them are distinct (the others are rotations).
So after that you will see what should be done after certain specific moves. I don't think you need any algorithms in this case, because the tic-tac-toe ending is determined by the first moves. So you will just need a few if-else or switch statements and a random generator.
Tic-tac-toe belongs to a group of games that won't be lost if you know how to play, so for such games you do not need trees or modified sorting algorithms. To write such an algorithm you need just a few functions:
CanIWin() to check whether the computer has two in a row and can win.
ShouldIBlock() to check whether the player has two in a row that needs to be blocked.
Those two functions must be called in that order; if either returns true you need to either take the win or stop the player from winning.
After that you need to do the other calculations for the move.
One special situation is when the computer starts the game. You need to choose the cell that belongs to the largest number of different lines (there are 8 of them: 3 horizontal, 3 vertical and 2 diagonal). With such an algorithm the computer will always choose the center because it lies on 4 lines; you should add a small probability of choosing the second-best option to make the game a bit more interesting.
So when you reach a situation where some parts of the board are already taken and the computer has to move, you need to rate every free cell. (If the first or second function returned true, you must act on that before reaching this point!) To rate a cell, count how many open lines are left through it; you also want to block at least one of the opponent's lines.
After that you will have a few possible cells for your mark. You then need to check the necessary move sequences, because some of the options may lead you into losing. Once that is done you will have a set of candidates and you can choose a move at random, or choose the one with the biggest score.
I have to say a similar thing to what was said at the beginning of the post. Bigger games do not have a perfect strategy; chess, say, is largely based on patterns, but also on forward-thinking strategy (a patricia trie can be used for that sort of thing). So to sum up, you do not need difficult algorithms, just a few functions to count how much you gain and how much your opponent loses with each move.
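A sketch of the two checks described above, on a board stored as a list of nine cells containing 'X', 'O' or None (the representation is an assumption; the function names follow the answer):

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def find_two_in_a_row(board, mark):
    # Return the empty cell that completes a line of `mark`, or None.
    for line in LINES:
        values = [board[i] for i in line]
        if values.count(mark) == 2 and values.count(None) == 1:
            return line[values.index(None)]
    return None

def can_i_win(board, computer='X'):
    return find_two_in_a_row(board, computer)

def should_i_block(board, player='O'):
    return find_two_in_a_row(board, player)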
Make a subsidiary program to predict the cases in which the user can win. Then you can tell your AI to do the things that the user would have to do to win.

Efficient represention for growing circles in 2D space?

Imagine there's a 2D space, and in this space there are circles that grow at different constant rates. What's an efficient data structure for storing these circles, such that I can query "Which circles intersect point p at time t?"
EDIT: I do realize that I could store the initial state of the circles in a spatial data structure and do a query where I intersect a circle at point p with a radius of fastest_growth * t, but this isn't efficient when there are a few circles that grow extremely quickly whereas most grow slowly.
Additional Edit: I could further augment the above approach by splitting up the circles and grouping them by their growth rate, then applying the above approach to each group, but this requires a bounded time to be efficient.
Represent the circles as cones in 3d, where the third dimension is time. Then use a BSP tree to partition them the best you can.
In general, I think the worst case for testing for intersection is always O(n), where n is the number of circles. Most spatial data structures work by partitioning the space cleverly so that a fraction of the objects (hopefully close to half) are in each half. However, if the objects overlap then the partitioning cannot be perfect; there will always be cases where more than one object is in a partition. If you just think about the case of two circles overlapping, there is no way to draw a line such that one circle is entirely on one side and the other circle is entirely on the other side. Taken to the logical extreme, assuming arbitrary positioning of the circles and arbitrary radii, there is no way to partition them such that testing for intersection takes O(log(n)).
This doesn't mean that, in practice, you won't get a big advantage from using a tree, but the advantage you get will depend on the configuration of the circles and the distribution of the queries.
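For comparison, the O(n) brute-force check is only a few lines; here each circle is assumed to be a (cx, cy, r0, growth_rate) tuple, with radius r0 + growth_rate * t at time t:

def circles_containing(circles, px, py, t):
    hits = []
    for cx, cy, r0, rate in circles:
        r = r0 + rate * t
        if (px - cx) ** 2 + (py - cy) ** 2 <= r * r:   # squared-distance test
            hits.append((cx, cy, r0, rate))
    return hits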
This is a simplified version of another problem I have posted about a week ago:
How to find first intersection of a ray with moving circles
I still haven't had the time to describe the solution that was expected there, but I will try to outline it here (for this simpler case).
The approach to solving this problem is to use a kinetic KD-tree. If you are not familiar with KD-trees, it is better to read about them first. You also need to add time as an additional coordinate (you make the space 3D instead of 2D). I have not implemented this idea yet, but I believe this is the correct approach.
I'm sorry this is not completely thought through, but it seems like you might look into multiplicatively-weighted Voronoi Diagrams (MWVDs). It seems like an adversary could force you into computing one with a series of well-placed queries, so I have a feeling they provide a lower-bound to your problem.
Suppose you compute the MWVD on your input data. Then for a query, you would be returned the circle that is "closest" to your query point. You can then determine whether this circle actually contains the query point at the query time. If it doesn't, then you are done: no circle contains your point. If it does, then you should compute the MWVD without that generator and run the same query. You might be able to compute the new MWVD from the old one: the cell containing the generator that was removed must be filled in, and it seems (though I have not proved it) that the only generators that can fill it in are its neighbors.
Some sort of spatial index, such as a quadtree or BSP, will give you O(log(n)) access time.
For example, each node in the quadtree could contain a linked list of pointers to all those circles which intersect it.
How many circles, by the way? For small n, you may as well just iterate over them. If you constantly have to update your spatial index and jump all over cache lines, it may end up being faster to brute-force it.
How are the centres of your circles distributed? If they cover the plane fairly evenly you can discretise space and time, then do the following as a preprocessing step:
for (t = 0; t < max_t; t++)
    foreach circle c, with centre and radius (x, y, r) at time t
        for (int X = x - r; X < x + r; X++)
            for (int Y = y - r; Y < y + r; Y++)
                circles_at[X][Y][t].push_back(&c)
(assuming you discretise space and time along integer boundaries, scale and offset however you like of course, and you can add circles later on or amortise the cost by deferring calculation for distant values of t)
Then your query for point (x,y) at time (t) could do a brute-force linear check over circles_at[x][y][ceil(t)]
The trade-off is obvious, increasing the resolution of any of the three dimensions will increase preprocessing time but give you a smaller bucket in circles_at[x][y][t] to test.
People are going to make a lot of recommendations about types of spatial indices to use, but I would like to offer a bit of orthogonal advice.
I think you are best off building a few indices based on time, i.e. t_0 < t_1 < t_2 ...
If a point intersects a circle at t_i, it will also intersect it at t_{i+1}. If you know the point in advance, you can eliminate all circles that intersect the point at t_i for all computation at t_{i+1} and later.
If you don't know the point in advance, then you can keep these time-point trees (built based on how big each circle would be at a given time). At query time (e.g. t_query), find i such that t_{i-1} < t_query <= t_i. If you check all the possible circles at t_i, you will not have any false negatives.
This is sort of a hack for a data structure that is "time-dynamics aware", but I don't know of one. If you have a threaded environment, then you only need to maintain one spatial index and can be building the next one in the background. It will cost you a lot of computation for the benefit of being able to respond to queries with low latency. This solution should be compared at the very least to the O(n) solution (go through each circle and check whether dist(point, circle.center) < circle.radius).
Instead of considering the circles, you can test on their bounding boxes to filter out the ones which do not contain the point. If your bounding box sides are all sorted, this is essentially four binary searches.
The tricky part is reconstructing the sorted sides for any given time t. To do that, you can start off with the original points: two lists for the left and right sides with the x coordinates, and two lists for the top and bottom with the y coordinates. For any time greater than 0, all the left-side points will move to the left, and so on. You only need to compare each location with the one next to it to obtain the time points at which an element and its neighbour are swapped. This gives you a list of time points at which to modify your ordered lists. If you now sort these modification records by time, then for any given starting time and ending time you can extract all the modification records between the two and apply them to your four lists in order. I haven't completely figured out the algorithm, but I think there will be edge cases where three or more successive elements cross over at exactly the same time point, so you may need to modify the algorithm to handle those as well. Perhaps a modification record that contains the position in the list and the number of records to reorder would suffice.
I think it's possible to create a binary tree that solves this problem.
Each branch should contain a growing circle, a static circle for partitioning, and the latest time at which the partitioning circle cleanly partitions. Furthermore, the growing circle contained within a node should always have a faster growth rate than either of its child nodes' growing circles.
To do a query, take the root node. First check its growing circle: if it contains the query point at the query time, add it to the answer set. Then, if the time you're querying is greater than the time at which the partition is broken, query both children; otherwise, if the point falls within the partitioning circle, query the left node, else query the right node.
I haven't quite worked out the details of performing insertions (the difficult part is updating the partition circle so that the number of nodes on the inside and outside is approximately equal and the time at which the partition is broken is maximised).
To combat the case of a few circles that grow quickly, you could sort the circles in descending order by rate of growth and check each of the k fastest growers. To find the proper k for a given t, I think you can perform a binary search for the index k such that k*m = (t * growth rate of k)^2, where m is a constant factor you'll need to find by experimentation. This will balance the part that grows linearly with k against the part that falls quadratically with the growth rate.
If you, as already suggested, represent growing circles by vertical cones in 3D, then you can partition the space into a regular (maybe hexagonal) grid of packed vertical cylinders. For each cylinder, calculate the minimal and maximal heights (times) of its intersections with all of the cones. If a circle's centre (the vertex of its cone) lies inside the cylinder, then the minimal time is zero. Then sort the cones by minimal intersection time. As a result of this indexing, for each cylinder you'll have an ordered sequence of records with 3 values: minimal time, maximal time and circle number.
When checking some point in 3D space, you take the cylinder it belongs to and iterate over its sequence until the stored minimal time exceeds the time of the given point. All of the cones obtained whose maximal time is also less than the given time are guaranteed to contain the given point. Only the cones where the given time lies between the minimal and maximal intersection times need to be checked exactly.
There is a classical trade-off between indexing and runtime costs: the smaller the cylinder diameter, the smaller the range of intersection times, and therefore the fewer cones need an exact check at each point, but the more cylinders have to be indexed. If the circle centres are distributed unevenly, then it may be worth searching for a better cylinder placement configuration than a regular grid.
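A sketch of that indexing scheme on a plain square grid (the cell size, the square world and the circle representation are assumptions, and every growth rate is assumed to be positive):

import math

def build_index(circles, cell, extent):
    # circles: list of (cx, cy, r0, rate). For each grid cell we store
    # (t_min, t_max, circle) sorted by t_min, where t_min is when the
    # circle first touches the cell and t_max is when it fully covers it.
    index = {}
    n = int(math.ceil(extent / cell))
    for ix in range(n):
        for iy in range(n):
            x0, y0, x1, y1 = ix * cell, iy * cell, (ix + 1) * cell, (iy + 1) * cell
            records = []
            for c in circles:
                cx, cy, r0, rate = c
                d_near = math.hypot(max(x0 - cx, 0, cx - x1), max(y0 - cy, 0, cy - y1))
                d_far = math.hypot(max(cx - x0, x1 - cx), max(cy - y0, y1 - cy))
                records.append((max(0.0, (d_near - r0) / rate),
                                max(0.0, (d_far - r0) / rate), c))
            records.sort(key=lambda rec: rec[0])
            index[(ix, iy)] = records
    return index

def query(index, cell, px, py, t):
    hits = []
    for t_min, t_max, (cx, cy, r0, rate) in index[(int(px // cell), int(py // cell))]:
        if t_min > t:
            break                                   # later records start too late
        if t_max <= t:
            hits.append((cx, cy, r0, rate))         # cell fully covered: no check needed
        elif (px - cx) ** 2 + (py - cy) ** 2 <= (r0 + rate * t) ** 2:
            hits.append((cx, cy, r0, rate))         # in-between: check exactly
    return hits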
P.S. My first answer here - just registered to post it. Hope it isn’t late.

How should I start designing an AI algorithm for an artillery warfare game?

Here's the background... in my free time I'm designing an artillery warfare game called Staker (inspired by the old BASIC games Tank Wars and Scorched Earth) and I'm programming it in MATLAB. Your first thought might be "Why MATLAB? There are plenty of other languages/software packages that are better for game design." And you would be right. However, I'm a dork and I'm interested in learning the nuts and bolts of how you would design a game from the ground up, so I don't necessarily want to use anything with prefab modules. Also, I've used MATLAB for years and I like the challenge of doing things with it that others haven't really tried to do.
Now to the problem at hand: I want to incorporate AI so that the player can go up against the computer. I've only just started thinking about how to design the algorithm to choose an azimuth angle, elevation angle, and projectile velocity to hit a target, and then adjust them each turn. I feel like maybe I've been overthinking the problem and trying to make the AI too complex at the outset, so I thought I'd pause and ask the community here for ideas about how they would design an algorithm.
Some specific questions:
Are there specific references for AI design that you would suggest I check out?
Would you design the AI players to vary in difficulty in a continuous manner (a difficulty of 0 (easy) to 1 (hard), all still using the same general algorithm) or would you design specific algorithms for a discrete number of AI players (like an easy enemy that fires in random directions or a hard enemy that is able to account for the effects of wind)?
What sorts of mathematical algorithms (pseudocode description) would you start with?
Some additional info: the model I use to simulate projectile motion incorporates fluid drag and the effect of wind. The "fluid" can be air or water. In air, the air density (and thus effect of drag) varies with height above the ground based on some simple atmospheric models. In water, the drag is so great that the projectile usually requires additional thrust. In other words, the projectile can be affected by forces other than just gravity.
In a real artillery situation all these factors would be handled either with formulas or simply brute-force simulation: Fire an electronic shell, apply all relevant forces and see where it lands. Adjust and try again until the electronic shell hits the target. Now you have your numbers to send to the gun.
Given the complexity of the situation I doubt there is any answer better than the brute-force one. While you could precalculate a table of expected drag effects vs velocity I can't see it being worthwhile.
Of course a game where the AI dropped the first shell on your head every time wouldn't be interesting. Once you know the correct values you'll have to make the AI a lousy shot. Apply a random factor to the shot and then walk it towards the target--move it, say, 30+random(140)% towards the true target each time it shoots.
Edit:
I do agree with BCS's notion of improving it as time goes on. I said that but then changed my mind on how to write a bunch of it and then ended up forgetting to put it back in. The tougher it's supposed to be the smaller the random component should be.
Loren's brute-force solution is appealing because it would allow easy "intelligence adjustments" by adding more iterations. Also, the adjustment factors for the iteration could be part of the intelligence, as some values will make it converge faster.
Also, for the basic system (no drag, wind, etc.) there is a closed-form solution that can be derived from a basic physics text. I would make the first guess be that and then do one or more iterations per turn. You might want to try and come up with an empirical correction to improve the first shot (something that will make the average of the first-shot distribution closer to correct).
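For the drag-free case, the textbook range formula R = v^2 * sin(2*theta) / g can be inverted directly to seed the first guess. A small Python sketch, assuming flat ground and a known muzzle velocity (the names are illustrative):

import math

G = 9.81  # m/s^2

def first_guess_elevation(distance, velocity, high_arc=True):
    # Launch angle (radians) that lands at `distance` with no drag or wind,
    # or None if the target is out of range at this velocity.
    s = G * distance / velocity ** 2          # sin(2 * theta)
    if s > 1.0:
        return None                           # out of range: increase velocity
    low = 0.5 * math.asin(s)
    return math.pi / 2 - low if high_arc else low

# e.g. hit a target 300 m away with a 75 m/s shell
print(math.degrees(first_guess_elevation(300, 75)))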
Thanks Loren and BCS, I think you've hit upon an idea I was considering (which prompted question #2 above). The pseudocode for an AIs turn would look something like this:
nSims;       % Number of projectile simulations done per turn for the AI (i.e. difficulty)
prevParams;  % The previous shot parameters
prevResults; % Some measure of the accuracy of the last shot

newParams = get_new_guess(prevParams, prevResults);
for iSim = 1:nSims
    newResults = simulate_projectile_flight(newParams);
    newParams = get_new_guess(newParams, newResults);
end
fire_projectile(newParams);
In this case, the variable nSims is essentially a measure of "intelligence" for the AI. A "dumb" AI would have nSims=0, and would simply make a new guess each turn (based on results of the previous turn). A "smart" AI would refine its guess nSims times per turn by simulating the projectile flight.
Two more questions spring from this:
1) What goes into the function get_new_guess? How should I adjust the three shot parameters to minimize the distance to the target? For example, if a shot falls short of the target, you can try to get it closer by adjusting the elevation angle only, adjusting the projectile velocity only, or adjusting both of them together.
2) Should get_new_guess be the same for all AIs, with the nSims value being the only determiner of "intelligence"? Or should get_new_guess be dependent on another "intelligence" parameter (like guessAccuracy)?
A difference between artillery games and real artillery situations is that all sides have 100% information, and that there are typically more than 2 opponents.
As a result, your evaluation function should consider which opponent it would be more urgent to try and eliminate. For example, if I have an easy kill at 90%, but a 50% chance on someone who's trying to kill me and just missed two shots near me, it's more important to deal with that chance.
I think you would need some way of evaluating the risk everyone poses to you in terms of ammunition, location, activity, past history, etc.
I'm now addressing the response you posted:
While you have the general idea, I don't believe your approach will be workable--it's going to converge far too fast even for a low value of nSims. I doubt you want more than one iteration of get_new_guess between shells, and it very well might need some randomizing beyond that.
Even if you can use multiple iterations they wouldn't be good at making a continuously increasing difficulty as they will be big steps. It seems to me that difficulty must be handled by randomness.
First, get_initial_guess:
To start out I would have a table that divides the world up into zones--the higher the difficulty, the more zones. The borders between these zones would have precalculated power for 45, 60 and 75 degrees. Do a test plot; if a shell smacks terrain, try again at a higher angle--if 75 degrees hits terrain, use it anyway.
The initial shell should be fired at a random power between the values given for the low and high bounds.
Now, for get_new_guess:
Did the shell hit terrain? Increase the angle. I think there will be a constant ratio of how much power needs to be increased to maintain the same distance--you'll need to run tests on this.
Assuming it didn't smack a mountain, note whether it's short or long. This gives you a bound. The new guess is somewhere between the two bounds (if you're missing a bound, use the value from the table in get_initial_guess in its place).
Note what percentage of the way between the low and high bound impact points the target is and choose a power that far between the low and high bound power.
This is probably far too accurate and will likely require some randomizing. I've changed my mind about adding a simple random %. Rather, multiple random numbers should be used to get a bell curve.
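A sketch of that bracketing step in Python, keeping the angle fixed and adjusting only the power; the one-dimensional "distance along the ground" model and the names are illustrative, and the noise term follows the bell-curve suggestion above:

import random

def get_new_guess(bounds, target_dist, last_power, last_impact_dist, spread=0.05):
    low_power, low_dist, high_power, high_dist = bounds
    if last_impact_dist < target_dist:        # shot fell short: tighten the low bound
        low_power, low_dist = last_power, last_impact_dist
    else:                                     # shot went long: tighten the high bound
        high_power, high_dist = last_power, last_impact_dist
    # Choose a power as far between the bounds as the target is between the impact points.
    frac = (target_dist - low_dist) / (high_dist - low_dist)
    power = low_power + frac * (high_power - low_power)
    # Roughly bell-shaped noise from averaging several uniform random numbers.
    noise = sum(random.uniform(-spread, spread) for _ in range(3)) / 3
    return power * (1 + noise), (low_power, low_dist, high_power, high_dist)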
Another thought: Are we dealing with a system where only one shell is active at once? Long ago I implemented an artillery game where you had 5 barrels, each with a fixed reload time that was above the maximum possible flight time.
With that I found myself using a strategy of firing shells spread across the range between my current low bound and high bound. It's possible that, being a mere human, I wasn't using an optimal strategy, though--this was realtime, and getting a round off as soon as the barrel was ready was more important than ensuring it was aimed as well as possible, as it would converge quite fast anyway. I would generally put a shell on target on the second salvo, and the third would generally be all hits. (A kill required killing ALL pixels in the target.)
In an AI situation I would model both this and a strategy of holding back some of the barrels to fire more accurate rounds later. I would still fire a spread across the target range, the only question is whether I would use all barrels or not.
I have personally created such a system for the web game Zwok, using brute force. I fired lots of shots in random directions and recorded the best result. I wouldn't recommend doing it any other way, as the difference between timesteps etc. will give you unexpected results.