Volume of the intersection of two polyhedra using CGAL

I am looking for a fast method to compute the approximate volume of the intersection of two polyhedra. My program runs 20k iterations, and in each iteration I need to compute the volume of the difference of two polyhedra with ~100 vertices each (one of these polyhedra is stationary, and the other changes its pose in each iteration). I need this program to terminate within a second.
I have tried using Nef_polyhedron_3, but it requires an exact kernel and thus takes a large amount of time. Next, I tried to use Polygon_mesh_processing::corefine_and_compute_difference, but I could not find a way to convert my Polyhedron_3 to a Surface_mesh (without the use of Nef_polyhedron).
I would be really grateful if someone could help me with this problem. Thank you in advance!

You can try to use this faster function from the Polygon Mesh Processing package, and then that one to get the volume of the intersection.
If that is still too slow, you can use Monte Carlo sampling to get an estimate of the volume, using points sampled within the static polyhedron. This class will help you know whether a point is inside or outside the bounded volume.
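The class mentioned above is presumably CGAL::Side_of_triangle_mesh. A minimal sketch of the Monte Carlo idea, assuming both meshes are closed, triangulated Surface_mesh objects (the function name and sampling scheme below are illustrative, not part of CGAL):
#include <cstddef>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/Side_of_triangle_mesh.h>
#include <CGAL/Polygon_mesh_processing/bbox.h>
#include <CGAL/Random.h>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
// Estimate the intersection volume by sampling points in the bounding box of
// the static mesh and counting how many fall inside both meshes.
double approximate_intersection_volume(const Mesh& static_mesh, const Mesh& moving_mesh, std::size_t n_samples)
{
  CGAL::Side_of_triangle_mesh<Mesh, K> in_static(static_mesh);
  CGAL::Side_of_triangle_mesh<Mesh, K> in_moving(moving_mesh);
  CGAL::Bbox_3 bb = CGAL::Polygon_mesh_processing::bbox(static_mesh);
  CGAL::Random rng;
  std::size_t hits = 0;
  for (std::size_t i = 0; i < n_samples; ++i) {
    K::Point_3 p(rng.get_double(bb.xmin(), bb.xmax()),
                 rng.get_double(bb.ymin(), bb.ymax()),
                 rng.get_double(bb.zmin(), bb.zmax()));
    if (in_static(p) == CGAL::ON_BOUNDED_SIDE && in_moving(p) == CGAL::ON_BOUNDED_SIDE)
      ++hits;
  }
  double box_volume = (bb.xmax() - bb.xmin()) * (bb.ymax() - bb.ymin()) * (bb.zmax() - bb.zmin());
  return box_volume * double(hits) / double(n_samples);
}
Since one polyhedron is stationary, you could also draw the samples once, keep only the points that fall inside the static mesh, and in each of the 20k iterations simply re-test those points against the moving mesh.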

For the conversion you can use the function copy_face_graph()
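Putting the two answers together, here is a rough sketch of the whole pipeline, assuming the functions hinted at above are Polygon_mesh_processing::corefine_and_compute_intersection and Polygon_mesh_processing::volume (my reading of the answer; both expect closed, triangulated meshes):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/boost/graph/copy_face_graph.h>
#include <CGAL/Polygon_mesh_processing/corefinement.h>
#include <CGAL/Polygon_mesh_processing/triangulate_faces.h>
#include <CGAL/Polygon_mesh_processing/measure.h>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Polyhedron_3<K> Polyhedron;
typedef CGAL::Surface_mesh<K::Point_3> Mesh;
namespace PMP = CGAL::Polygon_mesh_processing;
// Convert two Polyhedron_3 objects to Surface_mesh (no Nef_polyhedron needed),
// intersect them and return the volume of the intersection, or -1 on failure.
double intersection_volume(const Polyhedron& p1, const Polyhedron& p2)
{
  Mesh m1, m2, out;
  CGAL::copy_face_graph(p1, m1);
  CGAL::copy_face_graph(p2, m2);
  PMP::triangulate_faces(m1);   // corefinement works on triangle meshes
  PMP::triangulate_faces(m2);
  if (!PMP::corefine_and_compute_intersection(m1, m2, out))
    return -1.0;                // intersection could not be computed
  return CGAL::to_double(PMP::volume(out));  // 'out' must be closed
}
With the inexact-constructions kernel this stays entirely within the corefinement machinery and avoids Nef_polyhedron_3 altogether.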

Stable-Baselines3 package, model.learn() function - how do total_timesteps and num_eval_episodes work together?

I am using the SB3 package for RL, and I'm trying out the model.learn() function.
I don't understand exactly what model.learn() parameters do in terms of how they work together and with my environment.
My RL is working from a tabular dataset, so there's an inherent limitation to number of timesteps possible.
Let's say these are my conditions:
I have a dataset with 20,000 rows (possible timesteps)
In my environment, my step() function contains an if-statement which flips "done" to True when the number of steps taken reaches 1,000 (the step() function counts the number of times it's been called since the initialization of the env).
I run model.learn() with total_timesteps = 30,000.
I encounter no errors when I do this. Can someone please explain what is happening? Is model.learn() running my environment through the first 1,000 timesteps, then restarting and looping this way until 30,000 total timesteps have been taken?
If so, how does num_eval_episodes feed into this? Does it change how the function runs? If so, how?
I'm sorry for the scattered question, I appreciate any clarification.
I'm working with SB3 as well these days, and I think your own assessment that "model.learn() is running the environment through the first 1,000 timesteps, then restarting and looping this way until 30,000 total timesteps have been taken" is probably correct.
Have you ever set the if-statement that flips "done" to True to a number of steps greater than your dataset?
As far as I know SB3 works that way so that you can train on environments with or without a fixed number of timesteps without getting problems with infinite training in cases where the terminal state is never reached.
In my own application, which also has episodes with a fixed number of timesteps per episode (n_max_timesteps), I always set total_timesteps = n_episodes * n_max_timesteps in model.learn().
"n_eval_episodes" runs the agent for a specified number of episodes from reset to reaching a final / terminal state.

Anylogic: How to create an objective function using values of two datasets (for optimization experiment)?

In my Anylogic model I have a population of agents (4 terminals) where trucks arrive, are served, and depart. The terminals have two parameters (numberOfGates and servicetime) which influence the departures per hour of trucks leaving the terminals. Now I want to tune these two parameters so that the number of departures per hour is closest to reality (I know the actual departures per hour). I already have two datasets within each terminal agent: one with the number of departures per hour that I simulate, and one with the observedDepartures from the data.
I already compare these two datasets in plots for every terminal.
Now I want to create an optimization experiment to tune the numberOfGates and servicetime of the terminals so that the departures dataset is as close as possible to the observedDepartures dataset. Does anyone know the easiest way to create an objective function for this optimization experiment?
When I add a variable diff that is updated every hour by abs(departures - observedDepartures) and put root.diff in the optimization experiment, it gives me the error "eq(null) is not allowed. Use isNull() instead" in a line that reads the database for the observedDepartures. It works when I run the simulation normally; it only gives this error when running the optimization experiment (I don't know why).
You can use the sum of the absolute values of the differences for each replication. That is, create a variable that logs |difference| for each hour, call it diff. Then, in the optimization experiment, minimize the sum of that variable. In fact, this is close to a typical regression model's objective; there, a more complex objective function is used, minimizing the sum of the squares of the differences.
A Calibration experiment already does (in a more mathematically correct way) what you are trying to do, using the in-built difference function to calculate the 'area between two curves' (which is what the optimisation is trying to minimise). You don't need to calculate differences or anything yourself. (There are two variants of the function to compare either two Data Sets (your case) or a Data Set and a Table Function (useful if your empirical data is not at the same time points as your synthetic simulated data).)
In your case it (the objective function) will need to be a sum of the differences between the empirical and simulated datasets for the 4 terminals (or possibly a weighted sum if the fit for some terminals is considered more important than for others).
So your objective is something like
difference(root.terminals(0).departures, root.terminals(0).observedDepartures)
+ difference(root.terminals(1).departures, root.terminals(1).observedDepartures)
+ difference(root.terminals(2).departures, root.terminals(2).observedDepartures)
difference(root.terminals(3).departures, root.terminals(3).observedDepartures)
(It would be better to calculate this for an arbitrary population of terminals in a function but this is the 'raw shape' of the code.)
A Calibration experiment is actually just a wizard which creates an Optimization experiment set up in a particular way (with a UI and all settings/code already created for you), so you can just use that objective in your existing Optimization experiment (but it won't have a built-in useful UI like a Calibration experiment). This also means you can still set this up in the Personal Learning Edition too (which doesn't have the Calibration experiment).

Implementing Dijkstra's algorithm using CUDA in C

I am trying to implement Dijkstra's algorithm using CUDA. I found code that does the same thing using MapReduce (http://famousphil.com/blog/2011/06/a-hadoop-mapreduce-solution-to-dijkstra%E2%80%99s-algorithm/), but I want to implement something similar to what is given in that link using CUDA with shared and global memory. Please tell me how to proceed, as I am new to CUDA. I don't know whether I need to provide the input as a matrix on both the host and the device, and what operations I should perform in the kernel function.
What about something like this? (Disclaimer: this is not a map-reduce solution.)
Let's say you have a graph G with N nodes and an adjacency matrix A with entries A[i,j] giving the cost of going from node i to node j in the graph.
This variant of Dijkstra's algorithm consists of keeping a vector V denoting the front, where V[i] is the current minimum distance from the origin to node i. In the classical Dijkstra's algorithm this information would be stored in a heap and popped off the top of the heap on every iteration.
Running the algorithm now starts to look a lot like matrix algebra, in that one simply takes the vector and applies the adjacency matrix to it using the following update:
V[i] <- min{V[j] + A[j,i] | j in Nodes}
for all values of i in V. This is run as long as there are updates to V (this can be checked on the device; there is no need to copy V back and forth to check!). Also store the transposed version of the adjacency matrix to allow sequential reads.
At most this will have a running time corresponding to the longest non-looping path through the graph.
The interesting question now becomes how to distribute this across compute blocks, but it seems obvious to shard based on row indexes.
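To make the update rule concrete, here is a minimal CPU sketch of one relaxation sweep; it shows only the per-element computation a CUDA kernel would assign to one thread per i, leaving out the shared-memory tiling and the device-side "updated" flag (all names are illustrative):
#include <cstddef>
#include <vector>
#include <limits>
#include <algorithm>
// One sweep of V[i] <- min{ V[i], V[j] + A[j,i] } over all j.
// At is the transposed adjacency matrix stored row-major, so row i holds the
// costs A[j,i] for all j and can be read sequentially. Missing edges are +inf.
// Returns true if any distance improved, i.e. another sweep is needed.
bool relax_front(std::vector<double>& V, const std::vector<double>& At, std::size_t N)
{
  const double INF = std::numeric_limits<double>::infinity();
  bool updated = false;
  for (std::size_t i = 0; i < N; ++i) {        // in CUDA: one thread per i
    double best = V[i];
    for (std::size_t j = 0; j < N; ++j) {
      double cost = At[i * N + j];             // cost of edge j -> i
      if (V[j] != INF && cost != INF)
        best = std::min(best, V[j] + cost);
    }
    if (best < V[i]) { V[i] = best; updated = true; }
  }
  return updated;
}
You would call relax_front repeatedly until it returns false; as noted above, the number of sweeps is bounded by the longest non-looping path through the graph.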
I suggest you study these two prominent papers on efficient graph processing on the GPU. The first can be found here. It's rather straightforward and basically assigns one warp to process a vertex and its neighbors. You can find the second one here; it is more complicated, but it efficiently produces the queue of next-level vertices in parallel, thereby reducing load imbalance.
After studying the above articles, you'll have a better understanding of why graph processing is challenging and where the pitfalls are. Then you can write your own CUDA program.

High-dimensional spatial-temporal clustering

My question is: how can I do a cluster analysis on spatial-temporal, high-dimensional data? My purpose is to find subspace clusters that can show patterns in space and in time. Here, space means geographic position, so I should use the spatial autocorrelation law (also known as Tobler's law, or the first law of geography).
Is this right? First I transform every variable from time to frequency through a wavelet transform (because all variables are related to both time and geographic position), and after that I take those coefficients and apply a subspace clustering algorithm for temporal high-dimensional clustering. Once I have the temporal clusters, I try to find spatial "clusters" through regionalization among the temporal clusters.
Thanks in advance for any light you can shed.
I understand that you use Tobler's law as an interpretation of the spatial correlation (regionalization). It's not clear what the final application would be, but a few verification steps I would take in such circumstances are: check whether all (150) variables correspond to the same scale in space and time and are affected by the same kind of autocorrelation (stationarity), which can simplify the problem in some cases. Finally, you also have to understand what features or patterns are to be extracted and how they are characterized. Check this out: http://www.geokernels.org/pages/modern_indexpag.html
Hope it helped !
Cheers
Ravi
It's not clear what you would like to achieve here. In general, for spatio-temporal clustering one could use a distribution-based model like a multivariate Gaussian mixture model for a given patch of the dataset, and update the covariance matrix parameters (http://en.wikipedia.org/wiki/Multivariate_normal_distribution). In the case of clustering the wavelet transform coefficients, we ignore any spatial correlation that might exist.
I am not sure what you mean here by "regionalization".
You could treat time as just another dimension, depending on your application.
What about constructing temporal cluster data with a correlation coefficient against the cluster, which gives a variance equal to 1? A spatial cluster would then be a scatter plot, which might derive from lognormal, skewed or regression plots.

How should I start designing an AI algorithm for an artillery warfare game?

Here's the background... in my free time I'm designing an artillery warfare game called Staker (inspired by the old BASIC games Tank Wars and Scorched Earth) and I'm programming it in MATLAB. Your first thought might be "Why MATLAB? There are plenty of other languages/software packages that are better for game design." And you would be right. However, I'm a dork and I'm interested in learning the nuts and bolts of how you would design a game from the ground up, so I don't necessarily want to use anything with prefab modules. Also, I've used MATLAB for years and I like the challenge of doing things with it that others haven't really tried to do.
Now to the problem at hand: I want to incorporate AI so that the player can go up against the computer. I've only just started thinking about how to design the algorithm to choose an azimuth angle, elevation angle, and projectile velocity to hit a target, and then adjust them each turn. I feel like maybe I've been overthinking the problem and trying to make the AI too complex at the outset, so I thought I'd pause and ask the community here for ideas about how they would design an algorithm.
Some specific questions:
Are there specific references for AI design that you would suggest I check out?
Would you design the AI players to vary in difficulty in a continuous manner (a difficulty of 0 (easy) to 1 (hard), all still using the same general algorithm) or would you design specific algorithms for a discrete number of AI players (like an easy enemy that fires in random directions or a hard enemy that is able to account for the effects of wind)?
What sorts of mathematical algorithms (pseudocode description) would you start with?
Some additional info: the model I use to simulate projectile motion incorporates fluid drag and the effect of wind. The "fluid" can be air or water. In air, the air density (and thus effect of drag) varies with height above the ground based on some simple atmospheric models. In water, the drag is so great that the projectile usually requires additional thrust. In other words, the projectile can be affected by forces other than just gravity.
In a real artillery situation all these factors would be handled either with formulas or simply brute-force simulation: Fire an electronic shell, apply all relevant forces and see where it lands. Adjust and try again until the electronic shell hits the target. Now you have your numbers to send to the gun.
Given the complexity of the situation I doubt there is any answer better than the brute-force one. While you could precalculate a table of expected drag effects vs velocity I can't see it being worthwhile.
Of course a game where the AI dropped the first shell on your head every time wouldn't be interesting. Once you know the correct values you'll have to make the AI a lousy shot. Apply a random factor to the shot and then walk it towards the target--move it, say, 30 + random(140)% of the way towards the true target each time it shoots.
Edit:
I do agree with BCS's notion of improving it as time goes on. I said that but then changed my mind on how to write a bunch of it and then ended up forgetting to put it back in. The tougher it's supposed to be the smaller the random component should be.
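For what it's worth, a tiny sketch of that "lousy shot" adjustment (one-dimensional aim for simplicity; random(140) is read here as a uniform integer percentage between 0 and 140):
#include <cstdlib>
// Move the AI's aim point 30% plus a random 0-140% of the way towards the
// true target, so shots can fall short or overshoot; shrinking the random
// range makes the AI tougher.
double walk_aim_towards_target(double aim, double target)
{
  double fraction = (30.0 + std::rand() % 141) / 100.0;   // 0.30 .. 1.70
  return aim + fraction * (target - aim);
}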
Loren's brute force solution is appealing because it would allow easy "intelligence adjustments" by adding more iterations. Also, the adjustment factors for the iteration could be part of the intelligence, as some values will make it converge faster.
Also, for the basic system (no drag, wind, etc.) there is a closed-form solution that can be derived from a basic physics text. I would make that the first guess and then do one or more iterations per turn. You might want to come up with an empirical correction to improve the first shot (something that will make the average of the first-shot distribution closer to correct).
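For reference, the drag-free closed form follows from the flat-ground range equation R = v^2 * sin(2*theta) / g; a small sketch (assumes launcher and target at the same height, no wind):
#include <cmath>
// Elevation angle (radians) that lands a drag-free projectile at range R when
// fired at speed v on flat ground; returns a negative value if R is out of
// reach. The complementary high-arc solution is pi/2 minus this angle.
double elevation_for_range(double R, double v, double g = 9.81)
{
  double s = g * R / (v * v);
  if (s > 1.0) return -1.0;      // target unreachable at this speed
  return 0.5 * std::asin(s);
}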
Thanks Loren and BCS, I think you've hit upon an idea I was considering (which prompted question #2 above). The pseudocode for an AI's turn would look something like this:
% nSims:       number of projectile simulations done per turn by the AI
%              (i.e. the difficulty)
% prevParams:  the shot parameters from the previous turn
% prevResults: some measure of the accuracy of the last shot
newParams = get_new_guess(prevParams, prevResults);
for iSim = 1:nSims
    newResults = simulate_projectile_flight(newParams);
    newParams = get_new_guess(newParams, newResults);
end
fire_projectile(newParams);
In this case, the variable nSims is essentially a measure of "intelligence" for the AI. A "dumb" AI would have nSims=0, and would simply make a new guess each turn (based on results of the previous turn). A "smart" AI would refine its guess nSims times per turn by simulating the projectile flight.
Two more questions spring from this:
1) What goes into the function get_new_guess? How should I adjust the three shot parameters to minimize the distance to the target? For example, if a shot falls short of the target, you can try to get it closer by adjusting the elevation angle only, adjusting the projectile velocity only, or adjusting both of them together.
2) Should get_new_guess be the same for all AIs, with the nSims value being the only determiner of "intelligence"? Or should get_new_guess be dependent on another "intelligence" parameter (like guessAccuracy)?
A difference between artillery games and real artillery situations is that all sides have 100% information, and that there are typically more than 2 opponents.
As a result, your evaluation function should consider which opponent it would be more urgent to try and eliminate. For example, if I have an easy kill at 90%, but a 50% chance on someone who's trying to kill me and just missed two shots near me, it's more important to deal with that chance.
I think you would need some way of evaluating the risk everyone poses to you in terms of ammunition, location, activity, past history, etc.
I'm now addressing the response you posted:
While you have the general idea I don't believe your approach will be workable--it's going to converge way too fast even for a low value of nSims. I doubt you want more than one iteration of get_new_guess between shells and it very well might need some randomizing beyond that.
Even if you can use multiple iterations they wouldn't be good at making a continuously increasing difficulty as they will be big steps. It seems to me that difficulty must be handled by randomness.
First, get_initial_guess:
To start out I would have a table that divides the world up into zones--the higher the difficulty, the more zones. The borders between these zones would have precalculated power for 45, 60 and 75 degrees. Do a test plot; if a shell smacks terrain, try again at a higher angle--if 75 degrees still hits terrain, use it anyway.
The initial shell should be fired at a random power between the values given for the low and high bounds.
Now, for get_new_guess:
Did the shell hit terrain? Increase the angle. I think there will be a constant ratio of how much power needs to be increased to maintain the same distance--you'll need to run tests on this.
Assuming it didn't smack a mountain, note whether it's short or long. This gives you a bound. The new guess is somewhere between the two bounds (if you're missing a bound, use the value from the table in get_initial_guess in its place).
Note what percentage of the way between the low and high bound impact points the target is and choose a power that far between the low and high bound power.
This is probably far too accurate and will likely require some randomizing. I've changed my mind about adding a simple random %. Rather, multiple random numbers should be used to get a bell curve.
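A rough sketch of that bracketing step in get_new_guess, written in C++ for concreteness (the Bracket struct, the difficulty knob and the noise width are all illustrative; the bell-curve noise is a sum of uniform random numbers, as suggested above):
#include <cstdlib>
struct Bracket {
  double lowPower, lowImpact;    // a shot known to fall short, and where it landed
  double highPower, highImpact;  // a shot known to fall long, and where it landed
};
// Interpolate a new power from where the target sits between the two bracketing
// impact points, then add roughly bell-shaped noise that shrinks with difficulty.
double get_new_guess(const Bracket& b, double target, double difficulty /* 0..1 */)
{
  double span = b.highImpact - b.lowImpact;
  double t = (span != 0.0) ? (target - b.lowImpact) / span : 0.5;
  double power = b.lowPower + t * (b.highPower - b.lowPower);
  double noise = 0.0;                          // sum of uniforms ~ bell curve
  for (int i = 0; i < 4; ++i)
    noise += std::rand() / (double)RAND_MAX - 0.5;
  return power * (1.0 + (1.0 - difficulty) * 0.1 * noise);
}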
Another thought: Are we dealing with a system where only one shell is active at once? Long ago I implemented an artillery game where you had 5 barrels, each with a fixed reload time that was above the maximum possible flight time.
With that I found myself using a strategy of firing shells spread across the range between my current low bound and high bound. It's possible that being a mere human I wasn't using an optimal strategy, though--this was realtime, getting a round off as soon as the barrel was ready was more important than ensuring it was aimed as well as possible as it would converge quite fast, anyway. I would generally put a shell on target on the second salvo and the third would generally all be hits. (A kill required killing ALL pixels in the target.)
In an AI situation I would model both this and a strategy of holding back some of the barrels to fire more accurate rounds later. I would still fire a spread across the target range, the only question is whether I would use all barrels or not.
I have personally created such a system for the web game Zwok, using brute force: I fired lots of shots in random directions and recorded the best result. I wouldn't recommend doing it any other way, as the difference between timesteps etc. will give you unexpected results.