Peg Solitaire / Senku solution algorithm - language-agnostic

I need to program a solver for the game of Peg Solitaire / Senku.
There's already a question here, but the proposed answer is a brute-force algorithm with backtracking, which is not the solution I'm looking for.
I need to find some heuristic to apply an A* algorithm. The number of remaining pegs is not a good heuristic, since every move removes exactly one peg, so the cost is always uniform.
Any ideas?

I was reading a paper about this problem (link), and it proposes 3 heuristics:
1 - The number of moves available from the node: the more available next steps, the better the node.
2 - The number of isolated pegs: the fewer isolated pegs, the better the node.
3 - The number of pegs left on the board: the fewer pegs, the better the node.
These may not be the best heuristics for this problem, but they seem like a simple approach.
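To make that concrete, here is a minimal sketch of how those three measures could be combined into a single A* score. The board representation (a set of (row, col) peg positions plus a set of all board holes), the function names and the weights are all illustrative assumptions, not something taken from the paper:
def isolated_pegs(pegs):
    # Heuristic 2: count pegs with no orthogonally adjacent peg.
    return sum(1 for (r, c) in pegs
               if not any((r + dr, c + dc) in pegs
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))))

def available_moves(pegs, holes):
    # Heuristic 1: count legal jumps (over an adjacent peg into an empty hole).
    count = 0
    for (r, c) in pegs:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            over, dest = (r + dr, c + dc), (r + 2 * dr, c + 2 * dc)
            if over in pegs and dest in holes and dest not in pegs:
                count += 1
    return count

def heuristic(pegs, holes, w_moves=1.0, w_isolated=2.0, w_pegs=1.0):
    # Lower is better: few pegs, few isolated pegs, many available moves.
    return (w_pegs * len(pegs)
            + w_isolated * isolated_pegs(pegs)
            - w_moves * available_moves(pegs, holes))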

You can do as rossum suggested. Another option would be to use the sum of distances (or some other function of the distances) from the center. Or you could combine the two.
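A tiny sketch of the distance idea, assuming the standard English board with its centre hole at (3, 3):
def centre_distance(pegs, centre=(3, 3)):
    # Sum of Manhattan distances of all pegs from the centre hole.
    return sum(abs(r - centre[0]) + abs(c - centre[1]) for r, c in pegs)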


If NMDS (vegan) convergence is impossible does that make the output useless?

Maybe this question is not in the right place and if so, I'll delete it.
Probably a very basic question: if NMDS (vegan package) does not reach convergence (regardless of dimensions and iterations), does that make the output meaningless?
Because R does give an output, with a nice low stress level, a good plot, all of that.
Thank you.
You could try to change the number of dimensions (k = 3 or more), but display only the first two dimensions.
Some datasets are too complex to reach a solution in 2D space. Even after trymax=100000, one of my 'sp x site' datasets did not reach a solution. However, the stress stayed consistently low across all iterations, so I assumed that no best solution was possible. Maybe after billions of iterations.
see:
https://stackoverflow.com/a/14437200/14708092
There is not enough information in this question, so the answer is "it depends". Please give us a reproducible example.

Trilateration different approaches and issues

Although there exist several posts about (multi)lateration, I would like to summarize some approaches and present some issues/questions to better clarify the approach.
It seems there are two ways to determine the target location: a geometric/analytic approach (solving the equations directly with some trick) and a fitting approach that converts the non-linear system into a linear one.
With respect to the first one, I would like to ask a few questions.
Suppose we have perfect range measurements and consider the 2D case; the exact solution is then a unique point at the intersection of the three circles. Can anyone point to a geometric solution for this case? I found this approach: https://math.stackexchange.com/questions/884807/find-x-location-using-3-known-x-y-location-using-trilateration
but it seems to fail when two points have the same y coordinate, as we can get a division by 0. Moreover, can this be extended to 3D?
The same solution can be obtained using the second approach:
set up Ax = b and then recover x = A^-1 b, or use least squares, x = (A^T A)^-1 A^T b.
Please see http://www3.nd.edu/~cpoellab/teaching/cse40815/Chapter10.pdf
What about the case when the three circles have no intersection? It seems that the second approach still finds a solution. Is this normal? How can it be explained?
What about the first approach when the range measurements are noisy? Does it find an approximate solution, or does it fail?
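For reference, here is a minimal numpy sketch of the second (linearized least-squares) approach I mean; since it minimizes ||Ax - b||, it still returns a best-fit point when the circles do not intersect or the ranges are noisy (the anchors and ranges at the bottom are made-up example values):
import numpy as np

def trilaterate(anchors, ranges):
    # Subtracting the first circle/sphere equation from the others removes the
    # quadratic terms in the unknown position, leaving a linear system Ax = b.
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    # Least squares still yields a single best-fit point for noisy or
    # inconsistent ranges. Works for 2D and 3D anchors alike.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example (2D): three anchors and slightly noisy ranges to a point near (3, 4).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.1, 8.1, 6.7]))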
Considering 3D, it seems that at least 4 anchors are needed to provide a unique solution; with 3 anchors there can be 2 solutions. Can anyone provide the equations to find those two solutions? Even with two solutions we may be able to discard one by checking which values agree with our scenario, e.g. the GPS case, where we pick the solution located on the Earth. The second approach (least squares) would instead always provide a single solution, possibly the wrong one.
Do you know of any existing C/C++ library that implements some of these techniques, and maybe some more complex fitting functions such as non-linear least squares, etc.?
Thank you
Regards

Route planning from Pt. A to a list of addresses

I wonder if it's possible in Google Maps to plot the quickest route from a specific address, Pt. A, to a list of destinations, i.e. Pt. B, Pt. C, Pt. D, etc. And if that's possible, is it available through the API? I'll probably need it in the app I'm developing.
Thanks, and apologies if this has been asked before!
You may want to check out this project:
Google Maps Fastest Roundtrip Solver
It is available under a GPL license.
The problem you've described is an example of the Traveling Salesman Problem. This is a famous problem because it's an example of the kind of problem that can't be solved efficiently with any known algorithm. That is, you can't come up with the provably best answer efficiently, because the number of possible solutions grows factorially. The number of possible solutions is n!, i.e. 5 x 4 x 3 x 2 x 1 when n = 5. Not a big deal in this case, when you are trying to solve for 5 cities (120 combinations), but even getting up only as far as 10 raises the number of possible combos to 3,628,800. Once you get to 100 nodes, you're counting your CPU time in years. This is why the "Fastest Roundtrip Solver" listed above only guarantees "optimal" solutions up to 15 points.
Having said all that, it can't be solved efficiently (a "solution" in this case meaning the one provably correct answer, what Gebweb calls the "optimal" answer), but you can come up with a pretty good answer, as long as you don't get hung up on it being the absolute provably best one. If you look in the code, you'll notice that Gebweb's Fastest Roundtrip page switches to Ant Colony Optimization (not technically an algorithm, but rather a heuristic) once you get past 15 points. No sense in my repeating what he says better; look at his behind-the-scenes page.
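For a flavour of what a simple heuristic looks like, here is a greedy nearest-neighbour sketch; this is not what Gebweb's solver uses, and it works on placeholder (x, y) points with straight-line distance instead of real driving times from the API:
import math

def nearest_neighbour_route(start, stops):
    # Greedy ordering: always drive to the closest remaining stop next.
    remaining = list(stops)
    route, current = [start], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

# (x, y) placeholders standing in for geocoded addresses.
print(nearest_neighbour_route((0, 0), [(5, 1), (1, 1), (3, 4)]))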
Anyway, Daniel is right, this should do what you want, but I couldn't help but spill a bit about the fact this is a more complex problem than it seems.

Graph Expansion

I'm currently working on an interesting graph problem, I can't find any algorithms or other stackoverflow questions which mention anything like this.
If I have a graph (undirected, cyclic) and a list of commonly used paths, what is the best way to reduce the average path length by adding in N more edges?
EDIT: An important point which might help: all paths start at the same node.
Answering my own question, to cover what I've already considered.
The obvious solution is simply to sort the common paths by how heavily they are used, slot in a direct edge between the two ends of each path, and keep doing this until you run out of edges to insert. However, I suspect there is a more intelligent solution.
You could just try inserting all possible edges and see how much the shortest path decreases for each of your given start/end points. Pick the best edge and repeat.
The usefulness of edges depends on what other edges have been added, so if you really want optimality, you'd have to try all sets of N edges. That sounds a tad expensive. Wouldn't surprise me if it was NP-hard.
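A rough sketch of that greedy loop, assuming a connected, unweighted graph, all paths starting from one node, and networkx for the shortest-path computation (names and weights are placeholders):
import networkx as nx
from itertools import combinations

def total_cost(G, start, targets):
    # Usage-weighted sum of shortest-path lengths from the common start node.
    lengths = nx.single_source_shortest_path_length(G, start)
    return sum(w * lengths[t] for t, w in targets.items())

def greedy_add_edges(G, start, targets, n_new_edges):
    # targets: dict mapping each common destination to its usage weight.
    G = G.copy()
    for _ in range(n_new_edges):
        best_edge, best_cost = None, total_cost(G, start, targets)
        for u, v in combinations(list(G.nodes), 2):
            if G.has_edge(u, v):
                continue
            G.add_edge(u, v)
            cost = total_cost(G, start, targets)
            if cost < best_cost:
                best_edge, best_cost = (u, v), cost
            G.remove_edge(u, v)
        if best_edge is None:
            break
        G.add_edge(*best_edge)
    return G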
Interesting question!
Another possible solution, which sounds like it might be the best heuristic, is to take the weighted average of all the end nodes (weighted by path importance), then find the node which is closest to the computed average point. Connect to that node.
Obviously that only works if the nodes are laid out in space somehow, but it's a good analogy.
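A quick sketch of that weighted-centroid idea, assuming each node has an (x, y) position and each end node carries an importance weight (all names are illustrative):
def best_connection_point(pos, end_weights):
    # pos: node -> (x, y); end_weights: end node -> path importance.
    total = sum(end_weights.values())
    cx = sum(pos[n][0] * w for n, w in end_weights.items()) / total
    cy = sum(pos[n][1] * w for n, w in end_weights.items()) / total
    # Connect the start node to whichever node lies closest to the centroid.
    return min(pos, key=lambda n: (pos[n][0] - cx) ** 2 + (pos[n][1] - cy) ** 2)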

How should I start designing an AI algorithm for an artillery warfare game?

Here's the background... in my free time I'm designing an artillery warfare game called Staker (inspired by the old BASIC games Tank Wars and Scorched Earth) and I'm programming it in MATLAB. Your first thought might be "Why MATLAB? There are plenty of other languages/software packages that are better for game design." And you would be right. However, I'm a dork and I'm interested in learning the nuts and bolts of how you would design a game from the ground up, so I don't necessarily want to use anything with prefab modules. Also, I've used MATLAB for years and I like the challenge of doing things with it that others haven't really tried to do.
Now to the problem at hand: I want to incorporate AI so that the player can go up against the computer. I've only just started thinking about how to design the algorithm to choose an azimuth angle, elevation angle, and projectile velocity to hit a target, and then adjust them each turn. I feel like maybe I've been overthinking the problem and trying to make the AI too complex at the outset, so I thought I'd pause and ask the community here for ideas about how they would design an algorithm.
Some specific questions:
Are there specific references for AI design that you would suggest I check out?
Would you design the AI players to vary in difficulty in a continuous manner (a difficulty of 0 (easy) to 1 (hard), all still using the same general algorithm) or would you design specific algorithms for a discrete number of AI players (like an easy enemy that fires in random directions or a hard enemy that is able to account for the effects of wind)?
What sorts of mathematical algorithms (pseudocode description) would you start with?
Some additional info: the model I use to simulate projectile motion incorporates fluid drag and the effect of wind. The "fluid" can be air or water. In air, the air density (and thus effect of drag) varies with height above the ground based on some simple atmospheric models. In water, the drag is so great that the projectile usually requires additional thrust. In other words, the projectile can be affected by forces other than just gravity.
In a real artillery situation all these factors would be handled either with formulas or simply brute-force simulation: Fire an electronic shell, apply all relevant forces and see where it lands. Adjust and try again until the electronic shell hits the target. Now you have your numbers to send to the gun.
Given the complexity of the situation I doubt there is any answer better than the brute-force one. While you could precalculate a table of expected drag effects vs velocity I can't see it being worthwhile.
Of course a game where the AI dropped the first shell on your head every time wouldn't be interesting. Once you know the correct values you'll have to make the AI a lousy shot. Apply a random factor to the shot and then walk it towards the target--move it say 30+random(140)% towards the true target each time it shoots.
Edit:
I do agree with BCS's notion of improving it as time goes on. I said that but then changed my mind on how to write a bunch of it and then ended up forgetting to put it back in. The tougher it's supposed to be the smaller the random component should be.
Loren's brute force solution is appealing because it would allow easy "intelligence adjustments" by adding more iterations. The adjustment factors for the iteration could also be part of the intelligence, as some values will make it converge faster.
Also, for the basic system (no drag, wind, etc.) there is a closed-form solution that can be derived from a basic physics text. I would make the first guess be that, and then do one or more iterations per turn. You might want to try to come up with an empirical correction to improve the first shot (something that will make the average of the first-shot distribution closer to correct).
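For reference, a sketch of that no-drag closed-form first guess, using the standard projectile "angle of reach" formula (the function name and interface are just illustrative):
import math

def launch_angle(v, dx, dy, g=9.81):
    # Lower-arc elevation angle (radians) that hits a target at horizontal
    # distance dx and height difference dy with muzzle speed v, ignoring
    # drag and wind. Returns None if the target is out of range.
    disc = v**4 - g * (g * dx**2 + 2 * dy * v**2)
    if disc < 0:
        return None
    return math.atan2(v**2 - math.sqrt(disc), g * dx)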
Thanks Loren and BCS, I think you've hit upon an idea I was considering (which prompted question #2 above). The pseudocode for an AIs turn would look something like this:
nSims; % A variable storing the number of projectile simulations
% done per turn for the AI (i.e. difficulty)
prevParams; % A variable storing the previous shot parameters
prevResults; % A variable storing some measure of accuracy of the last shot
newParams = get_new_guess(prevParams,prevResults);
for iSim = 1:nSims
    newResults = simulate_projectile_flight(newParams);
    newParams = get_new_guess(newParams, newResults);
end
fire_projectile(newParams);
In this case, the variable nSims is essentially a measure of "intelligence" for the AI. A "dumb" AI would have nSims=0, and would simply make a new guess each turn (based on results of the previous turn). A "smart" AI would refine its guess nSims times per turn by simulating the projectile flight.
Two more questions spring from this:
1) What goes into the function get_new_guess? How should I adjust the three shot parameters to minimize the distance to the target? For example, if a shot falls short of the target, you can try to get it closer by adjusting the elevation angle only, adjusting the projectile velocity only, or adjusting both of them together.
2) Should get_new_guess be the same for all AIs, with the nSims value being the only determiner of "intelligence"? Or should get_new_guess be dependent on another "intelligence" parameter (like guessAccuracy)?
A difference between artillery games and real artillery situations is that all sides have 100% information, and that there are typically more than 2 opponents.
As a result, your evaluation function should consider which opponent it would be more urgent to try and eliminate. For example, if I have an easy kill at 90%, but a 50% chance on someone who's trying to kill me and just missed two shots near me, it's more important to deal with that chance.
I think you would need some way of evaluating the risk everyone poses to you in terms of ammunition, location, activity, past history, etc.
I'm now addressing the response you posted:
While you have the general idea I don't believe your approach will be workable--it's going to converge way too fast even for a low value of nSims. I doubt you want more than one iteration of get_new_guess between shells and it very well might need some randomizing beyond that.
Even if you can use multiple iterations they wouldn't be good at making a continuously increasing difficulty as they will be big steps. It seems to me that difficulty must be handled by randomness.
First, get_initial_guess:
To start out I would have a table that divides the world up into zones--the higher the difficulty the more zones. The borders between these zones would have precalculated power for 45, 60 & 75 degrees. Do a test plot, if a shell smacks terrain try again at a higher angle--if 75 hits terrain use it anyway.
The initial shell should be fired at a random power between the values given for the low and high bounds.
Now, for get_new_guess:
Did the shell hit terrain? Increase the angle. I think there will be a constant ratio of how much power needs to be increased to maintain the same distance--you'll need to run tests on this.
Assuming it didn't smack a mountain, note if it's short or long. This gives you a bound. The new guess is somewhere between the two bounds (if you're missing a bound, use the value from the table in get_initial_guess in its place.)
Note what percentage of the way between the low and high bound impact points the target is and choose a power that far between the low and high bound power.
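A small sketch of that interpolation rule for the power parameter, with the bell-curve-style noise mentioned below folded in; everything here, including the 5% noise scale, is an illustrative assumption:
import random

def next_power(target_x, low, high):
    # low and high are (power, impact_x) pairs that bracket the target.
    # Pick a power the same fraction of the way between the two powers as
    # the target is between the two impact points, then blur it slightly so
    # the AI is not perfectly accurate (a sum of uniforms gives a bell-ish curve).
    (p_lo, x_lo), (p_hi, x_hi) = low, high
    frac = (target_x - x_lo) / (x_hi - x_lo)
    power = p_lo + frac * (p_hi - p_lo)
    noise = sum(random.uniform(-1, 1) for _ in range(3)) / 3
    return power * (1 + 0.05 * noise)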
This is probably far too accurate and will likely require some randomizing. I've changed my mind about adding a simple random %. Rather, multiple random numbers should be used to get a bell curve.
Another thought: Are we dealing with a system where only one shell is active at once? Long ago I implemented an artillery game where you had 5 barrels, each with a fixed reload time that was above the maximum possible flight time.
With that I found myself using a strategy of firing shells spread across the range between my current low bound and high bound. It's possible that being a mere human I wasn't using an optimal strategy, though--this was realtime, getting a round off as soon as the barrel was ready was more important than ensuring it was aimed as well as possible as it would converge quite fast, anyway. I would generally put a shell on target on the second salvo and the third would generally all be hits. (A kill required killing ALL pixels in the target.)
In an AI situation I would model both this and a strategy of holding back some of the barrels to fire more accurate rounds later. I would still fire a spread across the target range, the only question is whether I would use all barrels or not.
I have personally created such a system for the web-game Zwok, using brute force: I fired lots of shots in random directions and recorded the best result. I wouldn't recommend doing it any other way, as differences between timesteps etc. will give you unexpected results.