It has been a couple of years since I last touched Kalman filtering, and I have been approached with a design implementation that I am uncertain of, so I figured I would ask here...
The design is for tracking multiple balls in space. We define the system with the standard second-order kinematics equation x(t) = x0 + v0*t + (1/2)*a*t^2. Now let us assume that each ball is thrown from a pitching machine. While the pitching machine should be consistent, we cannot definitively make that assumption... With this in mind, the question I have is: if we have 2 balls in our space that we are tracking, separated by 10 sampling intervals, and assuming that each ball's initial position, velocity, and acceleration are the same, can we essentially "prime" the filter gains for ball 2 with ball 1's values? I.e., can we use ball 1's filter gain and covariance matrices to reduce our estimation error on ball 2 from the start, instead of letting the transients resolve naturally? The goal would be to essentially 'train' our Kalman filter with each ball moving forward, to make a better estimator.
Ideally, a ball thrown by a pitching machine follows a path that's entirely determined by the initial conditions. What you seem to be describing is one filter to track the ball's position (which should be using a prediction function for an ideal trajectory, not a random-jerk model), and another "meta" filter which models the parameters of the pitching machine.
So your frame-by-frame tracking would update a filter that models the trajectory implied by the initial conditions plus some offset model for error from ideal. Each pitch would end with a final estimate of the initial conditions, which would be one "sample" for the meta-filter about the machine. The meta-filter would provide initial conditions for the next ball-tracking filter.
You could definitely mush all this together into one filter (resetting the per-ball components on each pitch) but I don't see any real advantages.
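For what it's worth, here is a minimal Python sketch of the priming idea being asked about, assuming a 1-D constant-acceleration state [position; velocity; acceleration] with illustrative dt, Q, and R values; ball1_measurements, ball2_measurements, and initial_conditions_estimate are hypothetical placeholders (in the meta-filter scheme above, initial_conditions_estimate would come from the meta-filter):

import numpy as np

dt = 0.01                             # illustrative sample interval
F = np.array([[1.0, dt, 0.5*dt**2],   # constant-acceleration transition
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])       # we measure position only
Q = np.eye(3) * 1e-4                  # small process noise: machine ~constant
R = np.array([[0.5]])                 # measurement noise

def run_filter(zs, x0, P0):
    x, P = x0.copy(), P0.copy()
    for z in zs:
        x = F @ x                     # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R           # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(3) - K @ H) @ P
    return x, P

# Ball 1 starts from a vague prior; its covariance converges in flight.
x1, P1 = run_filter(ball1_measurements, np.zeros(3), np.eye(3) * 100.0)
# Ball 2 is primed with ball 1's converged covariance (hence near-steady-state
# gains) plus the shared initial conditions, instead of the vague prior.
x2, P2 = run_filter(ball2_measurements, initial_conditions_estimate, P1)

Carrying P1 into ball 2's filter starts its gain near steady state, which is exactly the "skip the transient" effect asked about; whether that is valid depends on the two balls really sharing the same noise statistics.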
Background
I'm trying to convert an algorithm from sequential to parallel, but I am stuck.
Point and Figure Charts
I am creating point and figure charts.
Decreasing
While the stock is going down, add an O every time it breaks through the floor.
Increasing
While the stock is going up, add an X every time it breaks through the ceiling.
Reversal
If the stock reverses direction, but the change is less than a reversal threshold (3 units) do nothing. If the change is greater than the reversal threshold, start a new column (X or O)
Sequential vs Parallel
Sequentially, this is pretty straightforward. I keep variables for the floor and ceiling. If the current price breaks through the floor or ceiling, or changes by more than the reversal threshold, I can take the appropriate action.
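For context, a minimal Python sketch of that sequential logic, with column handling simplified; the 3-unit reversal threshold is from above, and the box size is an assumed parameter:

def point_and_figure(prices, box=1.0, reversal=3.0):
    columns, direction = [[prices[0]]], 0    # direction: +1 up, -1 down, 0 unset
    for p in prices[1:]:
        last = columns[-1][-1]
        if direction >= 0 and p >= last + box:
            columns[-1].append(p)            # X: broke through the ceiling
            direction = 1
        elif direction <= 0 and p <= last - box:
            columns[-1].append(p)            # O: broke through the floor
            direction = -1
        elif direction != 0 and abs(p - last) >= reversal:
            columns.append([p])              # reversal: start a new column
            direction = -direction
    return columns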
My question is, is there a way to find these reversal points in parallel? I'm fairly new to thinking in parallel, so I'm sorry if this is trivial. I am trying to do this in CUDA, but I have been stuck for weeks. I have tried using the finite-difference algorithms from NVIDIA. These produce local maxima/minima but not the reversal points. Small fluctuations produce numerous relative maxima/minima, but most of them are trivial because the change is not greater than the reversal size.
My question is, is there a way to find these reversal points in parallel?
One possible approach (a CPU prototype of these steps follows the list):
use thrust::unique to remove periods where the price is numerically constant
use thrust::adjacent_difference to produce 1st difference data
use thrust::adjacent_difference on the 1st difference data to get the 2nd difference data, i.e. the points where there is a change in the sign of the slope.
use these points of change in sign of slope to identify separate regions of data - build a key vector from these (e.g. with a prefix sum). This key vector segments the price data into "runs" where the price change is in a particular direction.
use thrust::exclusive_scan_by_key on the 1st difference data, to produce the net change of the run
Wherever the net change of the run exceeds a threshold, flag as a "reversal"
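Here is the promised CPU prototype in Python/NumPy; each step maps onto the Thrust primitive named in the comment, and the per-key loop stands in for a single scan_by_key on the GPU:

import numpy as np

def flag_reversals(prices, threshold=3.0):
    prices = np.asarray(prices, dtype=float)
    keep = np.insert(np.diff(prices) != 0, 0, True)
    p = prices[keep]                               # step 1: thrust::unique
    d1 = np.diff(p)                                # step 2: 1st difference
    flip = np.sign(d1[1:]) != np.sign(d1[:-1])     # step 3: slope sign changes
    keys = np.concatenate(([0], np.cumsum(flip)))  # step 4: key vector (prefix sum)
    net = np.empty_like(d1)                        # step 5: scan_by_key per run
    for k in range(int(keys[-1]) + 1):             # one segmented scan on the GPU
        m = keys == k
        net[m] = np.cumsum(d1[m])
    return np.abs(net) >= threshold                # step 6: flag reversals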
Your description of what constitutes a reversal may also be slightly unclear. The above method would not flag a reversal on certain data patterns that you might classify as a reversal. I suspect you are looking beyond a single run as I have defined it here. If that is the case, there may be a method to address that as well - with more steps.
I am working on a project to monitor a water tank for energy storage. I have a large water tank, to which 10 thermometers are attached. By measuring the temperature, I can estimate the energy stored in the tank. That is quite simple, but I want to add another feature.
By sampling the amount of energy in time, I want to determine the power that is currently flowing in or out of the tank. I am measuring the energy in the tank every minute and determining the current power by comparing it to the last measurement. The direct result is very noisy (for example jumping between 2-4 kW), so I need some kind of filtering.
I started simply with averaging (the current power is the average of the last 10 measurements), which works fine, but I wanted to try something a bit fancier. So I wrote a simple Kalman filter, the one described in this video - https://www.youtube.com/watch?v=PZrFFg5_Sd0&list=PLX2gX-ftPVXU3oUFNATxGXY90AULiqnWT&index=5 The problem is, I guess, that this filter is designed for static measurements, and my measurement changes quite a bit. As an example, in this plot, there are 60 measurements (1 every minute) and the line is the output of the Kalman filter:
[Plot: 60 one-minute power measurements with the Kalman filter output overlaid as a line, visibly lagging the actual value]
Is it possible to modify the filter to follow the actual value more accurately? Or is this just not a suitable system to filter with this kind of filter and I should use something else?
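For reference, a minimal scalar sketch of the kind of filter in question, assuming a random-walk model; the "static" filter from the video corresponds to q = 0, and raising q is the usual way to let the estimate follow a changing value:

import numpy as np

def kalman_1d(measurements, q=0.05, r=1.0):
    # q: process-noise variance (how fast the true power may drift);
    # r: measurement-noise variance. q = 0 reproduces the static filter,
    # which lags badly when the signal moves.
    x, p = measurements[0], 1.0
    estimates = []
    for z in measurements:
        p += q                      # predict: the true value may have drifted
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new measurement
        p *= 1.0 - k
        estimates.append(x)
    return np.array(estimates)

power_readings = 3.0 + 0.8 * np.random.randn(60)   # noisy kW readings (illustrative)
smoothed = kalman_1d(power_readings)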
I am trying to come up with an efficient way to characterize two narrowband tones separated by about 900 kHz (one at around 100 kHz and one at around 1 MHz once translated to baseband). They don't move much in frequency over time but may have amplitude variations we want to monitor.
Each tone is roughly 100 Hz wide, and we are required to characterize these two beasts over long periods of time down to a resolution of about 0.1 Hz. The samples are coming in at over 2 MSamples/sec (TBD) to adequately acquire the higher tone.
I'm trying to avoid (if possible) doing brute-force >2MSample FFTs on the data once a second to extract frequency-domain data. Is there an efficient approach? Something akin to performing two (much) smaller FFTs around the bands of interest? I've looked at Goertzel and chirp-z methods, but I am not certain they help save processing.
Something akin to performing two (much) smaller FFTs around the bands of interest
There is; it's called Goertzel, and it is kind of the FFT for single bins, and you already have looked at it. It will save you CPU time.
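For reference, a minimal Python sketch of the Goertzel recurrence for one bin; the frequency and sample rate are whatever you decimate down to:

import numpy as np

def goertzel_power(signal, freq, fs):
    # Power of `signal` at `freq` (Hz) given sample rate `fs`:
    # equivalent to the magnitude-squared of a single DFT bin.
    n = len(signal)
    k = int(0.5 + n * freq / fs)              # nearest bin index
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s1 = s2 = 0.0
    for x in signal:
        s1, s2 = x + coeff * s1 - s2, s1      # the Goertzel recurrence
    return s1**2 + s2**2 - coeff * s1 * s2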
Anyway, there's no reason to do a 2M-point FFT. First of all, you only want a resolution of about 1/20 the sampling rate; hence, a 20-point FFT would totally do, and should be pretty doable for your CPU at these low rates. Since you don't seem to care about the phase of your tones: FFT -> complex_to_mag.
However, there's one thing that you should always do: look at your signal of interest, and decimate down to the rate that fits exactly that. Since GNU Radio's filters are implemented cleverly, the filter itself will only run at the decimated rate, and you can spend the CPU cycles saved on a better filter.
Because a direct decimation from 2 MHz to 100 Hz (decimation: 20000) will really have an ugly filter length, you should do this multi-rate:
I'd try first decimating by 100, and then in a second step by another 100, leaving you with 200 Hz of observable spectrum. The xlating FIR filter blocks will let you use a simple low-pass filter (use the "Low-Pass Filter Taps" block to define a variable that contains such taps) as a band selector.
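A sketch of that two-stage chain as a GNU Radio Python flowgraph, shown for one of the two tones; the file names, cutoffs, and transition widths are illustrative assumptions, not tuned values:

from gnuradio import gr, blocks, filter
from gnuradio.filter import firdes

class TwoStageDecimator(gr.top_block):
    def __init__(self, samp_rate=2e6, tone_freq=100e3):
        gr.top_block.__init__(self)
        src = blocks.file_source(gr.sizeof_gr_complex, "capture.cfile")
        # Stage 1: translate the tone to 0 Hz, decimate by 100 (2 MHz -> 20 kHz)
        taps1 = firdes.low_pass(1.0, samp_rate, 5e3, 2e3)
        stage1 = filter.freq_xlating_fir_filter_ccf(100, taps1, tone_freq, samp_rate)
        # Stage 2: decimate by another 100 (20 kHz -> 200 Hz observable spectrum)
        taps2 = firdes.low_pass(1.0, samp_rate / 100, 90.0, 20.0)
        stage2 = filter.freq_xlating_fir_filter_ccf(100, taps2, 0.0, samp_rate / 100)
        sink = blocks.file_sink(gr.sizeof_gr_complex, "decimated.cfile")
        self.connect(src, stage1, stage2, sink)

The 200 Hz output stream is then cheap to hit with a small FFT or the Goertzel bin above at whatever update rate you need.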
I am implementing a small grid-based, turn-based strategy game along the lines of Final Fantasy Tactics.
Do you have any ideas on how I can approach the target selection, movement, and skill selection process?
I am considering having the decisions disconnected, but all three decisions are largely coupled.
(E.g., I can't decide where to move unless I know who I am going to attack and what range the skill I will use has; and vice versa, I can't decide who to attack unless I know how many turns it will take me to reach each target.)
I want to move towards a unified system, but trying out ideas from potential-field research, used in a manner like the Killzone 1 AI, has me getting stuck on local maxima.
=== Update 1
I am currently trying to use potential fields / influence maps to generate the data I base decisions on.
I have no idea how to handle having many skills, and skills that don't do damage but rather buff/debuff or alter the world.
Someone elsewhere suggested using Monte Carlo Tree Search, currently used in Go programs.
I believe the space my actors will be using is not good for it, as many, many moves in the game don't result in a position from which you can attack and affect the world (I am in a world bigger than Final Fantasy Tactics).
In Final Fantasy Tactics it might be applied successfully, although the branching factor is much bigger than that of 9x9 Go (from what I understand).
===
Thanks in advance, Xtapodi.
P.S. 1 - A problem is that to know accurately how far away an enemy is, I would need to pathfind to him, because although the enemy may be near, an impassable cliff might be separating us which takes 4 turns to go around. Or worse, a unit may be blocking the way on, let's say, a bridge, so there is actually no way to reach him.
One approach I've used is to do a two-pass system.
First, find out where your unit can go. Use A* or whatever to flag out the terrain to see how far the unit can move this turn.
Once you know that, step through your available tactics (melee attack, heal friendly unit, whatever), and assign a fitness function for all available uses of each tactic. If you pass in the flagged terrain, you can very quickly determine what your space of possible tactics is.
This gives you a list of available tactics and their fitness functions for each move. Select the best one or randomize from the top. If there aren't any tactics available, repeat the process with flagging the terrain for two moves, and so on.
What I mean by fitness function is to decide on the "value" of performing the tactic on a certain unit or location. For instance, your "heal a friendly unit" tactical decision phase might step through all friendly units. If a friendly unit is within range (i.e., is reachable from a location your unit can reach), add it to the list of possible tactics and give it a fitness rating equal to, say, 100 * (1.0 - unit health), where unit health ranges from 0 to 1. Thus, healing a character down to only 10% health remaining would be worth 90 points, while a unit only down 5% would only be worth 5, and the unit wouldn't even consider healing an undamaged unit. Special units (i.e., "protect the boss" scenario units required to retain victory conditions) could be given a higher base number, so that they are given more attention by friendly units.
Similarly, your "melee attack" decision phase would step through all reachable enemy units, compute the likely damage, and compare that to the unit's health. Give each unit a "desirability" to attack, and multiply it by the percentage of remaining health you'd likely do, and you've got a pretty detailed fitness function that favors eliminating units when you can, but still goes after high-value targets.
Using a process like this, you'll get a list of options like "Move to location A and heal friendly unit B : 50 points", "Move to location C and attack hostile unit D : 15 points", etc. Suddenly, it's really easy to choose a tactic.
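A condensed Python sketch of that two-pass flow; reachable_tiles, in_range, and the unit fields (health, attack_damage, desirability) are hypothetical stand-ins for your game's own pathfinding and data:

from dataclasses import dataclass

@dataclass
class ScoredTactic:
    name: str
    target: object      # the unit acted upon
    tile: tuple         # where to move to perform it
    fitness: float

def choose_tactic(unit, allies, enemies, reachable_tiles, in_range):
    options = []
    for tile in reachable_tiles(unit):          # pass 1: where can we go?
        for ally in allies:                     # pass 2: score each tactic
            if ally.health < 1.0 and in_range(tile, ally, "heal"):
                # 100 * missing health: an ally at 10% health scores 90
                options.append(ScoredTactic("heal", ally, tile,
                                            100.0 * (1.0 - ally.health)))
        for enemy in enemies:
            if in_range(tile, enemy, "melee"):
                # desirability times the fraction of remaining health removed
                dmg_fraction = min(1.0, unit.attack_damage / enemy.health)
                options.append(ScoredTactic("melee", enemy, tile,
                                            enemy.desirability * dmg_fraction))
    return max(options, key=lambda t: t.fitness, default=None)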
Further detail may be added by multiplying the fitness of the tactic by a fitness for the path you'd have to take to implement it. For instance, if the place you'd have to move to in order to heal a friendly unit puts you in severe danger (i.e., standing on a lava space or something), you might factor that in by multiplying the fitness of that tactic by .2 or so, so that the unit may still consider it, but only if it's really important. All this takes is writing an algorithm to assess the fitness of a given location, and could be as simple as a pre-computed "terrain desirability" number or as complex as maintaining "threat maps" of enemy units.
The hard part, of course, is finding the right measures to make the engine smart. But that's the fun part of your system to tweak.
If the terrain where the battle occurs is pre-determined, or not too wide, there is an article on terrain reasoning in FPS games that can be used as a basis for a turn-based game.
In short, you pre-calculate for each cell of the map a set of values, such as suitability for shooting in a given direction, protection, visibility, and so on. The AI can then use these values to choose a correct action. For example, a fighter will walk as quickly as possible toward the enemy, using protection if available, while a thief will take a path where visibility from the enemy's direction is as low as possible, with the goal of attacking from the flank or rear.
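As an illustration, the per-cell data could be as simple as this (the fields and weightings are assumptions, not the article's exact scheme):

from dataclasses import dataclass

@dataclass
class CellInfo:
    cover: float        # protection offered by this cell, 0..1
    visibility: float   # how visible a unit here is, 0..1
    shoot_score: float  # suitability for shooting from here, 0..1

def path_cost(cell, archetype):
    # Feed this into A* as the per-cell weight so each archetype
    # picks different routes over the same precomputed map.
    if archetype == "fighter":
        return 1.0 - 0.5 * cell.cover        # hugs protected cells
    if archetype == "thief":
        return 1.0 + 2.0 * cell.visibility   # avoids visible cells
    return 1.0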
If the terrain is randomized and/or too wide, however, the pre-calculation can take too long to be useful.
Regards,
Guillaume
A good question; the answers can be all over the place. Personally, I don't have a lot of experience with this, but I would build a strategy around concept, not distance.
You are going to create a state machine for each NPC. It will pick a character to attack based on some settings.
For example, an NPC would be flagged as Attack Weakest, Attack Strongest, or Attack Most Injured. Then I would attempt to position them such that they can damage their desired target.
If you also have healers you can do the same thing in reverse for the healer target.
Target changing will be an important part of this system too, so you will want to think about that. A simple version is to reevaluate the target on a given percentage of turns.
And finally, I would add random chance into the system. For example, a character could be set as follows:
Attack Weakest .25
Attack Strongest .50
Attack Most Injured .25
Change target .1
When it's time to attack, you generate a random number from 0 to 1. If it's under your Change Target value, you change targets by generating another random number to pick which target to attack.
You can begin to factor distance into your system by adjusting the attack-mode percentages.
For example, if it would take 3 turns to attack the most injured target, decrease its percentage of being targeted by dividing that value by 3 and distributing the difference to the other two possibilities, as in the sketch below.
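Putting the weights and the distance adjustment together, a small Python sketch (names and numbers are illustrative):

import random

def pick_attack_mode(base_weights, turns_to_reach):
    # base_weights, e.g. {"weakest": .25, "strongest": .50, "most_injured": .25}
    weights = dict(base_weights)
    for mode, turns in turns_to_reach.items():
        if turns > 1:
            freed = weights[mode] - weights[mode] / turns
            weights[mode] /= turns               # e.g. 3 turns away: weight / 3
            others = [m for m in weights if m != mode]
            for m in others:                     # hand the difference to the rest
                weights[m] += freed / len(others)
    return random.choices(list(weights), weights=list(weights.values()))[0]

# Roll the 0-1 change-target chance first (0.1 in the example above):
# if random.random() < 0.1: mode = pick_attack_mode(...)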
Here's the background... in my free time I'm designing an artillery warfare game called Staker (inspired by the old BASIC games Tank Wars and Scorched Earth) and I'm programming it in MATLAB. Your first thought might be "Why MATLAB? There are plenty of other languages/software packages that are better for game design." And you would be right. However, I'm a dork and I'm interested in learning the nuts and bolts of how you would design a game from the ground up, so I don't necessarily want to use anything with prefab modules. Also, I've used MATLAB for years and I like the challenge of doing things with it that others haven't really tried to do.
Now to the problem at hand: I want to incorporate AI so that the player can go up against the computer. I've only just started thinking about how to design the algorithm to choose an azimuth angle, elevation angle, and projectile velocity to hit a target, and then adjust them each turn. I feel like maybe I've been overthinking the problem and trying to make the AI too complex at the outset, so I thought I'd pause and ask the community here for ideas about how they would design an algorithm.
Some specific questions:
Are there specific references for AI design that you would suggest I check out?
Would you design the AI players to vary in difficulty in a continuous manner (a difficulty of 0 (easy) to 1 (hard), all still using the same general algorithm) or would you design specific algorithms for a discrete number of AI players (like an easy enemy that fires in random directions or a hard enemy that is able to account for the effects of wind)?
What sorts of mathematical algorithms (pseudocode description) would you start with?
Some additional info: the model I use to simulate projectile motion incorporates fluid drag and the effect of wind. The "fluid" can be air or water. In air, the air density (and thus effect of drag) varies with height above the ground based on some simple atmospheric models. In water, the drag is so great that the projectile usually requires additional thrust. In other words, the projectile can be affected by forces other than just gravity.
In a real artillery situation all these factors would be handled either with formulas or simply brute-force simulation: Fire an electronic shell, apply all relevant forces and see where it lands. Adjust and try again until the electronic shell hits the target. Now you have your numbers to send to the gun.
Given the complexity of the situation I doubt there is any answer better than the brute-force one. While you could precalculate a table of expected drag effects vs velocity I can't see it being worthwhile.
Of course, a game where the AI dropped the first shell on your head every time wouldn't be interesting. Once you know the correct values, you'll have to make the AI a lousy shot. Apply a random factor to the shot and then walk it towards the target--move it, say, 30+random(140)% towards the true target each time it shoots.
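In Python, that walk-toward-the-target adjustment might look like this (aim points reduced to one dimension for brevity):

import random

def next_aim(current_aim, true_target):
    # Move 30 + random(140) percent of the way toward the true target;
    # values past 100% deliberately overshoot.
    step = (30.0 + random.uniform(0.0, 140.0)) / 100.0
    return current_aim + step * (true_target - current_aim)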
Edit:
I do agree with BCS's notion of improving it as time goes on. I said that, but then changed my mind on how to write a bunch of it and ended up forgetting to put it back in. The tougher it's supposed to be, the smaller the random component should be.
Loren's brute-force solution is appealing because it would allow easy "intelligence adjustments" by adding more iterations. Also, the adjustment factors for the iteration could be part of the intelligence, as some values will make it converge faster.
Also, for the basic system (no drag, wind, etc.) there is a closed-form solution that can be derived from a basic physics text. I would make the first guess be that, and then do one or more iterations per turn. You might want to try to come up with an empirical correction to improve the first shot (something that will make the average of the first-shot distribution be closer to correct).
Thanks Loren and BCS, I think you've hit upon an idea I was considering (which prompted question #2 above). The pseudocode for an AI's turn would look something like this:
nSims;       % Number of projectile simulations done per turn
             % for the AI (i.e. difficulty)
prevParams;  % The previous shot parameters
prevResults; % Some measure of accuracy of the last shot

newParams = get_new_guess(prevParams, prevResults);
for iSim = 1:nSims
    newResults = simulate_projectile_flight(newParams);
    newParams = get_new_guess(newParams, newResults);
end
fire_projectile(newParams);
In this case, the variable nSims is essentially a measure of "intelligence" for the AI. A "dumb" AI would have nSims = 0 and would simply make a new guess each turn (based on the results of the previous turn). A "smart" AI would refine its guess nSims times per turn by simulating the projectile flight.
Two more questions spring from this:
1) What goes into the function get_new_guess? How should I adjust the three shot parameters to minimize the distance to the target? For example, if a shot falls short of the target, you can try to get it closer by adjusting the elevation angle only, adjusting the projectile velocity only, or adjusting both of them together.
2) Should get_new_guess be the same for all AIs, with the nSims value being the only determiner of "intelligence"? Or should get_new_guess be dependent on another "intelligence" parameter (like guessAccuracy)?
A difference between artillery games and real artillery situations is that all sides have 100% information, and that there are typically more than 2 opponents.
As a result, your evaluation function should consider which opponent it is most urgent to try to eliminate. For example, if I have an easy kill at 90%, but a 50% chance on someone who's trying to kill me and has just missed two shots near me, it's more important to deal with that threat.
I think you would need some way of evaluating the risk everyone poses to you in terms of ammunition, location, activity, past history, etc.
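As a toy example of such an evaluation, weighting kill chance by current threat (the numbers mirror the example above; the formula itself is just an illustration):

def urgency(kill_prob, threat_to_me):
    # The 1 + factor keeps easy kills on passive opponents relevant.
    return kill_prob * (1.0 + threat_to_me)

print(urgency(0.9, 0.1))   # 0.99: easy kill, passive opponent
print(urgency(0.5, 2.0))   # 1.50: deal with the one shelling us first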
I'm now addressing the response you posted:
While you have the general idea, I don't believe your approach will be workable--it's going to converge way too fast, even for a low value of nSims. I doubt you want more than one iteration of get_new_guess between shells, and it very well might need some randomizing beyond that.
Even if you can use multiple iterations, they wouldn't be good at producing a continuously increasing difficulty, as they would be big steps. It seems to me that difficulty must be handled by randomness.
First, get_initial_guess:
To start out, I would have a table that divides the world up into zones--the higher the difficulty, the more zones. The borders between these zones would have precalculated power for 45, 60, and 75 degrees. Do a test plot; if a shell smacks terrain, try again at a higher angle--if 75 degrees hits terrain, use it anyway.
The initial shell should be fired at a random power between the values given for the low and high bounds.
Now, for get_new_guess:
Did the shell hit terrain? Increase the angle. I think there will be a constant ratio of how much power needs to be increased to maintain the same distance--you'll need to run tests on this.
Assuming it didn't smack a mountain, note whether it's short or long. This gives you a bound. The new guess is somewhere between the two bounds (if you're missing a bound, use the value from the table in get_initial_guess in its place).
Note what percentage of the way between the low- and high-bound impact points the target is, and choose a power that far between the low- and high-bound powers.
This is probably far too accurate and will likely require some randomizing. I've changed my mind about adding a simple random %. Rather, multiple random numbers should be used to get a bell curve.
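A Python sketch of that get_new_guess step, interpolating power between the bounds and roughening it with a bell-curve-ish random; the 5% spread is an assumed difficulty knob:

import random

def get_new_guess(low_power, high_power, low_impact, high_impact, target):
    # Where the target sits between the two impact points determines
    # where the new power sits between the two bound powers.
    frac = (target - low_impact) / (high_impact - low_impact)
    power = low_power + frac * (high_power - low_power)
    # Sum of uniforms approximates a bell curve, as suggested above.
    noise = sum(random.uniform(-1.0, 1.0) for _ in range(3)) / 3.0
    return power + noise * 0.05 * power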
Another thought: Are we dealing with a system where only one shell is active at once? Long ago I implemented an artillery game where you had 5 barrels, each with a fixed reload time that was above the maximum possible flight time.
With that I found myself using a strategy of firing shells spread across the range between my current low bound and high bound. It's possible that being a mere human I wasn't using an optimal strategy, though--this was realtime, getting a round off as soon as the barrel was ready was more important than ensuring it was aimed as well as possible as it would converge quite fast, anyway. I would generally put a shell on target on the second salvo and the third would generally all be hits. (A kill required killing ALL pixels in the target.)
In an AI situation I would model both this and a strategy of holding back some of the barrels to fire more accurate rounds later. I would still fire a spread across the target range, the only question is whether I would use all barrels or not.
I have personally created such a system for the web game Zwok, using brute force. I fired lots of shots in random directions and recorded the best result. I wouldn't recommend doing it any other way, as the difference between timesteps etc. will give you unexpected results.
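A minimal Python sketch of that brute-force search; simulate_impact is a hypothetical stand-in for the game's own projectile integrator (drag, wind, thrust and all):

import math, random

def brute_force_aim(target_xy, n_shots, simulate_impact):
    # simulate_impact(azimuth, elevation, velocity) -> (x, y) landing point
    best, best_err = None, math.inf
    for _ in range(n_shots):
        params = (random.uniform(0.0, 360.0),   # azimuth, degrees
                  random.uniform(10.0, 80.0),   # elevation, degrees
                  random.uniform(50.0, 500.0))  # velocity, assumed units
        err = math.dist(simulate_impact(*params), target_xy)
        if err < best_err:
            best, best_err = params, err
    return best    # n_shots doubles as a difficulty knob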