Extended Kalman filter (EKF): adding a bias to the measurements and estimating it

I am working on an EKF and trying to add a bias (1 degree or 1 arcsec) to the measurements. I have two measurements (angles) and am estimating position (3) and velocity (3), for a total state of 6, so I want to extend the filter with bias estimation as the 7th and 8th states. The idea is to add a bias value to the measurements and then estimate that value. For example, one measurement is 190 degrees and the second is 5 degrees; if I add a 1-degree bias, the new values are 191 and 6 degrees, respectively. In my simulation the bias estimate starts at 1 degree and decays toward zero, but I expected it to start near zero and converge to approximately 1 degree.
Where am I wrong? Can you share your ideas or point me to some references on this approach?
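For reference, the usual way to set this up is to augment the state with the biases and let the measurement model add them back in, so the measurement Jacobian gets an identity block in the bias columns. A minimal sketch under that assumption, with a toy azimuth/elevation geometry standing in for the real measurement model:

import numpy as np

def h_geom(x6):
    # predicted azimuth and elevation from the position states (toy geometry)
    px, py, pz = x6[0], x6[1], x6[2]
    az = np.arctan2(py, px)
    el = np.arctan2(pz, np.hypot(px, py))
    return np.array([az, el])

def h_augmented(x8):
    # the bias states (7th and 8th) are added to the predicted angles, so the
    # filter can explain a constant offset in the measurements
    return h_geom(x8[:6]) + x8[6:8]

def H_augmented(x8, eps=1e-6):
    # numerical measurement Jacobian: a 2x6 geometry block plus, in the last
    # two columns, an identity block for the biases
    H = np.zeros((2, 8))
    for j in range(8):
        dx = np.zeros(8)
        dx[j] = eps
        H[:, j] = (h_augmented(x8 + dx) - h_augmented(x8 - dx)) / (2.0 * eps)
    return H

If the estimate starts at the true bias and decays to zero, it is worth checking how the bias state is initialized (it should normally start at zero) and whether the bias enters the predicted measurement with the same sign used to corrupt the data.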

Related

Negative binomial regression SPSS - Quantity vs Distance

I have quite a simple dataset of quantities of litter found in a national park located on an island. For each data point I have corresponding GPS coordinates, and I've derived the distance of each point to the shore. My aim: observe if the quantities of litter increase or decrease with the distance to shore. I'm assuming that quantities of litter will increase with a decrease in distance, as litter is commonly found on beaches etc.
Quantities of litter are counts, i.e. not normally distributed. Additionally, I've tested whether the data follow a Poisson model and they do not (p-value < 0.05), and the variance is larger than the mean for each variable (quantity and distance), so the data seem overdispersed. Therefore, I went on to use a negative binomial regression, with output as follows:
The omnibus test is highly significant (p = 0.000). I was just slightly puzzled by the parameter estimates, and am generally hoping that this approach makes sense. Any input is much appreciated.
Interpreting the parameter estimates requires knowing the link function. It would be a log link if you specified your model as negative binomial with log link on the Type of Model tab, but it could be something else if you specified a custom model using a negative binomial distribution with another link (identity, negative binomial, or power, for instance).
If it's a log link, then for a distance of 0 (at the shore) the predicted count is exp(2.636), or about 13.957. For a given distance from the shore, multiply the distance by -0.042, add that to 2.636, and exponentiate the result. So for every unit you move away from the shore, the log of the prediction decreases by 0.042 and the prediction is multiplied by about 0.959: one unit away you predict about 13.383 for the count, two units away about 12.833, and so on. So the results are in general accord with your hypothesis. Different calculations would be required if you used a different link function.
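In code form, that back-transformation looks like this (a sketch of the arithmetic using the coefficients quoted above, not SPSS itself):

import math

intercept = 2.636   # from the parameter estimates
slope = -0.042      # change in the log of the count per unit of distance

def predicted_count(distance):
    # log link: the linear predictor lives on the log scale,
    # so exponentiate it to get the predicted count
    return math.exp(intercept + slope * distance)

print(predicted_count(0))  # about 13.957 at the shore
print(predicted_count(1))  # about 13.383 one unit away
print(predicted_count(2))  # about 12.833 two units away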

Why is alpha set to 15 in NLTK - VADER?

I am trying to understand what VADER does when it analyzes sentences.
Why is the hyper-parameter alpha set to 15 here? I understand that the score is unstable when left unbounded, but why 15?
import math

def normalize(score, alpha=15):
    """
    Normalize the score to be between -1 and 1 using an alpha that
    approximates the max expected value.
    """
    norm_score = score / math.sqrt((score * score) + alpha)
    return norm_score
Vader's normalization equation is score / sqrt(score*score + alpha), which is the equation of a curve that flattens out toward -1 and 1 as the score grows.
I have read the VADER research paper here: http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf
Unfortunately, I could not find any reason why this formula, with 15 as the value for alpha, was chosen. But the experiments and the graph show that as x (the sum of the sentiment scores) grows, the value gets closer to -1 or 1; that is, as the number of words grows, the score tends toward -1 or 1. This means that VADER works better with short documents or tweets than with long documents.
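A quick numerical check of that saturation, reusing the normalize function above:

import math

def normalize(score, alpha=15):
    return score / math.sqrt(score * score + alpha)

for score in (1, 5, 15, 50):
    print(score, round(normalize(score), 3))
# 1 -> 0.25, 5 -> 0.791, 15 -> 0.968, 50 -> 0.997:
# growing sums of word scores pile up near the -1/1 bounds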

Kalman Filter corrected by known path

I am trying to get filtered velocity/spatial data from noisy position data from a tracked vehicle. I have a set of noisy position/time data = (x_i, y_i, t_i) and a known curve along which the vehicle is traveling, curve = (x(s), y(s)), where s is the total distance along the curve. I can run a Kalman filter on the data, but I don't know how to constrain it to the 'road' without throwing out data that is too far from the road, which I don't want to do.
Alternatively, I'm trying to estimate the value of s along the constrained path from position data that is noisy in x and y.
Does anyone have an idea of how to merge the two types of data?
Thanks!
Do you understand what a Kalman filter does? Fundamentally, it assigns a probability to each possible state given the observables. In simple cases, this doesn't use a priori knowledge. But in your case, you can simply set the off-road estimates to zero and renormalize the remaining probabilities.
Note: this isn't throwing out observations which are too far off the road, or even discarding outcomes which are too far off. It means that an apparent off-road position strongly increases the probability of an outcome on, but near the edge of, the road.
If you want the model to allow small excursions away from the road, you can use a fast-decaying function to model the low but non-zero probability of the car being off the road.
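As a sketch of that reweighting idea on a discretized (grid or particle) state estimate, where dist_to_road stands in for whatever distance-to-curve function you have:

import numpy as np

def reweight_by_road(probs, states, dist_to_road, sigma=2.0):
    # dist_to_road is a hypothetical helper returning each state's distance
    # from the curve; the Gaussian factor decays fast, so far-off-road states
    # keep a low but non-zero probability
    d = np.array([dist_to_road(s) for s in states])
    w = probs * np.exp(-0.5 * (d / sigma) ** 2)
    return w / w.sum()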
You could have as states the distance s along the path and the rate of change of s. The position observations x and y will then be non-linear functions of the state (assuming your track is not a straight line), so you'll need to use an extended or unscented filter.
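A minimal sketch of that measurement update, assuming a parameterized curve (here a circular placeholder track) and a numerical Jacobian:

import numpy as np

def curve(s):
    # placeholder track: a circle of radius 100; substitute your (x(s), y(s))
    return np.array([100 * np.cos(s / 100), 100 * np.sin(s / 100)])

def ekf_update(x, P, z, R, eps=1e-5):
    # state x = [s, s_dot]; measurement z = noisy (x, y) position
    z_pred = curve(x[0])
    # 2x2 measurement Jacobian: d(curve)/ds in the first column,
    # zeros in the second since the position doesn't depend on s_dot
    dh_ds = (curve(x[0] + eps) - curve(x[0] - eps)) / (2 * eps)
    H = np.column_stack([dh_ds, np.zeros(2)])
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - z_pred)
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new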

How to print probability for repeated measures logistic regression?

I would like SAS to print the probability of my binary dependent variable occurring (“Calliphoridae”, a particular fly family, being present (1) or not (0)) at a specific value of my continuous independent variable (“degree_index”, which was recorded from 0.055 to 2.89, but can be continuously recorded past 2.89 and always increases as time goes on), using PROC GENMOD. How do I change my code to print, for example, the probability that Calliphoridae is present at degree_index = 0.1?
My example code is:
proc genmod data=thesis descending;
  class Body_number;
  model Calliphoridae = degree_index / dist=binomial link=logit;
  repeated subject=Body_number / type=cs;
  estimate 'degreeindex=.1' intercept 1 degree_index 0 / exp;
  estimate 'degree_index=.2' intercept 1 degree_index .1 / exp;
run;
I get output for the contrast estimate results: the mean estimate at degree_index=.1 is .99; at degree_index=.2 it is .98.
I think that it is correctly modeling the probability; I just didn't include the square of the degree-day index. If you do, it allows the probability to increase and decrease. I realized this when I computed the probability by hand, e^(-1.1307x + .2119) / (1 + e^(-1.1307x + .2119)), to verify that this really was modeling the probability that y = 1 for the mean estimates at specific x values, and then I realized that it is fitting a regression line that cannot both increase and decrease because there is only one x term. http://www.stat.sc.edu/~hansont/stat704/chapter14a.pdf
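For completeness, the hand calculation in code, using the coefficients quoted above (a sketch of the inverse-logit arithmetic, not the GENMOD output):

import math

def prob_present(degree_index, b0=0.2119, b1=-1.1307):
    # inverse logit: p = e^eta / (1 + e^eta) with eta = b0 + b1*x;
    # the quadratic model described above would add a b2 * degree_index**2
    # term to eta, letting the probability rise and then fall
    eta = b0 + b1 * degree_index
    return math.exp(eta) / (1.0 + math.exp(eta))

print(prob_present(0.1))  # probability Calliphoridae is present at degree_index = 0.1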

Evaluating a rank based on several variables (game programming)

I am designing a fairly simple space combat desktop game with no graphics, but I want the back end to be robust enough for lots of expansion. I want to rank three different aspects of a ship's capabilities on a scale from 1 to 100 (although I'm willing to reconsider these numbers).
For instance, I have the hit-point section of the ship class as follows:
// section: private defense
float baseHull;
float hullMod;
float baseArmor;
float armorMod;
float baseShield;
float shieldMod;
float miscMod = 1.0f; // for rarer ship types, e.g. elites, bosses, stations, or the rich
These can be any arbitrary values for now; I haven't designed anything to fit in the variables yet, because I'm trying to figure out how to rank the ships based on these sections, one each for movement, hit points, and offensive capabilities. As an added bonus, a global ranking would be nice too. The hit-point section above would just be shown as "hitpoints" on screen, like 50,000 HP for a moderate support-class ship and 100 for the space shuttles we have on Earth.
The ranking would determine the likelihood of winning a fight and the XP rewarded for winning one. Adding them all up wouldn't work, because a ship with 10 meters of uranium plating isn't necessarily better than one with 1 meter of lead plating and shields. For reference, Earth clothing would be rank 1, an M1A1 tank would be around 5, and the Death Star would be up around 40-50.
I've been searching for ways to do this with real-world data, but I am neither a mathematics whiz nor a statistician. Is there a way to weight this into a handy function? And is it possible to reverse the function, i.e. input a value and have it assign the internals? (That would be really cool, but isn't necessary.)
Well, a simple way to combine those variables into a total hit-point value would be:
hitpoints = baseHull * hullMod + baseArmor * armorMod + baseShield * shieldMod;
You could then assign, say, values between 0 and 100 for the base values, determining "how much" hull, armor, and shield the ship has, and values between 1 and 10 for the modifiers, which define "how strong" each item is.
Calculating the winner of a fight could be done like this, for example:
totalPoints = ship1Points + ship2Points;
ship1won = (rand() % totalPoints) < ship1Points;
Where the points of the ships are values calculated from the hit points and the offensive values of the ships. You calculate the total points of the two combating ships and pick a random number between 0 and the total. If ship1Points is, say, 20, and ship2Points is 50, ship 1 has a 20/70 chance of winning. To skew the odds further toward the stronger ship (say you want to be more sure that the stronger ship wins), you could square both point values before the final calculation; note that multiplying both by the same constant would leave the odds unchanged.
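A compact sketch of that scheme, where the sharpness exponent is the knob described above for making the stronger ship win more reliably:

import random

def fight_winner(ship1_points, ship2_points, sharpness=1):
    # raising both scores to a power > 1 skews the odds toward the stronger ship
    p1 = ship1_points ** sharpness
    p2 = ship2_points ** sharpness
    return 1 if random.uniform(0, p1 + p2) < p1 else 2

# with 20 vs 50 points, ship 1 wins about 20/70 (~29%) of the time at
# sharpness=1, but only about 400/2900 (~14%) of the time at sharpness=2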