Does 3d position tracking require a linear or a non-linear Kalman filter?

I want to design a Kalman filter with the following details.
state vector = [Px, Py, Pz, Vx, Vy, Vz] (3D position, 3D velocity)
control input vector = [Ax, Ay, Az] (3D acceleration)
measurement vector = [Px, Py, Pz] (3D position)
To me it seems to be a non-linear problem due to the presence of the acceleration * t * t factor. However, I came across some videos dealing with such problems in a 2D scenario with a linear Kalman filter.
Could you please help me in clarifying whether my scenario is an EKF problem or a simple KF problem?

The 0.5*t*t term that comes from integrating the acceleration ends up in the control-input matrix B (and the plain t term in the transition matrix F); for a given time step both are matrices of constants. The requirement is linearity in terms of your state, and t is not part of your state, so your system as described is linear and a standard (linear) Kalman filter is sufficient.
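To make that concrete, here is a minimal sketch in plain NumPy (the time step and the noise covariances are placeholder values you would tune). The 0.5*dt² factor sits in the constant matrix B multiplying the control input, not in any function of the state, so an ordinary predict/update cycle works:

    import numpy as np

    dt = 0.1                      # time step (illustrative value)
    I3 = np.eye(3)
    Z3 = np.zeros((3, 3))

    # State x = [Px, Py, Pz, Vx, Vy, Vz], control u = [Ax, Ay, Az].
    # Both matrices below are constant for a given dt -- the 0.5*dt**2 factor
    # multiplies the control input, not the state, so the model stays linear.
    F = np.block([[I3, dt * I3],
                  [Z3, I3]])                 # state-transition matrix
    B = np.vstack([0.5 * dt**2 * I3,
                   dt * I3])                 # control-input matrix
    H = np.hstack([I3, Z3])                  # we measure position only

    Q = 1e-3 * np.eye(6)          # process-noise covariance (tuning parameter)
    R = 1e-2 * np.eye(3)          # measurement-noise covariance (tuning parameter)

    def kf_step(x, P, u, z):
        """One predict/update cycle of a standard (linear) Kalman filter."""
        # Predict
        x_pred = F @ x + B @ u
        P_pred = F @ P @ F.T + Q
        # Update
        y = z - H @ x_pred                          # innovation
        S = H @ P_pred @ H.T + R                    # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(6) - K @ H) @ P_pred
        return x_new, P_new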

Related

Keras LSTM: dropout vs recurrent_dropout

I realize this post is asking a similar question to this.
But I just wanted some clarification, preferably a link to some kind of Keras documentation that says the difference.
In my mind, dropout works between neurons, and recurrent_dropout works on each neuron between timesteps. But I have no grounding for this whatsoever.
The documentation on the Keras website is not helpful at all.
The Keras LSTM documentation contains a high-level explanation:
dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
But this totally corresponds to the answer you refer to:
Regular dropout is applied on the inputs and/or the outputs, meaning the vertical arrows from x_t and to h_t. ...
Recurrent dropout masks (or "drops") the connections between the recurrent units; that would be the horizontal arrows in your picture.
If you're interested in details at the formula level, the best way is to inspect the source code: keras/layers/recurrent.py. Look for rec_dp_mask (the recurrent dropout mask) and dp_mask: one affects h_tm1 (the previous hidden state), the other affects the inputs.
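For reference, a minimal sketch of where the two arguments go (assuming tf.keras; the unit count, rates, and input shape are placeholders):

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    model = Sequential([
        LSTM(64,                       # number of units (placeholder)
             dropout=0.2,              # applied to the input connections (x_t -> gates)
             recurrent_dropout=0.2,    # applied to the recurrent connections (h_{t-1} -> gates)
             input_shape=(None, 16)),  # (timesteps, features); placeholder shape
        Dense(1),
    ])

One practical note: in recent TensorFlow versions, a non-zero recurrent_dropout disables the fast cuDNN kernel, so training can be noticeably slower.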

Implementing a Kalman filter for position tracking given only position measurements (along with covariance)

I'm trying to track an object moving through space. The actual movement of the object should generally be fairly straight, and even when it is not straight it should be smooth.
My measurements consist of the 3D coordinates of the object, the timestamp, as well as a 3x3 covariance matrix, but that's it. I do not have the velocity or acceleration (except insofar as it could be estimated from different position measurements).
Is it possible for me to use a Kalman filter with this data?
Yes.
I wouldn't bother faking up velocity observations, as in effect the Kalman filter will be doing that for you.
I'd guess you'd want position and velocity in the state vector; whether to have acceleration too is trickier; if the object is turning/accelerating slowly I'd first try not having acceleration in the state.
I've found that most of the work in implementing such filters goes into tuning, that is choosing, and perhaps adapting, the process-noise covariance matrix.
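A minimal sketch of what that looks like in practice (plain NumPy; the constant-velocity model and the white-noise-acceleration process noise are assumptions you would tune). The measured 3x3 covariance slots straight in as R, and velocity is never observed directly, the filter infers it:

    import numpy as np

    def predict(x, P, dt, q=1.0):
        """Constant-velocity prediction; q scales the process noise (tuning knob)."""
        I3 = np.eye(3)
        F = np.block([[I3, dt * I3],
                      [np.zeros((3, 3)), I3]])
        # White-noise-acceleration process noise (one common choice).
        Q = q * np.block([[dt**4 / 4 * I3, dt**3 / 2 * I3],
                          [dt**3 / 2 * I3, dt**2 * I3]])
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, R):
        """Update with a 3-D position measurement z and its 3x3 covariance R."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])   # we only observe position
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(6) - K @ H) @ P

    # Usage: state is [Px, Py, Pz, Vx, Vy, Vz]; start with a large, uncertain P.
    x = np.zeros(6)
    P = np.eye(6) * 1e3
    # for t, z, R in measurements:   # (timestamp, 3-D position, 3x3 covariance)
    #     x, P = predict(x, P, dt=t - t_prev)
    #     x, P = update(x, P, z, R)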

Orthographic projection - What is the process converting 3d point to 2d

I'm implementing a simple penalty shootout game using ActionScript 3.0. The view of the game is similar to the view of the old "Sensible World of Soccer". I want to use 3D game logic with a z dimension, as I think it could help me achieve better collision detection and response. However, I would like to keep the graphics style and view equivalent to the old 2D soccer games. Hence, I assume that an orthographic projection is suitable for this implementation. Although there is plenty of information on the internet regarding orthographic projection, I'm a little confused about how to apply it in code.
So my questions are:
What is the step-by-step procedure for converting a 3D point (x, y, z) to a 2D point (x', y') with an orthographic projection?
Can we avoid using matrices? If yes, what are the equations that relate the coordinates x', y' to x, y, z?
Do we have to define a camera position and angle before applying the conversion? In my case, camera will be in a fixed position and angle.
DisplayObjects and their descendants (i.e. MovieClip and Sprite) have a z property you can use to do this without the headaches - they also have rotationX/Y/Z and scaleX/Y/Z properties!
Using 'z' will adjust the position and scale of an object accordingly (though it will convert vectors to bitmaps). There's no depth sorting, so an object will stay on top of others even if its z co-ordinate suggests it should be behind them, but for the project you have in mind I can't see this being a problem. It's pretty easy to fix anyway: keep an array of the objects in the scene, sort it by z-position, and reset the depth index of each (or re-add them to the stage) in sorted order.
You can use the perspectiveProjection member of a clip to adjust the FOV, origin etc -
Perspective Tutorial
..but you don't need to get any more sophisticated than that. Certainly you don't need to dabble with matrices for a fixed camera view, even if you wanted to calculate this manually as an experiment.
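If you do want to try the manual route, here is a minimal sketch of the equations for a fixed orthographic camera, with no matrix library (the tilt angle, scale, and screen offset are placeholders; it assumes the camera is tilted about the world x-axis, roughly the Sensible-Soccer-style view):

    import math

    def project_ortho(x, y, z, tilt_deg=60.0, scale=1.0, offset=(400, 300)):
        """Orthographic projection of a world point onto the screen.

        The camera is fixed: rotated by tilt_deg about the world x-axis and
        looking along the resulting depth axis.  With no tilt this reduces to
        simply dropping z (x' = x, y' = y).  All numbers here are placeholders.
        """
        t = math.radians(tilt_deg)
        # Rotate into camera space (rotation about the x-axis).
        x_cam = x
        y_cam = y * math.cos(t) - z * math.sin(t)
        depth = y * math.sin(t) + z * math.cos(t)   # only needed for depth sorting

        # Orthographic projection: drop the depth, apply a uniform scale + offset.
        screen_x = offset[0] + scale * x_cam
        screen_y = offset[1] - scale * y_cam        # screen y usually grows downward
        return screen_x, screen_y, depth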
Hope this helps

CUDA - Rotating particles

I'm new to CUDA and experimenting with the samples of the NVidia GPU SDK.
The goal is to rotate the spheres in the Particles example, so that while a sphere is falling it is also rotating. Any pointers, please?
In the particles sample the particles are just points with a radius. They have no angular momentum in the simulation because they are assumed to be point masses, not spheres (i.e. all of their mass is assumed to be exactly at their centers).
If you want to do this physically, you would have to use simplified rigid-body dynamics rather than just point masses.
If you just want to visually rotate the particles (non-physically), you can do that by just applying a rotation matrix to the GL matrix stack before you draw the object that you display for each particle (you mentioned a torus). This could be done in OpenGL, independently of the CUDA simulation code.
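A minimal sketch of the non-physical version (legacy fixed-function OpenGL through PyOpenGL; the particle fields, the spin rate, and the draw_particle_mesh helper are hypothetical names for illustration):

    from OpenGL.GL import glPushMatrix, glPopMatrix, glTranslatef, glRotatef

    def draw_rotating_particle(p, time_s, spin_deg_per_s=90.0):
        """Draw one particle's mesh with a purely visual spin about the y-axis."""
        glPushMatrix()
        glTranslatef(p.x, p.y, p.z)                        # position from the CUDA sim
        glRotatef(spin_deg_per_s * time_s, 0.0, 1.0, 0.0)  # cosmetic rotation only
        draw_particle_mesh(p.radius)                       # hypothetical mesh drawer
        glPopMatrix()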

Why use a Vector to represent velocity in a game?

Velocity = length / time, so why is a vector (x, y, z) used to represent it?
Technically speaking, length divided by time gives you speed, not velocity. Speed doesn't tell you which direction you are travelling in, while velocity does. In three-dimensional space, in order to describe where you are going and how fast, you need to supply three values: the signed speed along each of the three fundamental directions (normally called axes and referred to as x, y, and z). But you could refer to them as forward/backward, sideways, and up/down if you want. For example, if you are travelling at 5 km/hour upwards, the vector could be (0, 0, 5). Travelling at 5 km/hour downwards, your speed is just the same but the vector would be (0, 0, -5). Travelling at 5 km/hour at a 45 degree angle between forward and up, the speed along each of the x and z axes would be 5/√2 (about 3.5), so the vector would be approximately (3.5, 0, 3.5). And so on.
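A quick numeric check of that decomposition (plain Python; the axis convention matches the example above, with z pointing up):

    import math

    speed = 5.0                      # km/h
    angle = math.radians(45.0)       # 45 degrees between forward (x) and up (z)

    vx = speed * math.cos(angle)     # ~3.54 km/h forward
    vy = 0.0
    vz = speed * math.sin(angle)     # ~3.54 km/h upward
    velocity = (vx, vy, vz)

    # The magnitude of the vector recovers the scalar speed.
    magnitude = math.sqrt(vx * vx + vy * vy + vz * vz)
    assert abs(magnitude - speed) < 1e-9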
Because velocity is not "length/time". It is the first derivative of position. Position is a vector, and so its derivatives are also vectors.
Most likely to measure the object's change of position in three-dimensional space.
The magnitude of the vector is the speed you expect, and as the object changes direction, the vector's components change accordingly.
You would use a vector because you can have velocity in 3 dimensions. In other words, the 3D velocity is the combination of distance/time in all 3 dimensions. It might be better to name the variables xPrime, yPrime, and zPrime, so that the vector more clearly represents velocity, rather than position.
Perhaps it is the speed the object is moving in each of the directions of 3D space. Doing it this way means you can recover a direction of movement; after all, velocity is movement with a direction.