I can calculate the distance between an inclined line and my ball (using the normal vector), but how can I calculate the new velocity?
Anders' answer was a good one, but I realise that you may not have a great mathematical background, so I will elaborate. The problem you have at the moment is poorly stated. However, consider a ball hitting a plane with unit normal n: its incoming velocity is U and its reflected velocity is V.
This will allow us to derive the equation you require. Now, the scalar product of two vectors a and b, a.b, gives the magnitude of a multiplied by the projection of b onto a. Basically, if we take n as a unit vector (magnitude 1), then a.n gives the component of a which acts in the direction of n.
So, splitting the velocity into components parallel and perpendicular to the plane: to get the velocity V we first split U into components.
Perpendicular to the plane, in the direction n, we have the vector velocity w = (U.n) n. This means that we can in fact write U = (U.n) n + [U - (U.n) n]. This is saying that U is made up of the perpendicular component of itself plus the parallel component of itself. Now, -V is very similar to U, but its parallel component acts in the reverse direction, so we can write -V = (U.n) n - [U - (U.n) n].
Combining the above gives the result Anders stated, i.e. V = U - 2[(U.n) n]. The dot/scalar product is defined as a.b = |a||b|cos(A), where A is the angle between the vectors laid together tail-to-tail. This should enable you to solve your problem.
I hope this helps.
If the vector v = (vx, vy) is the initial velocity and the plane has normal n = (nx, ny), then the new, reflected velocity vector r will be
r = v − 2(v⋅n)*n
The product (v⋅n) is the dot product of v and n, defined as vx*nx + vy*ny. Note that the plane normal must be normalized (length 1.0). A related question with the same answer: https://math.stackexchange.com/questions/13261/how-to-get-a-reflection-vector
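For reference, here is a minimal Python sketch of that formula (the vectors are just illustrative values):

import numpy as np

def reflect(v, n):
    # Reflect velocity v off a surface with normal n: r = v - 2(v.n)n.
    n = n / np.linalg.norm(n)           # make sure the normal has length 1
    return v - 2 * np.dot(v, n) * n

v = np.array([3.0, -4.0])               # incoming velocity
n = np.array([0.0, 1.0])                # normal of a horizontal surface
print(reflect(v, n))                    # [3. 4.] -- the vertical component is flipped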
I need help understanding the shaping theorem for MDPs. Here's the relevant paper: https://people.eecs.berkeley.edu/~pabbeel/cs287-fa09/readings/NgHaradaRussell-shaping-ICML1999.pdf. It basically says that a Markov decision process with some reward function R(s, a, s') on transitions between states and actions has the same optimal policy as a different Markov decision process with its reward defined as R'(s, a, s') = R(s, a, s') + gamma*f(s') - f(s), where gamma is the time-discount rate.
I understand the proof, but it seems like a trivial case where it breaks down is when R(s, a, s') = 0 for all states and actions, and the agent is faced with the path A -> s -> B versus A -> r -> t -> B. With the original Markov process we get an expected value of 0 for both paths, so both paths are optimal. But with the potential added to each transition we get gamma^2*f(B) - f(A) for the first path and gamma^3*f(B) - f(A) for the second. So if gamma < 1 and f(B) > 0, the second path is no longer optimal.
Am I misunderstanding the theorem, or am I making some other mistake?
You are missing the assumption that for every terminal state s_T and starting state s_0 we have f(s_T) = f(s_0) = 0. (Note that in the paper there is an assumption that after a terminal state there is always a new starting state, so the potential "wraps around".)
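To make the telescoping concrete, here is a small Python check (the values of f are made up); with f(B) = 0 at the terminal state, both of the paths above give the same shaped return, namely -f(A):

gamma = 0.9
f = {'A': 1.0, 's': 2.0, 'r': 3.0, 't': 4.0, 'B': 0.0}   # f(B) = 0 because B is terminal

def shaped_return(path):
    # The original reward is 0 everywhere, so the return is just the discounted
    # sum of the shaping terms gamma*f(s') - f(s) along the path.
    return sum(gamma**k * (gamma * f[b] - f[a])
               for k, (a, b) in enumerate(zip(path, path[1:])))

print(shaped_return(['A', 's', 'B']))        # -1.0, i.e. -f(A)
print(shaped_return(['A', 'r', 't', 'B']))   # -1.0 as well: the paths tie again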
I am writing some CUDA code for finding the 3 parameters of a circle (centre X,Y & radius) from many (m) measurements of positions around the perimeter.
As m > 3, I am (successfully) using Singular Value Decomposition (SVD) for this purpose (using the cuSolver library). Effectively I am solving m simultaneous equations with 3 unknowns.
However, not all of my perimeter positions are valid (say q of them), and so I have to go through my initial set of m measurements and remove the q invalid ones. This involves moving the size-m data array from the card to the host, processing it linearly to remove the q invalid entries, and then reloading the smaller (m-q) array back onto the card...
My question is: if I were to set all terms on both sides of the q invalid equations to zero, could I just run the m equations (including the zeros) through my SVD analysis (without the data transfer etc.), or would this cause other problems?
My instinct tells me that this is a bit like applying weights to the data but instinct and SVD are not terms that sit well together in my experience...
I am hesitant just to try this as I don't know if it will work in some cases and not in others...
I have tested the idea by inserting rows of zeros into my matrix. The solution that I am getting is not significantly affected by this.
So I am answering my own question with a non-rigorous: yes, it is OK to do this.
If anybody has a more rigorous or more considered answer I would very much like to hear it.
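For what it's worth, here is a small NumPy check of the idea (not CUDA, and the circle-fit parameterisation below is just one common linearisation): rows that are zero on both sides contribute nothing to the least-squares residual, so the SVD-based solution matches the one obtained by removing those rows, as long as enough valid rows remain.

import numpy as np

rng = np.random.default_rng(0)
cx, cy, r = 2.0, -1.0, 5.0
t = rng.uniform(0, 2 * np.pi, 20)
x = cx + r * np.cos(t) + 0.01 * rng.standard_normal(20)
y = cy + r * np.sin(t) + 0.01 * rng.standard_normal(20)

# Linearised circle fit: 2*cx*x + 2*cy*y + c = x^2 + y^2, with c = r^2 - cx^2 - cy^2.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2

# Drop five "invalid" rows vs. zeroing them on both sides.
keep = np.ones(20, dtype=bool)
keep[5:10] = False
sol_removed, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)

A_zeroed, b_zeroed = A.copy(), b.copy()
A_zeroed[~keep] = 0.0
b_zeroed[~keep] = 0.0
sol_zeroed, *_ = np.linalg.lstsq(A_zeroed, b_zeroed, rcond=None)

print(sol_removed)   # cx, cy, c from the reduced system
print(sol_zeroed)    # identical (up to rounding) with the zero rows left in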
I have two matrices, A (32*32) and B (32*n), in which n comes from the input and is between 2000 and 2000000.
I have two kinds of input: one contains integers between 0 and 255, and the other contains only 0 and 1. This multiplication is in a loop that iterates 3000 times. B (32*n) comes from the input and is constant across all of the iterations, but A (32*32) can change in each iteration.
//read B from file
//read A from file
double D[3000];
for(int i = 0; i < 3000; i++)
{
C = multiply(A, B);
// D[i] = mean of all elements in C
// build A from B using D[i] (this part is really complicated sequential process that contains lots of if and switches)
}
What is the fastest way to do this?
Thank you.
Nobody here is going to write code for you; that is not what Stack Overflow is intended for. However, it would appear that there are a number of characteristics of the problem which you should be looking to exploit to improve the performance of your code:
Recognise that because one of the matrices only contains 0 or 1 and you are performing this in integer arithmetic, what you are describing as matrix multiplication is really a large number of independent sparse sums
Recognise that because the next operation is to compute an average, you don't actually have to store the intermediate dot products and could directly perform a reduction on partial results of the matrix row summation
There are probably parallel primitives in the thrust library which you could use for prototyping, and an optimal hand written kernel would be aiming to fuse both the first and most of the second part of the operation into a single kernel.
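As an illustration of the second point (a NumPy sketch only, not a CUDA kernel): the mean of all elements of C = A*B can be computed without ever forming C, because the total sum factors into the column sums of A dotted with the row sums of B, and the row sums of B are constant across iterations, so they can be precomputed once outside the loop.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
A = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
B = rng.integers(0, 2, size=(32, n)).astype(np.float64)

b_rowsum = B.sum(axis=1)                            # precompute once, length 32

mean_naive = (A @ B).mean()                         # forms the full 32 x n product
mean_fused = A.sum(axis=0) @ b_rowsum / (32 * n)    # no intermediate product

print(mean_naive, mean_fused)                       # equal up to rounding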
Let there be the following definition of the gradient descent cost function: J(theta) = 1/(2*m) * sum over i of (h_theta(x^(i)) - y^(i))^2,
with the hypothesis function defined as h_theta(x) = theta' * x.
what I've come up with for multivariate linear regression is
theta = theta - alpha * 1/m * ([theta', -1]*[X';y']*X)';
h_theta = 1/(2*m)* (X*theta - y)'*(X*theta-y);
(Octave notation: ' means matrix transpose, [A, n] means adding a new column with scalar value n to matrix A, and [A; B] means appending matrix B to matrix A row-wise)
It's doing its job correctly as far as I can tell (the plots look OK); however, I have a strong feeling that it's unnecessarily complicated.
How to write it with as little matrix operations as possible (and no element-wise operations, of course)?
I don't think that is unnecessarily complicated; rather, this is what you want. Matrix operations are good because you don't have to loop over elements yourself or do element-wise operations. I remember taking a course online, and my solution seems pretty similar.
The way you have it is the most efficient way of doing it, as it is fully vectorized. It could be done with a for loop over the summation and so on, but that is very inefficient in terms of processing power.
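For comparison, the update in the question is algebraically the same as the textbook vectorized form theta = theta - (alpha/m) * X' * (X*theta - y), since [theta', -1]*[X'; y'] = (X*theta - y)'. Here is a minimal NumPy sketch of that form (assuming X already includes the column of ones for the intercept):

import numpy as np

def gradient_descent_step(theta, X, y, alpha):
    # theta <- theta - (alpha/m) * X' * (X*theta - y)
    m = y.shape[0]
    return theta - (alpha / m) * (X.T @ (X @ theta - y))

def cost(theta, X, y):
    # J(theta) = 1/(2m) * (X*theta - y)' * (X*theta - y)
    m = y.shape[0]
    residual = X @ theta - y
    return (residual @ residual) / (2 * m)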
Are there any examples available that give a hands-on example of Principal Component Analysis on a dataset? I am reading articles discussing only theory and am really looking for something that will show me how to use PCA and then interpret the results and transform the original dataset into the new dataset. Any suggestions please?
If you know Python, here is a short hands-on example:
import numpy as np
import scipy.linalg
from matplotlib import pylab

# Generate correlated data from uncorrelated data.
# Each column of X is a 3-dimensional feature vector.
Z = np.random.randn(3, 1000)
C = np.random.randn(3, 3)
X = np.dot(C, Z)

# Visualize the correlation among the features.
pylab.scatter(X[0, :], X[1, :])
pylab.scatter(X[0, :], X[2, :])
pylab.scatter(X[1, :], X[2, :])

# Perform PCA. It can be shown that the principal components of the
# matrix X are equivalent to the left singular vectors of X, which are
# equivalent to the eigenvectors of X X^T (up to indeterminacy in sign).
U, S, Vh = scipy.linalg.svd(X)
W, Q = scipy.linalg.eig(np.dot(X, X.T))
print(U)
print(Q)

# Project the original features onto the eigenspace.
Y = np.dot(U.T, X)

# Visualize the absence of correlation among the projected features.
pylab.scatter(Y[0, :], Y[1, :])
pylab.scatter(Y[1, :], Y[2, :])
pylab.scatter(Y[0, :], Y[2, :])
You can check http://alias-i.com/lingpipe/demos/tutorial/svd/read-me.html. SVD and LSA are approaches very similar to PCA; both are dimensionality-reduction methods. The only difference is in how the basis is evaluated.
Since you're asking for available hands-on examples, here is an interactive demo to play with.