I have the following problem. I have a 2D numpy matrix of integers, let's say they're in the range 0-19. The matrix has shape (r, c). Call this matrix M. I have an additional array, a lookup table A of length 20 (one entry per possible value), whose elements are small numpy vectors, each of length N. What I want to do is to create a new matrix of shape (r, c, N) where I've replaced each integer in the original matrix by its lookup-table counterpart.
Simple enough: it's just a function on a matrix which produces a new matrix with an additional dimension. I have written code for this which looks like:
num_rows, num_cols = M.shape
result = np.zeros((num_rows, num_cols, N))
for col in range(num_cols):
    for row in range(num_rows):
        idx = M[row, col]
        result[row, col] = np.array(A[idx])
This works, but the problem is that it's too slow: if M has 500,000 elements it takes about 600 ms per matrix, which is the bottleneck in my code. There has to be a clever numpy way of handling this which is faster, but I can't think of it.
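For what it's worth, the standard vectorized idiom here is NumPy fancy indexing: stack the lookup table into one 2-D array and index it with M directly. A minimal sketch, assuming each entry of A has the same length N so that A converts cleanly to a 2-D array:

import numpy as np

# Stack the lookup table into a single (20, N) array once.
lut = np.asarray(A)

# Fancy indexing looks up every element of M at once,
# producing the (r, c, N) result in a single vectorized step.
result = lut[M]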
Suppose results is a vector of named tuples of length M, where typeof(results) = Vector{NamedTuple{(:x, :p, :step), Tuple{Vector{Float64}, Vector{Float64}, Int64}}}.
p is also a vector, of length N, where typeof(results[1].p) = Vector{Float64}. I want to extract the first N-1 elements of every p inside results and express them as an M x (N-1) matrix. I know how to do it in a for loop, but is there a more elegant way of doing it?
These should both do what you ask, though they return an (N-1) x M matrix; the two are essentially equivalent:
hcat(map(x->x.p[1:N-1], results)...)
hcat([x.p[1:N-1] for x in results]...)
For the M x (N-1) output,
vcat(map(x->x.p[1:N-1]', results)...)
vcat([x.p[1:N-1]' for x in results]...)
should work but it is a bit slower.
The conventional 2-loop comprehension may be 10x faster than vcat for this, and you get an M x (N-1) matrix directly:
[results[i].p[j] for i=1:M, j=1:N-1]
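As an aside, splatting a long list into hcat(...) has overhead of its own; reduce(hcat, ...) has a specialized fast method in Julia, so a sketch along these lines may also be competitive (timings will depend on sizes):

# reduce(hcat, ...) avoids the splatting overhead of hcat(xs...);
# the result is (N-1) x M, so transpose it to get M x (N-1).
P = permutedims(reduce(hcat, [x.p[1:N-1] for x in results]))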
For example, ndgrid takes n vectors as inputs, where n is decided by the user.
For my use case, n can change. I would like to prepare the input list beforehand, based on n, and then feed the inputs to ndgrid. How do I do that, please?
For example, say I have 3 row vectors x1, x2, and x3. Then, if n=3 and I set inputs = [x1,x2,x3] and call ndgrid(inputs), MATLAB treats this as ndgrid([x1,x2,x3]) instead of ndgrid(x1,x2,x3). I want the latter, not the former. How do I solve this, please?
With a cell array, for example x = {some,stuff,inside}, you can unpack the cell with x{:} in a function call; each element of the cell is then passed as a separate argument: myfunction(x{:}) is equivalent to myfunction(some, stuff, inside).
In your case:
% Number of inputs
k = 3;
% Put your k inputs in a cell array
x = {1:3,1:3,1:3};
% If needed you can also have a dynamic output variable
X = cell(k,1);
% Get the result
[X{:}] = ndgrid(x{:})
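For the question's concrete case, the same pattern reads as follows (the small row vectors below are hypothetical stand-ins for x1, x2, x3):

% Hypothetical inputs standing in for the question's x1, x2, x3
x1 = 1:2; x2 = 1:3; x3 = 1:4;
inputs = {x1, x2, x3};          % cell array, not numeric concatenation
X = cell(numel(inputs), 1);
[X{:}] = ndgrid(inputs{:});     % equivalent to ndgrid(x1, x2, x3)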
I'm currently trying to solve Pendulum-v0 from the OpenAI Gym environment, which has a continuous action space. As a result, I need to use a Normal distribution to sample my actions. What I don't understand is the dimension of the log_prob when using it:
import torch
from torch.distributions import Normal
means = torch.tensor([[0.0538],
[0.0651]])
stds = torch.tensor([[0.7865],
[0.7792]])
dist = Normal(means, stds)
a = torch.tensor([1.2,3.4])
d = dist.log_prob(a)
print(d.size())
I was expecting a tensor of size 2 (one log_prob for each action), but it outputs a tensor of size (2, 2).
However, when using a Categorical distribution for a discrete environment, the log_prob has the expected size:
from torch.distributions import Categorical

logits = torch.tensor([[-0.0657, -0.0949],
                       [-0.0586, -0.1007]])
dist = Categorical(logits=logits)
a = torch.tensor([1, 1])
print(dist.log_prob(a).size())
gives me a tensor of size (2).
Why is the log_prob for the Normal distribution a different size?
If one takes a look at the source code of torch.distributions.Normal and finds the definition of the log_prob(value) function, one can see that the main part of the calculation is:
return -((value - self.loc) ** 2) / (2 * var) - some other part
where value is a variable containing the values for which you want to calculate the log probability (in your case, a), self.loc is the mean of the distribution (in your case, means) and var is the variance, that is, the square of the standard deviation (in your case, stds**2). One can see that this is indeed the logarithm of the probability density function of the normal distribution, minus some constants and the logarithm of the standard deviation, which I don't write above.
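For reference, the full log-density is

\log p(x) = -\frac{(x - \mu)^2}{2\sigma^2} - \log\sigma - \frac{1}{2}\log(2\pi)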
In the first example, you define means and stds to be column vectors, while the values a form a row vector:
means = torch.tensor([[0.0538],
[0.0651]])
stds = torch.tensor([[0.7865],
[0.7792]])
a = torch.tensor([1.2,3.4])
But subtracting a row vector from a column vector, which is what value - self.loc does here, broadcasts to a matrix (try it!), so the result you obtain is a log_prob value for each of your two defined distributions and for each of the values in a.
If you want to obtain a log_prob without the cross terms, then define the variables consistently, i.e., either
means = torch.tensor([[0.0538],
[0.0651]])
stds = torch.tensor([[0.7865],
[0.7792]])
a = torch.tensor([[1.2],[3.4]])
or
means = torch.tensor([0.0538,
0.0651])
stds = torch.tensor([0.7865,
0.7792])
a = torch.tensor([1.2,3.4])
This is what you do in your second example, which is why you obtain the result you expected.
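A quick shape check makes the broadcasting visible (a sketch; only the printed sizes matter here, not the particular numbers):

import torch
from torch.distributions import Normal

means = torch.tensor([[0.0538], [0.0651]])  # shape (2, 1)
stds = torch.tensor([[0.7865], [0.7792]])   # shape (2, 1)

a_row = torch.tensor([1.2, 3.4])            # shape (2,)
a_col = torch.tensor([[1.2], [3.4]])        # shape (2, 1)

dist = Normal(means, stds)

# (2, 1) broadcast against (2,) gives (2, 2): one entry per
# distribution-value pair, including the unwanted cross terms.
print(dist.log_prob(a_row).size())  # torch.Size([2, 2])

# Consistent shapes give one log_prob per action.
print(dist.log_prob(a_col).size())  # torch.Size([2, 1])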
I have a vector of size M (say 500), which I up-sample by a factor of MM = 500, so that my new vector is now of size N = 500 x 500 = 250,000. I am using an optimisation algorithm, and need to carry out the FFT/DFT of the up-sampled vector of size N using the DFT matrix, not the built-in function.
However, this becomes prohibitive due to memory constraints. Is there any way to go about it? I have seen a similar question here, Huge Fourier matrix - MATLAB, but that is regarding a huge matrix, where the solution is to break the matrix into columns and do the operation column by column. In my case, the vector has 250,000 rows.
Would it be wise to split the rows into pieces, say 500 each, iterate the same thing 500 times, and concatenate the results at the end?
If using the FFT is an option, the matrix of twiddle factors does not appear explicitly, so the actual memory requirements are on the order of O(N).
If you must use the explicit DFT matrix, then it is possible to break down the computation using submatrices of the larger DFT matrix. Given an input x of length N, and assuming we wish to divide the large DFT matrix into BlockSize x BlockSize submatrices, this can be done with the following MATLAB code:
y = zeros(size(x));
Imax = ceil(N / BlockSize); % divide the rows into Imax chunks
Jmax = ceil(N / BlockSize); % divide the columns into Jmax chunks
% iterate over the blocks
for i=0:Imax-1
imin = i*BlockSize;
imax = min(i*BlockSize+BlockSize-1,N-1);
for j=0:Jmax-1
jmin = j*BlockSize;
jmax = min(j*BlockSize+BlockSize-1,N-1);
[XX,YY] = meshgrid(jmin:jmax, imin:imax);
% compute the DFT submatrix
W = exp(-2 * pi * 1i * XX .* YY / N);
% apply the DFT submatrix on a chunk of the input and add to the output
y([imin:imax] + 1) = y([imin:imax] + 1) + W * x([jmin:jmax] + 1);
end
end
If needed, it would be fairly easy to adapt the above code to use a different block size along the rows than along the columns.
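A minimal self-contained sanity check of this blocked approach on a small case (hypothetical sizes), comparing against MATLAB's built-in fft:

% Self-contained check: the blocked DFT should match the built-in FFT
N = 16;
BlockSize = 4;
x = randn(N, 1);
y = zeros(size(x));
for i = 0:ceil(N / BlockSize) - 1
    imin = i * BlockSize;
    imax = min(imin + BlockSize - 1, N - 1);
    for j = 0:ceil(N / BlockSize) - 1
        jmin = j * BlockSize;
        jmax = min(jmin + BlockSize - 1, N - 1);
        [XX, YY] = meshgrid(jmin:jmax, imin:imax);
        W = exp(-2 * pi * 1i * XX .* YY / N);
        y((imin:imax) + 1) = y((imin:imax) + 1) + W * x((jmin:jmax) + 1);
    end
end
disp(norm(y - fft(x)))   % should be on the order of 1e-14 (numerical noise)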
Suppose function g takes a function f as a parameter, and inside g we have something like
x = t*feval(f, u);
however, f can be either scalar-valued or vector-valued. If it is vector-valued, we want x to be a vector as well, i.e., we want the feval statement to return the whole vector returned by f. How do we make this work for both the scalar and vector cases?
As far as I can tell, what you are asking for is already the default behavior in MATLAB.
This means that if f returns a scalar, x will be a scalar, and if it returns a vector, x will be a vector.
In your example, this holds as long as t is also a scalar; otherwise the result will depend on how t*[output of f] is evaluated.
Example
function o1 = f(N)
o1 = zeros(1,N);
end
Here f returns a scalar if N=1 and a vector for N>1.
Calling your code gives
x=feval('f', 1); % Returns x = 0
x=feval('f', 4); % Returns x = [0 0 0 0]
If the output of feval(f,u) can be either a scalar or a vector, and you want the result x to be the same (i.e., a scalar or a vector of the same length and dimension), then it will depend on what t is (see the short example after this list):
If t is a scalar, then what you have is fine. You can use either of the operators * or .* to perform the multiplication.
If t is a vector of the same length and dimension as the result from feval(f,u), then use the .* operator to perform element-wise multiplication.
If t is a vector of the same length but with a different dimension than the result from feval(f,u) (i.e., one is a row vector and one is a column vector), then you have to make the dimensions match by transposing one or the other with the .' operator.
If t is a different length than the result of feval(f,u), then you can't do element-wise multiplication.
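A small illustration of the first three cases (the values are hypothetical; the point is the operators):

v = [1 2 3];            % stand-in for the output of feval(f, u), a row vector
t_scalar = 2;
x1 = t_scalar * v;      % scalar t: * and .* both work, x1 = [2 4 6]

t_row = [10 20 30];     % same length and dimension as v
x2 = t_row .* v;        % element-wise product, x2 = [10 40 90]

t_col = [10; 20; 30];   % same length, different dimension
x3 = t_col.' .* v;      % transpose first, then element-wise, x3 = [10 40 90]
                        % note: without the transpose, R2016b+ would implicitly
                        % expand t_col .* v to a 3x3 matrix instead of erroring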