New to this forum.
I'm trying to run Octave's LU decomposition function with complete pivoting, like so:
[L, U, p, q] = lu(A)
for a matrix A I have and I keep getting this error:
"element number 4 undefined in return list"
Element 4 is the matrix of column permutations Q. What's going on? Why doesn't it show up? Thanks in advance.
If the matrix A is full, the lu function does not perform column exchanges in Octave; quoting the documentation:
When called with two or three output arguments and a spare[sic] input matrix, lu does not attempt to perform sparsity preserving column permutations. Called with a fourth output argument, the sparsity preserving column transformation Q is returned, such that P * A * Q = L * U.
So column permutations are only performed for sparse matrices, to preserve sparsity, and only when the fourth output argument is requested. The quote above uses "A", but per the function signature given at the top of the linked Octave documentation section, I believe they meant to write "S": "[L, U, P, Q] = lu (S)".
There does not appear to be a full-pivoting option for full matrices by default.
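A minimal sketch of the difference in Octave (any small test matrix will do; the names here are just illustrative):
A = rand(4);
[L, U, P] = lu(A);             % full matrix: row pivoting only, at most three outputs
[L, U, P, Q] = lu(sparse(A));  % sparse matrix: the column permutation Q is returned
% [L, U, P, Q] = lu(A);        % error: element number 4 undefined in return list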
I'll note that MATLAB has the same behavior for the fourth output of its lu:
Column permutation ... . Use this output to reduce the fill-in (number of nonzeros) in the factors of a sparse matrix.
This is a very simple question about the cuBLAS library which, strangely, I couldn't find an answer to in the documentation or elsewhere.
I am using a rather old version of cuBLAS (10.2), but it should not matter. I use cublasSgemm to multiply two 32-bit float matrices A * B and put the result in matrix C:
stat = cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T, nRows, k, nCols, alpha, A, nRows, B, k, beta, C, nRows);
Is it possible to make cuBLAS accumulate the result in C? That is, if C already contains some data, it would not be overwritten but instead accumulated with the multiplication result.
This can be used, for example, when memory is limited and one needs to split input matrices that are too big and multiply in several passes. However, I couldn't see such an option in cublasSgemm.
Is it possible to make cuBLAS accumulate the result in C? That is, if C already contains some data, it would not be overwritten but instead accumulated with the multiplication result.
Yes, cublasSgemm does exactly that. Referring to the documentation:
This function performs the matrix-matrix multiplication
C = α op(A) op(B) + β C
The β C term at the end is the accumulation part of the formula.
If you set beta to zero, then the previous contents of C will not be accumulated.
If you set beta to 1, then the previous contents of C will be added to the multiplication (AxB) result.
If you set beta to some other value, a scaled (multiplied) version of the previous contents of C will be added.
Note that this behavior is defined as part of the reference (netlib) BLAS specification; it works the same way in other BLAS libraries and is not unique or specific to cuBLAS.
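To make the role of beta concrete, here is the same formula written out as plain matrix arithmetic in Octave (a sketch of the semantics only, not a cuBLAS call; the sizes are arbitrary, and op(B) = B' mirrors the CUBLAS_OP_T in the question):
A = rand(4, 3);  B = rand(5, 3);  C = rand(4, 5);
alpha = 1;
C_beta0 = alpha * A * B';          % beta = 0: previous contents of C are discarded
C_beta1 = alpha * A * B' + 1 * C;  % beta = 1: previous contents of C are accumulated
C_beta2 = alpha * A * B' + 2 * C;  % other beta: a scaled copy of the old C is added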
In Octave, one can do an element-wise multiplication between a full matrix and a compatible (broadcastable) vector (i.e. MxN .* 1xN or MxN .* Mx1). But this does not seem to work for a sparse matrix.
Consider the following example,
v = (1:5)';
s = spdiags(v,0,5,5); % simple sparse matrix
s .* v; % <--- error 'nonconformant arguments (op1 is 5x5, op2 is 5x1)'
full(s) .* v; % <--- works but defeats the purpose of the sparse matrix
In the above simple case, with a diagonal sparse matrix, converting to a full matrix can be avoided by converting v to a diagonal matrix, i.e.
s * diag(v); % <--- returns the desired result
diag(v) * s; % <--- also returns the desired result
but for other cases, i.e. a non-diagonal sparse matrix, it gets unnecessarily complicated by the operand order.
Is there a trick to doing a broadcast operation with a sparse matrix? Or else, is this a bug or a feature (i.e. necessary)?
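For what it's worth, the diag trick generalizes to any sparse matrix as long as the diagonal factor itself is kept sparse; a small sketch (left-multiplying by a diagonal scales rows, matching the Mx1 broadcast; right-multiplying scales columns, matching the 1xN broadcast):
v = (1:5)';
s = sprand(5, 5, 0.3);      % an arbitrary (non-diagonal) sparse matrix
Dv = spdiags(v, 0, 5, 5);   % sparse diagonal matrix built from v
rowscaled = Dv * s;         % same values as full(s) .* v  (Mx1 broadcast), stays sparse
colscaled = s * Dv;         % same values as full(s) .* v' (1xN broadcast), stays sparse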
I'm a beginner with CUDA. I want to compute the SVD of a row-major matrix using the cuSOLVER API, but I'm confused about the leading dimension of matrix A.
I have a row-major 100x10 matrix (i.e. 100 data points in 10-dimensional space).
According to the CUDA documentation, the cusolverDnDgesvd function needs an lda parameter (the leading dimension of matrix A). My matrix is row-major, so I passed 10 to the gesvd function, but it did not work: the function indicated that my lda parameter was wrong.
OK, so I passed 100 instead. The function ran, but the results (U, S, Vt) seem to be wrong; I can't reconstruct the matrix A from U*S*Vt.
As far as I know, the cuSOLVER API assumes all matrices are column-major.
If I treat my matrix as column-major, m is smaller than n (it becomes 10x100), but the gesvd function only works for m >= n.
So yes, I'm stuck. How can I solve this problem?
Row-major, column-major and leading dimension are concepts related to storage. A matrix can be stored in either scheme while representing the same mathematical matrix.
To get the correct result, you could use cublasDgeam() to change your row-major 100x10 matrix into a column-major 100x10 matrix before calling cuSOLVER. This is equivalent to a matrix transpose while keeping the storage order.
There are many sources talking about storage ordering,
https://en.wikipedia.org/wiki/Row-major_order
https://fgiesen.wordpress.com/2012/02/12/row-major-vs-column-major-row-vectors-vs-column-vectors/
https://eigen.tuxfamily.org/dox-devel/group__TopicStorageOrders.html
Confusion between C++ and OpenGL matrix order (row-major vs column-major)
as well as leading dimension
http://www.ibm.com/support/knowledgecenter/SSFHY8_5.3.0/com.ibm.cluster.essl.v5r3.essl100.doc/am5gr_leaddi.htm
You should google them.
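To see concretely why the same buffer represents different matrices under the two conventions, here is a small illustration in Octave (a sketch of the storage-order idea only, not CUDA code):
A = reshape(1:12, 3, 4);      % a 3x4 matrix; Octave stores it column-major
buf = reshape(A.', 1, []);    % the same 12 numbers laid out in row-major order
reshape(buf, 4, 3)            % reading that row-major buffer as column-major yields A.'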
I need to calculate the following matrix product:
D * A
Where D is dense, and A is sparse, in CSC format.
cuSPARSE allows multiplying sparse * dense, where the sparse matrix is in CSR format.
Following a related question, I can "convert" CSC to CSR simply by transposing A.
Also I can calculate (A^T * D^T)^T, as I can handle getting the result transposed.
With this method I can also avoid "transposing" A, because A stored in CSC is the same as A^T stored in CSR.
The only problem is that cuSPARSE doesn't support transposing D in this operation, so I have to transpose it beforehand, or convert it to CSR, which is a total waste, as it is very dense.
Is there any workaround? Thanks.
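As a quick sanity check of the transpose identity this relies on, a small Octave sketch (arbitrary sizes, illustrative only):
D = rand(3, 4);                       % dense
A = sprand(4, 5, 0.3);                % sparse (stored CSC internally by Octave)
norm(full((A.' * D.').' - D * A))     % ~0 up to round-off: (A^T * D^T)^T equals D * A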
I found a workaround.
I changed the memory accesses to D in my entire code.
If D is an mxn matrix and I used to access it as D[j * m + i], I now access it as D[i * n + j], meaning I made it row-major instead of column-major.
cuSPARSE expects matrices in column-major format, and because a row-major matrix read as column-major is the transpose, I can pass D to the cuSPARSE functions as a "free" transpose without actually performing the transpose.
I started learning Octave recently. How do I generate a matrix from another matrix by applying a function to each element?
e.g.:
Apply 2x+1 or 2x/(x^2+1) or 1/x+3 to a 3x5 matrix A.
The result should be a 3x5 matrix with each value x replaced by, say, 2x+1.
For example, if A(1,1) = 1, then in the output matrix B
B(1,1) = 2*1 + 1 = 3
My main concern is how to apply a function that uses the value of each element x, such as 1/x or the other expressions above.
Regards.
You can try
B = A.*2 + 1
The dot makes the operator that follows it (here *) apply element-wise to the matrix.
You will find a lot of documentation for Octave in the distribution package and on the Web. Even better, you can usually also use the extensive MATLAB documentation.
ADDED. For more complex operations you can use arrayfun(), e.g.
B = arrayfun(@(x) 2*x/(x^2+1), A)
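For formulas like the ones in the question, element-wise operators also work directly on the whole matrix, without arrayfun (a minimal sketch):
B1 = 2*A + 1;             % 2x+1
B2 = 2*A ./ (A.^2 + 1);   % 2x/(x^2+1), using element-wise ./ and .^
B3 = 1./A + 3;            % 1/x+3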