I was wondering what the fastest way of computing a sparse matrix-vector product y = Ax in CUDA on multiple (let's say n) GPUs is.
My naive approach would be to divide the vectors x and y into n chunks, one chunk per GPU, and also to split the matrix A into n^2 smaller blocks A_{i,j}, computing
y_i = \sum_j A_{i,j} x_j, // GPU j stores A_{i,j} and x_j; the partial products are
                          // copied to and summed up on GPU i
on the different GPUs j = 1..n with, let's say, cuSPARSE. Would this work? With the unified memory architecture, in principle all GPUs should be able to access global memory.
Is the memory transfer between the GPUs going to be incredibly slow? I don't expect a large speed up but I was wondering if it is going to be slower than doing the matrix-vector multiplication on 1 single GPU.
I would suggest a different approach. Don't break up the vector x into chunks. Transfer x to all GPUs.
Break up the A matrix according to rows. So, for example, if A had 9 rows, and you have 3 GPUs, then transfer rows 1-3 of A to the first GPU, 4-6 of A to the second GPU, and 7-9 of A to the third GPU.
Then compute the 3 individual pieces of y on the 3 GPUs:
y[1-3] = A[1-3]*x
y[4-6] = A[4-6]*x
y[7-9] = A[7-9]*x
Each of those 3 operations could be done with cusparse<T>csrmv, for example (or CUB now has an spmv routine also).
Reassembly of the y vector should be trivial (concatenation).
There is no need for inter-GPU data transfer during the computation, only a transfer of the results (y).
A possible "optimization" would be to partition A based on "work" rather than naively by rows. But the benefit of this would depend on the structure of A, so would require analysis. A simplistic approach to this optimization could be to just break up A based on (approximately) equalizing the number of NZ elements in each chunk.
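For illustration, here is a minimal host-side sketch of that NNZ-balanced row split, assuming A is already in CSR form (the function and variable names are purely illustrative):

#include <vector>

// Split the rows of a CSR matrix into nGpus chunks with roughly equal
// numbers of nonzeros. rowPtr has numRows+1 entries, rowPtr[numRows] == nnz.
// Returns the first row of each chunk, plus numRows as a final sentinel.
std::vector<int> splitRowsByNnz(const std::vector<int>& rowPtr,
                                int numRows, int nGpus)
{
    std::vector<int> firstRow(nGpus + 1);
    const long long nnz = rowPtr[numRows];
    int row = 0;
    for (int g = 0; g < nGpus; ++g) {
        firstRow[g] = row;
        // Advance until roughly (g+1)/nGpus of the nonzeros are assigned.
        const long long target = nnz * (g + 1) / nGpus;
        while (row < numRows && rowPtr[row + 1] <= target)
            ++row;
    }
    firstRow[nGpus] = numRows;
    return firstRow;
}

GPU g then gets rows firstRow[g] .. firstRow[g+1]-1 plus a full copy of x; note that the chunk's row pointers have to be rebased by subtracting rowPtr[firstRow[g]] before the chunk is handed to cusparse<T>csrmv.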
This issue resembles a typical many-body problem, but with some extra calculations.
I am working on a generalized Metropolis Monte-Carlo algorithm for modeling a large number of arbitrary quantum systems (magnetic ions, for example) interacting classically with each other. But that actually doesn't matter for the question.
There are more than 100,000 interacting objects, and each one can be described by a coordinate and a set of parameters describing its current state, r_i and s_i.
These can be translated to C++/CUDA as two float4 vectors.
To update the system following the Monte-Carlo method for such systems, we need to randomly sample one object from the whole set; calculate the interaction function f(r_j - r_i, s_j) for it; substitute the result into some matrix and find its eigenvectors, from which a new state will be calculated.
The interaction is additive as usual, i.e. the total interaction is the sum over all pairs.
Formally, this can be decomposed into the following steps:
Generate a random number i
Calculate the interaction function f(r_j - r_i, s_j) for all possible pairs
Sum it up. The result is a vector F
Multiply it by some tensor and add another one: h = h + dot(F, t). Some basic linear algebra stuff.
Find the eigenvectors and eigenvalues, choose one vector V_k based on some simple algorithm, and write it back to the array s_j of all objects' states.
The big question is which parts of this can be computed in CUDA kernels.
I am quite new to CUDA programming. So far I have ended up with the following algorithm:
//a good random generator
std::mt19937 rng(std::random_device{}());
std::uniform_int_distribution<std::mt19937::result_type> random_sampler(0, N-1);
for(int i=0; i<a_lot; ++i) {
//sample a number of object
nextObject = random_sampler(rng);
//call kernel to calculate the interaction and sum it up by threads. also to write down a new state back to the d_s array
CUDACalcAndReduce<THREADS><<<blocksPerGrid, THREADS>>>(d_r, d_s, d_sum, newState, nextObject, previousObject, N);
//copy the sum
cudaMemcpy(buf, d_sum, sizeof(float)*4*blocksPerGrid, cudaMemcpyDeviceToHost);
//manually reduce the rest of the sum
total = buf[0];
for (int i=1; i<blocksPerGrid; ++i) {
total += buf[i];
}
//find eigenvalues and etc. and determine a new state of the object
//just linear algebra with complex numbers
newState = calcNewState(total);
//a new state will be written by CUDA function on the next iteration
//remember the previous number of the object
previousObject = nextObject;
}
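For reference, here is my guess at what a CUDACalcAndReduce-style kernel could look like; the interaction function is a trivial placeholder, newState is assumed to be a float4, previousObject is assumed to be initialized to -1 before the first iteration, and THREADS is assumed to be a power of two:

#include <cuda_runtime.h>

// Placeholder for the real interaction function f(r_j - r_i, s_j).
__device__ float4 interaction(float4 dr, float4 s)
{
    return make_float4(dr.x * s.x, dr.y * s.y, dr.z * s.z, dr.w * s.w);
}

template<int THREADS>
__global__ void CUDACalcAndReduce(const float4* d_r, float4* d_s, float4* d_sum,
                                  float4 newState, int nextObject,
                                  int previousObject, int N)
{
    __shared__ float4 cache[THREADS];
    const float4 ri = d_r[nextObject];
    float4 acc = make_float4(0.f, 0.f, 0.f, 0.f);

    // Grid-stride loop over all objects j; each j is visited by exactly one thread.
    for (int j = blockIdx.x * blockDim.x + threadIdx.x; j < N;
         j += gridDim.x * blockDim.x) {
        // Write the previously sampled object's new state in passing and use it.
        float4 sj = (j == previousObject) ? newState : d_s[j];
        if (j == previousObject) d_s[j] = newState;
        if (j == nextObject) continue;
        float4 rj = d_r[j];
        float4 dr = make_float4(rj.x - ri.x, rj.y - ri.y, rj.z - ri.z, rj.w - ri.w);
        float4 f = interaction(dr, sj);
        acc.x += f.x; acc.y += f.y; acc.z += f.z; acc.w += f.w;
    }

    // Standard shared-memory reduction of the per-thread partial sums.
    cache[threadIdx.x] = acc;
    __syncthreads();
    for (int s = THREADS / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) {
            cache[threadIdx.x].x += cache[threadIdx.x + s].x;
            cache[threadIdx.x].y += cache[threadIdx.x + s].y;
            cache[threadIdx.x].z += cache[threadIdx.x + s].z;
            cache[threadIdx.x].w += cache[threadIdx.x + s].w;
        }
        __syncthreads();
    }
    if (threadIdx.x == 0)
        d_sum[blockIdx.x] = cache[0]; // one partial sum per block, reduced on the host
}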
The problem is the continuous transfer of data between CPU and GPU; the actual number of bytes is blocksPerGrid*4*sizeof(float), which is sometimes just a few bytes. I optimized the CUDA code following the guide from NVIDIA and now it is limited by the bus speed between CPU and GPU. I guess switching to pinned memory will not make any sense, since the number of transferred bytes is low.
I used the NVIDIA Visual Profiler and it shows the following:
most of the time was wasted transferring the data to the CPU. As one can see in the inset, the speed is 57.143 MB/s and the size is only 64 B!
The question is: is it worth moving the logic of the eigenvalue algorithm into a CUDA kernel?
Then there would be no data transfer between CPU and GPU. The problem with this algorithm is that you can update only one object per iteration, which means that I can run the eigensolver only on one CUDA core. ;( Will it be so slow compared to my CPU that it will eliminate the advantage of keeping the data inside the GPU RAM?
The matrix size for the eigensolver does not exceed 10x10 complex numbers. I've heard that cuBLAS can be run fully from CUDA kernels without calling CPU functions, but I'm not sure how that is implemented.
UPD-1
As mentioned in the comment section:
For each iteration we need to diagonalize only one 10x10 complex Hermitian matrix, which depends on the total calculated interaction function f. In general we are not allowed to compute a new sum of f before we update the state of the sampled object based on the eigenvectors and eigenvalues of the 10x10 matrix.
Due to the stochastic nature of the Monte-Carlo approach we need all 10 eigenvectors to pick a new state for the sampled object.
However, the idea of double buffering suggested in the comments could work if we calculate the total sum of f for the next, j-th, iteration without the contribution of the i-th sampled object and then add it later. I need to test it carefully in action...
UPD-2
The specs are:
CPU: 4-core Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz
GPU: GTX 960
Quite outdated, but I might find access to a better system. However, switching to a GTX 1660 SUPER did not affect the performance, which means that the PCI bus is the bottleneck ;)
The question is: is it worth moving the logic of the eigenvalue algorithm into a CUDA kernel?
Depends on the system. Old cpu + new gpu? Both new? Both old?
Generally, a single CUDA thread is a lot slower than a single CPU thread, because the CUDA compiler does not vectorize its loops while the host C++ compiler does. So you need to use 10-100 CUDA threads to make the comparison fair.
For the optimizations:
According to the image, the algorithm currently loses about 1 microsecond as the serial part of each step. 1 microsecond is not much compared to the usual kernel-launch latency from the CPU, but it is big when it is the GPU launching the kernel itself (dynamic parallelism).
The CUDA graph feature lets the whole sequence of steps (kernels) be re-launched automatically and complete quicker if the steps are not CPU-dependent. It is intended for "graph"-like workloads where some kernel leads to multiple kernels and they later join in another kernel, etc.
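A minimal sketch of capturing one iteration's kernel sequence into a graph and re-launching it (the kernels are empty placeholders for your own steps, and the per-iteration parameters such as the sampled index are assumed to be read from device memory rather than passed as changing kernel arguments; this uses the CUDA 10/11-style five-argument cudaGraphInstantiate, which newer toolkits replace with a flags-only overload):

#include <cuda_runtime.h>

// Placeholder kernels standing in for the real per-iteration steps.
__global__ void calcAndReduceKernel()  { /* pairwise f() + block reduction */ }
__global__ void finishReductionKernel(){ /* final sum of per-block results */ }
__global__ void updateStateKernel()    { /* 10x10 eigensolve + state update */ }

int main()
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Record one iteration's kernel sequence into a graph.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    calcAndReduceKernel<<<256, 256, 0, stream>>>();
    finishReductionKernel<<<1, 256, 0, stream>>>();
    updateStateKernel<<<1, 32, 0, stream>>>();
    cudaStreamEndCapture(stream, &graph);

    cudaGraphExec_t graphExec;
    cudaGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0);

    // Re-launch the whole captured sequence each Monte-Carlo step
    // with a single API call instead of several individual launches.
    for (int step = 0; step < 100000; ++step)
        cudaGraphLaunch(graphExec, stream);
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    return 0;
}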
The CUDA dynamic-parallelism feature lets a kernel's CUDA threads launch new kernels. This has much better launch timings than launching from the CPU because it does not have to wait for synchronization between the driver and the host.
The copying for the sampling part could be done in chunks of 100-1000 elements at once and consumed by the CUDA side for 100-1000 steps, if all parts are in CUDA.
If I were to write it, I would do it like this (a code sketch follows below):
launch a parent loop kernel (only 1 CUDA thread)
start a loop in that kernel
do the real (child) kernel launching within the loop
since every iteration needs the serial part, it should sync before continuing to the next iteration
end the parent after a 100-1000 sized chunk is complete and get new random data from the CPU
When the parent kernel ends, it shows up in the profiler as a single kernel launch that takes a lot of time, and it doesn't have any CPU-based inefficiencies.
On top of the time saved from not synchronizing a lot, there would be consistent performance between the 10x10 matrix part and the other kernel part, because they always run on the same hardware, not on a separate CPU and GPU.
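A rough sketch of that parent-loop pattern, assuming the random object indices for a whole chunk have already been generated and copied to the device (the child kernels below are placeholders). Child kernels launched into the same device-side stream execute in launch order, so no explicit device-side synchronization is needed here:

#include <cuda_runtime.h>

// Placeholder child kernels standing in for the real per-iteration steps.
__global__ void calcAndReduceKernel(int objectIdx)  { /* pairwise f() + reduction */ }
__global__ void eigenAndUpdateKernel(int objectIdx) { /* 10x10 eigensolve + state update */ }

// Parent kernel: a single thread enqueues the per-iteration child kernels
// for a whole chunk of pre-generated random object indices. Kernels launched
// into the same stream execute in launch order, so each iteration's update
// finishes before the next iteration's interaction sum begins.
__global__ void parentLoopKernel(const int* d_randomIdx, int chunkSize)
{
    cudaStream_t s;
    cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking); // device-side streams must be non-blocking
    for (int it = 0; it < chunkSize; ++it) {
        int obj = d_randomIdx[it];
        calcAndReduceKernel<<<256, 256, 0, s>>>(obj);
        eigenAndUpdateKernel<<<1, 32, 0, s>>>(obj);
    }
    cudaStreamDestroy(s);
    // The parent kernel does not finish until all its children have finished.
}

// Host side: copy a chunk of random indices to d_randomIdx, then
//   parentLoopKernel<<<1, 1>>>(d_randomIdx, chunkSize);
// Build with relocatable device code: nvcc -rdc=true ... -lcudadevrt

Note that enqueuing this many child kernels from the device may require raising cudaLimitDevRuntimePendingLaunchCount with cudaDeviceSetLimit on the host.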
Since random-number generation is always an input for the system, it can at least be double-buffered to hide the CPU-to-GPU copy latency behind the computation. IIRC, random number generation is much cheaper than sending data over the PCIe bridge, so this would mostly hide the slowness of the data transfer.
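A sketch of that double buffering, assuming the random indices are generated on the host (buffer names and sizes are illustrative, and the consuming kernel is only indicated in a comment):

#include <cuda_runtime.h>
#include <random>

int main()
{
    const int N = 100000;        // number of objects
    const int chunkSize = 1000;  // random indices per chunk
    const int numChunks = 100;

    // Pinned host buffer so cudaMemcpyAsync can overlap with compute.
    int* h_idx;
    cudaHostAlloc(&h_idx, chunkSize * sizeof(int), cudaHostAllocDefault);

    // Two device buffers: one being consumed, one being filled.
    int* d_idx[2];
    cudaMalloc(&d_idx[0], chunkSize * sizeof(int));
    cudaMalloc(&d_idx[1], chunkSize * sizeof(int));

    cudaStream_t compute, copy;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&copy);

    std::mt19937 rng(1234);
    std::uniform_int_distribution<int> sample(0, N - 1);
    auto fillHostBuffer = [&]() {
        for (int i = 0; i < chunkSize; ++i) h_idx[i] = sample(rng);
    };

    // Prime the first device buffer.
    fillHostBuffer();
    cudaMemcpy(d_idx[0], h_idx, chunkSize * sizeof(int), cudaMemcpyHostToDevice);

    for (int c = 0; c < numChunks; ++c) {
        int cur = c % 2, nxt = 1 - cur;

        // Consume d_idx[cur] on the compute stream, e.g.
        // parentLoopKernel<<<1, 1, 0, compute>>>(d_idx[cur], chunkSize);

        // Meanwhile generate and upload the next chunk on the copy stream.
        if (c + 1 < numChunks) {
            fillHostBuffer();
            cudaMemcpyAsync(d_idx[nxt], h_idx, chunkSize * sizeof(int),
                            cudaMemcpyHostToDevice, copy);
        }
        cudaStreamSynchronize(copy);
        cudaStreamSynchronize(compute);
    }

    cudaStreamDestroy(copy);
    cudaStreamDestroy(compute);
    cudaFree(d_idx[0]); cudaFree(d_idx[1]);
    cudaFreeHost(h_idx);
    return 0;
}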
If it is a massively parallel experiment, like running the executable N times, you can still launch, say, 10 instances of the executable at once and let them keep the GPU busy with good efficiency. This is not practical if too much memory is required per instance. Most GPUs, except ancient ones, can run tens of kernels in parallel if each of them cannot fully occupy all of the GPU's resources.
I have n (very large) independent linear systems (Ax = b_i). They all have the same A, but b_i is different for (i = 1, ..., n). I want to solve these n systems in parallel in CUDA.
I was thinking that it might be most efficient to do the LU factorization of A on the host and then copy the factorized A to the constant memory of the GPU (because even if I did the LU on the device, only one thread would do it while the other threads stay idle; besides, constant memory is faster). Is there a better way to do this?
Another issue is that while all threads are solving their system at the same time with the same algorithm, they are all accessing the same memory location (A[i]) at the same time, which is not coalesced. How can I optimize this?
(This is assuming A is a stably-invertible n x n matrix.)
Don't solve a much harder problem just because it seems to parallelize better
Let B be the matrix whose columns are b_1 ... b_n. Under our assumptions about A, you actually need to solve the equation A X = B for an n x n matrix of variables, i.e. your solution is A^{-1}B.
So basically you have one matrix inversion and one matrix multiplication. This holds regardless of what software and hardware you're going to use. For inversion and multiplication just use CUBLAS, or cuSparse, or cuSOLVER, or ArrayFire or whatever solves these things the fastest.
You could do both of them together, I suppose, but I'm not sure there are optimizations for that.
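As a concrete illustration, a hedged sketch using cuSOLVER's dense LU routines: one factorization of A reused for all right-hand sides, which amounts to the same work as the inverse-times-B formulation (names and sizes are illustrative, error checking is omitted):

#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cusolverDn.h>

// Solve A X = B for an n x n matrix A and k right-hand sides stored as
// columns of B (column-major, leading dimension n). A and B live on the
// device and are overwritten: A by its LU factors, B by the solution X.
void solveManyRhs(double* d_A, double* d_B, int n, int k)
{
    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    int lwork = 0;
    cusolverDnDgetrf_bufferSize(handle, n, n, d_A, n, &lwork);

    double* d_work; int* d_ipiv; int* d_info;
    cudaMalloc(&d_work, lwork * sizeof(double));
    cudaMalloc(&d_ipiv, n * sizeof(int));
    cudaMalloc(&d_info, sizeof(int));

    // One LU factorization of A ...
    cusolverDnDgetrf(handle, n, n, d_A, n, d_work, d_ipiv, d_info);
    // ... then one triangular solve for all k right-hand sides at once.
    cusolverDnDgetrs(handle, CUBLAS_OP_N, n, k, d_A, n, d_ipiv, d_B, n, d_info);

    cudaFree(d_work); cudaFree(d_ipiv); cudaFree(d_info);
    cusolverDnDestroy(handle);
}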
I am running CUFFT on chunks (N*N/p) divided among multiple GPUs, and I have a question regarding calculating the performance. First, a bit about how I am doing it:
Send N*N/p chunks to each GPU
Batched 1-D FFT for each row in p GPUs
Get N*N/p chunks back to host - perform transpose on the entire dataset
Ditto Step 1
Ditto Step 2
Gflops = ( 1e-9 * 5 * N * N *lg(N*N) ) / execution time
and Execution time is calculated as:
execution time = Sum(memcpyHtoD + kernel + memcpyDtoH times for row and col FFT for each GPU)
Is this the correct way to evaluate CUFFT performance on multiple GPUs? Is there any other way I could represent the performance of FFT?
Thanks.
If you are doing a complex transform, the operation count is correct (it should be 2.5 N log2(N) for a real valued transform), but the GFLOP formula is incorrect. In a parallel, multiprocessor operation the usual calculation of throughput is
operation count / wall clock time
In your case, presuming the GPUs are operating in parallel, either measure the wall clock time (ie. how long the whole operation took) for the execution time, or use this:
execution time = max(memcpyHtoD + kernel + memcpyDtoH times for row and col FFT for each GPU)
As it stands, your calculation represents the serial execution time. Allowing for the overheads from the multigpu scheme, I would expect that the calculated performance numbers you are getting will be lower than the equivalent transform done on a single GPU.
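For instance, a minimal sketch of wall-clock timing around a multi-GPU loop (the per-GPU work is represented by a placeholder function):

#include <chrono>
#include <cuda_runtime.h>

// Placeholder for the real per-GPU work: H2D copy, batched 1-D FFTs, D2H copy.
void runFftOnGpu(int gpu) { /* cudaMemcpyAsync + cufftExecC2C + cudaMemcpyAsync */ }

int main()
{
    int p = 0;
    cudaGetDeviceCount(&p);

    auto t0 = std::chrono::high_resolution_clock::now();

    // Issue the work on every GPU; the calls should be asynchronous so
    // the devices actually run in parallel.
    for (int g = 0; g < p; ++g) {
        cudaSetDevice(g);
        runFftOnGpu(g);
    }
    // Wait for all GPUs to finish before stopping the clock.
    for (int g = 0; g < p; ++g) {
        cudaSetDevice(g);
        cudaDeviceSynchronize();
    }

    auto t1 = std::chrono::high_resolution_clock::now();
    double seconds = std::chrono::duration<double>(t1 - t0).count();
    // Gflops = 1e-9 * 5 * N * N * log2(N * N) / seconds  (complex transform)
    return 0;
}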
I am searching for special CUDA functions dedicated to typical dense matrix multiplications, e.g. A*B, where the size of A is 6*n, the size of B is n*6 and n is very large (n = 2^24). I have used cuBLAS and some other libraries to test this example. With cuBLAS, this example uses only 6*6 = 36 threads, which is far from the full parallelism of the GPU, so I split A and B into submatrices (vectors) and implemented a dot-product function for each of them, and the performance improved considerably. The problem is that in this case we need to launch 36 CUDA kernels, and between them there are a lot of overlapping data footprints (the same data is accessed several times from the GPU's global memory). So I am asking whether there is any solution to this kind of problem.
I have recently written such a matrix multiplication routine for a client of mine. The trick is to extract more parallelism by splitting the long inner summation into several smaller ones. Then use a separate kernel launch to calculate the full sum from the partial ones.
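As an illustration of that idea (my own sketch of the pattern, not the routine mentioned above): a first kernel computes one partial 6x6 product per chunk of the inner dimension, and a second kernel sums the partial products.

#include <cuda_runtime.h>

// A is 6 x n (row-major), B is n x 6 (row-major), C is 6 x 6 (row-major).
// Stage 1: each block accumulates a partial 6x6 product over one chunk
// of the inner dimension and writes it to partial[blockIdx.x].
__global__ void partialGemm6x6(const float* A, const float* B, float* partial,
                               long long n, long long chunk)
{
    int t = threadIdx.x;              // 36 threads, one per output element
    int row = t / 6, col = t % 6;
    long long begin = (long long)blockIdx.x * chunk;
    long long end = begin + chunk;
    if (end > n) end = n;

    float sum = 0.0f;
    for (long long k = begin; k < end; ++k)
        sum += A[row * n + k] * B[k * 6 + col];
    partial[(long long)blockIdx.x * 36 + t] = sum;
}

// Stage 2: a single block adds all partial 6x6 products into C.
__global__ void reducePartials(const float* partial, float* C, int numPartials)
{
    int t = threadIdx.x;              // 36 threads again
    float sum = 0.0f;
    for (int p = 0; p < numPartials; ++p)
        sum += partial[(long long)p * 36 + t];
    C[t] = sum;
}

// Usage sketch: with numChunks = (n + chunk - 1) / chunk,
//   partialGemm6x6<<<numChunks, 36>>>(d_A, d_B, d_partial, n, chunk);
//   reducePartials<<<1, 36>>>(d_partial, d_C, numChunks);

Each block is small, but many blocks run concurrently, which is where the extra parallelism comes from; a single kernel launch per stage also avoids re-reading the same data across 36 separate launches.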