How to compute the throughput of cuFFT in GFLOPs [duplicate] - cuda

I am running CUFFT on chunks (N*N/p) divided among multiple GPUs, and I have a question about calculating the performance. First, a bit about how I am doing it:
Send N*N/p chunks to each GPU
Batched 1-D FFT for each row in p GPUs
Get N*N/p chunks back to host - perform transpose on the entire dataset
Ditto Step 1
Ditto Step 2
Gflops = (1e-9 * 5 * N * N * log2(N*N)) / execution time
and Execution time is calculated as:
execution time = Sum(memcpyHtoD + kernel + memcpyDtoH times for row and col FFT for each GPU)
Is this the correct way to evaluate CUFFT performance on multiple GPUs? Is there any other way I could represent the performance of FFT?
Thanks.

If you are doing a complex transform, the operation count is correct (it should be 2.5 N log2(N) for a real valued transform), but the GFLOP formula is incorrect. In a parallel, multiprocessor operation the usual calculation of throughput is
operation count / wall clock time
In your case, presuming the GPUs are operating in parallel, either measure the wall clock time (i.e. how long the whole operation took) for the execution time, or use this:
execution time = max(memcpyHtoD + kernel + memcpyDtoH times for row and col FFT for each GPU)
As it stands, your calculation represents the serial execution time. Allowing for the overheads of the multi-GPU scheme, I would expect that the performance numbers you are calculating will be lower than for the equivalent transform done on a single GPU.
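For reference, a minimal sketch of timing the whole pipeline by wall clock and applying the model operation count. runRowFFTs and transposeOnHost are placeholders for the steps described above, assumed to block until all p GPUs have finished:

#include <chrono>
#include <cmath>
#include <cstdio>

// Placeholders for the pipeline steps; each returns only after all p GPUs are done.
void runRowFFTs(int N, int p);
void transposeOnHost(int N);

double benchmark2dFFT(int N, int p)
{
    auto t0 = std::chrono::high_resolution_clock::now();

    runRowFFTs(N, p);       // scatter chunks, batched row FFTs on p GPUs, gather
    transposeOnHost(N);     // transpose the full dataset on the host
    runRowFFTs(N, p);       // same again for the columns

    auto t1 = std::chrono::high_resolution_clock::now();
    double seconds = std::chrono::duration<double>(t1 - t0).count();

    // Model operation count for a complex N*N transform: 5 * N^2 * log2(N^2)
    double gflops = 1e-9 * 5.0 * N * double(N) * std::log2(double(N) * N) / seconds;
    std::printf("wall time %.3f s -> %.2f GFLOP/s\n", seconds, gflops);
    return gflops;
}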

Related

Optimising Monte-Carlo algorithm | Reduce operation on GPU & Eigenvalues problem | Many-body problem

This issue resembles a typical many-body problem, but with some extra calculations.
I am working on a generalized Metropolis Monte-Carlo algorithm for modeling a large number of arbitrary quantum systems (magnetic ions, for example) interacting classically with each other. But that doesn't actually matter for the question.
There are more than 100,000 interacting objects, each of which can be described by a coordinate and a set of parameters describing its current state: r_i, s_i.
These can be represented in CUDA C++ as two float4 vectors.
To update the system following the Monte-Carlo method, we need to randomly sample one object from the whole set, calculate the interaction function f(r_j - r_i, s_j) for it, substitute the result into a matrix, and find the eigenvectors of that matrix, from which a new state is calculated.
The interaction is additive as usual, i.e. the total interaction will be the sum between all pairs.
Formally this can be decomposed into steps
Generate random number i
Calculate the interaction function for all possible pairs f(r_j - r_i, s_j)
Sum it. The result will be a vector F
Multiply it by some tensor and add another one h = h + dot(F,t). Some basic linear algebra stuff.
Find the eigenvectors and eigenvalues, and, based on some simple algorithm, choose one vector V_k and write it back to the array s_j of all objects' states.
There is a big question, which parts of this can be computed on CUDA kernels.
I am quite new to CUDA programming. So far I ended up with the following algorithm
//a good random generator
std::uniform_int_distribution<std::mt19937::result_type> random_sampler(0, N-1);
for (int i = 0; i < a_lot; ++i) {
    //sample the index of an object
    nextObject = random_sampler(rng);
    //call a kernel to calculate the interaction and reduce it across threads,
    //and to write the new state of the previous object back to the d_s array
    CUDACalcAndReduce<THREADS><<<blocksPerGrid, THREADS>>>(d_r, d_s, d_sum, newState, nextObject, previousObject, N);
    //copy the per-block partial sums
    cudaMemcpy(buf, d_sum, sizeof(float)*4*blocksPerGrid, cudaMemcpyDeviceToHost);
    //manually reduce the rest of the sum on the host
    total = buf[0];
    for (int b = 1; b < blocksPerGrid; ++b) {
        total += buf[b]; //assumes a float4 operator+= (e.g. from helper_math.h)
    }
    //find eigenvalues etc. and determine the new state of the object
    //(just linear algebra with complex numbers)
    newState = calcNewState(total);
    //the new state will be written by the CUDA kernel on the next iteration
    //remember the index of the previous object
    previousObject = nextObject;
}
The problem is the continuous transfer of data between CPU and GPU, where the actual number of bytes is blocksPerGrid*4*sizeof(float), which is sometimes just a few bytes. I optimized the CUDA code following the NVIDIA guide, and it is now limited by the bus speed between CPU and GPU. I guess switching to pinned memory will not help much, since the number of transferred bytes is so low.
I used Nvidia Visual Profiler and it shows the following
most of the time was wasted transferring the data to the CPU. As the inset shows, the speed is 57.143 MB/s and the size is only 64 B!
The question is: is it worth moving the logic of the eigenvalue algorithm into a CUDA kernel?
Then there would be no data transfer between CPU and GPU at all. The problem with this algorithm is that you can only update one object per iteration, which means I could run the eigensolver only on a single CUDA core. ;( Will it be so slow compared to my CPU that it eliminates the advantage of keeping the data in GPU RAM?
The matrix size for the eigensolver does not exceed 10x10 complex numbers. I've heard that cuBLAS can be run fully from CUDA kernels without calling CPU functions, but I am not sure how that is implemented.
UPD-1
As mentioned in the comment section:
For each iteration we need to diagonalize only one 10x10 complex Hermitian matrix, which depends on the total calculated interaction function f. In general we are not allowed to compute a new sum of f before we have updated the state of the sampled object based on the eigenvectors and eigenvalues of that 10x10 matrix.
Due to the stochastic nature of Monte-Carlo approach we need all 10 eigenvectors to pick up a new state for the sampled object.
However, the double-buffering idea suggested in the comments could work if we calculate the total sum of f for the next, j-th, iteration without the contribution of the i-th sampled object and then add it later. I need to test it carefully in action...
UPD-2
The specs are
CPU: 4-core Intel(R) Core(TM) i5-6500 @ 3.20GHz
GPU: GTX 960
Quite outdated, but I might get access to a better system. However, switching to a GTX 1660 SUPER did not affect the performance, which means the PCIe bus is the bottleneck ;)
The question is: is it worth moving the logic of the eigenvalue algorithm into a CUDA kernel?
Depends on the system. Old CPU + new GPU? Both new? Both old?
Generally a single CUDA thread is a lot slower than a single CPU thread, because the CUDA compiler does not vectorize its loops while the host C++ compiler does. So you need to use 10-100 CUDA threads to make the comparison fair.
For the optimizations:
According to the image, it currently loses about 1 microsecond as a serial part of the overall algorithm. 1 microsecond is not much compared to the usual kernel-launch latency from the CPU, but it is big when it is the GPU launching the kernel itself (dynamic parallelism).
The CUDA graph feature lets the overall algorithm re-launch every step (kernel) automatically and complete more quickly if the steps are not CPU-dependent. But it is intended for "graph"-like workloads, where some kernel leads to multiple kernels which later join in another kernel, etc.
The CUDA dynamic parallelism feature lets a kernel's CUDA threads launch new kernels. This has much better timing than launching from the CPU because it avoids waiting for synchronization between driver and host.
The sampling part's copying could be done in chunks of 100-1000 elements at once and consumed by the CUDA side over 100-1000 steps, if all the other parts run on the GPU.
If I were to write it, I would do it like this:
launch a loop kernel (only 1 CUDA thread) that acts as the parent
start a loop in that kernel
do the real (child) kernel launching within the loop
since every iteration has a serial dependency, it should sync before continuing to the next iteration
end the parent after a chunk of 100-1000 steps is complete and get new random data from the CPU
When the parent kernel ends, it shows up in the profiler as a single kernel launch that takes a lot of time, and it has no CPU-based inefficiencies.
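A minimal sketch of that parent-loop idea, assuming legacy CUDA dynamic parallelism (compiled with -rdc=true for compute capability 3.5+; device-side cudaDeviceSynchronize was removed in CUDA 12, where a tail-launch scheme would be needed instead). The kernel bodies and the updateStateFromEigen helper are placeholders, not the poster's actual code:

//stand-in for the poster's child kernel: interaction terms + block-level partial sums
template<int THREADS>
__global__ void CUDACalcAndReduce(const float4 *d_r, const float4 *d_s,
                                  float4 *d_sum, int object, int N)
{
    //... f(r_j - r_i, s_j) and a shared-memory reduction go here ...
}

//placeholder for the 10x10 Hermitian eigensolve + state update, done by one thread
__device__ void updateStateFromEigen(float4 *d_s, const float4 *d_sum,
                                     int object, int blocks)
{
    //... sum the block partials, build the 10x10 matrix, diagonalize, write d_s[object] ...
}

//parent kernel: one thread drives a whole chunk of Monte-Carlo steps on the GPU,
//so the CPU is involved only once per chunk
//host launch: parentLoop<256><<<1, 1>>>(d_r, d_s, d_sum, d_randomObjects, 1000, blocksPerGrid, N);
template<int THREADS>
__global__ void parentLoop(const float4 *d_r, float4 *d_s, float4 *d_sum,
                           const int *d_randomObjects, int chunkSize,
                           int blocksPerGrid, int N)
{
    for (int step = 0; step < chunkSize; ++step) {
        int object = d_randomObjects[step];
        CUDACalcAndReduce<THREADS><<<blocksPerGrid, THREADS>>>(d_r, d_s, d_sum, object, N);
        cudaDeviceSynchronize();   //wait for the child grid (legacy CDP only)
        updateStateFromEigen(d_s, d_sum, object, blocksPerGrid);
    }
}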
On top of the time saved from not synchronizing so often, there would be consistency of performance between the 10x10 matrix part and the other kernel part, because they always run on the same hardware rather than on a mix of CPU and GPU.
Since random-number generation is always an input to the system, it can at least be double-buffered to hide the CPU-to-GPU copy latency behind the computation. IIRC, random number generation is much cheaper than sending data over the PCIe bridge, so this would mostly hide the data-transmission slowness.
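A minimal sketch of that double-buffering of the random indices, assuming they are produced on the host in chunks. fillRandomIndices, consumeChunk, CHUNK and numChunks are illustrative placeholders, and error checking is omitted:

#include <cstdlib>

//placeholder host RNG filling one chunk of object indices
void fillRandomIndices(int *dst, int n) { for (int i = 0; i < n; ++i) dst[i] = rand() % 100000; }

//placeholder consumer, e.g. the parent-loop kernel sketched above
__global__ void consumeChunk(const int *idx, int n) { /* ... */ }

void runDoubleBuffered(int numChunks)
{
    const int CHUNK = 1000;
    int *h_idx[2], *d_idx[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMallocHost((void**)&h_idx[b], CHUNK * sizeof(int));   //pinned, so the copy can be async
        cudaMalloc((void**)&d_idx[b], CHUNK * sizeof(int));
        cudaStreamCreate(&stream[b]);
    }

    fillRandomIndices(h_idx[0], CHUNK);
    cudaMemcpyAsync(d_idx[0], h_idx[0], CHUNK * sizeof(int),
                    cudaMemcpyHostToDevice, stream[0]);

    for (int chunk = 0; chunk < numChunks; ++chunk) {
        int cur = chunk & 1, nxt = cur ^ 1;
        if (chunk + 1 < numChunks) {                 //stage the next chunk while this one runs
            fillRandomIndices(h_idx[nxt], CHUNK);
            cudaMemcpyAsync(d_idx[nxt], h_idx[nxt], CHUNK * sizeof(int),
                            cudaMemcpyHostToDevice, stream[nxt]);
        }
        consumeChunk<<<1, 1, 0, stream[cur]>>>(d_idx[cur], CHUNK);
        cudaStreamSynchronize(stream[cur]);
    }
}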
If it is a massively parallel experiment, like running the executable N times, you can still launch something like 10 instances of the executable at once and let them keep the GPU busy with good efficiency. This is not practical if too much memory is required per instance. Most GPUs, except ancient ones, can run tens of kernels in parallel if each of them cannot fully occupy all the resources of the GPU.

Sparse Matrix Vector Product on Multiple GPUs

I was wondering what the fastest way of computing a sparse matrix-vector product y = Ax in CUDA on multiple (let say n) GPUs is.
My naive approach would be to divide the vectors x and y into n chunks, 1 chunk on each GPU. Then also split up the matrix A into n^2 smaller blocks A_ij and compute
y_i = \sum_j A_{i,j} x_j, // GPU j stores A_{i,j} and x_j, result is copied
// to and summed up on GPU i
on the different GPUs j = 1..n with, let's say, cuSPARSE. Would this work? With the unified memory architecture, in principle all GPUs should be able to access the global memory.
Is the memory transfer between the GPUs going to be incredibly slow? I don't expect a large speed up but I was wondering if it is going to be slower than doing the matrix-vector multiplication on 1 single GPU.
I would suggest a different approach. Don't break up the vector x into chunks. Transfer x to all GPUs.
Break up the A matrix according to rows. So, for example, if A had 9 rows, and you have 3 GPUs, then transfer rows 1-3 of A to the first GPU, 4-6 of A to the second GPU, and 7-9 of A to the third GPU.
Then compute the 3 individual pieces of y on the 3 GPUs:
y[1-3] = A[1-3]*x
y[4-6] = A[4-6]*x
y[7-9] = A[7-9]*x
Each of those 3 operations could be done with cusparse<T>csrmv, for example (or CUB now has an spmv routine also).
Reassembly of the y vector should be trivial (concatenation).
There is no need for inter-GPU data transfer during the computation, only a transfer of results (y).
A possible "optimization" would be to partition A based on "work" rather than naively by rows. But the benefit of this would depend on the structure of A, so would require analysis. A simplistic approach to this optimization could be to just break up A based on (approximately) equalizing the number of NZ elements in each chunk.

How to Benchmark GPU with CUDA GFLOPs/Watt [duplicate]

I want a measure of how much of the peak performance my kernel achieves.
Say I have an NVIDIA Tesla C1060, which has a peak of 622.08 GFLOPS (~= 240 cores * 1300 MHz * 2).
Now in my kernel I counted 16000 FLOPs per thread (4000 x (2 subtractions, 1 multiplication and 1 sqrt)). So when I have 1,000,000 threads I would come up with 16 GFLOP. And as the kernel takes 0.1 seconds I would achieve 160 GFLOPS, which would be a quarter of the peak performance. Now my questions:
Is this approach correct?
What about comparisons (if(a>b) then....)? Do I have to consider them as well?
Can I use the CUDA profiler for easier and more accurate results? I tried the instruction counter, but I could not figure out what the figure means.
sister question: How to calculate the achieved bandwidth of a CUDA kernel
First some general remarks:
In general, what you are doing is mostly an exercise in futility and is the reverse of how most people would probably go about performance analysis.
The first point to make is that the peak value you are quoting is strictly for floating point multiply-add instructions (FMAD), which count as two FLOPS and can be retired at a maximum rate of one per cycle. Other floating point operations which retire at a maximum rate of one per cycle would formally only be classified as a single FLOP, while others might require many cycles to be retired. So if you decided to quote kernel performance against that peak, you are really comparing your code's performance against a stream of pure FMAD instructions, and nothing more than that.
The second point is that when researchers quote FLOP/s values from a piece of code, they are usually using a model FLOP count for the operation, not trying to count instructions. Matrix multiplication and the Linpack LU factorization benchmarks are classic examples of this approach to performance benchmarking. The lower bound of the operation count of those calculations is exactly known, so the calculated throughput is simply that lower bound divided by the time. The actual instruction count is irrelevant. Programmers often use all sorts of techniques, including redundant calculations, speculative or predictive calculations, and a host of other ideas to make code run faster. The actual FLOP count of such code is irrelevant; the reference is always the model FLOP count.
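As a small illustration of the model-FLOP-count approach, one might time a cuBLAS SGEMM and divide the 2*N^3 model count by the measured time, regardless of what instructions the library actually issues. A sketch, with error checking omitted (build with nvcc and -lcublas):

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main()
{
    const int N = 2048;
    std::vector<float> h(N * N, 1.0f);
    float *dA, *dB, *dC;
    cudaMalloc((void**)&dA, N * N * sizeof(float));
    cudaMalloc((void**)&dB, N * N * sizeof(float));
    cudaMalloc((void**)&dC, N * N * sizeof(float));
    cudaMemcpy(dA, h.data(), N * N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, h.data(), N * N * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                &alpha, dA, N, dB, N, &beta, dC, N);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    // Model FLOP count for C = A*B with N x N matrices: 2*N^3,
    // independent of how many instructions the kernel really issued.
    double gflops = 2.0 * N * N * (double)N / (ms * 1e6);
    printf("SGEMM: %.3f ms, %.1f model GFLOP/s\n", ms, gflops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}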
Finally, when looking at quantifying performance, there are usually only two points of comparison of any real interest
Does version A of the code run faster than version B on the same hardware?
Does hardware A perform better than hardware B doing the task of interest?
In the first case you really only need to measure execution time. In the second, a suitable measure usually isn't FLOP/s, it is useful operations per unit time (records per second in sorting, cells per second in a fluid mechanical simulation, etc). Sometimes, as mentioned above, the useful operations can be the model FLOP count of an operation of known theoretical complexity. But the actual floating point instruction count rarely, if ever, enters into the analysis.
If your interest is really about optimization and understanding the performance of your code, then maybe this presentation by Paulius Micikevicius from NVIDIA might be of interest.
Addressing the bullet point questions:
Is this approach correct?
Strictly speaking, no. If you are counting floating point operations, you would need to know the exact FLOP count from the code the GPU is running. The sqrt operation can consume a lot more than a single FLOP, depending on its implementation and the characteristics of the number it is operating on, for example. The compiler can also perform a lot of optimizations which might change the actual operation/instruction count. The only way to get a truly accurate count would be to disassemble the compiled code and count the individual floating point operations, perhaps even requiring assumptions about the characteristics of the values the code will compute.
What about comparisons (if(a>b) then....)? Do I have to consider them as well?
They are not floating point multiply-add operations, so no.
Can I use the CUDA profiler for easier and more accurate results? I tried the instructions counter, but I could not figure out, what the figure means.
Not really. The profiler can't differentiate between a floating point instruction and any other type of instruction, so (as of 2011) getting a FLOP count from a piece of code via the profiler is not possible. [EDIT: see Greg's excellent answer below for a discussion of the FLOP counting facilities available in versions of the profiling tools released since this answer was written]
Nsight VSE (>3.2) and the Visual Profiler (>=5.5) support Achieved FLOPS calculation. In order to collect the metric, the profilers run the kernel twice (using kernel replay). In the first replay the number of floating point instructions executed is collected (with an understanding of predication and the active mask). In the second replay the duration is collected.
nvprof and Visual Profiler have a hardcoded definition. FMA counts as 2 operations. All other operations are 1 operation. The flops_sp_* counters are thread instruction execution counts whereas flops_sp is the weighted sum so some weighting can be applied using the individual metrics. However, flops_sp_special covers a number of different instructions.
The Nsight VSE experiment configuration allows the user to define the operations per instruction type.
Nsight Visual Studio Edition
Configuring to collect Achieved FLOPS
Execute the menu command Nsight > Start Performance Analysis... to open the Activity Editor
Set Activity Type to Profile CUDA Application
In Experiment Settings set Experiments to Run to Custom
In the Experiment List add Achieved FLOPS
In the middle pane select Achieved FLOPS
In the right pane you can customize the FLOPS per instruction executed. The default weighting is for FMA and RSQ to count as 2. In some cases I have seen RSQ as high as 5.
Run the Analysis Session.
Viewing Achieved FLOPS
In the nvreport open the CUDA Launches report page.
In the CUDA Launches page select a kernel.
In the report correlation pane (bottom left) select Achieved FLOPS
nvprof
Metrics Available (on a K20)
nvprof --query-metrics | grep flop
flops_sp: Number of single-precision floating-point operations executed by non-predicated threads (add, multiply, multiply-accumulate and special)
flops_sp_add: Number of single-precision floating-point add operations executed by non-predicated threads
flops_sp_mul: Number of single-precision floating-point multiply operations executed by non-predicated threads
flops_sp_fma: Number of single-precision floating-point multiply-accumulate operations executed by non-predicated threads
flops_dp: Number of double-precision floating-point operations executed by non-predicated threads (add, multiply, multiply-accumulate and special)
flops_dp_add: Number of double-precision floating-point add operations executed by non-predicated threads
flops_dp_mul: Number of double-precision floating-point multiply operations executed by non-predicated threads
flops_dp_fma: Number of double-precision floating-point multiply-accumulate operations executed by non-predicated threads
flops_sp_special: Number of single-precision floating-point special operations executed by non-predicated threads
flop_sp_efficiency: Ratio of achieved to peak single-precision floating-point operations
flop_dp_efficiency: Ratio of achieved to peak double-precision floating-point operations
Collection and Results
nvprof --devices 0 --metrics flops_sp --metrics flops_sp_add --metrics flops_sp_mul --metrics flops_sp_fma matrixMul.exe
[Matrix Multiply Using CUDA] - Starting...
==2452== NVPROF is profiling process 2452, command: matrixMul.exe
GPU Device 0: "Tesla K20c" with compute capability 3.5
MatrixA(320,320), MatrixB(640,320)
Computing result using CUDA Kernel...
done
Performance= 6.18 GFlop/s, Time= 21.196 msec, Size= 131072000 Ops, WorkgroupSize= 1024 threads/block
Checking computed result for correctness: OK
Note: For peak performance, please refer to the matrixMulCUBLAS example.
==2452== Profiling application: matrixMul.exe
==2452== Profiling result:
==2452== Metric result:
Invocations Metric Name Metric Description Min Max Avg
Device "Tesla K20c (0)"
Kernel: void matrixMulCUDA<int=32>(float*, float*, float*, int, int)
301 flops_sp FLOPS(Single) 131072000 131072000 131072000
301 flops_sp_add FLOPS(Single Add) 0 0 0
301 flops_sp_mul FLOPS(Single Mul) 0 0 0
301 flops_sp_fma FLOPS(Single FMA) 65536000 65536000 65536000
NOTE: flops_sp = flops_sp_add + flops_sp_mul + flops_sp_special + (2 * flops_sp_fma) (approximately)
Visual Profiler
The Visual Profiler supports the metrics shown in the nvprof section above.

Does CUDA automatically load-balance for you?

I'm hoping for some general advice and clarification on best practices for load balancing in CUDA C, in particular:
If 1 thread in a warp takes longer than the other 31, will it hold up the other 31 from completing?
If so, will the spare processing capacity be assigned to another warp?
Why do we need the notion of warp and block? Seems to me a warp is just a small block of 32 threads.
So in general, for a given call to a kernel, what do I need to load balance?
Threads in each warp?
Threads in each block?
Threads across all blocks?
Finally, to give an example, what load balancing techniques would you use for the following function:
I have a vector x0 of N points: [1, 2, 3, ..., N]
I randomly select 5% of the points and log them (or some complicated function)
I write the resulting vector x1 (e.g. [1, log(2), 3, 4, 5, ..., N]) to memory
I repeat the above 2 operations on x1 to yield x2 (e.g. [1, log(log(2)), 3, 4, log(5), ..., N]), and then do a further 8 iterations to yield x3 ... x10
I return x10
Many thanks.
Threads are grouped into three levels that are scheduled differently. Warps utilize SIMD for higher compute density. Thread blocks utilize multithreading for latency tolerance. Grids provide independent, coarse-grained units of work for load balancing across SMs.
Threads in a warp
The hardware executes the 32 threads of a warp together. It can execute 32 instances of a single instruction with different data. If the threads take different control flow, so they are not all executing the same instruction, then some of those 32 execution resources will be idle while the instruction executes. This is called control divergence in CUDA references.
If a kernel exhibits a lot of control divergence, it may be worth redistributing work at this level. This balances work by keeping all execution resources busy within a warp. You can reassign work between threads as shown below.
// Shared scratch space for compacting the work within the block
__shared__ int tmp[BLOCK_SIZE];
__shared__ int tmp_counter;
if (threadIdx.x == 0) tmp_counter = 0;
__syncthreads();

// Identify which data should be processed
if (should_do_work(threadIdx.x)) {
    int tmp_index = atomicAdd(&tmp_counter, 1);
    tmp[tmp_index] = threadIdx.x;
}
__syncthreads();

// Assign that work to the first threads in the block
if (threadIdx.x < tmp_counter) {
    int thread_index = tmp[threadIdx.x];
    do_work(thread_index); // Thread threadIdx.x does work on behalf of thread tmp[threadIdx.x]
}
Warps in a block
On an SM, the hardware schedules warps onto execution units. Some instructions take a while to complete, so the scheduler interleaves the execution of multiple warps to keep the execution units busy. If some warps are not ready to execute, they are skipped with no performance penalty.
There is usually no need for load balancing at this level. Simply ensure that enough warps are available per thread block so that the scheduler can always find a warp that is ready to execute.
Blocks in a grid
The runtime system schedules blocks onto SMs. Several blocks can run concurrently on an SM.
There is usually no need for load balancing at this level. Simply ensure that enough thread blocks are available to fill all SMs several times over. It is useful to overprovision thread blocks to minimize the load imbalance at the end of a kernel, when some SMs are idle and no more thread blocks are ready to execute.
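As a rough run-time check of the "fill all SMs several times over" rule, the occupancy API can be queried before choosing the grid size. Here myKernel, the 256-thread block size and the 4x oversubscription factor are placeholders:

#include <cuda_runtime.h>

__global__ void myKernel(float *data) { /* ... real work ... */ }

void launchWithOversubscription(float *d_data)
{
    int device = 0, numSMs = 0, blocksPerSM = 0;
    cudaGetDevice(&device);
    cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, device);

    // How many 256-thread blocks of myKernel can be resident on one SM?
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, myKernel, 256, 0);

    // Oversubscribe a few times so the tail of the grid doesn't leave SMs idle.
    int gridSize = numSMs * blocksPerSM * 4;
    myKernel<<<gridSize, 256>>>(d_data);
}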
As others have already said, the threads within a warp use a scheme called Single Instruction, Multiple Data (SIMD). SIMD means that there is a single instruction decoding unit in the hardware controlling multiple arithmetic and logic units (ALUs). A CUDA 'core' is basically just a floating-point ALU, not a full core in the same sense as a CPU core. While the exact CUDA core to instruction decoder ratio varies between different CUDA Compute Capability versions, all of them use this scheme.
Since they all use the same instruction decoder, each thread within a warp of threads will execute the exact same instruction on every clock cycle. The cores assigned to the threads within that warp that do not follow the currently-executing code path will simply do nothing on that clock cycle. There is no way to avoid this, as it is an intentional physical hardware limitation. Thus, if you have 32 threads in a warp and each of those 32 threads follows a different code path, you will have no speedup from parallelism at all within that warp. It will execute each of those 32 code paths sequentially. This is why it is ideal for all threads within the warp to follow the same code path as much as possible, since parallelism within a warp is only possible when multiple threads are following the same code path.
The reason that the hardware is designed this way is that it saves chip space. Since each core doesn't have its own instruction decoder, the cores themselves take up less chip space (and use less power). Having smaller cores that use less power per core means that more cores can be packed onto the chip. Having small cores like this is what allows GPU's to have hundreds or thousands of cores per chip while CPU's only have 4 or 8, even while maintaining similar chip sizes and power consumption (and heat dissipation) levels.
The trade-off with SIMD is that you can pack a lot more ALU's onto the chip and get a lot more parallelism, but you only get the speedup when those ALU's are all executing the same code path. The reason this trade-off is made to such a high degree for GPU's is that much of the computation involved in 3D graphics processing is simply floating-point matrix multiplication. SIMD lends itself well to matrix multiplication because the process to compute each output value of the resultant matrix is identical, just on different data. Furthermore, each output value can be computed completely independently of every other output value, so the threads don't need to communicate with each other at all. Incidentally, similar patterns (and often even matrix multiplication itself) also happen to appear commonly in scientific and engineering applications.
This is why General Purpose processing on GPU's (GPGPU) was born. CUDA (and GPGPU in general) was basically an afterthought on how existing hardware designs, which were already being mass produced for the gaming industry, could also be used to speed up other types of parallel floating-point processing applications.
If 1 thread in a warp takes longer than the other 31, will it hold up the other 31 from completing?
Yes. As soon as you have divergence in a Warp, the scheduler needs to take all divergent branches and process them one by one. The compute capacity of the threads not in the currently executed branch will then be lost. You can check the CUDA Programming Guide, it explains quite well what exactly happens.
If so, will the spare processing capacity be assigned to another warp?
No, unfortunately that is completely lost.
Why do we need the notion of warp and block? Seems to me a warp is just a small block of 32 threads.
A warp has to be SIMD (single instruction, multiple data) to achieve optimal performance. The warps inside a block can be completely divergent from each other; however, they share some other resources (shared memory, registers, etc.).
So in general, for a given call to a kernel what do I need load balance?
I don't think load balance is the right word here. Just make sure that you always have enough threads executing and avoid divergence inside warps. Again, the CUDA Programming Guide is a good read for things like that.
Now for the example:
You could execute m threads, with m = 0 .. 0.05*N, each picking a random point and putting the result of the "complicated function" into x1[m].
However, randomly reading from global memory over a large area isn't the most efficient thing you can do with a GPU, so you should also think about whether that really needs to be completely random.
Others have provided good answers for the theoretical questions.
For your example, you might consider restructuring the problem as follows:
have a vector x of N points: [1, 2, 3, ..., N]
compute some complicated function on every element of x, yielding y.
randomly sample subsets of y to produce y0 through y10.
Step 2 operates on every input element exactly once, without consideration for whether that value is needed. If step 3's sampling is done without replacement, this means that you'll be computing 2x the number of elements you'll actually need, but you'll be computing everything with no control divergence and all memory access will be coherent. These are often much more important drivers of speed on GPUs than the computation itself, but this depends on what the complicated function is really doing.
Step 3 will have a non-coherent memory access pattern, so you'll have to decide whether it's better to do it on the GPU or whether it's faster to transfer it back to the CPU and do the sampling there.
Depending on what the next computation is, you might restructure step 3 to instead randomly draw an integer in [0,N) for each element. If the value is in [N/2,N) then ignore it in the next computation. If it's in [0,N/2), then associate its value with an accumulator for that virtual y* array (or whatever is appropriate for your computation).
Your example is a really good way of showing off reduction.
I have a vector x0 of N points: [1, 2, 3, ..., N]
I randomly pick 50% of the points and log them (or some complicated function) (1)
I write the resulting vector x1 to memory (2)
I repeat the above 2 operations on x1 to yield x2, and then do a further 8 iterations to yield x3 ... x10 (3)
I return x10 (4)
Say |x0| = 1024, and you pick 50% of the points.
The first stage could be the only stage where you have to read from the global memory, I will show you why.
512 threads read 512 values from global memory (1) and store them into shared memory (2); then, for step (3), 256 threads read random values from shared memory and store the results back into shared memory. You do this until you end up with one thread, which writes the result back to global memory (4).
You could extend this further by at the initial step having 256 threads reading two values, or 128 threads reading 4 values, etc...
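A minimal sketch of that scheme for one 1024-point chunk, assuming the "complicated function" is logf and using curand for the per-thread random picks. The kernel name, the 512-thread block size and the random-sampling details are illustrative:

#include <curand_kernel.h>

//launch: shrinkChunk<<<numChunks, 512>>>(d_x0, d_out, seed);
//one block shrinks a 1024-point chunk down to a single value entirely in shared memory
__global__ void shrinkChunk(const float *x0, float *out, unsigned long long seed)
{
    __shared__ float s[512];
    const int tid  = threadIdx.x;              //blockDim.x == 512
    const int base = blockIdx.x * 1024;

    curandState rng;
    curand_init(seed, blockIdx.x * blockDim.x + tid, 0, &rng);

    //stage 1: the only global reads; 512 threads each pick one random point and log it
    //(with real data you would guard the domain of the "complicated function")
    s[tid] = logf(x0[base + curand(&rng) % 1024]);
    __syncthreads();

    //following stages: halve the active thread count, always working out of shared memory
    for (int active = 256; active >= 1; active >>= 1) {
        float v = 0.0f;
        if (tid < active)
            v = logf(s[curand(&rng) % (2 * active)]);   //random pick from the previous stage
        __syncthreads();
        if (tid < active)
            s[tid] = v;
        __syncthreads();
    }

    //final stage: the single remaining value goes back to global memory
    if (tid == 0)
        out[blockIdx.x] = s[0];
}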
