CUDA: sum of data on a global memory variable - cuda

I've launched a kernel with 2100 blocks and 4 threads per block.
At some point in this kernel, all the threads have to execute a function and put its result into an array (in global memory) at position threadIdx.x.
I know for certain that, at this phase of the project, the function always returns 1.012086.
Now, I've written this code to do that sum:
currentErrors[threadIdx.x] = 0;
for (i = 0; i < gridDim.x; i++)
{
    if (i == blockIdx.x)
    {
        currentErrors[threadIdx.x] += globalError(mynet, myoutput);
    }
}
But when the kernel ends, every position of the array holds 1.012086 (instead of 1.012086*2100).
Where am I wrong?
Thanks for your help!

To compute a final sum out of partial results of your blocks, I would suggest doing it the following way:
Let every block write a partial result into a separate cell of a gridDim.x-sized array.
Copy the array to host.
Perform final sum on the host.
I assume each block has a lot to compute on its own, which would warrant the usage of CUDA in the first place.
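A minimal, self-contained sketch of that pattern (here blockError() just stands in for whatever globalError(mynet, myoutput) really computes per block, and the numbers mirror your launch configuration):

#include <cstdio>
#include <cuda_runtime.h>

// Placeholder for the real per-block computation.
__device__ float blockError() { return 1.012086f; }

// Each block writes exactly one partial result into its own cell.
__global__ void partialErrors(float *blockResults)
{
    if (threadIdx.x == 0)
        blockResults[blockIdx.x] = blockError();
}

int main()
{
    const int numBlocks = 2100;
    float *d_partial, h_partial[numBlocks];
    cudaMalloc(&d_partial, numBlocks * sizeof(float));

    partialErrors<<<numBlocks, 4>>>(d_partial);
    cudaMemcpy(h_partial, d_partial, numBlocks * sizeof(float), cudaMemcpyDeviceToHost);

    double total = 0.0;                       // final sum on the host
    for (int i = 0; i < numBlocks; ++i)
        total += h_partial[i];
    printf("total error: %f\n", total);       // ~ 1.012086 * 2100

    cudaFree(d_partial);
    return 0;
}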
In your current state, I think there is something wrong in your kernel. It seems to me that every block is summing all the data, returning the final result as if it were a partial result.
The loop you presented does not really make sense. For each block there is only one i which will do something. The code will be equivalent to simply writing:
currentErrors[threadIdx.x]=0;
currentErrors[threadIdx.x]+=globalError(mynet,myoutput);
save for some unpredictable scheduling differences.
Remember that blocks are not executed in sync. Each block can run before, during or after any other block.
Also:
You may be interested in parallel prefix sum algorithm.
You may want to check an efficient CUDA implementation of the prefix sum.

Related

What is the fastest way to update a single float value to the GPU to access it in a CUDA kernel?

I have an OpenGL particle simulation where the position of each particle is calculated in a CUDA kernel. Most of the memory resides in GPU memory, but there is a single float value that I have to update from the CPU each frame.
At the moment I use cudaMemcpyAsync() to copy the float value to the GPU, but (at least from what I can tell) this slows down performance quite a bit. I tried to use nvprof to see which calls take the longest, with these results:
Calls Avg Min Max Name
477 2.9740us 2.8160us 4.5440us simulation(float3*, float*, float3*, float*)
477 89.033us 18.600us 283.00us cudaLaunchKernel
477 47.819us 10.200us 120.70us cudaMemcpyAsync
I think I can't really do much about the kernel launch itself, but of the calls that happen every frame, cudaMemcpyAsync() seems to be taking the longest.
I have also tried to use pinned memory and cudaHostGetDevicePointer() as described here; however, for some reason this increases the kernel launch times even more, more than making up for the time saved by skipping the memcpy.
I guess there has to be a better/faster way to update my single float variable on the GPU?
The easiest way is to add an extra parameter to the simulation kernel as a plain float value rather than a pointer to float, so that the data travels directly in the kernel launch parameter structure that CUDA sends to the GPU when you launch the kernel. Then you avoid that data copy command altogether. (I'm assuming CUDA packs the whole kernel parameter descriptor into a single copy command, because kernel parameter space is limited to a few kB or less.)
simulation(fooPointer,
barPointer,
fooBarPointer,
floatVariable
);
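For illustration, the kernel would then take the value directly in its signature (the names below are placeholders that only mirror the snippet above, not your actual code):

// The float arrives through the launch parameters; no separate cudaMemcpy is needed.
__global__ void simulation(float3 *positions, float *masses,      // masses kept only to mirror the original signature
                           float3 *velocities, float dt)          // dt passed by value
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    velocities[i].y -= 9.81f * dt;                                 // example use of the per-frame value
    positions[i].x  += velocities[i].x * dt;
    positions[i].y  += velocities[i].y * dt;
    positions[i].z  += velocities[i].z * dt;
}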
Or, try double buffering between data update and rendering, or between data update and compute, so that the simulation image follows the simulation calculation 1-2 frames behind (per-frame latency gets worse) but "frames per second" increases.
If it's not an interactive simulation, hiding compute/render/data latencies by double or triple buffering should work.
If you are after minimizing per-frame timing (quicker response to user input into the simulation?), then you should embed the float variable at the end of an array that you already send to / use in the simulation, or whatever structure you are using. If you already have a 1 MB+ float buffer to send to the GPU, then appending 4 B (one float) to the end of it should not make much difference, and you can access it from there. One copy operation should be faster than two copy operations of the same total size.
If you are literally sending just 4 B to the GPU each frame (with a simple function to generate that data), then (as 3Dave said in the comments) you can try adding an extra kernel that updates the value on the GPU, paying only the overhead of a kernel launch instead of both the copy command overhead and the data copy overhead. On the positive side, that extra kernel's overhead might be hidden if there is a "graph" of kernels running each frame automatically, without enqueueing all of them again and again.
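A sketch of such an update kernel might be as small as this (the name and launch configuration are illustrative):

// Writes the new per-frame value into device memory; the value itself travels
// in the kernel launch parameters, so no cudaMemcpyAsync is needed.
__global__ void setFrameValue(float *dst, float value)
{
    *dst = value;
}

// Per frame, instead of the copy:
// setFrameValue<<<1, 1, 0, stream>>>(d_frameValue, hostValue);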
Here,
https://devblogs.nvidia.com/cuda-graphs/
The part
We are going to create a simple code which mimics this pattern. We will then use this to demonstrate the overheads involved with the standard launch mechanism and show how to introduce a CUDA Graph comprising the multiple kernels, which can be launched from the application in a single operation.
cudaGraphLaunch(instance, stream);
They say the per-kernel launch overhead with this "graph" feature is only 3-4 microseconds when there are many (20) kernels in the algorithm.
Since graphs support other commands too, you can try running the copy and compute parts in parallel CUDA streams within a graph, and switch their inputs with double buffering, so that everything stays within CUDA's context before the output is sent to rendering.
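Roughly, the stream-capture pattern from that post looks like this sketch (kernelA/kernelB, blocks and threads are placeholders; note that parameters baked into the captured launches stay fixed unless you update them through the graph-exec node update APIs):

cudaStream_t stream;
cudaGraph_t graph;
cudaGraphExec_t instance;
cudaStreamCreate(&stream);

// Record the per-frame sequence of work into a graph once...
cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
kernelA<<<blocks, threads, 0, stream>>>(d_bufA);
kernelB<<<blocks, threads, 0, stream>>>(d_bufA, d_bufB);
cudaStreamEndCapture(stream, &graph);
cudaGraphInstantiate(&instance, graph, nullptr, nullptr, 0);

// ...then replay the whole sequence with a single launch per frame.
cudaGraphLaunch(instance, stream);
cudaStreamSynchronize(stream);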
(Maybe) You don't even have to change the data mechanism at all. Just try sending the float's binary representation as the pointer value, and in the kernel read only the pointer value itself (not the data it would point to) and convert it back to a float. I don't know whether CUDA returns an error for this, as long as you never dereference the (bogus) pointer address that the float data represents.
simulation(fooPointer,
barPointer,
fooBarPointer,
toPtr(floatData) // <----- float to 64/32 bit pointer value
);
and in kernel
float val = fromPtrToFloat(parameter4); // converts pointer itself, not the data
But this may not be a preferred practice if you can simply use "value" type parameters.
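If you did go that route, the conversion helpers might look like this sketch (toPtr and fromPtrToFloat are the hypothetical names used above):

#include <cstdint>
#include <cstring>

// Host side: pack the float's bit pattern into a pointer-sized value (never dereferenced).
void* toPtr(float v)
{
    uint32_t bits;
    std::memcpy(&bits, &v, sizeof(bits));   // reinterpret the float's bits
    return reinterpret_cast<void*>(static_cast<uintptr_t>(bits));
}

// Device side: recover the float from the pointer value itself, without touching memory.
__device__ float fromPtrToFloat(const void *p)
{
    uint32_t bits = static_cast<uint32_t>(reinterpret_cast<uintptr_t>(p));
    return __uint_as_float(bits);           // CUDA intrinsic: bit pattern -> float
}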

Launch CUDA kernels sequence and data transfer between them

I have two kernels in the same file; the code should run the first kernel to generate an array, then I need to send the generated array to the second kernel.
However, when I do this the second kernel sees all the array elements as 0.
Here is a simplification (not runnable code), just pseudocode:
cudaMalloc(device_input_array)
cudaMalloc(device_result_array)
cudaMemcpy(device_input_array, inputarray, size, hosttodevice)
kernel1<<<1,n>>>(device_input_array, device_result_array)
cudaMemcpy(host_result_array, device_result_array ... )
cudaMalloc(dev_secndarray)
kernel2<<<1,n>>>(dev_secndarray, device_result_array)
For testing, in kernel2 I create a loop over device_result_array; however, it prints all its elements as zero.
What is the proper way to send data between kernels? Should I reserve space for the result array again? What should I do?
Memory allocated through cudaMalloc exists till the end of application, or until you explicitly free the memory. Thus, the device_result_array can be passed directly to the second kernel as an input. I would recommend the following pattern:
cudaMalloc(device_input_array)
cudaMalloc(device_intermediate_result_array)
cudaMalloc(device_final_result_array)
cudaMemcpy(device_input_array,host_input_array,size,hosttodevice)
kernel1<<<G,B>>>(device_input_array,device_intermediate_result_array)
kernel2<<<G,B>>>(device_intermediate_result_array,device_final_result_array)
cudaMemcpy(host_result_array,device_final_result_array,size,devicetohost)
If for some reason you actually need to make a copy of the intermediate result in the device, you have an option to call cudaMemcpy(...,cudaMemcpyDeviceToDevice).
In either case, don't copy the intermediate result to host (unless you really need it for other reasons). Host<->Device copies are expensive.
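For reference, a minimal concrete version of that pattern might look like this (N, G, B and the kernel bodies are placeholders):

float *d_in, *d_mid, *d_out;
cudaMalloc(&d_in,  N * sizeof(float));
cudaMalloc(&d_mid, N * sizeof(float));
cudaMalloc(&d_out, N * sizeof(float));

cudaMemcpy(d_in, h_in, N * sizeof(float), cudaMemcpyHostToDevice);

kernel1<<<G, B>>>(d_in,  d_mid);   // writes the intermediate result to device memory
kernel2<<<G, B>>>(d_mid, d_out);   // reads it directly; no host round trip needed

cudaMemcpy(h_out, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);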

Failing to read an array from the global mem CUDA

I am implementing a complicated algorithm on CUDA, but there is a really odd problem. It can be summarised as follows: the kernel repeats a series of calculations many times, and the calculation of the present iteration builds on the result of the previous one. I am using an array in global memory to pass information between blocks in each iteration. For example, with 2 blocks, in each iteration block 0 saves its result to global memory and block 1 then reads it from global memory. However, the problem is that block 1 can't read the array from global memory correctly; it sometimes returns the result of the 1st iteration, not the previous one.
a_e and e_a are two arrays on the global mem, the size is [2*8].
d_a_e and d_e_a are on the shared mem, the size is [blockDim.x+1][8].
if (threadIdx.x < 8)
{
    // block 0 writes, block 1 reads; this can't work properly
    a_e[blockIdx.x*8 + threadIdx.x] = d_a_e[blockDim.x][threadIdx.x];
    if (blockIdx.x > 0)
        d_a_e[0][threadIdx.x] = a_e[(blockIdx.x-1)*8 + threadIdx.x];

    // block 1 writes, block 0 reads; this can work properly
    e_a[blockIdx.x*8 + threadIdx.x] = d_e_a[0][threadIdx.x];
    if (blockIdx.x < gridDim.x-1)
        d_e_a[blockDim.x][threadIdx.x] = e_a[(blockIdx.x+1)*8 + threadIdx.x];
}
This setup won't work; you're effectively trying to serialize your blocks, which as talonmies alluded to in his comment, doesn't work. From the CUDA programming guide:
"Thread blocks are required to execute independently: It must be possible to execute them in any order, in parallel or in series. This independence requirement allows thread blocks to be scheduled in any order across any number of cores..."
http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#thread-hierarchy
Your best recourse is probably to launch separate kernels (such that you perform the block 0 computation in the 1st kernel, block 1 in the 2nd kernel, etc.) to enforce that the results from the 1st kernel are finished before reading them in the next kernel. There has been some work done on inter-block synchronization, but you wouldn't derive much benefit from it, as you need to serialize your blocks.
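A minimal sketch of that approach, assuming each former block's stage becomes its own launch (stageKernel, numStages and blockSize are placeholders):

// Serialize the per-block stages as separate launches on one stream, so each stage
// sees the previous stage's global-memory writes.
for (int stage = 0; stage < numStages; ++stage)
{
    stageKernel<<<1, blockSize>>>(a_e, e_a, stage);   // one former "block" per launch
}
// Launches on the same stream execute in order; synchronize only when the host
// needs the final results.
cudaDeviceSynchronize();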
EDIT: I should also point out that the block scheduling isn't documented, and is liable to change at any point, so any inter-block synchronization will be non-portable and liable to break on a driver or CUDA toolkit update.

How can I sync my own kernel function in CUDA?

extern "C" void callKernel()
{
for(int i=0;i<10;i++)
{
calc<<< grid, thread >>>(d_arr);
copyElement<<< grid, thread >>>(d_arr,d_arr_part,3);
findMax<<< grid, thread >>>(d_arr_part, d_max);
positionChange<<< grid, thread >>>(d_arr, d_max);
}
}
Above code is about computing kernels.
The functionality of kernel function is like this.
"calc" : calculate in d_arr and update the d_arr's elements value.
"copyElement" : for example, d_arr is 4step array, In the array, I just want 3rd element, so I allocate other variable d_arr_part and copy to 3rd element of d_arr to d_arr_part.
"findMax" : find max value in d_arr_part and the max value is stored to d_max.
"positionChange" : d_arr element is update according to d_max value.
Problem
When I execute my program, the results are not consistent; every time I run it, the results change. I searched for this problem on Google and found out that kernel functions can be executed concurrently. My intention is for all the kernel functions to be executed in sequence. I read section 3.2.5 of NVIDIA's CUDA C programming guide, but I can't understand what to do to solve the problem. If anybody has an idea, please show me the way. Thanks in advance.
You can use cudaDeviceSynchronize in between kernel executions to guarantee a sequential order. However, your code does not require this, so I think there might be a bug in your kernels.
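For completeness, this is what that would look like (though, as noted, kernels launched into the same stream already execute in order on the device, so the extra synchronization mostly helps when debugging, e.g. together with cudaGetLastError()):

for (int i = 0; i < 10; i++)
{
    calc<<<grid, thread>>>(d_arr);
    cudaDeviceSynchronize();                              // wait until calc has finished
    copyElement<<<grid, thread>>>(d_arr, d_arr_part, 3);
    cudaDeviceSynchronize();
    findMax<<<grid, thread>>>(d_arr_part, d_max);
    cudaDeviceSynchronize();
    positionChange<<<grid, thread>>>(d_arr, d_max);
    cudaDeviceSynchronize();                              // also surfaces launch errors here
}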

CUDA finding the max value in given array

I tried to develop a small CUDA program to find the max value in a given array,
int input_data[0...50] = 1,2,3,4,5....,50
max_value is initialized with the first value, input_data[0],
and the final answer is stored in result[0].
The kernel is giving 0 as the max value. I don't know what the problem is.
I executed it with 1 block of 50 threads.
__device__ int lock = 0;

__global__ void max(float *input_data, float *result)
{
    float max_value = input_data[0];
    int tid = threadIdx.x;

    if (input_data[tid] > max_value)
    {
        do {} while (atomicCAS(&lock, 0, 1));
        max_value = input_data[tid];
        __threadfence();
        lock = 0;
    }
    __syncthreads();

    result[0] = max_value; // final result of max value
}
Even though there are built-in functions, I am just practicing on small problems.
You are trying to set up a "critical section", but this approach on CUDA can lead to a hang of your whole program - try to avoid it whenever possible.
Why does your code hang?
Your kernel (__global__ function) is executed by groups of 32 threads, called warps. All threads inside a single warp execute synchronously. So the warp will spin in your do{} while(atomicCAS(&lock,0,1)) until all threads of the warp succeed in obtaining the lock. But obviously you want to prevent several threads from executing the critical section at the same time. This leads to a hang.
Alternative solution
What you need is a "parallel reduction algorithm". You can start reading here:
Parallel prefix sum # wikipedia
Parallel Reduction # CUDA website
NVIDIA's Guide to Reduction
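For orientation, a minimal block-level max reduction in shared memory could look like this sketch (it assumes blockDim.x is a power of two and that all data fits in one block, as in your 50-element example):

#include <cfloat>

__global__ void maxReduce(const float *input, float *result, int n)
{
    extern __shared__ float sdata[];
    int tid = threadIdx.x;

    // Load one element per thread; pad the rest with the smallest float.
    sdata[tid] = (tid < n) ? input[tid] : -FLT_MAX;
    __syncthreads();

    // Tree reduction: halve the number of active threads each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1)
    {
        if (tid < s)
            sdata[tid] = fmaxf(sdata[tid], sdata[tid + s]);
        __syncthreads();
    }

    if (tid == 0)
        result[0] = sdata[0];   // the block's maximum
}

// Launch for 50 elements, e.g.: maxReduce<<<1, 64, 64 * sizeof(float)>>>(d_input, d_result, 50);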
Your code has a potential race. I'm not sure whether you defined the max_value variable in shared memory or not, but both cases are wrong.
1) If max_value is just a local variable, then each thread holds a local copy of it, which is not the actual maximum value (it is just the maximum of input_data[0] and input_data[tid]). In the last line of code, all threads write their own max_value to result[0], which results in undefined behavior.
2) If max_value is a shared variable, 49 threads will enter the if block, and they will try to update max_value one at a time using the lock. But the order of execution among the 49 threads is not defined, and therefore some threads may overwrite the actual maximum value with smaller values. You would need to compare against the current maximum again inside the critical section.
Max is a 'reduction' - check out the Reduction sample in the SDK, and do max instead of summation.
The white paper's a little old but still reasonably useful:
http://developer.download.nvidia.com/compute/cuda/1_1/Website/projects/reduction/doc/reduction.pdf
The final optimization step is to use 'warp synchronous' coding to avoid unnecessary __syncthreads() calls.
It requires at least 2 kernel invocations - one to write a bunch of intermediate max() values to global memory, then another to take the max() of that array.
If you want to do it in a single kernel invocation, check out the threadfenceReduction SDK sample. That uses __threadfence() and atomicAdd() to track progress, then has 1 block do a final reduction when all blocks have finished writing their intermediate results.
There are different access scopes for variables. When you declare a variable with __device__, it is placed in GPU global memory and is accessible by all threads in the grid; __shared__ places the variable in the block's shared memory, and it is accessible only by the threads of that block; finally, if you don't use any keyword (as with float max_value), the variable is placed in thread registers and can be accessed only by that thread. In your code each thread has its own local max_value and cannot see the max_value variables of other threads.
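As a small illustration of those three scopes (a sketch, not tied to the code above):

__device__ float g_val;          // global memory: visible to every thread in the grid

__global__ void scopesExample()
{
    __shared__ float s_val;      // shared memory: visible only to threads of this block
    float r_val = g_val;         // no qualifier: a register/local copy private to this thread

    if (threadIdx.x == 0)
        s_val = r_val;           // e.g. one thread initializes the shared value
    __syncthreads();
}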