cuBLAS argmin -- segfault if outputting to device memory? - cuda

In cuBLAS, cublasIsamin() gives the argmin for a single-precision array.
Here's the full function declaration: cublasStatus_t cublasIsamin(cublasHandle_t handle, int n,
const float *x, int incx, int *result)
According to the cuBLAS programmer guide, the result parameter of cublasIsamin() can point to either host or device memory.
If I use host (CPU) memory for result, then cublasIsamin works properly. Here's an example:
void argmin_experiment_hostOutput(){
    float h_A[4] = {1, 2, 3, 4}; int N = 4;
    float* d_A = 0;
    CHECK_CUDART(cudaMalloc((void**)&d_A, N * sizeof(d_A[0])));
    CHECK_CUBLAS(cublasSetVector(N, sizeof(h_A[0]), h_A, 1, d_A, 1));
    cublasHandle_t handle; CHECK_CUBLAS(cublasCreate(&handle));
    int result; //host memory
    CHECK_CUBLAS(cublasIsamin(handle, N, d_A, 1, &result));
    printf("argmin = %d, min = %f \n", result, h_A[result - 1]); //cublasIsamin returns a 1-based index
    CHECK_CUBLAS(cublasDestroy(handle));
}
However, if I use device (GPU) memory for result, then cublasIsamin segfaults. Here's an example that segfaults:
void argmin_experiment_deviceOutput(){
    float h_A[4] = {1, 2, 3, 4}; int N = 4;
    float* d_A = 0;
    CHECK_CUDART(cudaMalloc((void**)&d_A, N * sizeof(d_A[0])));
    CHECK_CUBLAS(cublasSetVector(N, sizeof(h_A[0]), h_A, 1, d_A, 1));
    cublasHandle_t handle; CHECK_CUBLAS(cublasCreate(&handle));
    int* d_result = 0;
    CHECK_CUDART(cudaMalloc((void**)&d_result, 1 * sizeof(d_result[0]))); //just enough device memory for 1 result
    CHECK_CUDART(cudaMemset(d_result, 0, 1 * sizeof(d_result[0])));
    CHECK_CUBLAS(cublasIsamin(handle, N, d_A, 1, d_result)); //SEGFAULT!
    CHECK_CUBLAS(cublasDestroy(handle));
}
The Nvidia guide says that `cublasIsamin()` can output to device memory. What am I doing wrong?
Motivation: I want to compute the argmin() of several vectors concurrently in multiple streams. Outputting to host memory requires CPU-GPU synchronization and seems to kill the multi-kernel concurrency. So, I want to output the argmin to device memory instead.

The cuBLAS v2 API does support writing scalar results to device memory, but it does not do so by default. As described in Section 2.4, "Scalar parameters", of the documentation, you need to call cublasSetPointerMode() to make the API aware that scalar argument pointers will reside in device memory. Note that this also makes these Level 1 BLAS functions asynchronous, so you must ensure that the GPU has completed the kernel(s) before trying to access the result pointer.
See this answer for a complete working example.
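As a minimal sketch (not the linked answer; it reuses the CHECK_CUDART/CHECK_CUBLAS macros from the question, uses a hypothetical function name, and adds an explicit synchronization only so that this toy example can safely read the result back), the device-output version could be fixed like this:
void argmin_experiment_deviceOutput_fixed(){
    float h_A[4] = {1, 2, 3, 4}; int N = 4;
    float* d_A = 0;
    CHECK_CUDART(cudaMalloc((void**)&d_A, N * sizeof(d_A[0])));
    CHECK_CUBLAS(cublasSetVector(N, sizeof(h_A[0]), h_A, 1, d_A, 1));
    cublasHandle_t handle; CHECK_CUBLAS(cublasCreate(&handle));
    //tell cuBLAS that scalar result pointers reside in device memory
    CHECK_CUBLAS(cublasSetPointerMode(handle, CUBLAS_POINTER_MODE_DEVICE));
    int* d_result = 0;
    CHECK_CUDART(cudaMalloc((void**)&d_result, sizeof(d_result[0])));
    CHECK_CUBLAS(cublasIsamin(handle, N, d_A, 1, d_result)); //now asynchronous; writes to device memory
    CHECK_CUDART(cudaDeviceSynchronize()); //make sure the result has been written before reading it
    int result;
    CHECK_CUDART(cudaMemcpy(&result, d_result, sizeof(result), cudaMemcpyDeviceToHost));
    printf("argmin (1-based) = %d \n", result);
    CHECK_CUDART(cudaFree(d_result)); CHECK_CUDART(cudaFree(d_A));
    CHECK_CUBLAS(cublasDestroy(handle));
}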

Related

Cuda: Copy 1D Array From CPU to GPU

I am a newbie to CUDA, trying to copy an array from host to device via cudaMemcpy(...).
However, the data passed to the GPU seems to be totally different (cost is totally wrong; G is wrong after index 5).
My data is a malloc'd array (written in C) of size 25, for example.
I tried to copy it the following way (MAX = 5):
Declaration:
int *cost, *G;
int *dev_cost, *dev_G;
Allocation:
cost = (int*)malloc(MAX* MAX * sizeof(int));
G = (int*)malloc(MAX* MAX* sizeof(int));
cudaMalloc((void**)&dev_cost, MAX*MAX);
cudaMalloc((void**)&dev_G, MAX*MAX);
Data transfer:
cudaMemcpy(dev_cost, cost, MAX*MAX, cudaMemcpyHostToDevice);
cudaMemcpy(dev_G, G, MAX*MAX, cudaMemcpyHostToDevice);
Kernel Trigger:
assignCost<<<1,MAX*MAX>>>(dev_G,dev_cost);
Kernel Function:
__global__ void assignCost(int *G, int *cost)
{
int tid = threadIdx.x + blockDim.x*blockIdx.x;
printf("cost[%d]: %d G[%d] = %d\n", tid, cost[tid], tid, G[tid]);
if(tid<MAX*MAX)
{
if (G[tid] == 0)
cost[tid] = INT_MAX;
else
cost[tid] = G[tid];
}
}
Is there anything wrong with my approach? If so, how should I modify it?
cudaMemcpy(dev_cost, cost, MAX*MAX*sizeof(int), cudaMemcpyHostToDevice);
cudaMemcpy(dev_G, G, MAX*MAX*sizeof(int), cudaMemcpyHostToDevice);
The third argument of cudaMemcpy is the count in bytes. Since you have MAX*MAX integers and each integer is sizeof(int) bytes, replace MAX*MAX with MAX*MAX*sizeof(int).
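Although it is not mentioned in the answer above, the cudaMalloc calls in the question appear to have the same problem: they allocate only MAX*MAX bytes, so the corrected copies would overrun the device buffers. Applying the same factor there would look like:
cudaMalloc((void**)&dev_cost, MAX*MAX*sizeof(int));
cudaMalloc((void**)&dev_G, MAX*MAX*sizeof(int));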

Strange Cudamemcpy execution time

I'm currently working on CUDA code which computes a simple pixel-by-pixel difference of two images (size: 2560x1706 px) in order to compare the execution times of the CPU and the GPU.
I run a "for" loop of 1000 iterations of my kernel to get a more significant execution time, and I perform the cudaMemcpy (from device to host) straight after the loop to retrieve the computed data.
Nevertheless, this cudaMemcpy took 2800 ms, which is much higher than expected. I was wondering why I obtain such a result.
Here is my kernel code:
__global__ void diff (unsigned char *data1 ,unsigned char *data2, int *data_res)
{
int v = threadIdx.x + blockIdx.x*blockDim.x;
if (v < N)
{
data_res[v] = (int) data2[v] - (int) data1[v];
}
}
Here are the kernel calls:
cudaProfilerStart();
// Cuda allocation
cudaMalloc((void**)&dev_data1, N*sizeof(unsigned char));
cudaMalloc((void**)&dev_data2, N*sizeof(unsigned char));
cudaMalloc((void**)&dev_data_res, N*sizeof(int));
// Cuda memory copy
cudaMemcpy(dev_data1, img1->data, N*sizeof(unsigned char), cudaMemcpyHostToDevice);
cudaMemcpy(dev_data2, img2->data, N*sizeof(unsigned char), cudaMemcpyHostToDevice);
cudaMemcpy(dev_data_res, imgresult->data, N*sizeof(int), cudaMemcpyHostToDevice);
//Simulate nb_loops images
for(int m = 0; m < nb_loops ; m++)
{
diff<<<blck_nb, thrd_nb>>>(dev_data1, dev_data2, dev_data_res);
//printf("%4d", m);
}
printf("WAITING FOR MEMCPY...\n");
clock_t begin = clock(), diff;
cudaMemcpy(imgresult_data, dev_data_res, N*sizeof(int), cudaMemcpyDeviceToHost);
diff = clock() - begin;
float msec = diff*1000/CLOCKS_PER_SEC;
printf("\t \nTime of the MEMCPY : %2.3f ms\n", msec);
printf("MEMCPY DEVICE TO HOST OK!\n");
cudaProfilerStop();
And here is the screenshot of the execution time results:
CUDA kernel launches are asynchronous, and cudaMemcpy is a blocking call. So what you are calling the memcpy time is really the kernel execution time plus the memcpy time. Change your code like this:
...
for(int m = 0; m < nb_loops ; m++)
{
diff<<<blck_nb, thrd_nb>>>(dev_data1, dev_data2, dev_data_res);
//printf("%4d", m);
}
cudaDeviceSynchronize();
printf("WAITING FOR MEMCPY...\n");
....
and this timing should be correct.
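As a side note (not part of the original answer), if you want to time the kernel loop and the device-to-host copy separately, a sketch using CUDA events and the variables from the question could look like this:
cudaEvent_t start, afterKernels, afterCopy;
cudaEventCreate(&start); cudaEventCreate(&afterKernels); cudaEventCreate(&afterCopy);
cudaEventRecord(start);
for(int m = 0; m < nb_loops ; m++)
    diff<<<blck_nb, thrd_nb>>>(dev_data1, dev_data2, dev_data_res);
cudaEventRecord(afterKernels); //marks the end of the kernel work in the default stream
cudaMemcpy(imgresult_data, dev_data_res, N*sizeof(int), cudaMemcpyDeviceToHost);
cudaEventRecord(afterCopy);
cudaEventSynchronize(afterCopy); //wait until everything up to this event has finished
float kernel_ms = 0.0f, copy_ms = 0.0f;
cudaEventElapsedTime(&kernel_ms, start, afterKernels); //time spent in the 1000 kernel launches
cudaEventElapsedTime(&copy_ms, afterKernels, afterCopy); //time spent in the copy alone
printf("kernels: %.3f ms, memcpy: %.3f ms\n", kernel_ms, copy_ms);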

What is the general way to launch appropriate amount of reduction kernels?

As I have read in NVIDIA's reduction slides at this link http://www.cuvilib.com/Reduction.pdf, for arrays bigger than blockSize I should launch multiple reduction kernels to achieve global synchronization. What is the general way to determine how many times I should launch the reduction kernel? I tried the approach below, but I need to cudaMalloc two additional buffers, which takes a lot of processing time.
My job is to reduce the array d_logLuminance to a single minimum value min_logLum:
void your_histogram_and_prefixsum(const float* const d_logLuminance,
float &min_logLum,
const size_t numRows,
const size_t numCols)
{
const dim3 blockSize(512);
unsigned int pixel = numRows*numCols;
const dim3 gridSize(pixel/blockSize.x+1);
//Reduction kernels to find max and min value
float *d_tempMin, *d_min;
checkCudaErrors(cudaMalloc((void**) &d_tempMin, sizeof(float)*pixel));
checkCudaErrors(cudaMalloc((void**) &d_min, sizeof(float)*pixel));
checkCudaErrors(cudaMemcpy(d_min, d_logLuminance, sizeof(float)*pixel, cudaMemcpyDeviceToDevice));
dim3 subGrid = gridSize;
for(int reduceLevel = pixel; reduceLevel > 0; reduceLevel /= blockSize.x) {
checkCudaErrors(cudaMemcpy(d_tempMin, d_min, sizeof(float)*pixel, cudaMemcpyDeviceToDevice));
reduceMin<<<subGrid,blockSize,blockSize.x*sizeof(float)>>>(d_tempMin, d_min);
cudaDeviceSynchronize(); checkCudaErrors(cudaGetLastError());
subGrid.x = subGrid.x / blockSize.x + 1;
}
checkCudaErrors(cudaMemcpy(&min_logLum, d_min, sizeof(float), cudaMemcpyDeviceToHost));
std::cout<< "Min value = " << min_logLum << std::endl;
checkCudaErrors(cudaFree(d_tempMin));
checkCudaErrors(cudaFree(d_min));
}
And if you are curious, here is my reduction kernel:
__global__
void reduceMin(const float* const g_inputRange,
float* g_outputRange)
{
extern __shared__ float sdata[];
unsigned int tid = threadIdx.x;
unsigned int i = blockDim.x * blockIdx.x + threadIdx.x;
sdata[tid] = g_inputRange[i];
__syncthreads();
for(unsigned int s = blockDim.x/2; s > 0; s >>= 1){
if (tid < s){
sdata[tid] = min(sdata[tid],sdata[tid+s]);
}
__syncthreads();
}
if(tid == 0){
g_outputRange[blockIdx.x] = sdata[0];
}
}
There are many ways to skin the cat, but if you want to minimize kernel launches, it can always be done with at most two kernel launches.
The first kernel launch is composed of at most as many blocks as the maximum number of threads per block that your device supports: newer devices support 1024, older devices 512.
Each of these (at most 512 or 1024) blocks in the first kernel will participate in a grid-looping sum of all the data elements in global memory.
Each of these blocks will then do a partial reduction and write a partial result to global memory. There will be 512 or 1024 of these partial results.
The second kernel launch will be composed of 512 or 1024 threads in a single block. Each thread will pick up one of the partial results from global memory, and then the threads in that single block will cooperatively reduce the partial results to a single final result, and write it back to global memory.
The "grid-looping sum" is described in reduction #7 here as "multiple add/thread". All of the reductions described in that document are available in the NVIDIA reduction sample code.
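For illustration only (this is a sketch, not the asker's code or the NVIDIA sample verbatim; d_partial is an assumed device buffer of 1024 floats, and the 1024-block/1024-thread figures assume a device limit of 1024 threads per block), the grid-looping min reduction and the two launches could look like this:
#include <cfloat>

__global__ void reduceMinGridStride(const float *g_in, float *g_out, unsigned int n)
{
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    float v = FLT_MAX;
    //grid-looping ("grid-stride") pass: each thread folds many input elements into one value
    for (unsigned int i = blockIdx.x * blockDim.x + tid; i < n; i += blockDim.x * gridDim.x)
        v = fminf(v, g_in[i]);
    sdata[tid] = v;
    __syncthreads();
    //standard shared-memory tree reduction within the block
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] = fminf(sdata[tid], sdata[tid + s]);
        __syncthreads();
    }
    if (tid == 0) g_out[blockIdx.x] = sdata[0];
}

//launch 1: up to 1024 blocks, each writes one partial minimum to d_partial
reduceMinGridStride<<<1024, 512, 512 * sizeof(float)>>>(d_logLuminance, d_partial, pixel);
//launch 2: a single block reduces the 1024 partial results to the final minimum in d_min
reduceMinGridStride<<<1, 1024, 1024 * sizeof(float)>>>(d_partial, d_min, 1024);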

CUDA Dynamic Parallelism: stream synchronization from the device

I am basically looking for a way to synchronize a stream from within the device. I want to avoid using cudaDeviceSynchronize(), as it would serialize the execution of the kernels that I want to run concurrently using streams.
More detailed description: I have written a kernel that is a stabilized bi-conjugate gradient solver. I want to launch this kernel concurrently on different data using streams.
This kernel uses cuBLAS functions, which are called from within the kernel.
One of the operations required by the solver is the calculation of a dot product of two vectors. This can be done with cublasdot(). But as this call is synchronous, the execution of kernels in different streams gets serialized. Instead of calling a dot product function, I calculate the dot product using cublasspmv(), which is called asynchronously. The problem is that this function returns before the result is calculated. I therefore want to synchronize the stream from the device - I am looking for an equivalent of cudaStreamSynchronize() that is callable from the device.
__device__ float _cDdot(cublasHandle_t & cublasHandle, const int n, real_t * x, real_t * y) {
float *norm; norm = new float;
float alpha = 1.0f; float beta = 0.0f;
cublasSgemv_v2(cublasHandle, CUBLAS_OP_N ,1 , n, &alpha, x, 1, y, 1, &beta, norm, 1);
return *norm;
}
What can I do to make sure that the result is calculated before the function returns? Of course, inserting cudaDeviceSynchronize() works, but as I mentioned, it serializes the execution of my kernel across streams.
Probably if you read the programming guide dynamic parallelism section carefully (especially streams, events, and synchronization), you may get some ideas. Here's what I came up with:
There is an implicit NULL stream (on the device) associated with the execution sequence that calls your _cDdot function (oddly named, IMHO, since you're working with float quantities in that case, i.e. using Sgemv). Therefore, any cuda kernel or API call issued after the call to cublasSgemv_v2 in your function should wait until any cuda activity associated with the cublasSgemv_v2 function is complete. If you insert an innocuous cuda API call, or else a dummy kernel call, after the call to cublasSgemv_v2, it should wait for that to be complete. This should give you the thread-level synchronization you are after. You might also be able to use a cudaEventRecord call followed by a cudaStreamWaitEvent call.
Here's an example to show the implicit stream synchronization approach:
#include <stdio.h>
#include <cublas_v2.h>
#define SZ 16
__global__ void dummy_kernel(float *in, float *out){
*out = *in;
}
__device__ float _cDdot(cublasHandle_t & cublasHandle, const int n, float * x, float * y, const int wait) {
float *norm; norm = new float;
float alpha = 1.0f; float beta = 0.0f;
*norm = 0.0f;
cublasSgemv_v2(cublasHandle, CUBLAS_OP_N ,1 , n, &alpha, x, 1, y, 1, &beta, norm, 1);
if (wait){
dummy_kernel<<<1,1>>>(norm, norm);
}
return *norm;
}
__global__ void compute(){
cublasHandle_t my_h;
cublasStatus_t status;
status = cublasCreate(&my_h);
if (status != CUBLAS_STATUS_SUCCESS) printf("cublasCreate fail\n");
float *x, *y;
x = new float[SZ];
y = new float[SZ];
for (int i = 0; i < SZ; i++){
x[i] = 1.0f;
y[i] = 1.0f;}
float result = _cDdot(my_h, SZ, x, y, 0);
printf("result with no wait = %f\n", result);
result = _cDdot(my_h, SZ, x, y, 1);
printf("result with wait = %f\n", result);
}
int main(){
compute<<<1,1>>>();
cudaDeviceSynchronize();
return 0;
}
compile with:
nvcc -arch=sm_35 -rdc=true -o t302 t302.cu -lcudadevrt -lcublas -lcublas_device
results:
$ ./t302
result with no wait = 0.000000
result with wait = 16.000000
$
Unfortunately, when I tried a completely empty dummy_kernel, it did not work unless I compiled with -G. So the compiler may be smart enough to optimize out a completely empty child kernel call.

Dot Product in CUDA using atomic operations - getting wrong results

I am trying to implement the dot product in CUDA and compare the result with what MATLAB returns. My CUDA code (based on this tutorial) is the following:
#include <stdio.h>
#define N (2048 * 8)
#define THREADS_PER_BLOCK 512
#define num_t float
// The kernel - DOT PRODUCT
__global__ void dot(num_t *a, num_t *b, num_t *c)
{
__shared__ num_t temp[THREADS_PER_BLOCK];
int index = threadIdx.x + blockIdx.x * blockDim.x;
temp[threadIdx.x] = a[index] * b[index];
__syncthreads(); //Synchronize!
*c = 0.00;
// Does it need to be tid==0 that
// undertakes this task?
if (0 == threadIdx.x) {
num_t sum = 0.00;
int i;
for (i=0; i<THREADS_PER_BLOCK; i++)
sum += temp[i];
atomicAdd(c, sum);
//WRONG: *c += sum; This read-write operation must be atomic!
}
}
// Initialize the vectors:
void init_vector(num_t *x)
{
int i;
for (i=0 ; i<N ; i++){
x[i] = 0.001 * i;
}
}
// MAIN
int main(void)
{
num_t *a, *b, *c;
num_t *dev_a, *dev_b, *dev_c;
size_t size = N * sizeof(num_t);
cudaMalloc((void**)&dev_a, size);
cudaMalloc((void**)&dev_b, size);
cudaMalloc((void**)&dev_c, size);
a = (num_t*)malloc(size);
b = (num_t*)malloc(size);
c = (num_t*)malloc(size);
init_vector(a);
init_vector(b);
cudaMemcpy(dev_a, a, size, cudaMemcpyHostToDevice);
cudaMemcpy(dev_b, b, size, cudaMemcpyHostToDevice);
dot<<<N/THREADS_PER_BLOCK, THREADS_PER_BLOCK>>>(dev_a, dev_b, dev_c);
cudaMemcpy(c, dev_c, sizeof(num_t), cudaMemcpyDeviceToHost);
printf("a = [\n");
int i;
for (i=0;i<10;i++){
printf("%g\n",a[i]);
}
printf("...\n");
for (i=N-10;i<N;i++){
printf("%g\n",a[i]);
}
printf("]\n\n");
printf("a*b = %g.\n", *c);
free(a); free(b); free(c);
cudaFree(dev_a);
cudaFree(dev_b);
cudaFree(dev_c);
}
and I compile it with:
/usr/local/cuda-5.0/bin/nvcc -m64 -I/usr/local/cuda-5.0/include -gencode arch=compute_20,code=sm_20 -o multi_dot_product.o -c multi_dot_product.cu
g++ -m64 -o multi_dot_product multi_dot_product.o -L/usr/local/cuda-5.0/lib64 -lcudart
Information about my NVIDIA cards can be found at http://pastebin.com/8yTzXUuK. I tried to verify the result in MATLAB using the following simple code:
N = 2048 * 8;
a = zeros(N,1);
for i=1:N
a(i) = 0.001*(i-1);
end
dot_product = a'*a;
But as N increases, I'm getting significantly different results (for instance, for N=2048*32 CUDA returns 6.73066e+07 while MATLAB returns 9.3823e+07; for N=2048*64 CUDA gives 3.28033e+08 while MATLAB gives 7.5059e+08). I'm inclined to believe that the discrepancy stems from the use of float in my C code, but if I replace it with double the compiler complains that atomicAdd does not support double parameters. How should I fix this problem?
Update: Also, for high values of N (e.g. 2048*64), I noticed that the result returned by CUDA changes at every run. This does not happen if N is low (e.g. 2048*8).
At the same time, I have a more fundamental question: the variable temp is an array of size THREADS_PER_BLOCK and is shared between threads in the same block. Is it also shared between blocks, or does every block operate on a different copy of this variable? Should I think of the method dot as instructions to every block? Can someone elaborate on how exactly the jobs are split and how the variables are shared in this example?
Comment this line out of your kernel:
// *c = 0.00;
And add these lines to your host code, before the kernel call (after the cudaMalloc of dev_c):
num_t h_c = 0.0f;
cudaMemcpy(dev_c, &h_c, sizeof(num_t), cudaMemcpyHostToDevice);
And I believe you'll get results that match MATLAB, more or less.
The fact that you have this line in your kernel unprotected by any synchronization is messing you up. Every thread of every block, whenever it happens to execute, is zeroing out c as you have written it.
By the way, we can do significantly better with this operation in general by using a classical parallel reduction method. A basic (not optimized) illustration is here. If you combine that method with your usage of shared memory and a single atomicAdd at the end (one atomicAdd per block) you'll have a significantly improved implementation. Although it's not a dot product, this example combines those ideas.
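A rough sketch of that combination (not the linked example; dot_reduced is a hypothetical replacement for the dot kernel above, it assumes dev_c is zeroed from the host as described, and it keeps the question's grid where N is a multiple of THREADS_PER_BLOCK):
__global__ void dot_reduced(num_t *a, num_t *b, num_t *c)
{
    __shared__ num_t temp[THREADS_PER_BLOCK];
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    temp[threadIdx.x] = a[index] * b[index];
    __syncthreads();
    //tree reduction within the block instead of a serial loop in thread 0
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            temp[threadIdx.x] += temp[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        atomicAdd(c, temp[0]); //exactly one atomicAdd per block
}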
Edit: responding to a question below in the comments:
A kernel function is the set of instructions that all threads in the grid (all threads associated with a kernel launch, by definition) execute. However, it's reasonable to think of execution as being managed by threadblock, since the threads in a threadblock are executing together to a large extent. However, even within a threadblock, execution is not necessarily in perfect lockstep across all threads. Normally when we think of lockstep execution, we think of a warp, which is a group of 32 threads in a single threadblock. Therefore, since execution amongst warps within a block can be skewed, this hazard was present even for a single threadblock. However, if there were only one threadblock, we could have gotten rid of the hazard in your code using appropriate sync and control mechanisms like __syncthreads() and (if threadIdx.x == 0) etc. But these mechanisms are useless for the general case of controlling execution across multiple threadblocks. Multiple threadblocks can execute in any order. The only defined sync mechanism across an entire grid is the kernel launch itself. Therefore, to fix your issue, we had to zero out c prior to the kernel launch.