CUDA "Access Violation" when access “ __managed__ int ”variables by a host function - cuda

The problem occurs when I run the "Gipuma" project, which needs OpenCV and CUDA. My video card is a GTX 750 Ti, with CUDA 8.0.
I get an "Access Violation" when accessing the "__managed__ int" variables through a host function. In general, a "__managed__" variable can be read and written from both device and host, so I am confused; is there something wrong in my configuration?
The variables are declared in "gipuma.cu":
#ifndef SHARED_HARDCODED
__managed__ int SHARED_SIZE_W_m;
__constant__ int SHARED_SIZE_W;
__managed__ int SHARED_SIZE_H;
__managed__ int SHARED_SIZE = 0;
__managed__ int WIN_RADIUS_W;
__managed__ int WIN_RADIUS_H;
__managed__ int TILE_W;
__managed__ int TILE_H;
#endif
and the host function in "gipuma.cu":
int runcuda(GlobalState &gs)
{
    WIN_RADIUS_W = 0; // it fails here with an access violation
    printf("test is %d\n", WIN_RADIUS_W);
    printf("Run cuda\n");
    if (gs.params->color_processing)
        gipuma<float4>(gs);
    else
        gipuma<float>(gs);
    return 0;
}
and the error message:
0x000000013FA1DCBD has an unhandled exception (in gipuma.exe): 0xC0000005: An access violation occurred when writing to location 0x0000000000000000.

On devices before compute capability 6.0, host and device may not access __managed__ memory concurrently, because the driver needs an opportunity to copy the data between host and device programmatically.
So, as Robert Crovella already pointed out in his comment, you need to insert a call to cudaDeviceSynchronize() after a kernel call before accessing __managed__ memory from the host again.
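As a minimal sketch of that pattern (the kernel and the use of the managed variable here are illustrative stand-ins, not Gipuma code):
#include <cstdio>

__managed__ int WIN_RADIUS_W;

__global__ void touch()
{
    WIN_RADIUS_W = 1;        // device writes the managed variable
}

int main()
{
    WIN_RADIUS_W = 0;        // host access is fine while no kernel is running
    touch<<<1,1>>>();
    cudaDeviceSynchronize(); // required on pre-6.0 devices before the host
                             // may touch managed memory again
    printf("WIN_RADIUS_W = %d\n", WIN_RADIUS_W);
    return 0;
}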

Related

Declaring a global device array using a pointer with cudaMemcpyFromSymbol

When I use the following code, it shows the correct value 3345.
#include <iostream>
#include <cstdio>

__device__ int d_Array[1];

__global__ void foo()
{
    d_Array[0] = 3345;
}

int main()
{
    foo<<<1,1>>>();
    cudaDeviceSynchronize();
    int h_Array[1];
    cudaMemcpyFromSymbol(&h_Array, d_Array, sizeof(int));
    std::cout << "values: " << h_Array[0] << std::endl;
}
But if we replace the line of code __device__ int d_Array[1]; with
__device__ int *d_Array; it shows a wrong value. Why?
The problem is in memory allocation: the pointer is never made to point at allocated device memory. Try the same thing in C++ on the host and you will either get an error or an unexpected value.
In addition, you can check for CUDA errors by calling cudaGetLastError() after your kernel. In the first case everything is fine, and the result is cudaSuccess. In the second case there is a cudaErrorLaunchFailure error. Here is the explanation of this error (from the CUDA toolkit documentation):
"An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. The device cannot be used until cudaThreadExit() is called. All existing device memory allocations are invalid and must be reconstructed if the program is to continue using CUDA."
Note that cudaMemcpyToSymbol also supports an offset parameter for array indexing.
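For completeness, here is one way the pointer version could be made to work: point the __device__ symbol at real device storage before the kernel writes through it. This is a sketch under that assumption (d_storage and h_value are illustrative names), not code from the question:
#include <iostream>

__device__ int *d_Array;    // a pointer symbol; it owns no storage itself

__global__ void foo()
{
    d_Array[0] = 3345;      // valid only once d_Array points at real memory
}

int main()
{
    int *d_storage;
    cudaMalloc(&d_storage, sizeof(int));                     // allocate device storage
    cudaMemcpyToSymbol(d_Array, &d_storage, sizeof(int *));  // aim the symbol at it
    foo<<<1,1>>>();
    cudaDeviceSynchronize();
    int h_value;
    cudaMemcpy(&h_value, d_storage, sizeof(int), cudaMemcpyDeviceToHost);
    std::cout << "values: " << h_value << std::endl;         // prints 3345
    cudaFree(d_storage);
    return 0;
}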

CUDA MemcpyHostToDevice

typedef struct {
    int M;
    int N;
    int records[NMAX][SZM];
    int times[NMAX];
    bool prime[NMAX];
} DATASET;

typedef int ITEMSET[SZM];

__device__ DATASET d_db;
DATASET db;

int main(void) {
    loadDB();
    cudaMemcpy(&d_db, &db, sizeof(DATASET), cudaMemcpyHostToDevice);
    ...
I have a device variable d_db and a variable db on the host. After I load some values into db, I want to copy it to the device. It compiles with no errors, but when I execute the code there are some warnings about cache and sometimes the PC restarts. What am I doing wrong?
With __device__ variables you need to use cudaMemcpyToSymbol and cudaMemcpyFromSymbol instead of cudaMemcpy.
So in my case I have to use
cudaMemcpyToSymbol(d_db, &db, sizeof(DATASET));
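A minimal self-contained sketch of that round trip (a two-field struct stands in for the question's DATASET):
#include <cstdio>

typedef struct { int M; int N; } SMALLSET;

__device__ SMALLSET d_db;   // device-side symbol
SMALLSET db;                // host-side copy

int main(void)
{
    db.M = 3; db.N = 7;
    cudaMemcpyToSymbol(d_db, &db, sizeof(SMALLSET));      // host -> device
    // ... kernels that read or modify d_db would run here ...
    SMALLSET check;
    cudaMemcpyFromSymbol(&check, d_db, sizeof(SMALLSET)); // device -> host
    printf("M=%d N=%d\n", check.M, check.N);              // prints M=3 N=7
    return 0;
}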

How does CUDA's cudaMemcpyFromSymbol work?

I understand the concept of passing a symbol, but was wondering what exactly is going on behind the scenes. If it's not the address of the variable, then what is it?
I believe the details are that for each __device__ variable, cudafe creates a normal global variable as in C and also a CUDA-specific PTX variable. The global C variable is used so that the host program can refer to the variable by its address, and the PTX variable is used for the actual storage of the variable. The presence of the host variable also allows the host compiler to successfully parse the program. When the device program executes, it operates on the PTX variable when it manipulates the variable by name.
If you wrote a program to print the address of a __device__ variable, the address would differ depending on whether you printed it out from the host or device:
#include <cstdio>

__device__ int device_variable = 13;

__global__ void kernel()
{
    printf("device_variable address from device: %p\n", &device_variable);
}

int main()
{
    printf("device_variable address from host: %p\n", &device_variable);
    kernel<<<1,1>>>();
    cudaDeviceSynchronize();
    return 0;
}
$ nvcc test_device.cu -run
device_variable address from host: 0x65f3e8
device_variable address from device: 0x403ee0000
Since the host and device do not agree on the address of the variable, copying to it is problematic, and indeed __host__ functions are not allowed to access __device__ variables directly:
__device__ int device_variable;

int main()
{
    device_variable = 13;
    return 0;
}
$ nvcc warning.cu
error.cu(5): warning: a __device__ variable "device_variable" cannot be directly written in a host function
cudaMemcpyFromSymbol allows copying data back from a __device__ variable, provided the programmer happens to know the (mangled) name of the variable in the source program.
cudafe facilitates this by creating a mapping from mangled names to the device addresses of variables at program initialization time. The program discovers the device address of each variable by querying the CUDA driver for a driver token given its mangled name.
So the implementation of cudaMemcpyFromSymbol would look something like this in pseudocode:
std::map<std::string, void*> names_to_addresses;

cudaError_t cudaMemcpyFromSymbol(void* dst, const char* symbol, size_t count, size_t offset, cudaMemcpyKind kind)
{
    // look up the device address registered for this mangled name
    char* ptr = static_cast<char*>(names_to_addresses[symbol]);
    return cudaMemcpy(dst, ptr + offset, count, kind);
}
If you look at the output of nvcc --keep, you can see for yourself the way that the program interacts with special CUDART APIs that are not normally available to create the mapping:
$ nvcc --keep test_device.cu
$ grep device_variable test_device.cudafe1.stub.c
static void __nv_cudaEntityRegisterCallback( void **__T22) { __nv_dummy_param_ref(__T22); __nv_save_fatbinhandle_for_managed_rt(__T22); __cudaRegisterEntry(__T22, ((void ( *)(void))kernel), _Z6kernelv, (-1)); __cudaRegisterVariable(__T22, __shadow_var(device_variable,::device_variable), 0, 4, 0, 0); }
If you inspect the output, you can see that cudafe has inserted a call to __cudaRegisterVariable to create the mapping for device_variable. Users should not attempt to use this API themselves.
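If your own code needs the device address of a symbol, the supported route is cudaGetSymbolAddress rather than the internal registration API. A minimal sketch reusing device_variable from the example above:
#include <cstdio>

__device__ int device_variable = 13;

int main()
{
    void *dev_ptr = nullptr;
    cudaGetSymbolAddress(&dev_ptr, device_variable);   // query the name-to-address mapping
    int value = 0;
    cudaMemcpy(&value, dev_ptr, sizeof(int), cudaMemcpyDeviceToHost);
    printf("value: %d\n", value);                      // prints 13
    return 0;
}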

Reduce by key on device array

I am using reduce_by_key to find the number of elements in an array of type int2 which have the same first value.
For example
Array: <1,2> <1,3> <1,4> <2,5> <2,7>
so the number of elements with 1 as the first value is 3, and with 2 it is 2.
CODE:
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <cstdio>
#include <cstdlib>

struct compare_int2 : public thrust::binary_function<int2, int2, bool> {
    __host__ __device__ bool operator()(const int2 &a, const int2 &b) const {
        return (a.x == b.x);
    }
};

compare_int2 cmp;

int main()
{
    int n, i;
    scanf("%d", &n);
    int2 *h_list = (int2 *) malloc(sizeof(int2)*n);
    int *h_ones = (int *) malloc(sizeof(int)*n);
    int2 *d_list, *C;
    int *d_ones, *D;
    cudaMalloc((void **)&d_list, sizeof(int2)*n);
    cudaMalloc((void **)&d_ones, sizeof(int)*n);
    cudaMalloc((void **)&C, sizeof(int2)*n);
    cudaMalloc((void **)&D, sizeof(int)*n);
    for (i = 0; i < n; i++)
    {
        int2 p;
        printf("Value ? ");
        scanf("%d %d", &p.x, &p.y);
        h_list[i] = p;
        h_ones[i] = 1;
    }
    cudaMemcpy(d_list, h_list, sizeof(int2)*n, cudaMemcpyHostToDevice);
    cudaMemcpy(d_ones, h_ones, sizeof(int)*n, cudaMemcpyHostToDevice);
    thrust::reduce_by_key(d_list, d_list+n, d_ones, C, D, cmp);
    return 0;
}
The above code shows a segmentation fault. I ran it under gdb, which reported the segfault at this location:
thrust::system::detail::internal::scalar::reduce_by_key
(keys_first=0x1304740000, keys_last=0x1304740010, values_first=0x1304740200, keys_output=0x1304740400, values_output=0x1304740600, binary_pred=..., binary_op=...)
at /usr/local/cuda-6.5/bin/../targets/x86_64-linux/include/thrust/system/detail/internal/scalar/reduce_by_key.h:61
61    InputKeyType temp_key = *keys_first
How do I use reduce_by_key on device arrays?
Thrust interprets ordinary pointers as pointing to data on the host:
thrust::reduce_by_key(d_list, d_list+n, d_ones, C, D,cmp);
Therefore thrust will call the host path for the above algorithm, and it will seg fault when it attempts to dereference those pointers in host code. This is covered in the thrust getting started guide:
You may wonder what happens when a "raw" pointer is used as an argument to a Thrust function. Like the STL, Thrust permits this usage and it will dispatch the host path of the algorithm. If the pointer in question is in fact a pointer to device memory then you'll need to wrap it with thrust::device_ptr before calling the function.
Thrust has a variety of mechanisms (e.g. device_ptr, device_vector, and execution policy) to identify to the algorithm that the data is device-resident and the device path should be used.
The simplest modification for your existing code might be to use device_ptr:
#include <thrust/device_ptr.h>
...
thrust::device_ptr<int2> dlistptr(d_list);
thrust::device_ptr<int> donesptr(d_ones);
thrust::device_ptr<int2> Cptr(C);
thrust::device_ptr<int> Dptr(D);
thrust::reduce_by_key(dlistptr, dlistptr+n, donesptr, Cptr, Dptr,cmp);
The issue described above is similar to another issue you asked about.
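Alternatively, an execution policy expresses the same intent without wrapping each pointer. A sketch assuming the same d_list, d_ones, C, D, n, and cmp as in the question:
#include <thrust/execution_policy.h>
#include <thrust/reduce.h>
...
thrust::reduce_by_key(thrust::device, d_list, d_list + n, d_ones, C, D, cmp);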

CUDA "invalid device symbol"

The skeleton of the code is
a_kernel.cu
__constant__ unsigned char carray[256];
a.cu
#include <a_kernel.cu>
...
unsigned char h_carray[256];
...
cudaMemcpyToSymbol("carray", h_carray, 256);
The system configuration is:
Windows 7 64-bit
CUDA toolkit 3.1, SDK 3.1
GeForce GTX 460
rules file in SDK 3.1
I get an "invalid device symbol" error string at cudaMemcpyToSymbol.
Any help would be appreciated. :)
It would help if you could post some code to reproduce the problem; perhaps you could do this on the CUDA forums. Having said that, __constant__ variables have static (i.e. translation unit) scope. The simplest structure to follow would be as follows. Note that it may also be worth checking out CUDA 3.2.
host_code.cpp:
#include "cuda_funcs.h"
...
{
    unsigned char h_carray[256];
    cudaMemcpyToSymbol("carray", h_carray, 256);
    processOnGpu(...);
}
...
cuda_funcs.h:
void processOnGpu(...);
cuda_funcs.cu:
__constant__ unsigned char carray[256];

__global__ void kernel(...)
{
    ...
}

void processOnGpu(...)
{
    ...
    kernel<<<...>>>(...);
    ...
}
Check the documentation in the CUDA manual.
You can pass the kind or direction of the memory copy explicitly; note that the enum value is cudaMemcpyHostToDevice (which is also the default for cudaMemcpyToSymbol):
cudaMemcpyToSymbol("carray", h_carray, 256, 0, cudaMemcpyHostToDevice);