I want to do this program in CUDA.
In "main.cpp" I have:
struct Center{
double * Data;
int dimension;
};
typedef struct Center Center;
//I allocate a pointer to M Center elements with cudaMalloc, as follows
....
#include "kernel.cu"
....
Center *V_dev;
int M = 100, N = 4;
cudaStatus = cudaMalloc((void**)&V_dev, M*sizeof(Center));
Init<<<1,M>>>(V_dev, M, N); // I always know the value of N (the dimension) before calling
My "kernel.cu" file is something like this
#include "cuda_runtime.h"
#include"device_launch_parameters.h"
... //other include headers to allow my .cu file to know the Center type definition
__global__ void Init(Center *V, int N, int dimension){
V[threadIdx.x].dimension = dimension;
V[threadIdx.x].Data = (double*)malloc(dimension*sizeof(double));
for(int i=0; i<dimension; i++)
V[threadIdx.x].Data[i] = 0; // The value can be any operation returning a float that I want to put here
}
I'm on Visual Studio 2008 with CUDA 5.0. When I build my project, I get this error:
error: calling a __host__ function("malloc") from a __global__ function("Init") is not allowed.
Please, how can I make this work? (I know that malloc and the other CPU memory-allocation functions are not meant for device memory.)
malloc is allowed in device code but you have to be compiling for a cc2.0 or greater target GPU.
Adjust your VS project settings to remove any GPU device settings like compute_10,sm_10 and replace them with compute_20,sm_20 or higher to match your GPU. (And, to run that code, your GPU needs to be cc2.0 or higher.)
You need the compiler parameter -arch=sm_20 and a GPU which supports it.
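For reference, here is a minimal sketch (my own code, not the asker's; the kernel name and sizes are placeholders) of what in-kernel malloc looks like when built for a cc2.0+ target:
__global__ void alloc_demo(int dimension)
{
    // In-kernel malloc allocates from the device heap (requires cc2.0 or higher)
    double *data = (double*)malloc(dimension * sizeof(double));
    if (data == NULL) return;              // the device heap can be exhausted, so check
    for (int i = 0; i < dimension; i++)
        data[i] = 0.0;
    free(data);                            // free it from device code as well
}
int main()
{
    // optional: enlarge the device heap (default is 8 MB)
    // cudaDeviceSetLimit(cudaLimitMallocHeapSize, 32 * 1024 * 1024);
    alloc_demo<<<1, 100>>>(4);
    cudaDeviceSynchronize();
    return 0;
}
Compile with something like: nvcc -arch=sm_20 demo.cu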
There are similar questions to what I'm about to ask, but I feel like none of them get at the heart of what I'm really looking for. What I have now is a CUDA method that requires defining two arrays in shared memory. The size of the arrays is given by a variable that is read into the program after the start of execution. Because of this, I cannot use that variable to size the arrays, since the size of shared arrays must be known at compile time. I do not want to do something like __shared__ double arr1[1000], because typing in the size by hand is useless to me as it will change depending on the input. In the same vein, I cannot use #define to create a constant for the size.
Now I can follow an example similar to what is in the manual (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#shared) such as
extern __shared__ float array[];
__device__ void func() // __device__ or __global__ function
{
short* array0 = (short*)array;
float* array1 = (float*)&array0[128];
int* array2 = (int*)&array1[64];
}
But this still runs into an issue. From what I've read, the extern shared array always points to the start of the dynamic allocation. That means my second array needs to be shifted over by the size of the first array, as they do in this example. But the size of the first array depends on user input.
Another question (Cuda Shared Memory array variable) has a similar issue, and there the advice was to create a single array serving as both arrays and simply adjust the indices to match. While this does seem to do what I want, it looks very messy. Is there any way around this so that I can still maintain two independent arrays, each with a size defined by user input?
When using dynamic shared memory with CUDA, there is one and only one pointer passed to the kernel, which defines the start of the requested/allocated area in bytes:
extern __shared__ char array[];
There is no way to handle it differently. However this does not prevent you from having two user-sized arrays. Here's a worked example:
$ cat t501.cu
#include <stdio.h>
__global__ void my_kernel(unsigned arr1_sz, unsigned arr2_sz){
extern __shared__ char array[];
double *my_ddata = (double *)array;
char *my_cdata = arr1_sz*sizeof(double) + array;
for (int i = 0; i < arr1_sz; i++) my_ddata[i] = (double) i*1.1f;
for (int i = 0; i < arr2_sz; i++) my_cdata[i] = (char) i;
printf("at offset %d, arr1: %lf, arr2: %d\n", 10, my_ddata[10], (int)my_cdata[10]);
}
int main(){
unsigned double_array_size = 256;
unsigned char_array_size = 128;
unsigned shared_mem_size = (double_array_size*sizeof(double)) + (char_array_size*sizeof(char));
my_kernel<<<1,1, shared_mem_size>>>(256, 128);
cudaDeviceSynchronize();
return 0;
}
$ nvcc -arch=sm_20 -o t501 t501.cu
$ cuda-memcheck ./t501
========= CUDA-MEMCHECK
at offset 10, arr1: 11.000000, arr2: 10
========= ERROR SUMMARY: 0 errors
$
If you have an arbitrary arrangement of arrays of mixed data types, you'll want to manually align your array starting points, use alignment directives, or use structures to help with alignment, and in every case be sure to request enough shared memory.
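For instance, here is a hedged sketch of manual alignment (the names and sizes are mine, not from the question): if a char array comes first, the start of the following double array is rounded up to an 8-byte boundary, and the host requests the padded size:
__global__ void my_kernel2(unsigned char_sz, unsigned dbl_sz){
  extern __shared__ char array[];
  char *my_cdata = array;                                    // char array starts at offset 0
  // round the start of the double array up to an 8-byte boundary
  size_t offset = (char_sz*sizeof(char) + 7) & ~((size_t)7);
  double *my_ddata = (double *)(array + offset);
  for (int i = 0; i < char_sz; i++) my_cdata[i] = (char) i;
  for (int i = 0; i < dbl_sz; i++)  my_ddata[i] = (double) i*1.1f;
}
// host side: request the padded size
// unsigned shared_mem_size = ((char_array_size + 7) & ~7u) + double_array_size*sizeof(double);
// my_kernel2<<<1, 1, shared_mem_size>>>(char_array_size, double_array_size);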
In my CUDA code I am trying to use a structure and a constant structure object; the values are assigned to the constant object using cudaMemcpyToSymbol, but these constant values are never read in the kernel. I know this is not the intended use of constant memory, since each thread needs to access different values and cannot take advantage of the memory broadcast to a half-warp, but in this situation I need it this way.
#include <iostream>
#include <stdio.h>
#include <cuda.h>
using namespace std;
struct CDistance
{
int Magnitude;
int Direction;
};
__constant__ CDistance *c_daSTLDistance;
__global__ static void CalcSTLDistance_Kernel(CDistance *m_daSTLDistance)
{
int ID = threadIdx.x;
m_daSTLDistance[ID].Magnitude = m_daSTLDistance[ID].Magnitude + c_daSTLDistance[ID].Magnitude ;
m_daSTLDistance[ID].Direction = 2 ;
}
// main routine that executes on the host
int main(void)
{
CDistance *m_haSTLDistance,*m_daSTLDistance;
m_haSTLDistance = new CDistance[10];
for(int i=0;i<10;i++)
{
m_haSTLDistance[i].Magnitude=3;
m_haSTLDistance[i].Direction=2;
}
//m_haSTLDistance =(CDistance*)malloc(100 * sizeof(CDistance));
cudaMalloc((void**)&m_daSTLDistance,sizeof(CDistance)*10);
cudaMemcpy(m_daSTLDistance, m_haSTLDistance,sizeof(CDistance)*10, cudaMemcpyHostToDevice);
cudaMemcpyToSymbol(c_daSTLDistance, m_haSTLDistance, sizeof(m_daSTLDistance)*10);
CalcSTLDistance_Kernel<<< 1, 100 >>> (m_daSTLDistance);
cudaMemcpy(m_haSTLDistance, m_daSTLDistance, sizeof(CDistance)*10, cudaMemcpyDeviceToHost);
for (int i=0;i<10;i++){
cout<<m_haSTLDistance[i].Magnitude<<endl;
}
free(m_haSTLDistance);
cudaFree(m_daSTLDistance);
}
In the output, the constant value c_daSTLDistance[ID].Magnitude is never read in the kernel: I get back the statically assigned value 3, whereas I want the device value 3 added to the constant value so that a total of 6 is returned.
Running it under cuda-memcheck reports an out-of-bounds read.
Your code doesn't work because of an uninitialised pointer/buffer overflow problem around the use of c_daSTLDistance. It is illegal to do this:
__constant__ CDistance *c_daSTLDistance;
....
cudaMemcpyToSymbol(c_daSTLDistance, m_haSTLDistance, sizeof(m_daSTLDistance)*10);
No memory was ever allocated for c_daSTLDistance, nor was a valid value set for it.
Further, note that all constant memory variables must be statically defined, and there is no ability to dynamically allocate constant memory at runtime. Therefore, what you are attempting to do can't be made to work. Also note that on all but the very oldest of CUDA devices, kernel arguments are stored in constant memory. So if you had a trivially small array of constant structures, it would be far easier and simpler to pass them by value to the kernel. The compiler and runtime will automagically place them in constant memory for you without any explicit host API calls.
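As a hedged sketch of the statically-defined approach for this particular case (array size fixed at 10 to match the code above, and the launch reduced to 10 threads so the accesses stay in bounds):
__constant__ CDistance c_daSTLDistance[10];    // statically sized, lives in constant memory
__global__ static void CalcSTLDistance_Kernel(CDistance *m_daSTLDistance)
{
    int ID = threadIdx.x;
    m_daSTLDistance[ID].Magnitude += c_daSTLDistance[ID].Magnitude;
    m_daSTLDistance[ID].Direction  = 2;
}
// host side:
// cudaMemcpyToSymbol(c_daSTLDistance, m_haSTLDistance, sizeof(CDistance)*10);
// CalcSTLDistance_Kernel<<<1, 10>>>(m_daSTLDistance);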
I am reading the CUB documentation and examples:
#include <cub/cub.cuh> // or equivalently <cub/block/block_radix_sort.cuh>
__global__ void ExampleKernel(...)
{
// Specialize BlockRadixSort for 128 threads owning 4 integer items each
typedef cub::BlockRadixSort<int, 128, 4> BlockRadixSort;
// Allocate shared memory for BlockRadixSort
__shared__ typename BlockRadixSort::TempStorage temp_storage;
// Obtain a segment of consecutive items that are blocked across threads
int thread_keys[4];
...
// Collectively sort the keys
BlockRadixSort(temp_storage).Sort(thread_keys);
...
}
In the example, each thread has 4 keys. It looks like thread_keys will be allocated in local memory (which is backed by global memory). If I only have 1 key per thread, could I declare int thread_key; and keep this variable in a register only?
BlockRadixSort(temp_storage).Sort() takes a pointer to the keys as a parameter. Does that mean the keys have to be in global memory?
I would like to use this code, but I want each thread to hold one key in a register and keep it on-chip in registers/shared memory after the keys are sorted.
Thanks in advance!
You can do this using shared memory (which will keep it "on-chip"). I'm not sure I know how to do it using strictly registers without de-constructing the BlockRadixSort object.
Here's an example code that uses shared memory to hold the initial data to be sorted, and the final sorted results. This sample is mostly set up for one data element per thread, since that seems to be what you are asking for. It's not difficult to extend it to multiple elements per thread, and I have put most of the plumbing in place to do that, with the exception of the data synthesis and debug printouts:
#include <cub/cub.cuh>
#include <stdio.h>
#define nTPB 32
#define ELEMS_PER_THREAD 1
// Block-sorting CUDA kernel (nTPB threads each owning ELEMS_PER_THREAD integers)
__global__ void BlockSortKernel()
{
__shared__ int my_val[nTPB*ELEMS_PER_THREAD];
using namespace cub;
// Specialize BlockRadixSort collective types
typedef BlockRadixSort<int, nTPB, ELEMS_PER_THREAD> my_block_sort;
// Allocate shared memory for collectives
__shared__ typename my_block_sort::TempStorage sort_temp_stg;
// need to extend synthetic data for ELEMS_PER_THREAD > 1
my_val[threadIdx.x*ELEMS_PER_THREAD] = (threadIdx.x + 5)%nTPB; // synth data
__syncthreads();
printf("thread %d data = %d\n", threadIdx.x, my_val[threadIdx.x*ELEMS_PER_THREAD]);
// Collectively sort the keys
my_block_sort(sort_temp_stg).Sort(*static_cast<int(*)[ELEMS_PER_THREAD]>(static_cast<void*>(my_val+(threadIdx.x*ELEMS_PER_THREAD))));
__syncthreads();
printf("thread %d sorted data = %d\n", threadIdx.x, my_val[threadIdx.x*ELEMS_PER_THREAD]);
}
int main(){
BlockSortKernel<<<1,nTPB>>>();
cudaDeviceSynchronize();
}
This seems to work correctly for me, in this case I happened to be using RHEL 5.5/gcc 4.1.2, CUDA 6.0 RC, and CUB v1.2.0 (which is quite recent).
The strange/ugly static casting is needed, as far as I can tell, because the CUB Sort is expecting a reference to an array of length equal to the customization parameter ITEMS_PER_THREAD (i.e. ELEMS_PER_THREAD):
__device__ __forceinline__ void Sort(
Key (&keys)[ITEMS_PER_THREAD],
int begin_bit = 0,
int end_bit = sizeof(Key) * 8)
{ ...
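For comparison, if the keys start out in a per-thread local array of length ELEMS_PER_THREAD (which the compiler will usually keep in registers), no cast is needed because the argument type already matches. A hedged sketch of that variant, reusing the nTPB and ELEMS_PER_THREAD definitions from the example above:
__global__ void BlockSortRegKernel()
{
    using namespace cub;
    typedef BlockRadixSort<int, nTPB, ELEMS_PER_THREAD> my_block_sort;
    __shared__ typename my_block_sort::TempStorage sort_temp_stg;
    int thread_keys[ELEMS_PER_THREAD];               // per-thread array, register-resident if possible
    thread_keys[0] = (threadIdx.x + 5) % nTPB;       // synth data, one key per thread
    my_block_sort(sort_temp_stg).Sort(thread_keys);  // type matches Key (&)[ITEMS_PER_THREAD], so no cast
    printf("thread %d sorted key = %d\n", threadIdx.x, thread_keys[0]);
}
After the sort, the keys are redistributed across the threads' registers in a blocked arrangement, so they stay on-chip unless you explicitly write them out.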
I have a problem accessing and assigning a variable of cusp array1d type from a device/global kernel. The code below gives these errors:
alay.cu(8): warning: address of a host variable "p1" cannot be directly taken in a device function
alay.cu(8): error: calling a __host__ function("thrust::detail::vector_base<float, thrust::device_malloc_allocator<float> > ::operator []") from a __global__ function("func") is not allowed
Code Below
#include <cusp/blas.h>
cusp::array1d<float, cusp::device_memory> p1(10,3);
__global__ void func()
{
p1[blockIdx.x]=p1[blockIdx.x]+blockIdx.x*5;
}
int main()
{
func<<<10,1>>>();
return 0;
}
CUSP matrices and arrays (and the Thrust containers they are built with) are intended for host use only. You cannot directly use them in GPU code.
The canonical way to populate a CUSP sparse matrix would be to construct it in host memory and then copy it across to device memory using the copy constructor, so your trivial example becomes this:
cusp::array1d<float, cusp::host_memory> p1(10);
for(int i=0; i<10; i++) p1[i] = 4.f;
cusp::array1d<float, cusp::device_memory> p2 = p1; // data now on device
If you want to manipulate a sparse matrix in device code, you will need to have a kernel specifically for whichever format you are interested in, and pass pointers to each of the device arrays holding the matrix data as arguments to that kernel. There is good Doxygen source annotation for all of the sparse types included in the CUSP distribution.
Your edit still doesn't present anything which couldn't be done on the host without a kernel, viz:
cusp::array1d<float, cusp::host_memory> p1(10, 3.f);
for(int i=0; i<10; i++) p1[i] += (i * 5.f);
cusp::array1d<float, cusp::device_memory> p2 = p1; // data now on device
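If you do need to touch the array from a kernel, the usual pattern (a sketch based on my assumptions about the example, showing the standard Thrust idiom rather than any CUSP-specific API) is to extract a raw device pointer with thrust::raw_pointer_cast and pass it as a kernel argument:
#include <cusp/array1d.h>
#include <thrust/device_ptr.h>
__global__ void func(float *p, int n)
{
    int i = blockIdx.x;
    if (i < n) p[i] += i * 5.f;
}
int main()
{
    cusp::array1d<float, cusp::device_memory> p1(10, 3.f); // container managed from the host
    float *raw = thrust::raw_pointer_cast(&p1[0]);         // raw device pointer to its storage
    func<<<10, 1>>>(raw, 10);
    cudaDeviceSynchronize();
    return 0;
}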
So, I'm trying to write some code that utilizes Nvidia's CUDA architecture. I noticed that copying to and from the device was really hurting my overall performance, so now I am trying to move a large amount of data onto the device.
As this data is used in numerous functions, I would like it to be global. Yes, I can pass pointers around, but I would really like to know how to work with globals in this instance.
So, I have device functions that want to access a device allocated array.
Ideally, I could do something like:
__device__ float* global_data;
main()
{
cudaMalloc(global_data);
kernel1<<<blah>>>(blah); //access global data
kernel2<<<blah>>>(blah); //access global data again
}
However, I haven't figured out how to create a dynamic array. I found a workaround by declaring the array as follows:
__device__ float global_data[REALLY_LARGE_NUMBER];
And while that doesn't require a cudaMalloc call, I would prefer the dynamic allocation approach.
Something like this should probably work.
#include <algorithm>
#include <cstdio>   // fprintf
#include <cstdlib>  // exit
#define NDEBUG
#define CUT_CHECK_ERROR(errorMessage) do { \
cudaThreadSynchronize(); \
cudaError_t err = cudaGetLastError(); \
if( cudaSuccess != err) { \
fprintf(stderr, "Cuda error: %s in file '%s' in line %i : %s.\n", \
errorMessage, __FILE__, __LINE__, cudaGetErrorString( err) );\
exit(EXIT_FAILURE); \
} } while (0)
__device__ float *devPtr;
__global__
void kernel1(float *some_neat_data)
{
devPtr = some_neat_data;
}
__global__
void kernel2(void)
{
devPtr[threadIdx.x] *= .3f;
}
int main(int argc, char *argv[])
{
float* otherDevPtr;
cudaMalloc((void**)&otherDevPtr, 256 * sizeof(*otherDevPtr));
cudaMemset(otherDevPtr, 0, 256 * sizeof(*otherDevPtr));
kernel1<<<1,128>>>(otherDevPtr);
CUT_CHECK_ERROR("kernel1");
kernel2<<<1,128>>>();
CUT_CHECK_ERROR("kernel2");
return 0;
}
Give it a whirl.
Spend some time focusing on the copious documentation offered by NVIDIA.
From the Programming Guide:
float* devPtr;
cudaMalloc((void**)&devPtr, 256 * sizeof(*devPtr));
cudaMemset(devPtr, 0, 256 * sizeof(*devPtr));
That's a simple example of how to allocate memory. Now, in your kernels, you should accept a pointer to a float like so:
__global__
void kernel1(float *some_neat_data)
{
some_neat_data[threadIdx.x]++;
}
__global__
void kernel2(float *potentially_that_same_neat_data)
{
potentially_that_same_neat_data[threadIdx.x] *= 0.3f;
}
So now you can invoke them like so:
float* devPtr;
cudaMalloc((void**)&devPtr, 256 * sizeof(*devPtr));
cudaMemset(devPtr, 0, 256 * sizeof(*devPtr));
kernel1<<<1,128>>>(devPtr);
kernel2<<<1,128>>>(devPtr);
As this data is used in numerous functions, I would like it to be global.
There are few good reasons to use globals. This definitely is not one. I'll leave it as an exercise to expand this example to include moving "devPtr" to a global scope.
EDIT:
Ok, the fundamental problem is this: your kernels can only access device memory and the only global-scope pointers that they can use are GPU ones. When calling a kernel from your CPU, behind the scenes what happens is that the pointers and primitives get copied into GPU registers and/or shared memory before the kernel gets executed.
So the closest I can suggest is this: use cudaMemcpyToSymbol() to achieve your goals. But, in the background, consider that a different approach might be the Right Thing.
#include <algorithm>
__constant__ float devPtr[1024];
__global__
void kernel1(float *some_neat_data)
{
some_neat_data[threadIdx.x] = devPtr[0] * devPtr[1];
}
__global__
void kernel2(float *potentially_that_same_neat_data)
{
potentially_that_same_neat_data[threadIdx.x] *= devPtr[2];
}
int main(int argc, char *argv[])
{
float some_data[256];
for (int i = 0; i < sizeof(some_data) / sizeof(some_data[0]); i++)
{
some_data[i] = i * 2;
}
cudaMemcpyToSymbol(devPtr, some_data, std::min(sizeof(some_data), sizeof(devPtr) ));
float* otherDevPtr;
cudaMalloc((void**)&otherDevPtr, 256 * sizeof(*otherDevPtr));
cudaMemset(otherDevPtr, 0, 256 * sizeof(*otherDevPtr));
kernel1<<<1,128>>>(otherDevPtr);
kernel2<<<1,128>>>(otherDevPtr);
return 0;
}
Don't forget '--host-compilation=c++' for this example.
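If you really do want the original pattern, a dynamically allocated array reachable through a global device symbol, one way that should work (a sketch, not tested here; note that very old toolkits took the symbol name as a string in cudaMemcpyToSymbol) is to cudaMalloc the buffer on the host and then copy the resulting pointer value into a __device__ pointer symbol:
__device__ float *global_data;   // global-scope device pointer symbol

__global__ void kernel1() { global_data[threadIdx.x] += 1.0f; }
__global__ void kernel2() { global_data[threadIdx.x] *= 0.3f; }

int main()
{
    float *devPtr;
    cudaMalloc((void**)&devPtr, 256 * sizeof(*devPtr));
    cudaMemset(devPtr, 0, 256 * sizeof(*devPtr));
    // copy the pointer value (not the data it points to) into the device symbol
    cudaMemcpyToSymbol(global_data, &devPtr, sizeof(devPtr));
    kernel1<<<1,128>>>();
    kernel2<<<1,128>>>();
    cudaFree(devPtr);
    return 0;
}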
I went ahead and tried the solution of allocating a temporary pointer and passing it to a simple global function similar to kernel1.
The good news is that it does work :)
However, I think it confuses the compiler as I now get "Advisory: Cannot tell what pointer points to, assuming global memory space" whenever I try to access the global data. Luckily, the assumption happens to be correct, but the warnings are annoying.
Anyway, for the record - I have looked at many of the examples and ran through the NVIDIA exercises where the point is to get the output to say "Correct!". However, I haven't looked at all of them. If anyone knows of an SDK example that does dynamic global device memory allocation, I would still like to know.
Erm, moving devPtr to global scope is exactly the problem I'm asking about.
I have an implementation that does exactly that, with the two kernels having a pointer to data passed in. I explicitly don't want to pass in those pointers.
I have read the documentation fairly closely, and hit up the nvidia forums (and google searched for an hour or so), but I haven't found an implementation of a global dynamic device array that actually runs (i have tried several that compile and then fail in new and interesting ways).
Check out the samples included with the SDK. Many of those sample projects are a decent way to learn by example.
As this data is used in numerous functions, I would like it to be global.
-
There are few good reasons to use globals. This definitely is not one. I'll leave it as an exercise to expand this example to include moving "devPtr" to a global scope.
What if the kernel operates on a large const structure consisting of arrays? Using so-called constant memory is not an option, because it's very limited in size... so then you have to put it in global memory?