I want to write a CUDA program that returns the locations in a larger array that satisfy a specific criterion.
The trivial way to do it is to write a kernel that returns an array of integers, with 1 where the criterion was met and 0 where it was not.
Another way might be to return only the indexes that were found, but that would be problematic given my understanding of GPU synchronization (it is equivalent to implementing a queue/linked list on the GPU).
The problem with the first idea is that the result array would be the size of the input.
Another way I thought about is to create an array of n/8+1 bytes (n = number of items I check) and use 1 bit per input location, holding a sort of compressed representation of the output.
The only thing I could not find out is whether CUDA supports bit addressing in parallel.
An example of how I am doing it now:
__global__ void test_kernel(char *gpuText, char *gpuFind, int *gputTextSize, int *gputSearchSize, int *resultsGPU)
{
    int start_idx = threadIdx.x + (blockIdx.x * blockDim.x);
    if (start_idx > *gputTextSize - *gputSearchSize) { return; }
    unsigned int wrong = 0;
    for (int i = 0; i < *gputSearchSize; i++) {
        wrong = calculationOnGpu(gpuText, gpuFind, start_idx, i, gputSearchSize);
    }
    resultsGPU[start_idx] = !wrong;
}
What I want to do is use something other than int or char for the "resultsGPU" array.
Thanks
A CUDA GPU can access items on boundaries of 1, 2, 4, 8, or 16 bytes. It does not have the ability to independently address individual bits within a byte.
Bits in a byte would have to be modified by reading a larger item, such as a char or int, modifying the bit(s) in a register, and then writing that item back to memory. Thus it would be a read-modify-write operation.
In order to preserve the adjacent bits in such a scenario with multiple threads, it would be necessary to update the item (char, int, etc.) atomically. There are no atomics that operate on char quantities, so the bits would need to be grouped into quantities of 32 and written, e.g., as an int. Following that idiom, every thread would be doing an atomic operation.
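As a rough illustration of that atomic read-modify-write idiom, here is a minimal sketch (the data array, the predicate, and the kernel name are all made up; bitResults is assumed to be an array of ceil(n/32) unsigned ints, zero-initialized before launch, e.g. with cudaMemset):
__global__ void bitmask_kernel(const char *data, unsigned int *bitResults, int n)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx >= n) return;
    int match = (data[idx] == 'x');                        // stand-in for the real test
    if (match)
        atomicOr(&bitResults[idx / 32], 1u << (idx & 31)); // set this item's bit
}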
32 also happens to be the warp size currently, so a warp-based intrinsic might be a more efficient way to go here, in particular the warp vote __ballot() function. Something like this:
__global__ void test_kernel(char *gpuText, char *gpuFind, int *gputTextSize, int *gputSearchSize, int *resultsGPU)
{
    int start_idx = threadIdx.x + (blockIdx.x * blockDim.x);
    if (start_idx > *gputTextSize - *gputSearchSize) { return; }
    unsigned int wrong = 0;
    wrong = calculationOnGpu(gpuText, gpuFind, start_idx, 0, gputSearchSize);
    wrong = __ballot(wrong);
    if ((threadIdx.x & 31) == 0)
        resultsGPU[start_idx / 32] = wrong;
}
You haven't provided complete code, so the above is just a sketch of how it might be done. I'm not sure the loop in your original kernel was an efficient approach anyway, and the above assumes one thread per data item to be searched. __ballot() should be safe even in the presence of inactive threads at either end of the array being searched.
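One caveat, as an aside: on CUDA 9.0 and later the warp vote intrinsics take an explicit participation mask, and __ballot() is deprecated in favor of __ballot_sync(). Since the kernel above lets some threads return early, a sketch of the adapted ending might be:
    unsigned int ballot = __ballot_sync(__activemask(), wrong);  // ballot across the threads still active
    if ((threadIdx.x & 31) == 0)
        resultsGPU[start_idx / 32] = ballot;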
I am a novice in GPU parallel computing and I'm trying to learn CUDA by looking at some examples from NVIDIA's "CUDA by Example" book.
I do not properly understand how threads access and change variables in such a simple example (the dot product of two vectors).
The kernel function is defined as follows:
__global__ void dot( float *a, float *b, float *c ) {
    __shared__ float cache[threadsPerBlock];
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int cacheIndex = threadIdx.x;

    float temp = 0;
    while (tid < N) {
        temp += a[tid] * b[tid];
        tid += blockDim.x * gridDim.x;
    }

    // set the cache values
    cache[cacheIndex] = temp;
I do not understand three things.
What is the sequence of execution of this function? Is there any ordering between threads? For example, do the threads from the first block run first, and then the threads from the second block come into play, and so on? (This is connected to the question of why it is necessary to divide threads into blocks.)
Do all threads have their own copy of the "temp" variable or not (and if not, why is there no race condition)?
How does it work? What exactly goes into the variable temp in the while loop? The array cache stores the value of temp for each thread. How does the summation proceed? It seems that temp already contains all the sums needed for the dot product, because the variable tid goes from 0 to N-1 in the while loop.
Although the code you provide is incomplete, here are some clarifications about what you are asking:
The kernel code will be executed by every thread in every block. The way to "split the job" is to make each thread work on only one or a few elements.
For instance, if you have to process 100 integers with a specific algorithm, you probably want 100 threads to process 1 element each.
In CUDA, the number of blocks and threads is defined at kernel launch, on the host side:
myKernel<<<grid, threads>>>(...);
where grid and threads are dim3 values, which define sizes in three dimensions.
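For instance, a launch that processes 100 integers with one thread each might look like this (the kernel and buffer names are just placeholders):
    dim3 grid(1, 1, 1);       // one block
    dim3 threads(100, 1, 1);  // 100 threads in that block, one per element
    myKernel<<<grid, threads>>>(d_data);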
There is no specific order in the execution of threads and blocks. As you can read here:
http://mc.stanford.edu/cgi-bin/images/3/34/Darve_cme343_cuda_3.pdf
On page 6: "No specific order in which blocks are dispatched and executed".
Since the temp variable is declared in the kernel with no memory-space qualifier, it is not shared: each thread has its own copy, typically stored in a register.
This is equivalent to what happens on the CPU side. So yes, each thread has its own "temp" variable.
The temp variable is updated in each iteration of the loop, using accesses to the device arrays.
Again, this is equivalent to what happens on the CPU side.
I think you should probably check whether you are comfortable enough with C/C++ programming on the CPU side before going further into GPU programming. No offense meant, but it seems you are missing some fundamentals.
Since CUDA allows you to drive your GPU with C code, the difficulty is not in the syntax, but in the specificities of the hardware.
I present some code here:
#include <cuda_runtime.h>
#include <cstring>

__constant__ int array[1024];

__global__ void kernel1(int *d_dst) {
    int tId = threadIdx.x + blockIdx.x * blockDim.x;
    d_dst[tId] = array[tId];
}

__global__ void kernel2(int *d_dst, int *d_src) {
    int tId = threadIdx.x + blockIdx.x * blockDim.x;
    d_dst[tId] = d_src[tId];
}

int main(int argc, char **argv) {
    int *d_array;
    int *d_src;
    cudaMalloc((void**)&d_array, sizeof(int) * 1024);
    cudaMalloc((void**)&d_src, sizeof(int) * 1024);

    int *test = new int[1024];
    memset(test, 0, sizeof(int) * 1024);
    for (int i = 0; i < 1024; i++) {
        test[i] = 100;
    }

    cudaMemcpyToSymbol(array, test, sizeof(int) * 1024);
    kernel1<<< 1, 1024 >>>(d_array);

    cudaMemcpy(d_src, test, sizeof(int) * 1024, cudaMemcpyHostToDevice);
    kernel2<<< 1, 32 >>>(d_array, d_src);

    delete[] test;
    cudaFree(d_array);
    cudaFree(d_src);
    return 0;
}
This simply shows constant memory and global memory usage. When executed, kernel2 runs about 4 times faster (in terms of time) than kernel1.
I understand from the CUDA C Programming Guide that this is because accesses to constant memory get serialized. That brings me to the idea that constant memory is best utilized when a warp accesses a single constant value, such as an integer, float, double, etc., and that accessing an array is not beneficial at all. In other words, a warp must access a single address in order to get any optimization/speedup gains from constant memory access. Is this correct?
I also want to know: if I keep a structure instead of a simple type in constant memory, is any access to the structure by a thread within a warp considered a single memory access, or more? I mean, a structure might contain multiple simple types and an array, for example; when accessing these simple types, are those accesses also serialized or not?
My last question: in the case where I do have an array with constant values that needs to be accessed by different threads within a warp, should it be kept in global memory instead of constant memory for faster access? Is that correct?
Can anyone refer me to some example code where efficient constant memory usage is shown?
regards,
A warp must access a single address in order to get any optimization/speedup gains from constant memory access. Is this correct?
Yes, this is generally correct, and it is the principal intended usage of constant memory and the constant cache. The constant cache can serve up one quantity per SM "at a time". The precise wording from the programming guide is as follows:
The constant memory space resides in device memory and is cached in the constant cache.
A request is then split into as many separate requests as there are different memory addresses in the initial request, decreasing throughput by a factor equal to the number of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in case of a cache hit, or at the throughput of device memory otherwise.
An important takeaway from the text above is the desire for uniform access across a warp to achieve best performance. If a warp makes a request to __constant__ memory where different threads in the warp are accessing different locations, those requests will be serialized. Therefore if each thread in a warp is accessing the same value:
int i = array[20];
then you will have the opportunity for good benefit from the constant cache/memory. If each thread in a warp is accessing a unique quantity:
int i = array[threadIdx.x];
then the accesses will be serialized, and the constant data usage will be disappointing, performance-wise.
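If it helps to see the two cases side by side, here is a small hedged sketch (the kernel names, array size, and contents are made up):
__constant__ int c_array[1024];

__global__ void uniform_access(int *out)
{
    // every thread in the warp reads the same constant address, so one
    // constant-cache access can be broadcast to the whole warp
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    out[idx] = c_array[20];
}

__global__ void divergent_access(int *out)
{
    // every thread in the warp reads a different constant address, so the
    // requests are serialized and the constant cache gives little benefit
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    out[idx] = c_array[idx & 1023];
}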
I also want to know: if I keep a structure instead of a simple type in constant memory, is any access to the structure by a thread within a warp considered a single memory access, or more?
You can certainly put structures in constant memory. The same rules apply:
int i = constant_struct_ptr->array[20];
has the opportunity to benefit, but
int i = constant_struct_ptr->array[threadIdx.x];
does not. If you access the same simple type structure element across threads, that is ideal for constant cache usage.
My last question: in the case where I do have an array with constant values that needs to be accessed by different threads within a warp, should it be kept in global memory instead of constant memory for faster access? Is that correct?
Yes, if you know that in general your accesses will break the constant-memory rule of one 32-bit quantity per cycle, then you'll probably be better off leaving the data in ordinary global memory.
There are a variety of CUDA sample codes that demonstrate usage of __constant__ data. Here are a few:
graphics volumeRender
imaging bilateralFilter
imaging convolutionTexture
finance MonteCarloGPU
and there are others.
EDIT: responding to a question in the comments, if we have a structure like this in constant memory:
struct Simple { int a; int b; int c; } s;
And we access it like this:
int p = s.a + s.b + s.c;
         ^     ^     ^
         |     |     |
cycle:   1     2     3
We will have good usage of the constant memory/cache. When the C code gets compiled, under the hood it will generate machine code accesses corresponding to 1,2,3 in the diagram above. Let's imagine that access 1 occurs first. Since access 1 is to the same memory location independent of which thread in the warp, during cycle 1, all threads will receive the value in s.a and it will take advantage of the cache for best possible benefit. Likewise for accesses 2 and 3. If on the other hand we had:
struct Simple { int a[32]; int b[32]; int c[32]; } s;
...
int idx = threadIdx.x + blockDim.x * blockIdx.x;
int p = s.a[idx] + s.b[idx] + s.c[idx];
This would not give good usage of constant memory/cache. Instead, if this were typical of our accesses to s, we'd probably have better performance locating s in ordinary global memory.
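In that case, a hypothetical alternative declaration (just a sketch; the name g_s is made up) would place the same data in ordinary global device memory, where the per-thread, coalesced reads are handled by the normal memory path rather than the constant cache:
struct Simple { int a[32]; int b[32]; int c[32]; };
__device__ Simple g_s;   // ordinary global memory instead of __constant__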
I am trying to figure out how well the global memory write accesses of one of my kernels are coalesced, based on the "global store efficiency" value from NVIDIA's profiler (I am using the CUDA 5 toolkit preview release, on a Fermi GPU).
As far as I understand, this value is the ratio of requested memory transactions to the actual number of transactions performed, and therefore reflects whether accesses are all perfectly coalesced (100% efficiency) or not.
Now, for a thread block width of 32, and taking float values as input and output, the following test kernel gives 100% efficiency both for global load and for global store, as expected:
__global__ void dummyKernel(float *output, float *input, size_t pitch)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    int offset = y * pitch + x;
    float tmp = input[offset];
    output[offset] = tmp;
}
What I don't understand is why, when I start adding useful code between the input read and the output write, the global store efficiency begins to drop, even though I have not changed the memory write pattern or the thread block geometry. The global load efficiency stays at 100%, as I expect.
Could someone please shed some light on why this happens? I thought that since all 32 threads in a given warp execute the output store instruction simultaneously (by definition), using a "coalescing-friendly" pattern, I should still get 100% whatever I do beforehand, but obviously I must be misunderstanding something about either the meaning of global store efficiency or the conditions for global store coalescing.
Thx,
EDIT:
Here is an example: if I use this code (just adding a "round" operation on the input), global store efficiency drops from 100% to 95%:
__global__ void dummyKernel(float *output, float *input, size_t pitch)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    int offset = y * pitch + x;
    float tmp = round(input[offset]);
    output[offset] = tmp;
}
I'm not sure this is the case, but round() probably converts its argument to a double, and if there is register spilling, then each thread would access 8 bytes of memory, which would then be coerced into the 4 bytes of tmp. Accessing 8 bytes would reduce the coalescing to a half-warp.
However, I believe register spilling shouldn't happen, since the number of local variables in your kernel is small. You could check for spills with nvcc --ptxas-options=-v.
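If you wanted to rule out the float-to-double promotion that round() implies, one quick experiment (just a sketch mirroring the kernel above) is to use the single-precision roundf() instead:
__global__ void dummyKernelF(float *output, float *input, size_t pitch)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    int offset = y * pitch + x;
    output[offset] = roundf(input[offset]);   // stays in single precision throughout
}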
OK, shame on me, I found the problem: I was profiling this simple test code in Debug mode, which gives completely wild numbers for most metrics. Re-profiling in Release mode gave me the expected result: 100% store efficiency in both cases.
I have a short piece of code like this:
typedef struct {
    double sX;
    double sY;
    double vX;
    double vY;
    int rX;
    int rY;
    int mass;
    int species;
    int boxnum;
} particle;

typedef struct {
    double mX;
    double mY;
    double count;
    int rotDir;
    double cX;
    double cY;
    int superDir;
} box;
//....
int i;
for (i = 0; i < PART_COUNT; i++) {
    particles[i].boxnum = ((((int)(particles[i].sX + boxShiftX)) / BOX_SIZE) % BWIDTH
                           + BWIDTH * ((((int)(particles[i].sY + boxShiftY)) / BOX_SIZE) % BHEIGHT));
}

for (i = 0; i < PART_COUNT; i++) {
    // sum the momenta
    int boxnum = particles[i].boxnum;
    boxes[boxnum].mX += particles[i].vX * particles[i].mass;
    boxes[boxnum].mY += particles[i].vY * particles[i].mass;
    boxes[boxnum].count++;
}
Now, I want to port this to CUDA. The first step is easy; spreading the calculation across a bunch of threads is no problem. The issue is the second loop. Since any two particles are equally likely to be in the same box, I'm not sure how I can partition the work so as to avoid conflicts.
The number of particles is on the order of 10,000 to 10,000,000, and the number of boxes is on the order of 1024 to 1048576.
Ideas?
You can try using atomicAdd operations to modify your boxes array. Atomic operations on global memory are quite slow, but at the same time it is pretty much impossible to do any optimization involving shared memory here, for two reasons:
Under the assumption that the boxnum properties of the particles particles[0]..particles[n] aren't ordered and do not lie within any small range (on the order of a block size), you can't predict which boxes to load from global memory into shared memory. You would first have to collect all the box numbers.
If you try to collect all the box numbers, you can't use an array indexed by every possible box number, since there are far too many boxes to fit into shared memory. So you'd have to collect indices with a queue (realized with an array, a pointer to the next free slot, and atomic operations), but then you'd still have conflicts because the same box number could occur multiple times in your queue.
Conclusion: atomicAdd will at least give you correct behavior. Try it out and test the performance. If you aren't satisfied with the performance, think about whether there's another way to do the same computations that would profit from shared memory.
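A minimal sketch of that approach, assuming the particle and box structs from the question and one thread per particle (note that atomicAdd on double requires compute capability 6.0 or later; on older hardware you would need float accumulators or an atomicCAS loop):
__global__ void accumulateBoxes(const particle *particles, box *boxes, int partCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= partCount) return;

    int boxnum = particles[i].boxnum;
    atomicAdd(&boxes[boxnum].mX, particles[i].vX * particles[i].mass);
    atomicAdd(&boxes[boxnum].mY, particles[i].vY * particles[i].mass);
    atomicAdd(&boxes[boxnum].count, 1.0);   // count is declared as double in the struct
}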
As an alternative, you could launch a 2D grid of blocks.
blocks.x = numParticles / threadsPerBlock / repeatPerBlock;
blocks.y = numOfBoxes / 1024;
Each block performs atomic additions in shared memory if and only if boxnum lies between 1024 * blockIdx.y and 1024 * (blockIdx.y + 1).
This is followed by a reduction along blocks.x.
This may or may not be faster than atomicAdd on global memory, since the particle data is read blocks.y times. This could, however, be fixed if the particles are sorted by boxnum in a sorting pass followed by a partitioning pass.
There may be several other ways to do it, but since the problem size varies by a large amount, you may end up having to write 2-3 different methods that are optimized for a given size range.
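Here is a hedged sketch of that 2D-grid idea, reduced to counting particles per box just to keep it short (momenta would be accumulated the same way); the array name boxCounts is made up, and the per-block partials are combined with global atomics rather than a separate reduction pass:
__global__ void countPerBox(const particle *particles, int *boxCounts, int partCount)
{
    __shared__ int localCount[1024];
    for (int i = threadIdx.x; i < 1024; i += blockDim.x)
        localCount[i] = 0;
    __syncthreads();

    // this block only handles boxes in [boxBase, boxBase + 1024)
    int boxBase = 1024 * blockIdx.y;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < partCount;
         i += blockDim.x * gridDim.x) {
        int b = particles[i].boxnum;
        if (b >= boxBase && b < boxBase + 1024)
            atomicAdd(&localCount[b - boxBase], 1);
    }
    __syncthreads();

    // flush the per-block partial counts to global memory
    for (int i = threadIdx.x; i < 1024; i += blockDim.x)
        atomicAdd(&boxCounts[boxBase + i], localCount[i]);
}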
I am playing around with CUDA.
At the moment I have a problem. I am testing a large array for particular responses, and when I get a response, I have to copy the corresponding data into another array.
For example, my test array of six elements looks like this:
[ ][ ][v1][ ][ ][v2]
The result must look like this:
[v1][v2]
The problem is: how do I calculate the address in the second array at which to store the result? All elements of the first array are checked in parallel.
I am thinking of declaring a device variable int addr = 0. Every time I find a response, I would increment addr. But I am not sure about this, because it means that addr may be accessed by multiple threads at the same time. Will that cause problems? Or will a thread wait until another thread finishes using that variable?
It is not as trivial as it seems. I just finished implementing one, and I can tell you what you need:
read the scan article in GPU Gems 3, in particular chapter 39.3.1, Stream Compaction.
To implement your own, start from the LargeArrayScan example in the SDK; that will give you just the prescan. Assuming you have in device memory a selection array dev_selection_array (an array of 1s and 0s, where 1 means select and 0 means discard), an array dev_elements_array of the elements to be selected, a dev_prescan_array, and a dev_result_array, all of size N, then you do:
prescan(dev_prescan_array, dev_selection_array, N);
scatter(dev_result_array, dev_prescan_array,
        dev_selection_array, dev_elements_array, N);
where the scatter is
template <typename T>
__global__ void scatter_kernel(T *dev_result_array,
                               const T *dev_prescan_array,
                               const T *dev_selection_array,
                               const T *dev_elements_array, std::size_t size)
{
    unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= size) return;
    if (dev_selection_array[idx] == 1) {
        dev_result_array[dev_prescan_array[idx]] = dev_elements_array[idx];
    }
}
For other nice applications of the prescan, see the paper [Ble93].
Have fun!
You're talking about classic stream compaction. Generally I would recommend looking at Thrust or CUDPP (those links go to the compaction documentation). Both of these are open source; if you want to roll your own, then I would also suggest looking at the 'scan' SDK sample.
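As a rough illustration of the Thrust route, here is a hedged sketch using thrust::copy_if (the predicate and the sample data are made up to match the [ ][ ][v1][ ][ ][v2] example above):
#include <thrust/device_vector.h>
#include <thrust/copy.h>

struct is_found
{
    __host__ __device__ bool operator()(int x) const { return x > 0; }
};

int main()
{
    int h_data[6] = {0, 0, 7, 0, 0, 9};                   // [ ][ ][v1][ ][ ][v2]
    thrust::device_vector<int> input(h_data, h_data + 6);
    thrust::device_vector<int> output(input.size());

    // copy_if compacts the selected elements to the front of output
    thrust::device_vector<int>::iterator end =
        thrust::copy_if(input.begin(), input.end(), output.begin(), is_found());
    int numFound = end - output.begin();                   // 2 in this example
    return numFound == 2 ? 0 : 1;
}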