How to use CUDA to create an array of indices of matches from another array? With atomics?

I have an array on the device that contains only 0s and 1s, like [0,1,0,0,1,...]. I want to create a new array holding the indices of the 1s.
I think it needs an atomic, but I have no idea how to implement it.

This can be achieved with stream compaction. With Thrust it could look like this:
#include <thrust/device_vector.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/copy.h>
#include <thrust/execution_policy.h>
#include <thrust/distance.h>
#include <iostream>
#include <vector>

// Predicate that selects elements equal to 1
struct IsOne {
    __host__ __device__
    bool operator()(int i) const {
        return i == 1;
    }
};

int main() {
    std::vector<int> h_array{1, 0, 1, 1, 0};
    thrust::device_vector<int> d_array = h_array;
    thrust::device_vector<int> d_indicesOfOnes(d_array.size());

    // copy_if with a stencil: index i is copied whenever d_array[i] == 1
    auto end = thrust::copy_if(
        thrust::device,
        thrust::make_counting_iterator(0),
        thrust::make_counting_iterator(static_cast<int>(d_array.size())),
        d_array.begin(),
        d_indicesOfOnes.begin(),
        IsOne{}
    );

    int numIndices = thrust::distance(d_indicesOfOnes.begin(), end);
    for (int i = 0; i < numIndices; i++) {
        std::cout << d_indicesOfOnes[i] << "\n";
    }
}
Output:
0
2
3

So you want to find the indices of all the 1s, producing an output array whose length is the count of ones?
Almost certainly better to work in private chunks than to have all threads (whatever you call them in CUDA) serialized by incrementing a shared atomic output position. That strategy would be disastrously slow, getting no parallelism except when scanning through 0s. Maybe you had some better idea for how to use atomic, but you didn't mention it.
Have each "thread" count 1s in its chunk, then prefix-sum those results across chunks to find the starting point in the output array for each chunk.
Then (after all earlier chunks are done) copy its local array of indices to the right position in the final shared array.
Collecting results between chunks might be done with atomics, but I think you'd want an array of start positions rather than serializing things with a single atomic that hands out only one chunk start point at a time.
IDK whether it would be faster to generate a local array of indices during the first pass as you count, or to re-scan the original array to generate indices. That would let the first pass just be a simple sum (which is the same thing as counting the matches in this case). If 1s are very rare, writing a temporary array and copying it might be good. If they're common, keeping the first pass simple and light-weight is probably good, not costing a lot more memory access / cache footprint.
The overall problem is somewhat similar to the parallel prefix-sum problem for dependencies between chunks, but it's the starting position that isn't known here, instead of what offset you need to add.
The prefix-sum part of what I'm suggesting is just over chunks, one value per thread, so this step doesn't have to do a lot of work between waiting for earlier chunks and starting up the 2nd phase.
I've never actually used CUDA so I'm not going to attempt code, but this is how you can parallelize the dependencies inherent in this problem in a way that's probably friendly for what GPUs can do.
It would work well on a multi-core CPU, where you'd maybe have an array of atomic<ssize_t> output counts / positions. Perhaps it starts zero-initialized, and when a thread finishes its chunk it writes its count to the array biased by 1 so it's non-zero, or perhaps chunk_counts[thread] = -(count+1); so it's definitely non-zero and negative. Then it checks whether it's the first thread, or whether chunk_counts[thread-1] is non-zero and positive (else .wait(0) on it?), and if so writes chunk_counts[thread] = chunk_counts[thread-1] + count;. Actually you wouldn't need to store both the negative local count and the positive prefix-summed count, just one or the other depending on how you collect.
And on a CPU it's probably better to have one thread responsible for collecting the results, prefix-summing, and notifying all the other threads to wake again, instead of serializing with a chain where each thread waits for the previous thread to write its value. Perhaps the same holds on a GPU. That one collector thread can notify workers to start phase 2 as it goes along in the prefix sum.
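For concreteness, here is a rough CUDA sketch of that count / prefix-sum / scatter idea (my own illustration, not from the answer above; kernel and variable names are made up, and the per-block shared atomic in phase 2 means indices come out unordered within a block; a block-level scan would preserve order):
// Phase 1: each block counts the 1s in its chunk of the input.
__global__ void countOnes(const int* in, int n, int* blockCounts)
{
    __shared__ int count;
    if (threadIdx.x == 0) count = 0;
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && in[i] == 1) atomicAdd(&count, 1);
    __syncthreads();

    if (threadIdx.x == 0) blockCounts[blockIdx.x] = count;
}

// Phase 2: after an exclusive prefix sum over blockCounts (e.g. thrust::exclusive_scan)
// produces blockOffsets, each block writes its indices starting at its own offset.
__global__ void writeIndices(const int* in, int n, const int* blockOffsets, int* outIndices)
{
    __shared__ int pos;
    if (threadIdx.x == 0) pos = blockOffsets[blockIdx.x];
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && in[i] == 1)
        outIndices[atomicAdd(&pos, 1)] = i;   // unordered within a block
}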

Related

Does CUDA automatically convert float4 arrays into a struct of arrays?

I have the following snippet of code:
#include <stdio.h>

struct Nonsense {
    float3 group;
    float other;
};

__global__ void coalesced(float4* float4Array, Nonsense* nonsenseArray) {
    float4 someCoordinate = float4Array[threadIdx.x];
    someCoordinate.x = 5;
    float4Array[threadIdx.x] = someCoordinate;

    Nonsense nonsenseValue = nonsenseArray[threadIdx.x];
    nonsenseValue.other = 3;
    nonsenseArray[threadIdx.x] = nonsenseValue;
}

int main() {
    float4* float4Array;
    cudaMalloc(&float4Array, 32 * sizeof(float4));
    cudaMemset(float4Array, 0, 32 * sizeof(float4));

    Nonsense* nonsenseArray;
    cudaMalloc(&nonsenseArray, 32 * sizeof(Nonsense));
    cudaMemset(nonsenseArray, 0, 32 * sizeof(Nonsense));

    coalesced<<<1, 32>>>(float4Array, nonsenseArray);
    cudaDeviceSynchronize();
    return 0;
}
When I run this through the Nvidia profiler in Nsight and look at the Global Memory Access Pattern, the float4Array has perfectly coalesced reads and writes. Meanwhile, the Nonsense array has a poor access pattern (due to it being an array of structs).
Does NVCC automatically convert a float4 array which conceptually is an array of structs into a struct of array for better memory access patterns?
No, it does not convert it into a struct of arrays. I think if you think about this carefully, you would conclude that it is nearly impossible for the compiler to reorganize data this way. After all, the thing that is being passed is a pointer.
There is only one array, and the elements of that array still have the struct elements in the same order:
float address (i.e. index): 0 1 2 3 4 5 ...
array element : a[0].x a[0].y a[0].z a[0].w a[1].x a[1].y ...
However the float4 array gives a better pattern because the compiler generates a single 16-byte load per thread. This is sometimes referred to as a "vector load" because we are loading a vector (float4 in this case) per thread. Therefore, adjacent threads are still reading adjacent data, and you have ideal coalescing behavior. In the above example, thread 0 would read a[0].x, a[0].y, a[0].z and a[0].w, thread 1 would read a[1].x, a[1].y etc. All of this would take place in a single request (i.e. SASS instruction) but may be split across multiple transactions. The splitting of a request into multiple transactions does not result in any loss of efficiency (in this case).
In the case of the Nonsense struct, the compiler does not recognize that that struct could also be loaded in a similar fashion, so under the hood it must generate 3 or 4 loads per thread:
one 8-byte load (or two 4-byte loads) to load the first two words of the float3 group
one 4-byte load to load the last word of the float3 group
one 4-byte load to load the float other
If you map out the above loads per thread, perhaps using the above diagram, you will see that each load involves a stride (unused elements between the items loaded per thread) and so results in lower efficiency.
By using careful typecasting or a union definition in your struct, you can get the compiler to load your Nonsense struct in a single load.
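As a rough illustration of that idea (not from the original answer; it relies on sizeof(Nonsense) being 16 bytes and on the cudaMalloc'd array being 16-byte aligned):
// Illustrative only: load/store the whole 16-byte struct through a float4 so the
// compiler emits a single vector load per thread, then unpack the fields.
__global__ void coalescedNonsense(Nonsense* nonsenseArray) {
    float4 raw = reinterpret_cast<float4*>(nonsenseArray)[threadIdx.x];
    Nonsense nonsenseValue;
    nonsenseValue.group = make_float3(raw.x, raw.y, raw.z);
    nonsenseValue.other = raw.w;

    nonsenseValue.other = 3;

    reinterpret_cast<float4*>(nonsenseArray)[threadIdx.x] =
        make_float4(nonsenseValue.group.x, nonsenseValue.group.y,
                    nonsenseValue.group.z, nonsenseValue.other);
}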
This answer also covers some ideas related to AoS -> SoA conversion and the related efficiency gains.
This answer covers vector load details.

how can a __global__ function RETURN a value or BREAK out like C/C++ does

Recently I've been doing string comparison jobs in CUDA, and I wonder how a __global__ function can return a value when it finds the exact string that I'm looking for.
I mean, I need the __global__ function, which uses a great number of threads, to search for a certain string in a big string pool simultaneously, and I hope that once the exact string is caught, the __global__ function can stop all the threads and return to the main function, telling me "he did it"!
I'm using CUDA C. How can I possibly achieve this?
There is no way in CUDA (or on NVIDIA GPUs) for one thread to interrupt execution of all running threads. You can't have immediate exit of the kernel as soon as a result is found, it's just not possible today.
But you can have all threads exit as soon as possible after one thread finds a result. Here's a model of how you would do that.
__global__ void kernel(volatile bool *found, ...)
{
    while (!(*found) && workLeftToDo()) {
        bool iFoundIt = do_some_work(...); // see notes below
        if (iFoundIt) *found = true;
    }
}
Some notes on this.
Note the use of volatile. This is important.
Make sure you initialize found—which must be a device pointer—to false before launching the kernel!
Threads will not exit instantly when another thread updates found. They will exit only the next time they return to the top of the while loop.
How you implement do_some_work matters. If it is too much work (or too variable), then the delay to exit after a result is found will be long (or variable). If it is too little work, then your threads will be spending most of their time checking found rather than doing useful work.
do_some_work is also responsible for allocating tasks (i.e. computing/incrementing indices), and how you do that is problem specific.
If the number of blocks you launch is much larger than the maximum occupancy of the kernel on the present GPU, and a match is not found in the first running "wave" of thread blocks, then this kernel (and the one below) can deadlock. If a match is found in the first wave, then later blocks will only run after found == true, which means they will launch, then exit immediately. The solution is to launch only as many blocks as can be resident simultaneously (aka "maximal launch"), and update your task allocation accordingly.
If the number of tasks is relatively small, you can replace the while with an if and run just enough threads to cover the number of tasks. Then there is no chance for deadlock (but the first part of the previous point applies).
workLeftToDo() is problem-specific, but it would return false when there is no work left to do, so that we don't deadlock in the case that no match is found.
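Following up on the initialization note above, a minimal host-side setup might look like this (a sketch of my own, not part of the original answer; the launch configuration and extra kernel arguments are placeholders):
bool *d_found;
cudaMalloc(&d_found, sizeof(bool));
cudaMemset(d_found, 0, sizeof(bool));                 // initialize *found to false

kernel<<<numBlocks, blockSize>>>(d_found /*, ... */); // size this as a "maximal launch", see above

bool h_found = false;
cudaMemcpy(&h_found, d_found, sizeof(bool), cudaMemcpyDeviceToHost);  // implicitly synchronizes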
Now, the above may result in excessive partition camping (all threads banging on the same memory), especially on older architectures without L1 cache. So you might want to write a slightly more complicated version, using a shared status per block.
__global__ void kernel(volatile bool *found, ...)
{
    volatile __shared__ bool someoneFoundIt;

    // initialize shared status
    if (threadIdx.x == 0) someoneFoundIt = *found;
    __syncthreads();

    while (!someoneFoundIt && workLeftToDo()) {
        bool iFoundIt = do_some_work(...);

        // if I found it, tell everyone they can exit
        if (iFoundIt) { someoneFoundIt = true; *found = true; }

        // if someone in another block found it, tell
        // everyone in my block they can exit
        if (threadIdx.x == 0 && *found) someoneFoundIt = true;

        __syncthreads();
    }
}
This way, one thread per block polls the global variable, and only threads that find a match ever write to it, so global memory traffic is minimized.
Aside: __global__ functions are void because it's difficult to define how to return values from 1000s of threads into a single CPU thread. It is trivial for the user to contrive a return array in device or zero-copy memory which suits his purpose, but difficult to make a generic mechanism.
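As a concrete illustration of that aside (mine, not the answer author's), a result flag in zero-copy (mapped, pinned) memory could be set up like this:
bool *h_found, *d_found;
cudaHostAlloc(&h_found, sizeof(bool), cudaHostAllocMapped);  // pinned + mapped host allocation
*h_found = false;
cudaHostGetDevicePointer(&d_found, h_found, 0);

kernel<<<numBlocks, blockSize>>>(d_found /*, ... */);
cudaDeviceSynchronize();
// *h_found can now be read directly on the host, no explicit cudaMemcpy needed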
Disclaimer: Code written in browser, untested, unverified.
If you feel adventurous, an alternative approach to stopping kernel execution would be to just execute
// (write result to memory here)
__threadfence();
asm("trap;");
if an answer is found.
This doesn't require polling memory, but it is inferior to the solution that Mark Harris suggested in that it makes the kernel exit with an error condition. This may mask actual errors (so be sure to write out your results in a way that clearly allows you to tell a successful execution from an error), and it may cause other hiccups or decrease overall performance as the driver treats this as an exception.
If you look for a safe and simple solution, go with Mark Harris' suggestion instead.
The __global__ function doesn't really contain a great number of threads the way you might think. It is simply a kernel, a function that runs on the device, that is called with launch parameters specifying the thread model. The model that CUDA employs is a 2D grid model with a 3D thread model inside each block on the grid.
With the type of problem you have, it is not really necessary to use anything besides a 1D grid with 1D blocks of threads, because the string pool doesn't really make sense to split into 2D like other problems (e.g. matrix multiplication).
I'll walk through a simple example of, say, 100 strings in the string pool that you want checked in a parallelized fashion instead of sequentially.
//main
//Should cudaMalloc and cudaMemcpy the device data before this code
dim3 dimGrid(10, 1);   // 1D grid with 10 blocks
dim3 dimBlocks(10, 1); // 1D blocks with 10 threads each
fun<<<dimGrid, dimBlocks>>>(d_strings, d_stringToMatch, d_answerIdx); // placeholder device pointers set up above
//cudaMemcpy answerIdx back to an integer on the host

//kernel (not positive on these types as my CUDA is very rusty)
__global__ void fun(char *strings[], char *stringToMatch, int *answerIdx)
{
    int idx = blockIdx.x * 10 + threadIdx.x;

    //Obviously use whatever function you've been using for string comparison
    //I'm just using == for example's sake
    if (strings[idx] == stringToMatch)
    {
        *answerIdx = idx;
    }
}
This is obviously not the most efficient version, and it is most likely not the exact way to pass parameters and work with memory in CUDA, but I hope it gets the point across of splitting the workload, and that __global__ functions get executed on many different cores, so you can't really tell them all to stop. There may be a way I'm not familiar with, but the speed-up you will get by just dividing the workload onto the device (in a sensible fashion, of course) will already give you a tremendous performance improvement. To get a sense of the thread model I highly recommend reading up on the documents on Nvidia's site for CUDA. They will help tremendously and teach you the best way to set up the grid and blocks for optimal performance.

How best to transfer a large number of arrays of chars to the GPU?

I am new to CUDA and am trying to do some processing of a large number of arrays. Each array is an array of about 1000 chars (not a string, just stored as chars) and there can be up to 1 million of them, so about 1 GB of data to be transferred. This data is already all loaded into memory and I have a pointer to each array, but I don't think I can rely on all the data being sequential in memory, so I can't just transfer it all with one call.
I currently made a first go at it with thrust, and based my solution kind of on this message ... I made a struct with a static call that allocates all the memory, and then each individual constructor copies that array, and I have a transform call which takes in the struct with the pointer to the device array.
My problem is that this is obviously extremely slow, since each array is copied individually. I'm wondering how to transfer this data faster.
In this question (the question is mostly unrelated, but I think the user is trying to do something similar) talonmies suggests that they try and use a zip iterator but I don't see how that would help transfer a large number of arrays.
I also just found out about cudaMemcpy2DToArray and cudaMemcpy2D while writing this question, so maybe those are the answer, but I don't see immediately how these would work, since neither seem to take pointers to pointers as input...
Any suggestions are welcome...
One way to do this is as marina.k suggested, batching your transfers only as you need them. Since you said each array only contains about 1000 chars, you could assign each char to a thread (since on Fermi we can allocate 1024 threads per block) and have each array handled by one block. In this case you may be able to transfer all the arrays for one "round" in one call. Can you use a FORTRAN-style layout, where you make one gigantic array, so that to get the 5th element of the "third" 1000-char array you would go:
third_array[5] = big_array[5 + 2*1000]
so that the first 1000 char array makes up the first 1000 elements of big_array, the second 1000 char array makes up the second 1000 elements of big_array, etc. ? In this case your chars would be continuous in memory and you could move the set you were going to process with one kernel launch in only one memcpy. Then as soon as you launch one kernel, you refill big_array on the CPU side and copy it asynchronously to the GPU.
Within each kernel, you could simply handle each array within 1 block, so that block N handles the (N-1)-thousandth element up to the N-thousandth of d_big_array (where you copied all those chars to).
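A rough host-side sketch of that packing idea (my own illustration, not from the answer; h_ptrs stands in for the existing per-array pointers mentioned in the question):
#include <cuda_runtime.h>
#include <cstring>

// Gather the ~1000-char arrays into one contiguous pinned buffer, then move the
// whole thing to the device with a single cudaMemcpy instead of one copy per array.
char* packAndUpload(char** h_ptrs, size_t numArrays, size_t arrayLen)
{
    char *h_big;
    cudaMallocHost(&h_big, numArrays * arrayLen);             // pinned host buffer
    for (size_t a = 0; a < numArrays; ++a)
        memcpy(h_big + a * arrayLen, h_ptrs[a], arrayLen);

    char *d_big;
    cudaMalloc(&d_big, numArrays * arrayLen);
    cudaMemcpy(d_big, h_big, numArrays * arrayLen, cudaMemcpyHostToDevice);  // one big transfer

    cudaFreeHost(h_big);
    return d_big;   // element j of array a is at d_big[a * arrayLen + j]
}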
Did you try pinned memory? This may provide a considerable speed-up on some hardware configurations.
Try async: you can assign the same job to different streams, with each stream processing a small part of the data, overlapping transfer and computation.
Here is code:
for (int i = 0; i < nStreams; ++i) {
    cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size, size,
                    cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>(outputDevPtr + i * size, inputDevPtr + i * size, size);
    cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size, size,
                    cudaMemcpyDeviceToHost, stream[i]);
}
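For the copies above to actually overlap with the kernel, the host buffer needs to be page-locked and the streams need to be created first; a minimal setup sketch (illustrative values and names, not from the original answer) might be:
const int nStreams = 4;                    // illustrative number of chunks/streams
cudaStream_t stream[nStreams];
for (int i = 0; i < nStreams; ++i)
    cudaStreamCreate(&stream[i]);

char *hostPtr;
cudaMallocHost(&hostPtr, nStreams * size); // pinned memory, required for async overlap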

Very poor memory access performance with CUDA

I'm very new to CUDA, and trying to write a test program.
I'm running the application on GeForce GT 520 card, and get VERY poor performance.
The application is used to process some image, with each row being handled by a separate thread.
Below is a simplified version of the application. Please note that in the real application, all constants are actually variables, provided by the caller.
When running the code below, it takes more than 20 seconds to complete the execution.
But when l_SrcIntegral is defined as a local array instead of being allocated with malloc/free (as in the commented line), execution takes less than 1 second.
Since the actual size of the array is dynamic (and not 1700), this local array can't be used in the real application.
Any advice how to improve the performance of this rather simple code would be appreciated.
#include "cuda_runtime.h"
#include <stdio.h>
#define d_MaxParallelRows 320
#define d_MinTreatedRow 5
#define d_MaxTreatedRow 915
#define d_RowsResolution 1
#define k_ThreadsPerBlock 64
__global__ void myKernel(int Xi_FirstTreatedRow)
{
int l_ThreadIndex = blockDim.x * blockIdx.x + threadIdx.x;
if (l_ThreadIndex >= d_MaxParallelRows)
return;
int l_Row = Xi_FirstTreatedRow + (l_ThreadIndex * d_RowsResolution);
if (l_Row <= d_MaxTreatedRow) {
//float l_SrcIntegral[1700];
float* l_SrcIntegral = (float*)malloc(1700 * sizeof(float));
for (int x=185; x<1407; x++) {
for (int i=0; i<1700; i++)
l_SrcIntegral[i] = i;
}
free(l_SrcIntegral);
}
}
int main()
{
cudaError_t cudaStatus;
cudaStatus = cudaSetDevice(0);
int l_ThreadsPerBlock = k_ThreadsPerBlock;
int l_BlocksPerGrid = (d_MaxParallelRows + l_ThreadsPerBlock - 1) / l_ThreadsPerBlock;
int l_FirstRow = d_MinTreatedRow;
while (l_FirstRow <= d_MaxTreatedRow) {
printf("CUDA: FirstRow=%d\n", l_FirstRow);
fflush(stdout);
myKernel<<<l_BlocksPerGrid, l_ThreadsPerBlock>>>(l_FirstRow);
cudaDeviceSynchronize();
l_FirstRow += (d_MaxParallelRows * d_RowsResolution);
}
printf("CUDA: Done\n");
return 0;
}
1.
As #aland said, you will maybe even encounter worse performance calculating just one row in each kernel call.
You have to think about processing the whole input, just to theoretically use the power of the massive parallel processing.
Why start multiple kernels with just 320 threads just to calculate one row?
How about using as many blocks as you have rows and letting the threads of each block process one row?
(320 threads per block is not a good choice, check out how to reach better occupancy)
2.
If your fast resources such as registers and shared memory are not enough, you have to use a tile approach, which is one of the basics of GPGPU programming.
Separate the input data into tiles of equal size and process them in a loop in your thread.
Here I posted an example of such a tile approach:
Parallelization in CUDA, assigning threads to each column
Be aware of range checks in that tile approach!
Example to give you the idea:
Calculate the sum of all elements in a column vector in an arbitrary sized matrix.
Each block processes one column, and in a tile loop the threads of that block store their elements in a shared memory array. When a tile is loaded, they calculate its sum using parallel reduction, and then start the next iteration.
At the end each block calculated the sum of its vector.
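A rough sketch of that column-sum idea (my own illustration, slightly simplified: each thread keeps a running partial sum in a register and a single reduction is done at the end; it assumes a row-major rows x cols matrix, one block per column, and a power-of-two block size):
__global__ void columnSums(const float* mat, float* colSum, int rows, int cols)
{
    extern __shared__ float tile[];                 // blockDim.x floats, sized at launch
    int col = blockIdx.x;

    // tile loop: each thread strides down its block's column
    float partial = 0.0f;
    for (int r = threadIdx.x; r < rows; r += blockDim.x)
        partial += mat[r * cols + col];
    tile[threadIdx.x] = partial;
    __syncthreads();

    // parallel reduction in shared memory
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) tile[threadIdx.x] += tile[threadIdx.x + s];
        __syncthreads();
    }

    if (threadIdx.x == 0) colSum[col] = tile[0];
}

// launch: one block per column; the third argument is the dynamic shared-memory size in bytes
// columnSums<<<cols, 256, 256 * sizeof(float)>>>(d_mat, d_colSum, rows, cols);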
You can still use dynamic array sizes using shared memory. Just pass a third argument in the <<<...>>> of the kernel call. That'd be the size of your shared memory per block.
Once you're there, just bring all relevant data into your shared array (you should still try to keep coalesced accesses) bringing one or several (if it's relevant to keep coalesced accesses) elements per thread. Sync threads after it's been brought (only if you need to stop race conditions, to make sure the whole array is in shared memory before any computation is done) and you're good to go.
Also: you should tessellate using blocks and threads, not loops. I understand that's just an example using a local array, but still, it could be done tessellating through blocks/threads and not nested for loops (which are VERY bad for performance!) I hope you're running your sample code using just 1 block and 1 thread, otherwise it wouldn't make much sense.

Cumulative sum in two dimensions on array in nested loop -- CUDA implementation?

I have been thinking of how to perform this operation on CUDA using reductions, but I'm a bit at a loss as to how to accomplish it. The C code is below. The important part to keep in mind -- the variable precalculatedValue depends on both loop iterators. Also, the variable ngo is not unique to every value of m... e.g. m = 0,1,2 might have ngo = 1, whereas m = 4,5,6,7,8 could have ngo = 2, etc. I have included sizes of loop iterators in case it helps to provide better implementation suggestions.
// macro that translates 2D [i][j] array indices to 1D flattened array indices
#define idx(i,j,lda) ( (j) + ((i)*(lda)) )

int Nobs = 60480;
int NgS  = 1859;
int NgO  = 900;

// ngo goes from [1,900]
// rInd is initialized (and filled earlier) as:
// rInd = new long int [Nobs];

for (m = 0; m < Nobs; m++) {
    ngo = rInd[m] - 1;
    for (n = 0; n < NgS; n++) {
        Aggregation[idx(n,ngo,NgO)] += precalculatedValue;
    }
}
In a previous case, when precalculatedValue was only a function of the inner loop variable, I saved the values in unique array indices and added them with a parallel reduction (Thrust) after the fact. However, this case has me stumped: the values of m are not uniquely mapped to the values of ngo. Thus, I don't see a way of making this code efficient (or even workable) to use a reduction on. Any ideas are most welcome.
Just a stab...
I suspect that transposing your loops might help.
for (n = 0; n < NgS; n++) {
    for (m = 0; m < Nobs; m++) {
        ngo = rInd[m] - 1;
        Aggregation[idx(n,ngo,NgO)] += precalculatedValue(m,n);
    }
}
The reason I did this is because idx varies more rapidly with ngo (function of m) than with n, so making m the inner loop improves coherence. Note I also made precalculatedValue a function of (m, n) because you said that it is -- this makes the pseudocode clearer.
Then, you could start by leaving the outer loop on the host, and making a kernel for the inner loop (60,480-way parallelism is enough to fill most current GPUs).
In the inner loop, just start by using an atomicAdd() to handle collisions. If they are infrequent, on Fermi GPUs performance shouldn't be too bad. In any case, you will be bandwidth bound since arithmetic intensity of this computation is low. So once this is working, measure the bandwidth you are achieving, and compare to the peak for your GPU. If you are way off, then think about optimizing further (perhaps by parallelizing the outer loop -- one iteration per thread block, and do the inner loop using some shared memory and thread cooperation optimizations).
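A bare-bones sketch of that inner-loop kernel (my own illustration; it assumes precalculatedValue(m, n) has been materialized into a device array precalc laid out with the same idx macro, and that Aggregation holds floats so atomicAdd applies):
// One thread per m; collisions on the same (n, ngo) cell are resolved by atomicAdd.
__global__ void aggregateOneN(const long int* rInd, const float* precalc,
                              float* Aggregation, int n, int Nobs, int NgS, int NgO)
{
    int m = blockIdx.x * blockDim.x + threadIdx.x;
    if (m < Nobs) {
        int ngo = (int)rInd[m] - 1;
        atomicAdd(&Aggregation[idx(n, ngo, NgO)], precalc[idx(m, n, NgS)]);
    }
}

// host side, with the outer loop left on the CPU as suggested:
// for (int n = 0; n < NgS; n++)
//     aggregateOneN<<<(Nobs + 255) / 256, 256>>>(d_rInd, d_precalc, d_Agg, n, Nobs, NgS, NgO);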
The key: start simple, measure performance, and then decide how to optimize.
Note that this calculation looks very similar to a histogram calculation, which has similar challenges, so you might want to google for GPU histograms to see how they have been implemented.
One idea is to sort (rInd[m], m) pairs using thrust::sort_by_key() and then (since the rInd duplicates will be grouped together), you can iterate over them and do your reductions without collisions. (This is one way to do histograms.) You could even do this with thrust::reduce_by_key().
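A minimal Thrust sketch of that last idea, for one value of n (illustrative only; d_keys is assumed to hold rInd[m]-1 and d_vals to hold precalculatedValue(m, n) for the current n):
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>

void aggregateForN(thrust::device_vector<long int>& d_keys,   // rInd[m]-1 for each m
                   thrust::device_vector<float>&    d_vals)   // precalculatedValue(m, n) for each m
{
    // group equal ngo values together, then sum each group without collisions
    thrust::sort_by_key(d_keys.begin(), d_keys.end(), d_vals.begin());

    thrust::device_vector<long int> d_uniqueNgo(d_keys.size());
    thrust::device_vector<float>    d_sums(d_keys.size());
    thrust::reduce_by_key(d_keys.begin(), d_keys.end(), d_vals.begin(),
                          d_uniqueNgo.begin(), d_sums.begin());
    // for each distinct ngo, d_sums holds the total to add into Aggregation[idx(n, ngo, NgO)]
}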