CUDA block synchronization differences between GTS 250 and Fermi devices

So I've been working on a program in which I'm creating a hash table in global memory. The code is completely functional (albeit slower) on a GTS250, which is a Compute 1.1 device. However, on a Compute 2.0 device (C2050 or C2070) the hash table is corrupt (data is incorrect and pointers are sometimes wrong).
Basically the code works fine when only one block is utilized (both devices). However, when 2 or more blocks are used, it works only on the GTS250 and not on any Fermi devices.
I understand that the warp scheduling and memory architecture between the two platforms are different and I am taking that into account when developing the code. From my understanding, using __threadfence() should make sure any global writes are committed and visible to other blocks; however, judging by the corrupt hash table, it appears that they are not.
I've also posted the problem on the NVIDIA CUDA developer forum and it can be found here.
Relevant code below:
__device__ void lock(int *mutex) {
    while (atomicCAS(mutex, 0, 1) != 0);
}

__device__ void unlock(int *mutex) {
    atomicExch(mutex, 0);
}

__device__ void add_to_global_hash_table(unsigned int key, unsigned int count, unsigned int sum, unsigned int sumSquared, Table table, int *globalHashLocks, int *globalFreeLock, int *globalFirstFree)
{
    // Find entry if it exists
    unsigned int hashValue = hash(key, table.count);

    lock(&globalHashLocks[hashValue]);

    int bucketHead = table.entries[hashValue];
    int currentLocation = bucketHead;
    bool found = false;
    Entry currentEntry;

    while (currentLocation != -1 && !found) {
        currentEntry = table.pool[currentLocation];
        if (currentEntry.data.x == key) {
            found = true;
        } else {
            currentLocation = currentEntry.next;
        }
    }

    if (currentLocation == -1) {
        // If entry does not exist, create entry
        lock(globalFreeLock);
        int newLocation = (*globalFirstFree)++;
        __threadfence();
        unlock(globalFreeLock);

        Entry newEntry;
        newEntry.data.x = key;
        newEntry.data.y = count;
        newEntry.data.z = sum;
        newEntry.data.w = sumSquared;
        newEntry.next = bucketHead;

        // Add entry to table
        table.pool[newLocation] = newEntry;
        table.entries[hashValue] = newLocation;
    } else {
        currentEntry.data.y += count;
        currentEntry.data.z += sum;
        currentEntry.data.w += sumSquared;
        table.pool[currentLocation] = currentEntry;
    }
    __threadfence();
    unlock(&globalHashLocks[hashValue]);
}

As pointed out by LSChien in this post, the issue is with L1 cache coherency. While __threadfence() guarantees that shared and global memory writes become visible to other threads, since it is not atomic, thread x in block 1 may still read a stale cached value even after thread y in block 0 has executed past the threadfence instruction. LSChien instead suggested a hack in his post: using atomicCAS() to force the thread to read from global memory rather than from a cached value. The proper way to do this is to declare the memory volatile, which requires that every access to that memory go straight to global memory, so that writes become visible to all other threads in the grid.
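For illustration, here is a minimal sketch (mine, not the original poster's code) of reading through a volatile-qualified pointer so that loads bypass the incoherent per-SM L1 cache; read_bucket_head is an invented helper name:

// Minimal sketch, assuming the Table layout from the code above: a load
// through a volatile pointer compiles to ld.volatile and is serviced by
// L2/global memory, so it can never return a stale L1 value.
__device__ int read_bucket_head(volatile int *entries, unsigned int hashValue)
{
    return entries[hashValue];
}

// The spin lock itself is already safe: atomicCAS operates directly on
// global memory and never sees a stale cached value.
__device__ void lock(int *mutex)
{
    while (atomicCAS(mutex, 0, 1) != 0);
}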

__threadfence() guarantees ordering, not completion: writes to global memory made before the fence become visible to other threads before writes made after it. That is not the same as "the write operation on global memory is complete and every other thread will read the new value"! Think of the caching on each multiprocessor.

Related

The way to properly do multiple CUDA block synchronization

I would like to do CUDA synchronization across multiple blocks, not within each block, where __syncthreads() can easily handle it.
I saw there are existing discussions on this topic, for example cuda block synchronization, and I like the simple solution brought up by @johan, https://stackoverflow.com/a/67252761/3188690, which essentially uses a 64-bit counter to track the synchronized blocks.
However, I wrote the following code trying to accomplish a similar job, but ran into a problem. Here I used the term environment: the wkBlocksPerEnv blocks within one environment shall be synchronized. There is a counter, and I used atomicAdd() to count how many blocks have synchronized themselves; once the number of synced blocks == wkBlocksPerEnv, I know all blocks have finished syncing and are free to go. However, it has a strange outcome that I am not sure about.
The problem comes from the while loop. Since the first thread of each block does the atomicAdd, there is a while loop that checks until the condition is met. But I find that some blocks get stuck in an endless loop, and I am not sure why the condition cannot eventually be met. And if I printf some messages either at *** I can print here 1 or *** I can print here 2, there is no endless loop and everything is perfect. I do not see anything obvious.
const int wkBlocksPerEnv = 2;

__device__ int env_sync_block_count[wkNumberEnvs];

__device__ void syncthreads_for_env(){
    // sync threads for each block so all threads in this block finished the previous tasks
    __syncthreads();

    // sync threads for wkBlocksPerEnv blocks for each environment
    if (wkBlocksPerEnv > 1) {
        const int kThisEnvId = get_env_scope_block_id(blockIdx.x);
        if (threadIdx.x == 0) {
            // incrementing env_sync_block_count by 1
            atomicAdd(&env_sync_block_count[kThisEnvId], 1);
            // *** I can print here 1
            while (env_sync_block_count[kThisEnvId] != wkBlocksPerEnv) {
                // *** I can print here 2
            }
            // Do the next job ...
        }
    }
}
There are two potential issues with your code: caching and block scheduling.
Caching can prevent you from observing an updated value during the while loop.
Block scheduling can cause a deadlock if you wait for an update from a block which has not yet been scheduled. Since CUDA does not guarantee a specific order of scheduled blocks, the only way to prevent this deadlock is to limit the number of blocks in the grid such that all blocks can run simultaneously.
The following code shows how you could synchronize multiple blocks while avoiding the above issues. I adapted it from the multi-device grid synchronization in the CUDA sample conjugateGradientMultiDeviceCG: https://github.com/NVIDIA/cuda-samples/blob/master/Samples/4_CUDA_Libraries/conjugateGradientMultiDeviceCG/conjugateGradientMultiDeviceCG.cu#L186
On pre-Volta devices it uses volatile memory accesses; Volta and later use acquire/release semantics.
Grid size is limited by querying device properties.
#include <cassert>
#include <cstdio>

constexpr int wkBlocksPerEnv = 13;

__device__
int getEnv(int blockId){
    return blockId / wkBlocksPerEnv;
}

__device__
int getRankInEnv(int blockId){
    return blockId % wkBlocksPerEnv;
}

__device__
unsigned char load_arrived(unsigned char *arrived) {
#if __CUDA_ARCH__ < 700
    return *(volatile unsigned char *)arrived;
#else
    unsigned int result;
    asm volatile("ld.acquire.gpu.global.u8 %0, [%1];"
                 : "=r"(result)
                 : "l"(arrived)
                 : "memory");
    return result;
#endif
}

__device__
void store_arrived(unsigned char *arrived,
                   unsigned char val) {
#if __CUDA_ARCH__ < 700
    *(volatile unsigned char *)arrived = val;
#else
    unsigned int reg_val = val;
    asm volatile(
        "st.release.gpu.global.u8 [%1], %0;" ::"r"(reg_val), "l"(arrived)
        : "memory");
    // Avoids compiler warnings from unused variable val.
    (void)(reg_val = reg_val);
#endif
}

#if 0
// Wrong implementation which does not synchronize, to check that the kernel
// assert does trigger without proper synchronization.
__device__
void syncthreads_for_env(unsigned char* temp){
}
#else
// temp must have at least size sizeof(unsigned char) * total_number_of_blocks in grid
__device__
void syncthreads_for_env(unsigned char* temp){
    __syncthreads();

    const int env = getEnv(blockIdx.x);
    const int blockInEnv = getRankInEnv(blockIdx.x);
    unsigned char* const mytemp = temp + env * wkBlocksPerEnv;

    if(threadIdx.x == 0){
        if(blockInEnv == 0){
            // Leader block waits for others to join and then releases them.
            // Other blocks in env can arrive in any order, so the leader has
            // to wait for all others.
            for (int i = 0; i < wkBlocksPerEnv - 1; i++) {
                while (load_arrived(&mytemp[i]) == 0)
                    ;
            }
            for (int i = 0; i < wkBlocksPerEnv - 1; i++) {
                store_arrived(&mytemp[i], 0);
            }
            __threadfence();
        }else{
            // Other blocks in env note their arrival and wait to be released.
            store_arrived(&mytemp[blockInEnv - 1], 1);
            while (load_arrived(&mytemp[blockInEnv - 1]) == 1)
                ;
        }
    }

    __syncthreads();
}
#endif

__global__
void kernel(unsigned char* synctemp, int* array){
    const int env = getEnv(blockIdx.x);
    const int blockInEnv = getRankInEnv(blockIdx.x);

    if(threadIdx.x == 0){
        array[blockIdx.x] = 1;
    }

    syncthreads_for_env(synctemp);

    if(threadIdx.x == 0){
        int sum = 0;
        for(int i = 0; i < wkBlocksPerEnv; i++){
            sum += array[env * wkBlocksPerEnv + i];
        }
        assert(sum == wkBlocksPerEnv);
    }
}

int main(){
    const int smem = 0;
    const int blocksize = 128;

    int deviceId = 0;
    int numSMs = 0;
    int maxBlocksPerSM = 0;

    cudaGetDevice(&deviceId);
    cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, deviceId);
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &maxBlocksPerSM,
        kernel,
        blocksize,
        smem
    );

    int maxBlocks = maxBlocksPerSM * numSMs;
    maxBlocks -= maxBlocks % wkBlocksPerEnv; // round down to nearest multiple of wkBlocksPerEnv
    printf("wkBlocksPerEnv %d, maxBlocks: %d\n", wkBlocksPerEnv, maxBlocks);

    int* d_array;
    unsigned char* d_synctemp;
    cudaMalloc(&d_array, sizeof(int) * maxBlocks);
    cudaMalloc(&d_synctemp, sizeof(unsigned char) * maxBlocks);
    cudaMemset(d_synctemp, 0, sizeof(unsigned char) * maxBlocks);

    kernel<<<maxBlocks, blocksize>>>(d_synctemp, d_array);

    cudaFree(d_synctemp);
    cudaFree(d_array);

    return 0;
}
The atomic operation goes to global memory, but in the while loop you read the value directly, so it may come from the cache, which is not automatically kept coherent between threads (cache coherence is only enforced by explicit synchronization such as a threadfence). A thread sees its own update, but other threads may not see it.
Even if you use a threadfence, the threads in the same warp could deadlock, waiting forever, if they were the first to check the value before any other thread updates it. It should work, though, on the newest GPUs, which support independent thread scheduling.
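One workaround for the stale reads (a sketch against the questioner's names, untested) is to spin on an atomic read, which is serviced by the globally coherent L2 cache rather than by a per-SM cache:

// Sketch: spin on an atomic read instead of a plain load.
// atomicAdd(ptr, 0) returns the current value without modifying it and
// always goes to globally coherent memory, so it cannot return a stale
// per-SM cached value. The scheduling deadlock discussed above still applies.
if (threadIdx.x == 0) {
    atomicAdd(&env_sync_block_count[kThisEnvId], 1);
    while (atomicAdd(&env_sync_block_count[kThisEnvId], 0) != wkBlocksPerEnv) {
        // busy-wait
    }
}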
I like to do CUDA synchronization for multiple blocks.
You should learn to dislike it. Synchronization is always costly, even when implemented just right, and inter-core synchronization all the more so.
if (threadIdx.x == 0){
    // incrementing env_sync_block_count by 1
    atomicAdd(&env_sync_block_count[kThisEnvId], 1);
    while(env_sync_block_count[kThisEnvId] != wkBlocksPerEnv)
    // OH NO!!
    {
    }
}
This is bad. With this code, the first warp of each block will perform repeated reads of env_sync_block_count[kThisEnvId]. First, and as @AbatorAbetor mentioned, you will face the problem of cache incoherence, causing your blocks to potentially read the wrong value from a local cache well after the global value has long changed.
Also, your blocks will hog the multiprocessors. Blocks will stay resident and have at least one active warp, indefinitely. Who's to say they will be evicted from their multiprocessor so that additional blocks can be scheduled to execute? If I were the GPU, I wouldn't allow more and more active blocks to pile up. Even if you don't deadlock, you'll be wasting a lot of time.
Now, @AbatorAbetor's answer avoids the deadlock by limiting the grid size. And I guess that works. But unless you have a very good reason to write your kernels this way, the real solution is to just break up your algorithm into consecutive kernels (or better yet, figure out how to avoid the need to synchronize altogether).
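For instance, a hypothetical sketch (phase1, phase2 and d_data are names invented here) of replacing the in-kernel synchronization point with a kernel boundary:

// Hypothetical sketch: the synchronization point becomes a kernel boundary.
__global__ void phase1(float *d_data) {
    // ... work that every block must finish before phase 2 begins ...
}

__global__ void phase2(float *d_data) {
    // ... work that may read any block's phase-1 results ...
}

void run(float *d_data, int numBlocks, int blockSize) {
    phase1<<<numBlocks, blockSize>>>(d_data);
    // Launches on the same stream execute in order, so phase2 starts only
    // after every block of phase1 has completed. No in-kernel grid
    // synchronization, and no limit on numBlocks.
    phase2<<<numBlocks, blockSize>>>(d_data);
}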
A mid-way approach is to only have some blocks get past the point of synchronization. You could do that by only waiting on some condition which holds for a very limited number of blocks (say, only the blocks which got the last K possible counter values wait).

racecheck error from a data structure in shared memory

I have a hash table data structure that uses a linear-probing hash scheme and is designed to be lock-free with CAS.
The hash table
constexpr uint64_t HASH_EMPTY = 0xffffffffffffffff;

struct OnceLock {
    static const unsigned LOCK_FRESH   = 0;
    static const unsigned LOCK_WORKING = 1;
    static const unsigned LOCK_DONE    = 2;

    volatile unsigned lock;

    __device__ void init() {
        lock = LOCK_FRESH;
    }

    __device__ bool enter() {
        unsigned lockState = atomicCAS ( (unsigned*) &lock, LOCK_FRESH, LOCK_WORKING );
        return lockState == LOCK_FRESH;
    }

    __device__ void done() {
        __threadfence();
        lock = LOCK_DONE;
        __threadfence();
    }

    __device__ void wait() {
        while ( lock != LOCK_DONE );
    }
};

template <typename T>
struct agg_ht {
    OnceLock lock;
    uint64_t hash;
    T payload;
};

template <typename T>
__global__ void initAggHT ( agg_ht<T>* ht, int32_t num ) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < num; i += blockDim.x * gridDim.x) {
        ht[i].lock.init();
        ht[i].hash = HASH_EMPTY;
    }
}

// returns candidate bucket
template <typename T>
__device__ int hashAggregateGetBucket ( agg_ht<T>* ht, int32_t ht_size, uint64_t grouphash, int& numLookups, T* payl ) {
    int location = -1;
    bool done = false;
    while ( !done ) {
        location = ( grouphash + numLookups ) % ht_size;
        agg_ht<T>& entry = ht [ location ];
        numLookups++;
        if ( entry.lock.enter() ) {
            entry.payload = *payl;
            entry.hash = grouphash;
            entry.lock.done();
        }
        entry.lock.wait();
        done = (entry.hash == grouphash);
        if ( numLookups == ht_size ) {
            printf ( "agg_ht hash table full at threadIdx %d & blockIdx %d \n", threadIdx.x, blockIdx.x );
            break;
        }
    }
    return location;
}
Then I have a minimal kernel as well as the main function, just to let the hash table run. An important detail is that the hash table is annotated with __shared__, so it is allocated in an SM's shared memory for fast access.
(I did not add any input data with cudaMalloc, to keep the example minimal.)
#include <cstdint>
#include <cstdio>

/**hash table implementation**/

constexpr int HT_SIZE = 1024;

__global__ void kernel() {
    __shared__ agg_ht<int> aht2[HT_SIZE];

    {
        int ht_index;
        unsigned loopVar = threadIdx.x;
        unsigned step = blockDim.x;
        while(loopVar < HT_SIZE) {
            ht_index = loopVar;
            aht2[ht_index].lock.init();
            aht2[ht_index].hash = HASH_EMPTY;
            loopVar += step;
        }
    }

    int key = 1;
    int value = threadIdx.x;

    __syncthreads();

    int bucket = -1;
    int bucketFound = 0;
    int numLookups = 0;
    while(!(bucketFound)) {
        bucket = hashAggregateGetBucket ( aht2, HT_SIZE, key, numLookups, &(value));
        int probepayl = aht2[bucket].payload;
        bucketFound = 1;
        bucketFound &= ((value == probepayl));
    }
}

int main() {
    kernel<<<1, 128>>>();
    cudaDeviceSynchronize();
    return 0;
}
The standard way to compile it, if the file is called test.cu:
$ nvcc -G test.cu -o test
I have to say, this hash table has always given me the correct answer during concurrent insertions on huge inputs.
However, when I ran racecheck on it, I saw errors everywhere:
$ compute-sanitizer --tool racecheck ./test
========= COMPUTE-SANITIZER
========= Error: Race reported between Write access at 0xd20 in /tmp/test.cu:61:int hashAggregateGetBucket<int>(agg_ht<T1> *, int, unsigned long, int &, T1 *)
========= and Read access at 0xe50 in /tmp/test.cu:65:int hashAggregateGetBucket<int>(agg_ht<T1> *, int, unsigned long, int &, T1 *) [1016 hazards]
=========
========= Error: Race reported between Write access at 0x180 in /tmp/test.cu:25:OnceLock::done()
========= and Read access at 0xd0 in /tmp/test.cu:30:OnceLock::wait() [992 hazards]
=========
========= Error: Race reported between Write access at 0xcb0 in /tmp/test.cu:60:int hashAggregateGetBucket<int>(agg_ht<T1> *, int, unsigned long, int &, T1 *)
========= and Read access at 0x1070 in /tmp/test.cu:103:kernel() [508 hazards]
=========
========= RACECHECK SUMMARY: 3 hazards displayed (3 errors, 0 warnings)
I was confused: this linear-probing hash table passes my unit tests, yet racecheck reports data race hazards everywhere. I suppose those hazards are irrelevant to correctness. (?)
After a while of debugging, I still could not make the hazard errors go away. I strongly suspect the volatile is the cause. I was hoping someone might shed some light on this and give me a hand fixing those annoying hazards.
I also hope this question can surface some design ideas on the topic of data structures in shared memory. While searching StackOverflow, all I saw were plain raw arrays in shared memory.
I suppose those hazards are irrelevant to correctness. (?)
I wouldn't try to certify the "correctness" of your application or algorithm. If that is what you are looking for, please just disregard my answer.
I was hoping someone might be able to shed some light on it
A shared memory race condition occurs when one thread writes to a location in shared memory, and another thread reads from that location, and there is no intervening synchronization in the code to ensure that the write happens before the read (or perhaps, more correctly, that the written value is visible to the reading thread). This is not a careful, exhaustive definition, but it suffices for what we are dealing with here.
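As a minimal illustration (my example, not the code under discussion), the following kernel contains exactly such a hazard:

// Minimal sketch of a shared-memory race: thread 1 reads what thread 0
// wrote, with no synchronization ordering the two accesses.
// (Device-side printf requires #include <cstdio>.)
__global__ void racy() {
    __shared__ int s;
    if (threadIdx.x == 0) s = 42;              // write
    // __syncthreads();                        // without this, racecheck flags a hazard
    if (threadIdx.x == 1) printf("%d\n", s);   // read
}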
In so far as that definition goes, you certainly have that activity in your code. One specific case that is being flagged is one thread writing here:
entry.hash = grouphash;
and another thread reading the same location here:
done = (entry.hash == grouphash);
Inspecting your code we can see that there is no __syncthreads() statement between those two code positions. Furthermore, due to the loop that encompasses that activity, there is more than one hazard associated with this (there are two).
The other interaction being flagged is one thread writing to lock here:
entry.lock.done();
and another thread reading the same lock location here:
entry.lock.wait();
The hazards reported here are actually attributed to other lines of code because these are both function calls. Again, there is no intervening synchronization.
I acknowledge that, due to the looping nature of your application, I'm not sure it's necessary for "correctness" that either of these thread-to-thread communication paths gets picked up at the earliest opportunity. However, I have not studied your application carefully, nor do I intend to state anything about correctness.
and give me a hand to fix those annoying hazards.
As it happens, both of these interactions are in a small section of your code, so we can cause these 3 hazards to go away with the following additions, according to my testing:
__syncthreads(); // add this line
entry.lock.wait();
done = (entry.hash == grouphash);
__syncthreads(); // add this line
The first sync intersects the obvious write-read connections between the lines I have already indicated. The second sync is needed due to the looping nature of the code at this point.
Also note that proper usage of __syncthreads() requires that all threads in the threadblock can reach the sync point. A quick perusal of what you have here didn't suggest to me that the above additions need special handling, but you should confirm that and be aware of it in general usage. It may be that the while (!bucketFound) loop creates a situation that should be handled differently; however, compute-sanitizer --tool synccheck did not report any issues, running on a V100, with the additions I suggested here.

How to collect individual results of the threads within a block?

In my Kernel, the threads are processing a small part of an array in global memory.
After processing I would also like to set a flag indicating that the result of the calculation is zero for all threads within a block:
__global__ void kernel( int *a, bool *blockIsNull) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int result = 0;

    // {...} Here calculate result

    a[tid] = result;

    // some code here, but I don't know, that's my question...

    if (condition)
        blockIsNull[blockIdx.x] = true; // if all threads have returned result==0
}
Each individual thread owns the information. But I don't find an efficient way to collect it.
For example, I could have a counter in shared memory that is atomically incremented by each thread when result==0. When the counter reaches blockDim.x, it means that all threads have returned zero. Although untested, I am afraid that this solution will have a negative impact on performance (atomic functions are slow).
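A minimal sketch of that counter idea, assuming the result variable and blockIsNull array from the kernel above (zeroCount is a name invented here), might look like:

// Sketch of the atomic-counter approach described above.
__shared__ int zeroCount;
if (threadIdx.x == 0) zeroCount = 0;
__syncthreads();

if (result == 0) atomicAdd(&zeroCount, 1);   // every zero-result thread votes
__syncthreads();

if (threadIdx.x == 0 && zeroCount == blockDim.x)
    blockIsNull[blockIdx.x] = true;          // all threads in the block were zero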
A zero result does not occur very often, so it is very unlikely to have zeros for all threads within a block. I would like to find a solution that has little impact on the performance in the general case.
What would be your recommendation ?
It sounds like you want to perform a block-level reduction of the condition value across the block. Just about all CUDA hardware supports a set of very useful warp voting primitives. You could use the __all() warp vote to determine that each warp of threads satisfies the condition, and then use __all() again to check whether all warps satisfy the condition. In code, it might look like this:
__global__ void kernel( int *a, bool *blockIsNull) {
    // assume that threads per block is <= 1024
    __shared__ volatile int blockcondition[32];

    int laneid = threadIdx.x % 32;
    int warpid = threadIdx.x / 32;

    // Set each condition value to non zero to begin
    if (warpid == 0) {
        blockcondition[threadIdx.x] = 1;
    }
    __syncthreads();

    //
    // your code goes here
    //

    // warpcondition holds the vote from each warp
    // (on CUDA 9 and later, __all() is deprecated in favour of __all_sync(0xffffffff, ...))
    int warpcondition = __all(condition);

    // First thread in each warp loads the warp vote to shared memory
    if (laneid == 0) {
        blockcondition[warpid] = warpcondition;
    }
    __syncthreads();

    // First warp reduces all the votes in shared memory
    if (warpid == 0) {
        int result = __all(blockcondition[threadIdx.x] != 0);

        // first thread stores the block result to global memory
        if (laneid == 0) {
            blockIsNull[blockIdx.x] = (result != 0);
        }
    }
}
[ Huge disclaimer: written in browser, never compiled or tested, use at own risk ]
This code should (I think) work for any number of threads per block up to 1024. You could, if required, adjust the size of blockcondition to a smaller value if you were confident of an upper block size limit less than 1024. Probably the smartest way would be to use C++ templating and make the warp count limit a template parameter.
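A hedged sketch of that templating idea (MAX_WARPS is a name invented here, and the kernel body is elided):

// Sketch: making the warp-count limit a template parameter.
template <int MAX_WARPS>
__global__ void kernel(int *a, bool *blockIsNull) {
    static_assert(MAX_WARPS > 0 && MAX_WARPS <= 32, "1..32 warps per block");
    __shared__ volatile int blockcondition[MAX_WARPS];
    // ... body as above, with the warp count bounded by MAX_WARPS ...
}

// launch example: 128 threads = 4 warps
// kernel<4><<<blocks, 128>>>(a, blockIsNull);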

CUDA synchronization and reading global memory

I have something like this:
__global__ void globFunction(int *arr, int N) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    // calculating and writing results to arr ...
    __syncthreads();
    // reading values of other threads (e.g. idx+1)
    int val = arr[idx+1]; // IT IS GIVING OLD VALUE
}

int main() {
    // declare array, alloc memory, copy memory, etc.
    globFunction<<< 4000, 256 >>>(arr, N);
    // do something ...
    return 0;
}
Why am I getting the old value when I read arr[idx+1]? I called __syncthreads, so I expect to see the updated value. What did I do wrong? Am I reading a cache or what?
Using the __syncthreads() function only synchronizes the threads in the current block. In this case this would be the 256 threads per block you created when you launched the kernel. So in your given array, for each index value that crosses over into another block of threads, you'll end up reading a value from global memory that is not synchronized with respect to the threads in the current block.
One thing you can do to circumvent this issue is to create block-local storage using the __shared__ CUDA qualifier, which allows the threads in a block to share information among themselves but prevents threads from other blocks from accessing the memory allocated for the current block. Once your calculation within the block is complete (and you can use __syncthreads() for this task), you can then copy the values from the shared block-level storage back into the globally accessible memory.
Your kernel could look something like:
__global__ void globFunction(int *arr, int N)
{
    __shared__ int local_array[THREADS_PER_BLOCK];  // local block memory cache
    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    // ...calculate results
    local_array[threadIdx.x] = results;

    // synchronize the local threads writing to the local memory cache
    __syncthreads();

    // read the results of another thread in the current thread
    int val = local_array[(threadIdx.x + 1) % THREADS_PER_BLOCK];

    // write back the value to global memory
    arr[idx] = val;
}
If you must synchronize threads across blocks, you should look for another way to solve your problem, since the CUDA programming model works most effectively when a problem can be broken down into blocks, and thread synchronization only needs to take place within a block.

Parallel Reduction in CUDA for calculating primes

I have a code to calculate primes which I have parallelized using OpenMP:
#pragma omp parallel for private(i,j) reduction(+:pcount) schedule(dynamic)
for (i = sqrt_limit+1; i < limit; i++)
{
    check = 1;
    for (j = 2; j <= sqrt_limit; j++)
    {
        if ( !(j&1) && (i&(j-1)) == 0 )
        {
            check = 0;
            break;
        }
        if ( j&1 && i%j == 0 )
        {
            check = 0;
            break;
        }
    }
    if (check)
        pcount++;
}
I am trying to port it to the GPU, and I want to reduce the count as I did in the OpenMP example above. Following is my code, which, apart from giving incorrect results, is also slower:
__global__ void sieve ( int *flags, int *o_flags, long int sqrootN, long int N)
{
    long int gid = blockIdx.x*blockDim.x+threadIdx.x, tid = threadIdx.x, j;

    __shared__ int s_flags[NTHREADS];

    if (gid > sqrootN && gid < N)
        s_flags[tid] = flags[gid];
    else
        return;
    __syncthreads();

    s_flags[tid] = 1;

    for (j = 2; j <= sqrootN; j++)
    {
        if ( gid%j == 0 )
        {
            s_flags[tid] = 0;
            break;
        }
    }

    //reduce
    for(unsigned int s=1; s < blockDim.x; s*=2)
    {
        if( tid % (2*s) == 0 )
        {
            s_flags[tid] += s_flags[tid + s];
        }
        __syncthreads();
    }

    //write results of this block to the global memory
    if (tid == 0)
        o_flags[blockIdx.x] = s_flags[0];
}
First of all, how do I make this kernel fast? I think the bottleneck is the for loop, and I am not sure how to replace it. Next, my counts are not correct. I did change the '%' operator and noticed some benefit.
In the flags array, I have marked the primes from 2 to sqroot(N); in this kernel I am calculating the primes from sqroot(N) to N, but I need to check whether each number in {sqroot(N), N} is divisible by the primes in {2, sqroot(N)}. The o_flags array stores the partial sums for each block.
EDIT: Following the suggestion, I modified my code (I understand the comment on syncthreads better now); I realized that I do not need the flags array and that the global indexes alone work in my case. What concerns me at this point is the slowness of the code (more than its correctness), which could be attributed to the for loop. Also, after a certain data size (100000), the kernel was producing incorrect results for subsequent data sizes. Even for data sizes less than 100000, the GPU reduction results are incorrect (a member of the NVIDIA forum pointed out that this may be because my data size is not a power of 2).
So there are still three (maybe related) questions -
How could I make this kernel faster? Is it a good idea to use shared memory in my case where I have to loop over each tid?
Why does it produce correct results only for certain data sizes?
How could I modify the reduction?
__global__ void sieve ( int *o_flags, long int sqrootN, long int N )
{
    unsigned int gid = blockIdx.x*blockDim.x+threadIdx.x, tid = threadIdx.x;

    volatile __shared__ int s_flags[NTHREADS];

    s_flags[tid] = 1;
    for (unsigned int j=2; j<=sqrootN; j++)
    {
        if ( gid % j == 0 )
            s_flags[tid] = 0;
    }
    __syncthreads();

    //reduce
    reduce(s_flags, tid, o_flags);
}
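(The reduce() function called here is not shown in the question. A plausible sketch, assuming it performs the same shared-memory tree reduction as the first version of the kernel, would be:)

// Hypothetical sketch of the reduce() helper the modified kernel calls;
// it mirrors the tree reduction from the first version of the kernel.
__device__ void reduce(volatile int *s_flags, unsigned int tid, int *o_flags)
{
    for (unsigned int s = 1; s < blockDim.x; s *= 2) {
        if (tid % (2 * s) == 0) {
            s_flags[tid] += s_flags[tid + s];
        }
        __syncthreads();  // all threads call reduce(), so this is unconditional
    }
    // first thread writes this block's partial count to global memory
    if (tid == 0)
        o_flags[blockIdx.x] = s_flags[0];
}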
While I profess to know nothing about sieving for primes, there are a host of correctness problems in your GPU version which will stop it from working correctly irrespective of whether the algorithm you are implementing is correct or not:
__syncthreads() calls must be unconditional. It is incorrect to write code where branch divergence could leave some threads within the same warp unable to execute a __syncthreads() call. The underlying PTX instruction is bar.sync, and the PTX guide says this:
Barriers are executed on a per-warp basis as if all the threads in a warp are active. Thus, if any thread in a warp executes a bar instruction, it is as if all the threads in the warp have executed the bar instruction. All threads in the warp are stalled until the barrier completes, and the arrival count for the barrier is incremented by the warp size (not the number of active threads in the warp). In conditionally executed code, a bar instruction should only be used if it is known that all threads evaluate the condition identically (the warp does not diverge). Since barriers are executed on a per-warp basis, the optional thread count must be a multiple of the warp size.
Your code unconditionally sets s_flags to one after conditionally loading some values from global memory. Surely that cannot be the intent of the code?
The code lacks a synchronization barrier between the sieving code and the reduction; this can lead to a shared memory race and incorrect results from the reduction.
If you are planning on running this code on a Fermi class card, the shared memory array should be declared volatile to prevent compiler optimization from potentially breaking the shared memory reduction.
If you fix those things, the code might work. Performance is a completely different issue. Certainly on older hardware, the integer modulo operation was very, very slow and is not recommended. I can recall reading some material suggesting that the Sieve of Atkin was a useful approach to fast prime generation on GPUs.
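To make those fixes concrete, here is a hedged sketch (mine, untested) of the poster's EDITed kernel with all three corrections applied; NTHREADS is the block size, as in the question:

// Sketch: the modified sieve kernel with the three fixes above applied:
// no early return before a barrier, a barrier between sieving and
// reduction, and every __syncthreads() executed unconditionally.
__global__ void sieve(int *o_flags, long int sqrootN, long int N)
{
    unsigned int gid = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int tid = threadIdx.x;

    volatile __shared__ int s_flags[NTHREADS];  // volatile for the reduction

    // out-of-range threads contribute 0 instead of returning early,
    // so every thread still reaches the barriers below
    s_flags[tid] = (gid > sqrootN && gid < N) ? 1 : 0;

    if (s_flags[tid]) {
        for (unsigned int j = 2; j <= sqrootN; j++) {
            if (gid % j == 0) {
                s_flags[tid] = 0;
                break;
            }
        }
    }
    __syncthreads();  // barrier between the sieving code and the reduction

    for (unsigned int s = 1; s < blockDim.x; s *= 2) {
        if (tid % (2 * s) == 0)
            s_flags[tid] += s_flags[tid + s];
        __syncthreads();  // unconditional: every thread executes this
    }

    if (tid == 0)
        o_flags[blockIdx.x] = s_flags[0];
}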