I need some help understanding the behavior of Ron Farber's code: http://www.drdobbs.com/parallel/cuda-supercomputing-for-the-masses-part/208801731?pgno=2
I'm not understanding how the use of shared memory gives faster performance than the non-shared-memory version. That is, if I add a few more index calculation steps and an extra read/write cycle to access shared memory, how can this be faster than just using global memory alone? The same number of read/write cycles access global memory in either case. The data is still accessed only once per kernel instance, and it still enters and leaves via global memory. The number of kernel instances is the same, and the register count looks to be the same. How can adding more processing steps make it faster? (We are not removing any processing steps.) Essentially we are doing more work, and it is getting done faster.
Shared memory access is much faster than global memory access, but its cost is not zero (or negative).
What am I missing?
The 'slow' code:
__global__ void reverseArrayBlock(int *d_out, int *d_in) {
    int inOffset  = blockDim.x * blockIdx.x;
    int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
    int in  = inOffset + threadIdx.x;
    int out = outOffset + (blockDim.x - 1 - threadIdx.x);
    d_out[out] = d_in[in];
}
The 'fast' code:
__global__ void reverseArrayBlock(int *d_out, int *d_in) {
    extern __shared__ int s_data[];

    int inOffset = blockDim.x * blockIdx.x;
    int in = inOffset + threadIdx.x;

    // Load one element per thread from device memory and store it
    // *in reversed order* into temporary shared memory
    s_data[blockDim.x - 1 - threadIdx.x] = d_in[in];

    // Block until all threads in the block have written their data to shared mem
    __syncthreads();

    // Write the data from shared memory in forward order,
    // but to the reversed block offset as before
    int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
    int out = outOffset + threadIdx.x;
    d_out[out] = s_data[threadIdx.x];
}
Early CUDA-enabled devices (compute capability < 1.2) would not treat the d_out[out] write in your "slow" version as a coalesced write. Those devices would only coalesce memory accesses in the "nicest" case, where the i-th thread in a half-warp accesses the i-th word. As a result, 16 memory transactions would be issued to service the d_out[out] write for every half-warp, instead of just one memory transaction.
Starting with compute capability 1.2, the rules for memory coalescing in CUDA became much more relaxed. As a result, the d_out[out] write in the "slow" version would also get coalesced, and using shared memory as a scratch pad is no longer necessary.
The source of your code sample is the article "CUDA, Supercomputing for the Masses: Part 5", which was written in June 2008. CUDA-enabled devices with compute capability 1.2 only arrived on the market in 2009, so the author of the article was clearly talking about devices with compute capability < 1.2.
For more details, see section F.3.2.1 in the NVIDIA CUDA C Programming Guide.
This is because the shared memory is closer to the computing units, so latency and peak bandwidth will not be the bottleneck for this computation (at least in the case of matrix multiplication).
But most importantly, the top reason is that many of the numbers in a tile are reused by many threads, so if you read from global memory you are fetching those numbers multiple times. Writing them once to shared memory eliminates that wasted bandwidth.
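To make that reuse concrete, here is a minimal sketch (not from the article or from the answer above) of the tiled matrix multiply pattern being alluded to. It assumes square, row-major matrices whose width is a multiple of the tile size; the names matMulTiled and TILE are illustrative only.

#define TILE 16

// C = A * B for square, row-major matrices; width must be a multiple of TILE
// and the grid must cover the whole output matrix.
__global__ void matMulTiled(const float *A, const float *B, float *C, int width)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < width / TILE; ++t) {
        // Each element staged here is read from global memory once, but is then
        // reused by TILE threads of this block; without shared memory it would
        // be fetched from global memory TILE times.
        As[threadIdx.y][threadIdx.x] = A[row * width + (t * TILE + threadIdx.x)];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * width + col];
        __syncthreads();

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    C[row * width + col] = acc;
}

Each input element loaded into shared memory here is reused by TILE threads of its block, which is exactly the bandwidth saving described above. (The array-reversal kernels in the question have no such reuse; their shared memory benefit on older hardware comes from coalescing, as explained in the first answer above.)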
Looking at the global memory accesses, the slow code reads forwards and writes backwards, while the fast code both reads and writes forwards. I think the fast code is faster because the cache hierarchy is optimized, in some way, for accessing global memory in ascending order (towards higher memory addresses).
CPUs do some speculative fetching, where they will fill cache lines from higher memory addresses before the data has been touched by the program. Maybe something similar happens on the GPU.
I am studying the performance of random memory access on a Kepler GPU, K40m. The kernel I use is pretty simple as follows,
__global__ void scatter(int *in1, int *out1, int *loc, const size_t n) {
    int globalSize = gridDim.x * blockDim.x;
    int globalId = blockDim.x * blockIdx.x + threadIdx.x;
    for (unsigned int i = globalId; i < n; i += globalSize) {
        int pos = loc[i];
        out1[pos] = in1[i];
    }
}
That is, I will read an array in1 as well as a location array loc. Then I permute in1 according to loc and output to the array out1. Generally, out1[loc[i]] = in1[i]. Note that the location array is sufficiently shuffled and each element is unique.
I just use the default nvcc compilation settings with the -O3 flag enabled. The L1 dcache is disabled. I also fix the number of blocks at 8192 and the block size at 1024.
I use nvprof to profile my program. It is easy to see that most of the instructions in the kernel are memory accesses. For one instruction of a warp, since each thread demands a discrete 4-byte word, the instruction should be replayed multiple times (at most 31 times?) and issue multiple memory transactions to satisfy all the threads within the warp. However, the metric "inst_replay_overhead" seems confusing: when the number of tuples n = 16M, the replay overhead is 13.97, which makes sense to me. But when n = 600M, the replay overhead becomes 34.68. Even for larger data, say 700M and 800M, the replay overhead reaches 85.38 and 126.87.
The meaning of "inst_replay_overhead", according to document, is "Average number of replays for each instruction executed". Is that mean when n = 800M, on average each instruction executed has been replayed 127 times? How comes the replay time much larger than 31 here? Am I misunderstanding something or am I missing other factors that will also contribute greatly to the replay times? Thanks a lot!
You may be misunderstanding the fundamental meaning of an instruction replay.
inst_replay_overhead includes the number of times an instruction was issued, but wasn't able to be completed. This can occur for various reasons, which are explained in this answer. Pertinent excerpt from the answer:
If the SM is not able to complete the issued instruction due to

- a constant cache miss on an immediate constant (a constant referenced in the instruction),
- address divergence in an indexed constant load,
- address divergence in a global/local memory load or store,
- a bank conflict in a shared memory load or store,
- an address conflict in an atomic or reduction operation,
- a load or store operation requiring data to be written to the load/store unit or read from a unit exceeding the read/write bus width (e.g. a 128-bit load or store), or
- a load cache miss (a replay occurs to fetch data when the data is ready in the cache),

then the SM scheduler has to issue the instruction multiple times. This is called an instruction replay.
I'm guessing this happens because of the scattered accesses in your case. This concept of instruction replay also exists on the CPU side of things. Wikipedia article here.
Is there any relation between the size of the shared memory and the maximum number of threads per block? In my case I use the maximum of 512 threads per block; my program makes use of all the threads and it uses a considerable amount of shared memory.

Each thread has to do a particular task repeatedly. For example, my kernel might look like this:
int threadsPerBlock = (blockDim.x * blockDim.y * blockDim.z);
int bId = (blockIdx.x * gridDim.y * gridDim.z) + (blockIdx.y * gridDim.z) + blockIdx.z;

// Declarations inferred from how the variables are used below
int tId, uniqueTid;
float randomP;
curandState seedValue;

for (int j = 0; j <= N; j++) {
    tId = threadIdx.x + (j * threadsPerBlock);
    uniqueTid = bId * blockDim.x + tId;
    curand_init(uniqueTid, 0, 0, &seedValue);
    randomP = (float) curand_uniform(&seedValue);
    if (randomP <= input_value) {
        /* Some task */
    } else {
        /* Some other task */
    }
}
But my threads are not going into the next iteration (say j = 2). Am I missing something obvious here?
You have to distinguish between shared memory and global memory. The former is always per block; the latter refers to the off-chip memory that is available on the GPU.
So generally speaking, there is a kind of relation when it comes to threads: even when you launch more threads per block, the maximum amount of shared memory per block stays the same.
Also refer to e.g. Using Shared Memory in CUDA C/C++.
There is no immediate relationship between the maximum number of threads per block and the size of the shared memory (not 'device memory' - they're not the same thing).
However, there is an indirect relationship, in that with different Compute Capabilities, both these numbers change:
                                 Compute Capability
                                 1.x        2.x - 3.x
Threads per block                512        1024
Max shared memory (per block)    16KB       48KB

As one of them has increased with newer CUDA devices, so has the other.
Finally, there is a block-level resource which is used up by launching more threads: the register file. There is a single register file which all of a block's threads share, and the constraint is
ThreadsPerBlock x RegistersPerThread <= RegisterFileSize
It is not trivial to determine how many registers your kernel code is using; but as a rule of thumb, if you use "a lot" of local variables, function call parameters, etc., you might hit the above limit and not be able to schedule as many threads.
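As an illustration (not part of the original answer), you can check these numbers yourself: nvcc -Xptxas -v prints the per-thread register count at compile time, and the CUDA runtime can report both the device limits and a kernel's resource usage at run time. The kernel myKernel below is just a hypothetical placeholder.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel, used only so there is something to query.
__global__ void myKernel(int *data) { data[threadIdx.x] = threadIdx.x; }

int main()
{
    int device = 0;
    int maxThreads = 0, maxSmem = 0;

    // Device-wide limits discussed above
    cudaDeviceGetAttribute(&maxThreads, cudaDevAttrMaxThreadsPerBlock, device);
    cudaDeviceGetAttribute(&maxSmem, cudaDevAttrMaxSharedMemoryPerBlock, device);

    // Per-kernel resource usage, including registers per thread
    cudaFuncAttributes attr;
    cudaFuncGetAttributes(&attr, (const void *)myKernel);

    printf("device: max %d threads/block, %d bytes shared memory/block\n",
           maxThreads, maxSmem);
    printf("myKernel: %d registers/thread, %zu bytes static shared memory, "
           "at most %d threads/block for this kernel\n",
           attr.numRegs, attr.sharedSizeBytes, attr.maxThreadsPerBlock);
    return 0;
}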
I present here some code
#include <cstring>
#include <cuda_runtime.h>

__constant__ int array[1024];

__global__ void kernel1(int *d_dst) {
    int tId = threadIdx.x + blockIdx.x * blockDim.x;
    d_dst[tId] = array[tId];
}

__global__ void kernel2(int *d_dst, int *d_src) {
    int tId = threadIdx.x + blockIdx.x * blockDim.x;
    d_dst[tId] = d_src[tId];
}

int main(int argc, char **argv) {
    int *d_array;
    int *d_src;
    cudaMalloc((void**)&d_array, sizeof(int) * 1024);
    cudaMalloc((void**)&d_src, sizeof(int) * 1024);

    int *test = new int[1024];
    memset(test, 0, sizeof(int) * 1024);
    for (int i = 0; i < 1024; i++) {
        test[i] = 100;
    }

    cudaMemcpyToSymbol(array, test, sizeof(int) * 1024);
    kernel1<<< 1, 1024 >>>(d_array);

    cudaMemcpy(d_src, test, sizeof(int) * 1024, cudaMemcpyHostToDevice);
    kernel2<<< 1, 32 >>>(d_array, d_src);

    delete[] test;
    cudaFree(d_array);
    cudaFree(d_src);
    return 0;
}
This simply shows constant memory and global memory usage. On execution, "kernel2" runs about 4 times faster (in terms of time) than "kernel1".
I understand from the CUDA C Programming Guide that this is because accesses to constant memory are getting serialized. This brings me to the idea that constant memory can be best utilized if a warp accesses a single constant value such as an integer, float, double, etc., but accessing an array is not beneficial at all. In other terms, I can say a warp must access a single address in order to have any beneficial optimization/speedup gains from constant memory access. Is this correct?
I also want to know: if I keep a structure instead of a simple type in my constant memory, is any access to the structure by a thread within a warp also considered a single memory access, or more? I mean, a structure might contain multiple simple types and an array, for example; when accessing these simple types, are these accesses also serialized or not?
Last question would be: in case I do have an array with constant values which needs to be accessed by different threads within a warp, then for faster access it should be kept in global memory instead of constant memory. Is that correct?
Can anyone refer me to some example code where efficient constant memory usage is shown?
regards,
I can say a warp must access a single address in order to have any beneficial optimization/speedup gains from constant memory access. Is this correct?
Yes this is generally correct and is the principal intent of usage of constant memory/constant cache. The constant cache can serve up one quantity per SM "at a time". The precise wording is as follows:
The constant memory space resides in device memory and is cached in the constant cache.
A request is then split into as many separate requests as there are different memory addresses in the initial request, decreasing throughput by a factor equal to the number of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in case of a cache hit, or at the throughput of device memory otherwise.
An important takeaway from the text above is the desire for uniform access across a warp to achieve best performance. If a warp makes a request to __constant__ memory where different threads in the warp are accessing different locations, those requests will be serialized. Therefore if each thread in a warp is accessing the same value:
int i = array[20];
then you will have the opportunity for good benefit from the constant cache/memory. If each thread in a warp is accessing a unique quantity:
int i = array[threadIdx.x];
then the accesses will be serialized, and the constant data usage will be disappointing, performance-wise.
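As a minimal side-by-side sketch (the kernel names and the coeffs table are illustrative, not taken from the question): the first kernel reads a warp-uniform constant address on each iteration and benefits from the constant cache broadcast, while the second indexes the constants per thread and gets serialized.

__constant__ float coeffs[16];      // small read-only table in constant memory

// Uniform access: on every loop iteration all threads of a warp read the same
// constant address, so the constant cache can broadcast one value per request.
__global__ void goodConstantUse(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = in[i];
        float acc = coeffs[0];
        for (int k = 1; k < 16; ++k)
            acc = acc * x + coeffs[k];   // coeffs[k] is warp-uniform
        out[i] = acc;
    }
}

// Divergent access: each thread reads a different constant address, so the
// warp's request is split and serialized, losing most of the benefit.
__global__ void poorConstantUse(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = coeffs[threadIdx.x % 16];   // per-thread address within the warp
}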
I also want to know: if I keep a structure instead of a simple type in my constant memory, is any access to the structure by a thread within a warp also considered a single memory access, or more?
You can certainly put structures in constant memory. The same rules apply:
int i = constant_struct_ptr->array[20];
has the opportunity to benefit, but
int i = constant_struct_ptr->array[threadIdx.x];
does not. If you access the same simple type structure element across threads, that is ideal for constant cache usage.
Last question would be: in case I do have an array with constant values which needs to be accessed by different threads within a warp, then for faster access it should be kept in global memory instead of constant memory. Is that correct?
Yes, if you know that in general your accesses will break the constant memory one 32-bit quantity per cycle rule, then you'll probably be better off leaving the data in ordinary global memory.
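As an example (a sketch under that assumption, not code from the question): the lookup done by kernel1 above, but with the table kept in ordinary global memory. The const __restrict__ qualifiers additionally allow devices of compute capability 3.5 and later to route these loads through the read-only data cache.

// Same per-thread lookup as kernel1, but from ordinary global memory.
__global__ void kernel1_global(int *d_dst, const int * __restrict__ d_table)
{
    int tId = threadIdx.x + blockIdx.x * blockDim.x;
    d_dst[tId] = d_table[tId];   // per-thread addresses are fine in global memory
}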
There are a variety of cuda sample codes that demonstrate usage of __constant__ data. Here are a few:
graphics volumeRender
imaging bilateralFilter
imaging convolutionTexture
finance MonteCarloGPU
and there are others.
EDIT: responding to a question in the comments, if we have a structure like this in constant memory:
struct Simple { int a; int b; int c; } s;
And we access it like this:
int p = s.a + s.b + s.c;
          ^     ^     ^
          |     |     |
cycle:    1     2     3
We will have good usage of the constant memory/cache. When the C code gets compiled, under the hood it will generate machine code accesses corresponding to 1,2,3 in the diagram above. Let's imagine that access 1 occurs first. Since access 1 is to the same memory location independent of which thread in the warp, during cycle 1, all threads will receive the value in s.a and it will take advantage of the cache for best possible benefit. Likewise for accesses 2 and 3. If on the other hand we had:
struct Simple { int a[32]; int b[32]; int c[32]; } s;
...
int idx = threadIdx.x + blockDim.x * blockIdx.x;
int p = s.a[idx] + s.b[idx] + s.c[idx];
This would not give good usage of constant memory/cache. Instead, if this were typical of our accesses to s, we'd probably have better performance locating s in ordinary global memory.
I have a buffer in global memory that I want to copy into shared memory for each block, so as to speed up my read-only access. Each thread in each block will use the whole buffer, at different positions, concurrently.
How does one do that?
I know the size of the buffer only at run time:
__global__ void foo( int *globalMemArray, int N )
{
    extern __shared__ int s_array[];
    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    if( idx < N )
    {
        ...?
    }
}
The first point to make is that shared memory is limited to a maximum of either 16kB or 48kB per streaming multiprocessor (SM), depending on which GPU you are using and how it is configured, so unless your global memory buffer is very small, you will not be able to load all of it into shared memory at the same time.
The second point to make is that the contents of shared memory only have the scope and lifetime of the block they are associated with. Your sample kernel only has a single global memory argument, which makes me think that you are either under the misapprehension that the contents of a shared memory allocation can be preserved beyond the life span of the block that filled it, or that you intend to write the results of the block calculations back into the same global memory array from which the input data was read. The first possibility is wrong and the second will result in memory races and inconsistent results. It is probably better to think of shared memory as a small, block-scope, fully programmer-managed L1 cache rather than as some sort of faster version of global memory.
With those points out of the way, a kernel which loads successive segments of a large input array, processes them, and then writes some per-thread final result back to global memory might look something like this:
template <int blocksize>
__global__ void foo( int *globalMemArray, int *globalMemOutput, int N )
{
    __shared__ int s_array[blocksize];
    int npasses = (N / blocksize) + (((N % blocksize) > 0) ? 1 : 0);

    for(int pos = threadIdx.x; pos < (blocksize*npasses); pos += blocksize) {
        if( pos < N ) {
            s_array[threadIdx.x] = globalMemArray[pos];
        }
        __syncthreads();

        // Calculations using partial buffer contents
        .......

        __syncthreads();
    }

    // write final per thread result to output
    globalMemOutput[threadIdx.x + blockIdx.x*blockDim.x] = .....;
}
In this case I have specified the shared memory array size as a template parameter, because it isn't really necessary to dynamically allocate the shared memory array size at runtime, and the compiler has a better chance of performing optimizations when the shared memory array size is known at compile time (in the worst case there could be a selection between different kernel instances done at run time, as in the sketch below).
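For completeness, a small sketch (not part of the original answer) of what that run-time selection between template instantiations could look like; the wrapper name launchFoo and the set of supported block sizes are illustrative.

// Pick a compile-time blocksize for the templated kernel foo<> at run time.
void launchFoo(int *in, int *out, int N, int blocks, int threadsPerBlock)
{
    switch (threadsPerBlock) {
        case 128: foo<128><<<blocks, 128>>>(in, out, N); break;
        case 256: foo<256><<<blocks, 256>>>(in, out, N); break;
        case 512: foo<512><<<blocks, 512>>>(in, out, N); break;
        default:  break;   // unsupported block size in this sketch
    }
}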
The CUDA SDK contains a number of good example codes which demonstrate different ways that shared memory can be used in kernels to improve memory read and write performance. The matrix transpose, reduction and 3D finite difference method examples are all good models of shared memory usage. Each also has a good paper which discusses the optimization strategies behind the shared memory use in the codes. You would be well served by studying them until you understand how and why they work.
I'm working on my game project (tower defence) and I'm trying to compute the distance between all critters and a tower with JCuda, using shared memory. For each tower I run 1 block with N threads, where N equals the number of critters on the map. For a given block I compute the distance between every critter and that tower, and I store the smallest distance found so far in the block's shared memory. My current code looks like this:
extern "C"
__global__ void calcDistance(int** globalInputData, int size, int
critters, int** globalQueryData, int* globalOutputData) {
//shared memory
__shared__ float minimum[2];
int x = threadIdx.x + blockIdx.x * blockDim.x;
int y = blockIdx.y;
if (x < critters) {
int distance = 0;
//Calculate the distance between tower and criter
for (int i = 0; i < size; i++) {
int d = globalInputData[x][i] - globalQueryData[y][i];
distance += d * d;
}
if (x == 0) {
minimum[0] = distance;
minimum[1] = x;
}
__syncthreads();
if (distance < minimum[0]) {
minimum[0] = distance;
minimum[1] = x;
}
__syncthreads();
globalOutputData[y * 2] = minimum[0];
globalOutputData[y] = minimum[1];
}
}
The problem is that if I rerun the code using the same input multiple times (I free all the memory on both host and device after each run), I get different output each time the code gets executed for a number of blocks (towers) > 27... I'm fairly sure it has something to do with the shared memory and the way I'm dealing with it, as rewriting the code to use global memory gives the same result whenever the code gets executed. Any ideas?
There is a memory race problem (a read-after-write correctness hazard) in that kernel, here:
if (distance < minimum[0]) {
    minimum[0] = distance;
    minimum[1] = x;
}
When executed, every thread in the block is going to try to read and write the value of minimum simultaneously. There are no guarantees about what will happen when multiple threads in a warp try writing to the same shared memory location, and there are no guarantees about what values other warps in the same block will read when loading from a memory location that is being written to. Memory access is not atomic, and there is no locking or serialization which would ensure that the code performed the type of reduction operation you seem to be trying to do.
A milder version of the same problem applies to the write back to global memory at the end of the kernel:
__syncthreads();
globalOutputData[y * 2] = minimum[0];
globalOutputData[y] = minimum[1];
The barrier before the writes ensures that the writes to minimum will have completed before a "final" (although inconsistent) value is stored in minimum, but then every thread in the block executes the write.
If your intention is to have each thread compute a distance, and then for the minimum of the distance values over the block to get written out to global memory, you will have to either use atomic memory operations (for shared memory this is supported on compute 1.2/1.3 and 2.x devices only), or write an explicit shared memory reduction. After that, only one thread should execute the write back to global memory.
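For reference, here is a sketch of such an explicit shared memory reduction, written against the question's kernel signature. It is an illustration under stated assumptions rather than the original code: it assumes one block per tower in x (as described in the question), a power-of-two block size matching BLOCK_SIZE, and an output layout of [2*y] = minimum distance, [2*y + 1] = index of the closest critter. Note that every thread executes both __syncthreads() calls, with no divergence around them.

#include <climits>

#define BLOCK_SIZE 256   // illustrative; must match the launch block size

// Each thread computes a squared distance (or INT_MAX if it has no critter),
// the block reduces to the minimum and its index, and only thread 0 writes
// the result back to global memory.
extern "C"
__global__ void calcDistanceReduced(int** globalInputData, int size, int critters,
                                    int** globalQueryData, int* globalOutputData) {
    __shared__ int sDist[BLOCK_SIZE];
    __shared__ int sIdx[BLOCK_SIZE];

    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = blockIdx.y;

    int distance = INT_MAX;               // sentinel for out-of-range threads
    if (x < critters) {
        distance = 0;
        for (int i = 0; i < size; i++) {
            int d = globalInputData[x][i] - globalQueryData[y][i];
            distance += d * d;
        }
    }
    sDist[threadIdx.x] = distance;
    sIdx[threadIdx.x]  = x;
    __syncthreads();                      // reached by every thread

    // Standard power-of-two tree reduction over the block
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride &&
            sDist[threadIdx.x + stride] < sDist[threadIdx.x]) {
            sDist[threadIdx.x] = sDist[threadIdx.x + stride];
            sIdx[threadIdx.x]  = sIdx[threadIdx.x + stride];
        }
        __syncthreads();                  // reached by every thread
    }

    if (threadIdx.x == 0) {               // single writer per block
        // Assumed output layout: [2*y] = min distance, [2*y + 1] = its index
        globalOutputData[2 * y]     = sDist[0];
        globalOutputData[2 * y + 1] = sIdx[0];
    }
}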
Finally, you also have a potential synchronization correctness problem that could cause the kernel to hang. __syncthreads() (which maps to the PTX bar instruction) demands that every thread in the block arrive and execute the instruction prior to the kernel continuing. Having this sort of control flow:
if (x < critters) {
    ....
    __syncthreads();
    ....
}
will cause the kernel to hang if some threads in the block can branch around the barrier and exit while others wait at the barrier. There should never be any branch divergence around a __syncthreads() call to ensure execution correctness of a kernel in CUDA.
So, in summary, back to the drawing board on at least three issues in the current code.