CUDA not so fast against CPU with OpenMP?

I am trying to compute the cross-correlation among 450 vectors, each of size 20000.
While doing this on the CPU, I stored the data in a 2D matrix with rows=20000 and cols=450.
The serial code for the computation looks like this:
void computeFF_cpu( int nSamples, int nFeatures, float ** data, float ** corr )
{
    #pragma omp parallel for shared(corr, data)
    for( int i=0 ; i<nFeatures ; i++ )
    {
        for( int j=0 ; j<nFeatures ; j++ )
            corr[i][j] = pearsonCorr( data[i], data[j], nSamples );
    }
}
int main()
{
    // ...
    for( int z=0 ; z<1000 ; z++ )   // timing loop
        computeFF_cpu( 20000, 450, data, corr );
    // ...
}
This works perfectly. Now I have attempted to solve this problem on the GPU. I have converted the 2D data matrix into row-major format in GPU memory, and I have verified that the copy is made correctly.
The vectors are stored as a matrix of size 9000000 (i.e. 450*20000) in row-major format, organized as follows:
<---nSamples of f1---><---nSamples of f2 ---><---nSamples of f3--->......
My CUDA code to compute the cross-correlation is as follows:
// kernel for computation of ff
__global__ void computeFFCorr(int nSamples, int nFeatures, float * dev_data, float * dev_ff)
{
    int tid = blockIdx.x + blockIdx.y*gridDim.x;
    if( blockIdx.x == blockIdx.y )
        dev_ff[tid] = 1.0;
    else if( tid < nFeatures*nFeatures )
        dev_ff[tid] = pearsonCorrelationScore_gpu( dev_data+(blockIdx.x*nSamples), dev_data+(blockIdx.y*nSamples), nSamples );
}

int main()
{
    // ...
    // Call kernel for computation of ff
    for( int z=0 ; z<1000 ; z++ )   // timing loop
        computeFFCorr<<<dim3(nFeatures,nFeatures),1>>>(nSamples, nFeatures, dev_data, corr);
    // nSamples = 20000
    // nFeatures = 450
    // dev_data -> data matrix in row major form
    // corr -> result matrix also stored in row major
    // ...
}
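The device function pearsonCorrelationScore_gpu is not shown in the question; for reference, a minimal single-thread sketch of what it presumably computes (the standard Pearson correlation, matching the CPU-side pearsonCorr) could look like this. Treat it as an assumption, not the original code:
// Hypothetical sketch: plain single-thread Pearson correlation of two
// length-n vectors, as presumably called by computeFFCorr above.
__device__ float pearsonCorrelationScore_gpu( const float * x, const float * y, int n )
{
    float sx = 0.0f, sy = 0.0f, sxx = 0.0f, syy = 0.0f, sxy = 0.0f;
    for( int k = 0 ; k < n ; k++ )
    {
        sx  += x[k];
        sy  += y[k];
        sxx += x[k] * x[k];
        syy += y[k] * y[k];
        sxy += x[k] * y[k];
    }
    float num = n * sxy - sx * sy;
    float den = sqrtf( n * sxx - sx * sx ) * sqrtf( n * syy - sy * sy );
    return num / den;
}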

It seems I have found an answer to my own question, from the following experiment: I varied the value of z (i.e. the number of times the function is executed). This kind of approach was suggested in several previous posts on Stack Overflow under the cuda tag.
Here is the table:
Z=100 ; CPU=11s ; GPU=14s
Z=200 ; CPU=18s ; GPU=23s
Z=300 ; CPU=26s ; GPU=34s
Z=500 ; CPU=41s ; GPU=53s
Z=1000; CPU=99s ; GPU=101s
Z=1500; CPU=279s; GPU=150s
Z=2000; CPU=401s; GPU=203s
It is evident that as the amount of computation grows, the GPU scales much better than the CPU.


CUDA's nvvp reports non-ideal memory access pattern, but bandwidth is almost peaking

EDIT: a new minimal working example to illustrate the question, and a better explanation of nvvp's output (following suggestions given in the comments).
So, I have crafted a "minimal" working example, which follows:
#include <cuComplex.h>
#include <iostream>
int const n = 512 * 100;
typedef float real;
template < class T >
struct my_complex {
T x;
T y;
};
__global__ void set( my_complex< real > * a )
{
my_complex< real > & d = a[ blockIdx.x * 1024 + threadIdx.x ];
d = { 1.0f, 0.0f };
}
__global__ void duplicate_whole( my_complex< real > * a )
{
my_complex< real > & d = a[ blockIdx.x * 1024 + threadIdx.x ];
d = { 2.0f * d.x, 2.0f * d.y };
}
__global__ void duplicate_half( real * a )
{
real & d = a[ blockIdx.x * 1024 + threadIdx.x ];
d *= 2.0f;
}
int main()
{
my_complex< real > * a;
cudaMalloc( ( void * * ) & a, sizeof( my_complex< real > ) * n * 1024 );
set<<< n, 1024 >>>( a );
cudaDeviceSynchronize();
duplicate_whole<<< n, 1024 >>>( a );
cudaDeviceSynchronize();
duplicate_half<<< 2 * n, 1024 >>>( reinterpret_cast< real * >( a ) );
cudaDeviceSynchronize();
my_complex< real > * a_h = new my_complex< real >[ n * 1024 ];
cudaMemcpy( a_h, a, sizeof( my_complex< real > ) * n * 1024, cudaMemcpyDeviceToHost );
std::cout << "( " << a_h[ 0 ].x << ", " << a_h[ 0 ].y << " )" << '\t' << "( " << a_h[ n * 1024 - 1 ].x << ", " << a_h[ n * 1024 - 1 ].y << " )" << std::endl;
return 0;
}
When I compile and run the above code, kernels duplicate_whole and duplicate_half take just about the same time to run.
However, when I analyze the kernels using nvvp I get different reports for each of the kernels in the following sense. For kernel duplicate_whole, nvvp warns me that at line 23 (d = { 2.0f * d.x, 2.0f * d.y };) the kernel is performing
Global Load L2 Transaction/Access = 8, Ideal Transaction/Access = 4
I agree that I am loading 8-byte words. What I do not understand is why 4 bytes is the ideal word size. In particular, there is no performance difference between the kernels.
I suppose that there must be circumstances where this global store access pattern could cause performance degradation. What are these?
And why is it that I do not get a performance hit?
I hope that this edit has clarified some unclear points.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I'll start with some kernel code to exemplify my question, which follows below:
template < class data_t >
__global__ void chirp_factors_multiply( std::complex< data_t > const * chirp_factors,
std::complex< data_t > * data,
int M,
int row_length,
int b,
int i_0
)
{
#ifndef CUGALE_MUL_SHUFFLE
// Output array length:
int plane_area = row_length * M;
// Process element:
int i = blockIdx.x * row_length + threadIdx.x + i_0;
my_complex< data_t > const chirp_factor = ref_complex( chirp_factors[ i ] );
my_complex< data_t > datum;
my_complex< data_t > datum_new;
for ( int i_b = 0; i_b < b; ++ i_b )
{
my_complex< data_t > & ref_datum = ref_complex( data[ i_b * plane_area + i ] );
datum = ref_datum;
datum_new.x = datum.x * chirp_factor.x - datum.y * chirp_factor.y;
datum_new.y = datum.x * chirp_factor.y + datum.y * chirp_factor.x;
ref_datum = datum_new;
}
#else
// Output array length:
int plane_area = row_length * M;
// Element to process:
int i = blockIdx.x * row_length + ( threadIdx.x + i_0 ) / 2;
my_complex< data_t > const chirp_factor = ref_complex( chirp_factors[ i ] );
// Real and imaginary part of datum (not respectively for odd threads):
data_t datum_a;
data_t datum_b;
// Even TIDs will read data in regular order, odd TIDs will read data in inverted order:
int parity = ( threadIdx.x % 2 );
int shuffle_dir = 1 - 2 * parity;
int inwarp_tid = threadIdx.x % warpSize;
for ( int i_b = 0; i_b < b; ++ i_b )
{
int data_idx = i_b * plane_area + i;
datum_a = reinterpret_cast< data_t * >( data + data_idx )[ parity ];
datum_b = __shfl_sync( 0xFFFFFFFF, datum_a, inwarp_tid + shuffle_dir, warpSize );
// Even TIDs compute real part, odd TIDs compute imaginary part:
reinterpret_cast< data_t * >( data + data_idx )[ parity ] = datum_a * chirp_factor.x - shuffle_dir * datum_b * chirp_factor.y;
}
#endif // #ifndef CUGALE_MUL_SHUFFLE
}
Let us consider the case where data_t is float, in which the kernel is memory-bandwidth limited. As can be seen above, there are two versions of the kernel: one which reads/writes 8 bytes (a whole complex number) per thread, and another which reads/writes 4 bytes per thread and then shuffles the results so that the complex product is computed correctly.
The reason I wrote the shuffle version is that nvvp insisted that reading 8 bytes per thread was not the best idea, because this memory access pattern would be inefficient. This is the case even though on both systems tested (GTX 1050 and GTX Titan Xp) memory bandwidth was very close to the theoretical maximum.
Sure enough, I knew that no improvement was likely to happen, and this was indeed the case: both kernels take pretty much the same time to run. So, my question is the following:
Why does nvvp report that reading 8 bytes would be less efficient than reading 4 bytes per thread? In which circumstances would that be the case?
As a side note, single precision is more important to me, but double is useful in some cases too. Interestingly enough, in the case where data_t is double there is also no execution time difference between the two kernel versions, even though in this case the kernel is compute bound and the shuffle version performs a few more flops than the original version.
Note: the kernels are applied to a row_length * M * b dataset (b images with row_length columns and M lines) and the chirp_factor array is row_length * M. Both kernels run perfectly fine (I can edit the question to show you the calls to both versions if you have doubts about it).
The issue here has to do with how the compiler is processing your code. nvvp is merely dutifully reporting what is happening when you run your code.
If you use the cuobjdump -sass tool on your executable, you will discover that the duplicate_whole routine is doing two 4-byte loads and two 4-byte stores. This is not optimal, partly because there is a stride in each load and store (each load and store touches alternate elements in memory).
The reason for this is that the compiler does not know the alignment of your my_complex struct. Your struct would be legal for use in situations that would prevent the compiler from generating a (legal) 8-byte load. As discussed here we can fix this by informing the compiler that we only intend to use the struct in alignment scenarios where a CUDA 8-byte load is legal (i.e. it is "naturally aligned"). The modification to your struct looks like this:
template < class T >
struct __align__(8) my_complex {
T x;
T y;
};
With that change to your code, the compiler generates 8-byte loads for the duplicate_whole kernel, and you should see a different report from the profiler. You should use this sort of decoration only when you understand what it means and are willing to enter into a contract with the compiler that you will ensure this is the case. If you do something unusual, like unusual pointer casting, you can violate your end of the bargain and generate a machine fault.
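As a side note (not part of the original answer), for the float case the built-in float2 vector type already carries this 8-byte alignment, so an equivalent alternative is to use it directly; a minimal sketch:
// Sketch: float2 is naturally 8-byte aligned, so the compiler can emit a
// single 8-byte load and store per thread without any extra decoration.
__global__ void duplicate_whole_f2( float2 * a )
{
    float2 & d = a[ blockIdx.x * 1024 + threadIdx.x ];
    d = make_float2( 2.0f * d.x, 2.0f * d.y );
}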
The reason you don't see much performance difference almost certainly has to do with CUDA load/store behavior and the GPU caches.
When you do a strided load, the GPU loads an entire cacheline anyway, even though (in this case) you only need half the elements (the real elements) for that particular load operation. However you need the other half of the elements (the imaginary elements) anyway; they will be loaded on the next instruction, and this instruction most likely hits in the cache, due to the previous load.
On a strided store in this case, writing strided elements in one instruction and the alternate elements in the next instruction will end up using one of the caches as a "coalescing buffer". This isn't coalescing in the typical sense used in CUDA terminology; that sort of coalescing only applies to a single instruction. However the cache "coalescing buffer" behavior allows it to "accumulate" multiple writes to an already-resident line, before that line gets written out or evicted. This is approximately equivalent to "write-back" cache behavior.

Construct binary tree recursively in cuda

I want to build a binary tree in a vector such that each parent's value is the sum of its two children. Building the tree recursively in C would look like this:
int construct(int elements[], int start, int end, int* tree, int index) {
    if (start == end) {
        tree[index] = elements[start];
        return tree[index];
    }
    int middle = start + (end - start) / 2;
    // left child covers [start, middle], right child covers [middle + 1, end]
    tree[index] = construct(elements, start, middle, tree, index*2) +
                  construct(elements, middle + 1, end, tree, index*2+1);
    return tree[index];
}
But I don't know how to build it in the CUDA in a parallel way by utilizing the thread. One reference I found useful is
How should we go about parallelizing this kind of recursive algorithm? One way is to use the approach presented by Garanzha et al., which processes the levels of nodes sequentially, starting from the root. The idea is to maintain a growing array of nodes in a breadth-first order, so that every level in the hierarchy corresponds to a linear range of nodes. On a given level, we launch one thread for each node that falls into this range. The thread starts by reading first and last from the node array and calling findSplit(). It then appends the resulting child nodes to the same node array using an atomic counter and writes out their corresponding sub-ranges. This process iterates so that each level outputs the nodes contained on the next level, which then get processed in the next round.
which processes each level sequentially and parallelizes the nodes within each level. I think it makes total sense, but I don't know how to implement that exactly. Can somebody give me an idea or an example of how to do that?
I am not sure the indexing scheme described above would work.
Here is some sample code that could work (though the tree indexing might not suit your needs):
__global__ void buildtreelevel(const int* elements, int count, int* tree)
{
int parentcount = (count + 1) >> 1;
for (int k = threadIdx.x + blockDim.x * blockIdx.x ; k < parentcount ; k += blockDim.x * gridDim.x)
{
if ((2*k+1) < count)
tree[k] = elements[k*2] + elements[k*2+1] ;
else
tree[k] = elements[k*2] ;
}
}
This function only processes one tree level at a time. The overall tree size is provided by:
int treesize (int count, int& maxlevel)
{
int res = 1 ;
while (count > 1)
{
count = (count + 1) >> 1 ;
res += count ;
++maxlevel;
}
return res ;
}
And building the whole tree requires several calls to the buildtreelevel kernel:
int buildtree (int grid, int block, const int* d_elements, int count, int** h_tree, int* d_data)
{
const int* ptr_elements = d_elements ;
int* ptr_data = d_data ;
int level = 0 ;
int levelcount = count ;
while (levelcount > 1)
{
buildtreelevel <<< grid, block >>> (ptr_elements, levelcount, ptr_data) ;
levelcount = (levelcount + 1) >> 1 ;
h_tree [level++] = ptr_data ;
ptr_elements = ptr_data ;
ptr_data += levelcount ;
}
return level ;
}
Synchronization only needs to occur at the end as all kernels are executed on stream 0.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int nElements = 10000000 ;
    int* d_elements ;
    int* d_data ;
    int** h_tree ;
    int maxlevel = 1 ;
    cudaMalloc ((void**)&d_elements, nElements * sizeof (int)) ;
    cudaMalloc ((void**)&d_data, treesize(nElements, maxlevel) * sizeof (int)) ;
    h_tree = new int*[maxlevel];
    buildtree (64, 256, d_elements, nElements, h_tree, d_data) ;
    cudaError_t res = cudaDeviceSynchronize() ;
    if (cudaSuccess != res)
        fprintf (stderr, "ERROR (%d) : %s \n", res, cudaGetErrorString(res));
    cudaDeviceReset();
    return 0 ;
}
Your tree structure is stored in h_tree, which is a host array of device pointers.
This is not optimal, but it might be a good start; using aligned int4 loads with __ldg and processing 4 levels at a time might improve performance (a simplified sketch of the idea follows below).
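For illustration only, here is a hedged sketch of that idea, simplified to two tree levels per pass. It assumes count is a multiple of 4, elements is 16-byte aligned, and a device of compute capability 3.5 or higher for __ldg; the names buildtwolevels, level1 and level2 are hypothetical:
// Sketch: each thread reads four consecutive elements through the
// read-only cache and writes two parents plus their common grandparent.
__global__ void buildtwolevels (const int* elements, int count, int* level1, int* level2)
{
    int quadcount = count >> 2 ;
    for (int k = threadIdx.x + blockDim.x * blockIdx.x ; k < quadcount ; k += blockDim.x * gridDim.x)
    {
        int4 e = __ldg (reinterpret_cast<const int4*>(elements) + k) ;
        int p0 = e.x + e.y ;          // first parent
        int p1 = e.z + e.w ;          // second parent
        level1[2*k]   = p0 ;
        level1[2*k+1] = p1 ;
        level2[k]     = p0 + p1 ;     // grandparent
    }
}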

cudaMemcpyAsync and streams behaviour understanding

I have this simple code shown below, which does nothing but copy some data from host to device using streams. But after running nvprof I am confused about whether cudaMemcpyAsync is really asynchronous, and about my understanding of streams.
#include <stdio.h>
#define NUM_STREAMS 4
cudaError_t memcpyUsingStreams (float *fDest,
float *fSrc,
int iBytes,
cudaMemcpyKind eDirection,
cudaStream_t *pCuStream)
{
int iIndex = 0 ;
cudaError_t cuError = cudaSuccess ;
int iOffset = 0 ;
iOffset = (iBytes / NUM_STREAMS) ;
/*Creating streams if not present */
if (NULL == pCuStream)
{
pCuStream = (cudaStream_t *) malloc(NUM_STREAMS * sizeof(cudaStream_t));
for (iIndex = 0 ; iIndex < NUM_STREAMS; iIndex++)
{
cuError = cudaStreamCreate (&pCuStream[iIndex]) ;
}
}
if (cuError != cudaSuccess)
{
cuError = cudaMemcpy (fDest, fSrc, iBytes, eDirection) ;
}
else
{
for (iIndex = 0 ; iIndex < NUM_STREAMS; iIndex++)
{
/* Offset in elements (floats), not bytes, for this stream's chunk */
int iElemOffset = iIndex * (int)(iOffset / sizeof (float)) ;
cuError = cudaMemcpyAsync (fDest + iElemOffset , fSrc + iElemOffset, iBytes / NUM_STREAMS , eDirection, pCuStream[iIndex]) ;
}
}
if (NULL != pCuStream)
{
for (iIndex = 0 ; iIndex < NUM_STREAMS; iIndex++)
{
cuError = cudaStreamDestroy (pCuStream[iIndex]) ;
}
free (pCuStream) ;
}
return cuError ;
}
int main()
{
float *hdata = NULL ;
float *ddata = NULL ;
int i, j, k, index ;
cudaStream_t *abc = NULL ;
hdata = (float *) malloc (sizeof (float) * 256 * 256 * 256) ;
cudaMalloc ((void **) &ddata, sizeof (float) * 256 * 256 * 256) ;
for (i=0 ; i< 256 ; i++)
{
for (j=0; j< 256; j++)
{
for (k=0; k< 256 ; k++)
{
index = (((i * 256) + j) * 256) + k;
hdata [index] = index ;
}
}
}
memcpyUsingStreams (ddata, hdata, sizeof (float) * 256 * 256 * 256, cudaMemcpyHostToDevice, abc) ;
cudaFree (ddata) ;
free (hdata) ;
return 0;
}
The nvprof results are as below.
Start Duration Grid Size Block Size Regs* SSMem* DSMem* Size Throughput Device Context Stream Name
104.35ms 10.38ms - - - - - 16.78MB 1.62GB/s 0 1 7 [CUDA memcpy HtoD]
114.73ms 10.41ms - - - - - 16.78MB 1.61GB/s 0 1 8 [CUDA memcpy HtoD]
125.14ms 10.46ms - - - - - 16.78MB 1.60GB/s 0 1 9 [CUDA memcpy HtoD]
135.61ms 10.39ms - - - - - 16.78MB 1.61GB/s 0 1 10 [CUDA memcpy HtoD]
So, given the start times, I don't understand the point of using streams here; it looks sequential to me. Please help me understand what I am doing wrong. I am using a Tesla K20c card.
The PCI Express link that connects your GPU to the system only has one channel going to the card and one channel coming from the card. That means at most, you can have a single cudaMemcpy(Async) operation that is actually executing at any given time, per direction (i.e. one DtoH and one HtoD, at the most). All other cudaMemcpy(Async) operations will get queued up, waiting for those ahead to complete.
You cannot have two operations going in the same direction at the same time. One at a time, per direction.
As #JackOLantern states, the principal benefit for streams is to overlap memcopies and compute, or else to allow multiple kernels to execute concurrently. It also allows one DtoH copy to run concurrently with one HtoD copy.
Since your program does all HtoD copies, they all get executed serially. Each copy has to wait for the copy ahead of it to complete.
Even getting an HtoD and DtoH memcopy to execute concurrently requires a device with multiple copy engines; you can discover this about your device using deviceQuery.
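If you would rather query this programmatically than run deviceQuery, a minimal sketch (not part of the original answer) reads the asyncEngineCount device property:
#include <cuda_runtime.h>
#include <stdio.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties (&prop, 0) ;
    // asyncEngineCount is the number of copy engines:
    // 1 = one transfer at a time, 2 = concurrent HtoD and DtoH possible.
    printf ("%s: %d copy engine(s)\n", prop.name, prop.asyncEngineCount) ;
    return 0;
}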
I should also point out, to enable concurrent behavior, you should use cudaHostAlloc, not malloc, for your host side buffers.
EDIT: The answer above has GPUs in view that have at most 2 copy engines (one per direction) and is still correct for such GPUs. However there exist some newer Pascal and Volta family member GPUs that have more than 2 copy engines. In that case, with 2 (or more) copy engines per direction, it is theoretically possible to have 2 (or more) transfers "in-flight" in that direction. However this doesn't change the characteristics of the PCIE (or NVLink) bus itself. You are still limited to the available bandwidth, and the exact low level behavior (whether such transfers appear to be "serialized" or else appear to run concurrently, but take longer due to sharing of bandwidth) should not matter much in most cases.
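To illustrate the cudaHostAlloc point above, here is a minimal, hedged sketch of how the host buffer from the question could be allocated as pinned (page-locked) memory so that cudaMemcpyAsync can actually behave asynchronously (error checking omitted):
#include <cuda_runtime.h>

int main()
{
    const size_t nElems = 256 * 256 * 256 ;
    float *hdata = NULL ;
    float *ddata = NULL ;
    // Pinned host allocation instead of malloc: required for true async copies.
    cudaHostAlloc ((void **) &hdata, nElems * sizeof (float), cudaHostAllocDefault) ;
    cudaMalloc ((void **) &ddata, nElems * sizeof (float)) ;
    cudaStream_t stream ;
    cudaStreamCreate (&stream) ;
    cudaMemcpyAsync (ddata, hdata, nElems * sizeof (float), cudaMemcpyHostToDevice, stream) ;
    cudaStreamSynchronize (stream) ;
    cudaStreamDestroy (stream) ;
    cudaFree (ddata) ;
    cudaFreeHost (hdata) ;   // pinned memory is released with cudaFreeHost, not free
    return 0;
}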

Texture fetch slower than direct global access, chapter 7 from "Cuda by example" book

I am reading and testing the examples in the book "CUDA by Example: An Introduction to General-Purpose GPU Programming".
When testing the examples in chapter 7, on texture memory, I realized that access to global memory via the texture cache is much slower than direct access (my NVIDIA GPU is a GeForce GTX 260, compute capability 1.3, and I am using NVIDIA CUDA 4.2):
Time per frame with texture fetch (1D or 2D) for a 256*256 image: 93 ms
Time per frame not using texture (just direct global access) for 256*256: 8.5 ms
I have double checked the code several times and I have also been reading the "CUDA C Programming guide" and "CUDA C Best practices Guide" which come along with the SDK, and I do not really understand the problem.
As far as I understand, texture memory is just global memory with a specific access mechanism implemented to make it look like a cache (?). I am wondering whether coalesced access to global memory makes the texture fetch slower by comparison, but I cannot be sure.
Does anybody have a similar problem?
(I found some links in the NVIDIA forums about a similar problem, but they are no longer available.)
The testing code looks like this, including only the relevant parts:
//#define TEXTURE
//#define TEXTURE2
#ifdef TEXTURE
// According to C programming guide, it should be static (3.2.10.1.1)
static texture<float> texConstSrc;
static texture<float> texIn;
static texture<float> texOut;
#endif
__global__ void copy_const_kernel( float *iptr
#ifdef TEXTURE2
){
#else
,const float *cptr ) {
#endif
// map from threadIdx/BlockIdx to pixel position
int x = threadIdx.x + blockIdx.x * blockDim.x;
int y = threadIdx.y + blockIdx.y * blockDim.y;
int offset = x + y * blockDim.x * gridDim.x;
#ifdef TEXTURE2
float c = tex1Dfetch(texConstSrc,offset);
#else
float c = cptr[offset];
#endif
if ( c != 0) iptr[offset] = c;
}
__global__ void blend_kernel( float *outSrc,
#ifdef TEXTURE
bool dstOut ) {
#else
const float *inSrc ) {
#endif
// map from threadIdx/BlockIdx to pixel position
int x = threadIdx.x + blockIdx.x * blockDim.x;
int y = threadIdx.y + blockIdx.y * blockDim.y;
int offset = x + y * blockDim.x * gridDim.x;
int left = offset - 1;
int right = offset + 1;
if (x == 0) left++;
if (x == SXRES-1) right--;
int top = offset - SYRES;
int bottom = offset + SYRES;
if (y == 0) top += SYRES;
if (y == SYRES-1) bottom -= SYRES;
#ifdef TEXTURE
float t, l, c, r, b;
if (dstOut) {
t = tex1Dfetch(texIn,top);
l = tex1Dfetch(texIn,left);
c = tex1Dfetch(texIn,offset);
r = tex1Dfetch(texIn,right);
b = tex1Dfetch(texIn,bottom);
} else {
t = tex1Dfetch(texOut,top);
l = tex1Dfetch(texOut,left);
c = tex1Dfetch(texOut,offset);
r = tex1Dfetch(texOut,right);
b = tex1Dfetch(texOut,bottom);
}
outSrc[offset] = c + SPEED * (t + b + r + l - 4 * c);
#else
outSrc[offset] = inSrc[offset] + SPEED * ( inSrc[top] +
inSrc[bottom] + inSrc[left] + inSrc[right] -
inSrc[offset]*4);
#endif
}
// globals needed by the update routine
struct DataBlock {
unsigned char *output_bitmap;
float *dev_inSrc;
float *dev_outSrc;
float *dev_constSrc;
cudaEvent_t start, stop;
float totalTime;
float frames;
unsigned size;
unsigned char *output_host;
};
void anim_gpu( DataBlock *d, int ticks ) {
checkCudaErrors( cudaEventRecord( d->start, 0 ) );
dim3 blocks(SXRES/16,SYRES/16);
dim3 threads(16,16);
#ifdef TEXTURE
volatile bool dstOut = true;
#endif
for (int i=0; i<90; i++) {
#ifdef TEXTURE
float *in, *out;
if (dstOut) {
in = d->dev_inSrc;
out = d->dev_outSrc;
} else {
out = d->dev_inSrc;
in = d->dev_outSrc;
}
#ifdef TEXTURE2
copy_const_kernel<<<blocks,threads>>>( in );
#else
copy_const_kernel<<<blocks,threads>>>( in,
d->dev_constSrc );
#endif
blend_kernel<<<blocks,threads>>>( out, dstOut );
dstOut = !dstOut;
#else
copy_const_kernel<<<blocks,threads>>>( d->dev_inSrc,
d->dev_constSrc );
blend_kernel<<<blocks,threads>>>( d->dev_outSrc,
d->dev_inSrc );
swap( d->dev_inSrc, d->dev_outSrc );
#endif
}
// Some stuff for the events
// ...
}
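For context, the excerpt above omits the allocation and texture-binding setup; in the book it is done roughly as follows (a reconstruction rather than the exact code, with imageSize = SXRES * SYRES * sizeof(float)):
// Sketch of the omitted setup for a DataBlock d:
cudaMalloc( (void**)&d.dev_inSrc,    imageSize );
cudaMalloc( (void**)&d.dev_outSrc,   imageSize );
cudaMalloc( (void**)&d.dev_constSrc, imageSize );
#ifdef TEXTURE
cudaBindTexture( NULL, texConstSrc, d.dev_constSrc, imageSize );
cudaBindTexture( NULL, texIn,       d.dev_inSrc,    imageSize );
cudaBindTexture( NULL, texOut,      d.dev_outSrc,   imageSize );
#endif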
I have been testing the results with nvvp (the NVIDIA Visual Profiler).
The results are quite curious, as they show a lot of texture cache misses (which are probably the cause of the bad performance).
The profiler results also show information that is difficult to understand, even with the CUPTI User's Guide:
tex_cache_hit: number of texture cache hits (counted for only one SM, per the 1.3 capability documentation).
tex_cache_miss: number of texture cache misses (likewise counted for only one SM).
The following are the results for an example of 256*256 without using texture cache (only relevant info is shown):
Name Duration(ns) Grid_Size Block_Size
"copy_const_kernel(...) 22688 16,16,1 16,16,1
"blend_kernel(...)" 51360 16,16,1 16,16,1
Following are the results using 1D texture cache:
Name Duration(ns) Grid_Size Block_Size tex_cache_hit tex_cache_miss
"copy_const_kernel(...)" 147392 16,16,1 16,16,1 0 1024
"blend_kernel(...)" 841728 16,16,1 16,16,1 79 5041
Following are the results using 2D texture cache:
Name Duration(ns) Grid_Size Block_Size tex_cache_hit tex_cache_miss
"copy_const_kernel(...)" 150880 16,16,1 16,16,1 0 1024
"blend_kernel(...)" 872832 16,16,1 16,16,1 2971 2149
These results show several interesting things:
There are no cache hits at all for the "copy const" function (although ideally the memory is "spatially located", in the sense that each thread accesses memory near the memory accessed by other nearby threads). I guess this is because the threads within this function do not access memory read by other threads, which seems to be what makes the texture cache useful (the "spatially located" concept is quite confusing to me).
There are some cache hits in the 1D case and a lot more in the 2D case for the "blend_kernel" function. I guess this is because, within that function, each thread accesses memory belonging to its neighbouring threads. I cannot understand why there are more hits in 2D than in 1D.
The duration is greater in the texture cases than in the no-texture case (by nearly an order of magnitude), perhaps related to the many texture cache misses.
For the "copy_const" function there are 1024 total accesses for the SM, and 5120 for the "blend" kernel. The 5:1 ratio is correct, since there are 5 fetches in "blend" and only 1 in "copy_const". Anyway, I cannot understand where these 1024 come from: ideally, the "tex cache miss/hit" event only accounts for one SM (I have 24 in my GeForce GTX 260) and it only counts warps (of 32 threads). Therefore, I have 256 threads / 32 = 8 warps per block and 256 blocks / 24 = 10 or 11 "iterations" per SM, so I would expect something like 80 or 88 fetches. Moreover, some other events, like sm_cta_launched (the number of thread blocks per SM, which is supposed to be supported on my 1.3 device), are always 0...

CUDA kernel call in a simple sample

This is the first parallel code example from CUDA by Example.
Can anyone explain the kernel call: <<< N , 1 >>>?
This is the code, with the important parts shown:
#define N 10
__global__ void add( int *a, int *b, int *c ) {
int tid = blockIdx.x; // this thread handles the data at its thread id
if (tid < N)
c[tid] = a[tid] + b[tid];
}
int main( void ) {
int a[N], b[N], c[N];
int *dev_a, *dev_b, *dev_c;
// allocate the memory on the GPU
// fill the arrays 'a' and 'b' on the CPU
// copy the arrays 'a' and 'b' to the GPU
add<<<N,1>>>( dev_a, dev_b, dev_c );
// copy the array 'c' back from the GPU to the CPU
// display the results
// free the memory allocated on the GPU
return 0;
}
Why does it use <<< N , 1 >>>, which means N blocks with 1 thread in each block? Couldn't we instead write <<< 1 , N >>>, using 1 block with N threads in that block, for better performance?
For this little example, there is no particular reason (as Bart already told you in the comments). But for a larger, more realistic example you should always keep in mind that the number of threads per block is limited. That is, if you use N = 10000, you could not use <<<1,N>>> anymore, but <<<N,1>>> would still work.
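To make the limit concrete, here is a hedged sketch (not from the original answer) of the usual pattern for large N: many blocks of many threads, with a computed global index and a bounds check:
#define N 10000

__global__ void add( const int *a, const int *b, int *c ) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (tid < N)                                      // guard the tail
        c[tid] = a[tid] + b[tid];
}

// launch with 256 threads per block and enough blocks to cover N:
// add<<<(N + 255) / 256, 256>>>( dev_a, dev_b, dev_c );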