My GPU's maximum number of threads per block is 1024. I am working on an image processing project using CUDA. If I want to use shared memory, does that mean I can only work with 1024 pixels per block, and need to copy only those 1024 elements into shared memory?
Your question is quite unclear, so I will answer what is asked in the title.
The amount of data that can be held in shared memory in CUDA depends on the Compute Capability of your GPU.
For instance, on CC 2.x and 3.x:
On devices of compute capability 2.x and 3.x, each multiprocessor has 64KB of on-chip memory that can be partitioned between L1 cache and shared memory.
See the "Configuring the amount of shared memory" section here: Nvidia Parallel Forall Devblog: Using Shared Memory in CUDA C/C++
The optimization you have to think about is avoiding bank conflicts by mapping the threads' accesses to memory banks. This is introduced in that blog post, and you should read about it.
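For concreteness, here is a minimal sketch of a tiled shared memory load for image processing, assuming a 32×32 block (1024 threads, matching the per-block limit in the question), a single-channel float image, and a kernel name of my own choosing; the +1 padding column is one common way to sidestep bank conflicts on column-wise accesses:

```
// Minimal sketch: each 32x32 block stages one tile of a single-channel float image
// into shared memory. Launch with dim3 block(TILE, TILE) and a grid covering the image.
#define TILE 32

__global__ void processTile(const float* __restrict__ in, float* out,
                            int width, int height)
{
    __shared__ float tile[TILE][TILE + 1];   // +1 column of padding reduces bank conflicts

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;

    // Each thread copies exactly one pixel of the tile into shared memory.
    if (x < width && y < height)
        tile[threadIdx.y][threadIdx.x] = in[y * width + x];
    __syncthreads();

    // ... operate on tile[][] here; this sketch just writes the pixel back ...
    if (x < width && y < height)
        out[y * width + x] = tile[threadIdx.y][threadIdx.x];
}
```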
Since simultaneous access to managed memory on devices of compute capability lower than 6.x is not possible (CUDA Toolkit Documentation), is there a way to simultaneously access managed memory from the CPU and a GPU with compute capability 5.0, or any method that can make the CPU access managed memory while a GPU kernel is running?
is there a way to simultaneously access managed memory from the CPU and a GPU with compute capability 5.0
No.
or any method that can make the CPU access managed memory while a GPU kernel is running?
Not on a compute capability 5.0 device.
You can have "simultaneous" CPU and GPU access to data using CUDA zero-copy techniques.
A full tutorial on both Unified Memory and pinned/mapped/zero-copy memory is well beyond the scope of what I can write in an answer here. Unified Memory has its own section in the programming guide. Both of these topics are extensively covered here on the cuda tag on SO as well as in many other places on the web. Any questions will likely be answerable with a google search.
In a nutshell, zero-copy memory on a 64-bit OS is accessed via a host pinning API such as cudaHostAlloc(). The memory so allocated is host memory, and always remains there, but it is accessible to the GPU. Access to this memory from the GPU occurs across the PCIe bus, so it is much slower than normal global memory access. The pointer returned by the allocation (on a 64-bit OS) is usable in both host and device code. You can study CUDA sample codes that use zero-copy techniques, such as simpleZeroCopy.
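As a rough illustration of that zero-copy pattern (not a substitute for the simpleZeroCopy sample; kernel name and sizes are arbitrary, and error checking is omitted), the following sketch pins a host buffer and passes the same pointer to a kernel on a 64-bit/UVA system:

```
#include <cstdio>
#include <cuda_runtime.h>

__global__ void increment(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;            // device reads/writes cross the PCIe bus
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);           // allow mapped pinned allocations

    const int n = 1024;
    int *h_data = nullptr;
    cudaHostAlloc(&h_data, n * sizeof(int), cudaHostAllocMapped);  // pinned host memory

    for (int i = 0; i < n; ++i) h_data[i] = i;

    // On a 64-bit OS with UVA the host pointer is usable directly in device code.
    increment<<<(n + 255) / 256, 256>>>(h_data, n);
    cudaDeviceSynchronize();

    printf("h_data[0] = %d\n", h_data[0]);           // host sees the updated value
    cudaFreeHost(h_data);
    return 0;
}
```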
By contrast, ordinary unified memory (UM) is data that will be migrated to the processor that is using it. In the pre-Pascal UM regime, this migration is triggered by kernel calls and synchronizing operations; simultaneous access by host and device in this regime is not possible. For Pascal and later devices in a proper post-Pascal UM regime (basically, 64-bit Linux only, CUDA 8+), the data is migrated on demand, even during kernel execution, thus allowing a limited form of "simultaneous" access. Unified Memory has various behavior modes, and some of those will cause a unified memory allocation to "decay" into a pinned/zero-copy host allocation under some circumstances.
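For comparison, a minimal managed-memory sketch (arbitrary kernel and sizes) that relies on the migration behaviour described above:

```
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= f;
}

int main()
{
    const int n = 1 << 20;
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));   // single pointer, migrated between CPU and GPU

    for (int i = 0; i < n; ++i) data[i] = 1.0f;    // CPU touches the data first

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();   // on a pre-Pascal device (e.g. CC 5.0) the CPU must not
                               // touch 'data' before this point; on Pascal+ under the
                               // post-Pascal regime, concurrent access is possible
    float first = data[0];     // safe on any architecture after the sync
    (void)first;
    cudaFree(data);
    return 0;
}
```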
How do I access and output memory statistics (used memory, available memory) from all types of memory at runtime in CUDA C++?
Global memory, Texture memory, Shared memory, Local memory, Registers, (Constant memory?)
Bonus question: Could you point me to documentation on how to do it with the Windows CUDA profiler tool? Is memory profiling supported on all cards, or is it just some specific models that can do it?
For a runtime check of overall memory usage on the device, use the cudaMemGetInfo API. Note that there is no such thing as dedicated texture memory on NVIDIA devices. Textures are stored in global memory and there is no way to separately account for them using any of the CUDA APIs that I am aware of. You can also programmatically inquire about the size of runtime components which consume global memory (runtime heap, printf buffer, stack) using the cudaDeviceGetLimit API.
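A minimal sketch of those two runtime queries (the printed wording is my own):

```
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Overall device memory usage.
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("device memory: %zu free of %zu bytes\n", freeBytes, totalBytes);

    // Sizes of runtime components that consume global memory.
    size_t heap = 0, printfFifo = 0, stack = 0;
    cudaDeviceGetLimit(&heap, cudaLimitMallocHeapSize);
    cudaDeviceGetLimit(&printfFifo, cudaLimitPrintfFifoSize);
    cudaDeviceGetLimit(&stack, cudaLimitStackSize);
    printf("runtime heap %zu, printf buffer %zu, stack (per thread) %zu bytes\n",
           heap, printfFifo, stack);
    return 0;
}
```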
Constant memory is statically assigned at compile time, and you can get the constant memory usage for a particular translation unit via compile time switches (for example, nvcc's --ptxas-options=-v output reports the constant memory, register, and shared memory usage of each compiled kernel).
There is no way that I am aware of to check SM-level resource usage (registers, shared memory, local memory) dynamically at runtime. You can query the per-thread and per-block resource requirements of a particular kernel function at runtime using the cudaFuncGetAttributes API.
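A small sketch of such a per-kernel query; myKernel is a placeholder for any __global__ function in your program:

```
#include <cstdio>
#include <cuda_runtime.h>

__global__ void myKernel() {}

int main()
{
    cudaFuncAttributes attr;
    cudaFuncGetAttributes(&attr, myKernel);
    printf("registers per thread   : %d\n", attr.numRegs);
    printf("static shared memory   : %zu bytes\n", attr.sharedSizeBytes);
    printf("local memory per thread: %zu bytes\n", attr.localSizeBytes);
    printf("constant memory        : %zu bytes\n", attr.constSizeBytes);
    return 0;
}
```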
The Visual Profiler can show the same information collected at runtime in its detail view. I am not a big user of the Visual Profiler, so I am not sure whether it collects device-level memory usage dynamically during a run. I certainly don't recall seeing anything like that, but that doesn't mean it doesn't exist.
Could you please explain the differences between using "16 KB shared memory + 48 KB L1 cache" and "48 KB shared memory + 16 KB L1 cache" in CUDA programming? What should I expect in terms of execution time? When should I expect shorter GPU time?
On Fermi and Kepler NVIDIA GPUs, each SM has a 64KB chunk of memory which can be configured as 16/48 or 48/16 shared memory/L1 cache. Which mode you use depends on how much use of shared memory your kernel makes. If your kernel uses a lot of shared memory, then you would probably find that configuring it as 48KB of shared memory allows higher occupancy and hence better performance.
On the other hand, if your kernel does not use shared memory at all, or if it only uses a very small amount per thread, then you would configure it as 48KB L1 cache.
What counts as a "very small amount" is probably best explored with the occupancy calculator, a spreadsheet included with the CUDA Toolkit and also available here. This spreadsheet allows you to investigate the effect of different amounts of shared memory per block and different block sizes.
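As a sketch of how the split is selected in code (kernel name and sizes are placeholders), cudaDeviceSetCacheConfig() sets a device-wide preference and cudaFuncSetCacheConfig() overrides it for a particular kernel:

```
#include <cuda_runtime.h>

__global__ void heavySharedKernel(float *out)
{
    __shared__ float buf[8192];      // 32KB of static shared memory: wants the 48KB shared split
    int i = threadIdx.x;
    buf[i] = (float)i;
    __syncthreads();
    out[i] = buf[i];
}

int main()
{
    float *d_out = nullptr;
    cudaMalloc(&d_out, 128 * sizeof(float));

    // Device-wide preference: favour 48KB of L1 for kernels that use little shared memory.
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

    // Per-kernel override: this kernel prefers 48KB of shared memory instead.
    cudaFuncSetCacheConfig(heavySharedKernel, cudaFuncCachePreferShared);

    heavySharedKernel<<<1, 128>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```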
According to the "CUDA C Programming Guide", a constant memory access is beneficial only if the multiprocessor's constant cache is hit (Section 5.3.2.4) [1]. Otherwise, there can be even more memory requests for a half-warp than in the case of a coalesced global memory read. So why is the constant memory size limited to 64 KB?
One more question, so as not to ask twice: as far as I understand, in the Fermi architecture the texture cache is combined with the L2 cache. Does texture usage still make sense, or are global memory reads cached in the same manner?
[1] Constant Memory (Section 5.3.2.4)
The constant memory space resides in device memory and is cached in the constant cache mentioned in Sections F.3.1 and F.4.1.
For devices of compute capability 1.x, a constant memory request for a warp is first split into two requests, one for each half-warp, that are issued independently.
A request is then split into as many separate requests as there are different memory addresses in the initial request, decreasing throughput by a factor equal to the number of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in case of a cache hit, or at the throughput of device memory otherwise.
The constant memory size is 64 KB for compute capability 1.0-3.0 devices. The cache working set is only 8KB (see the CUDA Programming Guide v4.2 Table F-2).
Constant memory is used by the driver, compiler, and variables declared __device__ __constant__. The driver uses constant memory to communicate parameters, texture bindings, etc. The compiler uses constants in many of the instructions (see disassembly).
Variables placed in constant memory can be read and written using the host runtime functions cudaMemcpyToSymbol() and cudaMemcpyFromSymbol() (see the CUDA Programming Guide v4.2 section B.2.2). Constant memory is in device memory but is accessed through the constant cache.
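A short sketch of that host-side access path (variable and kernel names are arbitrary):

```
#include <cstdio>
#include <cuda_runtime.h>

__constant__ float coeffs[16];                // lives in the 64 KB constant space

__global__ void apply(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = coeffs[i % 16];       // device reads go through the constant cache
}

int main()
{
    float h_coeffs[16];
    for (int i = 0; i < 16; ++i) h_coeffs[i] = 0.5f * i;

    cudaMemcpyToSymbol(coeffs, h_coeffs, sizeof(h_coeffs));   // host -> constant memory

    float check[16];
    cudaMemcpyFromSymbol(check, coeffs, sizeof(check));       // constant memory -> host
    printf("coeffs[3] = %f\n", check[3]);
    return 0;
}
```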
On Fermi, the texture cache, constant cache, L1, and I-Cache are all level 1 caches in or around each SM. All level 1 caches access device memory through the L2 cache.
The 64 KB constant limit is per CUmodule which is a CUDA compilation unit. The concept of CUmodule is hidden under the CUDA runtime but accessible by the CUDA Driver API.
I know that the "maximum amount of shared memory per multiprocessor" for a GPU with Compute Capability 2.0 is 48KB, as stated in the guide.
I'm a little confused: how much shared memory can I use for each block? How many blocks run on a multiprocessor? I'm using a GeForce GTX 580.
On Fermi, you can use up to 16KB or 48KB (depending on the configuration you select) of shared memory per block; the number of blocks which will run concurrently on a multiprocessor is determined by how much shared memory and how many registers each block requires, up to a maximum of 8. If you use 48KB, then only a single block can run concurrently. If you use 1KB per block, then up to 8 blocks could run concurrently per multiprocessor, depending on their register usage.
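If in doubt, the relevant limits can be queried at runtime with cudaGetDeviceProperties; a minimal sketch (the fields printed are my own selection):

```
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // device 0
    printf("%s (compute capability %d.%d)\n", prop.name, prop.major, prop.minor);
    printf("shared memory per block         : %zu bytes\n", prop.sharedMemPerBlock);
    printf("shared memory per multiprocessor: %zu bytes\n", prop.sharedMemPerMultiprocessor);
    printf("registers per block             : %d\n", prop.regsPerBlock);
    printf("max threads per block           : %d\n", prop.maxThreadsPerBlock);
    return 0;
}
```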