Flash/AIR Context3D memory limits

In the original Context3D docs, it says:
Resource        Total memory
Vertex buffers  256 MB
Index buffers   128 MB
Programs         16 MB
Textures        128 MB¹
Cube textures   256 MB
Is this still true even with the latest graphics card monsters? (I need to be able to use more than 512MB memory for textures)
If yes, is it also true in both Flash and AIR builds?

Related

Why is the max amount of shared memory per block on Ampere GPUs not a multiple of 16 KiB?

Traditionally, NVIDIA GPUs have offered CUDA thread blocks shared memory in amounts always divisible by 16 KiB (see e.g. in this table). However, with Ampere 8.0 and 8.6 GPUs, the amounts are 99 KiB and 163 KiB. How come? Is this due to a problem with the hardware? It couldn't be a typo, could it?
Quoting https://docs.nvidia.com/cuda/cuda-c-programming-guide/#shared-memory-8-x
Similar to the Volta architecture, the amount of the unified data cache reserved for shared memory is configurable on a per kernel basis. For the NVIDIA Ampere GPU architecture, the unified data cache has a size of 192 KB for devices of compute capability 8.0 and 128 KB for devices of compute capability 8.6. The shared memory capacity can be set to 0, 8, 16, 32, 64, 100, 132 or 164 KB for devices of compute capability 8.0, and to 0, 8, 16, 32, 64 or 100 KB for devices of compute capability 8.6.
Note that the maximum amount of shared memory per thread block is smaller than the maximum shared memory partition available per SM. The 1 KB of shared memory not made available to a thread block is reserved for system use.
So the unified data cache as a whole is still a multiple of 16 KB (192 KB or 128 KB in total); it is just split between shared memory and L1 cache, and 1 KB of the shared-memory partition is reserved for system use. That reservation is what produces the odd 99 KB and 163 KB per-block maximums.

Configuration and the mapping of constant memory in CUDA

I'd like to know if the configuration of constant memory changes as the underlying architecture evolves from Kepler to Volta. To be specific, I have two questions:
1) Do the sizes of constant memory and the per-SM constant cache change?
2) What is the mapping from the cmem space to constant memory?
When compiling CUDA code to PTX with '-v' added to nvcc, we can see the memory usage like: ptxas info : Used 20 registers, 80 bytes cmem[0], 348 bytes cmem[2]. So does the cmem space map to constant memory? Does access to each cmem space go through the on-chip constant cache?
I have found the answer to the 1st question.
In the CUDA C Programming Guide, Table 14 shows the size of constant memory and constant cache for different compute capabilities.
The constant memory size is always 64 KB from CC 2.x to 6.x. The on-chip constant cache size is 8 KB up to CC 3.0 and increases to 10 KB for later architectures.

CUDA bank conflict for L1 cache?

On NVIDIA's 2.x architecture, each warp has 64kb of memory that is by default partitioned into 48kb of Shared Memory and 16kb of L1 cache (servicing global and constant memory).
We all know about the bank conflicts of accessing Shared Memory - the memory is divided into 32 banks of size 32-bits to allow simultaneous independent access by all 32 threads. On the other hand, Global Memory, though much slower, does not experience bank conflicts because memory requests are coalesced across the warp.
Question: Suppose some data from global or constant memory is cached in the L1 cache for a given warp. Is access to this data subject to bank conflicts, like Shared Memory (since the L1 Cache and the Shared Memory are in fact the same hardware), or is it bank-conflict-free in the way that Global/Constant memory is?
On NVIDIA's 2.x architecture, each warp has 64kb of memory that is by
default partitioned into 48kb of Shared Memory and 16kb of L1 cache
Compute capability 2.x devices have 64 KB of SRAM per Streaming Multiprocessor (SM) that can be configured as
16 KB L1 and 48 KB shared memory, or
48 KB L1 and 16 KB shared memory.
(servicing global and constant memory).
Loads and stores to global memory, local memory, and surface memory go through the L1. Accesses to constant memory go through dedicated constant caches.
We all know about the bank conflicts of accessing Shared Memory - the
memory is divided into 32 banks of size 32-bits to allow simultaneous
independent access by all 32 threads. On the other hand, Global
Memory, though much slower, does not experience bank conflicts because
memory requests are coalesced across the warp.
Accesses through L1 to global or local memory are done per cache line (128 B). When a load request is issued to L1, the LSU needs to perform an address divergence calculation to determine which threads are accessing the same cache line. The LSU then has to perform an L1 cache tag lookup. If the line is cached then it is written back to the register file; otherwise, the request is sent to L2. If the warp has threads not serviced by the request then a replay is requested and the operation is reissued with the remaining threads.
Multiple threads in a warp can access the same bytes in the cache line without causing a conflict.
Question: Suppose some data from global or constant memory is cached
in the L1 cache for a given warp.
Constant memory is not cached in L1; it is cached in the dedicated constant caches.
Is access to this data subject to bank conflicts, like Shared Memory
(since the L1 Cache and the Shared Memory are in fact the same
hardware), or is it bank-conflict-free in the way that Global/Constant
memory is?
L1 and the constant cache access a single cache line at a time so there are no bank conflicts.

CUDA: cudaMalloc fails to allocate a large block of memory

I have a GTX570 with 2 GB of memory. When I try to allocate more than about 804 MB with one cudaMalloc call, I get into trouble. Anyone any ideas as to why that is? It is my first call, so I doubt it is fragmentation.
No problem:
Memory avaliable: Free: 2336116736, Total: 2684026880
requesting 804913152 bytes
no error
Memory avaliable: Free: 1531199488, Total: 2684026880
requesting 804913152 bytes
no error
Memory avaliable: Free: 726286336, Total: 2684026880
Problem:
Memory avaliable: Free: 2327601152, Total: 2684026880
requesting 805306368 bytes
out of memory
Memory avaliable: Free: 2327597056, Total: 2684026880
requesting 805306368 bytes
out of memory
Memory avaliable: Free: 2327597056, Total: 2684026880
This is caused by restrictions imposed by the Windows WDDM subsystem. There is a hard limit on how much memory can be allocated in a single call, calculated as
MIN ( ( System Memory Size in MB - 512 MB ) / 2, PAGING_BUFFER_SEGMENT_SIZE )
For desktop Windows, PAGING_BUFFER_SEGMENT_SIZE is about 2 GB IIRC. You have two options to work around this:
Get a Tesla card and use the dedicated Windows TCC-mode driver, which takes memory management of the device away from WDDM, eliminating the restriction.
Install Linux or use a CUDA-aware live distribution for your GPU computing. The Linux driver has no restrictions on memory allocations beyond the free memory capacity of the device.

Loading the same 32 bytes (ulong4) from shared memory in every thread of a warp

If every thread of a warp accesses shared memory at the same address, how would the 32 bytes of data (ulong4) be loaded? Will they be 'broadcast'? Would the access time be the same as if each thread loaded a 2-byte unsigned short int?
Now, in case I need to load from shared memory 32/64 same bytes in each warp, how could I do this?
On devices before compute capability 3.0 shared memory accesses are always 32 bit / 4 bytes wide and will be broadcast if all threads of a warp access the same address. Wider accesses will compile to multiple instructions.
On compute capability 3.0 shared memory accesses can be configured to be either 32 bit wide or 64 bit wide using cudaDeviceSetSharedMemConfig(). The chosen setting will apply to the entire kernel though.
[As I had originally missed the little word "shared" in the question, I gave a completely off-topic answer for global memory instead. Since that one should still be correct, I'll leave it in here:]
It depends:
Compute capability 1.0 and 1.1 don't broadcast and use 64 separate 32 byte memory transactions (two times 16 bytes, extended to the minimum 32 byte transactions size for each thread of the warp)
Compute capability 1.2 and 1.3 broadcast, so two 32 byte transactions (two times 16 bytes, extended to minimum 32 byte transaction size) suffice for all threads of the warp
Compute capability 2.0 and higher just read a 128 byte cache line and satisfy all requests from there.
The compute capability 1.x devices will waste 50% of the transferred data, as a single thread can load at most 16 bytes but the minimum transaction size is 32 bytes. Additionally, 32-byte transactions are a lot slower than 128-byte transactions.
The time will be the same as if just 8 bytes were read by each thread because of the minimum transaction size, and because data paths are sufficiently wide to transfer either 8 or 16 bytes to each thread per transaction.
Reading 2× or 4× the data will take 2× or 4× as long on compute capability 1.x, but only minimally longer on 2.0 and higher if the data falls into the same cache line so no further memory transactions are necessary.
So on compute capability 2.0 and higher you don't need to worry. On 1.x read the data through the constant cache or a texture if it is constant, or reorder it in shared memory otherwise (assuming your kernel is memory bandwidth bound).