modular arithmetic on the gpu - cuda

I am working on a GPU algorithm that has to do a lot of modular computations. In particular, it performs various operations on matrices over a finite field which ultimately
reduce to primitive operations like (a*b - c*d) mod m or (a*b + c) mod m, where a, b, c and d are residues modulo m and m is a 32-bit prime.
Through experimentation I learned that the performance of the algorithm is mostly limited by slow modular arithmetic, because integer modulo (%) and division operations are not supported in hardware on the GPU.
I would appreciate it if somebody could give me an idea of how to realize efficient modular computations with CUDA.
To see how this is implemented on CUDA, I use the following code snippet:
__global__ void mod_kernel(unsigned *gout, const unsigned *gin) {
    unsigned tid = threadIdx.x;
    unsigned a = gin[tid], b = gin[tid * 2], m = gin[tid * 3];
    typedef unsigned long long u64;

    __syncthreads();
    unsigned r = (unsigned)(((u64)a * (u64)b) % m);
    __syncthreads();
    gout[tid] = r;
}
This code is not supposed to do anything useful; I just wanted to see how modular reduction is
implemented on CUDA.
When I disassemble this with cuobjdump --dump-sass (thanks njuffa for advice!), I see the following:
/*0098*/ /*0xffffdc0450ee0000*/ BAR.RED.POPC RZ, RZ;
/*00a0*/ /*0x1c315c4350000000*/ IMUL.U32.U32.HI R5, R3, R7;
/*00a8*/ /*0x1c311c0350000000*/ IMUL.U32.U32 R4, R3, R7;
/*00b0*/ /*0xfc01dde428000000*/ MOV R7, RZ;
/*00b8*/ /*0xe001000750000000*/ CAL 0xf8;
/*00c0*/ /*0x00000007d0000000*/ BPT.DRAIN 0x0;
/*00c8*/ /*0xffffdc0450ee0000*/ BAR.RED.POPC RZ, RZ;
Note that between the two calls to bar.red.popc there is a call to a procedure at 0xf8 which implements some sophisticated algorithm (about 50 instructions or even more). Not surprising that the mod (%) operation is slow.

Some time ago I experimented a lot with modular arithmetic on the GPU. On Fermi GPUs you can use double-precision arithmetic to avoid expensive div and mod operations. For example, modular multiplication can be done as follows:
// fast truncation of double-precision to integers
#define CUMP_D2I_TRUNC (double)(3ll << 51)

// computes r = a + b subop c unsigned using extended precision
#define VADDx(r, a, b, c, subop) \
    asm volatile("vadd.u32.u32.u32." subop " %0, %1, %2, %3;" : \
        "=r"(r) : "r"(a), "r"(b), "r"(c));

// computes a * b mod m; invk = (double)(1<<30) / m
__device__ __forceinline__
unsigned mul_m(unsigned a, unsigned b, volatile unsigned m,
               volatile double invk) {
    unsigned hi = __umulhi(a*2, b*2); // 3 flops
    // 2 double instructions
    double rf = __uint2double_rn(hi) * invk + CUMP_D2I_TRUNC;
    unsigned r = (unsigned)__double2loint(rf);
    r = a * b - r * m; // 2 flops
    // can also be replaced by: VADDx(r, r, m, r, "min") // == umin(r, r + m);
    if ((int)r < 0)
        r += m;
    return r;
}
However, this only works for 31-bit integer moduli (if losing 1 bit is not critical for you),
and you also need to precompute 'invk' beforehand. This gives the absolute minimum number of instructions I could achieve, i.e.:
SHL.W R2, R4, 0x1;
SHL.W R8, R6, 0x1;
IMUL.U32.U32 R4, R4, R6;
IMUL.U32.U32.HI R8, R2, R8;
I2F.F64.U32 R8, R8;
DFMA R2, R2, R8, R10;
IMAD.U32.U32 R4, -R12, R2, R4;
ISETP.GE.AND P0, pt, R4, RZ, pt;
#!P0 IADD R4, R12, R4;
For a description of the algorithm, you can have a look at my paper:
gpu_resultants. Other operations like (xy - zw) mod m are also explained there.
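For completeness, a minimal usage sketch (illustrative only; the kernel and variable names here are made up): precompute invk = (double)(1 << 30) / m once on the host and pass it in, so each modular multiply costs only the instructions listed above.
__global__ void mul_mod_kernel(unsigned *out, const unsigned *x,
                               const unsigned *y, unsigned m, double invk) {
    unsigned tid = blockIdx.x * blockDim.x + threadIdx.x;
    out[tid] = mul_m(x[tid], y[tid], m, invk);   // m must be a 31-bit prime
}
// host side:
//   double invk = (double)(1u << 30) / m;
//   mul_mod_kernel<<<grid, block>>>(d_out, d_x, d_y, m, invk);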
Out of curiosity, I compared the performance of the resultant algorithm
using your modular multiplication:
unsigned r = (unsigned)(((u64)a * (u64)b) % m);
against the optimized version with mul_m.
Modular arithmetic with default % operation:
low_deg: 11; high_deg: 2481; bits: 10227
nmods: 330; n_real_pts: 2482; npts: 2495
res time: 5755.357910 ms; mod_inv time: 0.907008 ms; interp time: 856.015015 ms; CRA time: 44.065857 ms
GPU time elapsed: 6659.405273 ms;
Modular arithmetic with mul_m:
low_deg: 11; high_deg: 2481; bits: 10227
nmods: 330; n_real_pts: 2482; npts: 2495
res time: 1100.124756 ms; mod_inv time: 0.192608 ms; interp time: 220.615143 ms; CRA time: 10.376352 ms
GPU time elapsed: 1334.742310 ms;
So on average it is about 5x faster. Note also that you might not see a speed-up if you just evaluate raw arithmetic performance with a kernel that runs a bunch of mul_m operations (like the saxpy example). But in real applications with control logic, synchronization barriers, etc., the speed-up is very noticeable.

A high-end Fermi GPU (e.g. a GTX 580) will likely give you the best performance among shipping cards for this. You would want all 32-bit operands to be of type "unsigned int" for best performance, as there is some additional overhead for the handling of signed divisions and modulos.
The compiler generates very efficient code for division and modulo with a fixed divisor. As I recall, it is usually around three to five machine instructions on Fermi and Kepler. You can check the generated SASS (machine code) with cuobjdump --dump-sass. You might be able to use templated functions with constant divisors if you only use a few different divisors (see the sketch below).
You should see on the order of sixteen inlined SASS instructions being generated for the unsigned 32-bit operations with a variable divisor, across Fermi and Kepler. The code is limited by the throughput of integer multiplies, and for Fermi-class GPUs it is competitive with hardware solutions. Somewhat reduced performance is seen on currently shipping Kepler-class GPUs due to their reduced integer multiply throughput.
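As a rough sketch of that templated-divisor idea (illustrative only, not compiler output): because M is a compile-time constant, the compiler can replace the % with a short multiply/shift sequence instead of the variable-divisor code path.
template <unsigned M>
__device__ __forceinline__ unsigned mod_fixed(unsigned x) {
    return x % M;   // fixed divisor: compiled to a few multiply/shift instructions
}
// usage with a modulus known at compile time, e.g.:
//   unsigned r = mod_fixed<4294967291u>(x);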
[Added later, after clarification of the question:]
Unsigned 64-bit division and modulo with a variable divisor, on the other hand, are implemented as called subroutines of about 65 instructions on Fermi and Kepler. They look close to optimal. On Fermi, this is still reasonably competitive with hardware implementations (note that 64-bit integer divisions are not exactly super fast on CPUs that provide them as a built-in instruction). Below is some code that I posted to the NVIDIA forums some time back for the kind of task described in the clarification. It avoids the expensive division, but does assume that fairly large batches of operands share the same divisor. It uses double-precision arithmetic, which is especially fast on Tesla-class GPUs (as opposed to consumer cards). I only did a cursory test of the code, so you might want to test it more carefully before deploying it.
// Let b, p, and A[i] be integers < 2^51
// Let N be an integer on the order of 10000
// for i from 1 to N
// A[i] <-- A[i] * b mod p
/*---- kernel arguments ----*/
unsigned long long *A;
double b, p; /* convert from unsigned long long to double before passing to kernel */
double oop; /* pass precomputed 1.0/p to kernel */
/*---- code inside kernel -----*/
double a, q, h, l, rem;
const double int_cvt_magic = 6755399441055744.0; /* 2^52+2^51 */
a = (double)A[i];
/* approximate quotient and round it to the nearest integer */
q = __fma_rn (a * b, oop, int_cvt_magic);
q = q - int_cvt_magic;
/* back-multiply, representing p*q as a double-double h:l exactly */
h = p * q;
l = __fma_rn (p, q, -h);
/* remainder is double-width product a*b minus double-double h:l */
rem = __fma_rn (a, b, -h);
rem = rem - l;
/* remainder may be negative as quotient rounded; fix if necessary */
if (rem < 0.0) rem += p;
A[i] = (unsigned long long)rem;
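To make the snippet self-contained, here is one way it might be wrapped as a complete kernel (a quick, untested repackaging of the code above, one thread per element):
__global__ void mulmod_batch(unsigned long long *A, int N,
                             double b, double p, double oop) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;
    const double int_cvt_magic = 6755399441055744.0; /* 2^52 + 2^51 */
    double a = (double)A[i];
    /* approximate quotient and round it to the nearest integer */
    double q = __fma_rn (a * b, oop, int_cvt_magic) - int_cvt_magic;
    /* back-multiply, representing p*q as a double-double h:l exactly */
    double h = p * q;
    double l = __fma_rn (p, q, -h);
    /* remainder is the double-width product a*b minus the double-double h:l */
    double rem = __fma_rn (a, b, -h) - l;
    if (rem < 0.0) rem += p;   /* quotient may have rounded up */
    A[i] = (unsigned long long)rem;
}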

There are tricks to efficiently perform mod operations, but only if m is a power of 2.
For instance, x mod y == x & (y - 1) when y is 2^n. Performing the bitwise operation is the fastest option.
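As a tiny illustration (only valid when y really is a power of two):
__device__ __forceinline__ unsigned mod_pow2(unsigned x, unsigned y) {
    return x & (y - 1u);   // equals x % y only when y == 1u << n
}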
Otherwise, probably a look-up table?
Below is a link to a discussion of efficient modulo implementation. You might need to implement it yourself to get the most out of it.
Efficient computation of mod

Related

How __align__ works in cuda C?

I am practicing an exercise on Array of Structs (AoS). A struct with/without __align__ has been defined like this:
#ifdef TESTALIGN8
struct __align__(8) InnerStruct {
    float x;
    float y;
};
#else
struct InnerStruct {
    float x;
    float y;
};
#endif
The test case is
__global__ void testGpuInnerStruct(InnerStruct *data, InnerStruct *result, const int n) {
    unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        result[idx].x = data[idx].x + 10;
        result[idx].y = data[idx].y + 20;
    }
}
The full file can be found in a gist.
Both cases were profiled with ncu-ui on a Quadro RTX 4000, and the Memory Workload Analysis looks like this:
[Screenshot: Memory Workload Analysis without __align__(8)]
[Screenshot: Memory Workload Analysis with __align__(8)]
Why is the L1 hit rate of the latter case 0%? In my mind, the minimum granularity of a load/store is 32 bytes, and sizeof(InnerStruct) is 8 bytes with or without the __align__(8) qualifier, so InnerStruct.x and InnerStruct.y would always be read in the same load, with or without the L1 cache. How does __align__ impact performance like this?
Why is the L1 hit rate of the latter case 0%?
The __align__(8) directive allows the compiler to discover that it can convert 2 separate loads into a single load. The result is that whereas in the non-decorated case, the compiler generates 2 loads, and the 2nd load derives benefit (cache hit rate) from the first load, in the decorated case, there is only one load instruction. Therefore there is no observed cache benefit.
For the non-decorated case, the compiler does something like this:
if (idx < n) {
    //result[idx].x = data[idx].x + 10;
    LDG R0, [data[idx].x];   // pulls two 128-byte L1 cache lines per warp: MISS L1, MISS L2
    FADD R1, R0, 10;
    STG [result[idx].x], R1; // HIT L2
    //result[idx].y = data[idx].y + 20;
    LDG R0, [data[idx].y];   // benefits from the L1 cache: HIT L1
    FADD R1, R0, 20;
    STG [result[idx].y], R1; // HIT L2
}
For the decorated case, the compiler does something like:
if (idx < n) {
    LDG.64 R0, [data[idx].x];  // nothing populated the cache prior to this: MISS L1, MISS L2
    //result[idx].x = data[idx].x + 10;
    FADD R0, R0, 10;
    //result[idx].y = data[idx].y + 20;
    FADD R1, R1, 20;
    STG.64 [result[idx].x], R0; // HIT L2
}
Thus there is only one load instruction, which does not get any cache benefit.
In the non-decorated case, the compiler cannot assume that the struct is aligned to 8 bytes. It can only assume a 4-byte alignment (the natural alignment of the float type). If the struct only has 4-byte alignment (and not 8-byte alignment), then the LDG.64 instruction is not legal, because that instruction requires a "natural" 8-byte alignment. Therefore, in the non-decorated case, the compiler must use two 4-byte loads, because it cannot assume 8-byte alignment, whereas in the decorated case, it knows that LDG.64 is legal, and so it uses it instead.
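As an aside, an equivalent way to obtain the single 8-byte load without the __align__(8) decoration is to go through float2, which has a natural 8-byte alignment. A rough, untested sketch:
__global__ void testGpuInnerStructVec(const float2 *data, float2 *result, const int n) {
    unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        float2 v = data[idx];   // single LDG.64
        v.x += 10;
        v.y += 20;
        result[idx] = v;        // single STG.64
    }
}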
(Aside: I suspect your GPU is not actually a Quadro 4000, but instead maybe a Quadro RTX 4000, because the Quadro 4000 was a fermi-class GPU which is not supported by any recent version of CUDA, much less nsight compute.)

Is there a way to access value of constant memory bank in CUDA

I have been trying to debug cuda programs that use inline PTX assembly. Specifically, I am debugging at the instruction level, and am trying to determine the values of arguments to the instructions. Occasionally, the disassembly includes a reference to constant memory. I am trying to have gdb print the value of this constant memory, but have not found any documentation that shows how to do this.
For instance, a disassembly includes
IADD R0, R0, c[0x0] [0x148]
I want to determine how to have gdb print the value of c[0x0] [0x148]. I have tried using print * (#constant) ... but this does not seem to work (I pass 0x148 here and it prints out nothing). Is this possible to do in cuda-gdb?
I have tried to avoid this by passing the compiler option --disable-optimizer-constants during compilation, but this does not work.
The way to do this is to
print *(void * #parameter *) addr
where addr is the address inside constant bank 0 that should be printed.
Example
Suppose we have a simple kernel in a file called foo.cu:
#include <cuda.h>
#include <stdio.h>
#include <stdlib.h>   /* for atoi */
#include <cuda_runtime.h>

__global__ void myKernel(int a, int b, int *d)
{
    *d = a + b;
}

int main(int argc, char *argv[]) {
    if (argc < 3) {
        printf("Requires inputs a and b to be specified\n");
        return 0;
    }
    int *dev_d;
    int d;
    cudaMalloc(&dev_d, sizeof(*dev_d));
    myKernel<<<1, 1>>>(atoi(argv[1]), atoi(argv[2]), dev_d);
    cudaMemcpy(&d, dev_d, sizeof(d), cudaMemcpyDeviceToHost);
    cudaFree(dev_d);
    printf("D is: %d\n", d);
    return 0;
}
which is compiled via
$ nvcc foo.cu -o foo.out
Next, suppose we are interested in disassembling this program, so we execute cuda-gdb with a command-line for our program:
$ cuda-gdb --args ./foo.out 10 15
Inside cuda-gdb, we get to the kernel by typing
(cuda-gdb) set cuda break_on_launch application
(cuda-gdb) start
Temporary breakpoint 1, 0x000055555555b12a in main ()
(cuda-gdb) cont
Inside the kernel, we view the disassembly we are interested in debugging:
(cuda-gdb) x/15i $pc
=> 0x555555b790a8 <_Z8myKerneliiPi+8>: MOV R1, c[0x0][0x20]
0x555555b790b0 <_Z8myKerneliiPi+16>: MOV R0, c[0x0][0x144]
0x555555b790b8 <_Z8myKerneliiPi+24>: MOV R2, c[0x0][0x148]
0x555555b790c0 <_Z8myKerneliiPi+32>:
0x555555b790c8 <_Z8myKerneliiPi+40>: MOV R3, c[0x0][0x14c]
0x555555b790d0 <_Z8myKerneliiPi+48>: IADD R0, R0, c[0x0][0x140]
0x555555b790d8 <_Z8myKerneliiPi+56>: STG.E [R2], R0
0x555555b790e0 <_Z8myKerneliiPi+64>:
0x555555b790e8 <_Z8myKerneliiPi+72>: NOP
0x555555b790f0 <_Z8myKerneliiPi+80>: NOP
0x555555b790f8 <_Z8myKerneliiPi+88>: NOP
0x555555b79100 <_Z8myKerneliiPi+96>:
0x555555b79108 <_Z8myKerneliiPi+104>: EXIT
0x555555b79110 <_Z8myKerneliiPi+112>: BRA 0x70
0x555555b79118 <_Z8myKerneliiPi+120>: NOP
The second argument being passed to the IADD instruction is in one of the constant memory banks. Let's find out what its value actually is. We advance to the IADD instruction:
(cuda-gdb) stepi 4
0x0000555555b790d0 in myKernel(int, int, int*)<<<(1,1,1),(1,1,1)>>> ()
(cuda-gdb) x/i $pc
=> 0x555555b790d0 <_Z8myKerneliiPi+48>: IADD R0, R0, c[0x0][0x140]
We can now obtain the contents of c[0x0][0x140] as follows:
(cuda-gdb) print (int) *(void * #parameter *) 0x140
$1 = 10
Here, we knew the argument should have 32 bits, so we cast it as a (32-bit) int. If we hadn't done this, we would get too many bits, e.g.:
(cuda-gdb) print *(void * #parameter *) 0x140
$2 = 0xf0000000a
Note the hexadecimal format can be retained by adding /x after the print command:
(cuda-gdb) print/x (int) *(void * #parameter *)0x140
$3 = 0xa

Is dynamic and static shared memory allocation allowed at the same time in CUDA? [duplicate]

Suppose I have two __device__ CUDA functions, each having the following local variable:
__shared__ int a[123];
and another function (say it's my kernel, i.e. a __global__ function), with:
extern __shared__ int b[];
Is this explicitly allowed/forbidden by nVIDIA? (I don't see it in programming guide section B.2.3 on __shared__.) Do the sizes all count together towards the shared memory limit, or is it the maximum possibly in use at a single time? Or some other rule?
This can be considered a follow-up question to this one.
The shared memory is split in two parts: statically allocated and dynamically allocated. The first part is calculated during compilation, and each declaration is an actual allocation; activating ptxas info during compilation illustrates it here:
ptxas info : Used 22 registers, 384 bytes smem, 48 bytes cmem[0]
Here, we have 384 bytes, which is 3 arrays of 32 ints (see the sample code below).
Since Kepler, you may pass a pointer to shared memory to another function, allowing a device sub-function to access another shared memory declaration, as sketched below.
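A minimal sketch of that pointer-passing (illustrative only, not part of the example that follows):
__device__ void scale(int *s, int factor) {
    s[threadIdx.x] *= factor;          // s may point into any shared array
}

__global__ void k()
{
    __shared__ int buf[32];
    buf[threadIdx.x] = threadIdx.x;
    __syncthreads();
    scale(buf, 2);                     // shared-memory pointer passed to a helper
}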
Then comes the dynamically allocated shared memory, whose reserved size is declared at kernel launch.
Here is an example of some various uses in a couple of functions. Note the pointer value of each shared memory region.
#include <cstdio>

__device__ void dev1()
{
    __shared__ int a[32];
    a[threadIdx.x] = threadIdx.x;
    if (threadIdx.x == 0)
        printf("dev1 : %x\n", a);
}

__device__ void dev2()
{
    __shared__ int a[32];
    a[threadIdx.x] = threadIdx.x * 5;
    if (threadIdx.x == 0)
        printf("dev2 : %x\n", a);
}

__global__ void kernel(int* res, int* res2)
{
    __shared__ int a[32];
    extern __shared__ int b[];
    a[threadIdx.x] = 0;
    b[threadIdx.x] = threadIdx.x * 3;
    dev1();
    __syncthreads();
    dev2();
    __syncthreads();
    res[threadIdx.x] = a[threadIdx.x];
    res2[threadIdx.x] = b[threadIdx.x];
    if (threadIdx.x == 0)
        printf("global a : %x\n", a);
    if (threadIdx.x == 0)
        printf("global b : %x\n", b);
}

int main()
{
    int* dres;
    int* dres2;
    cudaMalloc(&dres, 32 * sizeof(int));
    cudaMalloc(&dres2, 32 * sizeof(int));
    kernel<<<1, 32, 32 * sizeof(float)>>>(dres, dres2);
    int hres[32];
    int hres2[32];
    cudaMemcpy(hres, dres, 32 * sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(hres2, dres2, 32 * sizeof(int), cudaMemcpyDeviceToHost);
    for (int k = 0; k < 32; ++k)
    {
        printf("%d -- %d \n", hres[k], hres2[k]);
    }
    return 0;
}
This code reports 384 bytes of smem in the ptxas info: one array for the global a array, a second for the dev1 method's a array, and a third for the dev2 method's a array, totalling 3*32*sizeof(int) = 384 bytes.
When running the kernel with dynamic shared memory equal to 32*sizeof(float), the pointer to b starts right after these three arrays.
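One way to confirm the static share at runtime (illustrative, not something I ran here) is cudaFuncGetAttributes, which reports the statically allocated shared memory per block; the dynamic size passed at launch comes on top of that:
cudaFuncAttributes attr;
cudaFuncGetAttributes(&attr, kernel);
printf("static smem per block: %zu bytes\n", attr.sharedSizeBytes);  // expected: 384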
EDIT:
The ptx file generated by this code holds declarations of statically-defined shared memory,
.shared .align 4 .b8 _ZZ4dev1vE1a[128];
.shared .align 4 .b8 _ZZ4dev2vE1a[128];
.extern .shared .align 4 .b8 b[];
except for the entry point, where it is defined in the body of the method:
// _ZZ6kernelPiS_E1a has been demoted
The shared space of the memory is defined in the PTX documentation here:
The shared (.shared) state space is a per-CTA region of memory for threads in a CTA to share data. An address in shared memory can be read and written by any thread in a CTA. Use ld.shared and st.shared to access shared variables.
though with no detail on the runtime behavior. There is a word on this in the programming guide here, with no further detail on mixing the two.
During PTX compilation, the compiler may know the amount of shared memory that is statically allocated. There might be some supplemental magic. Looking at the SASS, the first instructions use SR_LMEMHIOFF:
1 IADD32I R1, R1, -0x8;
2 S2R R0, SR_LMEMHIOFF;
3 ISETP.GE.U32.AND P0, PT, R1, R0, PT;
and calling functions in reverse order assigns different values to the statically allocated shared memory (it looks very much like a form of stack allocation).
I believe the ptxas compiler calculates all the shared memory it might need in the worst case, when all methods may be called (when one of the methods is not used and function pointers are used, the address of b does not change, and the unallocated shared memory region is never accessed).
Finally, as einpoklum suggests in a comment, this is experimental and not part of a norm/API definition.

IADD.X GPU instruction

When looking into the SASS output generated for the NVIDIA Fermi architecture, I observed the instruction IADD.X. From the NVIDIA documentation, IADD means integer add, but I do not understand what IADD.X means. Can somebody please help? Does it mean an integer addition with an extended number of bits?
The instruction snippet is:
IADD.X R5, R3, c[0x0][0x24]; /* 0x4800400090315c43 */
Yes, the .X stands for eXtended precision. You will see IADD.X used together with IADD.CC, where the latter adds the less significant bits, and produces a carry flag (thus the .CC), and this carry flag is then incorporated into addition of the more significant bits performed by IADD.X.
Since NVIDIA GPUs are basically 32-bit processors with 64-bit addressing capability, a frequent use of this idiom is in address (pointer) arithmetic. The use of 64-bit integer types, such as long long int or uint64_t will likewise lead to the use of these instructions.
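The same carry-chain idiom can be written by hand with inline PTX; a rough, untested sketch, where add.cc/addc typically map to IADD.CC/IADD.X in Fermi/Kepler SASS:
__device__ __forceinline__
unsigned long long add64(unsigned lo_a, unsigned hi_a,
                         unsigned lo_b, unsigned hi_b)
{
    unsigned lo, hi;
    asm("add.cc.u32  %0, %2, %4;\n\t"   // low word, sets the carry flag
        "addc.u32    %1, %3, %5;"       // high word, consumes the carry
        : "=r"(lo), "=r"(hi)
        : "r"(lo_a), "r"(hi_a), "r"(lo_b), "r"(hi_b));
    return ((unsigned long long)hi << 32) | lo;
}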
Here is a worked example of a kernel doing 64-bit integer addition. This CUDA code was compiled for compute capability 3.5 with CUDA 7.5, and the machine code dumped with cuobjdump --dump-sass.
__global__ void addint64 (long long int a, long long int b, long long int *res)
{
    *res = a + b;
}
MOV R1, c[0x0][0x44];
MOV R2, c[0x0][0x148]; // b[31:0]
MOV R0, c[0x0][0x14c]; // b[63:32]
IADD R4.CC, R2, c[0x0][0x140]; // tmp[31:0] = b[31:0] + a[31:0]; carry-out
MOV R2, c[0x0][0x150]; // res[31:0]
MOV R3, c[0x0][0x154]; // res[63:32]
IADD.X R5, R0, c[0x0][0x144]; // tmp[63:32] = b[63:32] + a[63:32] + carry-in
ST.E.64 [R2], R4; // [res] = tmp[63:0]
EXIT
