Compiling CUDA program for a GeForce 310 (compute capability 1.2) with unmatched options "-arch=compute_20 -code=sm_20"

I'm compiling a CUDA program with nvcc using the options -arch=compute_20 -code=sm_20 for a GeForce 310 GPU, which has compute capability 1.2. The program seems to run normally, as shown below.
wangli@wangli-desktop:~/wangliC2050/1D-EncodeV6.1$ make
nvcc -O --ptxas-options=-v 1D-EncodeV6.1.cu -o 1D-EncodeV6.1 -I../../NVIDIA_GPU_Computing_SDK/C/common/inc -I../../NVIDIA_GPU_Computing_SDK/shared/inc -arch=compute_20 -code=sm_20
ptxas info : Compiling entry function '_Z6EncodePhPjS0_S_S_' for 'sm_20'
ptxas info : Function properties for _Z6EncodePhPjS0_S_S_
0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 14 registers, 52 bytes cmem[0]
wangli@wangli-desktop:~/wangliC2050/1D-EncodeV6.1$ ./1D-EncodeV6.1
########################### Encoding start (loopCount=10)#######################
#p n size averageTime(s) averageThroughput(MB/s) errorRate(0~1)
#================= Encode on GPU v6.1 ===============
4 4 4 0.000294 0.051837 100.000000
#################### Encoding stop #########################
So, I wonder:
Why can this program run on a GeForce 310 with the nvcc options -arch=compute_20 -code=sm_20, which do not match the card's compute capability of 1.2?
What will happen if the value of the -arch option differs from that of the -code option?
Thanks.

A CUDA executable typically contains two types of program data: SASS code, which is basically GPU machine code, and PTX, which is an intermediate code (although it's pretty close to machine code). As long as PTX code is present in the executable, then if the driver decides that a proper SASS binary is not available for the GPU the code will actually run on, it will do a "JIT-compile" step at application launch to create the necessary binary code appropriate for the device in question, using the PTX code in the application package.
This is what is happening in your case.
If arch != code, then you're creating device code that architecturally conforms to the arch type, but is compiled to use machine level instructions that are associated with the code type. For example, if I compile for arch = 1.2 and code = 2.0, I cannot use double types (they will be demoted to float, because double is not supported in a 1.2 architecture) but the SASS machine code generated will be ready to execute on a cc 2.0 device, and will not require a JIT-compile step for that kind of device.
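As a rough sketch of the combinations discussed above (test.cu is just an illustrative file name, not taken from the question):
nvcc -arch=compute_12 -code=sm_20 test.cu -o test
nvcc -arch=compute_12 -code=compute_12,sm_20 test.cu -o test
The first line builds against the cc 1.2 feature set but emits cc 2.0 SASS; the second additionally embeds cc 1.2 PTX, so devices without a matching SASS image can JIT-compile it.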
The NVCC manual has more information, particularly the section on steering code generation.

Related

Determining shared memory usage in CUDA Fortran

I've been writing some basic CUDA Fortran code. I would like to be able to determine the amount of shared memory my program uses per thread block (for occupancy calculation). I have been compiling with -Mcuda=ptxinfo in the hope of finding this information. The compilation output ends with
ptxas info : Function properties for device_procedures_main_kernel_
432 bytes stack frame, 1128 bytes spill stores, 604 bytes spill loads
ptxas info : Used 63 registers, 96 bytes smem, 320 bytes cmem[0]
which is the only place in the output that smem is mentioned. There is one array in the global subroutine main_kernel with the shared attribute. If I remove the shared attribute then I get
ptxas info : Function properties for device_procedures_main_kernel_
432 bytes stack frame, 1124 bytes spill stores, 532 bytes spill loads
ptxas info : Used 63 registers, 320 bytes cmem[0]
The smem has disappeared. It seems that only shared memory in main_kernel is being counted: device subroutines in my code use variables with the shared attribute, but these don't appear to be mentioned in the output. For example, the device subroutine evalfuncs includes shared variable declarations, but the relevant output is
ptxas info : Function properties for device_procedures_evalfuncs_
504 bytes stack frame, 1140 bytes spill stores, 508 bytes spill loads
Do all variables with the shared attribute need to be declared in a global subroutine?
Do all variables with the shared attribute need to be declared in a global subroutine?
No.
You haven't shown example code or your compile command, nor have you identified the version of the PGI compiler tools you are using. However, the most likely explanation I can think of for what you are seeing is that, as of PGI 14.x, the default CUDA compile option is to generate relocatable device code. This is documented in section 2.2.3 of the current PGI release notes:
2.2.3. Relocatable Device Code
An rdc option is available for the -ta=tesla and -Mcuda flags that specifies to generate
relocatable device code. Starting in PGI 14.1 on Linux and in PGI 14.2 on Windows, the default
code generation and linking mode for Tesla-target OpenACC and CUDA Fortran is rdc,
relocatable device code.
You can disable the default and enable the old behavior and non-relocatable code by specifying
any of the following: -ta=tesla:nordc, -Mcuda=nordc, or by specifying any 1.x compute
capability or any Radeon target.
So a specific option to enable (or disable) this is:
-Mcuda=(no)rdc
(note that -Mcuda=rdc is the default if you don't specify this option)
CUDA Fortran separates Fortran host code from device code. For the device code, the CUDA Fortran compiler does a CUDA Fortran->CUDA C conversion, and passes the auto-generated CUDA C code to the CUDA C compiler. Therefore, the behavior and expectations of switches like rdc and ptxinfo are derived from the behavior of the underlying equivalent CUDA compiler options (-rdc=true and -Xptxas -v, respectively).
When CUDA device code is compiled without the rdc option, the compiler will normally try to inline device (sub)routines that are called from a kernel, into the main kernel code. Therefore, when the compiler is generating the ptxinfo, it can determine all resource requirements (e.g. shared memory, registers, etc.) when it is compiling (ptx assembly) the kernel code.
When the rdc option is specified, however, the compiler may (depending on some other switches and function attributes) leave the device subroutines as separately callable routines with their own entry point (i.e. not inlined). In that scenario, when the device compiler is compiling the kernel code, the call to the device subroutine just looks like a call instruction, and the compiler (at that point) has no visibility into the resource usage requirements of the device subroutine. This does not mean that there is an underlying flaw in the compile sequence. It simply means that the ptxinfo mechanism cannot accurately roll up the resource requirements of the kernel and all of its called subroutines at that point in time.
The ptxinfo output also does not declare the total amount of shared memory used by a device subroutine, when it is compiling that subroutine, in rdc mode.
If you turn off the rdc mode:
–Mcuda=nordc
I believe you will see an accurate accounting of the shared memory used by a kernel plus all of its called subroutines, given a few caveats: one is that the compiler is able to successfully inline your called subroutines (pretty likely, and the accounting should still work even if it can't); another is that the kernel and all of its called subroutines are in the same file (i.e. translation unit). If you have kernels that call device subroutines in different translation units, then the rdc option is the only way to make that work.
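For reference, a hedged sketch of the compile lines (kernels.cuf is an illustrative file name, and the exact suboption spelling may vary slightly between PGI releases):
pgfortran -Mcuda=ptxinfo,nordc -c kernels.cuf
pgfortran -Mcuda=ptxinfo -c kernels.cuf
The first line inlines device subroutines so ptxas can roll the shared memory up into the kernel's smem figure; the second uses the default rdc mode, where separately compiled subroutines are not included.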
Shared memory will still be appropriately allocated for your code at runtime regardless (assuming you have not exceeded the total amount of shared memory available). You can also get an accurate reading of the shared memory used by a kernel by profiling your code with a profiler such as nvvp or nvprof.
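For example (a hedged sketch; ./myapp stands for whatever executable you built), nvprof's GPU trace reports the shared memory of each kernel launch:
nvprof --print-gpu-trace ./myapp
The SSMem and DSMem columns give the static and dynamic shared memory per launch, respectively.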
If this explanation doesn't describe what you are seeing, I would suggest providing a complete sample code, as well as the exact compile command you are using, plus the version of PGI tools you are using. (I think it's a good suggestion for future questions as well.)

Does 'code=sm_X' embed only binary (cubin) code, or also PTX code, or both?

I am a little bit confused about the 'code=sm_X' option within the '-gencode' statement.
An example: What does the NVCC compiler option
-gencode arch=compute_13,code=sm_13
embed in the library ?
Only the machine code (cubin code) for GPUs with CC 1.3, or also the PTX code for GPUs with CC 1.3 ?
In the 'Maxwell compatibility guide', it is stated: "Only the back-end target version(s) specified by the 'code=' clause will be retained in the resulting binary".
From that, I would infer that the given compiler option only embeds machine code for GPUs with CC 1.3 and no PTX code. This would mean that it would not be possible to run this library e.g. on a Maxwell-generation card, as there is no PTX code embedded within the library from which the machine code could be 'just-in-time' (JIT) compiled.
On the other hand, in the GTC 2013 presentation 'Introduction to the CUDA Toolkit as an Application Build Tool' by NVIDIA, it is stated that '-gencode arch=compute_13,code=sm_13' is enough for all GPUs with CC >= 1.3, and that with this compiler option, for GPUs with CC > 1.3 the machine code is JIT-ed from the PTX code. So, in my opinion, the information given in the Maxwell compatibility guide and this GTC presentation is conflicting.
nvcc has many formats by which the code generation options can be specified. A read of section 6 of the nvcc manual may be instructive.
When using this format:
nvcc -gencode arch=compute_13,code=sm_13 ...
only the SASS code for a sm_13 (cc 1.3) device will be retained. There will be no PTX retained in the executable object, and so the code can only run on a device capable of running cc1.3 SASS.
Using the above command format, in order to embed a PTX version of the source code into the executable object, it's necessary to use a virtual architecture specification for the option provided to code=.... Since this particular format (using -gencode) does not allow specification of multiple targets in a single switch, we must pass the -gencode switch multiple times to nvcc, one for each target we desire to be embedded in the executable object.
So extending the above example, we could use the following:
nvcc -gencode arch=compute_13,code=sm_13 -gencode arch=compute_13,code=compute_13 ...
This would embed both cc1.3 SASS (by the first gencode switch) and cc1.3 PTX (by the second gencode switch) in the executable. Devices capable of running cc1.3 SASS code directly will use that. Other devices (of compute capability greater than cc 1.3) will do a JIT-compile step by the driver, to convert the cc1.3 PTX code to a SASS code with an architecture suitable for the device in question.
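If in doubt about what actually got embedded, you can inspect the executable with cuobjdump (a.out here is just a placeholder for your binary):
cuobjdump -lelf a.out
cuobjdump -lptx a.out
The first command lists the embedded SASS (cubin) images, the second the embedded PTX images.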
I agree that the GTC 2013 presentation (e.g. slide 37) seems to suggest that
nvcc -gencode arch=compute_13,code=sm_13 ...
is sufficient for all devices of compute capability 1.3 or higher. It is not, and this is easy to demonstrate. If you compile a code using the above format, and attempt to run it on a cc 2.0 device, it will fail with an "invalid device function" error associated with any kernel or kernels you have in your code.
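A minimal sketch of how to observe this at runtime (the kernel is a placeholder, not code from the question):
#include <cstdio>
__global__ void dummy() { }
int main()
{
    dummy<<<1, 1>>>();
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("launch failed: %s\n", cudaGetErrorString(err));  // prints "invalid device function" on a mismatched device
    else
        cudaDeviceSynchronize();
    return 0;
}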
Again, nvcc has a variety of command formats and "shortcuts" for specifying code generation. Some relatively simple ones, such as:
nvcc -arch=sm_13 ...
will embed both a PTX and SASS version of the code in the executable object, resulting in the kind of forward-compatibility suggested.

Does CUDA applications' compute capability automatically upgrade?

If I compile a CUDA program with a lower Compute Capability, e.g. 1.3 (nvcc flag sm_13), and run it on a device with Compute Capability 2.1, will it exploit the features of Compute 2.1 or not?
In that situation, will the compute 2.1 device behave like a compute 1.3 device?
No, it won't exploit any features you need to explicitly program for.
Only those features that are transparent to the user (like cache or larger register files) will be used.
Additionally, you need to make sure your object file contains a version of the code compiled to the PTX intermediate language that can be dynamically compiled for the target architecture, or your program will not even run.
Compile to a virtual architecture (nvcc -arch compute_13) to ensure that, or create a fat binary with code for multiple architectures using the -gencode option to nvcc.
With a fat binary, you can program for features available only on higher compute capability if you wrap the code inside #if __CUDA_ARCH__ >= xyz preprocessor conditionals.
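A minimal sketch of such a guard (the kernel is illustrative; device-side printf happens to be a feature that requires cc 2.0 or later):
#include <cstdio>
__global__ void show()
{
#if __CUDA_ARCH__ >= 200
    printf("thread %d\n", threadIdx.x);  // only compiled into the cc 2.0+ targets of a fat binary
#endif
}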

Setting 32 bit address size in inline PTX

I'm in the process of converting PTX written as a separate file to inline PTX. In the separate PTX file, I was defining the ISA and target as follows:
.version 1.2
.target sm_13
In the PTX file generated by the compiler, after having inlined the PTX, the compiler has specified ISA and target as follows:
.version 3.0
.target sm_20
.address_size 64
The .address_size 64 is problematic for me because it means that I would have to update the pointer arithmetic that I do in the inline PTX from 32 bit to 64 bit.
Given that 32 bits can address 4GB, more memory than my card has, is it possible to make the compiler specify a 32 bit address size, so that I don't have to update the pointer arithmetic?
Are 32 bit addresses supported on sm_20, given the new unified addressing system?
The 64-bit version of the NVCC compiler produces 64-bit PTX by default. If you pass -m32 to nvcc as a command-line option, it will generate 32-bit pointers. The option is covered in the NVCC documentation:
http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#options-for-guiding-compiler-driver
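For example (kernel.cu is an illustrative file name, and -m32 is only accepted on toolkits/platforms that still support 32-bit compilation):
nvcc -m32 -ptx kernel.cu -o kernel.ptx
The generated PTX should then contain .address_size 32 instead of .address_size 64.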

nvcc -Xptxas -v compiler flag has no effect

I have a CUDA project. It consists of several .cpp files that contain my application logic and one .cu file that contains multiple kernels plus a __host__ function that invokes them.
Now I would like to determine the number of registers used by my kernel(s). My normal compiler call looks like this:
nvcc -arch compute_20 -link src/kernel.cu obj/..obj obj/..obj .. -o bin/..exe -l glew32 ...
Adding the "-Xptxas -v" compiler flag to this call unfortunately has no effect. The compiler still produces the same textual output as before. The compiled .exe also works the same way as before with one exception: My framerate jumps to 1800fps, up from 80fps.
I had the same problem; here is my solution:
Compile the .cu files into device-only .ptx files (this discards the host code):
nvcc -ptx *.cu
Compile the .ptx files:
ptxas -v *.ptx
The second step will show you the number of registers used by each kernel and the amount of shared memory used.
Convert the compute_20 to sm_20 in your compiler call. With only a virtual architecture (compute_20), nvcc embeds PTX and ptxas does not produce the per-kernel resource report; with a real architecture (sm_20) it does. That should fix it.
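For example, the call could become something like the following (a sketch only, keeping the elided parts of the original command as they were):
nvcc -arch sm_20 -Xptxas -v -link src/kernel.cu obj/..obj obj/..obj .. -o bin/..exe -l glew32 ...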
When "-Xptxas -v" is used together with "-arch compute_XX" (a virtual architecture), we cannot get the verbose information (register count, etc.). If we want to see the verbose output without giving up the ability to specify the GPU architecture (-arch, -code) up front, we can do the following: nvcc -arch compute_XX *.cu -keep, then ptxas -v *.ptx. But this leaves many intermediate files behind. Certainly, kogut's answer is to the point.
When you compile with
nvcc --ptxas-options=-v
you may also want to check your compiler output verbosity defaults.
For example, in Visual Studio go to:
Tools -> Options -> Projects and Solutions -> Build and Run
and set the build output verbosity to Normal.
Not exactly what you were looking for, but you can use the CUDA visual profiler shipped with the NVIDIA GPU Computing SDK. Besides much other useful information, it shows the number of registers used by each kernel in your application.