GPGPU-Sim PTX extraction - CUDA

Just as the title says, I'm learning how to use GPGPU-Sim. When I read the "PTX extraction" section of the manual, I found that it says "In CUDA version 4.0 and later, the fat cubin file used to extract the ptx and sass is not available any more.", which confuses me. How should I understand this? What changed in CUDA version 4.0 and later?
Thank you anyway :)

When CUDA 4.0 was released (in 2011!), the device toolchain was switched to a fully ELF-based object model. Prior to that, a plain text file with encoded binary sections for the emitted SASS code and plain text for the PTX was used. As a result, extracting PTX or SASS from an ELF CUDA object requires a utility, cuobjdump, to access the requisite code.
Thus the pre/post CUDA 4.0 distinction.
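
For example, a minimal sketch of the post-4.0 extraction (the executable/object name is illustrative):

cuobjdump -ptx myprog    # dump the embedded PTX
cuobjdump -sass myprog   # disassemble the embedded SASS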

Related

Run CUDA SASS instructions [duplicate]

With cuobjdump, SASS can be generated from a cubin file using
cuobjdump -sass <input file>. But is there any way to convert the SASS back to a cubin?
There are no "assemblers" provided as part of the official NVIDIA CUDA toolchain. The NVIDIA toolchain can take CUDA C/C++, or PTX, and convert it to a cubin or other executable format.
However there are some community-developed assemblers:
Perhaps the most recent one (and probably the only one worth considering at this time) is maxas.
There was also an older one, asfermi, developed during the Fermi generation of CUDA GPUs. I don't think it has been updated or maintained since.
I would like to add that, depending on the architecture (Maxwell/Kepler etc.), you can use a community-developed assembler/disassembler to convert the SASS back to a cubin. Here are some:
Maxas: https://github.com/NervanaSystems/maxas
KeplerAs: https://github.com/PAA-NCIC/PPoPP2017_artifact/tree/master/KeplerAs

Can I use cuda without using nvcc on my host code?

I'm writing a single-header library that executes a CUDA kernel. I was wondering if there is a way to get around the <<<>>> syntax, or to get C source output from nvcc?
You can avoid the host language extensions by using the CUDA driver API instead. It is a little more verbose and you will require a little more boilerplate code to manage the context, but it is not too difficult.
Conventionally, you would compile to PTX or a binary payload to load at runtime. However, NVIDIA now also ships a JIT compiler library, libNVVM, which you could try if you want JIT compilation (note that libNVVM consumes NVVM IR, a subset of LLVM IR, rather than CUDA C source).
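
A minimal sketch of the driver API route, assuming a kernel.ptx file containing a kernel named mykernel that takes no arguments (both names are illustrative); error checking is omitted for brevity:

#include <cuda.h>

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;

    cuInit(0);                 /* initialize the driver API */
    cuDeviceGet(&dev, 0);      /* first GPU */
    cuCtxCreate(&ctx, 0, dev);

    /* load PTX (or a cubin) produced offline, e.g. with nvcc -ptx */
    cuModuleLoad(&mod, "kernel.ptx");
    cuModuleGetFunction(&fn, mod, "mykernel");

    /* one block of 32 threads, no shared memory, default stream, no args */
    cuLaunchKernel(fn, 1, 1, 1, 32, 1, 1, 0, NULL, NULL, NULL);
    cuCtxSynchronize();

    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}

This compiles with a plain host compiler (no nvcc needed) and links against the driver library, e.g. gcc host.c -lcuda. Note that kernel names emitted by nvcc are C++-mangled unless the kernel is declared extern "C".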

Check if SIMD machine is generated for LLVM IR

I have a C++ program that uses the LLVM libraries to generate an LLVM IR module, then compiles and executes it.
The code uses vector types and I want to check if it translates to SIMD instructions correctly on my architecture.
How do I find this out? Is there a way to see the assembly code that is generated out of this IR?
You're probably looking for some combination of -emit-llvm, which outputs IR instead of native assembly, and -S, which outputs assembly instead of object files.
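
For example (file names illustrative), using clang plus the llc tool that ships with LLVM:

clang++ -O2 -S -emit-llvm foo.cpp -o foo.ll   # emit the IR
llc -O2 foo.ll -o foo.s                       # lower existing IR to native assembly

You can then search foo.s for vector instructions (e.g. SSE/AVX mnemonics on x86) to confirm your vector types were lowered to SIMD.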

How can I read the PTX?

I am working with compute capability 3.5, CUDA 5 and VS 2010 (and obviously Windows).
I am interested in reading the compiled code to better understand the implications of my C code changes.
What configuration do I need in VS to compile the code for readability (is setting the compilation to PTX enough?)?
What tool do I need to reverse engineer the generated PTX to be able to read it?
In general, to create a ptx version of a particular .cu file, the command is:
nvcc -ptx mycode.cu
which will generate a mycode.ptx file containing the ptx code corresponding to the file you used. It's probably instructive to use the -src-in-ptx option as well:
nvcc -ptx -src-in-ptx mycode.cu
which will intersperse the lines of source code with the lines of ptx they correspond to.
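As a concrete starting point, here is a minimal sketch of a mycode.cu you could run those commands against (kernel name illustrative):

__global__ void add1(float *x, int n)
{
    // one thread per element; guard against overrun
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}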
To comprehend ptx, start with the documentation.
Note that the compiler may generate ptx code that doesn't correspond to the source code very well, or is otherwise confusing, due to optimizations. You may wish (perhaps to gain insight) to compile some test cases using the -G switch as well, to see how the non-optimized version compares.
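For example, a debug-mode variant of the earlier command (same hypothetical file):

nvcc -G -ptx -src-in-ptx mycode.cu

The -G switch disables most device code optimization, so the emitted ptx usually tracks the source much more closely.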
Since the Windows environment may vary from machine to machine, I think it's easier if you just look at the path your particular version of MSVC++ uses to invoke nvcc (look at the console output from one of your projects when you compile it) and prepend the commands I give above with that path. I'm not sure there's much utility in trying to build this directly into Visual Studio, unless you have a specific need to compile from ptx to an executable. There are also a few sample codes that deal with ptx in some fashion.
Also note for completeness that ptx is not actually what's executed by the device (though it is generally pretty close). It is an intermediate code that can be re-targeted to devices within a family by nvcc, or by a portion of the compiler that also lives in the GPU driver. To see the actual code executed by the device, we use the executable instead of the source code, and the tool to extract the machine assembly code is:
cuobjdump -sass mycode.exe
Similar caveats apply about prepending an appropriate path, if needed. I would start with the ptx; I think for what you want to do, it's enough.

Does CUDA use an interpreter or a compiler?

This is a bit of a silly question, but I'm wondering if CUDA uses an interpreter or a compiler?
I'm wondering because I'm not quite sure how CUDA manages to get source code to run on two cards with different compute capabilities.
From Wikipedia:
Programmers use 'C for CUDA' (C with Nvidia extensions and certain restrictions), compiled through a PathScale Open64 C compiler.
So, your answer is: it uses a compiler.
And to touch on the reason it can run on multiple cards (source):
CUDA C/C++ provides an abstraction, it's a means for you to express how you want your program to execute. The compiler generates PTX code which is also not hardware specific. At runtime the PTX is compiled for a specific target GPU - this is the responsibility of the driver which is updated every time a new GPU is released.
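
For example (architectures illustrative), nvcc can embed SASS for specific GPUs plus PTX for forward compatibility in a single fat binary:

nvcc -gencode arch=compute_50,code=sm_50 \
     -gencode arch=compute_60,code=sm_60 \
     -gencode arch=compute_60,code=compute_60 \
     -o myapp myapp.cu

At runtime, the driver picks the matching SASS if one is present, and otherwise JIT-compiles the embedded PTX for the installed GPU.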
These official documents, the CUDA C Programming Guide and The CUDA Compiler Driver (NVCC), explain all the details of the compilation process.
From the second document:
nvcc mimics the behavior of the GNU compiler gcc: it accepts a range of conventional compiler options, such as for defining macros and include/library paths, and for steering the compilation process.
This is not limited to CUDA: shaders in DirectX or OpenGL are also compiled to some kind of bytecode and converted to native code by the underlying driver.