I would like the generated CUDA code to be saved to a file for examination. Is this possible with OpenACC and the PGI compilers?
You should be able to pass -ta=nvidia,keepgpu,keepptx to any of the PGI GPU compilers, which will retain the intermediate code emitted by the toolchain during the build.
Also refer to the command line help, e.g.:
pgcc -help
Note that the PGI compilers have recently moved to a more integrated toolchain, which eliminates the generation of CUDA C intermediate source files. The approach above still works, but the intermediate files it keeps are LLVM IR and PTX rather than C. If you want CUDA C intermediate code, add the nollvm option as well:
-ta=nvidia,keepgpu,keepptx,nollvm
The "kept" files will generally have .gpu and .h extensions for llvm/CUDA C code, and .ptx extension for PTX.
I am porting some calculations from C# to CUDA.
There are many C# classes that I want to port; for each C# class I create a .cu and a .cuh file in my CUDA project.
All the classes are related, and all of them are used in the calculations.
I need to preserve the structure of my C# code, because otherwise it would be very easy to make mistakes.
P.S. If I put all the code in one file, everything works as expected, but reading the code or fixing issues becomes a real pain.
I want to compile the CUDA project and use it from my C# code via the ManagedCuda library.
I can compile a test CUDA project with one .cu file to a .ptx file, load it in C# via ManagedCuda, and call a function from it.
But when I compile my real project with multiple .cu files, I get a separate .ptx file for each .cu file in the project. Worse, I am not able to load these .ptx files via ManagedCuda; I get the following error:
ErrorInvalidPtx: This indicates that a PTX JIT compilation failed.
This error is expected, because there are cross references between the .ptx files, and they only make sense if they are loaded together.
My goal is to compile my CUDA project to one file, but at the same time I do not want to be limited to the specific video card I have. For this I need to use PTX (or a cubin with PTX included); the PTX is then compiled for the specific device at the moment you load it.
I tried setting Generate Relocatable Device Code to Yes (-rdc=true) and compiling to both PTX and cubin, but the result is the same: I get several independent files, one for each .cu file.
The very short answer is no, you can't do that. The toolchain cannot merge PTX code at the compilation phase.
If you produce multiple PTX files, you will need to use the JIT linker facilities of the CUDA runtime to produce a module which can be loaded into your context. I have no idea whether ManagedCuda supports that or not.
Edit: it appears that ManagedCuda does support runtime linking (see here).
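At the driver API level, a minimal sketch of that runtime linking step might look like the following (error checking omitted; the file names a.ptx and b.ptx are placeholders, and a current context is assumed to already exist):

#include <cuda.h>

CUmodule loadLinkedModule(void)
{
    CUlinkState state;
    void *cubin;
    size_t cubinSize;
    CUmodule module;

    /* Create a link state, then feed it each independently compiled PTX file */
    cuLinkCreate(0, NULL, NULL, &state);
    cuLinkAddFile(state, CU_JIT_INPUT_PTX, "a.ptx", 0, NULL, NULL);
    cuLinkAddFile(state, CU_JIT_INPUT_PTX, "b.ptx", 0, NULL, NULL);

    /* Resolve the cross references and produce a single cubin image */
    cuLinkComplete(state, &cubin, &cubinSize);

    /* The image is owned by the link state, so load it before destroying */
    cuModuleLoadData(&module, cubin);
    cuLinkDestroy(state);
    return module;
}

Note that the PTX inputs need to be built with relocatable device code (-rdc=true), which you are already doing.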
I'm writing a single-header library that executes a CUDA kernel. I was wondering if there is a way to get around the <<<>>> syntax, or to get C source output from nvcc?
You can avoid the host language extensions by using the CUDA driver API instead. It is a little more verbose and you will require a little more boilerplate code to manage the context, but it is not too difficult.
Conventionally, you would compile to PTX or a binary payload to load at runtime, however NVIDIA now also ship an experimental JIT CUDA C compiler library, libNVVM, which you could try if you want JIT from source.
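As a rough sketch of what the driver API route looks like (error checking omitted; kernel.ptx and my_kernel are placeholder names, and the kernel is assumed to take a float pointer and an int):

#include <cuda.h>

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;

    /* Boilerplate: initialise the driver and create a context */
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    /* Load a pre-compiled PTX module and fetch the kernel entry point */
    cuModuleLoad(&mod, "kernel.ptx");
    cuModuleGetFunction(&fn, mod, "my_kernel");

    int n = 1024;
    CUdeviceptr data;
    cuMemAlloc(&data, n * sizeof(float));

    /* Equivalent of my_kernel<<<4, 256>>>(data, n), with no language extension */
    void *args[] = { &data, &n };
    cuLaunchKernel(fn, 4, 1, 1, 256, 1, 1, 0, NULL, args, NULL);
    cuCtxSynchronize();

    cuMemFree(data);
    cuCtxDestroy(ctx);
    return 0;
}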
I have a C++ program that uses the LLVM libraries to generate an LLVM IR module and it compiles and executes it.
The code uses vector types and I want to check if it translates to SIMD instructions correctly on my architecture.
How do I find this out? Is there a way to see the assembly code that is generated out of this IR?
You're probably looking for some combination of clang's -emit-llvm, which outputs IR instead of native assembly, and -S, which outputs assembly instead of object files.
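Since your module is generated in-process rather than by clang, one option (just a sketch) is to dump it to a .ll file, for example with llvm::Module::print, and then lower it with the standalone llc tool so you can inspect the assembly for SIMD instructions:

llc -O2 module.ll -o module.s

clang -S module.ll -o module.s would work equally well here.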
I am working with Capabilities 3.5, CUDA 5 and VS 2010 (and obviously Windows).
I am interested in reading the compiled code to understand better the implication of my C code changes.
What configuration do I need in VS to compile the code for readability? Is setting the compilation output to PTX enough?
What tool do I need to reverse engineer the generated PTX to be able to read it?
In general, to create a PTX version of a particular .cu file, the command is:
nvcc -ptx mycode.cu
which will generate a mycode.ptx file containing the PTX code corresponding to the file you used. It's probably instructive to use the -src-in-ptx option as well:
nvcc -ptx -src-in-ptx mycode.cu
This will intersperse the lines of source code with the lines of PTX they correspond to.
To comprehend PTX, start with the documentation.
Note that the compiler may generate PTX code that doesn't correspond to the source code very well, or is otherwise confusing, due to optimizations. To gain insight, you may wish to compile some test cases with the -G switch as well, to see how the non-optimized version compares.
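As a concrete (hypothetical) illustration, you could compile the same file twice and diff the two outputs:

nvcc -ptx mycode.cu -o mycode_opt.ptx
nvcc -ptx -G mycode.cu -o mycode_debug.ptx
diff mycode_opt.ptx mycode_debug.ptx

The -G version is usually much longer, but it maps far more directly onto the source.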
Since the Windows environment may vary from machine to machine, I think it's easiest to look at the path your particular version of msvc++ uses to invoke nvcc (look at the console output from one of your projects when you compile it) and prepend the commands I give above with that path. I'm not sure there's much utility in trying to build this directly into Visual Studio, unless you have a specific need to compile from PTX to an executable. There are also a few CUDA sample codes that deal with PTX in some fashion.
Also note, for completeness, that PTX is not actually what's executed by the device (though it is generally pretty close). It is an intermediate representation that can be re-targeted to devices within a family, either by nvcc or by the portion of the compiler that lives in the GPU driver. To see the actual code executed by the device, we use the compiled executable rather than the source code, and the tool to extract the machine assembly code (SASS) is:
cuobjdump -sass mycode.exe
The same caveat about prepending an appropriate path applies, if needed. I would start with the PTX; I think for what you want to do, it's enough.
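For completeness, cuobjdump can also extract any PTX that is embedded in the binary, which saves recompiling from source:

cuobjdump -ptx mycode.exe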
I want to intercept OpenCL programs at the PTX level on an NVIDIA GPU.
I imagine the routine would probably look like this.
First, I write an OpenCL program (both host and device code) and use the NVIDIA compiler to produce the corresponding PTX code. Then I implement what I want to do by modifying the PTX code (please don't ask why I didn't do this in the device C code; I have my reasons). But here is the problem: after the PTX has been modified, how do I compile it to binary code?
You can use ptxas, which is included in the CUDA toolkit. It compiles .ptx into .cubin, which can then be loaded with the driver API.
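For example (the sm_35 target is just an illustration; pick the one that matches your device):

ptxas -arch=sm_35 modified.ptx -o modified.cubin

The resulting modified.cubin can then be loaded on the CUDA side with cuModuleLoad.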