CUDA/PTX 32-bit vs. 64-bit

CUDA compilers have options for producing 32-bit or 64-bit PTX. What is the difference between these? Is it like x86, where NVIDIA GPUs actually have 32-bit and 64-bit ISAs? Or does it relate to host code only?

Pointers are certainly the most obvious difference. The 64-bit machine model enables 64-bit pointers, which in turn enable a variety of things, such as address spaces larger than 4 GB and unified virtual addressing. Unified virtual addressing in turn enables other features, such as GPUDirect Peer-to-Peer. The CUDA IPC API also depends on the 64-bit machine model.
The x64 ISA is not completely different from the x86 ISA; it is mostly an extension of it. Those familiar with the x86 ISA will find the x64 ISA familiar, with natural extensions to 64 bits where needed. Likewise, the 64-bit machine model is an extension of the capabilities of the PTX ISA to 64 bits. Most PTX instructions work exactly the same way.
The 32-bit machine model can handle 64-bit data types (such as double and long long), so frequently no changes to properly written CUDA C/C++ source code are needed to compile for either the 32-bit or the 64-bit machine model. If you program directly in PTX, you may have to account for the pointer-size differences, at least.
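As a minimal sketch of that difference (the file and kernel names here are just illustrative), the same source can be compiled for either machine model with nvcc's -m32/-m64 switches, and the pointer size is essentially what changes:

// pointer_size.cu -- illustrative only
#include <cstdio>

__global__ void show_pointer_size(int *out)
{
    // sizeof(void *) is 4 under the 32-bit machine model and 8 under the 64-bit one
    *out = (int)sizeof(void *);
}

int main()
{
    int *d_out = 0, h_out = 0;
    cudaMalloc((void **)&d_out, sizeof(int));
    show_pointer_size<<<1, 1>>>(d_out);
    cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("device pointer size: %d bytes\n", h_out);
    cudaFree(d_out);
    return 0;
}

Built with "nvcc -m64 pointer_size.cu" the generated PTX uses 64-bit addressing and the program prints 8; with "nvcc -m32 pointer_size.cu" (where a 32-bit host toolchain is available) it prints 4. Host and device machine models must match.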

Related

Query whether CUDA device supports 32-bit or 64-bit addressing

I would like to discover, at runtime, whether a CUDA GPU supports 32-bit or 64-bit addressing. For context, I'm using LLVM to generate PTX at runtime, and need to know whether to set the target triple to nvptx or nvptx64.
There doesn't appear to be a direct query for this via cuDeviceGetAttribute, but is there some other query or heuristic that can give me this information?
64-bit addressing is a hard requirement for unified addressing to work. Also, all NVIDIA GPUs that are capable of 64-bit addressing support unified addressing. So testing whether unified addressing is supported for a given device context also tells you whether 64-bit addressing is supported.
The unifiedAddressing field of struct cudaDeviceProp, queried with cudaGetDeviceProperties, gives that information.
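A minimal sketch of that check with the runtime API (device 0 is just an example):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0
    // Non-zero unifiedAddressing implies 64-bit addressing, so nvptx64 is the right target
    printf("unifiedAddressing = %d -> use %s\n",
           prop.unifiedAddressing,
           prop.unifiedAddressing ? "nvptx64" : "nvptx");
    return 0;
}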

How is WebGL or CUDA code actually translated into GPU instructions?

When you write shaders and such in WebGL or CUDA, how is that code actually translated into GPU instructions?
I want to learn how you can write super low-level code that optimizes graphic rendering to the extreme, in order to see exactly how GPU instructions are executed, at the hardware/software boundary.
I understand that, for CUDA for example, you buy their graphics card (GPU), which is somehow implemented to optimize graphics operations. But then how do you program on top of that (in a general sense), without C?
The reason for this question is because on a previous question, I got the sense that you can't program the GPU directly by using assembly, so I am a bit confused.
If you look at docs like CUDA by Example, that's all just C code (though they do have things like cudaMalloc and cudaFree, and I don't know what those are doing behind the scenes). But under the hood, that C must be compiled to assembly, or at least machine code or something, right? And if so, how is that accessing the GPU?
Basically I am not seeing how, at a level below C or GLSL, the GPU itself is being instructed to perform operations. Can you please explain? Is there some snippet of assembly that demonstrates how it works, or anything like that? Or is there another set of some sort of "GPU registers" in addition to the 16 "CPU registers" on x86, for example?
The GPU driver compiles it to something the GPU understands, which is something else entirely than x86 machine code. For example, here's a snippet of AMD R600 assembly code:
00 ALU: ADDR(32) CNT(4) KCACHE0(CB0:0-15)
      0  x: MUL  R0.x, KC0[0].x, KC0[1].x
         y: MUL  R0.y, KC0[0].y, KC0[1].y
      1  z: MUL  R0.z, KC0[0].z, KC0[1].z
         w: MUL  R0.w, KC0[0].w, KC0[1].w
01 EXP_DONE: PIX0, R0
END_OF_PROGRAM
The machine code version of that is what the GPU executes. The driver orchestrates the transfer of the code to the GPU and instructs it to run it. That is all very device-specific and, in the case of NVIDIA, undocumented (at least, not officially documented).
The R0 in that snippet is a register, but on GPUs registers usually work a bit differently. They exist "per thread" and are in a way a shared resource (in the sense that using many registers in a thread means that fewer threads will be active at the same time). In order to have many threads active at once (which is how GPUs tolerate memory latency, whereas CPUs use out-of-order execution and big caches), GPUs usually have tens of thousands of registers.
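If you want to see this trade-off for your own code, ptxas can report the per-thread register usage at compile time; a sketch (the kernel is just an illustrative example), built with "nvcc -Xptxas -v saxpy.cu -c":

// saxpy.cu -- compile with -Xptxas -v to have ptxas print how many registers
// each thread of this kernel uses; higher register counts mean fewer resident
// threads per multiprocessor.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}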
Those languages are translated to machine code via a compiler. That compiler is just part of the drivers/runtimes of the various APIs and is totally implementation-specific. There are none of the familiar common instruction-set families we are used to in CPU land - like x86, ARM or whatever. Different GPUs all have their own incompatible instruction sets. Furthermore, there are no APIs with which to upload and run arbitrary binaries on those GPUs. And there is little publicly available documentation for that, depending on the vendor.
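For NVIDIA specifically, you can at least inspect the two layers that sit below CUDA C: PTX (the virtual, textual assembly) and SASS (the machine-level assembly for a particular GPU generation). A sketch, assuming a source file kernel.cu and a Fermi-class target:

nvcc -ptx kernel.cu -o kernel.ptx            // dump the PTX intermediate representation
nvcc -cubin -arch=sm_20 kernel.cu -o kernel.cubin
cuobjdump -sass kernel.cubin                 // disassemble the actual machine code (SASS)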
The reason for this question is because on a previous question, I got the sense that you can't program the GPU directly by using assembly, so I am a bit confused.
Well, you can. In theory, at least. If you do not care about the fact that your code will only work on a small family of ASICs, if you have all the necessary documentation for them, and if you are willing to implement some interface to the GPU that allows you to run those binaries, you can do it. If you want to go that route, you could look at the Mesa3D project, as it provides open-source drivers for a number of GPUs, including an LLVM-based compiler infrastructure to generate the binaries for a particular architecture.
In practice, there is no useful way of doing bare-metal GPU programming on a large scale.

What type of machine language do PCs generally run on

I've recently begun researching what it would take to write a JIT compiler. I've been studying machine language, but I haven't been able to find what type of machine language most standard PCs run on. I found this PDF which seems to explain a type of ML, but it says it's MIPS, which, after looking it up, seems to be some kind of old video game console/router machine language. So, my question is,
What machine language do most modern personal computers (i.e. laptops, desktops) run on?
Or, is it indeterminable? Are there many machine languages? Or maybe I'm wrong, and MIPS is standard?
The machine language used by a given processor is a function of its instruction-set architecture ("ISA").
Most desktop and laptop computers today running Microsoft Windows use "64-bit" processors implementing the "x86-64" ISA, such as those in Intel's "Core i5" and "Core i7" processor families. Commonly referred to as "x64", this is the 64-bit extension (created by AMD) of the original "IA-32" ISA (created by Intel).
Both "IA-32" and "x64" are examples of Complex Instruction Set Computing ("CISC") architectures. On the other hand, MIPS is an example of the much simpler Reduced Instruction Set Computing ("RISC") style of architectures.
When talking about JIT compilers, it is important to distinguish between the ISA of the virtual machine running the byte-code and the ISA of the underlying physical processor. Most virtual machines are based upon RISC architectures, because of their relative simplicity. However, most likely this VM-plus-JIT-compiler will be physically running on an x64-compatible CISC processor.
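To make "machine language" concrete: a compiler (or a JIT) ultimately emits raw instruction bytes for the target ISA. As an illustration, here is a trivial C function together with one possible x86-64 encoding of it (the exact bytes depend on the compiler; the System V calling convention passes the arguments in rdi and rsi):

/* add.c */
long add(long a, long b) { return a + b; }

/* one possible x86-64 translation, bytes followed by mnemonics:
   48 01 f7    add rdi, rsi
   48 89 f8    mov rax, rdi
   c3          ret
   A JIT compiler writes bytes like these into executable memory and jumps to them. */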

differences between virtual and real architecture of cuda

I'm trying to understand the differences between the virtual and real architectures of CUDA, and how the different configurations affect the performance of a program, e.g.
-gencode arch=compute_20,code=sm_20
-gencode arch=compute_20,code=sm_21
-gencode arch=compute_21,code=sm_21
...
The following explanation is given in the NVCC manual:
GPU compilation is performed via an intermediate representation, PTX ([...]), which can be considered as assembly for a virtual GPU architecture. Contrary to an actual graphics processor, such a virtual GPU is defined entirely by the set of capabilities, or features, that it provides to the application. In particular, a virtual GPU architecture provides a (largely) generic instruction set, and binary instruction encoding is a non-issue because PTX programs are always represented in text format.
Hence, a nvcc compilation command always uses two architectures: a compute architecture to specify the virtual intermediate architecture, plus a real GPU architecture to specify the intended processor to execute on. For such an nvcc command to be valid, the real architecture must be an implementation (someway or another) of the virtual architecture. This is further explained below.
The chosen virtual architecture is more of a statement on the GPU capabilities that the application requires: using a smallest virtual architecture still allows a widest range of actual architectures for the second nvcc stage. Conversely, specifying a virtual architecture that provides features unused by the application unnecessarily restricts the set of possible GPUs that can be specified in the second nvcc stage.
But I still don't quite get how performance will be affected by different configurations (or does it maybe only affect the selection of the physical GPU devices?). In particular, this statement is the most confusing to me:
In particular, a virtual GPU architecture provides a (largely) generic instruction set, and binary instruction encoding is a non-issue because PTX programs are always represented in text format.
The NVIDIA CUDA Compiler Driver NVCC User Guide Section on GPU Compilation provides a very thorough description of virtual and physical architecture and how the concepts are used in the build process.
The virtual architecture specifies the feature set that is targeted by the code. The table listed below shows some of the evolution of the virtual architecture. When compiling you should specify the lowest virtual architecture that has a sufficient feature set to enable the program to be executed on the widest range of physical architectures.
Virtual Architecture Feature List (from the User Guide)

compute_10   Basic features
compute_11   + atomic memory operations on global memory
compute_12   + atomic memory operations on shared memory
             + vote instructions
compute_13   + double precision floating point support
compute_20   + Fermi support
compute_30   + Kepler support
The physical architecture specifies the implementation of the GPU. This provides the compiler with the instruction set, instruction latency, instruction throughput, resource sizes, etc. so that the compiler can optimally translate the virtual architecture to binary code.
It is possible to specify multiple virtual and physical architecture pairs to the compiler and have the compiler pack the final PTX and binary code into a single fatbinary. At runtime the CUDA driver will choose the best representation for the physical device that is installed. If binary code for that device is not provided in the fatbinary, the driver can JIT-compile the best available PTX implementation.
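For example (in the spirit of the nvcc documentation; the source file name is a placeholder), a fatbinary carrying Fermi and Kepler machine code plus compute_30 PTX that the driver can JIT for future GPUs could be built like this:

nvcc x.cu -gencode arch=compute_20,code=sm_20 \
          -gencode arch=compute_30,code=sm_30 \
          -gencode arch=compute_30,code=compute_30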
"Virtual architecture" code will get compiled by a just-in-time compiler before being loaded on the device. AFAIK, it is the same compiler as the one NVCC invokes when building "physical architecture" code offline - so I don't know if there will be any differences in the resulting application performance.
Basically, every generation of CUDA hardware is binary-incompatible with the previous generation - imagine the next generation of Intel processors sporting the ARM instruction set. This way, virtual architectures provide an intermediate representation of the CUDA application that can be compiled for compatible hardware. Every hardware generation introduces new features (e.g. atomics, CUDA Dynamic Parallelism) that require new instructions - that's why you need new virtual architectures.
Basically, if you want to use CDP you should compile for SM 3.5. You can compile it to device binary that will have assembly code for specific CUDA device generation or you can compile it to PTX code that can be compiled into device assembly for any device generation that provides these features.
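For CUDA Dynamic Parallelism in particular, the build looks roughly like this (the source file name is a placeholder) - separate compilation for sm_35 plus the device runtime library:

nvcc -arch=sm_35 -rdc=true cdp_example.cu -lcudadevrt -o cdp_example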
The virtual architecture specifies what capabilities a GPU has and the real architecture specifies how it does it.
I can't think of any specific examples offhand. A (probably not correct) example: a virtual GPU might specify the number of cores a card has, so code is generated targeting that number of cores, whereas the real card may have a few more for redundancy (or a few less due to manufacturing errors), plus some method of mapping to the cores that are actually in use, which can be layered on top of the more generic code generated in the first step.
You can think of the PTX code sort of like assembly code, which targets a certain architecture, which can then be compiled to machine code for a specific processor. Targeting the assembly code for the right kind of processor will, in general, generate better machine code.
Well, usually what NVIDIA writes as documentation causes people (including myself) to become more confused! (Maybe just me!)
You are concerned with performance; basically what this says is don't worry (probably), but you should. Basically, the GPU architecture is like nature: they run something on it and something happens, then they try to explain it, and then they feed it to you.
In the end you should probably run some tests and see which configuration gives the best result.
The virtual architecture is designed to let you think freely. You can use as many threads as you want and assign virtually any numbers of threads and blocks; it doesn't matter, it will be translated to PTX and the device will run it.
The only problem is that if you assign more than 1024 threads to a single block you will get zeros as the result, because the device (the real architecture) doesn't support it.
Or, for example, if your device only supports compute capability 1.2, you can declare double-precision variables in your code, but again you will get zeros as the result because the device simply can't run it.
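When a launch configuration exceeds what the real architecture supports, the kernel never actually runs; checking for errors after the launch makes that visible instead of silently reading back zeros. A minimal sketch (the kernel and sizes are just illustrative):

#include <cstdio>

__global__ void dummy_kernel(int *out) { out[threadIdx.x] = threadIdx.x; }

int main()
{
    int *d_out = 0;
    cudaMalloc((void **)&d_out, 2048 * sizeof(int));
    dummy_kernel<<<1, 2048>>>(d_out);        // 2048 > 1024 threads per block: the launch fails
    cudaError_t err = cudaGetLastError();    // picks up the invalid-configuration error
    if (err != cudaSuccess)
        printf("launch failed: %s\n", cudaGetErrorString(err));
    cudaFree(d_out);
    return 0;
}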
Performance-wise, you have to know that each group of 32 threads (i.e. a warp) should access nearby locations in memory, or else the accesses will be serialized, and so on (see the sketch below).
So I hope you've got the point by now. It is a relatively new field and the GPU is a really sophisticated piece of hardware architecture; everybody is trying to make the best of it, but it's a game of testing plus a little knowledge of the actual architecture behind CUDA. I suggest you search for GPU architecture and see how the virtual threads and thread blocks are actually implemented.
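To illustrate the warp memory-access point above, here are two hypothetical kernels: in the first, the 32 threads of a warp read consecutive elements, so their loads coalesce into a few memory transactions; in the second, a stride scatters the addresses and the hardware has to issue many separate (effectively serialized) transactions:

__global__ void coalesced_copy(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];                  // neighbouring threads touch neighbouring addresses
}

__global__ void strided_copy(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[(i * stride) % n];   // scattered addresses within a warp: poorly coalesced
}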

CUDA-enabled graphics processor as VMware?

I'm taking a course that teaches CUDA. I would like to use it on my personal laptop, but I don't have an NVIDIA graphics processor; mine is ATI. So I was wondering: is there any virtual hardware simulator that I can use, or is there no other way than using a PC with a CUDA-capable graphics processor?
Thank you very much
The CUDA toolkit used to ship with a host CPU emulation mode, but that was deprecated early in the 3.0 release cycle and has been fully removed from toolkits for the best part of two years.
Your only real option today is to use Ocelot. It has a PTX assembly translator and a pretty reliable reimplementation of the CUDA runtime for x86 CPUs, and there is also a rather experimental PTX-to-AMD-IL translator (I have no experience with the latter). On a modern Linux system with an up-to-date GNU toolchain, Ocelot is reasonably easy to get running. I am not sure whether there is a functioning Windows port.
CUDA has its own emulation mode which runs everything on the CPU. The problem is that in that case you don't have real concurrency, so programs that run successfully in emulation mode can fail (and usually do) in normal mode. You can develop your code in emulation mode, but then you have to debug it on a computer with a CUDA card.