Is there a bug in CUDA? I have run the following code on my GTX 580 and r2 is zero at the end. I expected it to be one due to carry propagation. I have tested the code with CUDA Toolkit 4.2.9 and 5.5 and use "nvcc -arch=sm_20 bug.cu -o bug && ./bug" to compile and run it.
#include <stdio.h>
#include <cuda.h>

__global__ void bug()
{
    unsigned int r1 = 0;
    unsigned int r2 = 0;
    asm( "\n\t"
         "sub.cc.u32  %0, 0, 1;\n\t"   // 0 - 1 = 0xFFFFFFFF, updates CC.CF
         "addc.cc.u32 %1, 0, 0;\n\t"   // 0 + 0 + CC.CF
         : "=r"(r1), "=r"(r2) );
    printf("r1 >> %04X\n", r1);
    printf("r2 >> %04X\n", r2);
}

int main(void)
{
    float *a_d;
    cudaMalloc((void **) &a_d, 1);
    bug <<< 1,1 >>> ();
    cudaDeviceSynchronize();           // make sure the device printf output is flushed
    cudaFree(a_d);
}
Output
r1 >> FFFFFFFF
r2 >> 0000
I believe you're making some assumptions about the CC.CF flag referenced in the PTX ISA documentation that may not be valid.
Note that the definition of the specific states (e.g. 0 or 1) of this bit is never given, as far as I can see. Furthermore, I don't find any mapping between the definitions of "carry-in/carry-out" and "borrow-in/borrow-out".
Stated another way, I think you are assuming that a "borrow" status in this flag is identical to a "carry" status. In other words, you are assuming something like:
CF:
0 = (NO CARRY) or (NO BORROW)
1 = (CARRY) or (BORROW)
But such a truth table or mapping is never given. Furthermore the manual states:
The condition code register ... is mainly intended for use in straight-line code sequences for computing extended-precision integer addition, subtraction, and multiplication.
I don't think your code satisfies the intent, nor do I think the above assumption of truth table for CC.CF is valid.
In fact what I think is happening is a truth table like this:
CF:
0 = (CARRY) or (NO BORROW)
1 = (NO CARRY) or (BORROW)
(the 0 and 1 here are arbitrary; that is also not defined in the manual.)
All examples of code I have tried (about 6 cases, including yours) have fit the definition I have given above.
Having said this, I would think it unwise to depend on this, as it is mostly undocumented. A safe rule for computer architecture is that undocumented behavior may change in the future.
I think I have found an explanation. There is a note in the PTX manual which says for the sub.cc instruction: "Behavior is the same for unsigned and signed integers."
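For reference, here is a minimal sketch (my own, not taken from the PTX manual) of the documented straight-line use of CC.CF: a 64-bit subtraction assembled from 32-bit halves with sub.cc followed immediately by subc. Used this way, the borrow propagates correctly regardless of how the flag happens to be encoded internally.
#include <stdio.h>

// Sketch: 64-bit subtraction built from 32-bit halves, i.e. the straight-line
// extended-precision pattern for which the manual says CC.CF is intended.
__global__ void wide_sub(unsigned long long a, unsigned long long b,
                         unsigned long long *out)
{
    unsigned int alo = (unsigned int)a, ahi = (unsigned int)(a >> 32);
    unsigned int blo = (unsigned int)b, bhi = (unsigned int)(b >> 32);
    unsigned int rlo, rhi;
    asm( "sub.cc.u32 %0, %2, %4;\n\t"   // low halves, writes CC.CF
         "subc.u32   %1, %3, %5;\n\t"   // high halves, consumes CC.CF
         : "=r"(rlo), "=r"(rhi)
         : "r"(alo), "r"(ahi), "r"(blo), "r"(bhi) );
    *out = ((unsigned long long)rhi << 32) | rlo;
}

int main(void)
{
    unsigned long long *d_out, h_out = 0;
    cudaMalloc(&d_out, sizeof(h_out));
    wide_sub<<<1, 1>>>(0x100000000ULL, 1ULL, d_out);
    cudaMemcpy(&h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("%llx\n", h_out);            // prints ffffffff, so the borrow did propagate
    cudaFree(d_out);
}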
Related
I have the following code:
#include <stdio.h>
#include <cuda.h>
#include <cuda_runtime.h>

__global__ void cuda_test() {
    int result;
    asm(
        ".reg .u32 r1;\n\t"
        "add.cc.u32 r1, 0, 0;\n\t"
        "subc.u32 %0, 0, 0;\n\t"
        : "=r"(result)
    );
    printf("r= %x\n", result);
}

int main() {
    cuda_test<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
This code prints
r= ffffffff
Why? As far as I understand, the operation add.cc.u32 r1, 0, 0 should set the carry flag to 0. I am under the impression that the subc.u32 operation uses the inverse of CC.CF, but from the documentation it shouldn't be that way.
I cannot find information anywhere in the PTX documentation on how what PTX calls the CC.CF flag is actually generated. Looking at the generated machine code (SASS) I see that subtraction is implemented via addition, and the use of an extend flag CC.X.
Based on some quick experiments, this .X flag always seems to be the normal carry-out from the adder. Since a - b = a + ~b + 1, on a subtraction .X will be set if a >= b. It represents the carry-out from the adder, which is the complement of an x86-style borrow flag on subtraction (an x86 borrow is set when a < b).
In other words, the extended arithmetic instructions of the GPU appear to use the same convention that is used by the ARM and PowerPC architectures for their extended arithmetic instructions. The Wikipedia article on the carry flag covers the two design alternatives for handling of the flag during subtraction.
In the code in the question, add.cc.u32 clears CC.CF, which signals to the subsequent subc.u32 that a borrow has occurred, causing it to compute a + ~b.
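To make that concrete, here is a quick experiment along those lines (my own sketch, not part of the original answer) that reads the flag back with addc immediately after an add and after a sub:
#include <stdio.h>

// Sketch: observe CC.CF by adding it to zero with addc right after the
// instruction that produced it.
__global__ void cc_probe()
{
    unsigned int f_add_nc, f_add_c, f_sub_nb, f_sub_b;
    asm( "add.cc.u32 %0, 0, 0;\n\t"           // no unsigned carry out
         "addc.u32   %0, 0, 0;\n\t" : "=r"(f_add_nc) );
    asm( "add.cc.u32 %0, 0xFFFFFFFF, 1;\n\t"  // unsigned carry out
         "addc.u32   %0, 0, 0;\n\t" : "=r"(f_add_c) );
    asm( "sub.cc.u32 %0, 1, 0;\n\t"           // no borrow (a >= b)
         "addc.u32   %0, 0, 0;\n\t" : "=r"(f_sub_nb) );
    asm( "sub.cc.u32 %0, 0, 1;\n\t"           // borrow (a < b)
         "addc.u32   %0, 0, 0;\n\t" : "=r"(f_sub_b) );
    // Under the convention described above this should print 0 1 1 0,
    // which is consistent with the outputs reported in both questions.
    printf("%u %u %u %u\n", f_add_nc, f_add_c, f_sub_nb, f_sub_b);
}

int main()
{
    cc_probe<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}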
You may wish to file an enhancement request with NVIDIA to clarify the PTX documentation regarding details of CC.CF generation and handling.
I've been working on a CUDA program that randomly crashes with an unspecified launch failure, fairly frequently. Through careful debugging, I localized which kernel was failing, and furthermore found that the failure occurred only if certain transcendental functions were called from within the CUDA kernel (e.g. sinf() or atanhf()).
This led me to write a much simpler program (see below) to confirm that these transcendental functions really were causing an issue, and it looks like that is indeed the case. When I compile and run the code below, which just makes repeated calls to a kernel that uses tanh and atanh, sometimes the program works, and sometimes it prints "Error with kernel" along with a message from the driver that says:
NVRM: XiD (0000:01:00): 13, 0002 000000 000050c0 00000368 00000000 0000080
With regards to frequency, it probably crashes 50% of the time that I run the executable.
From what I've read online, it sounds like XiD 13 is analogous to a host-based seg fault. However, given the array indexing, I can't see how that could be the case. Furthermore the program doesn't crash if I replace the transcendental functions in the kernel with other functions (e.g. repeated floating point subtraction and addition). That is, I don't get the XiD error message, and the program ultimately returns the correct value of atanh(0.7).
I'm running cuda-5.0 on Ubuntu 11.10 x64 Desktop. Driver version is 304.54, and I'm using a GeForce 9800 GTX.
I'm inclined to say that this is a hardware issue or a driver bug. What's strange is that the example applications from nvidia work fine, perhaps because they do not use the affected transcendental functions.
The final bit of potentially important information is that if I run either my main project, or this test program under cuda-memcheck, it reports no errors, and never crashes. Honestly, I'd just run my project under cuda-memcheck, but the performance hit makes it impractical.
Thanks in advance for any help/insight here. If any one has a 9800 GTX and would be willing to run this code to see if it works, it would be greatly appreciated.
#include <iostream>
#include <stdlib.h>

using namespace std;

__global__ void test_trans (float *a, int length) {
    if ((threadIdx.x + blockDim.x*blockIdx.x) < length) {
        float temp = 0.7;
        for (int i = 0; i < 100; i++) {
            temp = atanh(temp);
            temp = tanh(temp);
        }
        a[threadIdx.x + blockDim.x*blockIdx.x] = atanh(temp);
    }
}

int main () {
    float *array_dev;
    float *array_host;
    unsigned int size = 10000000;
    if (cudaSuccess != cudaMalloc ((void**)&array_dev, size*sizeof(float)) ) {
        cerr << "Error with memory Allocation\n"; exit (-1);
    }
    array_host = new float [size];
    for (int i = 0; i < 10; i++) {
        test_trans <<< size/512+1, 512 >>> (array_dev, size);
        if (cudaSuccess != cudaDeviceSynchronize()) {
            cerr << "Error with kernel\n"; exit (-1);
        }
    }
    cudaMemcpy (array_host, array_dev, sizeof(float)*size, cudaMemcpyDeviceToHost);
    cout << array_host[size-1] << "\n";
}
Edit: I dropped this project for a few months, but yesterday upon updating to driver version 319.23, I'm no longer having this problem. I think the issue I described must have been a bug that was fixed. Hope this helps.
The asker determined that this was a temporary issue fixed by a newer driver release (319.23). See the edit to the original question.
I am reading the CUDA 5.0 samples (AdvancedQuickSort) now. However, I cannot fully understand this sample because of the following code:
// Now compute my own personal offset within this. I need to know how many
// threads with a lane ID less than mine are going to write to the same buffer
// as me. We can use popc to implement a single-operation warp scan in this case.
unsigned lane_mask_lt;
asm( "mov.u32 %0, %%lanemask_lt;" : "=r"(lane_mask_lt) );
unsigned int my_mask = greater ? gt_mask : lt_mask;
unsigned int my_offset = __popc(my_mask & lane_mask_lt);
which is in the __global__ void qsort_warp function. In particular, I don't understand the inline assembly. Can anyone explain the meaning of this assembly code?
%lanemask_lt is a special, read-only register in PTX assembly which is initialized with a 32-bit mask with bits set in positions less than the thread’s lane number in the warp. The inline PTX you have posted is simply reading the value of that register and storing it in a variable where it can be used in the subsequent C++ code you posted.
Every version of the CUDA toolkit ships with a PTX assembly language reference guide you can use to look up things like this.
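As an illustration (a sketch of mine, not taken from the sample), the same %lanemask_lt plus __popc trick can give every thread whose predicate is true its rank among those threads within the warp. I'm using __ballot_sync here, which is the modern form of the warp vote; the CUDA 5.0 sample builds its masks with __ballot instead.
#include <stdio.h>

// Sketch: per-warp "rank among the threads whose predicate is true",
// computed with a warp vote plus __popc over %lanemask_lt, the same
// single-operation warp scan used in the quicksort sample.
__global__ void warp_rank_demo()
{
    unsigned int lane_mask_lt;
    asm( "mov.u32 %0, %%lanemask_lt;" : "=r"(lane_mask_lt) );

    bool predicate = (threadIdx.x & 1);                        // e.g. "this element goes to the odd buffer"
    unsigned int vote = __ballot_sync(0xFFFFFFFF, predicate);  // one bit per lane whose predicate is true
    unsigned int my_offset = __popc(vote & lane_mask_lt);      // voters with a lane ID lower than mine

    if (predicate)
        printf("lane %2d writes to slot %2d\n", threadIdx.x, my_offset);
}

int main()
{
    warp_rank_demo<<<1, 32>>>();   // a single full warp
    cudaDeviceSynchronize();
    return 0;
}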
I have a 2D matrix SIZE x SIZE, which I'm trying to copy to the GPU.
I allocate the matrix this way:
#define SIZE 1024
float (*a)[SIZE] = (float (*)[SIZE]) malloc(SIZE * SIZE * sizeof(float));
And I have this in my ACC region:
void mmul_acc(restrict float a[][SIZE],
              restrict float b[][SIZE],
              restrict float c[][SIZE]) {
    #pragma acc data copyin(a[0:SIZE][0:SIZE], b[0:SIZE][0:SIZE]) \
                     copyout(c[0:SIZE][0:SIZE])
    {
        ... code here...
    }
}
When compiling with the PGI compiler, using -Minfo=acc, the compiler tells me:
Generating copyin(a[0:1024][0:])
What does a[0:1024][0:] mean? Why not a[0:1024][0:1024] ???
If instead of declaring matrices I declare flat arrays of size SIZE*SIZE, using
#pragma acc copyin(a[0:SIZE*SIZE])
the compiler generates the following message:
Generating copyin(a[0:16777216])
The code actually works the same way, same performance, same result.
Apparently the compiler generates the same code either way, as it should, but the message is not straightforward.
I'm using the PGI accelerator 12.8, in a Linux64 machine. I'm compiling with -Minfo=acc
Note: this question was edited and now it doesn't really make much sense, but maybe it can be useful to more people.
This issue is fixed in the latest PGI compiler, 12.9.0. The compiler now returns the following message:
Generating copyin(a[0:1024][0:1024])
I know that there is a function clock() in CUDA that you can put in kernel code to query the GPU time. But does such a thing exist in OpenCL? Is there any way to query the GPU time in OpenCL? (I'm using NVIDIA's toolkit.)
There is no OpenCL way to query clock cycles directly. However, OpenCL does have a profiling mechanism that exposes incremental counters on compute devices. By comparing the differences between ordered events, elapsed times can be measured. See clGetEventProfilingInfo.
Just for others coming here for help: a short introduction to profiling kernel runtime with OpenCL.
Enable profiling mode:
cmdQueue = clCreateCommandQueue(context, *devices, CL_QUEUE_PROFILING_ENABLE, &err);
Profiling kernel:
cl_event prof_event;
clEnqueueNDRangeKernel(cmdQueue, kernel, 1 , 0, globalWorkSize, NULL, 0, NULL, &prof_event);
Read profiling data in:
cl_ulong ev_start_time=(cl_ulong)0;
cl_ulong ev_end_time=(cl_ulong)0;
clFinish(cmdQueue);
err = clWaitForEvents(1, &prof_event);
err |= clGetEventProfilingInfo(prof_event, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &ev_start_time, NULL);
err |= clGetEventProfilingInfo(prof_event, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &ev_end_time, NULL);
Calculate kernel execution time:
float run_time_gpu = (float)(ev_end_time - ev_start_time)/1000; // in usec
Profiling of individual work-items / work-groups is NOT possible yet.
You can set globalWorkSize = localWorkSize for profiling; then you have only one work-group.
By the way: profiling a single work-item (or just a few work-items) isn't very helpful. With only a few work-items you won't be able to hide memory latencies and overheads, so the measurements won't be meaningful.
Try this (it only works with NVIDIA OpenCL, of course):
uint clock_time()
{
    uint clock_time;
    asm("mov.u32 %0, %%clock;" : "=r"(clock_time));
    return clock_time;
}
The NVIDIA OpenCL SDK has an example called "Using Inline PTX with OpenCL". The clock register is accessible through inline PTX as the special register %clock. %clock is described in the PTX: Parallel Thread Execution ISA manual. You should be able to replace the %%laneid read in that example with %%clock.
I have never tested this with OpenCL but use it in CUDA.
Please be warned that the compiler may reorder or remove the register read.
On NVIDIA you can use the following:
typedef unsigned long uint64_t; // if you haven't done so earlier

inline uint64_t n_nv_Clock()
{
    uint64_t n_clock;
    asm volatile("mov.u64 %0, %%clock64;" : "=l" (n_clock)); // make sure the compiler will not reorder this
    return n_clock;
}
The volatile keyword tells the optimizer that you really mean it and don't want it moved / optimized away. This is a standard way of doing so both in PTX and e.g. in gcc.
Note that this returns clocks, not nanoseconds. You need to query the device clock frequency (using clGetDeviceInfo(device, CL_DEVICE_MAX_CLOCK_FREQUENCY, sizeof(freq), &freq, 0)). Also note that on older devices there are two frequencies (or three if you count the memory frequency, which is irrelevant in this case): the device clock and the shader clock. What you want is the shader clock.
With the 64-bit version of the register you don't need to worry about overflowing as it generally takes hundreds of years. On the other hand, the 32-bit version can overflow quite often (you can still recover the result - unless it overflows twice).
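To spell out that recovery (a sketch, assuming the 32-bit counter wrapped at most once between the two reads): because unsigned arithmetic is modulo 2^32, the plain subtraction still gives the right delta.
// Sketch: recovering a 32-bit clock delta after at most one wraparound.
// Unsigned subtraction wraps modulo 2^32, so end - start is still correct
// even if the counter overflowed once between the two reads.
unsigned int elapsed_clocks(unsigned int start, unsigned int end)
{
    return end - start;
}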
Now, 10 years after the question was posted, I did some tests on NVIDIA hardware. I tried running the answers given by the users 'Spectral' and 'the swine'. The answer given by 'Spectral' does not work; I always got the same invalid values returned by the clock_time function.
uint clock_time()
{
    uint clock_time;
    asm("mov.u32 %0, %%clock;" : "=r"(clock_time)); // this is wrong
    return clock_time;
}
After subtracting the start and end times I got zero.
So I had a look at the PTX assembly, which in PyOpenCL you can get this way:
kernel_string = """
your OpenCL code
"""
prg = cl.Program(ctx, kernel_string).build()
print(prg.binaries[0].decode())
It turned out that the clock command was optimized away! So there was no '%clock' instruction in the printed assembly.
Looking into Nvidia's PTX documentation I found the following:
'Normally any memory that is written to will be specified as an out operand, but if there is a hidden side effect on user memory (for example, indirect access of a memory location via an operand), or if you want to stop any memory optimizations around the asm() statement performed during generation of PTX, you can add a "memory" clobbers specification after a 3rd colon, e.g.:'
So the function that actually works is this:
uint clock_time()
{
    uint clock_time;
    asm volatile ("mov.u32 %0, %%clock;" : "=r"(clock_time) :: "memory");
    return clock_time;
}
The assembly contained lines like:
// inline asm
mov.u32 %r13, %clock;
// inline asm
The version given by 'the swine' also works.
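Putting the pieces of this thread together, here is a minimal CUDA sketch (mine, not from any of the answers above; the same inline PTX should behave identically under NVIDIA OpenCL) that times a region inside a kernel with two %clock64 reads, using volatile plus the "memory" clobber so that neither read is moved or removed:
#include <stdio.h>

// Read the 64-bit cycle counter; volatile + "memory" keeps the read in place.
__device__ unsigned long long clock64_ptx()
{
    unsigned long long t;
    asm volatile("mov.u64 %0, %%clock64;" : "=l"(t) :: "memory");
    return t;
}

__global__ void time_region(float *data, unsigned long long *cycles)
{
    unsigned long long start = clock64_ptx();

    // region being timed: some arbitrary per-thread work
    float x = data[threadIdx.x];
    for (int i = 0; i < 100; ++i)
        x = x * 1.0001f + 0.5f;
    data[threadIdx.x] = x;

    unsigned long long stop = clock64_ptx();
    if (threadIdx.x == 0)
        *cycles = stop - start;   // clocks, not nanoseconds
}

int main()
{
    float *d_data;
    unsigned long long *d_cycles, h_cycles = 0;
    cudaMalloc(&d_data, 32 * sizeof(float));
    cudaMemset(d_data, 0, 32 * sizeof(float));
    cudaMalloc(&d_cycles, sizeof(h_cycles));
    time_region<<<1, 32>>>(d_data, d_cycles);
    cudaMemcpy(&h_cycles, d_cycles, sizeof(h_cycles), cudaMemcpyDeviceToHost);
    printf("region took %llu clocks\n", h_cycles);
    cudaFree(d_data);
    cudaFree(d_cycles);
    return 0;
}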