getrusage vs. clock_gettime() - getrusage

I am trying to obtain the CPU time consumed by a process on Ubuntu. As far as I know, there are two functions that can do this job: getrusage() and clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp). In my code, calling getrusage() immediately after clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &tp) always gives different results.
Can anyone please help me understand which function gives higher resolution, and what advantages/disadvantages these functions have?
Thanks.

getrusage(...)
Splits CPU time into user and system components, in ru_utime and ru_stime respectively.
Roughly microsecond resolution: struct timeval has a tv_usec field, but in practice the resolution is usually limited by the kernel tick to about 4ms/250Hz (source)
Available on SVr4, 4.3BSD, POSIX.1-2001: this means it is available on both Linux and OS X
See the man page
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ...)
Reports the combined total of user and system time, with no way to separate the two components.
Nanosecond resolution: struct timespec is a clone of struct timeval but with tv_nsec instead of tv_usec. Exact resolution depends on how the timer is implemented on given system, and can be queried with clock_getres.
Requires you to link to librt
Clock may not be available. In this case, clock_gettime will return -1 and set errno to EINVAL, so it's a good idea to provide a getrusage fallback (see the sketch after this list). (source)
Available on SUSv2 and POSIX.1-2001: this means it is available on Linux, but not OS X.
See the man page
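A minimal sketch of that fallback pattern, assuming a Linux-style libc (on older glibc you may also need to link with -lrt):

/* Prefer clock_gettime; fall back to getrusage if CLOCK_PROCESS_CPUTIME_ID
   is not available on this system. */
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

double process_cpu_seconds(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts) == 0)
        return ts.tv_sec + ts.tv_nsec / 1e9;      /* nanosecond-resolution clock */

    /* fallback: user + system time from getrusage (microsecond fields) */
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec)
         + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
}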

Related

What is the canonical way to compare memory ranges in the CPU and in the GPU

I have two contiguous ranges (pointer + size), one in the GPU and one in the CPU, and I want to compare whether they are equal.
What is the canonical way to compare these ranges for equality?
my_cpu_type cpu; // cpu.data() returns double*
my_gpu_type gpu; // gpu.data() returns thrust::cuda::pointer<double>
thrust::equal(cpu.data(), cpu.data() + cpu.size(), gpu.data());
gives illegal memory access.
I also tried
thrust::equal(
    thrust::cuda::par // also thrust::host
    , cpu.data(), cpu.data() + cpu.size(), gpu.data()
);
You can't do it the way you are imagining in the general case with thrust. Thrust does not execute algorithms in a mixed backend. You must either use the device backend, in which case all data needs to be on the device (or accessible from device code, see below), or else the host backend in which case all data needs to be on the host.
Therefore you will be forced to copy the data from one side to the other. The cost should be similar (copy host array to device, or device array to host) so we prefer to copy to the device, since the device comparison can be faster.
If you have the luxury of having the host array in a pinned buffer, then it is possible to do something like what you are suggesting (see the sketch after the code below).
For the general case, something like this should work:
thrust::host_vector<double> cpu(size);     // host-side data
thrust::device_vector<double> gpu(size);   // device-side data
thrust::device_vector<double> d_cpu = cpu; // copy the host range to the device
bool are_equal = thrust::equal(d_cpu.begin(), d_cpu.end(), gpu.begin());
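For the pinned-buffer case mentioned above, a hedged sketch, assuming a UVA-capable GPU and that cpu and gpu are the asker's containers from the question:

#include <thrust/equal.h>
#include <thrust/execution_policy.h>
#include <thrust/device_ptr.h>
#include <cuda_runtime.h>

// register the existing host allocation so the device can read it directly
cudaHostRegister(cpu.data(), cpu.size() * sizeof(double), cudaHostRegisterMapped);

double* cpu_dev_view = nullptr;
cudaHostGetDevicePointer((void**)&cpu_dev_view, cpu.data(), 0);

// both ranges are now device-accessible, so the device backend can compare them
bool are_equal = thrust::equal(thrust::device,
                               thrust::device_pointer_cast(cpu_dev_view),
                               thrust::device_pointer_cast(cpu_dev_view + cpu.size()),
                               gpu.data());

cudaHostUnregister(cpu.data());

Note that the comparison then reads the host data over PCIe, so this is only attractive if you would otherwise have to copy the host array anyway.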
In addition to Robert's valid answer, I would claim you are following the wrong path in trying to employ C++-STL-like code where GPU computation is involved.
The issue is not merely that of where pointers point to. Something like std::equal is inherently sequential. Even if its implementation involves parallelism, the assumption is still of a computation which is to start ASAP, blocking the calling thread, and returning a result to that calling thread to continue its work. While it's possible this is what you want, I would guess that in most cases, it probably isn't. I believe thrust's approach, of making developers feel as though they're writing "C++ STL code, but with the GPU" is (mostly) misguided.
If there had been some integration of GPU task graphs, the C++ future/async/promise mechanism, and perhaps something like taskflow or other frameworks, that might have somehow become more of a "canonical" way to do this.

What is the fastest way to update a single float value to the GPU to access it in a CUDA kernel?

I have an OpenGL particle simulation where the position of each particle is calculated in a CUDA kernel. Most memory resides in GPU memory, but there is a single float value I have to update from the CPU each frame.
At the moment I use cudaMemcpyAsync() to copy the float value to the GPU, but (at least from what I can tell) this slows down the performance quite a bit. I tried to use nvprof to see which calls take the longest, with these results:
Calls Avg Min Max Name
477 2.9740us 2.8160us 4.5440us simulation(float3*, float*, float3*, float*)
477 89.033us 18.600us 283.00us cudaLaunchKernel
477 47.819us 10.200us 120.70us cudaMemcpyAsync
I think I can't really do much about the kernel launch itself, but of the calls that happen every frame, cudaMemcpyAsync() seems to take the longest.
I have also tried to use pinned memory and cudaHostGetDevicePointer() as described here, however for some reason this increases the kernel launch times even more, more than making up for the time saved by not needing the memcpy.
I guess there has to be a better/faster way to update my single float variable on the GPU?
The easiest way is to add an extra parameter to the simulation kernel, passed as a plain float value rather than as a pointer to float. The data then travels inside the kernel launch parameter structure that CUDA sends to the GPU when you launch the kernel, so you avoid the separate copy command altogether. (I'm assuming CUDA packs the whole kernel parameter block into a single transfer, because kernel parameter space is limited to a few kB or less.)
simulation(fooPointer,
           barPointer,
           fooBarPointer,
           floatVariable
);
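A minimal sketch of what that could look like end to end; the kernel signature and the grid/block launch configuration here are illustrative, not the asker's actual code:

__global__ void simulation(float3* pos, float* mass, float3* vel, float floatVariable)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // ... use floatVariable directly; no separate cudaMemcpyAsync is needed ...
}

// host side, each frame:
simulation<<<grid, block>>>(fooPointer, barPointer, fooBarPointer, floatVariable);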
Or, try double buffering between data update and rendering, or between data update and compute, so that the rendered image lags the simulation calculation by 1-2 frames (per-frame latency gets worse) but "frames per second" increases.
If it's not an interactive simulation, hiding compute/render/data latencies by double or triple buffering should work.
If you are after minimizing per-frame timing (quicker response to user input into the simulation?), then you should embed the float variable at the end of an array you already send to and use in the simulation, or whatever structure you are using. If you already send a 1 MB+ float buffer to the GPU, appending 4 B (one float) to the end of it should not make much difference, and you can access it from there; one copy operation should be faster than two copy operations of the same total size.
If you are literally sending just 4 B to the GPU each frame (with a simple function to generate that data), then (as 3Dave said in comments) you can try adding an extra kernel that updates the value on the GPU, so you only pay the kernel launch overhead instead of both the copy command overhead and the data copy overhead. On the positive side, that extra kernel's overhead might be hidden if there is a "graph" of kernels running for each frame automatically, without enqueueing all of them again and again.
See https://devblogs.nvidia.com/cuda-graphs/ , in particular the part:
We are going to create a simple code which mimics this pattern. We will then use this to demonstrate the overheads involved with the standard launch mechanism and show how to introduce a CUDA Graph comprising the multiple kernels, which can be launched from the application in a single operation.
cudaGraphLaunch(instance, stream);
They say per-kernel launch overhead in this "graph" feature is only 3-4 microseconds when there are many (20) kernels in the algorithm.
Since a graph supports other commands too, you can try running the copy and compute parts in parallel CUDA streams within a graph and switching their inputs with double buffering, so everything stays inside CUDA's context before the output is sent to rendering.
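A minimal single-stream sketch of capturing the per-frame copy and kernel into a graph (not the full double-buffered, multi-stream setup); the simulation kernel and buffers are illustrative, and hostValue is assumed to be pinned memory you overwrite before each launch:

cudaStream_t stream;
cudaStreamCreate(&stream);

cudaGraph_t graph;
cudaGraphExec_t instance;

// record the per-frame work once
cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
cudaMemcpyAsync(devValue, hostValue, sizeof(float), cudaMemcpyHostToDevice, stream);
simulation<<<grid, block, 0, stream>>>(fooPointer, barPointer, fooBarPointer, devValue);
cudaStreamEndCapture(stream, &graph);
cudaGraphInstantiate(&instance, graph, nullptr, nullptr, 0);

// each frame: update *hostValue, then replay the captured copy + kernel in one call
cudaGraphLaunch(instance, stream);
cudaStreamSynchronize(stream);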
(Maybe) you don't even have to change the data mechanism at all. Just try sending the float's binary representation as the pointer value itself, read only the pointer value (not the data it points to) in the kernel, and convert it back to a float there. I don't know whether CUDA returns an error for this as long as you never dereference the (bogus) pointer address that the float data represents inside the kernel.
simulation(fooPointer,
           barPointer,
           fooBarPointer,
           toPtr(floatData) // <----- float to 64/32 bit pointer value
);
and in kernel
float val = fromPtrToFloat(parameter4); // converts pointer itself, not the data
But this may not be a preferred practice if you can simply use "value" type parameters.
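The toPtr/fromPtrToFloat helpers above are hypothetical; a hedged sketch of how they could be written, smuggling the 32 float bits in the low bits of the pointer-sized parameter:

#include <cstring>
#include <cstdint>

// host side: pack the float's bit pattern into a pointer-sized value
float* toPtr(float f)
{
    unsigned int bits;
    std::memcpy(&bits, &f, sizeof(bits));   // reinterpret the 4 float bytes as an integer
    return reinterpret_cast<float*>(static_cast<std::uintptr_t>(bits));
}

// device side: unpack the bits again; the fake pointer is never dereferenced
__device__ float fromPtrToFloat(const float* p)
{
    unsigned int bits = static_cast<unsigned int>(reinterpret_cast<std::uintptr_t>(p));
    return __uint_as_float(bits);            // CUDA bit-cast intrinsic
}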

cuFFT don't see what my kernel did

I have a very strange issue using cuFFT. I tried to prepare my input data with a kernel that applies a Hanning window. Everything seems fine. Here is the issue: cuFFT runs on the data WITHOUT the Hanning window applied. I don't understand why.
I tried the following:
test1:
- I run the kernel to apply window
- get back the data to the host and check the values: all is OK, the window is applied
- copy back the values to the device
- run the fft: no luck, it eats non-windowed data!
test2:
- I don't use a kernel, I apply the window with CPU
- I run the fft: it works, it eats the windowed data
Is there any rational explanation to this?
Is there some kind of cache involved here?
NOTE: I use the same device memory pointer in my kernel and in cuFFT
It appears that I ran my kernel applyWindow only on the first 512 samples of my data. This explains why it did nothing. Running the kernel with the right settings did the job on the entire buffer.
int nbBlock = inputSizeInSamples / deviceProp.maxThreadsPerBlock;
applyWindow<<<nbBlock, deviceProp.maxThreadsPerBlock>>>( ... );
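Note that the integer division above truncates, so if inputSizeInSamples is not an exact multiple of maxThreadsPerBlock the tail of the buffer would still be skipped. A common pattern (a sketch, assuming the kernel receives the sample count and guards its index) is to round up:

int threads = deviceProp.maxThreadsPerBlock;
int nbBlock = (inputSizeInSamples + threads - 1) / threads;   // round up to cover the whole buffer
applyWindow<<<nbBlock, threads>>>( ... );                     // in the kernel: if (i < inputSizeInSamples) ...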

Why is the first cudaMalloc the only bottleneck?

I defined this function :
void cuda_entering_function(...)
{
    StructA *host_input, *dev_input;
    StructB *host_output, *dev_output;

    host_input = (StructA*)malloc(sizeof(StructA));
    host_output = (StructB*)malloc(sizeof(StructB));
    cudaMalloc(&dev_input, sizeof(StructA));
    cudaMalloc(&dev_output, sizeof(StructB));

    ... some more other cudaMalloc()s and cudaMemcpy()s ...

    cudaKernel<<< ... >>>(dev_input, dev_output);
    ...
}
This function is called several times (about 5 ~ 15 times) throughout my program, and I measured this program's performance using gettimeofday().
Then I found that the bottleneck of cuda_entering_function() is the first cudaMalloc() - the very first cudaMalloc() in my whole program. Over 95% of the total execution time of cuda_entering_function() was consumed by the first cudaMalloc(), and this also happened when I changed the size of the first cudaMalloc()'s allocation or the order of the cudaMalloc()s.
What is the reason, and is there any way to reduce this first CUDA allocation time?
The first cudaMalloc is responsible for the initialization of the device too, because it's the first call to any function involving the device. This is why you take such a hit: it's overhead due to the use of CUDA and your GPU. You should make sure that your application can gain a sufficient speedup to compensate for the overhead.
In general, people use a call to an initialization function in order to setup their device. In this answer, you can see that apparently a call to cudaFree(0) is the canonical way to do so. This sample shows the use of cudaSetDevice, which could be a good habit if you ever work on machines with several CUDA-ready devices.
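A minimal warm-up sketch along those lines (the function name is illustrative):

#include <cuda_runtime.h>

// Call once at program start so the first "real" cudaMalloc is not charged
// with context creation.
void cuda_warmup(int device)
{
    cudaSetDevice(device);   // pick the device explicitly (useful on multi-GPU machines)
    cudaFree(0);             // does nothing allocation-wise, but forces context initialization
}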

CUDA samples matrixMul error

I am very new to CUDA and started reading about parallel programming and CUDA just a few weeks ago. After I installed the CUDA toolkit, I was browsing the SDK samples (which come with the installation of the toolkit) and wanted to try some of them out. I started with matrixMul from the 0_Simple folder. This program executes fine (I am using Visual Studio 2010).
Now I want to change the size of the matrices and try a bigger one (for example 960x960 or 1024x1024). In this case, something crashes (I get a black screen, and then the message: display driver stopped responding and has recovered).
I am changing these two lines in the code (in the main function):
dim3 dimsA(8*4*block_size, 8*4*block_size, 1);
dim3 dimsB(8*4*block_size, 8*4*block_size, 1);
before they were:
dim3 dimsA(5*2*block_size, 5*2*block_size, 1);
dim3 dimsB(5*2*block_size, 5*2*block_size, 1);
Can someone point out what I am doing wrong, and should I alter something else in this example for it to work properly? Thx!
Edit: like some of you suggested, I changed the timeout value (0 somehow did not work for me, so I set the timeout to 60), so my driver does not crash, but I get a huge list of errors, like:
... ... ...
Error! Matrix[409598]=6.40005159, ref=6.39999986 error term is > 1e-5
Error! Matrix[409599]=6.40005159, ref=6.39999986 error term is > 1e-5
Does this have something to do with the allocation of the memory? Should I make changes there, and what could they be?
Your new problem is actually just the strict tolerances provided in the NVidia example. Your kernel is running correctly. It's just complaining that the accumulated error is greater than the limit that they had set for this example. This is just because you're doing a lot more math operations, which are all accumulating error. If you look at the numbers it's giving you, you're only off the reference answer by about 0.00005, which is not unusual after a lot of single-precision floating-point math. The reason you're getting these errors now and not with the default matrix sizes is that the original matrices were smaller and thus required a lot fewer operations to multiply. Matrix multiplication of N x N matrices requires on the order of N^3 operations, so the number of operations required increases much faster than the size of the matrix, and the accumulated error increases in proportion with the number of operations.
If you look near the end of the runTest() function, there's a call to computeGold() which computes the reference answer on your CPU. There should then be a call to something like shrCompareL2fe that compares the results. The last parameter to this is a tolerance. If you increase this tolerance (say, to 1e-3 or 1e-4 instead of 1e-5), you should eliminate these error messages. Note that there may be a couple of these calls. The version of the SDK examples that I have has an optional CUBLAS implementation, so it has a comparison for that against the gold, too. The one right after the print statement that says "Comparing CUDA matrixMul & Host results" is the one you'd want to change.
I'd advise looking at the indexing used in the kernel (matrixMulCUDA) a bit closer - it sounds like you're writing to unallocated memory.
More specifically, is the only thing that you changed the dimsA and dimsB variables? Inside the kernel they use the thread and block index to access the data - did you also increase the data size accordingly? There is no bounds checking going on in the kernel, so if you just change the kernel launch configuration but not the data, then odds are you're writing past your data into some other memory.
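A hedged sketch of the kind of bounds check being described; this is a naive guarded kernel for illustration, not the SDK's tiled matrixMulCUDA:

__global__ void matrixMulGuarded(float* C, const float* A, const float* B, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= N || col >= N) return;   // threads outside the matrix write nothing

    float sum = 0.0f;
    for (int k = 0; k < N; ++k)
        sum += A[row * N + k] * B[k * N + col];
    C[row * N + col] = sum;
}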
Have you disabled Timeout Detection and Recovery (TDR) in Windows? It is entirely possible that your code is running fine but that the larger matrices caused the kernel execution to exceed Windows' timeout, which causes Windows to assume the card is locked up, so it resets the card and gives you a message identical to the one you describe. Even if that is not your problem here, you definitely want to disable that before doing any serious CUDA work in Windows. The timeout is quite short by default, since normal graphics rendering should take small fractions of a second per frame.
See this post on the NVidia forums that describes TDR and how to turn it off:
WDDM TDR - NVidia devtalk forum
In particular, you probably want to set the key HKLM\System\CurrentControlSet\Control\GraphicsDrivers\TdrLevel to 0 (Detection Disabled).
Alternatively, you can increase the timeout period by setting
HKLM\System\CurrentControlSet\Control\GraphicsDrivers\TdrDelay. It defaults to 2 and is specified in seconds. Personally, I have found that TDR is always annoying when doing work in CUDA, so I just turn it off entirely. IIRC, you need to restart your system for any TDR-related changes to take effect.