Nsight report: No kernel launches captured - cuda

I wrote a simple CUDA program in a .cu file. To look at its performance I chose "Nsight -> Start Performance Analysis..." and then "Profile CUDA Application". After launching the application for a while and finishing the capture, the report says "No kernel launches captured" and the summary report says "1 error encountered". Can someone help me figure out why this happened?

Do you call cudaDeviceSynchronize() or cudaDeviceReset() after all the CUDA work is done in your sample? Otherwise Nsight cannot guarantee that all the launch and memcpy record buffers are flushed.
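For example, a minimal sketch of a main() that ends with an explicit synchronize and reset so the record buffers get flushed (the kernel name and sizes here are illustrative, not from the original program):

#include <cuda_runtime.h>

__global__ void myKernel(int *data) { data[threadIdx.x] = threadIdx.x; }

int main() {
    int *d_data;
    cudaMalloc(&d_data, 32 * sizeof(int));
    myKernel<<<1, 32>>>(d_data);
    cudaDeviceSynchronize();  // wait for all GPU work so Nsight can flush its launch/memcpy records
    cudaFree(d_data);
    cudaDeviceReset();        // also flushes profiling data before the process exits
    return 0;
}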

Related

Nvidia visual profiler not showing cudaMalloc() after kernel launch

I am trying to write a program that runs almost entirely on the GPU (with very little interaction with the host). initKernel is the first kernel launched from the host. I use dynamic parallelism to launch successive kernels from initKernel, two of which are thrust::sort(thrust::device, ...).
Before launching initKernel, I do a cudaMalloc() in the host code, and it is shown in the Runtime API row of the Visual Profiler. None of the cudaMalloc calls that appear in the __device__ functions and the successive kernels (after the launch of initKernel) are shown in the Runtime API row of the Visual Profiler. Can someone help me understand why I cannot see these cudaMallocs in the Visual Profiler?
Thank you for your time.
Can someone help me understand why I cannot see the cudaMallocs in the Visual profiler?
Because it is a documented limitation of the tool. From the documentation:
The Visual Profiler timeline does not display CUDA API calls invoked from within device-launched kernels.
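As a rough illustration (kernel names are made up, and this assumes a compute capability 3.5+ device compiled with relocatable device code for dynamic parallelism), the following device-side allocation is exactly the kind of call the timeline will not display:

__global__ void childKernel(int *buf) { buf[threadIdx.x] = threadIdx.x; }

__global__ void initKernel() {
    int *d_buf = 0;
    // Device-side runtime API call: it executes, but the Visual Profiler
    // timeline does not show it in the Runtime API row.
    cudaMalloc(&d_buf, 256 * sizeof(int));
    childKernel<<<1, 256>>>(d_buf);
    cudaDeviceSynchronize();  // device-side sync, as supported by dynamic parallelism at the time
    cudaFree(d_buf);
}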

let Nsight start debugging after certain kernel function is executed

My CUDA program has too many kernel functions, and if I enable CUDA debugging mode I have to wait a whole hour before the breakpoint in a certain kernel function is triggered.
Is there any way for Nsight to start debugging only after certain kernel functions have executed, or to debug only a specific kernel function?
I'm using Nsight with VS2012.
In theory you can follow the instructions in the Nsight help file (either the online help or the local help; at the time of writing the page is here).
In short:
In the Nsight Monitor options, CUDA » Use this Monitor for CUDA attach should be True.
Before starting your application, set an environment variable called NSIGHT_CUDA_DEBUGGER to 1.
Then in your CUDA kernel, you can add a breakpoint like this:
asm("brkpt;");
This works similarly to the __debugbreak() intrinsic or the int 3 assembly instruction in host code. When it is hit you will get a dialog prompting you to attach the CUDA debugger.
In practice, at least for me it Just Doesn't Work™. Maybe you'll have more luck.
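For reference, a sketch of how the breakpoint might be embedded so that only one specific kernel (a made-up name here) and one thread trigger the attach prompt:

__global__ void suspectKernel(float *data) {
    // Restrict the breakpoint to a single thread of the kernel you care about,
    // so you are not prompted to attach for every thread and every launch.
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        asm("brkpt;");
    }
    data[blockIdx.x * blockDim.x + threadIdx.x] *= 2.0f;
}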

Would CPU continue executing the next line code when the GPU kernel is running? Would it cause an error?

I am reading the book "CUDA by Example" by Sanders, where the author states (p. 441): "For example, when we launched the kernel in our ray tracer, the GPU begins executing our code, but the CPU continues executing the next line of our program before the GPU finishes."
I am wondering whether this statement is correct. For example, what if the next instruction the CPU executes depends on variables that the GPU kernel outputs? Would that cause an error? From my experience it does not cause an error. So what does the author really mean?
Many thanks!
Yes, the author is correct. Suppose my kernel launch looks like this:
int *h_in_data, *d_in_data, *h_out_data, *d_out_data;
// code to allocate host and device pointers, and initialize host data
...
// copy host data to device
cudaMemcpy(d_in_data, h_in_data, size_of_data, cudaMemcpyHostToDevice);
mykernel<<<grid, block>>>(d_in_data, d_out_data);
// some other host code happens here
// at this point, h_out_data does not point to valid data
...
cudaMemcpy(h_out_data, d_out_data, size_of_data, cudaMemcpyDeviceToHost);
//h_out_data now points to valid data
Immediately after the kernel launch, the CPU continues executing host code. But the data generated by the device (either d_out_data or h_out_data) is not ready yet. If the host code attempts to use whatever is pointed to by h_out_data, it will just be garbage data. This data only becomes valid after the 2nd cudaMemcpy operation.
Note that using the data (h_out_data) before the 2nd cudaMemcpy will not generate an error, if by that you mean a segmentation fault or some other run time error. But any results generated will not be correct.
Kernel launches in CUDA are asynchronous by default, i.e., control returns to the CPU right after the launch. If the next instruction on the CPU is another kernel launch, then you don't need to worry: that launch will only begin executing after the previously launched kernel has finished (provided both are issued to the same stream).
However, if the next instruction is a CPU instruction that accesses the results of the kernel, you can end up reading garbage values. Therefore care has to be taken, and device synchronization should be done as and when needed.
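As an illustration, reusing the names from the snippet above (useResultsOnHost is a hypothetical host function): a blocking device-to-host copy already waits for the kernel in the same stream, while an explicit cudaDeviceSynchronize is needed if the host reads the results some other way (e.g. through zero-copy mapped memory):

mykernel<<<grid, block>>>(d_in_data, d_out_data);

// Option 1: the blocking copy waits for the kernel, so h_out_data is valid afterwards.
cudaMemcpy(h_out_data, d_out_data, size_of_data, cudaMemcpyDeviceToHost);

// Option 2: if no copy is involved, synchronize explicitly before touching the results.
cudaDeviceSynchronize();
useResultsOnHost(h_out_data);  // hypothetical host-side consumer of the results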

'Flush records'-Warning in Parallel Nsight profiling results

I'm trying to profile my CUDA kernels running on a Windows 7 32 bit machine with an NVIDIA GTX 480 board. I'm using the CUDA 4.1 32 bit toolkit and the Parallel Nsight 2.1 edition for VS 2010.
The profiling results of my program always show the same warning at irregular intervals:
Message: Flush records, Event Type: Range, Level: 50
After this event there is always a processing break of several milliseconds. Then the GPU continues computing at the speed it had before.
I haven't found any information about this warning in the CUDA documentation or on the web, and I don't even know whether it is a problem that only occurs during profiling.
Does anyone have an idea what this warning is about and how to avoid it?
The warning "Flush Record" is used to show when the Nsight CUDA Trace Activity is adding additional overhead to your application. This is to allow you to interpret periods of high CPU activity. There is no way to remove this warning. Your application is not doing anything wrong.
The Nsight CUDA Trace Activity collects timestamps for the start and end of GPU work, including kernel launches, memory copies, and memory sets. When an application launches a task on the GPU, the tool allocates a trace record for the task and programs the GPU to write a timestamp into the record. The collection of timestamps is done in a way that should not break concurrency and should not stall the CPU. When the work is completed, the tool collects the information and streams it to memory. The Flush range includes the time to collect the results and write out the information; this can include time to perform additional kernel launches and to copy memory from device to host. The tool collects results when the application synchronizes a context (cuCtxSynchronize or cuda{Thread, Device}Synchronize) or when it runs out of trace records.
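While the warning itself cannot be suppressed, you can influence where the flush happens. A sketch (loop and kernel names are made up): placing an explicit synchronization between processing phases makes the trace records get collected at a predictable point instead of whenever the record buffer happens to run out:

for (int i = 0; i < numFrames; ++i) {           // hypothetical outer loop
    processKernel<<<grid, block>>>(d_data);     // hypothetical kernel
}
cudaDeviceSynchronize();  // Nsight collects and flushes its trace records here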
I will enter a bug to improve the user documentation and tool tips.

Time between Kernel Launch and Kernel Execution

I'm trying to optimize my CUDA program using the Parallel Nsight 2.1 edition for VS 2010.
My program runs on a Windows 7 (32 bit) machine with a GTX 480 board. I have installed the CUDA 4.1 32 bit toolkit and the 301.32 driver.
One cycle in the program consists of copying host data to the device, executing the kernels, and copying the results from the device back to the host.
As you can see in the picture of the profiler results below, the kernels run in four different streams. The kernels in each stream rely on the data copied to the device in 'Stream 2'. That's why the asyncMemcpy is synchronized with the CPU before the kernels are launched in the different streams.
What puzzles me in the picture is the big gap between the end of the first kernel launch (at 10.5778679285) and the beginning of the kernel execution (at 10.5781500). It takes around 300 µs to launch the kernel, which is a huge overhead in a processing cycle of less than 1 ms.
Furthermore, there is no overlap between kernel execution and the copying of the results back to the host, which increases the overhead even more.
Are there any obvious reasons for this behavior?
There are three issues that I can tell from the trace.
Nsight CUDA Analysis adds about 1 µs per API call. You have both CUDA runtime and CUDA Driver API trace enabled. If you were to disable CUDA runtime trace I would guess that you would reduce the width by 50 µs.
Since you are on a GTX 480 on Windows 7, you are running on the WDDM driver model. On WDDM the driver must make a kernel call to submit work, which introduces a lot of overhead. To reduce this overhead the CUDA driver buffers requests in an internal software queue and sends them to the driver when the queue is full or when it is flushed by a synchronize call. It is possible to use cudaEventQuery to force the driver to flush the work, but this can have other performance implications.
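A sketch of the cudaEventQuery trick mentioned above (stream, kernel, and buffer names are illustrative):

cudaEvent_t flushEvent;
cudaEventCreate(&flushEvent);

mykernel<<<grid, block, 0, stream>>>(d_in, d_out);  // sits in the driver's SW queue under WDDM
cudaEventRecord(flushEvent, stream);
cudaEventQuery(flushEvent);  // non-blocking query; nudges the driver to submit the queued
                             // work now instead of waiting for the queue to fill or for a synchronize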
It appears you are submitting your work to streams in a depth-first manner. On compute capability 2.x and 3.0 devices you will get better results if you submit to streams in a breadth-first manner; in your case you may then see overlap between your kernels. The sketch below illustrates the two submission orders.
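For example, assuming four streams and per-stream buffers (all names are made up), the two submission orders look like this:

// Depth-first: all work for one stream, then the next. This is what the trace suggests.
for (int i = 0; i < 4; ++i) {
    cudaMemcpyAsync(d_in[i], h_in[i], bytes, cudaMemcpyHostToDevice, stream[i]);
    mykernel<<<grid, block, 0, stream[i]>>>(d_in[i], d_out[i]);
    cudaMemcpyAsync(h_out[i], d_out[i], bytes, cudaMemcpyDeviceToHost, stream[i]);
}

// Breadth-first: issue each stage across all streams before moving to the next stage,
// which gives the hardware a better chance to overlap kernels and copies.
for (int i = 0; i < 4; ++i)
    cudaMemcpyAsync(d_in[i], h_in[i], bytes, cudaMemcpyHostToDevice, stream[i]);
for (int i = 0; i < 4; ++i)
    mykernel<<<grid, block, 0, stream[i]>>>(d_in[i], d_out[i]);
for (int i = 0; i < 4; ++i)
    cudaMemcpyAsync(h_out[i], d_out[i], bytes, cudaMemcpyDeviceToHost, stream[i]);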
The timeline screenshot does not provide sufficient information for me to determine why the memory copies start only after all of the kernels have completed. Given the API call pattern, you should be able to see transfers starting after each stream completes its launches.
If you are waiting on all streams to complete, it is likely faster to do a single cudaDeviceSynchronize call than four cudaStreamSynchronize calls.
The next version of Nsight will have additional features to help understand the SW queuing and the submission of work to the compute engine and the memory copy engine.